Irena Gao committed
Commit bca9a71 • 1 Parent(s): fae8652
update README

README.md CHANGED
@@ -18,6 +18,28 @@ This model has cross-attention modules inserted in *every other* decoder block.
## Uses

OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
### Initialization
``` python
from open_flamingo import create_model_and_transforms

# build an OpenFlamingo model from a CLIP ViT-L/14 vision encoder and a
# RedPajama-3B language model, with cross-attention inserted every other
# decoder block (cross_attn_every_n_layers=2)
model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="togethercomputer/RedPajama-INCITE-Base-3B-v1",
    tokenizer_path="togethercomputer/RedPajama-INCITE-Base-3B-v1",
    cross_attn_every_n_layers=2
)

# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch

checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-4B-vitl-rpj3b", "checkpoint.pt")
# strict=False lets loading succeed even though the checkpoint does not
# cover every parameter in the module
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```
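Note that `cross_attn_every_n_layers=2` matches the architecture stated in the hunk header: this model has cross-attention modules inserted in *every other* decoder block, so the argument must agree with the checkpoint being loaded.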
### Generation example
Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
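The hunk's context ends here, so the example itself is not part of this diff. As a minimal sketch of what few-shot captioning looks like with the `open_flamingo` objects initialized above (the demo image URLs and prompt strings are illustrative assumptions, not content from this commit):

``` python
from PIL import Image
import requests
import torch

# two in-context (image, caption) demonstrations plus one query image;
# the URLs below are placeholder COCO images chosen for illustration
demo_image_one = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
demo_image_two = Image.open(
    requests.get("http://images.cocodataset.org/test-stuff2017/000000028137.jpg", stream=True).raw
)
query_image = Image.open(
    requests.get("http://images.cocodataset.org/test-stuff2017/000000028352.jpg", stream=True).raw
)

# vision input shape: (batch, num_media, num_frames, channels, height, width)
vision_x = torch.cat(
    [
        image_processor(demo_image_one).unsqueeze(0),
        image_processor(demo_image_two).unsqueeze(0),
        image_processor(query_image).unsqueeze(0),
    ],
    dim=0,
).unsqueeze(1).unsqueeze(0)

# interleave <image> tokens with text; <|endofchunk|> marks the end of each
# (image, caption) pair, and the prompt ends mid-caption for the query image
tokenizer.padding_side = "left"  # generation expects left-padded inputs
lang_x = tokenizer(
    ["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
    return_tensors="pt",
)

generated_text = model.generate(
    vision_x=vision_x,
    lang_x=lang_x["input_ids"],
    attention_mask=lang_x["attention_mask"],
    max_new_tokens=20,
    num_beams=3,
)
print("Generated text:", tokenizer.decode(generated_text[0]))
```

The two completed captions act as in-context examples; the model continues the final "An image of" to caption the query image.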