JustinLin610 committed on
Commit
60d03d8
1 Parent(s): 31aa84f

Update README.md

Files changed (1)
  1. README.md +11 -1
README.md CHANGED
@@ -5,7 +5,17 @@ license: apache-2.0
 # OFA-tiny
 This is the **tiny** version of OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.

-To use it in Transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers and download the directory of transformers. After installation, you can use it as shown below:
+The directory includes 4 files: `config.json`, which holds the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and `pytorch_model.bin`, which holds the model weights. There is no need to worry about a mismatch between Fairseq and Transformers, since we have already addressed it.
+
+To use it in Transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install Transformers and download the model as shown below.
+
+```
+git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
+pip install OFA/transformers/
+git clone https://huggingface.co/OFA-Sys/OFA-tiny
+```
+
+After that, set `ckpt_dir` to the path of OFA-tiny and prepare an image for the testing example below. Also, ensure that you have pillow and torchvision in your environment.
 
 ```
 >>> from PIL import Image