---
license: apache-2.0
---

# OFA-tiny

## Introduction
This is the **tiny** version of the OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, and language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) into a simple sequence-to-sequence learning framework.

The directory includes 4 files, namely `config.json`, which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin`, which contains the model weights. There is no need to worry about a mismatch between Fairseq and transformers, since we have already addressed the issue.
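
In other words, these four files are exactly what the `from_pretrained` loaders consume once the patched transformers is installed (see the next section). A minimal sketch, assuming this repo has been cloned locally into a directory named `OFA-tiny`:

```python
>>> from transformers import OFATokenizer, OFAModel
>>> # vocab.json and merge.txt back the tokenizer; config.json and
>>> # pytorch_model.bin back the model weights
>>> tokenizer = OFATokenizer.from_pretrained("OFA-tiny")
>>> model = OFAModel.from_pretrained("OFA-tiny")
```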

## How to use
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install transformers and download the model as shown below.
 
```
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-tiny
```
 
Afterwards, set `ckpt_dir` to the path of OFA-tiny, and prepare an image for the test example below. Also, ensure that you have Pillow and torchvision installed in your environment.
 
```python
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
```
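
The example can continue along the following lines for caption generation. This is a minimal sketch, assuming `ckpt_dir` holds the path to OFA-tiny as described above; `path_to_image` is a hypothetical placeholder, and the transform values and generation arguments follow OFA's usual setup rather than a definitive recipe.

```python
>>> # Image preprocessing (assumed: 256x256 resolution, 0.5 mean/std, as in OFA's examples)
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 256
>>> patch_resize_transform = transforms.Compose([
...     lambda image: image.convert("RGB"),
...     transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
...     transforms.ToTensor(),
...     transforms.Normalize(mean=mean, std=std),
... ])

>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)

>>> txt = " what does the image describe?"  # note the leading space, as in OFA's examples
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)  # path_to_image: placeholder for your test image
>>> patch_img = patch_resize_transform(img).unsqueeze(0)

>>> # patch_images carries the image input in the OFA fork's generate()
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> tokenizer.batch_decode(gen, skip_special_tokens=True)
```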