arnavkartikeya committed on
Commit 2d8ff97
1 Parent(s): 89fa03e

Update README.md

Files changed (1)
  1. README.md +10 -116
README.md CHANGED
@@ -1,116 +1,10 @@
- ## BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
-
- ## Announcement: BLIP is now officially integrated into [LAVIS](https://github.com/salesforce/LAVIS) - a one-stop library for language-and-vision research and applications!
-
- <img src="BLIP.gif" width="700">
-
- This is the PyTorch code of the <a href="https://arxiv.org/abs/2201.12086">BLIP paper</a> [[blog](https://blog.salesforceairesearch.com/blip-bootstrapping-language-image-pretraining/)]. The code has been tested on PyTorch 1.10.
- To install the dependencies, run <pre>pip install -r requirements.txt</pre>
-
- Catalog:
- - [x] Inference demo
- - [x] Pre-trained and finetuned checkpoints
- - [x] Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- - [x] Pre-training code
- - [x] Zero-shot video-text retrieval
- - [x] Download of bootstrapped pre-training datasets
-
-
- ### Inference demo:
- Run our interactive demo using [Colab notebook](https://colab.research.google.com/github/salesforce/BLIP/blob/main/demo.ipynb) (no GPU needed).
- The demo includes code for the following tasks (a minimal captioning example is sketched after this list):
- 1. Image captioning
- 2. Open-ended visual question answering
- 3. Multimodal / unimodal feature extraction
- 4. Image-text matching
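-
- To run captioning locally, a minimal sketch looks like the following. It assumes the `blip_decoder` interface from this repo's models/blip.py (as used in the demo notebook), the ViT-B/CapFilt-L captioning checkpoint from the finetuned checkpoints table below, and a placeholder image path `demo.jpg`; preprocessing follows the demo notebook.
- <pre>import torch
- from PIL import Image
- from torchvision import transforms
- from models.blip import blip_decoder  # provided by this repo
-
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- image_size = 384
-
- # Preprocessing as in the demo notebook (bicubic resize + CLIP normalization)
- transform = transforms.Compose([
-     transforms.Resize((image_size, image_size), interpolation=transforms.InterpolationMode.BICUBIC),
-     transforms.ToTensor(),
-     transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
-                          (0.26862954, 0.26130258, 0.27577711)),
- ])
- image = transform(Image.open('demo.jpg').convert('RGB')).unsqueeze(0).to(device)
-
- # Load the captioning checkpoint (URL from the finetuned checkpoints table below)
- model = blip_decoder(pretrained='https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_caption_capfilt_large.pth',
-                      image_size=image_size, vit='base')
- model.eval()
- model = model.to(device)
-
- with torch.no_grad():
-     caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
- print(caption[0])</pre>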
-
- Try out the [Web demo](https://huggingface.co/spaces/Salesforce/BLIP), integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio).
-
- A Replicate web demo and Docker image are also available at [![Replicate](https://replicate.com/salesforce/blip/badge)](https://replicate.com/salesforce/blip)
-
- ### Pre-trained checkpoints:
- Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
- --- | :---: | :---: | :---:
- 14M | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_14M.pth">Download</a>| - | -
- 129M | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth">Download</a>| <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth">Download</a> | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large.pth">Download</a>
-
- ### Finetuned checkpoints:
- Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
- --- | :---: | :---: | :---:
- Image-Text Retrieval (COCO) | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_coco.pth">Download</a>| - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_retrieval_coco.pth">Download</a>
- Image-Text Retrieval (Flickr30k) | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_retrieval_flickr.pth">Download</a>| - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_retrieval_flickr.pth">Download</a>
- Image Captioning (COCO) | - | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_caption_capfilt_large.pth">Download</a>| <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth">Download</a> |
- VQA | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_vqa.pth">Download</a>| <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_vqa_capfilt_large.pth">Download</a> | -
- NLVR2 | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_nlvr.pth">Download</a>| - | -
-
-
- ### Image-Text Retrieval:
- 1. Download COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly.
- 2. To evaluate the finetuned BLIP model on COCO, run:
- <pre>python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
- --config ./configs/retrieval_coco.yaml \
- --output_dir output/retrieval_coco \
- --evaluate</pre>
- 3. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml to "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth" (see the config excerpt after these steps). Then run:
- <pre>python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
- --config ./configs/retrieval_coco.yaml \
- --output_dir output/retrieval_coco </pre>
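-
- For reference, the relevant entries in configs/retrieval_coco.yaml would then look roughly as follows (only the keys named above are shown with a placeholder image path; the remaining keys stay unchanged, and the exact layout should be checked against the file itself):
- <pre>image_root: '/path/to/coco/images/'
- pretrained: 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth'</pre>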
-
- ### Image-Text Captioning:
- 1. Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
- 2. To evaluate the finetuned BLIP model on COCO, run:
- <pre>python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate</pre>
- 3. To evaluate the finetuned BLIP model on NoCaps, generate results with the following command (the evaluation itself needs to be performed on the official server):
- <pre>python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py </pre>
- 4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml to "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
- <pre>python -m torch.distributed.run --nproc_per_node=8 train_caption.py </pre>
-
- ### VQA:
- 1. Download the VQA v2 and Visual Genome datasets from the original websites, and set 'vqa_root' and 'vg_root' in configs/vqa.yaml.
- 2. To evaluate the finetuned BLIP model, generate results with the following command (the evaluation itself needs to be performed on the official server):
- <pre>python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --evaluate</pre>
- 3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/vqa.yaml to "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
- <pre>python -m torch.distributed.run --nproc_per_node=16 train_vqa.py </pre>
-
- ### NLVR2:
- 1. Download the NLVR2 dataset from the original website, and set 'image_root' in configs/nlvr.yaml.
- 2. To evaluate the finetuned BLIP model, run:
- <pre>python -m torch.distributed.run --nproc_per_node=8 train_nlvr.py --evaluate</pre>
- 3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/nlvr.yaml to "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
- <pre>python -m torch.distributed.run --nproc_per_node=16 train_nlvr.py </pre>
-
- ### Finetune with ViT-L:
- In order to finetune a model with ViT-L, simply change the config file to set 'vit' to 'large'. Batch size and learning rate may also need to be adjusted accordingly (please see the paper's appendix for hyper-parameter details). <a href="https://github.com/facebookresearch/fairscale">Gradient checkpointing</a> can also be activated in the config file to reduce GPU memory usage.
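-
- As a rough sketch, the corresponding config changes would look like the following. The key names here are assumptions based on the repo's config files and should be verified against the specific config you are editing:
- <pre>vit: 'large'          # switch the vision backbone from ViT-B to ViT-L
- vit_grad_ckpt: True   # assumed key for enabling gradient checkpointing
- # lower the batch size / learning rate keys in the same file as discussed above</pre>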
-
- ### Pre-train:
- 1. Prepare training json files where each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'image': path_of_image, 'caption': text_of_image}. (A short example of building such a file is shown after these steps.)
- 2. In configs/pretrain.yaml, set 'train_file' to the paths of the json files.
- 3. Pre-train the model using 8 A100 GPUs:
- <pre>python -m torch.distributed.run --nproc_per_node=8 pretrain.py --config ./configs/Pretrain.yaml --output_dir output/Pretrain </pre>
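-
- For illustration, a training json in the format described in step 1 can be written like this (image paths, captions, and the output filename are placeholders):
- <pre>import json
-
- # A list of {'image': ..., 'caption': ...} pairs, as expected by the pre-training code
- samples = [
-     {"image": "/path/to/images/0001.jpg", "caption": "a dog running on the beach"},
-     {"image": "/path/to/images/0002.jpg", "caption": "two people riding bicycles"},
- ]
-
- with open("pretrain_data_0.json", "w") as f:
-     json.dump(samples, f)</pre>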
-
- ### Zero-shot video-text retrieval:
- 1. Download the MSRVTT dataset following the instructions from https://github.com/salesforce/ALPRO, and set 'video_root' accordingly in configs/retrieval_msrvtt.yaml.
- 2. Install [decord](https://github.com/dmlc/decord) with <pre>pip install decord</pre>
- 3. To perform zero-shot evaluation, run:
- <pre>python -m torch.distributed.run --nproc_per_node=8 eval_retrieval_video.py</pre>
-
- ### Pre-training datasets download:
- We provide bootstrapped pre-training datasets as json files. Each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'url': url_of_image, 'caption': text_of_image}. A short example of reading one of these files is given after the table.
-
- Image source | Filtered web caption | Filtered synthetic caption by ViT-B | Filtered synthetic caption by ViT-L
- --- | :---: | :---: | :---:
- CC3M+CC12M+SBU | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/ccs_filtered.json">Download</a>| <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/ccs_synthetic_filtered.json">Download</a>| <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/ccs_synthetic_filtered_large.json">Download</a>
- LAION115M | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/laion_filtered.json">Download</a>| <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/laion_synthetic_filtered.json">Download</a>| <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/laion_synthetic_filtered_large.json">Download</a>
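-
- For example, the entries can be read and fetched as follows (illustrative only; the filename is one of the json files above, and for the full datasets a parallel downloader is advisable):
- <pre>import json, os, requests
-
- # Each entry is {'url': url_of_image, 'caption': text_of_image}
- with open("ccs_synthetic_filtered.json") as f:
-     data = json.load(f)
-
- os.makedirs("images", exist_ok=True)
- for i, item in enumerate(data[:100]):      # first 100 entries as a smoke test
-     try:
-         r = requests.get(item["url"], timeout=10)
-         r.raise_for_status()
-         with open(f"images/{i}.jpg", "wb") as out:
-             out.write(r.content)
-     except Exception:
-         continue                           # some urls may no longer resolve</pre>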
-
- ### Citation
- If you find this code to be useful for your research, please consider citing:
- <pre>
- @inproceedings{li2022blip,
-   title={BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
-   author={Junnan Li and Dongxu Li and Caiming Xiong and Steven Hoi},
-   year={2022},
-   booktitle={ICML},
- }</pre>
-
- ### Acknowledgement
- The implementation of BLIP relies on resources from <a href="https://github.com/salesforce/ALBEF">ALBEF</a>, <a href="https://github.com/huggingface/transformers">Huggingface Transformers</a>, and <a href="https://github.com/rwightman/pytorch-image-models/tree/master/timm">timm</a>. We thank the original authors for open-sourcing their work.
+ ---
+ title: SCRIPture
+ emoji: emoji
+ colorFrom: Blue
+ colorTo: Green
+ sdk: gradio
+ sdk_version: 3.9.0
+ app_file: app.py
+ pinned: false
+ ---