---
language: en
tags:
- bridgetower
license: mit
datasets:
- conceptual_captions
- sbu_captions
- visual_genome
- mscoco_captions
---

# BridgeTower base model

The BridgeTower model was proposed in [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/pdf/2206.08657.pdf) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
The model was pretrained on English-language image-caption data using the masked language modeling (MLM) and image-text matching (ITM) objectives. It was introduced in
[this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in
[this repository](https://github.com/microsoft/BridgeTower).

## Model description

The abstract from the paper is the following:

Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
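
To make the description above concrete, here is a minimal sketch of loading the base (pre-trained, not fine-tuned) model and extracting joint image-text features. The checkpoint name follows this card; the output attribute names (`text_features`, `image_features`, `pooler_output`) reflect the `BridgeTowerModel` implementation in `transformers` and are an illustrative assumption of this sketch, not something stated in the paper.

```python
import requests
from PIL import Image
from transformers import BridgeTowerProcessor, BridgeTowerModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "two cats sleeping on a couch"

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")
model = BridgeTowerModel.from_pretrained("BridgeTower/bridgetower-base")

# Encode the image-text pair and run the uni-modal towers plus the cross-modal encoder
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)

print(outputs.text_features.shape)   # cross-modal text representations
print(outputs.image_features.shape)  # cross-modal image representations
print(outputs.pooler_output.shape)   # pooled joint representation
```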

## Intended uses & limitations (TODO)

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=BridgeTower) to look for fine-tuned versions on a task that
interests you.

### How to use

Here is how to use this model to perform masked language modeling on an image-text pair in PyTorch, and how to load the task-specific heads:
```python
import requests
from PIL import Image
from transformers import (
    BridgeTowerProcessor,
    BridgeTowerForMaskedLM,
    BridgeTowerForImageAndTextRetrieval,
    BridgeTowerForImageAndTextClassification,
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# BridgeTower's text backbone is RoBERTa, so the mask token is <mask>
text = "a bunch of <mask> laying on a <mask>."

# Masked Language Modeling
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base")
# Prepare inputs
encoding = processor(image, text, return_tensors="pt")
# Forward pass
outputs = model(**encoding)
# Decode the most likely token at each position to inspect the MLM predictions
print(processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()))

# Image and Text Retrieval
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base")
# Image and Text Classification
model = BridgeTowerForImageAndTextClassification.from_pretrained("BridgeTower/bridgetower-base")
```
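
Beyond simply loading the retrieval head, here is a minimal sketch of scoring how well candidate captions match an image with `BridgeTowerForImageAndTextRetrieval`. The candidate captions are made up for illustration; taking the second logit as the match score assumes the head's two-class (no-match / match) output and is an assumption of this sketch.

```python
import requests
from PIL import Image
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["two cats sleeping on a couch", "a plane flying over a city"]

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base")

# Score each candidate caption against the image and keep the "match" logit
scores = {}
for text in texts:
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, 1].item()

print(max(scores, key=scores.get))  # caption the model considers the best match
```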

### Limitations and bias

TODO

## Training data

The BridgeTower model was pretrained on four public image-caption datasets:
- [Conceptual Captions (CC)](https://ai.google.com/research/ConceptualCaptions/)
- [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/)
- [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf)
- [Visual Genome](https://visualgenome.org/)

The total number of unique images in the combined data is 4M.

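If you want to inspect this pre-training data, the caption datasets listed in this card's metadata are available on the Hugging Face Hub. The snippet below is a rough sketch using the `datasets` library; the dataset IDs (`conceptual_captions`, `sbu_captions`) are taken from the metadata above, streaming is used only to avoid downloading the full corpora, and the example field names are indicative rather than guaranteed.

```python
from datasets import load_dataset

# Stream a few examples from two of the pre-training caption datasets
cc = load_dataset("conceptual_captions", split="train", streaming=True)
sbu = load_dataset("sbu_captions", split="train", streaming=True)

print(next(iter(cc)))   # e.g. {"image_url": ..., "caption": ...}
print(next(iter(sbu)))  # e.g. {"image_url": ..., "caption": ...}
```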

## Training procedure

### Preprocessing

TODO

### Pretraining

The model was pre-trained for 100k steps on 8 NVIDIA A100 GPUs with a batch size of 4096.
The optimizer used was AdamW with a learning rate of 1e-5. No data augmentation was used except for center-crop. The image resolution in pre-training is set to 288 x 288.

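For readers who want to reproduce a comparable setup, here is a rough sketch of the optimizer and image preprocessing described above. It only illustrates the stated hyperparameters (AdamW, learning rate 1e-5, 288 x 288 center-crop); the authors' exact weight decay, learning-rate schedule, warmup, and image normalization are not stated in this card and are therefore not assumed here.

```python
import torch
from torchvision import transforms
from transformers import BridgeTowerModel

# Image preprocessing: only resize + center-crop to 288 x 288, no other augmentation
preprocess = transforms.Compose([
    transforms.Resize(288),
    transforms.CenterCrop(288),
    transforms.ToTensor(),
])

# Optimizer with the learning rate reported above
model = BridgeTowerModel.from_pretrained("BridgeTower/bridgetower-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
```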

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

TODO

### BibTeX entry and citation info

```bibtex
@article{xu2022bridge,
  title={BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning},
  author={Xu, Xiao and
          Wu, Chenfei and
          Rosenman, Shachar and
          Lal, Vasudev and
          Che, Wanxiang and
          Duan, Nan},
  journal={arXiv preprint arXiv:2206.08657},
  year={2022}
}
```

<a href="https://huggingface.co/exbert/?model=BridgeTower/bridgetower-base">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>