The model was pretrained on English language using masked language modeling. It was introduced in
[this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in
[this repository](https://github.com/microsoft/BridgeTower).

BridgeTower was accepted at [AAAI'23](https://aaai.org/Conferences/AAAI-23/).

## Model description

The abstract from the paper is the following:

Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
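
To make the bridge-layer idea concrete, here is a minimal PyTorch sketch. It is an illustration of the mechanism the abstract describes, not the released implementation; the class and parameter names are invented for exposition.

```python
import torch
import torch.nn as nn


class BridgeLayer(nn.Module):
    """Illustrative sketch: fuse one of the top uni-modal encoder states
    into the cross-modal stream before a cross-modal encoder layer.
    (Names and details are invented, not BridgeTower's actual code.)"""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, cross_state: torch.Tensor, unimodal_state: torch.Tensor) -> torch.Tensor:
        # Add-and-normalize, so the k-th cross-modal layer sees the k-th
        # (from the top) uni-modal representation rather than only the last one.
        return self.norm(cross_state + unimodal_state)


# Conceptual wiring for cross-modal layers 1..N (pseudocode in comments):
#   text_k  = bridge_text[k](cross_text_state,  text_layer_outputs[-N + k])
#   image_k = bridge_image[k](cross_image_state, image_layer_outputs[-N + k])
#   cross_text_state, cross_image_state = cross_modal_layer[k](text_k, image_k)
```

This is what lets each cross-modal layer align representations from a different semantic level of the pre-trained uni-modal encoders, rather than only their last-layer outputs.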

## Intended uses & limitations (TODO)

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
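
As a sketch of the raw masked-language-modeling use, the snippet below assumes the BridgeTower classes shipped in Hugging Face Transformers (`BridgeTowerProcessor`, `BridgeTowerForMaskedLM`); the checkpoint id and example image are placeholders, so substitute this repository's actual model id.

```python
# Minimal MLM sketch with BridgeTower; the checkpoint id below is an
# assumption -- replace it with this repository's model id.
import requests
from PIL import Image
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example COCO image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> sleeping on the couch"  # <mask> marks the token to predict

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")

# Encode the image-text pair, run a forward pass, and decode the argmax tokens.
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(dim=-1).squeeze(0).tolist()
print(processor.decode(predicted_ids))
```

For the downstream tasks the paper evaluates, a task head such as `BridgeTowerForImageAndTextRetrieval` (also in Transformers) is the more typical entry point.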