Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ BridgeTower got accepted to [AAAI'23](https://aaai.org/Conferences/AAAI-23/).
The abstract from the paper is the following:
Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.

-## Intended uses & limitations
+## Intended uses & limitations


### How to use
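The bridge-layer mechanism described in the abstract quoted above is easiest to see in code. A minimal sketch of the add-&-norm bridge design from the paper; the class name, hidden size, and the add-and-norm choice are illustrative assumptions drawn from the paper, not from this card's implementation:

```python
import torch
import torch.nn as nn

class BridgeLayer(nn.Module):
    """Hypothetical sketch: fuse one top uni-modal layer's output into
    the input of the matching cross-modal layer with an add-&-norm."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)

    def forward(
        self,
        cross_modal_states: torch.Tensor,
        uni_modal_states: torch.Tensor,
    ) -> torch.Tensor:
        # Bottom-up fusion: each cross-modal layer receives a different
        # semantic level of the pre-trained uni-modal encoder, rather
        # than only its last-layer representation.
        return self.norm(cross_modal_states + uni_modal_states)
```

One such layer per cross-modal layer (one for the textual stream, one for the visual stream) is what lets representations of different semantic levels reach the cross-modal encoder.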
@@ -79,10 +79,6 @@ print(results)
#.a cat looking out of the window.
```

-### Limitations and bias
-
-TODO
-
## Training data

The BridgeTower model was pretrained on four public image-caption datasets:
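For reference, the `print(results)` context in this hunk is the tail of the card's masked-language-modeling example, whose body the diff elides. A minimal sketch of a call that would produce that output, assuming the card follows the standard `transformers` BridgeTower MLM pattern; the checkpoint id and image URL here are illustrative:

```python
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests

# Illustrative COCO image and masked caption; the card's actual example
# may use a different checkpoint and image.
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a <mask> looking out of the window"

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")

# Encode the image-text pair, predict the masked token, and decode.
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
print(results)
#.a cat looking out of the window.
```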
@@ -95,10 +91,6 @@ The total number of unique images in the combined data is 4M.

## Training procedure

-### Preprocessing
-
-TODO
-
### Pretraining

The model was pre-trained for ___ steps on an "Intel AI supercomputing cluster" using 512 Gaudis and 128 Xeons with a batch size of 4096.