Update model description
README.md CHANGED
@@ -6,6 +6,12 @@ This model contains just the `IPUConfig` files for running the [lxmert-base-unca

**This model contains no model weights, only an IPUConfig.**

+## Model description
+
+LXMERT is a transformer model for learning vision-and-language cross-modality representations. Its Transformer architecture has three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modeling, and visual question answering objectives. It achieves state-of-the-art results on VQA and GQA.
+
+Paper link: [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf)
+
## Usage

```
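# A minimal usage sketch: the snippet body is cut off in this diff, so the
# lines below follow the standard pattern on Graphcore IPUConfig-only model
# cards rather than the card's verbatim content. The repo id
# "Graphcore/lxmert-base-ipu" is an assumption inferred from this card's
# description, and optimum-graphcore must be installed.
from optimum.graphcore import IPUConfig

# Fetch the IPU-specific configuration; it carries no weights, so pair it
# with the lxmert-base-uncased checkpoint when running on Graphcore IPUs.
ipu_config = IPUConfig.from_pretrained("Graphcore/lxmert-base-ipu")
```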