internetoftim committed on
Commit 7827f5b
Parent(s): 2951dde
Update README.md
README.md CHANGED

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Graphcore/lxmert-gqa-uncased

BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabeled text. It enables easy and fast fine-tuning for downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice, and masked language modeling.

It was pretrained with two objectives: masked language modeling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees words one after another, MLM allows the model to learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations.

Pretrained representations reduce the need for heavily engineered task-specific architectures, and BERT achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
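
A minimal illustration of the MLM objective described above, using the generic `bert-base-uncased` checkpoint rather than the model on this card:

```python
# Sketch of masked language modeling: the model fills in [MASK] from both
# left and right context, which is what "bidirectional" means in practice.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], prediction["score"])
```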
## Model description

LXMERT is a transformer model for learning vision-and-language cross-modality representations. It consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modeling, and visual question answering objectives. It achieves state-of-the-art results on VQA and GQA.

Paper link: [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf)
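
To make the three-encoder layout concrete, here is a minimal sketch of a forward pass through the base checkpoint; the random tensors stand in for the Faster R-CNN ROI features that a real pipeline would extract first:

```python
# Sketch of LXMERT's inputs and outputs. Random tensors stand in for the
# Faster R-CNN ROI features a real pipeline would extract from an image.
import torch
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("Is there a dog next to the bench?", return_tensors="pt")
visual_feats = torch.randn(1, 36, 2048)  # (batch, regions, feature dim)
visual_pos = torch.rand(1, 36, 4)        # normalized box coords per region

with torch.no_grad():
    outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)

print(outputs.language_output.shape)  # language encoder states, per token
print(outputs.vision_output.shape)    # vision encoder states, per region
print(outputs.pooled_output.shape)    # cross-modality pooled representation
```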
## Intended uses & limitations

This model is a fine-tuned version of [unc-nlp/lxmert-base-uncased](https://huggingface.co/unc-nlp/lxmert-base-uncased) on the [Graphcore/gqa-lxmert](https://huggingface.co/datasets/Graphcore/gqa-lxmert) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9326
- Accuracy: 0.5934
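
Below is a minimal inference sketch. It assumes the checkpoint is published under the card's title, `Graphcore/lxmert-gqa-uncased`, and that ROI features for the image have already been extracted; random tensors stand in for them here:

```python
# GQA inference sketch. The checkpoint id is assumed from the card title,
# and random tensors stand in for real image ROI features.
import torch
from transformers import LxmertTokenizer, LxmertForQuestionAnswering

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertForQuestionAnswering.from_pretrained("Graphcore/lxmert-gqa-uncased")

inputs = tokenizer("What color is the car?", return_tensors="pt")
visual_feats = torch.randn(1, 36, 2048)
visual_pos = torch.rand(1, 36, 4)

with torch.no_grad():
    outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)

# question_answering_score is (batch, num answer labels); the argmax indexes
# the GQA answer vocabulary used during fine-tuning.
print(outputs.question_answering_score.argmax(-1).item())
```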
## Training and evaluation data

- [Graphcore/gqa-lxmert](https://huggingface.co/datasets/Graphcore/gqa-lxmert) dataset
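
A minimal sketch for pulling the data, assuming the dataset loads directly from the Hugging Face Hub under this id:

```python
# Sketch only: assumes the dataset resolves from the Hugging Face Hub.
from datasets import load_dataset

dataset = load_dataset("Graphcore/gqa-lxmert")
print(dataset)  # inspect the available splits and columns
```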
## Training procedure