nielsr (HF staff) committed
Commit: c6b2049
Parent(s): ad5d7e3
Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -7,7 +7,7 @@ datasets:
 
 # Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2
 
-Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](). It was introduced in the paper [ViLT: Vision-and-Language Transformer
+Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](https://visualqa.org/). It was introduced in the paper [ViLT: Vision-and-Language Transformer
 Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
 
 Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
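
For context, the paragraph being edited introduces a visual question answering model. A minimal usage sketch is shown below; it assumes the `dandelin/vilt-b32-finetuned-vqa` checkpoint (the repository this card appears to belong to, though the diff itself does not name it) and the `ViltProcessor` / `ViltForQuestionAnswering` classes from `transformers`:

```python
# Minimal VQA sketch. Assumption: the checkpoint "dandelin/vilt-b32-finetuned-vqa",
# which is not stated in the diff above.
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Any RGB image works; this is a commonly used COCO validation image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the image-question pair and pick the highest-scoring answer class.
encoding = processor(image, question, return_tensors="pt")
outputs = model(**encoding)
idx = outputs.logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```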