nielsr (HF staff) committed
Commit 5da696a
1 parent: 369adb1

Create README.md

Files changed (1): README.md (+53, -0)
README.md ADDED
@@ -0,0 +1,53 @@
---
license: apache-2.0
tags:
---

# Vision-and-Language Transformer (ViLT), pre-trained only

Vision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Note: this model only includes the language modeling head.

Disclaimer: The team releasing ViLT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

ViLT is a minimal vision-and-language Transformer: rather than relying on a convolutional backbone or region supervision (object detection features), it embeds image patches with a simple linear projection and processes the concatenated text and patch embeddings with a single Transformer encoder.

## Intended uses & limitations

You can use the raw model for masked language modeling given an image and a piece of text. Since this checkpoint only includes the language modeling head, downstream tasks such as visual question answering require fine-tuning.

### How to use
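
Here is a minimal sketch of how the model might be used for masked language modeling with the 🤗 Transformers library; the checkpoint id `dandelin/vilt-b32-mlm` and the example image URL are assumptions for illustration, not confirmed by this card:

```python
from PIL import Image
import requests
import torch
from transformers import ViltProcessor, ViltForMaskedLM

# Checkpoint id is an assumption; substitute the actual repo id of this model.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltForMaskedLM.from_pretrained("dandelin/vilt-b32-mlm")

# Any RGB image works; this COCO validation image is only an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a bunch of [MASK] laying on a [MASK]."

# The processor encodes the image and text into a single batch of inputs.
encoding = processor(image, text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

# Logits have shape (batch, text_seq_len, vocab_size); read off the
# highest-scoring token at each [MASK] position.
input_ids = encoding["input_ids"][0]
predictions = outputs.logits[0].argmax(dim=-1)
mask_positions = (input_ids == processor.tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
for pos in mask_positions:
    print(pos.item(), processor.tokenizer.decode([predictions[pos].item()]))
```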

## Training data

ViLT was pre-trained on four image-text datasets: Google Conceptual Captions (GCC), SBU Captions, COCO and Visual Genome (VG).

## Training procedure

### Preprocessing

(to do)

### Pretraining

The model was pre-trained for 200k steps; see the [original paper](https://arxiv.org/abs/2102.03334) for details.

## Evaluation results

(to do)

### BibTeX entry and citation info

```bibtex
@misc{kim2021vilt,
    title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
    author={Wonjae Kim and Bokyung Son and Ildoo Kim},
    year={2021},
    eprint={2102.03334},
    archivePrefix={arXiv},
    primaryClass={stat.ML}
}
```