ChongyanChen committed on
Commit 01ca82a
1 Parent(s): bd030ec

Update README.md

Files changed (1)
  1. README.md +9 -4
README.md CHANGED
@@ -22,12 +22,17 @@ It differs from prior datasets; examples include that it contains:
 - (4) user-chosen topics for each visual question from 105 diverse topics revealing the dataset’s inherent diversity.
 
 ## Dataset Structure
-We designed VQAonline to support few-shot settings given the recent exciting developments around in-context few-shot learning with foundation models.
-- Training set: 665 examples
-- Validation set: 285 examples
-- Test set: 63,746 examples
+In total, the VQAonline dataset contains 64,696 visual questions.
 
+We designed VQAonline to support few-shot settings, given the recent exciting developments around in-context few-shot learning with foundation models. Thus, we split the dataset as follows:
 
+- Training set: 665 visual questions
+- Validation set: 285 visual questions
+- Test set: 63,746 visual questions
+
+The questions, contexts, and answers are provided in JSON files.
+
+Due to Hugging Face constraints, we split the image files into 7 folders (named images1 to images7), each of which contains 10,000 image files, except for the last folder, images7.
 
 ## Contact
 - Chongyan Chen: chongyanchen_hci@utexas.edu
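As a sanity check on the numbers in the updated section, here is a minimal sketch of the dataset layout. The split names (`train`/`validation`/`test`) are an assumption for illustration; only the split sizes, the 64,696 total, and the folder names images1 through images7 come from the README itself.

```python
# Hedged sketch of the VQAonline layout described in the README above.
# Split keys "train"/"validation"/"test" are assumed names, not confirmed
# by the diff; the counts and folder names are taken from the README.

# Split sizes as stated in the README; they sum to the stated 64,696 total.
SPLIT_SIZES = {"train": 665, "validation": 285, "test": 63_746}


def image_folder_names(n_folders: int = 7) -> list[str]:
    """Return the folder names images1..images7 used to shard the image files."""
    return [f"images{i}" for i in range(1, n_folders + 1)]


total_questions = sum(SPLIT_SIZES.values())
print(total_questions)        # 64696, matching the README's total
print(image_folder_names())   # ['images1', ..., 'images7']
```

Each of the first six folders holds 10,000 image files; images7 holds the remainder, per the README's description.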