---
license: cc-by-sa-4.0
task_categories:
- visual-question-answering
pretty_name: VQAonline
---
# VQAonline

<img src="https://cdn-uploads.huggingface.co/production/uploads/6337e9b676421c05430a0287/6vt42q8w7EWx9vVuZqc3U.png" width="50%">

[**🌐 Homepage**](https://vqaonline.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/ChongyanChen/VQAonline/) | [**📖 arXiv**](https://arxiv.org/abs/2311.15562) 

## Dataset Description
We introduce VQAonline, the first VQA dataset in which all contents originate from an authentic use case. 

VQAonline includes 64K visual questions sourced from an online question-answering community (i.e., Stack Exchange).

It differs from prior datasets in several ways; in particular, it contains:
- (1) authentic context that clarifies each question,
- (2) an answer that the question asker validated as acceptable from all community-provided answers,
- (3) answers that are considerably longer (a mean of 173 words versus typically 11 words or fewer in prior work), and
- (4) a user-chosen topic for each visual question, drawn from 105 diverse topics that reveal the dataset's inherent diversity.

## Dataset Structure
In total, the VQAonline dataset contains 64,696 visual questions.

We designed VQAonline to support few-shot settings, given the exciting recent developments in in-context few-shot learning with foundation models. Thus, we split the dataset as follows:

- Training set: 665 visual questions 
- Validation set: 285 visual questions 
- Test set: 63,746 visual questions 

The questions, contexts, and answers are provided in JSON files.

Due to Hugging Face's repository constraints, we split the image files across 7 folders (named images1 through images7); each folder contains 10,000 images, except for images7.
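
Below is a minimal loading sketch based on the layout described above. The annotation file name (`train.json`), its field names (`question`, `context`, `answer`, `image_id`), and the per-record image file name are assumptions for illustration; check the repository's file listing and adjust accordingly.

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download
from PIL import Image

# Download a local copy of the dataset repository.
local_dir = Path(snapshot_download(repo_id="ChongyanChen/VQAonline", repo_type="dataset"))

# Load the annotations (file name assumed; see the repository for the actual names).
with open(local_dir / "train.json", encoding="utf-8") as f:
    train = json.load(f)

example = train[0]
print(example["question"], example["context"], example["answer"])  # field names assumed


def find_image(filename: str) -> Image.Image:
    """Search the images1 ... images7 folders for a given image file."""
    for i in range(1, 8):
        candidate = local_dir / f"images{i}" / filename
        if candidate.exists():
            return Image.open(candidate)
    raise FileNotFoundError(filename)


image = find_image(example["image_id"])  # "image_id" is an assumed field name
```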

## Contact
- Chongyan Chen: chongyanchen_hci@utexas.edu

## Citation
**BibTeX:**
```bibtex
@article{chen2023vqaonline,
  title={Fully Authentic Visual Question Answering Dataset from Online Communities},
  author={Chen, Chongyan and Liu, Mengchen and Codella, Noel and Li, Yunsheng and Yuan, Lu and Gurari, Danna},
  journal={arXiv preprint arXiv:2311.15562},
  year={2023}
}
```