---
license: cc-by-sa-4.0
task_categories:
- visual-question-answering
pretty_name: VQAonline
---

# VQAonline

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6337e9b676421c05430a0287/6vt42q8w7EWx9vVuZqc3U.png)

[**🌐 Homepage**](https://vqaonline.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/ChongyanChen/VQAonline/) | [**📖 arXiv**](https://arxiv.org/abs/2311.15562)

## Dataset Description

We introduce VQAonline, the first VQA dataset in which all contents originate from an authentic use case.

VQAonline includes 64K visual questions sourced from an online question answering community (i.e., Stack Exchange).

It differs from prior datasets in several ways; for example, it contains:

1. authentic context that clarifies the question,
2. an answer that the question asker validated as acceptable from among all community-provided answers,
3. answers that are considerably longer (e.g., a mean of 173 words versus typically 11 words or fewer in prior work), and
4. a user-chosen topic for each visual question, drawn from 105 diverse topics, revealing the dataset's inherent diversity.

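
To make these components concrete, below is a minimal sketch of what a single VQAonline record might look like. The field names and values are illustrative assumptions for this card, not the dataset's documented schema; consult the data files on the hub for the actual column names.

```python
# Hypothetical shape of one VQAonline example.
# NOTE: the field names and values are assumptions for illustration,
# not the dataset's documented schema.
example = {
    "image": "images/0001.png",                    # the image the question is about
    "question": "What is this connector called?",  # the visual question (post title)
    "context": "I found this cable in a drawer of old parts and ...",  # authentic context clarifying the question
    "answer": "This is a ...",                     # the asker-accepted answer (answers average ~173 words)
    "topic": "electronics",                        # one of 105 user-chosen topics
}
```
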
## Dataset Structure

We designed VQAonline to support few-shot settings, given the recent exciting developments in in-context few-shot learning with foundation models.

- Training set: 665 examples
- Validation set: 285 examples
- Test set: 63,746 examples

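
Given these split sizes, a natural workflow is to draw in-context demonstrations from the small training split and evaluate on the large test split. Below is a minimal loading sketch using the Hugging Face `datasets` library; the split names are assumptions inferred from the sizes above.

```python
from datasets import load_dataset

# Load VQAonline from the Hugging Face Hub.
dataset = load_dataset("ChongyanChen/VQAonline")

# Inspect the splits; the names ("train"/"validation"/"test") are
# assumptions here -- expected sizes are 665 / 285 / 63,746.
print({name: len(split) for name, split in dataset.items()})

# Sample a few training examples to use as in-context demonstrations.
few_shot = dataset["train"].shuffle(seed=0).select(range(4))
```

The `few_shot` slice can then be formatted into prompts for a vision-language model.
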
## Contact

- Chongyan Chen: chongyanchen_hci@utexas.edu

## Citation

**BibTeX:**

```bibtex
@article{chen2023vqaonline,
  title={Fully Authentic Visual Question Answering Dataset from Online Communities},
  author={Chen, Chongyan and Liu, Mengchen and Codella, Noel and Li, Yunsheng and Yuan, Lu and Gurari, Danna},
  journal={arXiv preprint arXiv:2311.15562},
  year={2023}
}
```