---
license: cc-by-sa-4.0
task_categories:
  - visual-question-answering
pretty_name: VQAonline
---

# VQAonline

🌐 Homepage | 🤗 Dataset | 📖 arXiv

## Dataset Description

We introduce VQAonline, the first VQA dataset in which all content originates from an authentic use case.

VQAonline includes 64K visual questions sourced from an online question answering community (i.e., StackExchange).

It differs from prior datasets in that it contains:

1. authentic context that clarifies the question,
2. an answer that the individual asking the question validated as acceptable from all community-provided answers,
3. answers that are considerably longer (a mean of 173 words, versus typically 11 words or fewer in prior work), and
4. user-chosen topics for each visual question, drawn from 105 diverse topics that reveal the dataset's inherent diversity.

## Download

To download, you can use the following command:

```shell
git clone https://huggingface.co/datasets/ChongyanChen/VQAonline
```

## Dataset Structure

In total, the VQAonline dataset contains 64,696 visual questions.

We designed VQAonline to support few-shot settings, given the recent exciting developments in in-context few-shot learning with foundation models. Thus, we split the dataset as follows:

  • Training set: 665 visual questions
  • Validation set: 285 visual questions
  • Test set: 63,746 visual questions

The questions, contexts, and answers are provided in the JSON files.
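The exact schema of the JSON files is not spelled out above; as a minimal sketch, assuming each entry carries `question`, `context`, and `answer` fields (hypothetical key names — check the downloaded files for the actual schema), loading a split might look like:

```python
import json
from pathlib import Path

def load_split(path):
    """Load a VQAonline split from a JSON file.

    The field names used below (question/context/answer) are
    assumptions, not the confirmed schema of the dataset.
    """
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Demo with an in-memory sample that mimics the assumed schema:
sample = [{"question": "What plant is this?",
           "context": "Found in my backyard.",
           "answer": "It looks like a young oak."}]
Path("sample.json").write_text(json.dumps(sample), encoding="utf-8")

data = load_split("sample.json")
print(len(data))  # number of visual questions in the split
```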

Due to Hugging Face's file-count constraints, we split the image files across 7 folders (named images1 through images7), each of which contains 10,000 image files, except for folder "images7".
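Because the images are spread across seven folders, locating a given file requires checking each one. A small helper for this (the folder names come from the layout above; the helper itself is a sketch, not part of the dataset's tooling):

```python
import tempfile
from pathlib import Path

def find_image(root, filename):
    """Search folders images1..images7 under `root` for an image file.

    Returns the full path if found, otherwise None.
    """
    for i in range(1, 8):
        candidate = Path(root) / f"images{i}" / filename
        if candidate.exists():
            return candidate
    return None

# Demo with a temporary directory mimicking the dataset layout:
root = Path(tempfile.mkdtemp())
(root / "images3").mkdir()
(root / "images3" / "example.jpg").touch()

found = find_image(root, "example.jpg")
print(found)
```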

## Contact

## Citation

BibTeX:

```bibtex
@article{chen2023vqaonline,
  title={Fully Authentic Visual Question Answering Dataset from Online Communities},
  author={Chen, Chongyan and Liu, Mengchen and Codella, Noel and Li, Yunsheng and Yuan, Lu and Gurari, Danna},
  journal={arXiv preprint arXiv:2311.15562},
  year={2023}
}
```