---
license: cc-by-sa-4.0
task_categories:
  - visual-question-answering
pretty_name: VQAonline
---

# VQAonline


🌐 Homepage | 🤗 Dataset | 📖 arXiv

## Dataset Description

We introduce VQAonline, the first VQA dataset in which all contents originate from an authentic use case.

VQAonline includes 64K visual questions sourced from an online question answering community (StackExchange).

It differs from prior datasets in that it contains:

1. authentic context that clarifies the question,
2. an answer that the individual asking the question validated as acceptable from all community-provided answers,
3. answers that are considerably longer (a mean of 173 words, versus typically 11 words or fewer in prior work), and
4. user-chosen topics for each visual question, drawn from 105 diverse topics that reveal the dataset's inherent diversity.

## Dataset Structure

We designed VQAonline to support few-shot settings, given the recent exciting developments in in-context few-shot learning with foundation models.

- Training set: 665 examples
- Validation set: 285 examples
- Test set: 63,746 examples
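Because the training set is small, a typical workflow samples a handful of training examples as in-context demonstrations for a foundation model. Below is a minimal sketch of such a prompt builder; the field names `question`, `context`, and `answer` are illustrative assumptions, not the dataset's documented schema.

```python
import random


def build_few_shot_prompt(train_examples, query, k=2, seed=0):
    """Format k sampled training examples plus a query into a text prompt.

    `train_examples` is a list of dicts. The keys `question`, `context`,
    and `answer` are assumed field names for illustration only.
    """
    rng = random.Random(seed)  # fixed seed for reproducible shot selection
    shots = rng.sample(train_examples, k)
    parts = []
    for ex in shots:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Context: {ex['context']}\n"
            f"Answer: {ex['answer']}\n"
        )
    # The query is appended last, with the answer left for the model to fill in.
    parts.append(
        f"Question: {query['question']}\n"
        f"Context: {query['context']}\n"
        f"Answer:"
    )
    return "\n".join(parts)
```

With `k=2`, the resulting prompt contains two worked demonstrations followed by the unanswered query.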

## Contact

## Citation

BibTeX:

```bibtex
@article{chen2023vqaonline,
  title={Fully Authentic Visual Question Answering Dataset from Online Communities},
  author={Chen, Chongyan and Liu, Mengchen and Codella, Noel and Li, Yunsheng and Yuan, Lu and Gurari, Danna},
  journal={arXiv preprint arXiv:2311.15562},
  year={2023}
}
```