---
dataset_info:
  features:
    - name: question_id
      dtype: string
    - name: question
      dtype: string
    - name: choices
      dtype: string
    - name: answer
      dtype: string
    - name: query_image
      dtype: image
    - name: choice_image_0
      dtype: image
    - name: choice_image_1
      dtype: image
    - name: ques_type
      dtype: string
    - name: label
      dtype: string
    - name: grade
      dtype: string
    - name: skills
      dtype: string
  splits:
    - name: val
      num_bytes: 329185883.464
      num_examples: 21488
    - name: test
      num_bytes: 333201645.625
      num_examples: 21489
  download_size: 667286379
  dataset_size: 662387529.089
configs:
  - config_name: default
    data_files:
      - split: val
        path: data/val-*
      - split: test
        path: data/test-*
---
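As a quick sanity check, the split statistics declared in the metadata above are self-consistent: the per-split byte counts sum to the declared `dataset_size`, and the two splits together hold 42,977 examples.

```python
# Sanity-check the split statistics declared in the dataset card metadata.
splits = {
    "val": {"num_bytes": 329_185_883.464, "num_examples": 21_488},
    "test": {"num_bytes": 333_201_645.625, "num_examples": 21_489},
}

dataset_size = sum(s["num_bytes"] for s in splits.values())
total_examples = sum(s["num_examples"] for s in splits.values())

# The split byte counts add up to the declared dataset_size (662387529.089),
# allowing a tiny float tolerance.
assert abs(dataset_size - 662_387_529.089) < 1e-3
```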

# Large-scale Multi-modality Models Evaluation Suite

Accelerating the development of large-scale multi-modality models (LMMs) with lmms-eval

🏠 Homepage | 📚 Documentation | 🤗 Huggingface Datasets

## This Dataset

This is a formatted version of IconQA, used in our lmms-eval pipeline to enable one-click evaluation of large multi-modality models (LMMs).
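After loading a split (e.g. with `datasets.load_dataset`), each example carries a gold `answer` string. Purely as an illustration, a minimal exact-match scorer of the kind commonly used for multiple-choice QA might look like the sketch below; the strip/lowercase normalization here is an assumption for demonstration, not lmms-eval's actual metric.

```python
def exact_match(prediction: str, answer: str) -> float:
    """Score one example: 1.0 if the normalized prediction equals the
    gold `answer` field, else 0.0. The strip/lowercase normalization is
    an illustrative assumption, not lmms-eval's actual metric."""
    return float(prediction.strip().lower() == answer.strip().lower())


def accuracy(pairs) -> float:
    """Aggregate accuracy over an iterable of (prediction, answer) pairs."""
    pairs = list(pairs)
    return sum(exact_match(p, a) for p, a in pairs) / len(pairs)
```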

## Citation

```bibtex
@inproceedings{lu2021iconqa,
    title = {IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning},
    author = {Lu, Pan and Qiu, Liang and Chen, Jiaqi and Xia, Tony and Zhao, Yizhou and Zhang, Wei and Yu, Zhou and Liang, Xiaodan and Zhu, Song-Chun},
    booktitle = {The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks},
    year = {2021}
}
```