---
language:
  - ja
license: cc-by-4.0
size_categories:
  - 1K<n<10K
task_categories:
  - visual-question-answering
dataset_info:
  features:
    - name: image_id
      dtype: int64
    - name: url
      dtype: string
    - name: width
      dtype: int64
    - name: height
      dtype: int64
    - name: coco_id
      dtype: float64
    - name: flickr_id
      dtype: float64
    - name: qas
      list:
        - name: a_objects
          sequence: 'null'
        - name: answer
          dtype: string
        - name: q_objects
          sequence: 'null'
        - name: qa_id
          dtype: int64
        - name: question
          dtype: string
    - name: image
      dtype: image
  splits:
    - name: test
      num_bytes: 73348776
      num_examples: 500
    - name: train
      num_bytes: 140066760
      num_examples: 1000
  download_size: 495258420
  dataset_size: 497983127
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
      - split: train
        path: data/train-*
---

# JA-VG-VQA-500

## Dataset Description

JA-VG-VQA-500 is a 500-sample subset of the Japanese Visual Genome VQA dataset. This dataset was used in the evaluation of EvoVLM-JP-v1-7B. Please refer to our report and blog for more details. We are grateful to the developers for making the source dataset available under the Creative Commons Attribution 4.0 License.

## Usage

Use the code below to get started with the dataset.

```python
from datasets import load_dataset

dataset = load_dataset("SakanaAI/JA-VG-VQA-500", split="test")
```
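Each record holds an image plus its question–answer pairs nested under the `qas` field (see the schema above). A minimal sketch for inspecting one record:

```python
from datasets import load_dataset

dataset = load_dataset("SakanaAI/JA-VG-VQA-500", split="test")

# Each record is a dict with a PIL image, image metadata, and a list of
# QA pairs under `qas` (each with `qa_id`, `question`, and `answer`).
example = dataset[0]
print(example["image_id"], example["width"], example["height"])
for qa in example["qas"]:
    print(f"Q{qa['qa_id']}: {qa['question']} -> {qa['answer']}")
```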

See our GitHub repository for code to evaluate Japanese VLMs on this dataset.
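As an illustration only (not the official evaluation), here is a minimal exact-match scoring loop over the test split; `answer_fn` is a hypothetical placeholder for your VLM's inference call, and the repository's evaluation may use a different metric:

```python
from datasets import load_dataset

def answer_fn(image, question: str) -> str:
    # Hypothetical stand-in for a VLM inference call; replace this with
    # the model you want to evaluate.
    raise NotImplementedError

dataset = load_dataset("SakanaAI/JA-VG-VQA-500", split="test")

# Score every QA pair by exact string match against the reference answer.
correct = 0
total = 0
for example in dataset:
    for qa in example["qas"]:
        prediction = answer_fn(example["image"], qa["question"])
        correct += prediction.strip() == qa["answer"].strip()
        total += 1

print(f"Exact match: {correct / total:.3f}")
```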

## Acknowledgement

We would like to thank the developers of the source datasets for their contributions and for making their work available.

## Citation

```bibtex
@article{Krishna2016VisualGC,
  title   = {Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations},
  author  = {Ranjay Krishna and Yuke Zhu and Oliver Groth and Justin Johnson and Kenji Hata and Joshua Kravitz and Stephanie Chen and Yannis Kalantidis and Li-Jia Li and David A. Shamma and Michael S. Bernstein and Li Fei-Fei},
  journal = {International Journal of Computer Vision},
  year    = {2017},
  volume  = {123},
  pages   = {32-73},
  url     = {https://doi.org/10.1007/s11263-016-0981-7},
  doi     = {10.1007/s11263-016-0981-7}
}
@InProceedings{C18-1163,
  author    = "Shimizu, Nobuyuki and Rong, Na and Miyazaki, Takashi",
  title     = "Visual Question Answering Dataset for Bilingual Image Understanding: A Study of Cross-Lingual Transfer Using Attention Maps",
  booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
  year      = "2018",
  publisher = "Association for Computational Linguistics",
  pages     = "1918--1928",
  location  = "Santa Fe, New Mexico, USA",
  url       = "http://aclweb.org/anthology/C18-1163"
}
```