---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - en
license: gpl-3.0
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - multiple-choice
  - question-answering
  - visual-question-answering
task_ids:
  - multiple-choice-qa
  - visual-question-answering
  - multi-class-classification
tags:
  - multi-modal-qa
  - figure-qa
  - vqa
  - scientific-figure
  - geometry-diagram
  - chart
  - chemistry
dataset_info:
  features:
    - name: image_path
      dtype: string
    - name: question
      dtype: 'null'
    - name: answer
      dtype: string
    - name: prompt_reasoning
      dtype: 'null'
    - name: prompt_no_reasoning
      dtype: string
    - name: image_category
      dtype: string
    - name: task_category
      dtype: string
    - name: question_type
      dtype: string
    - name: response_options
      sequence: string
    - name: source
      dtype: string
    - name: id
      dtype: string
    - name: decoded_image
      dtype: image
  splits:
    - name: syntheticgeometry__triangle
      num_bytes: 328198888
      num_examples: 10000
    - name: syntheticgeometry__quadrilateral
      num_bytes: 327409666
      num_examples: 10000
    - name: syntheticgeometry__length
      num_bytes: 411043854
      num_examples: 10000
    - name: syntheticgeometry__angle
      num_bytes: 397038300
      num_examples: 10000
    - name: syntheticgeometry__area
      num_bytes: 400289876
      num_examples: 10000
    - name: 3d__size
      num_bytes: 1930906822
      num_examples: 10000
    - name: 3d__angle
      num_bytes: 4093207706
      num_examples: 10000
  download_size: 7226264280
  dataset_size: 7888095112
configs:
  - config_name: default
    data_files:
      - split: syntheticgeometry__triangle
        path: data/syntheticgeometry__triangle-*
      - split: syntheticgeometry__quadrilateral
        path: data/syntheticgeometry__quadrilateral-*
      - split: syntheticgeometry__length
        path: data/syntheticgeometry__length-*
      - split: syntheticgeometry__angle
        path: data/syntheticgeometry__angle-*
      - split: syntheticgeometry__area
        path: data/syntheticgeometry__area-*
      - split: 3d__size
        path: data/3d__size-*
      - split: 3d__angle
        path: data/3d__angle-*
---

# VisOnlyQA

This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information".

VisOnlyQA is designed to evaluate the visual perception capability of large vision language models (LVLMs) on geometric information in scientific figures. The evaluation set includes 1,200 multiple-choice questions across 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset consisting of 70k instances.

```bibtex
@misc{kamoi2024visonlyqa,
    title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
    author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
    year={2024},
    journal={arXiv preprint arXiv:2412.00947}
}
```

## Dataset

VisOnlyQA is provided in two formats: VLMEvalKit and Hugging Face Dataset. You can use either of them to evaluate your models and report the results in your papers. However, when you report results, please explicitly state which version of the dataset you used, because the two versions are different.

### Examples

### VLMEvalKit

VLMEvalKit provides one-command evaluation. However, VLMEvalKit is not designed to reproduce the results in the paper. You are welcome to use it to report results on VisOnlyQA in your papers, but please explicitly mention that you used VLMEvalKit.

The major differences are:

- VisOnlyQA on VLMEvalKit does not include the `chemistry__shape_multi` split.
- VLMEvalKit uses different prompts and postprocessing.

Refer to this document for the installation and setup of VLMEvalKit. After setting up the environment, you can evaluate any supported models on VisOnlyQA with the following command (this example is for InternVL2-26B).

```bash
python run.py --data VisOnlyQA-VLMEvalKit --model InternVL2-26B
```

### Hugging Face Dataset

The original VisOnlyQA dataset is provided as Hugging Face Datasets. If you want to reproduce the results in our paper, please use this version together with the code in the GitHub repository.

The `dataset` folder of the GitHub repository includes identical datasets, except for the training data.

```python
from datasets import load_dataset

real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real")
real_synthetic = load_dataset("ryokamoi/VisOnlyQA_Eval_Synthetic")

# Splits
print(real_eval.keys())
# dict_keys(['geometry__triangle', 'geometry__quadrilateral', 'geometry__length', 'geometry__angle', 'geometry__area', 'geometry__diameter_radius', 'chemistry__shape_single', 'chemistry__shape_multi', 'charts__extraction', 'charts__intersection'])

print(real_synthetic.keys())
# dict_keys(['syntheticgeometry__triangle', 'syntheticgeometry__quadrilateral', 'syntheticgeometry__length', 'syntheticgeometry__angle', 'syntheticgeometry__area', '3d__size', '3d__angle'])

# Prompt
print(real_eval['geometry__triangle'][0]['prompt_no_reasoning'])
# There is no triangle ADP in the figure. True or False?
#
# A triangle is a polygon with three edges and three vertices, which are explicitly connected in the figure.
#
# Your response should only include the final answer (True, False). Do not include any reasoning or explanation in your response.

# Image
print(real_eval['geometry__triangle'][0]['decoded_image'])
# <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=103x165 at 0x7FB4F83236A0>

# Answer
print(real_eval['geometry__triangle'][0]['answer'])
# False
```
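
The training data in this repository can be loaded the same way. A minimal sketch, with the split names taken from the dataset metadata above:

```python
from datasets import load_dataset

# The training data in this repository: seven synthetic splits
train = load_dataset("ryokamoi/VisOnlyQA_Train")

print(train.keys())
# dict_keys(['syntheticgeometry__triangle', 'syntheticgeometry__quadrilateral',
#            'syntheticgeometry__length', 'syntheticgeometry__angle',
#            'syntheticgeometry__area', '3d__size', '3d__angle'])
```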

## Data Format

Each instance of the VisOnlyQA dataset has the following attributes (a short usage sketch follows the field lists below):

### Features

- `decoded_image`: [PIL.Image] Input image
- `question`: [string] Question (without instruction)
- `prompt_reasoning`: [string] Prompt with instruction to use chain-of-thought
- `prompt_no_reasoning`: [string] Prompt with instruction not to use chain-of-thought
- `answer`: [string] Correct answer (e.g., True, a)

### Metadata

- `image_path`: [string] Path to the image file
- `image_category`: [string] Category of the image (e.g., geometry, chemistry)
- `question_type`: [string] `single_answer` or `multiple_answers`
- `task_category`: [string] Category of the task (e.g., triangle)
- `response_options`: [List[string]] Multiple choice options (e.g., ['True', 'False'], ['a', 'b', 'c', 'd', 'e'])
- `source`: [string] Source dataset
- `id`: [string] Unique ID
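
As an illustration, these fields are enough to drive a simple evaluation loop. The sketch below is only an assumption about how one might use them; `query_model` is a hypothetical stand-in for your model's inference call, not the evaluation code from the paper:

```python
from datasets import load_dataset

def query_model(prompt, image):
    """Hypothetical stand-in for your LVLM inference call."""
    raise NotImplementedError

real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real")
split = real_eval["geometry__triangle"]

correct = 0
for example in split:
    # prompt_no_reasoning asks for the final answer only,
    # without chain-of-thought reasoning.
    response = query_model(example["prompt_no_reasoning"], example["decoded_image"])
    # The gold answer is one of the listed response options (e.g., 'True'/'False').
    if response.strip() == example["answer"]:
        correct += 1

print(f"Accuracy: {correct / len(split):.3f}")
```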

## Statistics
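
Per-split sizes for the training data in this repository are listed in the dataset metadata above (10,000 examples in each of the seven synthetic splits, 70,000 in total). They can also be recomputed directly; a minimal sketch:

```python
from datasets import load_dataset

train = load_dataset("ryokamoi/VisOnlyQA_Train")
for name, split in train.items():
    print(f"{name}: {len(split)} examples")
# Each of the seven synthetic splits contains 10,000 examples.
```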

## License

Please refer to LICENSE.md.

## Contact

If you have any questions, feel free to open an issue or reach out directly to Ryo Kamoi (ryokamoi@psu.edu).