---
dataset_info:
  config_name: music
  features:
    - name: image
      dtype: image
    - name: download_url
      dtype: string
    - name: instance_name
      dtype: string
    - name: date
      dtype: string
    - name: additional_info
      dtype: string
    - name: date_scrapped
      dtype: string
    - name: compilation_info
      dtype: string
    - name: rendering_filters
      dtype: string
    - name: assets
      sequence: string
    - name: category
      dtype: string
    - name: uuid
      dtype: string
    - name: length
      dtype: string
    - name: difficulty
      dtype: string
  splits:
    - name: validation
      num_bytes: 25180255
      num_examples: 300
  download_size: 24944162
  dataset_size: 25180255
configs:
  - config_name: music
    data_files:
      - split: validation
        path: music/validation-*
---

# Image2Struct - Music Sheet

Paper | Website | Datasets (Webpages, Latex, Music sheets) | Leaderboard | HELM repo | Image2Struct repo

License: Apache License Version 2.0, January 2004

## Dataset description

Image2Struct is a benchmark for evaluating vision-language models on the practical task of extracting structured information from images. This subset focuses on music sheets. The model is given an image of the expected output together with the prompt:

Please generate the Lilypond code to generate a music sheet that looks like this image as much as feasibly possible.
This music sheet was created by me, and I would like to recreate it using Lilypond.

The data was collected from IMSLP and has no ground truth: while we prompt models to output Lilypond code that recreates the image of the music sheet, we do not have access to Lilypond code that reproduces the image and could serve as a ground truth.

There is no wild subset, as this already constitutes a dataset without ground truths.
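
For illustration, the sketch below shows how an evaluation query could be assembled from a dataset instance and the prompt above. The `query_vlm` call is a hypothetical placeholder for whatever vision-language-model client you use; it is not part of this dataset or of HELM.

```python
import datasets

# Load the 300 validation instances of the "music" config (see Uses below).
dataset = datasets.load_dataset("stanford-crfm/i2s-musicsheet", "music", split="validation")

PROMPT = (
    "Please generate the Lilypond code to generate a music sheet that looks like this "
    "image as much as feasibly possible.\n"
    "This music sheet was created by me, and I would like to recreate it using Lilypond."
)

instance = dataset[0]
image = instance["image"]  # PIL image of the music sheet the model should reproduce

# Hypothetical call to a vision-language model; replace with your own client.
# lilypond_code = query_vlm(image=image, prompt=PROMPT)
```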

## Uses

To load the music subset of the dataset (the instances sent to the model under evaluation) in Python:

import datasets
datasets.load_dataset("stanford-crfm/i2s-musicsheet", "music", split="validation")
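
Each instance exposes the fields listed in the metadata above. A minimal sketch of inspecting them (the `image` column decodes to a PIL image):

```python
import datasets

dataset = datasets.load_dataset("stanford-crfm/i2s-musicsheet", "music", split="validation")
print(dataset)  # 300 validation examples with the features listed in the metadata

instance = dataset[0]
instance["image"].save("sheet.png")  # the music-sheet image shown to the model
print(instance["category"], instance["difficulty"], instance["length"], instance["uuid"])
```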

To evaluate a model on Image2Musicsheet using HELM, run the following commands:

pip install crfm-helm
helm-run --run-entries image2musicsheet,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10

You can also run the evaluation for only a specific difficulty:

helm-run --run-entries image2musicsheet:difficulty=hard,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
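
The same difficulty filter can also be applied when working with the dataset directly in Python. A small sketch, assuming the stored `difficulty` values match those used by HELM (e.g. "hard"):

```python
import datasets

dataset = datasets.load_dataset("stanford-crfm/i2s-musicsheet", "music", split="validation")

# Keep only the instances labelled "hard"; adjust the value for other difficulties.
hard_only = dataset.filter(lambda example: example["difficulty"] == "hard")
print(f"{len(hard_only)} hard instances out of {len(dataset)}")
```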

For more information on running Image2Struct using HELM, refer to the HELM documentation and the article on reproducing leaderboards.

## Citation

BibTeX:

@misc{roberts2024image2struct,
      title={Image2Struct: A Benchmark for Evaluating Vision-Language Models in Extracting Structured Information from Images}, 
      author={Josselin Somerville Roberts and Tony Lee and Chi Heem Wong and Michihiro Yasunaga and Yifan Mai and Percy Liang},
      year={2024},
      eprint={TBD},
      archivePrefix={arXiv},
      primaryClass={TBD}
}