---
dataset_info:
  config_name: music
  features:
  - name: image
    dtype: image
  - name: download_url
    dtype: string
  - name: instance_name
    dtype: string
  - name: date
    dtype: string
  - name: additional_info
    dtype: string
  - name: date_scrapped
    dtype: string
  - name: compilation_info
    dtype: string
  - name: rendering_filters
    dtype: string
  - name: assets
    sequence: string
  - name: category
    dtype: string
  - name: uuid
    dtype: string
  - name: length
    dtype: string
  - name: difficulty
    dtype: string
  splits:
  - name: validation
    num_bytes: 25180255.0
    num_examples: 300
  download_size: 24944162
  dataset_size: 25180255.0
configs:
- config_name: music
  data_files:
  - split: validation
    path: music/validation-*
---
# Image2Struct - Music Sheet
[Paper](TODO) | [Website](https://crfm.stanford.edu/helm/image2structure/latest/) | Datasets ([Webpages](https://huggingface.co/datasets/stanford-crfm/i2s-webpage), [LaTeX](https://huggingface.co/datasets/stanford-crfm/i2s-latex), [Music sheets](https://huggingface.co/datasets/stanford-crfm/i2s-musicsheet)) | [Leaderboard](https://crfm.stanford.edu/helm/image2structure/latest/#/leaderboard) | [HELM repo](https://github.com/stanford-crfm/helm) | [Image2Struct repo](https://github.com/stanford-crfm/image2structure)
**License:** [Apache License](http://www.apache.org/licenses/) Version 2.0, January 2004
## Dataset description
Image2Struct is a benchmark for evaluating vision-language models on the practical task of extracting structured information from images.
This subset focuses on music sheets. The model is given an image of the expected output along with the prompt:
```
Please generate the Lilypond code to generate a music sheet that looks like this image as much as feasibly possible.
This music sheet was created by me, and I would like to recreate it using Lilypond.
```
The data was collected from IMSLP and has no ground truth: while we prompt models to output [LilyPond](https://lilypond.org/) code that recreates the image of the music sheet, we do not have access to LilyPond code that reproduces the image and could serve as a ground truth.
There is no **wild** subset, as this already constitutes a dataset without ground truths. Evaluation therefore compares the image rendered from the model's LilyPond code against the input image, as sketched below.
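As a rough illustration of that render-and-compare loop, here is a minimal sketch assuming a local `lilypond` install. The names `model_output` and `reference_image` are hypothetical placeholders, and the pixel MSE is only a stand-in for the benchmark's actual image-similarity scoring:
```python
import subprocess
import tempfile
from pathlib import Path

import numpy as np
from PIL import Image


def render_lilypond(code: str, workdir: str) -> Path:
    """Compile LilyPond source to a PNG with the `lilypond` CLI (must be installed)."""
    src = Path(workdir) / "sheet.ly"
    src.write_text(code)
    # `lilypond --png -o <basename> <source>` writes <basename>.png.
    subprocess.run(
        ["lilypond", "--png", "-o", str(src.with_suffix("")), str(src)],
        check=True,
    )
    return src.with_suffix(".png")


def pixel_mse(a: Image.Image, b: Image.Image) -> float:
    """Naive grayscale pixel MSE after resizing to a common shape (placeholder metric)."""
    b = b.resize(a.size)
    x = np.asarray(a.convert("L"), dtype=np.float32)
    y = np.asarray(b.convert("L"), dtype=np.float32)
    return float(np.mean((x - y) ** 2))


# `model_output` (a LilyPond string from the model) and `reference_image`
# (the PIL image from a dataset instance) are hypothetical placeholders.
with tempfile.TemporaryDirectory() as tmp:
    rendered = Image.open(render_lilypond(model_output, tmp))
    score = pixel_mse(reference_image, rendered)
```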
## Uses
To load the `music` subset of the dataset (the instances sent to the model under evaluation) in Python:
```python
import datasets

# Load the validation split of the `music` subset.
dataset = datasets.load_dataset("stanford-crfm/i2s-musicsheet", "music", split="validation")
```
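Each record exposes the features listed in the metadata above. For example, reusing the `dataset` object from the snippet above:
```python
example = dataset[0]
example["image"].show()  # PIL image of the music sheet
print(example["difficulty"], example["length"], example["uuid"])
```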
To evaluate a model on Image2Musicsheet using [HELM](https://github.com/stanford-crfm/helm/), run the following commands:
```sh
pip install crfm-helm
helm-run --run-entries image2musicsheet,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```
You can also run the evaluation for only a specific `difficulty`:
```sh
helm-run --run-entries image2musicsheet:difficulty=hard,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```
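If you only want to inspect instances of a given difficulty locally, the same slicing can be done with `datasets.Dataset.filter` (a sketch, reusing the `dataset` object loaded above):
```python
# Keep only instances labeled "hard".
hard = dataset.filter(lambda ex: ex["difficulty"] == "hard")
print(f"{len(hard)} hard instances out of {len(dataset)}")
```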
For more information on running Image2Struct using [HELM](https://github.com/stanford-crfm/helm/), refer to the [HELM documentation](https://crfm-helm.readthedocs.io/) and the article on [reproducing leaderboards](https://crfm-helm.readthedocs.io/en/latest/reproducing_leaderboards/).
## Citation
**BibTeX:**
```tex
@misc{roberts2024image2struct,
title={Image2Struct: A Benchmark for Evaluating Vision-Language Models in Extracting Structured Information from Images},
author={Josselin Somerville Roberts and Tony Lee and Chi Heem Wong and Michihiro Yasunaga and Yifan Mai and Percy Liang},
year={2024},
eprint={TBD},
archivePrefix={arXiv},
primaryClass={TBD}
}
```