---
dataset_info:
  config_name: music
  features:
  - name: image
    dtype: image
  - name: download_url
    dtype: string
  - name: instance_name
    dtype: string
  - name: date
    dtype: string
  - name: additional_info
    dtype: string
  - name: date_scrapped
    dtype: string
  - name: compilation_info
    dtype: string
  - name: rendering_filters
    dtype: string
  - name: assets
    sequence: string
  - name: category
    dtype: string
  - name: uuid
    dtype: string
  - name: length
    dtype: string
  - name: difficulty
    dtype: string
  splits:
  - name: validation
    num_bytes: 25180255.0
    num_examples: 300
  download_size: 24944162
  dataset_size: 25180255.0
configs:
- config_name: music
  data_files:
  - split: validation
    path: music/validation-*
---
# Image2Struct - Music Sheet
[Paper](TODO) | [Website](https://crfm.stanford.edu/helm/image2structure/latest/) | Datasets ([Webpages](https://huggingface.co/datasets/stanford-crfm/i2s-webpage), [Latex](https://huggingface.co/datasets/stanford-crfm/i2s-latex), [Music sheets](https://huggingface.co/datasets/stanford-crfm/i2s-musicsheet)) | [Leaderboard](https://crfm.stanford.edu/helm/image2structure/latest/#/leaderboard) | [HELM repo](https://github.com/stanford-crfm/helm) | [Image2Struct repo](https://github.com/stanford-crfm/image2structure)

**License:** [Apache License](http://www.apache.org/licenses/) Version 2.0, January 2004


## Dataset description
Image2Struct is a benchmark for evaluating vision-language models on the practical task of extracting structured information from images.
This subdataset focuses on music sheets. The model is given an image of the expected output together with the prompt:
```
Please generate the Lilypond code to generate a music sheet that looks like this image as much as feasibly possible.
This music sheet was created by me, and I would like to recreate it using Lilypond.
```

The data was collected from IMSLP and has no ground truth. This means that while we prompt models to output [LilyPond](https://lilypond.org/) code that recreates the image of the music sheet, we do not have access to LilyPond code that reproduces the image and could serve as a "ground truth".
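For readers unfamiliar with the target format, a minimal sketch of the kind of LilyPond source a model is expected to produce (this fragment is illustrative only; the notes and version number are arbitrary, not taken from any dataset instance):

```lilypond
\version "2.24.0"
\relative c' {
  \key c \major
  \time 4/4
  c4 d e f | g1
}
```

Compiling such a file with the `lilypond` command renders a music sheet image, which can then be compared against the input image.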

There is no **wild** subset, as this dataset already consists of instances without ground truths.


## Uses

To load the `music` subset of the dataset (the instances sent to the model under evaluation) in Python:

```python
import datasets
datasets.load_dataset("stanford-crfm/i2s-musicsheet", "music", split="validation")
```


To evaluate a model on Image2Musicsheet using [HELM](https://github.com/stanford-crfm/helm/), run the following commands:

```sh
pip install crfm-helm
helm-run --run-entries image2musicsheet,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```

You can also restrict the evaluation to a specific `difficulty`:
```sh
helm-run --run-entries image2musicsheet:difficulty=hard,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```

For more information on running Image2Struct using [HELM](https://github.com/stanford-crfm/helm/), refer to the [HELM documentation](https://crfm-helm.readthedocs.io/) and the article on [reproducing leaderboards](https://crfm-helm.readthedocs.io/en/latest/reproducing_leaderboards/).

## Citation

**BibTeX:**

```tex
@misc{roberts2024image2struct,
      title={Image2Struct: A Benchmark for Evaluating Vision-Language Models in Extracting Structured Information from Images}, 
      author={Josselin Somerville Roberts and Tony Lee and Chi Heem Wong and Michihiro Yasunaga and Yifan Mai and Percy Liang},
      year={2024},
      eprint={TBD},
      archivePrefix={arXiv},
      primaryClass={TBD}
}
```