Update README.md
README.md
CHANGED
@@ -40,3 +40,17 @@ configs:
   - split: validation
     path: music/validation-*
 ---
+# Image2Struct - Music Sheet
+[Paper](TODO) | [Website](https://crfm.stanford.edu/helm/image2structure/latest/) | Datasets ([Webpages](https://huggingface.co/datasets/stanford-crfm/i2s-webpage), [Latex](https://huggingface.co/datasets/stanford-crfm/i2s-latex), [Music sheets](https://huggingface.co/datasets/stanford-crfm/i2s-musicsheet)) | [Leaderboard](https://crfm.stanford.edu/helm/image2structure/latest/#/leaderboard) | [HELM repo](https://github.com/stanford-crfm/helm) | [Image2Struct repo](https://github.com/stanford-crfm/image2structure)
+
+## Dataset description
+Image2Struct is a benchmark for evaluating vision-language models on the practical task of extracting structured information from images.
+This subdataset focuses on music sheets. The model is given an image of the expected output together with the prompt:
+```
+Please generate the Lilypond code to generate a music sheet that looks like this image as much as feasibly possible.
+This music sheet was created by me, and I would like to recreate it using Lilypond.
+```
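To make the task concrete, here is a minimal LilyPond source of the kind a model is expected to emit. This is a hypothetical sketch for illustration; the notes are invented and not taken from any dataset image.

```
\version "2.24.0"
% A four-bar melody in C major. Compiling this file with the `lilypond`
% command produces a single-line engraved music sheet, structurally
% similar to the images the models are asked to reproduce.
\relative c' {
  \key c \major
  \time 4/4
  c4 d e f | g2 g | a4 a a a | g1 |
}
```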
+
+The data was collected from IMSLP and has no ground truth: while we prompt models to output [Lilypond](https://lilypond.org/) code that recreates the image of the music sheet, we do not have access to Lilypond code that could reproduce the image and serve as a "ground truth".
+
+There is no **wild** subset, as this dataset already consists entirely of examples without ground truths.