---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
pretty_name: 'JamALT: A Formatting-Aware Lyrics Transcription Benchmark'
---

# JamALT: A Formatting-Aware Lyrics Transcription Benchmark


## Dataset description

* **Project page:** https://audioshake.github.io/jam-alt/
* **Source code:** https://github.com/audioshake/alt-eval
* **Paper:** https://ismir2023program.ismir.net/lbd_343.html

JamALT is a revision of the [JamendoLyrics](https://github.com/f90/jamendolyrics) dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.

The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling, punctuation, and formatting.
The audio is identical to the JamendoLyrics dataset.
However, only 79 songs are included, as one of the 20 French songs (`La_Fin_des_Temps_-_BuzzBonBon`) has been removed due to concerns about potentially harmful content.

See the [project website](https://audioshake.github.io/jam-alt/) for details.

## Loading the data

```python
from datasets import load_dataset
dataset = load_dataset("audioshake/jam-alt")["test"]
```

A subset is defined for each language (`en`, `fr`, `de`, `es`);
for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs.

Other arguments can be specified to control audio loading:
- `with_audio=False` to skip loading audio.
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode_audio=False` to skip decoding the audio and just get the MP3 file paths.

## Running the benchmark

The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval):
```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]
# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```

By default, the dataset includes the audio, allowing you to run transcription directly.
For example, the following code can be used to evaluate Whisper:
```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]
# Get the raw audio file paths and let Whisper decode the files itself.
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
Alternatively, if you already have transcriptions, you might prefer to skip loading the audio:
```python
dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"]
```