---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
pretty_name: 'JamALT: A Formatting-Aware Lyrics Transcription Benchmark'
paperswithcode_id: jam-alt
---

# JamALT: A Formatting-Aware Lyrics Transcription Benchmark

## Dataset description

* **Project page:** https://audioshake.github.io/jam-alt/
* **Source code:** https://github.com/audioshake/alt-eval
* **Paper:** https://arxiv.org/abs/2311.13987

JamALT is a revision of the [JamendoLyrics](https://github.com/f90/jamendolyrics) dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.

The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling, punctuation, and formatting.
The audio is identical to the JamendoLyrics dataset.
However, only 79 songs are included, as one of the 20 French songs (`La_Fin_des_Temps_-_BuzzBonBon`) has been removed due to concerns about potentially harmful content.

**Note:** The dataset is not time-aligned, since the revised lyrics no longer map cleanly to the original JamendoLyrics timestamps. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly.

See the [project website](https://audioshake.github.io/jam-alt/) for details.

## Loading the data

```python
from datasets import load_dataset

dataset = load_dataset("audioshake/jam-alt")["test"]
```

A subset is defined for each language (`en`, `fr`, `de`, `es`);
for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs.
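
As a minimal sketch, loading and inspecting one language subset might look like this (the `language` column is the same one used in the evaluation examples below):
```python
from datasets import load_dataset

# Load only the Spanish subset of the benchmark
dataset_es = load_dataset("audioshake/jam-alt", "es")["test"]

print(len(dataset_es))            # number of Spanish songs
print(dataset_es[0]["language"])  # "es"
```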

By default, the dataset comes with audio. To skip loading the audio, pass `with_audio=False` to `load_dataset`.
To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`.
Useful arguments to `datasets.Audio()` are (both options are sketched below):
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode=False` to skip decoding the audio and just get the MP3 file paths.
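
A minimal sketch of these two options, assuming a 16 kHz mono target (use whatever sampling rate your model expects):
```python
import datasets
from datasets import load_dataset

dataset = load_dataset("audioshake/jam-alt")["test"]

# Decode to 16 kHz mono arrays on the fly
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000, mono=True))

# ...or skip decoding entirely and keep only the MP3 file paths
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))
```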

## Running the benchmark

The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval):
```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]

# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
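
Here, `transcriptions` is assumed to hold one system output per song, in the same order as the dataset; see the [`alt-eval` repository](https://github.com/audioshake/alt-eval) for the metrics it reports.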

For example, the following code can be used to evaluate Whisper:
```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # get the raw audio file path, let Whisper decode it

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
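
Note that the Whisper segments are joined with newline characters: since the benchmark is formatting-aware, the line breaks in the output matter for the metrics.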

Alternatively, if you already have transcriptions, you might prefer to skip loading the audio:

```python
dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"]
```
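
For instance, a minimal sketch that evaluates pre-computed transcriptions stored as one text file per song (the `transcriptions/` directory and its naming scheme are hypothetical):
```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"]

# Hypothetical layout: one UTF-8 text file per song, in dataset order
transcriptions = []
for i in range(len(dataset)):
    with open(f"transcriptions/{i:03d}.txt", encoding="utf-8") as f:
        transcriptions.append(f.read())

compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```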