---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
pretty_name: 'JamALT: A Formatting-Aware Lyrics Transcription Benchmark'
---
# JamALT: A Formatting-Aware Lyrics Transcription Benchmark

## Dataset description
- Project page: https://audioshake.github.io/jam-alt/
- Source code: https://github.com/audioshake/alt-eval
- Paper: https://ismir2023program.ismir.net/lbd_343.html
JamALT is a revision of the JamendoLyrics dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.
The lyrics have been revised according to the newly compiled annotation guidelines, which include rules about spelling, punctuation, and formatting.
The audio is identical to the JamendoLyrics dataset.
However, only 79 songs are included, as one of the 20 French songs (`La_Fin_des_Temps_-_BuzzBonBon`) has been removed due to concerns about potentially harmful content.
See the project website for details.
## Loading the data

```python
from datasets import load_dataset

dataset = load_dataset("audioshake/jam-alt")["test"]
```
A subset is defined for each language (`en`, `fr`, `de`, `es`); for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs.
Other arguments can be specified to control audio loading:
- `with_audio=False` to skip loading the audio.
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode_audio=False` to skip decoding the audio and just get the MP3 file paths.
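With `decode_audio=False`, each entry of the `audio` column carries the MP3 file path rather than a decoded waveform. A minimal sketch of collecting those paths, using a stand-in list in place of the real dataset column (the file names below are made up for illustration):

```python
# Stand-in for dataset["audio"] as returned with decode_audio=False:
# each entry is a dict holding the file path instead of decoded samples.
audio_column = [
    {"path": "audio/en/Song_A.mp3"},   # hypothetical file name
    {"path": "audio/fr/Chanson_B.mp3"},  # hypothetical file name
]

# Collect the raw MP3 paths, e.g. to hand them to an external decoder.
mp3_paths = [a["path"] for a in audio_column]
print(mp3_paths)
```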
## Running the benchmark

The evaluation is implemented in our `alt-eval` package:
```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]

# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
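As a rough illustration of what a transcription metric measures, here is a minimal word error rate via word-level edit distance. This is only a sketch, not the `alt-eval` implementation, which additionally scores formatting aspects such as case, punctuation, and line breaks:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

print(word_error_rate("the sun goes down", "the son goes down"))  # → 0.25
```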
By default, the dataset includes the audio, allowing you to run transcription directly. For example, the following code can be used to evaluate Whisper:
```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # get the raw audio file, let Whisper decode it

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
Alternatively, if you already have transcriptions, you might prefer to skip loading the audio:
```python
dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"]
```
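If your precomputed transcriptions live as one text file per song, a small helper can collect them in dataset order before calling `compute_metrics`. This is a sketch under assumptions not stated in the dataset card: the one-file-per-song layout and the idea of matching files by a per-song name are both hypothetical:

```python
from pathlib import Path

def load_transcriptions(folder: str, names: list[str]) -> list[str]:
    """Read one UTF-8 text file per song, in the given order.

    Each file is assumed (hypothetically) to be named <name>.txt.
    """
    return [(Path(folder) / f"{name}.txt").read_text(encoding="utf-8")
            for name in names]

# Hypothetical usage, assuming a column of per-song identifiers:
# transcriptions = load_transcriptions("my_transcriptions", dataset["name"])
# compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```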