---
task_categories:
  - automatic-speech-recognition
multilinguality:
  - multilingual
language:
  - en
  - fr
  - de
  - es
tags:
  - music
  - lyrics
  - evaluation
  - benchmark
  - transcription
pretty_name: 'JamALT: A Readability-Aware Lyrics Transcription Benchmark'
paperswithcode_id: jam-alt
dataset_info:
  - config_name: all
    features:
      - name: name
        dtype: string
      - name: text
        dtype: string
      - name: language
        dtype: string
      - name: license_type
        dtype: string
      - name: audio
        dtype: audio
    splits:
      - name: test
        num_bytes: 409411912
        num_examples: 79
    download_size: 409150043
    dataset_size: 409411912
  - config_name: de
    features:
      - name: name
        dtype: string
      - name: text
        dtype: string
      - name: language
        dtype: string
      - name: license_type
        dtype: string
      - name: audio
        dtype: audio
    splits:
      - name: test
        num_bytes: 107962802
        num_examples: 20
    download_size: 107942102
    dataset_size: 107962802
  - config_name: en
    features:
      - name: name
        dtype: string
      - name: text
        dtype: string
      - name: language
        dtype: string
      - name: license_type
        dtype: string
      - name: audio
        dtype: audio
    splits:
      - name: test
        num_bytes: 105135091
        num_examples: 20
    download_size: 105041371
    dataset_size: 105135091
  - config_name: es
    features:
      - name: name
        dtype: string
      - name: text
        dtype: string
      - name: language
        dtype: string
      - name: license_type
        dtype: string
      - name: audio
        dtype: audio
    splits:
      - name: test
        num_bytes: 105024257
        num_examples: 20
    download_size: 104979012
    dataset_size: 105024257
  - config_name: fr
    features:
      - name: name
        dtype: string
      - name: text
        dtype: string
      - name: language
        dtype: string
      - name: license_type
        dtype: string
      - name: audio
        dtype: audio
    splits:
      - name: test
        num_bytes: 91289764
        num_examples: 19
    download_size: 91218543
    dataset_size: 91289764
configs:
  - config_name: all
    data_files:
      - split: test
        path: parquet/all/test-*
    default: true
  - config_name: de
    data_files:
      - split: test
        path: parquet/de/test-*
  - config_name: en
    data_files:
      - split: test
        path: parquet/en/test-*
  - config_name: es
    data_files:
      - split: test
        path: parquet/es/test-*
  - config_name: fr
    data_files:
      - split: test
        path: parquet/fr/test-*
---

JamALT: A Readability-Aware Lyrics Transcription Benchmark

Dataset description

JamALT is a revision of the JamendoLyrics dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.

The lyrics have been revised according to the newly compiled annotation guidelines, which include rules about spelling, punctuation, and formatting. The audio is identical to the JamendoLyrics dataset. However, only 79 songs are included, as one of the 20 French songs (La_Fin_des_Temps_-_BuzzBonBon) has been removed due to concerns about potentially harmful content.

Note: The dataset is not time-aligned, since the revised lyrics do not map cleanly to the original JamendoLyrics timestamps. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly.

See the project website for details.

Loading the data

from datasets import load_dataset
dataset = load_dataset("audioshake/jam-alt", split="test")

A subset is defined for each language (en, fr, de, es); for example, use load_dataset("audioshake/jam-alt", "es") to load only the Spanish songs.
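
As a minimal sketch, this loads only the Spanish subset and inspects the first example (the column names match the dataset features listed in the metadata above):

from datasets import load_dataset

dataset = load_dataset("audioshake/jam-alt", "es", split="test")
example = dataset[0]
print(example["name"], example["language"], example["license_type"])
print(example["text"][:200])               # beginning of the reference lyrics
print(example["audio"]["sampling_rate"])   # decoded audio comes with its sampling rate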

To control how the audio is decoded, cast the audio column using dataset.cast_column("audio", datasets.Audio(...)). Useful arguments to datasets.Audio() are listed below (a short example follows the list):

  • sampling_rate and mono=True to control the sampling rate and number of channels.
  • decode=False to skip decoding the audio and just get the MP3 file paths and contents.
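
As an illustration, a minimal sketch combining both options (the 16 kHz mono setting is an arbitrary choice for this example, not a requirement of the dataset):

import datasets
from datasets import load_dataset

dataset = load_dataset("audioshake/jam-alt", split="test")

# Decode to 16 kHz mono, e.g. to match a typical ASR front end:
dataset_16k = dataset.cast_column("audio", datasets.Audio(sampling_rate=16000, mono=True))

# Or skip decoding and keep only the MP3 file paths and raw bytes:
dataset_raw = dataset.cast_column("audio", datasets.Audio(decode=False))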

The load_dataset function also accepts a columns parameter, which can be useful, for example, if you want to skip downloading the audio (see the example below).

Running the benchmark

The evaluation is implemented in our alt-eval package:

from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.1.0", split="test")
# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
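
The transcriptions list should contain one string per song, in the same order as the dataset. A hedged usage sketch, assuming compute_metrics returns a mapping of metric names to values (check the alt-eval documentation for the exact keys):

from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.1.0", split="test")
transcriptions = ["..."] * len(dataset)  # placeholder: one transcription string per song

metrics = compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
for name, value in metrics.items():      # assumes a dict-like return value
    print(name, value)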

For example, the following code can be used to evaluate Whisper:

dataset = load_dataset("audioshake/jam-alt", revision="v1.1.0", split="test")
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # Get the raw audio file, let Whisper decode it

model = whisper.load_model("tiny")
transcriptions = [
  "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
  for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])

Alternatively, if you already have transcriptions, you might prefer to skip loading the audio column:

dataset = load_dataset("audioshake/jam-alt", revision="v1.1.0", split="test", columns=["name", "text", "language", "license_type"])

Citation

When using the benchmark, please cite our paper as well as the original JamendoLyrics paper:

@misc{cifka-2024-jam-alt,
  author       = {Ond\v{r}ej C\'ifka and
                  Hendrik Schreiber and
                  Luke Miner and
                  Fabian-Robert St\"oter},
  title        = {Lyrics Transcription for Humans: A Readability-Aware Benchmark},
  booktitle    = {Proceedings of the 25th International Society for 
                  Music Information Retrieval Conference},
  year         = 2024,
  publisher    = {ISMIR},
  note         = {to appear; preprint arXiv:2408.06370}
}
@inproceedings{durand-2023-contrastive,
  author={Durand, Simon and Stoller, Daniel and Ewert, Sebastian},
  booktitle={2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  title={Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages}, 
  year={2023},
  pages={1-5},
  address={Rhodes Island, Greece},
  doi={10.1109/ICASSP49357.2023.10096725}
}