---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
pretty_name: 'JamALT: A Readability-Aware Lyrics Transcription Benchmark'
paperswithcode_id: jam-alt
dataset_info:
  config_name: all
  features:
  - name: name
    dtype: string
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: license_type
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: test
    num_bytes: 409411912.0
    num_examples: 79
  download_size: 409150043
  dataset_size: 409411912.0
configs:
- config_name: all
  data_files:
  - split: test
    path: parquet/all/test-*
  default: true
---

# JamALT: A Readability-Aware Lyrics Transcription Benchmark

## Dataset description

* **Project page:** https://audioshake.github.io/jam-alt/
* **Source code:** https://github.com/audioshake/alt-eval
* **Paper (ISMIR 2024):** https://arxiv.org/abs/2408.06370
* **Extended abstract (ISMIR 2023 LBD):** https://arxiv.org/abs/2311.13987

JamALT is a revision of the [JamendoLyrics](https://github.com/f90/jamendolyrics) dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.

The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling, punctuation, and formatting.

The audio is identical to the JamendoLyrics dataset. However, only 79 songs are included, as one of the 20 French songs (`La_Fin_des_Temps_-_BuzzBonBon`) has been removed due to concerns about potentially harmful content.

**Note:** The dataset is not time-aligned, as the revised lyrics do not easily map to the timestamps from JamendoLyrics. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly.

See the [project website](https://audioshake.github.io/jam-alt/) for details.

## Loading the data

```python
from datasets import load_dataset

dataset = load_dataset("audioshake/jam-alt")["test"]
```

A subset is defined for each language (`en`, `fr`, `de`, `es`); for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs.

By default, the dataset comes with audio. To skip loading the audio, use `with_audio=False`. To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`. Useful arguments to `datasets.Audio()` are:
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode=False` to skip decoding the audio and just get the MP3 file paths.
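As a minimal sketch combining these options, the following loads the Spanish subset and resamples on access (the 16 kHz mono target is an arbitrary choice for illustration, e.g. for Whisper-style models):

```python
import datasets
from datasets import load_dataset

# Load only the Spanish subset.
dataset = load_dataset("audioshake/jam-alt", "es")["test"]

# Decode the audio as 16 kHz mono on access.
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000, mono=True))

song = dataset[0]
print(song["name"], song["language"])
print(song["audio"]["array"].shape, song["audio"]["sampling_rate"])
```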
## Running the benchmark

The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval):

```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]

# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```

For example, the following code can be used to evaluate Whisper:

```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]
# Get the raw audio file paths and let Whisper do the decoding.
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```

Alternatively, if you already have transcriptions, you might prefer to skip loading the audio:

```python
dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"]
```

## Citation

When using the benchmark, please cite [our paper](https://arxiv.org/abs/2408.06370) as well as the original [JamendoLyrics paper](https://arxiv.org/abs/2306.07744):

```bibtex
@inproceedings{cifka-2024-jam-alt,
  author    = {Ond\v{r}ej C\'ifka and Hendrik Schreiber and Luke Miner and Fabian-Robert St\"oter},
  title     = {Lyrics Transcription for Humans: A Readability-Aware Benchmark},
  booktitle = {Proceedings of the 25th International Society for Music Information Retrieval Conference},
  year      = 2024,
  publisher = {ISMIR},
  note      = {to appear; preprint arXiv:2408.06370}
}
@inproceedings{durand-2023-contrastive,
  author    = {Durand, Simon and Stoller, Daniel and Ewert, Sebastian},
  title     = {Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year      = {2023},
  pages     = {1--5},
  address   = {Rhodes Island, Greece},
  doi       = {10.1109/ICASSP49357.2023.10096725}
}
```