---
annotations_creators:
- crowdsourced
- expert-generated
- other
- machine-generated
language:
- pl
language_creators:
- crowdsourced
- expert-generated
- other
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: pl-asr-bigos
size_categories:
- 10K<n<100K
source_datasets:
- original
- extended|multilingual_librispeech
- extended|common_voice
- extended|minds14
- extended|fleurs
tags:
- benchmark
- polish
- asr
- speech
- dataset
- audio
task_categories:
- automatic-speech-recognition
task_ids: []
extra_gated_prompt: |-
  Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are links to the license terms and the datasets to which each license applies:
  * [Creative Commons 0](https://creativecommons.org/share-your-work/public-domain/cc0), which applies to [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)
  * [Creative Commons By Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), which applies to [Clarin Cyfry](https://clarin-pl.eu/dspace/handle/11321/317) and the [Azon acoustic speech resources corpus](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy,53293/)
  * [Creative Commons By Attribution 3.0](https://creativecommons.org/licenses/by/3.0/), which applies to the [CLARIN Mobile database](https://clarin-pl.eu/dspace/handle/11321/237), [CLARIN Studio database](https://clarin-pl.eu/dspace/handle/11321/236), [PELCRA Spelling and Numbers Voice Database](http://pelcra.pl/new/snuv) and the [FLEURS dataset](https://huggingface.co/datasets/google/fleurs)
  * [Creative Commons By Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), which applies to [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and [PolyAI Minds14](https://huggingface.co/datasets/PolyAI/minds14)
  * [Proprietary license of the Munich AI Labs dataset](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset)
  * Public domain mark, which applies to the [PWR datasets](https://www.ii.pwr.edu.pl/~sas/ASR/)
  To use selected datasets, you also need to fill in the access forms on the specific dataset pages:
  * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0
extra_gated_fields:
  I hereby confirm that I have read and accepted the license terms of datasets comprising BIGOS corpora: checkbox
  I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset: checkbox
---
# Dataset Card for Polish ASR BIGOS corpora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Data curation toolkit:** https://github.com/goodmike31/pl-asr-bigos-tools
- **Eval results V1:** https://ieeexplore.ieee.org/document/10306084
- **Eval results V2:** https://arxiv.org/abs/2408.00005
- **ASR leaderboard:** https://huggingface.co/spaces/amu-cai/pl-asr-leaderboard
- **Contact:** michal.junczyk@amu.edu.pl
### Dataset Summary
The BIGOS (Benchmark Intended Grouping of Open Speech) corpora aim to simplify access to and use of publicly available ASR speech datasets for Polish.<br>
### Supported Tasks and Leaderboards
BIGOS V2 applications:
* Evaluation of 10 commercial and 15 freely available systems for Polish - [paper](https://arxiv.org/abs/2408.00005) <br>
* Interactive [Polish ASR leaderboard](https://huggingface.co/spaces/amu-cai/pl-asr-leaderboard) <br>
* Open Polish ASR challenge [PolEval](http://poleval.pl/) using BIGOS V2 and [PELCRA for BIGOS](https://huggingface.co/datasets/pelcra/pl-asr-pelcra-for-bigos) datasets.<br>
Note that [BIGOS V1](https://huggingface.co/datasets/michaljunczyk/pl-asr-bigos) was used to evaluate 3 commercial and 5 freely available systems [(paper)](https://annals-csis.org/proceedings/2023/drp/1609.html).
### Languages
Polish
## Dataset Structure
The datasets consist of audio recordings in the WAV format with corresponding metadata.<br>
The audio and metadata can be used in a raw format (TSV) or via the Hugging Face datasets library.<br>
References for the test split will only become available after the completion of the 2024 PolEval challenge.<br>
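Because the metadata also ships as raw TSV, it can be inspected with the Python standard library alone. A minimal sketch; the example row and exact column layout are hypothetical, with field names taken from the Data Fields section of this card:

```python
# Minimal sketch: reading BIGOS-style TSV metadata with the standard library.
# The two-line TSV below (header + one record) is a hypothetical example.
import csv
import io

sample_tsv = (
    "audioname\tsplit\tdataset\tref_orig\tsampling_rate\taudio_duration_seconds\n"
    "pwr-viu-unk-0001\ttrain\tpwr-viu-unk\tdzien dobry\t16000\t1.25\n"
)

def read_bigos_tsv(text: str) -> list:
    """Parse TSV metadata into a list of dicts, one per utterance."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

rows = read_bigos_tsv(sample_tsv)
print(rows[0]["dataset"], rows[0]["split"])  # pwr-viu-unk train
```

Alternatively, the Hugging Face `datasets` library (`load_dataset("amu-cai/pl-asr-bigos-v2", <subset>, split=...)`, with subset names as listed in the Data Splits table) decodes the audio automatically; the gated access terms above must be accepted first.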
### Data Instances
The train set consists of 82 025 samples.
The dev set consists of 14 254 samples.
The test set consists of 14 993 samples.
### Data Fields
Available fields:
* `audioname` - file identifier
* `split` - test, validation or train split
* `dataset` - source dataset identifier
* `ref_orig` - original transcription of audio file
* `audio` - HF dataset object with binary representation of audio file
* `samplingrate_orig` - sampling rate of the original recording
* `sampling_rate` - sampling rate of recording in the release
* `audio_duration_samples` - duration of recordings in samples
* `audio_duration_seconds` - duration of recordings in seconds
* `audiopath_bigos` - relative filepath to audio file extracted from tar.gz archive
* `audiopath_local` - absolute filepath to audio file extracted with the build script
* `speaker_gender` - gender (sex) of the speaker extracted from the source meta-data (N/A if not available)
* `speaker_age` - age group of the speaker (in CommonVoice format) extracted from the source (N/A if not available)
* `utt_length_words` - length of the utterance in words
* `utt_length_chars` - length of the utterance in characters
* `speech_rate_words` - ratio of words to recording duration.
* `speech_rate_chars` - ratio of characters to recording duration.
<br><br>
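The derived fields above relate to one another in a straightforward way; a small sketch with hypothetical values:

```python
# How the derived duration and speech-rate fields relate, per the field
# descriptions above. All values here are hypothetical examples.
audio_duration_samples = 40_000
sampling_rate = 16_000

# Duration in seconds is the sample count divided by the sampling rate.
audio_duration_seconds = audio_duration_samples / sampling_rate  # 2.5

utt_length_words = 5
utt_length_chars = 30

# `speech_rate_*` is the ratio of utterance length to recording duration.
speech_rate_words = utt_length_words / audio_duration_seconds  # 2.0 words/s
speech_rate_chars = utt_length_chars / audio_duration_seconds  # 12.0 chars/s
```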
### Data Splits
The train split contains recordings intended for training.
The validation split contains recordings for validation during the training procedure.
The test split contains recordings intended for evaluation only.
References for the test split are not available until the completion of the 2024 PolEval challenge.
| Subset | train | validation | test |
| -------------------------- | ------ | ---------- | ----- |
| fair-mls-20 | 25 042 | 511 | 519 |
| google-fleurs-22 | 2 841 | 338 | 758 |
| mailabs-corpus_librivox-19 | 11 834 | 1 527 | 1 501 |
| mozilla-common_voice_15-23 | 19 119 | 8 895 | 8 896 |
| pjatk-clarin_studio-15 | 10 999 | 1 407 | 1 404 |
| pjatk-clarin_mobile-15 | 2 861 | 242 | 392 |
| polyai-minds14-21 | 462 | 47 | 53 |
| pwr-maleset-unk | 3 783 | 478 | 477 |
| pwr-shortwords-unk | 761 | 86 | 92 |
| pwr-viu-unk | 2 146 | 290 | 267 |
| pwr-azon_read-20 | 1 820 | 382 | 586 |
| pwr-azon_spont-20 | 357 | 51 | 48 |
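As a sanity check, the per-subset counts in the table sum to the totals listed under Data Instances:

```python
# Per-subset (train, validation, test) counts copied from the table above.
counts = {
    "fair-mls-20":                (25042, 511, 519),
    "google-fleurs-22":           (2841, 338, 758),
    "mailabs-corpus_librivox-19": (11834, 1527, 1501),
    "mozilla-common_voice_15-23": (19119, 8895, 8896),
    "pjatk-clarin_studio-15":     (10999, 1407, 1404),
    "pjatk-clarin_mobile-15":     (2861, 242, 392),
    "polyai-minds14-21":          (462, 47, 53),
    "pwr-maleset-unk":            (3783, 478, 477),
    "pwr-shortwords-unk":         (761, 86, 92),
    "pwr-viu-unk":                (2146, 290, 267),
    "pwr-azon_read-20":           (1820, 382, 586),
    "pwr-azon_spont-20":          (357, 51, 48),
}

# Column-wise totals: 82 025 train, 14 254 validation, 14 993 test samples.
train_total, dev_total, test_total = (
    sum(c[i] for c in counts.values()) for i in range(3)
)
```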
## Dataset Creation
### Curation Rationale
[Polish ASR Speech Data Catalog](https://github.com/goodmike31/pl-asr-speech-data-survey) was used to identify suitable datasets which can be repurposed and included in the BIGOS corpora.<br>
The following mandatory criteria were considered:
* The dataset must be downloadable.
* The license must allow for free, noncommercial use.
* Transcriptions must be available and aligned with the recordings.
* The sampling rate of audio recordings must be at least 8 kHz.
* Audio must be encoded with a minimum of 16 bits per sample.
Recordings which either lacked transcriptions or were too short to be useful for training or evaluation were removed during curation.
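The mandatory criteria above can be expressed as a simple record filter. This is an illustrative sketch only, with hypothetical field names, not the actual curation toolkit's API:

```python
# Hypothetical filter mirroring the mandatory curation criteria above.
def meets_bigos_criteria(record: dict) -> bool:
    """True if a record satisfies the transcription, sampling-rate,
    and bit-depth requirements (field names are illustrative)."""
    return (
        bool(record.get("transcription"))             # transcription must exist
        and record.get("sampling_rate", 0) >= 8_000   # at least 8 kHz
        and record.get("bit_depth", 0) >= 16          # at least 16 bits/sample
    )

ok = {"transcription": "dzień dobry", "sampling_rate": 16_000, "bit_depth": 16}
bad = {"transcription": "", "sampling_rate": 16_000, "bit_depth": 16}
print(meets_bigos_criteria(ok), meets_bigos_criteria(bad))  # True False
```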
### Source Data
Twelve datasets that meet these criteria were chosen as sources for the BIGOS dataset.
* The Common Voice dataset version 15 (mozilla-common_voice_15-23)
* The Multilingual LibriSpeech (MLS) dataset (fair-mls-20)
* The Clarin Studio Corpus (pjatk-clarin_studio-15)
* The Clarin Mobile Corpus (pjatk-clarin_mobile-15)
* The Jerzy Sas PWR datasets from Politechnika Wrocławska (pwr-viu-unk, pwr-shortwords-unk, pwr-maleset-unk). More info [here](https://www.ii.pwr.edu.pl/)
* The Munich AI Labs Speech corpus (mailabs-corpus_librivox-19)
* The AZON Read and Spontaneous Speech Corpora (pwr-azon_spont-20, pwr-azon_read-20). More info [here](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy)
* The Google FLEURS dataset (google-fleurs-22)
* The PolyAI minds14 dataset (polyai-minds14-21)
<br>
#### Initial Data Collection and Normalization
Source text and audio files were extracted and encoded in a unified format.<br>
Dataset-specific transcription norms are preserved, including punctuation and casing. <br>
If the original dataset did not provide test, dev, and train splits, the splits were generated pseudorandomly during curation. <br>
<br>
#### Who are the source language producers?
1. Clarin corpora - Polish-Japanese Academy of Information Technology
2. Common Voice - Mozilla Foundation
3. Multilingual LibriSpeech - Facebook AI Research
4. Jerzy Sas and AZON datasets - Politechnika Wrocławska
5. FLEURS - Google
6. Minds14 - PolyAI London
Please refer to the [BIGOS V1 paper](https://annals-csis.org/proceedings/2023/drp/1609.html) for more details.
If you use BIGOS, please cite the data curator as well as the original authors:
```
@misc {amu_cai_pl_asr_bigos_v2,
author = { {Michał Junczyk} },
title = { pl-asr-bigos-v2 (Revision 37cc976) },
year = 2024,
url = { https://huggingface.co/datasets/amu-cai/pl-asr-bigos-v2 },
doi = { 10.57967/hf/2353 },
publisher = { Hugging Face }
}
@inproceedings{Ardila2020,
abstract = {The Common Voice corpus is a massively-multilingual collection of transcribed speech intended for speech technology research and development. Common Voice is designed for Automatic Speech Recognition purposes but can be useful in other domains (e.g. language identification). To achieve scale and sustainability, the Common Voice project employs crowdsourcing for both data collection and data validation. The most recent release includes 29 languages, and as of November 2019 there are a total of 38 languages collecting data. Over 50,000 individuals have participated so far, resulting in 2,500 hours of collected audio. To our knowledge this is the largest audio corpus in the public domain for speech recognition, both in terms of number of hours and number of languages. As an example use case for Common Voice, we present speech recognition experiments using Mozilla's DeepSpeech Speech-to-Text toolkit. By applying transfer learning from a source English model, we find an average Character Error Rate improvement of 5.99 ± 5.48 for twelve target languages (German, French, Italian, Turkish, Catalan, Slovenian, Welsh, Irish, Breton, Tatar, Chuvash, and Kabyle). For most of these languages, these are the first ever published results on end-to-end Automatic Speech Recognition.},
author = {Rosana Ardila and Megan Branson and Kelly Davis and Michael Kohler and Josh Meyer and Michael Henretty and Reuben Morais and Lindsay Saunders and Francis Tyers and Gregor Weber},
city = {Marseille, France},
editor = {Nicoletta Calzolari and Frédéric Béchet and Philippe Blache and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
isbn = {979-10-95546-34-4},
booktitle = {Proceedings of the Twelfth Language Resources and Evaluation Conference},
month = {5},
pages = {4218-4222},
publisher = {European Language Resources Association},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
url = {https://aclanthology.org/2020.lrec-1.520},
year = {2020},
}
@article{Pratap2020,
abstract = {This paper introduces Multilingual LibriSpeech (MLS) dataset, a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of 8 languages, including about 44.5K hours of English and a total of about 6K hours for other languages. Additionally, we provide Language Models (LM) and baseline Automatic Speech Recognition (ASR) models and for all the languages in our dataset. We believe such a large transcribed dataset will open new avenues in ASR and Text-To-Speech (TTS) research. The dataset will be made freely available for anyone at http://www.openslr.org.},
author = {Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
doi = {10.21437/Interspeech.2020-2826},
keywords = {Index Terms,multilingual,speech recognition},
month = {12},
title = {MLS: A Large-Scale Multilingual Dataset for Speech Research},
url = {http://arxiv.org/abs/2012.03411 http://dx.doi.org/10.21437/Interspeech.2020-2826},
year = {2020},
}
@article{Conneau2022,
abstract = {We introduce FLEURS, the Few-shot Learning Evaluation of Universal Representations of Speech benchmark. FLEURS is an n-way parallel speech dataset in 102 languages built on top of the machine translation FLoRes-101 benchmark, with approximately 12 hours of speech supervision per language. FLEURS can be used for a variety of speech tasks, including Automatic Speech Recognition (ASR), Speech Language Identification (Speech LangID), Translation and Retrieval. In this paper, we provide baselines for the tasks based on multilingual pre-trained models like mSLAM. The goal of FLEURS is to enable speech technology in more languages and catalyze research in low-resource speech understanding.},
author = {Alexis Conneau and Min Ma and Simran Khanuja and Yu Zhang and Vera Axelrod and Siddharth Dalmia and Jason Riesa and Clara Rivera and Ankur Bapna},
month = {5},
title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
year = {2022},
}
@misc{Korzinek2016,
author = {Danijel Koržinek and Krzysztof Marasek and Łukasz Brocki},
city = {Aix-en-Provence},
month = {10},
title = {Polish Read Speech Corpus for Speech Tools and Services},
url = {http://clarin-pl.eu},
year = {2016},
}
@article{Gerz2021,
abstract = {We present a systematic study on multilingual and cross-lingual intent detection from spoken data. The study leverages a new resource put forth in this work, termed MInDS-14, a first training and evaluation resource for the intent detection task with spoken data. It covers 14 intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties. Our key results indicate that combining machine translation models with state-of-the-art multilingual sentence encoders (e.g., LaBSE) can yield strong intent detectors in the majority of target languages covered in MInDS-14, and offer comparative analyses across different axes: e.g., zero-shot versus few-shot learning, translation direction, and impact of speech recognition. We see this work as an important step towards more inclusive development and evaluation of multilingual intent detectors from spoken data, in a much wider spectrum of languages compared to prior work.},
author = {Daniela Gerz and Pei-Hao Su and Razvan Kusztos and Avishek Mondal and Michał Lis and Eshan Singhal and Nikola Mrkšić and Tsung-Hsien Wen and Ivan Vulić},
month = {4},
title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
year = {2021},
}
```
### Annotations
#### Annotation process
The current release contains the original transcriptions.
Manual transcriptions of subsets and release of diagnostic dataset are planned for subsequent releases.
#### Who are the annotators?
Depends on the source dataset.
### Personal and Sensitive Information
This corpus does not contain PII or sensitive information.
All speaker IDs are anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
To be updated.
### Discussion of Biases
To be updated.
### Other Known Limitations
The initial release contains only a subset of recordings from the original datasets.
## Additional Information
### Dataset Curators
Original authors of the source datasets - please refer to [source-data](#source-data) for details.
Michał Junczyk (michal.junczyk@amu.edu.pl) - curator of BIGOS corpora.
### Licensing Information
The BIGOS corpora are available under the [Creative Commons By Attribution Share Alike 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are links to the license terms and the datasets to which each license applies:
* [Creative Commons 0](https://creativecommons.org/share-your-work/public-domain/cc0) which applies to [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)
* [Creative Commons By Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), which applies to [Clarin Cyfry](https://clarin-pl.eu/dspace/handle/11321/317), [Azon acoustic speech resources corpus](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy,53293/).
* [Creative Commons By Attribution 3.0](https://creativecommons.org/licenses/by/3.0/), which applies to [CLARIN Mobile database](https://clarin-pl.eu/dspace/handle/11321/237), [CLARIN Studio database](https://clarin-pl.eu/dspace/handle/11321/236), [PELCRA Spelling and Numbers Voice Database](http://pelcra.pl/new/snuv) and [FLEURS dataset](https://huggingface.co/datasets/google/fleurs)
* [Creative Commons By Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), which applies to [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and [Poly AI Minds 14](https://huggingface.co/datasets/PolyAI/minds14)
* [Proprietary license of the Munich AI Labs dataset](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset)
* Public domain mark, which applies to [PWR datasets](https://www.ii.pwr.edu.pl/~sas/ASR/)
### Citation Information
Please cite using the [BibTeX entry](https://dblp.org/rec/conf/fedcsis/Junczyk23.html?view=bibtex).
### Contributions
Thanks to [@goodmike31](https://github.com/goodmike31) for adding this dataset.