---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
---

- **Trimming:** Silence at the beginning and end of each audio was trimmed using a Python interface to the Voice Activity Detector (VAD) developed by Google for the WebRTC project.
- **Resampling:** From 48000 Hz to 22050 Hz, which is the most common sampling rate for training TTS models.
  - Resampler from the [CoquiTTS](https://github.com/coqui-ai/TTS/tree/dev) framework.
- **Denoising:** Although the base quality of the audio is high, we removed some background noise and small artifacts with the CleanUNet denoiser developed by NVIDIA.
  - [CleanUNet](https://github.com/NVIDIA/CleanUNet)
  - [arXiv](https://arxiv.org/abs/2202.07790)

We kept the same number of wave files, as well as the original anonymized file names and transcriptions.

## Uses

The purpose of this dataset is mainly the training of text-to-speech and automatic speech recognition models in Catalan.

## Dataset Structure

The dataset consists of a single split, providing audios and transcriptions:

```
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription'],
        num_rows: 4240
    })
})
```

Each data point is structured as:

```
>> data['train'][0]['audio']
{'path': 'caf_09901_01619988267.wav',
 'array': array([-3.05175781e-05, -3.05175781e-05, -3.05175781e-05, ...,
        -6.10351562e-05, -6.10351562e-05, -6.10351562e-05]),
 'sampling_rate': 22050}

>> data['train'][0]['transcription']
"L'òpera de Sydney es troba a l'entrada de la badia"
```

### Data Fields

- `audio (dict)`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, i.e. 
`dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
  - `path (str)`: The path to the audio file.
  - `array (array)`: The decoded audio array.
  - `sampling_rate (int)`: The audio sampling rate.
- `transcription (str)`: The sentence the user was prompted to speak.

## Dataset Creation

### Source Data

*SLR69: Crowdsourced high-quality Catalan multi-speaker speech data set*

This dataset contains transcribed high-quality audio of Catalan sentences recorded by volunteers. The recordings were prepared with the help of the Direcció General de Política Lingüística del Departament de Cultura, Generalitat de Catalunya.

The dataset consists of wave files and a TSV file (line_index.tsv), which contains an anonymized FileID and the transcription of the audio in the corresponding file.

The dataset has been manually quality-checked, but there might still be errors. Please report any issues in the following issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues

The original dataset is distributed under the Creative Commons Attribution-ShareAlike 4.0 International Public License. See the [LICENSE](https://www.openslr.org/resources/69/LICENSE) file and [https://github.com/google/language-resources#license](https://github.com/google/language-resources#license) for license information.

#### Data Collection and Processing

This is a post-processed version of the Catalan [OpenSLR-69](https://www.openslr.org/69) dataset. For more information about the original data collection and processing, refer to [this paper](https://aclanthology.org/2020.sltu-1.3/).

#### Who are the source data producers?

Copyright 2018, 2019 Google, Inc.

Copyright 2023 Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center

### Annotations

(N/A)

#### Personal and Sensitive Information

The dataset consists of recordings from people who have donated their voice online. You agree not to attempt to determine the identity of speakers in this dataset.
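To illustrate the resampling step described in the processing notes above (48000 Hz → 22050 Hz), here is a minimal sketch using linear interpolation in NumPy. This is illustrative only: the actual dataset was resampled with the CoquiTTS resampler, and `resample_linear` is a hypothetical helper, not part of the dataset tooling.

```python
import numpy as np

SRC_SR = 48_000  # sampling rate of the original OpenSLR-69 recordings
DST_SR = 22_050  # target sampling rate used in this dataset

def resample_linear(audio: np.ndarray, src_sr: int, dst_sr: int) -> np.ndarray:
    """Resample a mono waveform via linear interpolation (illustrative only)."""
    n_out = int(round(len(audio) * dst_sr / src_sr))
    # Positions of the output samples expressed on the input sample axis
    t_out = np.linspace(0.0, len(audio) - 1, num=n_out)
    return np.interp(t_out, np.arange(len(audio)), audio)

# One second of a 440 Hz tone at the source rate
tone = np.sin(2 * np.pi * 440 * np.arange(SRC_SR) / SRC_SR)
resampled = resample_linear(tone, SRC_SR, DST_SR)
print(len(resampled))  # 22050 samples, i.e. one second at the target rate
```

A production resampler would use a polyphase or band-limited filter to avoid aliasing; linear interpolation is used here only to keep the sketch short and dependency-free.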
## Bias, Risks, and Limitations

### Recommendations

This dataset is a post-processed version of a previously created dataset. Please refer to that dataset's documentation for any possible risks, biases, and limitations.

## Citation

The original paper where the authors detail how OpenSLR-69 was generated:

```
@inproceedings{kjartansson-etal-2020-open,
  title = {{Open-Source High Quality Speech Datasets for Basque, Catalan and Galician}},
  author = {Kjartansson, Oddur and Gutkin, Alexander and Butryna, Alena and Demirsahin, Isin and Rivera, Clara},
  booktitle = {Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)},
  year = {2020},
  pages = {21--27},
  month = may,
  address = {Marseille, France},
  publisher = {European Language Resources association (ELRA)},
  url = {https://www.aclweb.org/anthology/2020.sltu-1.3},
  ISBN = {979-10-95546-35-1},
}
```

## Funding

This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).

## Dataset Card Contact

langtech@bsc.es