---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 3700111065.76
    num_examples: 12435
  download_size: 3331803699
  dataset_size: 3700111065.76
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-to-speech
language:
- ca
---
# Dataset Card for festcat_trimmed_denoised

This is a post-processed version of the Catalan Festcat speech dataset. The data can be found [here](http://festcat.talp.cat/ca/download-legacy.php).

## Dataset Details

### Dataset Description

We processed the data of the Catalan Festcat with the following recipe (a rough sketch of the pipeline is shown after the list):

- **Trimming:** Long silences at the start and end of each clip have been removed.
  - [py-webrtcvad](https://pypi.org/project/webrtcvad/): a Python interface to the Voice Activity Detector (VAD) developed by Google for the WebRTC project.
- **Resampling:** From 48,000 Hz to 22,050 Hz, the most common sampling rate for training TTS models.
  - Resampler from the [CoquiTTS](https://github.com/coqui-ai/TTS/tree/dev) framework.
- **De-noising:** Although the base quality of the audio is high, we removed some background noise and small artifacts with the CleanUNet denoiser developed by NVIDIA.
  - [CleanUNet](https://github.com/NVIDIA/CleanUNet)
  - [paper](https://arxiv.org/abs/2202.07790)
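
The exact processing scripts are not part of this card. As a rough illustration, the trimming and resampling steps could be approximated as in the sketch below; function names and parameter values are assumptions, and the denoising step is only marked as a comment because it requires the CleanUNet repository and a pretrained checkpoint:

```
# Illustrative sketch only: NOT the exact pipeline used to build this dataset.
# It approximates the VAD-based edge trimming (py-webrtcvad) and the
# 48 kHz -> 22.05 kHz resampling; names and parameters are assumptions.
import numpy as np
import librosa
import soundfile as sf
import webrtcvad


def trim_silence_edges(pcm16, sr, frame_ms=30, aggressiveness=2):
    """Drop non-speech frames from the start and end of a 16-bit mono signal."""
    vad = webrtcvad.Vad(aggressiveness)        # 0 (least) .. 3 (most aggressive)
    frame_len = int(sr * frame_ms / 1000)      # samples per 30 ms frame
    n_frames = len(pcm16) // frame_len
    flags = [
        vad.is_speech(pcm16[i * frame_len:(i + 1) * frame_len].tobytes(), sr)
        for i in range(n_frames)
    ]
    if not any(flags):
        return pcm16                           # no speech detected: keep the clip as-is
    first = flags.index(True)
    last = len(flags) - 1 - flags[::-1].index(True)
    return pcm16[first * frame_len:(last + 1) * frame_len]


def process_clip(in_path, out_path):
    # webrtcvad only accepts 16-bit mono PCM at 8/16/32/48 kHz, so trim at 48 kHz first.
    audio, sr = librosa.load(in_path, sr=48000, mono=True)
    pcm16 = (audio * 32767).astype(np.int16)
    trimmed = trim_silence_edges(pcm16, sr)
    # Resample the trimmed clip to 22,050 Hz, the usual TTS training rate.
    resampled = librosa.resample(trimmed.astype(np.float32) / 32768.0,
                                 orig_sr=sr, target_sr=22050)
    # Denoising with NVIDIA CleanUNet would be applied here
    # (requires the CleanUNet repository and a pretrained checkpoint).
    sf.write(out_path, resampled, 22050)
```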

We kept the same number of wave files, as well as the original anonymized file names and transcriptions.

The same license is maintained: the Creative Commons Attribution-ShareAlike 3.0 Spain License.

To view a copy of this license, visit this [link](http://creativecommons.org/licenses/by-sa/3.0/es/).

## Uses

This dataset is intended mainly for training text-to-speech and automatic speech recognition models in Catalan.
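
For example, the corpus can be loaded with the Hugging Face `datasets` library; the repository id below is assumed for illustration and may need to be adjusted to the actual Hub path:

```
from datasets import load_dataset

# Repository id assumed for illustration; adjust it to the actual Hub path if needed.
data = load_dataset("projecte-aina/festcat_trimmed_denoised")

print(data["train"][0]["transcription"])
print(data["train"][0]["audio"]["sampling_rate"])  # 22050
```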


## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The dataset consists of a single split, providing audio clips and their transcriptions:
```
DatasetDict({
  train: Dataset({
    features: ['audio', 'transcription'],
    num_rows: 4240
  })
})
```
Each data point is structured as:
```
>> data['train'][0]['audio']

{'path': 'caf_09901_01619988267.wav',
 'array': array([-3.05175781e-05, -3.05175781e-05, -3.05175781e-05, ..., -6.10351562e-05, -6.10351562e-05, -6.10351562e-05]),
 'sampling_rate': 22050}


>> data['train'][0]['transcription']

"L'òpera de Sydney es troba a l'entrada de la badia"
```

### Dataset Fields

- <u>audio (dict)</u>: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, e.g. `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]` (see the access example after this list).
 
  * path (str): The path to the audio file.
  * array (array): Decoded audio array.
  * sampling_rate (int): Audio sampling rate.


- <u>transcription (str)</u>: The sentence the user was prompted to speak.
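
As a minimal sketch of the access pattern described above (the 16 kHz target rate is only an example, and the repository id is again an assumption):

```
from datasets import Audio, load_dataset

# Repository id assumed for illustration; adjust it to the actual Hub path if needed.
data = load_dataset("projecte-aina/festcat_trimmed_denoised")

# Index the sample first, then access "audio", so only that one file is decoded.
example = data["train"][0]
waveform = example["audio"]["array"]
rate = example["audio"]["sampling_rate"]   # 22050

# Optionally resample lazily at decode time, e.g. to 16 kHz for an ASR model.
data = data.cast_column("audio", Audio(sampling_rate=16000))
print(data["train"][0]["audio"]["sampling_rate"])  # 16000
```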


## Dataset Creation

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->


#### Who are the source data producers?

Copyright 2018, 2019 Google, Inc.


### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->


### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation

These are the relevant publications related to the creation and development of the Festcat dataset:

```
@inproceedings{bonafonte2008corpus,
  title={Corpus and Voices for Catalan Speech Synthesis.},
  author={Bonafonte, Antonio and Adell, Jordi and Esquerra, Ignasi and Gallego, Silvia and Moreno, Asunci{\'o}n and P{\'e}rez, Javier},
  booktitle={LREC},
  year={2008}
}
```

```
@article{bonafonte2009recent,
  title={Recent work on the FESTCAT database for speech synthesis},
  author={Bonafonte, Antonio and Aguilar, Lourdes and Esquerra, Ignasi and Oller, Sergio and Moreno, Asunci{\'o}n},
  journal={Proc. SLTECH},
  pages={131--132},
  year={2009}
}
```

**APA:**


## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->



## More Information [optional]



## Dataset Card Authors [optional]



## Dataset Card Contact