  data_files:
  - split: train
    path: male/train-*
task_categories:
- text-to-speech
- text-to-audio
language:
- es
pretty_name: Chilean Spanish Speech
---
# Dataset Card for Chilean Spanish Speech

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description

- **Homepage:** [Crowdsourced high-quality Chilean Spanish speech data set](https://www.openslr.org/71/)
- **Repository:** [Google Language Resources and Tools](https://github.com/google/language-resources)
- **Paper:** [Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech](https://aclanthology.org/2020.lrec-1.801/)

### Dataset Summary

This dataset consists of 7 hours of transcribed high-quality audio of Chilean Spanish sentences recorded by 31 volunteers. The dataset is intended for speech technologies.

The data archives were restructured from the originals on [OpenSLR](http://www.openslr.org/71/) to make them easier to stream.

### Supported Tasks

- `text-to-speech`, `text-to-audio`: The dataset can be used to train a model for Text-to-Speech (TTS).
- `automatic-speech-recognition`, `speaker-identification`: The dataset can also be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
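As a concrete illustration of the WER metric mentioned above, here is a minimal pure-Python sketch (the `wer` helper is illustrative, not part of any dataset tooling; in practice a library such as `jiwer` would typically be used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    if not ref:
        return float(bool(hyp))
    # Compute the Levenshtein distance one dynamic-programming row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cur[j] = min(prev[j] + 1,              # deletion
                         cur[j - 1] + 1,           # insertion
                         prev[j - 1] + (r != h))   # substitution or match
        prev = cur
    return prev[-1] / len(ref)

# One substituted word out of five reference words -> WER of 0.2.
print(wer("la vigencia de tu tarjeta", "la vigencia de su tarjeta"))
```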
### How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in one call to the `load_dataset` function.

For example, to download the female config, simply specify the corresponding configuration name (i.e., "female" for female speakers):
```python
from datasets import load_dataset

dataset = load_dataset("ylacombe/google-chilean-spanish", "female", split="train")
```

Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset one at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset

dataset = load_dataset("ylacombe/google-chilean-spanish", "female", split="train", streaming=True)

print(next(iter(dataset)))
```

#### *Bonus*
You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

**Local:**

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

dataset = load_dataset("ylacombe/google-chilean-spanish", "female", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```

**Streaming:**

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("ylacombe/google-chilean-spanish", "female", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

## Dataset Structure

### Data Instances

A typical data point comprises the path to the audio file, called `audio`, and its transcription, called `text`. Additional information about the speaker is also provided.

```
{'audio': {'path': 'clf_09334_01278378087.wav', 'array': array([-9.15527344e-05, -4.57763672e-04, -4.88281250e-04, ...,
        1.86157227e-03, 2.10571289e-03, 2.31933594e-03]), 'sampling_rate': 48000}, 'text': 'La vigencia de tu tarjeta es de ocho meses', 'speaker_id': 9334}
```

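From the fields shown above, a clip's duration follows directly as `len(array) / sampling_rate`. A small sketch using a synthetic stand-in array (the real `array` values come from the decoded audio file):

```python
# Synthetic stand-in for a decoded instance; real arrays come from the dataset.
sample = {
    "audio": {"path": "clf_09334_01278378087.wav",
              "array": [0.0] * 96000,       # placeholder samples
              "sampling_rate": 48000},
    "text": "La vigencia de tu tarjeta es de ocho meses",
}

# Duration in seconds = number of samples / samples per second.
duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(duration_s)  # 96000 / 48000 = 2.0
```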
### Data Fields

- `audio`: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

- `text`: The transcription of the audio file.

- `speaker_id`: The unique ID of the speaker. The same speaker ID can appear in multiple data samples.

### Data Statistics

|        | Total duration (h) | # speakers | # sentences | # total words | # unique words |
|--------|--------------------|------------|-------------|---------------|----------------|
| Female | 2.84               | 13         | 1738        | 16591         | 3279           |
| Male   | 4.31               | 18         | 2636        | 25168         | 4171           |

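As a quick sanity check, the per-split figures above are consistent with the totals quoted in the summary (a sketch over the table values only, not extra metadata):

```python
# Per-split figures copied from the data statistics table.
female = {"hours": 2.84, "speakers": 13, "sentences": 1738, "words": 16591}
male = {"hours": 4.31, "speakers": 18, "sentences": 2636, "words": 25168}

total_hours = female["hours"] + male["hours"]           # 7.15 h, the "7 hours" in the summary
total_speakers = female["speakers"] + male["speakers"]  # 31, matching "31 volunteers"
avg_words = (female["words"] + male["words"]) / (female["sentences"] + male["sentences"])
print(total_hours, total_speakers, round(avg_words, 1))  # ~9.5 words per sentence
```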
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

The dataset consists of recordings from people who have donated their voices online. By using this dataset, you agree not to attempt to determine the identity of the speakers.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

License: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en)

### Citation Information

```
@inproceedings{guevara-rukoz-etal-2020-crowdsourcing,
  title = {{Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech}},
  author = {Guevara-Rukoz, Adriana and Demirsahin, Isin and He, Fei and Chu, Shan-Hui Cathy and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Gutkin, Alexander and Butryna, Alena and Kjartansson, Oddur},
  booktitle = {Proceedings of the 12th Language Resources and Evaluation Conference (LREC)},
  year = {2020},
  month = may,
  address = {Marseille, France},
  publisher = {European Language Resources Association (ELRA)},
  url = {https://www.aclweb.org/anthology/2020.lrec-1.801},
  pages = {6504--6513},
  ISBN = {979-10-95546-34-4},
}
```

### Contributions

Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.