ylacombe (HF staff) committed
Commit 08e144a
1 Parent(s): fafc22d

Update README.md

Files changed (1): README.md (+197 −2)
  data_files:
  - split: train
    path: male/train-*
license: cc-by-sa-4.0
task_categories:
- text-to-speech
- text-to-audio
language:
- ta
pretty_name: Tamil Speech
---
# Dataset Card for Tamil Speech

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Statistics](#data-statistics)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Crowdsourced high-quality Tamil multi-speaker speech data set](https://www.openslr.org/65/)
- **Repository:** [Google Language Resources and Tools](https://github.com/google/language-resources)
- **Paper:** [Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems](https://aclanthology.org/2020.lrec-1.804/)

### Dataset Summary

This dataset consists of 7 hours of transcribed high-quality audio of Tamil sentences recorded by 50 volunteers. The dataset is intended for speech technologies.

The data archives were restructured from the originals on [OpenSLR](https://www.openslr.org/65/) to make the dataset easier to stream.

### Supported Tasks

- `text-to-speech`, `text-to-audio`: The dataset can be used to train a Text-to-Speech (TTS) model.
- `automatic-speech-recognition`, `speaker-identification`: The dataset can also be used to train an Automatic Speech Recognition (ASR) model, which is presented with an audio file and asked to transcribe it into written text. The most common evaluation metric is the word error rate (WER).
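
Since WER is the metric mentioned above, here is a minimal, self-contained sketch of how it is computed (word-level edit distance divided by the number of reference words); in practice, libraries such as `jiwer` or Hugging Face `evaluate` are typically used instead:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words.

    Assumes a non-empty, whitespace-tokenized reference.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the bat sat"))  # 0.3333333333333333
```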

### How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in a single call to the `load_dataset` function.

For example, to download the female config, simply specify the corresponding configuration name (i.e., "female" for female speakers):

```python
from datasets import load_dataset

dataset = load_dataset("ylacombe/google-tamil", "female", split="train")
```

Using the `datasets` library, you can also stream the dataset on the fly by adding `streaming=True` to the `load_dataset` call. Loading a dataset in streaming mode loads individual samples one at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

dataset = load_dataset("ylacombe/google-tamil", "female", split="train", streaming=True)

print(next(iter(dataset)))
```
#### *Bonus*

You can create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

**Local:**

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

dataset = load_dataset("ylacombe/google-tamil", "female", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```
**Streaming:**

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("ylacombe/google-tamil", "female", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
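
Note that with raw audio clips of varying lengths, PyTorch's default collate function cannot stack a batch into a single tensor, so a padding `collate_fn` is usually needed. A minimal sketch (the padding strategy and the returned field names are illustrative assumptions, not part of this dataset's loader):

```python
import torch

def pad_collate(batch):
    """Pad variable-length audio arrays to the longest clip in the batch.

    Assumes each sample is a dict with "audio" and "text" entries, matching
    this dataset's schema; returns padded audio plus the original lengths.
    """
    arrays = [torch.as_tensor(sample["audio"]["array"], dtype=torch.float32)
              for sample in batch]
    lengths = torch.tensor([a.shape[0] for a in arrays])
    padded = torch.nn.utils.rnn.pad_sequence(arrays, batch_first=True)
    return {"audio": padded, "lengths": lengths,
            "text": [sample["text"] for sample in batch]}

# Hypothetical toy samples standing in for two dataset rows:
batch = [
    {"audio": {"array": [0.0] * 100, "sampling_rate": 48000}, "text": "a"},
    {"audio": {"array": [0.0] * 150, "sampling_rate": 48000}, "text": "b"},
]
print(pad_collate(batch)["audio"].shape)  # torch.Size([2, 150])
```

You can then pass `collate_fn=pad_collate` to the `DataLoader` calls above.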

## Dataset Structure

### Data Instances

A typical data point comprises the path to the audio file, called `audio`, and its transcription, called `text`. Some additional information about the speaker is also provided.

```
{'audio': {'path': 'taf_02345_00348037167.wav', 'array': array([-9.15527344e-05, -9.15527344e-05, -1.22070312e-04, ...,
       -3.05175781e-05,  0.00000000e+00,  3.05175781e-05]), 'sampling_rate': 48000}, 'text': 'ஆஸ்த்ரேலியப் பெண்ணுக்கு முப்பத்தி மூன்று ஆண்டுகளுக்குப் பின்னர் இந்தியா இழப்பீடு வழங்கியது', 'speaker_id': 2345}
```

### Data Fields

- `audio`: A dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

- `text`: The transcription of the audio file.

- `speaker_id`: The unique ID of the speaker. The same speaker ID can appear in multiple data samples.

### Data Statistics

|        | Total duration (h) | Average duration (s) | # speakers | # sentences | # total words | # unique words | # total syllables | # unique syllables | # total phonemes | # unique phonemes |
|--------|--------------------|----------------------|------------|-------------|---------------|----------------|-------------------|--------------------|------------------|-------------------|
| Female | 4.01               | 6.18                 | 25         | 2,335       | 15,880        | 6,620          | 56,607            | 1,696              | 126,659          | 37                |
| Male   | 3.07               | 5.66                 | 25         | 1,956       | 13,545        | 6,159          | 48,049            | 1,642              | 107,570          | 37                |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

License: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en)
### Citation Information

```
@inproceedings{he-etal-2020-open,
  title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
  author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
  booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
  month = may,
  year = {2020},
  address = {Marseille, France},
  publisher = {European Language Resources Association (ELRA)},
  pages = {6494--6503},
  url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
  ISBN = {979-10-95546-34-4},
}
```

### Contributions

Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.