---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
  ab:
  - 10K<n<100K
  ar:
  - 100K<n<1M
  as:
  - 1K<n<10K
  ast:
  - n<1K
  az:
  - n<1K
  ba:
  - 100K<n<1M
  bas:
  - 1K<n<10K
  be:
  - 100K<n<1M
  bg:
  - 1K<n<10K
  bn:
  - 100K<n<1M
  br:
  - 10K<n<100K
  ca:
  - 1M<n<10M
  ckb:
  - 100K<n<1M
  cnh:
  - 1K<n<10K
  cs:
  - 10K<n<100K
  cv:
  - 10K<n<100K
  cy:
  - 100K<n<1M
  da:
  - 1K<n<10K
  de:
  - 100K<n<1M
  dv:
  - 10K<n<100K
  el:
  - 10K<n<100K
  en:
  - 1M<n<10M
  eo:
  - 1M<n<10M
  es:
  - 1M<n<10M
  et:
  - 10K<n<100K
  eu:
  - 100K<n<1M
  fa:
  - 100K<n<1M
  fi:
  - 10K<n<100K
  fr:
  - 100K<n<1M
  fy-NL:
  - 10K<n<100K
  ga-IE:
  - 1K<n<10K
  gl:
  - 10K<n<100K
  gn:
  - 1K<n<10K
  ha:
  - 1K<n<10K
  hi:
  - 10K<n<100K
  hsb:
  - 1K<n<10K
  hu:
  - 10K<n<100K
  hy-AM:
  - 1K<n<10K
  ia:
  - 10K<n<100K
  id:
  - 10K<n<100K
  ig:
  - 1K<n<10K
  it:
  - 100K<n<1M
  ja:
  - 10K<n<100K
  ka:
  - 10K<n<100K
  kab:
  - 100K<n<1M
  kk:
  - 1K<n<10K
  kmr:
  - 10K<n<100K
  ky:
  - 10K<n<100K
  lg:
  - 100K<n<1M
  lt:
  - 10K<n<100K
  lv:
  - 1K<n<10K
  mdf:
  - n<1K
  mhr:
  - 100K<n<1M
  mk:
  - n<1K
  ml:
  - 1K<n<10K
  mn:
  - 10K<n<100K
  mr:
  - 10K<n<100K
  mrj:
  - 10K<n<100K
  mt:
  - 10K<n<100K
  myv:
  - 1K<n<10K
  nan-tw:
  - 10K<n<100K
  ne-NP:
  - n<1K
  nl:
  - 10K<n<100K
  nn-NO:
  - n<1K
  or:
  - 1K<n<10K
  pa-IN:
  - 1K<n<10K
  pl:
  - 100K<n<1M
  pt:
  - 100K<n<1M
  rm-sursilv:
  - 1K<n<10K
  rm-vallader:
  - 1K<n<10K
  ro:
  - 10K<n<100K
  ru:
  - 100K<n<1M
  rw:
  - 1M<n<10M
  sah:
  - 1K<n<10K
  sat:
  - n<1K
  sc:
  - 1K<n<10K
  sk:
  - 10K<n<100K
  skr:
  - 1K<n<10K
  sl:
  - 10K<n<100K
  sr:
  - 1K<n<10K
  sv-SE:
  - 10K<n<100K
  sw:
  - 100K<n<1M
  ta:
  - 100K<n<1M
  th:
  - 100K<n<1M
  ti:
  - n<1K
  tig:
  - n<1K
  tok:
  - 1K<n<10K
  tr:
  - 10K<n<100K
  tt:
  - 10K<n<100K
  tw:
  - n<1K
  ug:
  - 10K<n<100K
  uk:
  - 10K<n<100K
  ur:
  - 100K<n<1M
  uz:
  - 100K<n<1M
  vi:
  - 10K<n<100K
  vot:
  - n<1K
  yue:
  - 10K<n<100K
  zh-CN:
  - 100K<n<1M
  zh-HK:
  - 100K<n<1M
  zh-TW:
  - 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 11.0
language_bcp47:
- ab
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- ig
- it
- ja
- ka
- kab
- kk
- kmr
- ky
- lg
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nan-tw
- ne-NP
- nl
- nn-NO
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sr
- sv-SE
- sw
- ta
- th
- ti
- tig
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yue
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
  attempt to determine the identity of speakers in the Common Voice dataset.
---

# Dataset Card for Common Voice Corpus 11.0

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)

### Dataset Summary

The Common Voice dataset consists of unique MP3 recordings, each paired with a corresponding text file.
Many of the 24,210 recorded hours in the dataset also include demographic metadata such as age, sex, and accent
that can help improve the accuracy of speech recognition engines.

The dataset currently comprises 16,413 validated hours in 100 languages, and more voices and languages are being added all the time.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.

### Supported Tasks and Leaderboards

Results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench).

### Languages

```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Upper Sorbian, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```

## Dataset Structure

### Data Instances

A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.

```python
{
  'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
  'path': 'et/clips/common_voice_et_18318995.mp3',
  'audio': {
    'path': 'et/clips/common_voice_et_18318995.mp3',
    'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
    'sampling_rate': 48000
  },
  'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
  'up_votes': 2,
  'down_votes': 0,
  'age': 'twenties',
  'gender': 'male',
  'accent': '',
  'locale': 'et',
  'segment': ''
}
```
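
Given a decoded instance like the one above, a clip's duration follows directly from the length of the audio array and the sampling rate. A minimal stdlib-only sketch (the sample values below are illustrative stand-ins, not taken from the real clip):

```python
def clip_duration_seconds(audio):
    """Duration of a decoded clip: number of samples divided by samples per second."""
    return len(audio["array"]) / audio["sampling_rate"]


# Illustrative stand-in for a decoded `audio` dict: 96,000 samples at 48 kHz.
audio = {
    "path": "common_voice_et_18318995.mp3",
    "array": [0.0] * 96_000,
    "sampling_rate": 48_000,
}
assert clip_duration_seconds(audio) == 2.0  # a two-second clip
```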

### Data Fields

`client_id` (`string`): An ID identifying which client (voice) made the recording

`path` (`string`): The path to the audio file

`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
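
The cost asymmetry behind this advice can be illustrated without downloading anything. The sketch below uses a hypothetical stand-in class (not part of the `datasets` library) that mimics lazy, decode-on-access audio and counts how many decodes each access pattern triggers:

```python
class LazyAudioColumn:
    """Hypothetical stand-in for a lazily decoded audio column (illustration only)."""

    def __init__(self, paths):
        self.paths = paths
        self.decode_count = 0  # how many files have been decoded so far

    def _decode(self, path):
        self.decode_count += 1
        return {"path": path, "array": [0.0], "sampling_rate": 48000}

    def row(self, i):
        # like dataset[i]["audio"]: decodes only the one requested file
        return self._decode(self.paths[i])

    def full_column(self):
        # like dataset["audio"]: decodes every file before you can index
        return [self._decode(p) for p in self.paths]


col = LazyAudioColumn([f"clip_{i}.mp3" for i in range(1000)])
first = col.row(0)
assert col.decode_count == 1      # row-first access decoded a single file
everything = col.full_column()[0]
assert col.decode_count == 1001   # column-first access decoded all 1000 more
```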

`sentence` (`string`): The sentence the user was prompted to speak

`up_votes` (`int64`): How many upvotes the audio file has received from reviewers

`down_votes` (`int64`): How many downvotes the audio file has received from reviewers

`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)

`gender` (`string`): The gender of the speaker

`accent` (`string`): Accent of the speaker

`locale` (`string`): The locale of the speaker

`segment` (`string`): Usually an empty field

### Data Splits

The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.

The validated data has been reviewed and upvoted by reviewers, indicating that it is of high quality.

The invalidated data has been reviewed and downvoted by reviewers, indicating that it is of low quality.

The reported data has been flagged by users, for a variety of reasons.

The other data has not yet been reviewed.

The dev, test and train splits are drawn from the reviewed data that was deemed of high quality.

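The vote-based bucketing above can be sketched as a simple function. The thresholds here are assumptions for illustration only; the exact criteria are internal to the Common Voice platform:

```python
def bucket(up_votes, down_votes, min_votes=2):
    """Hypothetical vote-based bucketing; thresholds are assumed, not official."""
    if up_votes >= min_votes and up_votes > down_votes:
        return "validated"
    if down_votes >= min_votes and down_votes > up_votes:
        return "invalidated"
    return "other"  # not yet conclusively reviewed


assert bucket(2, 0) == "validated"
assert bucket(0, 2) == "invalidated"
assert bucket(1, 1) == "other"
```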
## Data Preprocessing Recommended by Hugging Face

The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.

Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.

In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.

```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_11_0", "en", use_auth_token=True)


def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription and transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        # (the emptiness check guards against indexing into an empty string)
        transcription = transcription + "."

    batch["sentence"] = transcription

    return batch


ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
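
The same normalization can be sanity-checked on plain strings, with no dataset download or authentication required. The sketch below is a standalone re-implementation of the snippet's string logic:

```python
def normalize(transcription):
    """Strip surrounding quotation marks and ensure terminal punctuation."""
    if transcription.startswith('"') and transcription.endswith('"'):
        transcription = transcription[1:-1]
    if transcription and transcription[-1] not in [".", "?", "!"]:
        transcription = transcription + "."
    return transcription


assert normalize('"the cat sat on the mat."') == "the cat sat on the mat."
assert normalize("the cat sat on the mat") == "the cat sat on the mat."
assert normalize("is the cat on the mat?") == "is the cat on the mat?"
```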

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)

### Citation Information

```
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
```