ssahir committed
Commit e48a570
1 Parent(s): 873721b

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +609 -28
README.md CHANGED
@@ -1,31 +1,612 @@
  ---
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: test
-     path: data/test-*
- dataset_info:
-   features:
-   - name: input_features
-     sequence:
-       sequence:
-         sequence: float32
-   - name: labels
-     sequence: int64
-   - name: input_length
-     dtype: float64
-   splits:
-   - name: train
-     num_bytes: 4714091936.0
-     num_examples: 4904
-   - name: test
-     num_bytes: 2126305304
-     num_examples: 2212
-   download_size: 1258101807
-   dataset_size: 6840397240.0
  ---
- # Dataset Card for "common_voice_13_0_dv_preprocessed"

- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ license:
+ - cc0-1.0
+ multilinguality:
+ - multilingual
+ size_categories:
+   ab:
+   - 10K<n<100K
+   ar:
+   - 100K<n<1M
+   as:
+   - 1K<n<10K
+   ast:
+   - 1K<n<10K
+   az:
+   - n<1K
+   ba:
+   - 100K<n<1M
+   bas:
+   - 1K<n<10K
+   be:
+   - 1M<n<10M
+   bg:
+   - 10K<n<100K
+   bn:
+   - 1M<n<10M
+   br:
+   - 10K<n<100K
+   ca:
+   - 1M<n<10M
+   ckb:
+   - 100K<n<1M
+   cnh:
+   - 1K<n<10K
+   cs:
+   - 100K<n<1M
+   cv:
+   - 10K<n<100K
+   cy:
+   - 100K<n<1M
+   da:
+   - 10K<n<100K
+   de:
+   - 100K<n<1M
+   dv:
+   - 10K<n<100K
+   dyu:
+   - n<1K
+   el:
+   - 10K<n<100K
+   en:
+   - 1M<n<10M
+   eo:
+   - 1M<n<10M
+   es:
+   - 1M<n<10M
+   et:
+   - 10K<n<100K
+   eu:
+   - 100K<n<1M
+   fa:
+   - 100K<n<1M
+   fi:
+   - 10K<n<100K
+   fr:
+   - 100K<n<1M
+   fy-NL:
+   - 100K<n<1M
+   ga-IE:
+   - 10K<n<100K
+   gl:
+   - 10K<n<100K
+   gn:
+   - 1K<n<10K
+   ha:
+   - 10K<n<100K
+   hi:
+   - 10K<n<100K
+   hsb:
+   - 1K<n<10K
+   hu:
+   - 10K<n<100K
+   hy-AM:
+   - 1K<n<10K
+   ia:
+   - 10K<n<100K
+   id:
+   - 10K<n<100K
+   ig:
+   - 1K<n<10K
+   is:
+   - n<1K
+   it:
+   - 100K<n<1M
+   ja:
+   - 100K<n<1M
+   ka:
+   - 10K<n<100K
+   kab:
+   - 100K<n<1M
+   kk:
+   - 1K<n<10K
+   kmr:
+   - 10K<n<100K
+   ko:
+   - 1K<n<10K
+   ky:
+   - 10K<n<100K
+   lg:
+   - 100K<n<1M
+   lo:
+   - n<1K
+   lt:
+   - 10K<n<100K
+   lv:
+   - 10K<n<100K
+   mdf:
+   - n<1K
+   mhr:
+   - 100K<n<1M
+   mk:
+   - n<1K
+   ml:
+   - 1K<n<10K
+   mn:
+   - 10K<n<100K
+   mr:
+   - 10K<n<100K
+   mrj:
+   - 10K<n<100K
+   mt:
+   - 10K<n<100K
+   myv:
+   - 1K<n<10K
+   nan-tw:
+   - 10K<n<100K
+   ne-NP:
+   - n<1K
+   nl:
+   - 10K<n<100K
+   nn-NO:
+   - n<1K
+   oc:
+   - 1K<n<10K
+   or:
+   - 1K<n<10K
+   pa-IN:
+   - 1K<n<10K
+   pl:
+   - 100K<n<1M
+   pt:
+   - 100K<n<1M
+   quy:
+   - n<1K
+   rm-sursilv:
+   - 1K<n<10K
+   rm-vallader:
+   - 1K<n<10K
+   ro:
+   - 10K<n<100K
+   ru:
+   - 100K<n<1M
+   rw:
+   - 1M<n<10M
+   sah:
+   - 1K<n<10K
+   sat:
+   - n<1K
+   sc:
+   - 1K<n<10K
+   sk:
+   - 10K<n<100K
+   skr:
+   - 1K<n<10K
+   sl:
+   - 10K<n<100K
+   sr:
+   - 1K<n<10K
+   sv-SE:
+   - 10K<n<100K
+   sw:
+   - 100K<n<1M
+   ta:
+   - 100K<n<1M
+   th:
+   - 100K<n<1M
+   ti:
+   - n<1K
+   tig:
+   - n<1K
+   tk:
+   - 1K<n<10K
+   tok:
+   - 10K<n<100K
+   tr:
+   - 10K<n<100K
+   tt:
+   - 10K<n<100K
+   tw:
+   - n<1K
+   ug:
+   - 10K<n<100K
+   uk:
+   - 10K<n<100K
+   ur:
+   - 100K<n<1M
+   uz:
+   - 100K<n<1M
+   vi:
+   - 10K<n<100K
+   vot:
+   - n<1K
+   yo:
+   - 1K<n<10K
+   yue:
+   - 10K<n<100K
+   zh-CN:
+   - 100K<n<1M
+   zh-HK:
+   - 100K<n<1M
+   zh-TW:
+   - 100K<n<1M
+ source_datasets:
+ - extended|common_voice
+ task_categories:
+ - automatic-speech-recognition
+ paperswithcode_id: common-voice
+ pretty_name: Common Voice Corpus 13.0
+ language_bcp47:
+ - ab
+ - ar
+ - as
+ - ast
+ - az
+ - ba
+ - bas
+ - be
+ - bg
+ - bn
+ - br
+ - ca
+ - ckb
+ - cnh
+ - cs
+ - cv
+ - cy
+ - da
+ - de
+ - dv
+ - dyu
+ - el
+ - en
+ - eo
+ - es
+ - et
+ - eu
+ - fa
+ - fi
+ - fr
+ - fy-NL
+ - ga-IE
+ - gl
+ - gn
+ - ha
+ - hi
+ - hsb
+ - hu
+ - hy-AM
+ - ia
+ - id
+ - ig
+ - is
+ - it
+ - ja
+ - ka
+ - kab
+ - kk
+ - kmr
+ - ko
+ - ky
+ - lg
+ - lo
+ - lt
+ - lv
+ - mdf
+ - mhr
+ - mk
+ - ml
+ - mn
+ - mr
+ - mrj
+ - mt
+ - myv
+ - nan-tw
+ - ne-NP
+ - nl
+ - nn-NO
+ - oc
+ - or
+ - pa-IN
+ - pl
+ - pt
+ - quy
+ - rm-sursilv
+ - rm-vallader
+ - ro
+ - ru
+ - rw
+ - sah
+ - sat
+ - sc
+ - sk
+ - skr
+ - sl
+ - sr
+ - sv-SE
+ - sw
+ - ta
+ - th
+ - ti
+ - tig
+ - tk
+ - tok
+ - tr
+ - tt
+ - tw
+ - ug
+ - uk
+ - ur
+ - uz
+ - vi
+ - vot
+ - yo
+ - yue
+ - zh-CN
+ - zh-HK
+ - zh-TW
+ extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
+   attempt to determine the identity of speakers in the Common Voice dataset.
  ---
 
+ # Dataset Card for Common Voice Corpus 13.0
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [How to use](#how-to-use)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://commonvoice.mozilla.org/en/datasets
+ - **Repository:** https://github.com/common-voice/common-voice
+ - **Paper:** https://arxiv.org/abs/1912.06670
+ - **Leaderboard:** https://paperswithcode.com/dataset/common-voice
+ - **Point of Contact:** [Vaibhav Srivastav](mailto:vaibhav@huggingface.co)
+
+ ### Dataset Summary
+
+ The Common Voice dataset consists of unique MP3 files, each paired with a corresponding text file.
+ Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
+ that can help improve the accuracy of speech recognition engines.
+
+ The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
+ Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
+
+ ### Supported Tasks and Leaderboards
+
+ The results for models trained on the Common Voice datasets are available via the
+ [🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
+
+ ### Languages
+
+ ```
+ Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Upper Sorbian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
+ ```
+
+ ## How to use
+
+ The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
+
+ For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
+ ```python
+ from datasets import load_dataset
+
+ cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
+ ```
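+
+ Note that the corpus is gated (see the `extra_gated_prompt` in the metadata above), so downloads may need to be authenticated. A minimal sketch, assuming you have accepted the terms on the Hub and logged in with `huggingface-cli login`:
+
+ ```python
+ from datasets import load_dataset
+
+ # use_auth_token=True forwards your stored Hub token with the download request
+ cv_13 = load_dataset(
+     "mozilla-foundation/common_voice_13_0",
+     "hi",
+     split="train",
+     use_auth_token=True,
+ )
+ ```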
+
+ Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
+ ```python
+ from datasets import load_dataset
+
+ cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
+
+ print(next(iter(cv_13)))
+ ```
+
+ *Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
+
+ ### Local
+
+ ```python
+ from datasets import load_dataset
+ from torch.utils.data import DataLoader
+ from torch.utils.data.sampler import BatchSampler, RandomSampler
+
+ cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
+ batch_sampler = BatchSampler(RandomSampler(cv_13), batch_size=32, drop_last=False)
+ dataloader = DataLoader(cv_13, batch_sampler=batch_sampler)
+ ```
+
+ ### Streaming
+
+ ```python
+ from datasets import load_dataset
+ from torch.utils.data import DataLoader
+
+ cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
+ dataloader = DataLoader(cv_13, batch_size=32)
+ ```
+
+ To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
+
+ ### Example scripts
+
+ Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point comprises the `path` to the audio file and its `sentence`.
+ Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
+
+ ```python
+ {
+   'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
+   'path': 'et/clips/common_voice_et_18318995.mp3',
+   'audio': {
+     'path': 'et/clips/common_voice_et_18318995.mp3',
+     'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
+     'sampling_rate': 48000
+   },
+   'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
+   'up_votes': 2,
+   'down_votes': 0,
+   'age': 'twenties',
+   'gender': 'male',
+   'accent': '',
+   'locale': 'et',
+   'segment': ''
+ }
+ ```
+
+ ### Data Fields
+
+ `client_id` (`string`): An id for which client (voice) made the recording
+
+ `path` (`string`): The path to the audio file
+
+ `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
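+
+ If your model expects a fixed sampling rate, `datasets` can resample on the fly when you cast the audio column. A minimal sketch (the 16 kHz target is an assumption, chosen because Wav2Vec2- and Whisper-style models commonly expect it):
+
+ ```python
+ from datasets import Audio, load_dataset
+
+ cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
+
+ # decode clips at 16 kHz instead of the native 48 kHz; resampling happens on access
+ cv_13 = cv_13.cast_column("audio", Audio(sampling_rate=16_000))
+
+ print(cv_13[0]["audio"]["sampling_rate"])  # 16000
+ ```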
+
+ `sentence` (`string`): The sentence the user was prompted to speak
+
+ `up_votes` (`int64`): How many upvotes the audio file has received from reviewers
+
+ `down_votes` (`int64`): How many downvotes the audio file has received from reviewers
+
+ `age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
+
+ `gender` (`string`): The gender of the speaker
+
+ `accent` (`string`): Accent of the speaker
+
+ `locale` (`string`): The locale of the speaker
+
+ `segment` (`string`): Usually an empty field
+
+ ### Data Splits
+
+ The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
+
+ The validated data is data that has been reviewed and received enough upvotes to be considered of high quality.
+
+ The invalidated data is data that has been reviewed and received enough downvotes to be considered of low quality.
+
+ The reported data is data that has been reported by users, for a variety of reasons.
+
+ The other data is data that has not yet been reviewed.
+
+ The dev, test and train portions are all data that has been reviewed and deemed of high quality, then split three ways.
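+
+ Each portion can be requested by name via the `split` argument. A minimal sketch (split names assumed to follow earlier Common Voice releases, where the reviewed data is exposed as `train`, `validation` and `test`):
+
+ ```python
+ from datasets import load_dataset
+
+ # reviewed, high-quality data, split three ways
+ train = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
+ dev = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="validation")
+ test = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="test")
+
+ # clips that have not yet been reviewed
+ other = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="other")
+ ```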
+
+ ## Data Preprocessing Recommended by Hugging Face
+
+ The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
+
+ Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
+
+ In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("mozilla-foundation/common_voice_13_0", "en", use_auth_token=True)
+
+ def prepare_dataset(batch):
+     """Function to preprocess the dataset with the .map method"""
+     transcription = batch["sentence"]
+
+     if transcription.startswith('"') and transcription.endswith('"'):
+         # we can remove trailing quotation marks as they do not affect the transcription
+         transcription = transcription[1:-1]
+
+     if transcription[-1] not in [".", "?", "!"]:
+         # append a full-stop to sentences that do not end in punctuation
+         transcription = transcription + "."
+
+     batch["sentence"] = transcription
+
+     return batch
+
+ ds = ds.map(prepare_dataset, desc="preprocess dataset")
+ ```
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ The dataset consists of voice recordings from people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The dataset consists of voice recordings from people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
+
+ ### Citation Information
+
+ ```
+ @inproceedings{commonvoice:2020,
+   author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
+   title = {Common Voice: A Massively-Multilingual Speech Corpus},
+   booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
+   pages = {4211--4215},
+   year = 2020
+ }
+ ```