ylacombe committed · Commit 326dcbd · Parent: c22a1da

Update README.md

Files changed (1): README.md (+68 -32)

README.md CHANGED
- es
- pt
- pl
- en
license:
- cc-by-4.0
multilinguality:
 
- original
task_categories:
- automatic-speech-recognition
- text-to-speech
- text-to-audio
paperswithcode_id: multilingual-librispeech
pretty_name: MultiLingual LibriSpeech
dataset_info:
 
    dtype: string
  splits:
  - name: dev
    num_bytes: 199959986
    num_examples: 3095
  - name: test
    num_bytes: 199298575
    num_examples: 3075
  - name: train
    num_bytes: 23931679031
    num_examples: 374287
  - name: 9_hours
    num_bytes: 139884664.668
    num_examples: 2153
  - name: 1_hours
    num_bytes: 15462181
    num_examples: 234
  download_size: 24376256629
  dataset_size: 24486284437.668
 
    num_bytes: 142796680.609
    num_examples: 2167
  - name: 1_hours
    num_bytes: 15675831
    num_examples: 241
  download_size: 17381581776
  dataset_size: 17459684482.927002
 
    num_bytes: 225756069.096
    num_examples: 3394
  - name: train
    num_bytes: 31050881388
    num_examples: 469942
  - name: 9_hours
    num_bytes: 142777983.118
    num_examples: 2194
  - name: 1_hours
    num_bytes: 15714704
    num_examples: 241
  download_size: 31526161821
  dataset_size: 31659423725.516
 
    num_bytes: 83216752.046
    num_examples: 1262
  - name: train
    num_bytes: 3896742625
    num_examples: 59623
  - name: 9_hours
    num_bytes: 141671904.428
    num_examples: 2173
  - name: 1_hours
    num_bytes: 15560398
    num_examples: 240
  download_size: 4200633596
  dataset_size: 4218799275.522
 
    dtype: string
  splits:
  - name: dev
    num_bytes: 32746725
    num_examples: 512
  - name: test
    num_bytes: 33735044
    num_examples: 520
  - name: train
    num_bytes: 1638889846
    num_examples: 25043
  - name: 9_hours
    num_bytes: 142005461
    num_examples: 2173
  - name: 1_hours
    num_bytes: 15681216
    num_examples: 238
  download_size: 1855342312
  dataset_size: 1863058292
- config_name: portuguese
  features:
  - name: audio
 
    dtype: string
  splits:
  - name: dev
    num_bytes: 57533473
    num_examples: 826
  - name: test
    num_bytes: 59141979
    num_examples: 871
  - name: train
    num_bytes: 2518553713.946

    num_bytes: 141641902.42
    num_examples: 2116
  - name: 1_hours
    num_bytes: 15697139
    num_examples: 236
  download_size: 2780836500
  dataset_size: 2792568207.366
 
    num_bytes: 158526899.32
    num_examples: 2385
  - name: train
    num_bytes: 14562584188
    num_examples: 220701
  - name: 9_hours
    num_bytes: 142473624.48
    num_examples: 2110
  - name: 1_hours
    num_bytes: 15702048
    num_examples: 233
  download_size: 14971394533
  dataset_size: 15037091662.944
 
The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94) to make it easier to stream.

The MLS dataset is a large multilingual corpus suitable for speech research. It is derived from read audiobooks from LibriVox and covers 8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese, and Polish. It includes about 44.5K hours of English and a total of about 6K hours for the other languages.

### Supported Tasks and Leaderboards

- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard at https://paperswithcode.com/dataset/multilingual-librispeech, which ranks models by their WER.
- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).

### Languages

 
For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
```python
from datasets import load_dataset
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
```

Using the `datasets` library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
print(next(iter(mls)))
```

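A streamed split behaves like an ordinary Python iterable, so standard-library tools such as `itertools.islice` let you peek at the first few samples without materializing anything. A minimal, library-agnostic sketch; the `fake_stream` generator below is a stand-in for a real streamed split, which would require a download:

```python
from itertools import islice

def peek(stream, n=3):
    """Collect the first n samples from any (possibly unbounded) iterable."""
    return list(islice(stream, n))

# Stand-in for a streamed split: an unbounded generator of sample dicts.
fake_stream = ({"id": i, "transcript": f"utterance {i}"} for i in range(10**9))

first = peek(fake_stream, 3)
print([s["id"] for s in first])  # -> [0, 1, 2]
```

The same `peek` call works unchanged on the real `mls` iterable from the snippet above.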
Local:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)
```
 
Streaming:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
dataloader = DataLoader(mls, batch_size=32)
```
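Conceptually, what `DataLoader(mls, batch_size=32)` does over an iterable dataset is group consecutive samples into fixed-size batches. A pure-Python sketch of that grouping (not torch's actual implementation, and omitting its collation of dicts into tensors):

```python
def batched(iterable, batch_size):
    """Yield lists of up to batch_size consecutive items from iterable."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final, possibly smaller batch (drop_last=False behavior)
        yield batch

print(list(batched(range(10), 4)))  # -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```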
 
- id: unique id of the data sample.

- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.

- chapter_id: id of the audiobook chapter which includes the transcription.
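To make the schema concrete, here is the shape of a typical data point as a plain dict. Every value below is invented for illustration, and the exact field set is an assumption based on the descriptions above (a real sample's `audio.array` is a decoded waveform and the ids follow the dataset's own numbering):

```python
# Illustrative sample only: all values are made up to show the field layout.
sample = {
    "file": "example.opus",           # audio file name (hypothetical)
    "audio": {
        "path": "example.opus",
        "array": [0.0, 0.01, -0.02],  # decoded waveform as floats (truncated)
        "sampling_rate": 16000,
    },
    "text": "ein beispielsatz",       # transcription (hypothetical)
    "id": "12345_67890_000000",       # unique sample id (hypothetical format)
    "speaker_id": 12345,
    "chapter_id": 67890,
}
print(sorted(sample))
```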

### Data Splits

| Number of samples | Train | Train.9h | Train.1h | Dev | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| german | 469942 | 2194 | 241 | 3469 | 3394 |
| dutch | 374287 | 2153 | 234 | 3095 | 3075 |
| ... | ... | ... | ... | ... | ... |
| portuguese | 37533 | 2116 | 236 | 826 | 871 |
| polish | 25043 | 2173 | 238 | 512 | 520 |

 
 
## Dataset Creation

### Curation Rationale
 
}
```

### Data Statistics

| Duration (h) | Train | Dev | Test |
|--------------|-----------|-------|-------|
| English | 44,659.74 | 15.75 | 15.55 |
| German | 1,966.51 | 14.28 | 14.29 |
| Dutch | 1,554.24 | 12.76 | 12.76 |
| French | 1,076.58 | 10.07 | 10.07 |
| Spanish | 917.68 | 9.99 | 10.00 |
| Italian | 247.38 | 5.18 | 5.27 |
| Portuguese | 160.96 | 3.64 | 3.74 |
| Polish | 103.65 | 2.08 | 2.14 |

| # Speakers | Train | | Dev | | Test | |
|------------|-------|------|-----|----|------|----|
| Gender | M | F | M | F | M | F |
| English | 2742 | 2748 | 21 | 21 | 21 | 21 |
| German | 81 | 95 | 15 | 15 | 15 | 15 |
| Dutch | 9 | 31 | 3 | 3 | 3 | 3 |
| French | 62 | 80 | 9 | 9 | 9 | 9 |
| Spanish | 36 | 50 | 10 | 10 | 10 | 10 |
| Italian | 22 | 43 | 5 | 5 | 5 | 5 |
| Portuguese | 26 | 16 | 5 | 5 | 5 | 5 |
| Polish | 6 | 5 | 2 | 2 | 2 | 2 |

| # Hours / Gender | Dev | | Test | |
|------------------|------|------|------|------|
| Gender | M | F | M | F |
| English | 7.76 | 7.99 | 7.62 | 7.93 |
| German | 7.06 | 7.22 | 7.00 | 7.29 |
| Dutch | 6.44 | 6.32 | 6.72 | 6.04 |
| French | 5.13 | 4.94 | 5.04 | 5.02 |
| Spanish | 4.91 | 5.08 | 4.78 | 5.23 |
| Italian | 2.50 | 2.68 | 2.38 | 2.90 |
| Portuguese | 1.84 | 1.81 | 1.83 | 1.90 |
| Polish | 1.12 | 0.95 | 1.09 | 1.05 |
### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.