---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-3.0
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1T<n
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: LargeScaleASR
tags:
- robust-speech-recognition
- noisy-speech-recognition
- speech-recognition
configs:
- config_name: large
  features:
  - name: ID
    dtype: string
  - name: duration
    dtype: float32
  - name: wav
    dtype:
      audio:
        sample_rate: 16000
        decode: False
  - name: spk_id
    dtype: string
  - name: sex
    dtype: string
  - name: text
    dtype: string
  data_files:
  - split: train
    path: large/train*
  - split: dev
    path: dev/dev*
  - split: test
    path: test/test*
- config_name: small
  features:
  - name: ID
    dtype: string
  - name: duration
    dtype: float32
  - name: wav
    dtype:
      audio:
        sample_rate: 16000
        decode: False
  - name: spk_id
    dtype: string
  - name: sex
    dtype: string
  - name: text
    dtype: string
  data_files:
  - split: train
    path: small/train*
  - split: dev
    path: dev/dev*
  - split: test
    path: test/test*
- config_name: medium
  features:
  - name: ID
    dtype: string
  - name: duration
    dtype: float32
  - name: wav
    dtype:
      audio:
        sample_rate: 16000
        decode: False
  - name: spk_id
    dtype: string
  - name: sex
    dtype: string
  - name: text
    dtype: string
  data_files:
  - split: train
    path: medium/train*
  - split: dev
    path: dev/dev*
  - split: test
    path: test/test*
---

# LargeScaleASR: 25,000 hours of transcribed and heterogeneous English speech recognition data for research and commercial use

The dataset is made of 6 subsets:
1. **large** contains 25,000 hours of read / spontaneous and clean / noisy transcribed speech.
2. **medium** contains 2,500 hours of read / spontaneous and clean / noisy transcribed speech.
3. **small** contains 250 hours of read / spontaneous and clean / noisy transcribed speech.
4. **clean** contains 13,000 hours of read and clean / less noisy transcribed speech.
5. **dev** contains 15 hours (more details in the next section).
6. **test** contains 21 hours (more details in the next section).

The large split requires 4 TB of storage (including HuggingFace extraction); the shards alone are 2 TB.

Example:

```python
from datasets import load_dataset

# Choose one of the three training configurations: 'small', 'medium' or 'large'.
ds = load_dataset('speechbrain/LargeScaleASR', 'small', num_proc=6)
print(ds['train'])

# With decode=False, the 'wav' column holds the raw file bytes; decode them manually.
from io import BytesIO
import torchaudio
wav_tensor, sample_rate = torchaudio.load(BytesIO(ds["train"][0]["wav"]["bytes"]))
```
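If storing the full large subset locally is not practical, the standard Hugging Face `datasets` streaming mode should also work; the snippet below is only a sketch and has not been validated against this specific repository.

```python
from datasets import load_dataset

# Stream samples instead of materialising the whole split on disk.
ds = load_dataset('speechbrain/LargeScaleASR', 'large', streaming=True)
first_sample = next(iter(ds['train']))
print(first_sample['ID'], first_sample['duration'])
```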

## Data description (the following information is copied from the SpeechBrain data preparation README)

LargeScaleASR is a mix of six existing datasets with permissive licences (Librispeech contributes only to the dev and test sets). The way they are combined is described in the following table:

| Dataset | Hours taken (large/medium/small/dev/test) | License |
| ------------- | ------------- | ------------- |
| VoxPopuli | 550/500/50/5/7 | CC0 |
| LibriHeavy | 11,000/500/50/0/0 | CC BY 4.0 |
| Librispeech (dev-/test-other) | 0/0/0/5/7 | CC BY 4.0 |
| YODAS | 6,100/500/50/0/0 | CC BY 3.0 |
| People's Speech | 5,900/500/50/1.5/1.5 | CC BY 4.0 |
| CommonVoice 18.0 | 1,660/500/50/5/7 | CC0 |

*For the dev and test splits, only data from the corresponding dev and test sets of each dataset is used (i.e. nothing is taken from the train sets, except for YODAS). For YODAS we extract data from the en003 split and manually verify the audio/transcription to form the dev/test partitions.*

More information on each dataset:

- [**VoxPopuli**](https://arxiv.org/abs/2101.00390): we follow the standard SpeechBrain data preparation.
- [**LibriHeavy**](https://arxiv.org/html/2309.08105v2): samples are randomly selected, but we follow the standard data preparation.
- [**Librispeech**](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf): Librispeech is only used for the validation and test sets of LargeScaleASR. More precisely, we extract samples from *dev-other* and *test-other* as they are the most challenging subsets.
- [**YODAS**](https://arxiv.org/abs/2406.00899): The YODAS dataset is unfortunately unreliable. Audio is crawled from YouTube, and a large fraction of it (almost half) is not in the expected language. We used a [SpeechBrain language ID model](https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa) to make sure that we only integrate samples where people speak in English (a minimal sketch of this filtering step is shown after this list). Transcriptions have also been heavily normalised (see next section). We arbitrarily decided to use the *en000* and *en001* subsets of YODAS. Transcriptions may still be a bit noisy, which is why this dataset is excluded from the dev and test sets of LargeScaleASR.
- [**People's Speech**](https://huggingface.co/datasets/MLCommons/peoples_speech): Only the *clean* subset of this dataset is used in LargeScaleASR, as its transcriptions already contain errors. This is also why this dataset is excluded from the dev and test sets of LargeScaleASR.
- [**CommonVoice 18.0**](https://commonvoice.mozilla.org/en): We removed a few speakers that had too many samples (above 9,000 samples) to avoid any bias. Aside from this, we only used samples coming from the *validated* csv to ensure the best possible transcription quality. Text was also heavily normalised (see next section).

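As an illustration of the YODAS language filtering mentioned above, the sketch below keeps a sample only when the VoxLingua107 language ID model predicts English. It is a minimal example assuming the `EncoderClassifier` interface shown on the model card; the thresholds and batching used for the actual preparation are not documented here.

```python
from speechbrain.inference.classifiers import EncoderClassifier

# Pretrained spoken language ID model (107 languages).
lang_id = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-voxlingua107-ecapa",
    savedir="pretrained_models/lang-id-voxlingua107-ecapa",
)

def is_english(wav_path: str) -> bool:
    # classify_file returns (posteriors, score, index, text_label),
    # where the label looks like "en: English".
    _, _, _, text_label = lang_id.classify_file(wav_path)
    return text_label[0].startswith("en")
```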
### Text and audio normalisation

Some of the above datasets, in particular People's Speech, YODAS and CommonVoice, come with very little normalisation. This is an important issue as the pronunciation is then either incorrect or uncertain. We normalised all the sentences to ensure a character set containing only the standard 26 letters of the Latin alphabet plus the apostrophe ('). Numerical values were converted to text using the [Nemo text processing WFST tool](https://github.com/NVIDIA/NeMo-text-processing). The rest of the text was filtered to remove symbols, YouTube annotations like "applause" and many other spurious elements. Sentences that were too noisy (e.g. containing too many symbols) were simply removed. The text normalisation code can be found in *speechbrain.utils.text_normalisation*.

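The authoritative rules live in *speechbrain.utils.text_normalisation*; the snippet below is only a rough, standalone approximation of the character filtering described above (the casing and exact symbol handling are assumptions), and it does not cover number expansion, which the pipeline delegates to NeMo text processing.

```python
import re

def normalise_text(text: str) -> str:
    # Keep only the 26 Latin letters, the apostrophe and spaces (casing is an assumption).
    text = text.upper()
    text = re.sub(r"[^A-Z' ]+", " ", text)
    # Collapse the repeated whitespace left over by the removals.
    return re.sub(r"\s+", " ", text).strip()

print(normalise_text("Hello... it's £5, OK?"))  # -> "HELLO IT'S OK"
```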
Audio is embedded as raw bytes that can be decoded with soundfile. Long recordings were chunked into smaller audio files based on the start and stop times given in the manifests of the original datasets (this is necessary for HuggingFace). Language ID with a [SpeechBrain language ID model](https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa) was performed on YODAS.

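For completeness, decoding the raw bytes with soundfile looks like the following; this sketch assumes the `wav` column exposes the file content under a `bytes` key, as in the loading example above.

```python
import io

import soundfile as sf
from datasets import load_dataset

ds = load_dataset("speechbrain/LargeScaleASR", "small", num_proc=6)
sample = ds["train"][0]

# decode=False means the audio is returned as raw bytes rather than a decoded array.
audio, sample_rate = sf.read(io.BytesIO(sample["wav"]["bytes"]))
print(audio.shape, sample_rate, sample["text"])
```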
#### Referencing SpeechBrain

```
@article{speechbrainV1,
  author  = {Mirco Ravanelli and Titouan Parcollet and Adel Moumen and Sylvain de Langen and Cem Subakan and Peter Plantinga and Yingzhi Wang and Pooneh Mousavi and Luca Della Libera and Artem Ploujnikov and Francesco Paissan and Davide Borra and Salah Zaiem and Zeyu Zhao and Shucong Zhang and Georgios Karakasidis and Sung-Lin Yeh and Pierre Champion and Aku Rouhe and Rudolf Braun and Florian Mai and Juan Zuluaga-Gomez and Seyed Mahed Mousavi and Andreas Nautsch and Ha Nguyen and Xuechen Liu and Sangeet Sagar and Jarod Duret and Salima Mdhaffar and Ga{{\"e}}lle Laperri{{\`e}}re and Mickael Rouvier and Renato De Mori and Yannick Est{{\`e}}ve},
  title   = {Open-Source Conversational AI with SpeechBrain 1.0},
  journal = {Journal of Machine Learning Research},
  year    = {2024},
  volume  = {25},
  number  = {333},
  pages   = {1--11},
  url     = {http://jmlr.org/papers/v25/24-0991.html}
}
```

#### About SpeechBrain

SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.

Website: https://speechbrain.github.io/

GitHub: https://github.com/speechbrain/speechbrain