esc-bencher committed f33c72a (1 parent: 3a31e45): Update README.md

Files changed (1): README.md (+38 -35)

README.md CHANGED
@@ -32,6 +32,16 @@ tags:
task_categories:
- automatic-speech-recognition
task_ids: []
+extra_gated_prompt: |-
+  Three of the ESC datasets have specific terms of usage that must be agreed to before using the data.
+  To do so, fill in the access forms on the specific datasets' pages:
+  * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
+  * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
+  * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
+extra_gated_fields:
+  I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset: checkbox
+  I hereby confirm that I have accepted the terms of usage on the GigaSpeech page: checkbox
+  I hereby confirm that I have accepted the terms of usage on the SPGISpeech page: checkbox
---

All eight datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
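Concretely, that one-liner takes the form below (a minimal sketch; the `librispeech` configuration name and `split` value follow the examples later in this file):

```python
from datasets import load_dataset

# One call per dataset: name the ESC configuration and, optionally, the split.
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="train")
```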
@@ -48,15 +58,8 @@ librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="t

- `split="train"`: the split. Set this to one of train/validation/test to generate a specific split. Omit the `split` argument to generate all splits for a dataset.

-The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
-
-For three datasets of the benchmark (GigaSpeech, SPGISpeech and LibriSpeech) we provide different configurations.
-You can load a specific configuration by passing it to the `subconfig` parameter:
-
-```python
-librispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", subconfig="s", split="train")
-```
-If you omit this parameter, the default configuration will be downloaded.
+The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.

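For the three datasets that ship multiple configurations, `subconfig` only changes how much training data is prepared. A minimal sketch of the two call styles (the subset and split names are taken from the LibriSpeech section below):

```python
from datasets import load_dataset

# Omitting `subconfig` prepares the default configuration of the dataset.
librispeech_full = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="train")

# Passing `subconfig` restricts preparation to one training subset,
# here the 100 h 'clean' LibriSpeech data.
librispeech_100h = load_dataset(
    "esc-benchmark/esc-datasets", "librispeech", subconfig="clean.100", split="train"
)
```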
  ## Dataset Information
@@ -100,7 +103,13 @@ Note that when accessing the audio column: `dataset[0]["audio"]` the audio file
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to its spelled-out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required for use in training/evaluation scripts.

-Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esc-benchmark/esc for scoring!
+Transcriptions are provided for the training and validation splits. Transcriptions are **not** provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esc-benchmark/esc for scoring.
+
+### Access
+All eight of the datasets in ESC are accessible, and licensing information is freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
+* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
+* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
+* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech

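Since audio is decoded on access and the transcriptions come error-corrected, a loaded example can be inspected or consumed directly. A minimal sketch (the transcription column's exact name is not stated above, so the keys are printed rather than assumed):

```python
from datasets import load_dataset

librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="train")

sample = librispeech[0]
print(sample.keys())                  # audio plus the transcription column(s)

audio = sample["audio"]               # the audio file is decoded on this access
print(audio["sampling_rate"], audio["array"].shape)
```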
  ## LibriSpeech
 
@@ -110,8 +119,6 @@ Example Usage:

```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech")
-librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="train")
-librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="validation.clean")
```

Train/validation splits:
@@ -123,22 +130,22 @@ Test splits:
- `test.clean`
- `test.other`

-Also available are subsets of the train split:
+Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
+```python
+librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", subconfig="clean.100")
+```
+
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset

-```python
-librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", subconfig="clean.100", split="train")
-```
-
## Common Voice
Common Voice is a series of crowd-sourced, open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset contains approximately 1,400 hours of audio data from speakers of various nationalities and accents under different recording conditions. It is licensed under CC0-1.0.

Example usage:

```python
-common_voice = load_dataset("esc-benchmark/esc-datasets", "common_voice")
+common_voice = load_dataset("esc-benchmark/esc-datasets", "common_voice", use_auth_token=True)
```

Training/validation splits:
@@ -164,7 +171,6 @@ Training/validation splits:
Test splits:
- `test`

-
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.

@@ -187,52 +193,49 @@ GigaSpeech is a multi-domain English speech recognition corpus created from audi
Example usage:

```python
-gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech")
+gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", use_auth_token=True)
```

Training/validation splits:
-- `train` (`l` subset)
+- `train` (`l` subset of training data, 2,500 h)
- `validation`

Test splits:
- `test`

-Also available are subsets of the train split:
+Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
+```python
+gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
+```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)

-You can load GigaSpeech with a specific train subset using the `subconfig` parameter:
-```python
-gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", subconfig="xs")
-```
-
-
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.

+Loading the dataset requires authorization.
+
Example usage:

```python
-spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech")
+spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", use_auth_token=True)
```

Training/validation splits:
-- `train` (`l` subset)
+- `train` (`l` subset of training data, ~5,000 h)
- `validation`

Test splits:
- `test`

-Also available are subsets of the train split:
-- `s`: small subset of training data (~200 h)
-- `m`: medium subset of training data (~1,000 h)
-
-You can load GigaSpeech with a specific train subset using the `subconfig` parameter:
+Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
-gigaspeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", subconfig="s")
+spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
+- `s`: small subset of training data (~200 h)
+- `m`: medium subset of training data (~1,000 h)

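Generating the required test predictions for a gated dataset ties the access and evaluation pieces together. A hypothetical end-to-end sketch (the ASR checkpoint is a placeholder, and the exact upload format expected by the ESC Space is not described here, so only the prediction loop is shown):

```python
from datasets import load_dataset
from transformers import pipeline

# Assumes the SPGISpeech terms have been accepted and `huggingface-cli login`
# has been run, so that `use_auth_token=True` can find a cached token.
test_set = load_dataset(
    "esc-benchmark/esc-datasets", "spgispeech", split="test", use_auth_token=True
)

# Placeholder checkpoint: any automatic-speech-recognition model can stand in here.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

predictions = []
for sample in test_set:
    audio = sample["audio"]
    output = asr({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]})
    predictions.append(output["text"])

# Upload the collected predictions to https://huggingface.co/spaces/esc-benchmark/esc for scoring.
```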
 
  ## Earnings-22
 