remg1997 committed
Commit 8d2a1a7
Parent(s): 90c311e

Upload script and readme

Files changed (2)
  1. README.md +194 -0
  2. peoples_speech.py +233 -0
README.md ADDED
@@ -0,0 +1,194 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ - machine-generated
+ language_creators:
+ - crowdsourced
+ - machine-generated
+ language:
+ - en
+ license:
+ - cc-by-2.0
+ - cc-by-2.5
+ - cc-by-3.0
+ - cc-by-4.0
+ - cc-by-sa-3.0
+ - cc-by-sa-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: People's Speech
+ size_categories:
+ - 1T<n
+ source_datasets:
+ - original
+ task_categories:
+ - automatic-speech-recognition
+ task_ids:
+ - speech-recognition
+ - robust-speech-recognition
+ - noisy-speech-recognition
+ ---
+
+ # Dataset Card for People's Speech
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://mlcommons.org/en/peoples-speech/
+ - **Repository:** https://github.com/mlcommons/peoples-speech
+ - **Paper:** https://arxiv.org/abs/2111.09344
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [datasets@mlcommons.org](mailto:datasets@mlcommons.org)
+
+ ### Dataset Summary
+
+ The People's Speech Dataset is among the world's largest English speech recognition corpora licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes over 30,000 hours of transcribed English speech from a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available with a permissive license.
+
+ ### Supported Tasks and Leaderboards
+
+ [Needs More Information]
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ {
+     "id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
+     "audio": {
+         "path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
+         "array": array([-6.10351562e-05, ...]),
+         "sampling_rate": 16000
+     },
+     "duration_ms": 14490,
+     "text": "contends that the suspension clause requires a [...]"
+ }
+
+ ### Data Fields
+
+ {
+     "id": datasets.Value("string"),
+     "audio": datasets.Audio(sampling_rate=16_000),
+     "duration_ms": datasets.Value("int32"),
+     "text": datasets.Value("string"),
+ }
+
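+ A quick sketch of accessing a decoded example (this assumes a split already loaded as `ds`; see the loading sketch under [Data Splits](#data-splits) below):
+
+ ```python
+ sample = next(iter(ds))
+ audio = sample["audio"]  # decoded on access: {"path": ..., "array": ..., "sampling_rate": 16000}
+ # the waveform length should roughly match the duration_ms metadata
+ print(len(audio["array"]) / audio["sampling_rate"], "s vs", sample["duration_ms"] / 1000, "s")
+ ```
+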
+ ### Data Splits
+
+ We provide the following configurations for the dataset: `clean`, `dirty`, `clean_sa`, `dirty_sa`, and `microset`. The `microset` configuration exposes only a `train` split; the remaining configurations also expose `validation` and `test` splits.
+
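+ A minimal loading sketch (the repository id `MLCommons/peoples_speech` is taken from the loading script's download URLs):
+
+ ```python
+ from datasets import load_dataset
+
+ # stream the small "microset" demo configuration so nothing is downloaded up front
+ ds = load_dataset("MLCommons/peoples_speech", "microset", split="train", streaming=True)
+ ```
+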
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ See our [paper](https://arxiv.org/abs/2111.09344).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ Data was downloaded via the archive.org API. No data inference was done.
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ No manual annotation is done. We download only source audio that already has transcripts.
+
+ #### Who are the annotators?
+
+ For the test and dev sets, we paid native speakers of American English to produce the transcriptions. We do not know the identities of the transcriptionists for data in the training set; we have noticed that some training-set transcriptions are likely the output of automatic speech recognition systems.
+
+ ### Personal and Sensitive Information
+
+ Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is reasonable to assume that the individuals involved were aware of their public nature.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The dataset could be used for speech synthesis. However, this requires careful cleaning, as background noise is not tolerable for speech synthesis.
+
+ The dataset could also be used for keyword spotting tasks. In particular, this is a good use case for the non-English audio in the dataset.
+
+ Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues, such as speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that could come from using this dataset at this time.
+
+ ### Discussion of Biases
+
+ Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there.
+
+ Almost all of our data is American-accented English.
+
+ ### Other Known Limitations
+
+ As of version 1.0, a portion of the data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript but not the audio, or some words appear in the audio but not the transcript. We are working on fixing this.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ We provide CC-BY and CC-BY-SA subsets of the dataset.
+
+ ### Citation Information
+
+ Please cite:
+
+ ```
+ @article{DBLP:journals/corr/abs-2111-09344,
+   author     = {Daniel Galvez and
+                 Greg Diamos and
+                 Juan Ciro and
+                 Juan Felipe Cer{\'{o}}n and
+                 Keith Achorn and
+                 Anjali Gopi and
+                 David Kanter and
+                 Maximilian Lam and
+                 Mark Mazumder and
+                 Vijay Janapa Reddi},
+   title      = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition
+                 Dataset for Commercial Usage},
+   journal    = {CoRR},
+   volume     = {abs/2111.09344},
+   year       = {2021},
+   url        = {https://arxiv.org/abs/2111.09344},
+   eprinttype = {arXiv},
+   eprint     = {2111.09344},
+   timestamp  = {Mon, 22 Nov 2021 16:44:07 +0100},
+   biburl     = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
+   bibsource  = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
peoples_speech.py ADDED
@@ -0,0 +1,233 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ import json
+ import os
+
+ import datasets
+ from datasets.tasks import AutomaticSpeechRecognition
+ from tqdm.auto import tqdm
+
+
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @article{DBLP:journals/corr/abs-2111-09344,
+   author     = {Daniel Galvez and
+                 Greg Diamos and
+                 Juan Ciro and
+                 Juan Felipe Ceron and
+                 Keith Achorn and
+                 Anjali Gopi and
+                 David Kanter and
+                 Maximilian Lam and
+                 Mark Mazumder and
+                 Vijay Janapa Reddi},
+   title      = {The People's Speech: A Large-Scale Diverse English Speech Recognition
+                 Dataset for Commercial Usage},
+   journal    = {CoRR},
+   volume     = {abs/2111.09344},
+   year       = {2021},
+   url        = {https://arxiv.org/abs/2111.09344},
+   eprinttype = {arXiv},
+   eprint     = {2111.09344},
+   timestamp  = {Mon, 22 Nov 2021 16:44:07 +0100},
+   biburl     = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
+   bibsource  = {dblp computer science bibliography, https://dblp.org}
+ }
+ """
+
+ # You can copy an official description
+ _DESCRIPTION = """\
+ The People's Speech is a free-to-download 30,000-hour and growing supervised
+ conversational English speech recognition dataset licensed for academic and
+ commercial usage under CC-BY-SA (with a CC-BY subset).
+ """
+
+ _HOMEPAGE = "https://mlcommons.org/en/peoples-speech/"
+
+ _LICENSE = [
+     "cc-by-2.0", "cc-by-2.5", "cc-by-3.0", "cc-by-4.0", "cc-by-sa-2.5",
+     "cc-by-sa-3.0", "cc-by-sa-4.0"
+ ]
+
+ _BASE_URL = "https://huggingface.co/datasets/MLCommons/peoples_speech/resolve/main/"
+
+ # URL template for the audio archives inside the dataset repo
+ _DATA_URL = _BASE_URL + "{split}/{config}/{config}_{archive_id:06d}.tar"
+
+ # URL template for the file containing the number of audio archives per split/config
+ _N_FILES_URL = _BASE_URL + "{split}/{config}/n_files.txt"
+
+ # URL template for the metadata (manifest) inside the dataset repo
+ _MANIFEST_URL = _BASE_URL + "{split}/{config}.json"
+
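+ # For illustration, with split="train", config="clean", archive_id=0 the
+ # templates above resolve to (paths relative to _BASE_URL):
+ #   train/clean/clean_000000.tar   (audio archive)
+ #   train/clean/n_files.txt        (archive count)
+ #   train/clean.json               (manifest)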
+
+ class PeoplesSpeech(datasets.GeneratorBasedBuilder):
+     """The People's Speech dataset."""
+
+     VERSION = datasets.Version("1.1.0")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="microset", version=VERSION, description="Small subset of clean data for example purposes."),
+         datasets.BuilderConfig(name="clean", version=VERSION, description="Clean, CC-BY licensed subset."),
+         datasets.BuilderConfig(name="dirty", version=VERSION, description="Dirty, CC-BY licensed subset."),
+         datasets.BuilderConfig(name="clean_sa", version=VERSION, description="Clean, CC-BY-SA licensed subset."),
+         datasets.BuilderConfig(name="dirty_sa", version=VERSION, description="Dirty, CC-BY-SA licensed subset."),
+     ]
+     DEFAULT_CONFIG_NAME = "clean"
+     # each example carries raw audio bytes, so write one example at a time to keep memory bounded
+     DEFAULT_WRITER_BATCH_SIZE = 1
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "audio": datasets.Audio(sampling_rate=16_000),
+                     "duration_ms": datasets.Value("int32"),
+                     "text": datasets.Value("string"),
+                 }
+             ),
+             task_templates=[AutomaticSpeechRecognition()],
+             supervised_keys=("audio", "text"),  # must reference existing feature columns
+             homepage=_HOMEPAGE,
+             license="/".join(_LICENSE),  # license must be a string
+             citation=_CITATION,
+         )
+
+     def _get_n_files(self, dl_manager, split, config):
+         n_files_url = _N_FILES_URL.format(split=split, config=config)
+         n_files_path = dl_manager.download_and_extract(n_files_url)
+
+         with open(n_files_path, encoding="utf-8") as f:
+             return int(f.read().strip())
+
+     def _split_generators(self, dl_manager):
+
+         if self.config.name == "microset":
+             # take only the first data archive for demo purposes
+             url = [_DATA_URL.format(split="train", config="clean", archive_id=0)]
+             archive_path = dl_manager.download(url)
+             local_extracted_archive_path = dl_manager.extract(archive_path) if not dl_manager.is_streaming else [None]
+             manifest_url = _MANIFEST_URL.format(split="train", config="clean_000000")  # train/clean_000000.json
+             manifest_path = dl_manager.download_and_extract(manifest_url)
+
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "local_extracted_archive_paths": local_extracted_archive_path,
+                         # use iter_archive here to access the files in the TAR archives:
+                         "archives": [dl_manager.iter_archive(path) for path in archive_path],
+                         "manifest_path": manifest_path,
+                     },
+                 ),
+             ]
+
+         n_files_train = self._get_n_files(dl_manager, split="train", config=self.config.name)
+         n_files_dev = self._get_n_files(dl_manager, split="dev", config="dev")
+         n_files_test = self._get_n_files(dl_manager, split="test", config="test")
+
+         urls = {
+             "train": [_DATA_URL.format(split="train", config=self.config.name, archive_id=i) for i in range(n_files_train)],
+             "dev": [_DATA_URL.format(split="dev", config="dev", archive_id=i) for i in range(n_files_dev)],
+             "test": [_DATA_URL.format(split="test", config="test", archive_id=i) for i in range(n_files_test)],
+         }
+         archive_paths = dl_manager.download(urls)
+
+         # In non-streaming mode, we extract the archives to have the data locally:
+         local_extracted_archive_paths = dl_manager.extract(archive_paths) if not dl_manager.is_streaming else \
+             {
+                 # one placeholder per archive in each split; note that archive_paths is a dict,
+                 # so the placeholder count must come from each split's list of archives
+                 "train": [None] * len(archive_paths["train"]),
+                 "dev": [None] * len(archive_paths["dev"]),
+                 "test": [None] * len(archive_paths["test"]),
+             }
+
+         manifest_urls = {
+             "train": _MANIFEST_URL.format(split="train", config=self.config.name),
+             "dev": _MANIFEST_URL.format(split="dev", config="dev"),
+             "test": _MANIFEST_URL.format(split="test", config="test"),
+         }
+         manifest_paths = dl_manager.download_and_extract(manifest_urls)
+
+         # To access the audio data from the TAR archives using the download manager,
+         # we have to use the dl_manager.iter_archive method.
+         #
+         # This is because dl_manager.download_and_extract does not support TAR
+         # archives in streaming mode: the files of a TAR archive have to be
+         # streamed one by one.
+         #
+         # The iter_archive method returns an iterable of (path_within_archive, file_obj)
+         # for every file in a TAR archive.
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "local_extracted_archive_paths": local_extracted_archive_paths["train"],
+                     # use iter_archive here to access the files in the TAR archives:
+                     "archives": [dl_manager.iter_archive(path) for path in archive_paths["train"]],
+                     "manifest_path": manifest_paths["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "local_extracted_archive_paths": local_extracted_archive_paths["dev"],
+                     # use iter_archive here to access the files in the TAR archives:
+                     "archives": [dl_manager.iter_archive(path) for path in archive_paths["dev"]],
+                     "manifest_path": manifest_paths["dev"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "local_extracted_archive_paths": local_extracted_archive_paths["test"],
+                     # use iter_archive here to access the files in the TAR archives:
+                     "archives": [dl_manager.iter_archive(path) for path in archive_paths["test"]],
+                     "manifest_path": manifest_paths["test"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, local_extracted_archive_paths, archives, manifest_path):
+         meta = dict()
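+         # Each line of the manifest is a JSON object; judging from the keys read
+         # below, it looks roughly like (illustrative sketch, not a verbatim sample):
+         #   {"audio_document_id": "...",
+         #    "training_data": {"label": [...], "name": [...], "duration_ms": [...]}}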
+         with open(manifest_path, "r", encoding="utf-8") as f:
+             for line in tqdm(f, desc="reading metadata file"):
+                 sample_meta = json.loads(line)
+                 _id = sample_meta["audio_document_id"]
+                 texts = sample_meta["training_data"]["label"]
+                 audio_filenames = sample_meta["training_data"]["name"]
+                 durations = sample_meta["training_data"]["duration_ms"]
+                 for audio_filename, text, duration in zip(audio_filenames, texts, durations):
+                     audio_filename = audio_filename.lstrip("./")
+                     meta[audio_filename] = {
+                         "audio_document_id": _id,
+                         "text": text,
+                         "duration_ms": duration
+                     }
+
+         for local_extracted_archive_path, archive in zip(local_extracted_archive_paths, archives):
+             # Here we iterate over all the files within the TAR archive:
+             for audio_filename, audio_file in archive:
+                 audio_filename = audio_filename.lstrip("./")
+                 # If the audio file exists locally (i.e. in default, non-streaming mode), set the full
+                 # path by joining the directory the archive was extracted to and the audio filename.
+                 path = os.path.join(local_extracted_archive_path, audio_filename) if local_extracted_archive_path \
+                     else audio_filename
+                 yield audio_filename, {
+                     "id": audio_filename,
+                     "audio": {"path": path, "bytes": audio_file.read()},
+                     "text": meta[audio_filename]["text"],
+                     "duration_ms": meta[audio_filename]["duration_ms"]
+                 }