soerenray committed
Commit a730a2c
1 Parent(s): ce274e0

Upload README.md

Files changed (1): README.md (+221, -10)
README.md CHANGED
---
license: openrail
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: label
    dtype: int64
  - name: is_unknown
    dtype: bool
  - name: speaker_id
    dtype: string
  - name: utterance_id
    dtype: int8
  - name: logits
    sequence: float32
  - name: Probability
    dtype: float64
  - name: Predicted Label
    dtype: string
  - name: Annotated Labels
    dtype: string
  - name: embedding
    sequence: float32
  - name: embedding_reduced
    sequence: float64
  splits:
  - name: train
    num_bytes: 1774663023.432
    num_examples: 51093
  download_size: 1701177850
  dataset_size: 1774663023.432
---

## Dataset Description

- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=cifar100-enriched)
- **GitHub:** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage:** [Hugging Face Dataset](https://huggingface.co/datasets/speech_commands)
- **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://www.researchgate.net/publication/324435399_Speech_Commands_A_Dataset_for_Limited-Vocabulary_Speech_Recognition)

### Dataset Summary

📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=cifar100-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.

🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.

📚 This dataset is an enriched version of the [speech_commands dataset](https://huggingface.co/datasets/speech_commands).
It provides predicted labels, their annotations, and embeddings computed with Hugging Face's AutoModel and
AutoFeatureExtractor. If you would like to take a closer look at the dataset and the model's performance, you can use Spotlight by Renumics to find complex sub-relationships between classes.

### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open-source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:

Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):

```python
!pip install renumics-spotlight datasets
```

Load the dataset from Hugging Face in your notebook:

```python
import datasets

dataset = datasets.load_dataset("soerenray/speech_commands_enriched_and_annotated", split="train")
```

Start exploring with a simple view that leverages embeddings to identify relevant data segments:

```python
from renumics import spotlight

df = dataset.to_pandas()
df_show = df.drop(columns=['embedding', 'logits'])
spotlight.show(df_show, port=8000, dtype={"audio": spotlight.Audio, "embedding_reduced": spotlight.Embedding})
```

You can use the UI to interactively configure the view on the data. Depending on the concrete task (e.g. model comparison, debugging, outlier detection), you might want to leverage different enrichments and metadata.

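For example, a label-debugging view can be prepared directly in pandas before handing the frame to Spotlight. This is only a sketch: it assumes the `Predicted Label`, `Annotated Labels`, and `Probability` columns shown in the data instance in this card, and uses a toy DataFrame in place of `dataset.to_pandas()`:

```python
import pandas as pd

# Toy stand-in for dataset.to_pandas(); the real frame has these columns too.
df = pd.DataFrame({
    "Predicted Label": ["bed", "cat", "dog", "bed"],
    "Annotated Labels": ["bed", "cat", "bird", "tree"],
    "Probability": [0.99, 0.97, 0.62, 0.88],
})

# Rows where the model and the annotator disagree.
mismatches = df[df["Predicted Label"] != df["Annotated Labels"]]

# Confident disagreements are the most promising label-error candidates.
suspects = mismatches[mismatches["Probability"] > 0.8]
print(len(mismatches), len(suspects))
```

Such a filtered frame can be passed to `spotlight.show` in the same way as `df_show` above.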
### Speech Commands Dataset

The Speech Commands dataset consists of 60,973 samples in 30 classes (plus an additional silence class).
The classes are completely mutually exclusive. The dataset was designed to evaluate keyword-spotting models.
We have enriched it by adding **audio embeddings** generated with [MIT's AST](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2).
Here is the list of classes:

| Class    |
|----------|
| "Yes"    |
| "No"     |
| "Up"     |
| "Down"   |
| "Left"   |
| "Right"  |
| "On"     |
| "Off"    |
| "Stop"   |
| "Go"     |
| "Zero"   |
| "One"    |
| "Two"    |
| "Three"  |
| "Four"   |
| "Five"   |
| "Six"    |
| "Seven"  |
| "Eight"  |
| "Nine"   |
| "Bed"    |
| "Bird"   |
| "Cat"    |
| "Dog"    |
| "Happy"  |
| "House"  |
| "Marvin" |
| "Sheila" |
| "Tree"   |
| "Wow"    |

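If you need to map integer labels back to keywords, the table order above appears to line up with the integer labels (consistent with the sample instance in this card, where label 20 is "bed"). A minimal sketch under that assumption:

```python
# Class names in the order listed above (lower-cased, as stored in the samples).
CLASSES = [
    "yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go",
    "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
    "bed", "bird", "cat", "dog", "happy", "house", "marvin", "sheila", "tree", "wow",
]

def id2label(i: int) -> str:
    """Map an integer class id to its keyword."""
    return CLASSES[i]

print(id2label(20))
```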
### Supported Tasks and Leaderboards

- `TensorFlow Speech Recognition Challenge`: The goal of this task is to build a keyword-spotting model that recognizes simple spoken commands. The leaderboard is available [here](https://www.kaggle.com/c/tensorflow-speech-recognition-challenge).

### Languages

The class labels are in English.

## Dataset Structure

### Data Instances

A sample from the dataset is provided below:

```python
{
    "audio": {
        "path": 'bed/4a294341_nohash_0.wav',
        "array": array([0.00126146, 0.00647549, 0.01160542, ..., 0.00740056, 0.00798924, 0.00504583]),
        "sampling_rate": 16000
    },
    "label": 20,  # "bed"
    "is_unknown": True,
    "speaker_id": "4a294341",
    "utterance_id": 0,
    "logits": array([-9.341216087341309, -10.528160095214844, -8.605941772460938, ..., -9.13764476776123,
                     -9.4379243850708, -9.254714012145996]),
    "Probability": 0.99669,
    "Predicted Label": "bed",
    "Annotated Labels": "bed",
    "embedding": array([1.5327608585357666, -3.3523001670837402, 2.5896875858306885, ..., 0.1423477828502655,
                        2.0368740558624268, 0.6912304759025574]),
    "embedding_reduced": array([-5.691406726837158, -0.15976890921592712])
}
```

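The `Probability` and `Predicted Label` fields are consistent with taking a softmax over `logits` and keeping the argmax. A sketch with a toy 3-class logits vector (the real vector has length 35, and the class names here are illustrative only):

```python
import numpy as np

def predict(logits, classes):
    # Numerically stable softmax over the logits.
    z = np.asarray(logits, dtype=np.float64)
    p = np.exp(z - z.max())
    p /= p.sum()
    i = int(p.argmax())
    return classes[i], float(p[i])

label, prob = predict([2.0, 0.1, -1.3], ["bed", "cat", "dog"])
print(label, round(prob, 3))
```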
### Data Fields

| Feature           | Data Type                                                              |
|-------------------|------------------------------------------------------------------------|
| audio             | Audio(sampling_rate=16000, mono=True, decode=True, id=None)            |
| label             | Value(dtype='int64', id=None)                                          |
| is_unknown        | Value(dtype='bool', id=None)                                           |
| speaker_id        | Value(dtype='string', id=None)                                         |
| utterance_id      | Value(dtype='int8', id=None)                                           |
| logits            | Sequence(feature=Value(dtype='float32', id=None), length=35, id=None)  |
| Probability       | Value(dtype='float64', id=None)                                        |
| Predicted Label   | Value(dtype='string', id=None)                                         |
| Annotated Labels  | Value(dtype='string', id=None)                                         |
| embedding         | Sequence(feature=Value(dtype='float32', id=None), length=768, id=None) |
| embedding_reduced | Sequence(feature=Value(dtype='float64', id=None), length=2, id=None)   |

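`embedding_reduced` is a 2-dimensional projection of the 768-dimensional `embedding`. This card does not state which reduction method was used, so purely as an illustration, here is a PCA-style projection via SVD on random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 768))  # stand-in for 100 audio embeddings

# Center the data, then project onto the top-2 principal directions.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T

print(X2.shape)
```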
### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

If you use this dataset, please cite the following paper:

```
@article{speechcommandsv2,
  author        = {{Warden}, P.},
  title         = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1804.03209},
  primaryClass  = "cs.CL",
  keywords      = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
  year          = 2018,
  month         = apr,
  url           = {https://arxiv.org/abs/1804.03209},
}
```

### Contributions

Pete Warden and Soeren Raymond (Renumics GmbH).