Syoy committed
Commit 281aef9
1 Parent(s): a86c658

First commit - Todos in Readme before release
README.md ADDED
@@ -0,0 +1,265 @@
---
license: cc-by-4.0
task_categories:
- audio-classification
pretty_name: >-
  Enriched DCASE 2023 Challenge Task 2 Dataset
size_categories:
- 1K<n<10K
tags:
- anomaly detection
- anomalous sound detection
- acoustic condition monitoring
- sound machine fault diagnosis
- machine learning
- unsupervised learning
- acoustic scene classification
- acoustic event detection
- acoustic signal processing
- audio domain shift
- domain generalization
---

# Dataset Card for the Enriched "DCASE 2023 Challenge Task 2 Dataset"

[//]: # (todo)

## Table of contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Explore the data with Spotlight](#explore-the-data-with-spotlight)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Supported Tasks and Leaderboards (original: Overview of the task)](#supported-tasks-and-leaderboards-original-overview-of-the-task)
  - [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Baseline system](#baseline-system)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information - Condition of use](#licensing-information---condition-of-use)
  - [Citation Information (original)](#citation-information-original)

## Dataset Description

- **Homepage:** [Renumics Homepage](https://renumics.com/)
- **Original Dataset Upload:** [ZENODO: DCASE 2023 Challenge Task 2 Development Dataset](https://zenodo.org/record/7687464#.Y_9VtdLMLmE)
- **Paper:** [MIMII DG](https://arxiv.org/abs/2205.13879)
- **Paper:** [ToyADMOS2](https://arxiv.org/abs/2106.02369)

### Dataset Summary

[//]: # (todo)

[//]: # (todo: responsibility)

### Explore the data with Spotlight

Install Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip)

```bash
pip install renumics-spotlight
```

and simply run the following code block:

```python
from datasets import load_dataset
from renumics import spotlight

# load_dataset without a "split" argument returns a DatasetDict,
# so select a split before converting to a DataFrame
ds = load_dataset("renumics/dcase23-task2-enriched", "dev", split="train")
df = ds.to_pandas()
spotlight.show(df, dtype={'path': spotlight.Audio})
```

advanced view:

```python
from datasets import load_dataset, load_dataset_builder
from renumics import spotlight

ds = load_dataset("renumics/dcase23-task2-enriched", "dev", split="train", streaming=False)
db = load_dataset_builder("renumics/dcase23-task2-enriched", "dev")

df = db.config.to_spotlight(ds)
spotlight.show(df, dtype={'audio': spotlight.Audio, 'baseline-embeddings': spotlight.Embedding}, layout=db.config.get_layout())
```

[//]: # (todo: add embeddings column)

## Dataset Structure

### Data Instances

For each instance, there is an Audio object for the audio, a string for the path, an integer for the section, strings for the attribute parameters (`d1p`, `d2p`, `d3p`) and their values (`d1v`, `d2v`, `d3v`), a ClassLabel for the label and a ClassLabel for the class.

[//]: # (todo)

```
{'audio': {'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
                            0.        ,  0.        ], dtype=float32),
           'path': 'train/fan_section_01_source_train_normal_0592_f-n_A.wav',
           'sampling_rate': 16000},
 'path': 'train/fan_section_01_source_train_normal_0592_f-n_A.wav',
 'section': 1,
 'd1p': 'f-n',
 'd1v': 'A',
 'd2p': '',
 'd2v': '',
 'd3p': '',
 'd3v': '',
 'label': 0 (normal),
 'class': 0 (fan)}
```

The length of each audio file is 10 seconds.
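
As a quick check, a single instance can be inspected directly; at a sampling rate of 16 kHz, a 10-second clip corresponds to 160,000 samples (a minimal sketch, assuming the `dev` configuration shown above):

```python
from datasets import load_dataset

ds = load_dataset("renumics/dcase23-task2-enriched", "dev", split="train")
sample = ds[0]

# duration in seconds = number of samples / sampling rate (~10.0 for every clip)
duration = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(sample["path"], sample["section"], sample["label"], duration)
```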

### Data Fields

[//]: # (todo)

- `audio`: a dictionary containing the decoded audio array, the path to the audio file and its sampling rate (16,000 Hz).
- `path`: a string representing the path of the audio file inside the _tar.gz_ archive.
- `section`: an integer identifying the section the clip belongs to (see [Source Data](#source-data) for the definition of a section).
- `d*p`: a string holding the name of the *-th attribute parameter of the clip (empty if unused).
- `d*v`: a string holding the value of the *-th attribute parameter of the clip (empty if unused).
- `class`: an integer mapping to one of the seven machine types.
- `label`: an integer whose value may be either _0_, indicating that the audio sample is _normal_, or _1_, indicating that the audio sample contains an _anomaly_.

[//]: # (todo)
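
The two ClassLabel fields store integers; their readable names and the attribute parameter/value pairs can be recovered as follows (a sketch, reusing the `ds` split loaded above):

```python
sample = ds[0]

# ClassLabel features map integer ids to names, e.g. 0 -> "normal", 0 -> "fan"
label_name = ds.features["label"].int2str(sample["label"])
class_name = ds.features["class"].int2str(sample["class"])

# collect the attribute parameter/value pairs; empty strings mark unused slots
attributes = {sample[f"d{i}p"]: sample[f"d{i}v"] for i in (1, 2, 3) if sample[f"d{i}p"]}
print(class_name, label_name, attributes)
```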

### Data Splits

The development dataset has 2 splits: _train_ and _test_.

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | TODO                         |
| Test          | TODO                         |

The information for the evaluation dataset will follow after its release.
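
Until the table above is filled in, the split sizes can be computed directly (a sketch for the `dev` configuration):

```python
from datasets import load_dataset

ds = load_dataset("renumics/dcase23-task2-enriched", "dev")

# the DatasetDict maps split names ("train", "test") to Dataset objects
print({split: ds[split].num_rows for split in ds})
```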

## Dataset Creation

### Curation Rationale

This dataset is the "development dataset" for the [DCASE 2023 Challenge Task 2 "First-Shot Unsupervised Anomalous Sound Detection for Machine Condition Monitoring"](https://dcase.community/challenge2023/task-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring).

The data consists of the normal/anomalous operating sounds of seven types of real/toy machines. Each recording is a single-channel, 10-second audio clip that includes both a machine's operating sound and environmental noise. The following seven types of real/toy machines are used in this task:

- ToyCar
- ToyTrain
- Fan
- Gearbox
- Bearing
- Slide rail
- Valve

### Supported Tasks and Leaderboards (original: Overview of the task)

Anomalous sound detection (ASD) is the task of identifying whether the sound emitted from a target machine is normal or anomalous. Automatic detection of mechanical failure is an essential technology in the fourth industrial revolution, which involves artificial-intelligence-based factory automation. Prompt detection of machine anomalies by observing sounds is useful for monitoring the condition of machines.

This task is the follow-up to DCASE 2020 Task 2 through DCASE 2022 Task 2. The task this year is to develop an ASD system that meets the following four requirements.

**1. Train a model using only normal sound (unsupervised learning scenario)**

Because anomalies rarely occur and are highly diverse in real-world factories, it can be difficult to collect exhaustive patterns of anomalous sounds. Therefore, the system must detect unknown types of anomalous sounds that are not provided in the training data. This is the same requirement as in the previous tasks.

**2. Detect anomalies regardless of domain shifts (domain generalization task)**

In real-world cases, the operational states of a machine or the environmental noise can change and cause domain shifts. Domain-generalization techniques can be useful for handling domain shifts that occur frequently or are hard to notice. In this task, the system is required to use domain-generalization techniques to handle these domain shifts. This requirement is the same as in DCASE 2022 Task 2.

**3. Train a model for a completely new machine type**

For a completely new machine type, the hyperparameters of the trained model cannot be tuned. Therefore, the system should have the ability to train models without additional hyperparameter tuning.

**4. Train a model using only one machine from its machine type**

While sounds from multiple machines of the same machine type can be used to enhance detection performance, it is often the case that sound data from only one machine are available for a machine type. In such a case, the system should be able to train models using only one machine from a machine type.
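
Requirement 1 can be checked directly on the enriched development data: the train split is expected to contain only _normal_ clips (a small sketch, not part of the challenge tooling):

```python
from datasets import load_dataset

train = load_dataset("renumics/dcase23-task2-enriched", "dev", split="train")

# requirement 1: the training data provides normal sounds only (label 0 == "normal")
assert set(train.unique("label")) == {0}
```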

### Source Data

#### Definition

We first define the key terms in this task: "machine type", "section", "source domain", "target domain", and "attributes".

- "Machine type" indicates the type of machine, which in the development dataset is one of seven: fan, gearbox, bearing, slide rail, valve, ToyCar, and ToyTrain.
- A "section" is defined as a subset of the dataset for calculating performance metrics.
- The "source domain" is the domain under which most of the training data and some of the test data were recorded, and the "target domain" is a different set of domains under which some of the training data and some of the test data were recorded. There are differences between the source and target domains in terms of operating speed, machine load, viscosity, heating temperature, type of environmental noise, signal-to-noise ratio, etc.
- "Attributes" are parameters that define states of machines or types of noise.

#### Description

This dataset consists of seven machine types. For each machine type, one section is provided, and the section is a complete set of training and test data. For each section, this dataset provides (i) 990 clips of normal sounds in the source domain for training, (ii) ten clips of normal sounds in the target domain for training, and (iii) 100 clips each of normal and anomalous sounds for the test. The source/target domain of each sample is provided. Additionally, the attributes of each sample in the training and test data are provided in the file names and attribute csv files.
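
Because the domain is encoded in the file name (e.g. `fan_section_01_source_train_normal_0592_f-n_A.wav`), the source- and target-domain training clips can be separated with a simple filter. Note that the `_target_` token below is inferred from the `_source_` naming scheme and should be verified against the actual file names:

```python
from datasets import load_dataset

train = load_dataset("renumics/dcase23-task2-enriched", "dev", split="train")

# the file name carries the recording domain, e.g. "..._source_train_..."
source = train.filter(lambda ex: "_source_" in ex["path"])
target = train.filter(lambda ex: "_target_" in ex["path"])  # assumed token, mirroring "_source_"
print(len(source), len(target))  # expected ratio per section: 990 source vs. 10 target clips
```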

#### Recording procedure

Normal/anomalous operating sounds of machines and their related equipment were recorded. Anomalous sounds were collected by deliberately damaging target machines. To simplify the task, we use only the first channel of multi-channel recordings; all recordings are regarded as single-channel recordings of a fixed microphone. We mixed a target machine sound with environmental noise, and only noisy recordings are provided as training/test data. The environmental noise samples were recorded in several real factory environments. We will publish papers on the dataset to explain the details of the recording procedure by the submission deadline.

## Considerations for Using the Data

### Social Impact of Dataset

Not applicable.

### Discussion of Biases

Not applicable.

### Other Known Limitations

Not applicable.

## Additional Information

### Baseline system

[//]: # (todo)
Two baseline systems are available in the GitHub repository. The baseline systems provide a simple entry-level approach that gives reasonable performance on the dataset of Task 2. They are good starting points, especially for entry-level researchers who want to get familiar with the anomalous-sound-detection task.
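
To illustrate the general shape of such an entry-level approach (an illustrative sketch, not the official baseline implementation): a model of "normal" is fitted on features extracted from normal training clips only, and test clips are scored by their distance from it. Feature extraction is kept abstract here; random vectors merely stand in for, e.g., time-averaged log-mel frames:

```python
import numpy as np

def fit_normal_model(train_features: np.ndarray):
    """Fit mean and pseudo-inverse covariance of normal-only training features."""
    mean = train_features.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(train_features, rowvar=False))
    return mean, cov_inv

def anomaly_score(x: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Mahalanobis distance to the normal model; larger means more anomalous."""
    d = x - mean
    return float(d @ cov_inv @ d)

# stand-in features: 990 normal source-domain clips with 64-dimensional features
rng = np.random.default_rng(0)
mean, cov_inv = fit_normal_model(rng.normal(size=(990, 64)))
print(anomaly_score(rng.normal(size=64), mean, cov_inv))
```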

### Dataset Curators

[//]: # (todo)
Example: The SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the [Stanford NLP group](https://nlp.stanford.edu/).

It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.

### Licensing Information - Condition of use

The [original dataset](https://zenodo.org/record/7687464#.Y_9dd9LMLmH) was created jointly by **Hitachi, Ltd.** and **NTT Corporation** and is available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.

### Citation Information (original)

If you use this dataset, please cite all the following papers. We will publish a paper on DCASE 2023 Task 2, so please make sure to cite that paper, too.

- Kota Dohi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, Yuki Nikaido, and Yohei Kawaguchi. MIMII DG: sound dataset for malfunctioning industrial machine investigation and inspection for domain generalization task. In arXiv e-prints: 2205.13879, 2022. [[URL](https://arxiv.org/abs/2205.13879)]
- Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Masahiro Yasuda, and Shoichiro Saito. ToyADMOS2: another dataset of miniature-machine operating sounds for anomalous sound detection under domain shift conditions. In Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 1–5. Barcelona, Spain, November 2021. [[URL](https://dcase.community/documents/workshop2021/proceedings/DCASE2021Workshop_Harada_6.pdf)]

```
@dataset{kota_dohi_2023_7687464,
  author    = {Kota Dohi and
               Keisuke and
               Noboru and
               Daisuke and
               Yuma and
               Tomoya and
               Harsh and
               Takashi and
               Yohei},
  title     = {DCASE 2023 Challenge Task 2 Development Dataset},
  month     = mar,
  year      = 2023,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.7687464},
  url       = {https://doi.org/10.5281/zenodo.7687464}
}
```
data/.gitattributes ADDED
@@ -0,0 +1,3 @@
config-spotlight-layout.json filter=lfs diff=lfs merge=lfs -text
*.tar.gz filter=lfs diff=lfs merge=lfs -text
dev_metadata.csv filter=lfs diff=lfs merge=lfs -text
data/config-spotlight-layout.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:671174dd0587e99ddcc999af2b9c149a4b3715564f6b838853e57256a176a4f3
size 1380
data/dev_metadata.csv ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd4c496569294420a1b5e4a5b9a8b130b740da5bd9d9d4714746627429b24535
size 825327
data/dev_test.tar.gz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c3163c1b0fc19edc4478bd27e4e2065e6a6f69b9a97a6e08a55d0e6b0b56e2d5
size 364523414
data/dev_test.tar.gz.lock ADDED
File without changes
data/dev_train.tar.gz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:973e811e8911a2d264ec26d7ca0bb0d3f06d2bf1394cabcda4ef902d934b765f
size 1830350107
data/dev_train.tar.gz.lock ADDED
File without changes
dcase23-task2-enriched.py ADDED
@@ -0,0 +1,237 @@
import os
import datasets
import datasets.info
import pandas as pd
from pathlib import Path
from typing import Iterable, Dict, Optional, Union, List


_CITATION = """\
@dataset{kota_dohi_2023_7687464,
  author    = {Kota Dohi and
               Keisuke and
               Noboru and
               Daisuke and
               Yuma and
               Tomoya and
               Harsh and
               Takashi and
               Yohei},
  title     = {DCASE 2023 Challenge Task 2 Development Dataset},
  month     = mar,
  year      = 2023,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.7687464},
  url       = {https://doi.org/10.5281/zenodo.7687464}
}
"""
_LICENSE = "Creative Commons Attribution 4.0 International Public License"

_METADATA_REG = r"attributes_\d+.csv"

_NUM_TARGETS = 2
_NUM_CLASSES = 7

_TARGET_NAMES = ["normal", "anomaly"]
_CLASS_NAMES = ["gearbox", "fan", "bearing", "slider", "ToyCar", "ToyTrain", "valve"]

_HOMEPAGE = {
    "dev": "https://zenodo.org/record/7687464#.Y_96q9LMLmH",
    "add": None,
    "eval": None,
}

DATA_URLS = {
    "dev": {
        "train": "data/dev_train.tar.gz",
        "test": "data/dev_test.tar.gz",
        "metadata": "data/dev_metadata.csv",
    },
    "add": None,
    "eval": None,
}

STATS = {
    "name": "Enriched Dataset of 'DCASE 2023 Challenge Task 2'",
    "configs": {
        "dev": {
            "date": "Mar 1, 2023",
            "version": "1.0.0",
            "homepage": "https://zenodo.org/record/7687464#.ZABmANLMLmH",
            "splits": ["train", "test"],
        },
        "add": {
            "date": None,
            "version": "0.0.0",
            "homepage": None,
            "splits": ["train", "test"],
        },
        "eval": {
            "date": None,
            "version": "0.0.0",
            "homepage": None,
            "splits": ["test"],
        },
    }
}

DATASET = {
    "dev": "DCASE 2023 Challenge Task 2 Development Dataset",
    "add": "DCASE 2023 Challenge Task 2 Additional Train Dataset",
    "eval": "DCASE 2023 Challenge Task 2 Evaluation Dataset",
}

_SPOTLIGHT_LAYOUT = "data/config-spotlight-layout.json"

_SPOTLIGHT_RENAME = {
    "audio": "original_audio",
    "path": "audio",
}

class DCASE2023Task2DatasetConfig(datasets.BuilderConfig):
    """BuilderConfig for DCASE2023Task2Dataset."""

    def __init__(self, name, version, **kwargs):
        self.release_date = kwargs.pop("release_date", None)
        self.homepage = kwargs.pop("homepage", None)
        self.data_urls = kwargs.pop("data_urls", None)
        self.splits = kwargs.pop("splits", None)
        # name of the underlying original dataset (otherwise silently dropped from kwargs)
        self.dataset = kwargs.pop("dataset", None)
        self._rename = kwargs.pop("rename", None)
        self._layout = kwargs.pop("layout", None)
        description = (
            f"Dataset for the DCASE 2023 Challenge Task 2 'First-Shot Unsupervised Anomalous Sound Detection "
            f"for Machine Condition Monitoring', released on {self.release_date}. Original data available under "
            f"{self.homepage}. "
            f"SPLIT: {name}."
        )
        super(DCASE2023Task2DatasetConfig, self).__init__(
            name=name,
            version=datasets.Version(version),
            description=description,
        )

    def to_spotlight(self, data: Union[pd.DataFrame, datasets.Dataset]) -> pd.DataFrame:
        """Convert a Dataset or DataFrame into a DataFrame suitable for Renumics Spotlight."""
        if isinstance(data, datasets.Dataset):
            df = data.to_pandas()
            df["split"] = data.split
            df["config"] = data.config_name

            # resolve the integer class ids to their human-readable names
            class_names = data.features["class"].names
            df["class_name"] = df["class"].apply(lambda x: class_names[x])
        elif isinstance(data, pd.DataFrame):
            df = data
        else:
            raise TypeError("type(data) not in Union[pd.DataFrame, datasets.Dataset]")

        df["file_path"] = df["path"]
        df.rename(columns=self._rename, inplace=True)

        return df.copy()

    def get_layout(self):
        return self._layout

class DCASE2023Task2Dataset(datasets.GeneratorBasedBuilder):
    """Dataset for the DCASE 2023 Challenge Task 2 "First-Shot Unsupervised Anomalous Sound Detection
    for Machine Condition Monitoring"."""

    VERSION = datasets.Version("0.0.2")

    DEFAULT_CONFIG_NAME = "dev"

    BUILDER_CONFIGS = [
        DCASE2023Task2DatasetConfig(
            name=key,
            version=stats["version"],
            dataset=DATASET[key],
            homepage=_HOMEPAGE[key],
            data_urls=DATA_URLS[key],
            release_date=stats["date"],
            splits=stats["splits"],
            layout=_SPOTLIGHT_LAYOUT,
            rename=_SPOTLIGHT_RENAME,
        )
        for key, stats in STATS["configs"].items()
    ]
    def _info(self):
        features = datasets.Features(
            {
                "audio": datasets.Audio(sampling_rate=16_000),
                "path": datasets.Value("string"),
                "section": datasets.Value("int64"),
                "d1p": datasets.Value("string"),
                "d1v": datasets.Value("string"),
                "d2p": datasets.Value("string"),
                "d2v": datasets.Value("string"),
                "d3p": datasets.Value("string"),
                "d3v": datasets.Value("string"),
                "label": datasets.ClassLabel(num_classes=_NUM_TARGETS, names=_TARGET_NAMES),
                "class": datasets.ClassLabel(num_classes=_NUM_CLASSES, names=_CLASS_NAMES),
                # "baseline-embeddings": datasets.Array2D(shape=(64, 1), dtype='float32'),  # todo: add
            }
        )

        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=self.config.description,
            features=features,
            supervised_keys=datasets.info.SupervisedKeysData("label"),
            homepage=self.config.homepage,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(
        self,
        dl_manager: datasets.DownloadManager,
    ):
        """Returns SplitGenerators."""
        dl_manager.download_config.ignore_url_params = True
        audio_path = {}
        local_extracted_archive = {}
        split_type = {"train": datasets.Split.TRAIN, "test": datasets.Split.TEST}

        for split in split_type:
            audio_path[split] = dl_manager.download(self.config.data_urls[split])
            # in streaming mode the archives are iterated directly instead of being extracted
            local_extracted_archive[split] = dl_manager.extract(
                audio_path[split]) if not dl_manager.is_streaming else None

        return [
            datasets.SplitGenerator(
                name=split_type[split],
                gen_kwargs={
                    "split": split,
                    "local_extracted_archive": local_extracted_archive[split],
                    "audio_files": dl_manager.iter_archive(audio_path[split]),
                    "metadata_file": dl_manager.download_and_extract(self.config.data_urls["metadata"]),
                },
            ) for split in split_type
        ]

    def _generate_examples(
        self,
        split: str,
        local_extracted_archive: Optional[str],
        audio_files: Optional[Iterable],
        metadata_file: Optional[str],
    ):
        """Yields examples."""
        metadata = pd.read_csv(metadata_file)
        data_fields = list(self._info().features.keys())

        id_ = 0
        for path, f in audio_files:
            # metadata rows are keyed by "<split dir>/<file name>", e.g. "train/fan_....wav"
            lookup = Path(path).parent.name + "/" + Path(path).name
            if lookup in metadata["path"].values:
                path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path
                audio = {"path": path, "bytes": f.read()}
                # start from a row of None values so fields missing in the metadata stay defined
                result = {field: None for field in data_fields}
                result.update(metadata[metadata["path"] == lookup].T.squeeze().to_dict())
                result["path"] = path
                yield id_, {**result, "audio": audio}
                id_ += 1