Datasets:

Sub-tasks: slot-filling
Size Categories: 100K<n<100M
Language Creators: unknown
Annotations Creators: unknown
Source Datasets: unknown
License:

Fhrozen committed on
Commit 5bff49b • 1 Parent(s): ea2cd9e

add readme

Files changed (3):
  1. LICENSE +11 -0
  2. README.md +342 -1
  3. dcase22_task3.py +20 -0
LICENSE ADDED
@@ -0,0 +1,11 @@
+ -----------COPYRIGHT NOTICE STARTS WITH THIS LINE------------
+
+ Copyright (c) 2022 SONY and Tampere University
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ -----------COPYRIGHT NOTICE ENDS WITH THIS LINE------------
README.md CHANGED
@@ -1,3 +1,344 @@
  ---
- license: cc-by-sa-4.0
  ---
  ---
+ license: MIT
+ annotations_creators:
+ - unknown
+ language_creators:
+ - unknown
+ size_categories:
+ - 100K<n<100M
+ source_datasets:
+ - unknown
+ task_categories:
+ - audio-classification
+ task_ids:
+ - other-audio-slot-filling
  ---
+
+
+ # DCASE 2022 Task 3 Data sets: STARSS22 Dataset + Synthetic SELD mixtures
+
+ [Audio Research Group / Tampere University](https://webpages.tuni.fi/arg/)
+ [Creative AI Lab / SONY R&D Center](https://www.sony.com/en/SonyInfo/research/research-areas/audio-acoustics/)
+
+ ## Important
+ **This is a copy of the original Zenodo dataset.**
+
+ AUTHORS
+
+ **Tampere University**
+ - Archontis Politis ([contact](mailto:archontis.politis@tuni.fi), [profile](https://scholar.google.fi/citations?user=DuCqB3sAAAAJ&hl=en))
+ - Parthasaarathy Sudarsanam ([contact](mailto:parthasaarathy.ariyakulamsudarsanam@tuni.fi), [profile](https://scholar.google.com/citations?user=yxZ1qAIAAAAJ&hl=en))
+ - Sharath Adavanne ([contact](mailto:sharath.adavanne@tuni.fi), [profile](https://www.aane.in))
+ - Daniel Krause ([contact](mailto:daniel.krause@tuni.fi), [profile](https://scholar.google.com/citations?user=pSLng-8AAAAJ&hl=en))
+ - Tuomas Virtanen ([contact](mailto:tuomas.virtanen@tuni.fi), [profile](https://homepages.tuni.fi/tuomas.virtanen/))
+
+ **SONY**
+ - Yuki Mitsufuji ([contact](mailto:yuhki.mitsufuji@sony.com), [profile](https://scholar.google.com/citations?user=GMytI10AAAAJ))
+ - Kazuki Shimada ([contact](mailto:kazuki.shimada@sony.com), [profile](https://scholar.google.com/citations?user=-t9IslAAAAAJ&hl=en))
+ - Naoya Takahashi ([profile](https://scholar.google.com/citations?user=JbtYJMoAAAAJ))
+ - Yuichiro Koyama
+ - Shusuke Takahashi
+
+ # Description
+
+ The **Sony-TAu Realistic Spatial Soundscapes 2022 (STARSS22)** dataset contains multichannel recordings of sound scenes in various rooms and environments, together with temporal and spatial annotations of prominent events belonging to a set of target classes. The dataset was collected in two different countries: in Tampere, Finland by the Audio Research Group (ARG) of **Tampere University (TAU)**, and in Tokyo, Japan by **SONY**, using a similar setup and annotation procedure. The dataset is delivered in two 4-channel spatial recording formats: a microphone array format (**MIC**) and a first-order Ambisonics format (**FOA**). These recordings serve as the development dataset for the [DCASE 2022 Sound Event Localization and Detection Task](https://dcase.community/challenge2022/task-sound-event-localization-and-detection) of the [DCASE 2022 Challenge](https://dcase.community/challenge2022/).
+
+ Contrary to the three previous datasets of synthetic spatial sound scenes associated with earlier iterations of the DCASE Challenge, namely TAU Spatial Sound Events 2019 ([development](https://doi.org/10.5281/zenodo.2599196)/[evaluation](https://doi.org/10.5281/zenodo.3377088)), [TAU-NIGENS Spatial Sound Events 2020](https://doi.org/10.5281/zenodo.4064792), and [TAU-NIGENS Spatial Sound Events 2021](https://doi.org/10.5281/zenodo.5476980), the STARSS22 dataset contains recordings of real sound scenes and hence avoids some of the pitfalls of synthetic scene generation. Key properties of the recordings are:
+
+ - annotations are based on a combination of human annotators for sound event activity and optical tracking for spatial positions
+ - the annotated target event classes are determined by the composition of the real scenes
+ - the density, polyphony, occurrences, and co-occurrences of events and sound classes are not random; they follow the actions and interactions of the participants in the real scenes
+
+ The recordings were collected between September 2021 and February 2022. Collection of data from the TAU side has received funding from Google.
+
+ # Aim
+
+ The dataset is suitable for training and evaluating machine-listening models for sound event detection (SED), general sound source localization with diverse sounds or signal-of-interest localization, and joint sound event localization and detection (SELD). Additionally, the dataset can be used for evaluation of signal processing methods that do not necessarily rely on training, such as acoustic source localization methods and multiple-source acoustic tracking. The dataset allows evaluation of the performance and robustness of the aforementioned applications for diverse types of sounds and under diverse acoustic conditions.
+
+ # Recording procedure
+
+ The sound scene recordings were captured with a high-channel-count spherical microphone array ([Eigenmike em32 by mh Acoustics](https://mhacoustics.com/products)), simultaneously with a 360° video recording spatially aligned with the spherical array recording ([Ricoh Theta V](https://theta360.com/en/about/theta/v.html)). Additionally, the main sound sources of interest were equipped with tracking markers, which were tracked throughout the recording with an [Optitrack Flex 13](https://optitrack.com/cameras/flex-13/) system arranged around each scene. All scenes were based on human actors performing actions, interacting with each other and with the objects in the scene, and were dynamic by design. Since the actors produced most of the sounds in the scene (but not all), they were additionally equipped with [DPA Wireless Go II](https://rode.com/microphones/wireless/wirelessgoii) microphones, providing close-miked recordings of the main events. Recording would start and stop according to a scene being acted, usually lasting between 1 and 5 minutes. Recording would start on all microphones and tracking devices before the beginning of the scene, and would stop right after. A clapper sound would initiate the acting and serve as a reference signal for synchronization between the em32 recording, the Ricoh Theta V video, the DPA wireless microphone recordings, and the Optitrack tracker data. Synchronized clips of all of them would be cropped and stored at the end of each recording session.
+
+ # Annotation procedure
+
+ By combining information from the wireless microphones, the optical tracking data, and the 360° videos, spatiotemporal annotations were extracted semi-automatically and validated manually. More specifically, the actors were tracked throughout each recording session wearing headbands with markers, and the spatial positions of other human-related sources, such as mouth, hands, or footsteps, were geometrically extrapolated from those head coordinates. Additional trackers were mounted on other sources of interest (e.g. vacuum cleaner, guitar, water tap, cupboard, door handle, among others). Each actor had a wireless microphone mounted on their lapel, providing a clear recording of all sound events produced by that actor, and/or any independent sources closer to that actor than to the rest. The temporal annotation was based primarily on those close-miked recordings. The annotators would annotate the sound event activity and label its class by listening to those close-miked signals. Events that were not audible in the overall scene recording of the em32 were not annotated, even if they were audible in the lapel recordings. In ambiguous cases, the annotators could rely on the 360° video to associate an event with a certain actor or source. The final sound event temporal annotations were associated with the tracking data through the class of each sound event and the actor that produced it. All tracked Cartesian coordinates delivered by the tracker were converted to directions-of-arrival (DOAs) with respect to the coordinates of the Eigenmike. Finally, the class, temporal, and spatial annotations were combined and converted to the challenge format. Validation of the annotations was done by observing videos of the activities of each class, visualized as markers positioned at their respective DOAs on the 360° video plane, overlaid on the 360° video from the Ricoh Theta V.
+
+ # Recording formats
+
+ The array response of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) modeling the two formats describe the directional response of each channel to a source incident from a direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$.
+
+ **For the first-order Ambisonics (FOA):**
+
+ \begin{eqnarray}
+ H_1(\phi, \theta, f) &=& 1 \\
+ H_2(\phi, \theta, f) &=& \sin(\phi) \cos(\theta) \\
+ H_3(\phi, \theta, f) &=& \sin(\theta) \\
+ H_4(\phi, \theta, f) &=& \cos(\phi) \cos(\theta)
+ \end{eqnarray}
+
+ The FOA format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent, which holds true up to around 9 kHz with this specific microphone array; the actual encoded responses gradually deviate from the ideal ones above at higher frequencies.
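+
+ As a quick illustration (a minimal sketch, not part of the official tooling; the function name and use of NumPy are assumptions), the ideal frequency-independent FOA steering vector above can be evaluated for a given DOA as follows:
+
+ ```python
+ import numpy as np
+
+ def foa_steering_vector(azi_deg: float, ele_deg: float) -> np.ndarray:
+     """Ideal FOA response [H1, H2, H3, H4] for a plane wave from the given
+     azimuth/elevation (degrees), following the formulas above."""
+     phi, theta = np.deg2rad(azi_deg), np.deg2rad(ele_deg)
+     return np.array([
+         1.0,                          # H1: omnidirectional component
+         np.sin(phi) * np.cos(theta),  # H2
+         np.sin(theta),                # H3
+         np.cos(phi) * np.cos(theta),  # H4
+     ])
+
+ # Example: a source at azimuth 90 deg (left of the array), elevation 0 deg
+ print(foa_steering_vector(90.0, 0.0))  # -> [1. 1. 0. 0.]
+ ```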
+
+ **For the tetrahedral microphone array (MIC):**
+
+ The four microphones have the following positions, in spherical coordinates $(\phi, \theta, r)$:
+
+ \begin{eqnarray}
+ M_1: &\quad& (\phantom{-}45^\circ, \phantom{-}35^\circ, 4.2\,\mathrm{cm}) \nonumber\\
+ M_2: &\quad& (-45^\circ, -35^\circ, 4.2\,\mathrm{cm}) \nonumber\\
+ M_3: &\quad& (135^\circ, -35^\circ, 4.2\,\mathrm{cm}) \nonumber\\
+ M_4: &\quad& (-135^\circ, \phantom{-}35^\circ, 4.2\,\mathrm{cm}) \nonumber
+ \end{eqnarray}
+
+
+ Since the microphones are mounted on an acoustically hard spherical baffle, an analytical expression for the directional array response is given by the expansion:
+ \begin{equation}
+ H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m))
+ \end{equation}
+
+ where $m$ is the channel number, $(\phi_m, \theta_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$ m is the array radius, $c = 343$ m/s is the speed of sound, $\cos(\gamma_m)$ is the cosine of the angle between the microphone and the DOA, $P_n$ is the unnormalized Legendre polynomial of degree $n$, and $h_n'^{(2)}$ is the derivative, with respect to the argument, of the spherical Hankel function of the second kind. The expansion is limited to 30 terms, which provides negligible modeling error up to 20 kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found [here](https://github.com/polarch/Array-Response-Simulator).
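+
+ For reference, a numerical evaluation of this expansion could look like the sketch below (an illustrative example using SciPy; the function names are assumptions, and the simulator linked above remains the authoritative implementation):
+
+ ```python
+ import numpy as np
+ from scipy.special import spherical_jn, spherical_yn, eval_legendre
+
+ R, C = 0.042, 343.0  # array radius [m] and speed of sound [m/s], as above
+
+ def unit_vec(azi_deg, ele_deg):
+     phi, theta = np.deg2rad(azi_deg), np.deg2rad(ele_deg)
+     return np.array([np.cos(phi) * np.cos(theta),
+                      np.sin(phi) * np.cos(theta),
+                      np.sin(theta)])
+
+ def rigid_sphere_response(mic_azi, mic_ele, src_azi, src_ele, freq_hz, n_max=30):
+     """Directional response H_m of one baffled microphone to a plane wave
+     from (src_azi, src_ele), evaluated at a single frequency."""
+     kr = 2.0 * np.pi * freq_hz * R / C                     # omega * R / c
+     cos_gamma = unit_vec(mic_azi, mic_ele) @ unit_vec(src_azi, src_ele)
+     n = np.arange(n_max + 1)
+     # derivative of the spherical Hankel function of the second kind
+     dh2 = spherical_jn(n, kr, derivative=True) - 1j * spherical_yn(n, kr, derivative=True)
+     terms = (1j ** (n - 1)) / dh2 * (2 * n + 1) * eval_legendre(n, cos_gamma)
+     return terms.sum() / kr**2
+
+ # Example: response of M1 (45 deg, 35 deg) to a source at the front, at 1 kHz
+ print(rigid_sphere_response(45, 35, 0, 0, 1000.0))
+ ```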
+
+
+ # Dataset specifications
+
+ The specifications of the dataset can be summarized as follows:
+
+ - 70 recording clips of 30 sec ~ 5 min durations, with a total time of ~2 hrs, contributed by SONY (development dataset).
+ - 51 recording clips of 1 min ~ 5 min durations, with a total time of ~3 hrs, contributed by TAU (development dataset).
+ - A training-test split is provided for reporting results using the development dataset.
+   - 40 recordings contributed by SONY for the training split, captured in 2 rooms (dev-train-sony).
+   - 30 recordings contributed by SONY for the testing split, captured in 2 rooms (dev-test-sony).
+   - 27 recordings contributed by TAU for the training split, captured in 4 rooms (dev-train-tau).
+   - 24 recordings contributed by TAU for the testing split, captured in 3 rooms (dev-test-tau).
+ - A total of 11 unique rooms captured in the recordings, 4 from SONY and 7 from TAU (development set).
+ - Sampling rate of 24 kHz.
+ - Two 4-channel 3-dimensional recording formats: first-order Ambisonics (FOA) and tetrahedral microphone array (MIC).
+ - Recordings are taken in two different countries and two different sites.
+ - Each recording clip is part of a recording session happening in a unique room.
+ - Groups of participants, sound-making props, and scene scenarios are unique for each session (with a few exceptions).
+ - To achieve good variability and efficiency in the data, in terms of presence, density, movement, and/or spatial distribution of the sound events, the scenes are loosely scripted.
+ - 13 target classes are identified in the recordings and strongly annotated by humans.
+ - Spatial annotations for those active events are captured by an optical tracking system.
+ - Sound events out of the target classes are considered as interference.
+
+
+ # Sound event classes
+
+ 13 target sound event classes were annotated. The classes loosely follow the [Audioset ontology](https://research.google.com/audioset/ontology/index.html); an index-to-label lookup sketch is given at the end of this section.
+
+ 0. Female speech, woman speaking
+ 1. Male speech, man speaking
+ 2. Clapping
+ 3. Telephone
+ 4. Laughter
+ 5. Domestic sounds
+ 6. Walk, footsteps
+ 7. Door, open or close
+ 8. Music
+ 9. Musical instrument
+ 10. Water tap, faucet
+ 11. Bell
+ 12. Knock
+
+ The content of some of these classes corresponds to events of a limited range of Audioset-related subclasses. These are detailed here as additional information on the diversity of those sound events:
+
+ - Telephone
+   - Mostly traditional _Telephone Bell Ringing_ and _Ringtone_ sounds, without musical ringtones.
+ - Domestic sounds
+   - Sounds of _Vacuum cleaner_
+   - Sounds of water boiler, closer to _Boiling_
+   - Sounds of air circulator, closer to _Mechanical fan_
+ - Door, open or close
+   - Combination of _Door_ and _Cupboard open or close_
+ - Music
+   - _Background music_ and _Pop music_ played by a loudspeaker in the room.
+ - Musical instrument
+   - Acoustic guitar
+   - Marimba, xylophone
+   - Cowbell
+   - Piano
+   - Rattle (instrument)
+ - Bell
+   - Combination of sounds from hotel bell and glass bell, closer to _Bicycle bell_ and single _Chime_.
+
+ Some additional notes:
+ - The speech classes contain speech in a few different languages.
+ - There are occasionally localized sound events that are not annotated and are considered as interferers, with examples such as _computer keyboard_, _shuffling cards_, and _dishes, pots, and pans_.
+ - There is natural background noise (e.g. HVAC noise) in all recordings, at very low levels in some and at quite high levels in others. Such mostly diffuse background noise should be distinct from other noisy target sources (e.g. vacuum cleaner, mechanical fan), since those are clearly spatially localized.
+
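+ For convenience, the class indices above can be kept in a simple lookup table. The sketch below is purely illustrative (the constant name is an assumption, not an official artifact of the dataset):
+
+ ```python
+ # Index-to-label mapping for the 13 target classes listed above.
+ SOUND_EVENT_CLASSES = [
+     "Female speech, woman speaking",  # 0
+     "Male speech, man speaking",      # 1
+     "Clapping",                       # 2
+     "Telephone",                      # 3
+     "Laughter",                       # 4
+     "Domestic sounds",                # 5
+     "Walk, footsteps",                # 6
+     "Door, open or close",            # 7
+     "Music",                          # 8
+     "Musical instrument",             # 9
+     "Water tap, faucet",              # 10
+     "Bell",                           # 11
+     "Knock",                          # 12
+ ]
+ ```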
+
+ # Naming Convention (Development dataset)
+
+ The recordings in the development dataset follow the naming convention:
+
+     fold[fold number]_room[room number]_mix[recording number per room].wav
+
+ The fold number is currently used only to distinguish between the training and testing split. The room information is provided to potentially help users of the dataset understand the performance of their method with respect to different conditions.
+
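+ A minimal sketch of parsing this convention and mapping the fold number to the development split (fold 3 for training, fold 4 for testing, as in the file structure further below); the helper name and regular expression are illustrative assumptions:
+
+ ```python
+ import re
+ from pathlib import Path
+
+ _NAME_RE = re.compile(r"fold(?P<fold>\d+)_room(?P<room>\d+)_mix(?P<mix>\d+)\.(wav|csv)$")
+
+ def parse_clip_name(path):
+     """Extract fold/room/mix numbers from a STARSS22 file name and derive the split."""
+     m = _NAME_RE.match(Path(path).name)
+     if m is None:
+         raise ValueError(f"Unexpected file name: {path}")
+     fold, room, mix = (int(m.group(k)) for k in ("fold", "room", "mix"))
+     split = {3: "dev-train", 4: "dev-test"}.get(fold, f"fold{fold}")
+     return {"fold": fold, "room": room, "mix": mix, "split": split}
+
+ print(parse_clip_name("foa_dev/dev-train-sony/fold3_room21_mix001.wav"))
+ # -> {'fold': 3, 'room': 21, 'mix': 1, 'split': 'dev-train'}
+ ```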
+
+
+ # Reference labels and directions-of-arrival
+
+ For each recording in the development dataset, the labels and DOAs are provided in a plain-text CSV file with the same filename as the recording, in the following format:
+
+     [frame number (int)], [active class index (int)], [source number index (int)], [azimuth (int)], [elevation (int)]
+
+ Frame, class, and source enumeration begins at 0. Frames correspond to a temporal resolution of 100 msec. Azimuth and elevation angles are given in degrees, rounded to the closest integer value, with azimuth and elevation being zero at the front, azimuth $\phi \in [-180^{\circ}, 180^{\circ}]$, and elevation $\theta \in [-90^{\circ}, 90^{\circ}]$. Note that the azimuth angle increases counter-clockwise ($\phi = 90^{\circ}$ at the left).
+
+ The source index is a unique integer for each source in the scene, and it is provided only as additional information. Note that each unique actor gets assigned one such identifier, but not individual events produced by the same actor; e.g. a _clapping_ event and a _laughter_ event produced by the same person have the same identifier. Independent sources that are not actors (e.g. a loudspeaker playing music in the room) get a 0 identifier. Note that source identifier information is only included in the development metadata and is not required to be provided by the participants in their results.
+
+ Overlapping sound events are indicated with duplicate frame numbers, and can belong to a different or the same class. An example sequence could be:
+
+     10, 1, 1, -50, 30
+     11, 1, 1, -50, 30
+     11, 1, 2, 10, -20
+     12, 1, 2, 10, -20
+     13, 1, 2, 10, -20
+     13, 8, 0, -40, 0
+
+ which describes that in frames 10-11, an event of class _male speech_ (_class 1_) belonging to one actor (_source 1_) is active at direction (-50°, 30°). However, at frame 11 a second instance of the same class appears simultaneously at a different direction (10°, -20°), belonging to another actor (_source 2_), while at frame 13 an additional event of class _music_ (_class 8_) appears, belonging to a non-actor source (_source 0_). Frames that contain no sound events are not included in the sequence.
+
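+ The sketch below (an illustrative example; the function name, dictionary keys, and use of NumPy are assumptions) shows one way such a CSV could be read, converting frame indices to seconds with the 100 msec resolution and DOAs to Cartesian unit vectors:
+
+ ```python
+ import csv
+ import numpy as np
+
+ FRAME_HOP_S = 0.1  # annotation frame resolution: 100 msec
+
+ def load_metadata(csv_path):
+     """Read one metadata CSV into a list of event dictionaries."""
+     events = []
+     with open(csv_path, newline="") as f:
+         for frame, cls, src, azi, ele in csv.reader(f):
+             frame, cls, src, azi, ele = map(int, (frame, cls, src, azi, ele))
+             phi, theta = np.deg2rad(azi), np.deg2rad(ele)
+             events.append({
+                 "time_s": frame * FRAME_HOP_S,
+                 "class": cls,
+                 "source": src,
+                 "azimuth_deg": azi,
+                 "elevation_deg": ele,
+                 # Cartesian unit vector (x front, y left, z up), matching the
+                 # counter-clockwise azimuth convention described above.
+                 "doa_xyz": (np.cos(phi) * np.cos(theta),
+                             np.sin(phi) * np.cos(theta),
+                             np.sin(theta)),
+             })
+     return events
+
+ # events = load_metadata("metadata_dev/dev-train-sony/fold3_room21_mix001.csv")
+ ```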
+
+
+ # Task setup
+
+ The dataset is associated with the [DCASE 2022 Challenge](http://dcase.community/challenge2022/). To have consistent reporting of results between participants on the development set, a pre-defined training-testing split is provided. To compare against the challenge baseline and against other participants during the development stage, models should be trained on the training split only, and results should be reported on the testing split only.
+
+ **Note that even though there are two origins of the data, SONY and TAU, the challenge task considers the dataset as a single entity. Hence models should not be trained separately for each of the two origins and tested individually on recordings of each of them. Instead, the recordings of the individual training splits (_dev-train-sony_, _dev-train-tau_) and testing splits (_dev-test-sony_, _dev-test-tau_) should be combined (_dev-train_, _dev-test_), and the models should be trained and evaluated on the respective combined splits.**
+
+ The evaluation part of the dataset will be published here as a new dataset version, a few weeks before the final challenge submission deadline. The additional evaluation files consist of only audio recordings, without any metadata/labels. Participants can decide the training procedure, i.e. the amount of training and validation files in the development dataset, the number of ensemble models, etc., and submit the results of the SELD performance on the evaluation dataset.
+
+
+
+ # File structure
+
+ ```
+ dataset root
+ │   README.md                 this file, markdown-format
+ │   LICENSE                   the license file
+ │
+ └───foa_dev                   Ambisonic format, 24kHz, four channels
+ │   │   dev-train-sony        to be used for training when reporting development set results (SONY recordings)
+ │   │   │   fold3_room21_mix001.wav
+ │   │   │   fold3_room21_mix002.wav
+ │   │   │   ...
+ │   │   │   fold3_room22_mix001.wav
+ │   │   │   fold3_room22_mix002.wav
+ │   │   │   ...
+ │   │   dev-test-sony         to be used for testing when reporting development set results (SONY recordings)
+ │   │   │   fold4_room23_mix001.wav
+ │   │   │   fold4_room23_mix002.wav
+ │   │   │   ...
+ │   │   │   fold4_room24_mix001.wav
+ │   │   │   fold4_room24_mix002.wav
+ │   │   │   ...
+ │   │   dev-train-tau         to be used for training when reporting development set results (TAU recordings)
+ │   │   │   fold3_room4_mix001.wav
+ │   │   │   fold3_room4_mix002.wav
+ │   │   │   ...
+ │   │   │   fold3_room6_mix001.wav
+ │   │   │   fold3_room6_mix002.wav
+ │   │   │   ...
+ │   │   │   fold3_room7_mix001.wav
+ │   │   │   fold3_room7_mix002.wav
+ │   │   │   ...
+ │   │   │   fold3_room9_mix001.wav
+ │   │   │   fold3_room9_mix002.wav
+ │   │   │   ...
+ │   │   dev-test-tau          to be used for testing when reporting development set results (TAU recordings)
+ │   │   │   fold4_room2_mix001.wav
+ │   │   │   fold4_room2_mix002.wav
+ │   │   │   ...
+ │   │   │   fold4_room8_mix001.wav
+ │   │   │   fold4_room8_mix002.wav
+ │   │   │   ...
+ │   │   │   fold4_room10_mix001.wav
+ │   │   │   fold4_room10_mix002.wav
+ │   │   │   ...
+ │
+ └───mic_dev                   Microphone array format, 24kHz, four channels
+ │   │   dev-train-sony        to be used for training when reporting development set results (SONY recordings)
+ │   │   │   fold3_room21_mix001.wav
+ │   │   │   fold3_room21_mix002.wav
+ │   │   │   ...
+ │   │   │   fold3_room22_mix001.wav
+ │   │   │   fold3_room22_mix002.wav
+ │   │   │   ...
+ │   │   dev-test-sony         to be used for testing when reporting development set results (SONY recordings)
+ │   │   │   fold4_room23_mix001.wav
+ │   │   │   fold4_room23_mix002.wav
+ │   │   │   ...
+ │   │   │   fold4_room24_mix001.wav
+ │   │   │   fold4_room24_mix002.wav
+ │   │   │   ...
+ │   │   dev-train-tau         to be used for training when reporting development set results (TAU recordings)
+ │   │   │   fold3_room4_mix001.wav
+ │   │   │   fold3_room4_mix002.wav
+ │   │   │   ...
+ │   │   │   fold3_room6_mix001.wav
+ │   │   │   fold3_room6_mix002.wav
+ │   │   │   ...
+ │   │   │   fold3_room7_mix001.wav
+ │   │   │   fold3_room7_mix002.wav
+ │   │   │   ...
+ │   │   │   fold3_room9_mix001.wav
+ │   │   │   fold3_room9_mix002.wav
+ │   │   │   ...
+ │   │   dev-test-tau          to be used for testing when reporting development set results (TAU recordings)
+ │   │   │   fold4_room2_mix001.wav
+ │   │   │   fold4_room2_mix002.wav
+ │   │   │   ...
+ │   │   │   fold4_room8_mix001.wav
+ │   │   │   fold4_room8_mix002.wav
+ │   │   │   ...
+ │   │   │   fold4_room10_mix001.wav
+ │   │   │   fold4_room10_mix002.wav
+ │   │   │   ...
+ │
+ └───metadata_dev              `csv` format, 600 files
+ │   │   dev-train-sony        to be used for training when reporting development set results (SONY recordings)
+ │   │   │   fold3_room21_mix001.csv
+ │   │   │   fold3_room21_mix002.csv
+ │   │   │   ...
+ │   │   │   fold3_room22_mix001.csv
+ │   │   │   fold3_room22_mix002.csv
+ │   │   │   ...
+ │   │   dev-test-sony         to be used for testing when reporting development set results (SONY recordings)
+ │   │   │   fold4_room23_mix001.csv
+ │   │   │   fold4_room23_mix002.csv
+ │   │   │   ...
+ │   │   │   fold4_room24_mix001.csv
+ │   │   │   fold4_room24_mix002.csv
+ │   │   │   ...
+ │   │   dev-train-tau         to be used for training when reporting development set results (TAU recordings)
+ │   │   │   fold3_room4_mix001.csv
+ │   │   │   fold3_room4_mix002.csv
+ │   │   │   ...
+ │   │   │   fold3_room6_mix001.csv
+ │   │   │   fold3_room6_mix002.csv
+ │   │   │   ...
+ │   │   │   fold3_room7_mix001.csv
+ │   │   │   fold3_room7_mix002.csv
+ │   │   │   ...
+ │   │   │   fold3_room9_mix001.csv
+ │   │   │   fold3_room9_mix002.csv
+ │   │   │   ...
+ │   │   dev-test-tau          to be used for testing when reporting development set results (TAU recordings)
+ │   │   │   fold4_room2_mix001.csv
+ │   │   │   fold4_room2_mix002.csv
+ │   │   │   ...
+ │   │   │   fold4_room8_mix001.csv
+ │   │   │   fold4_room8_mix002.csv
+ │   │   │   ...
+ │   │   │   fold4_room10_mix001.csv
+ │   │   │   fold4_room10_mix002.csv
+ │   │   │   ...
+
+
+ ```
+ # Download
+
+ git clone
+
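+ Alternatively, since the repository ships a `datasets` loading script (`dcase22_task3.py`), the data could eventually be loaded with the Hugging Face `datasets` library once that script is fully implemented. This is only a sketch; the repository id below is an assumption based on this page and may differ:
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed repository id (user "Fhrozen", script "dcase22_task3.py").
+ dataset = load_dataset("Fhrozen/dcase22_task3")
+ print(dataset)
+ ```
+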
+ # Example application
+
+ An implementation of a trainable convolutional recurrent neural network performing joint SELD, trained and evaluated with this dataset, is provided [here](https://github.com/sharathadavanne/seld-dcase2022). This implementation will serve as the baseline method in the DCASE 2022 Sound Event Localization and Detection Task.
+
+ # License
+
+ This dataset is licensed under the [MIT](https://opensource.org/licenses/MIT) license.
dcase22_task3.py ADDED
@@ -0,0 +1,20 @@
+ # coding=utf-8
+
+ # Copyright 2022 Nelson Yalta (Hitachi Ltd.)
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """DCASE 2022 Task 3 (STARSS22): multichannel recordings of real sound scenes with spatiotemporal event annotations, for training and evaluating sound event localization and detection (SELD) systems."""
+
+
+ import datasets
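+
+
+ # NOTE: everything below is only an illustrative sketch of how this loading
+ # script might continue; configuration names and features are assumptions
+ # based on the README above, not an official implementation.
+ _SAMPLING_RATE = 24_000  # recordings are delivered at 24 kHz
+
+
+ class Dcase22Task3(datasets.GeneratorBasedBuilder):
+     """Skeleton builder for the STARSS22 FOA/MIC recordings and metadata."""
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="foa", description="First-order Ambisonics format"),
+         datasets.BuilderConfig(name="mic", description="Tetrahedral microphone array format"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description="STARSS22: Sony-TAu Realistic Spatial Soundscapes 2022",
+             features=datasets.Features(
+                 {
+                     "audio": datasets.Audio(sampling_rate=_SAMPLING_RATE),
+                     "annotation": datasets.Value("string"),  # path to the metadata CSV
+                 }
+             ),
+         )
+
+     def _split_generators(self, dl_manager):
+         # Real download/extraction logic would go here.
+         raise NotImplementedError("Sketch only; see the README for the data layout.")
+
+     def _generate_examples(self, **kwargs):
+         raise NotImplementedError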