Datasets:

Task Categories: audio-classification
Size Categories: n<1K
Language Creators: unknown
Annotations Creators: unknown
Source Datasets: unknown
Licenses: unknown

Fhrozen committed
Commit 2191bab
1 Parent(s): 99ac694

add license and reame

Files changed (2)
  1. LICENSE.txt +16 -0
  2. README.md +195 -1
LICENSE.txt ADDED
Copyright (c) 2022 Tampere University and its licensors
All rights reserved.

Permission is hereby granted, without written agreement and without license or royalty fees, to use and copy the TAU Spatial Room Impulse Response Database (“Work”) described in this document and composed of audio and metadata. This grant is only for experimental and non-commercial purposes, provided that the copyright notice in its entirety appear in all copies of this Work, and the original source of this Work, (Audio Research Group at Tampere University), is acknowledged in any publication that reports research using this Work.

Any commercial use of the Work or any part thereof is strictly prohibited.

Commercial use includes, but is not limited to:
- selling or reproducing the Work
- selling or distributing the results or content achieved by use of the Work
- providing services by using the Work.

IN NO EVENT SHALL TAMPERE UNIVERSITY OR ITS LICENSORS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OR ITS LICENSORS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

TAMPERE UNIVERSITY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIM ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND TAMPERE UNIVERSITY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
README.md CHANGED
The YAML front matter previously contained only `license: other`; it now reads:

---
license: unknown
annotations_creators:
- unknown
language_creators:
- unknown
size_categories:
- n<1K
source_datasets:
- unknown
task_categories:
- audio-classification
task_ids:
- other-audio-slot-filling
---

# TAU Spatial Room Impulse Response Database (TAU-SRIR DB)

## Important

**This is a copy of the original Zenodo dataset**

## Description

[Audio Research Group / Tampere University](https://webpages.tuni.fi/arg/)

AUTHORS

**Tampere University**

- Archontis Politis ([contact](mailto:archontis.politis@tuni.fi), [profile](https://scholar.google.fi/citations?user=DuCqB3sAAAAJ&hl=en))
- Sharath Adavanne ([contact](mailto:sharath.adavanne@tuni.fi), [profile](https://www.aane.in))
- Tuomas Virtanen ([contact](mailto:tuomas.virtanen@tuni.fi), [profile](https://homepages.tuni.fi/tuomas.virtanen/))

**Data Collection 2019-2020**

- Archontis Politis
- Aapo Hakala
- Ali Gohar

**Data Collection 2017-2018**

- Sharath Adavanne
- Aapo Hakala
- Eemi Fagerlund
- Aino Koskimies

The **TAU Spatial Room Impulse Response Database (TAU-SRIR DB)** contains spatial room impulse responses (SRIRs) captured in various spaces of Tampere University (TAU), Finland, for a fixed receiver position and multiple source positions per room, along with separate recordings of spatial ambient noise captured at the same recording point. The dataset is intended for emulating spatial multichannel recordings for the evaluation and/or training of multichannel processing algorithms in realistic reverberant conditions and over multiple rooms. The major distinguishing properties of the database compared to other room impulse response databases are:

- Capture in a high-resolution multichannel format (32 channels), from which multiple more limited, application-specific formats can be derived (e.g. tetrahedral array, circular array, first-order Ambisonics, higher-order Ambisonics, binaural).
- Extraction of densely spaced SRIRs along measurement trajectories, allowing emulation of moving-source scenarios.
- Multiple source distances, azimuths, and elevations from the receiver per room, allowing emulation of complex configurations for multi-source methods.
- Multiple rooms, allowing evaluation of methods under various acoustic conditions, and training of methods with the aim of generalization to different rooms.

The RIRs were collected by staff of TAU between 12/2017 - 06/2018, and between 11/2019 - 1/2020. The data collection received funding from the European Research Council, grant agreement [637422 EVERYSOUND](https://cordis.europa.eu/project/id/637422).
[![ERC](https://erc.europa.eu/sites/default/files/content/erc_banner-horizontal.jpg "ERC")](https://erc.europa.eu/)

> **NOTE**: This database is a work in progress. We intend to publish additional rooms, additional formats, and potentially higher-fidelity versions of the captured responses in the near future, as new versions of the database in this repository.

## Report and reference

A compact description of the dataset, recording setup, recording procedure, and extraction can be found in:

> Politis, Archontis, Adavanne, Sharath, & Virtanen, Tuomas (2020). **A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection**. In _Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)_, Tokyo, Japan.

available [here](https://dcase.community/documents/workshop2020/proceedings/DCASE2020Workshop_Politis_88.pdf). A more detailed report focusing specifically on the dataset collection and properties will follow.

## Aim

The dataset can be used for generating multichannel or monophonic mixtures for testing or training of methods under realistic reverberation conditions, related to e.g. multichannel speech enhancement, acoustic scene analysis, and machine listening, among others. It is especially suitable for the following application scenarios:

- monophonic and multichannel reverberant single- or multi-source speech in multi-room reverberant conditions,
- monophonic and multichannel polyphonic sound events in multi-room reverberant conditions,
- single-source and multi-source localization in multi-room reverberant conditions, in static or dynamic scenarios,
- single-source and multi-source tracking in multi-room reverberant conditions, in static or dynamic scenarios,
- sound event localization and detection in multi-room reverberant conditions, in static or dynamic scenarios.

## Specifications

The SRIRs were captured using an [Eigenmike](https://mhacoustics.com/products) spherical microphone array. A [Genelec G Three loudspeaker](https://www.genelec.com/g-three) was used to play back a maximum length sequence (MLS) around the Eigenmike. The SRIRs were obtained in the STFT domain using a least-squares regression between the known measurement signal (MLS) and the far-field recording, independently at each frequency. In this version of the dataset the SRIRs and ambient noise are downsampled to 24 kHz for compactness.
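The per-frequency least-squares idea can be sketched as follows. This is not the dataset's actual extraction code, just a minimal illustration of estimating a transfer function from a known excitation and a recording, independently at each STFT bin; here a pure delay stands in for the room, and white noise stands in for the MLS.

```python
import numpy as np
from scipy.signal import stft

def estimate_rir(x, y, fs, nperseg=512, rir_len=256):
    """Per-frequency least-squares transfer-function estimate.

    For each bin f, H[f] minimizes sum_t |Y[f,t] - H[f] X[f,t]|^2,
    whose closed-form solution is
    H[f] = sum_t Y[f,t] conj(X[f,t]) / sum_t |X[f,t]|^2.
    """
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    _, _, Y = stft(y, fs=fs, nperseg=nperseg)
    H = (Y * np.conj(X)).sum(axis=1) / (np.abs(X) ** 2).sum(axis=1)
    return np.fft.irfft(H)[:rir_len]  # back to an impulse response

# Toy check: the "room" is a pure 5-sample delay of the excitation.
rng = np.random.default_rng(0)
x = rng.standard_normal(24000)   # stand-in for the MLS excitation
y = np.roll(x, 5)                # "recording" = delayed excitation
h = estimate_rir(x, y, fs=24000)
print(np.argmax(np.abs(h)))      # peak should sit at (or next to) sample 5
```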

The currently published SRIR set was recorded at nine different indoor locations on the Tampere University campus at Hervanta, Finland. Additionally, 30 minutes of ambient noise recordings were collected at the same locations with the IR recording setup unchanged. SRIR directions and distances differ from room to room. Possible azimuths span the whole range of $\phi\in[-180,180)$ degrees, while the elevations span approximately the range $\theta\in[-45,45]$ degrees. The currently shared measured spaces are as follows:

1. Large open space in an underground bomb shelter, with plastic-coated floor and rock walls. Ventilation noise.
2. Large open gym space. Ambience of people using weights and gym equipment in adjacent rooms.
3. Small classroom (PB132) with group work tables and carpet flooring. Ventilation noise.
4. Meeting room (PC226) with hard floor and partially glass walls. Ventilation noise.
5. Lecture hall (SA203) with inclined floor and rows of desks. Ventilation noise.
6. Small classroom (SC203) with group work tables and carpet flooring. Ventilation noise.
7. Large classroom (SE203) with hard floor and rows of desks. Ventilation noise.
8. Lecture hall (TB103) with inclined floor and rows of desks. Ventilation noise.
9. Meeting room (TC352) with hard floor and partially glass walls. Ventilation noise.

The measurement trajectories were organized in groups, with each group specified by a circular or linear trace on the floor at a certain distance (range) from the z-axis of the microphone. For circular trajectories two ranges were measured, a _close_ and a _far_ one, except in room TC352, where the same range was measured twice but with a different furniture configuration and open or closed doors. For linear trajectories two ranges were likewise measured, _close_ and _far_, but with linear paths on either side of the array, resulting in 4 unique trajectory groups; the exception is room SA203, where 3 ranges were measured, resulting in 6 trajectory groups. Linear trajectory groups in the same room are always parallel to each other.

Each trajectory group had multiple measurement trajectories, following the same floor path but with the source at different heights.
The SRIRs are extracted from the noise recordings of the slowly moving source across those trajectories, at an angular spacing of approximately 1 degree as seen from the microphone. This extraction scheme, rather than extracting SRIRs at equally spaced points along the path (e.g. every 20 cm), was found more practical for synthesis purposes, as it makes emulation of moving sources at an approximately constant angular speed easier.

The following table summarizes the above properties for the currently available rooms:

| | Room name | Room type | Traj. type | # ranges | # trajectory groups | # heights/group | # trajectories (total) | # RIRs/DOAs |
|---|--------------------------|----------------------------|------------|-------------|-----------------------|---------------------|------------------------|-------------|
| 1 | Bomb shelter | Complex/semi-open | Circular | 2 | 2 | 9 | 18 | 6480 |
| 2 | Gym | Rectangular/large | Circular | 2 | 2 | 9 | 18 | 6480 |
| 3 | PB132 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 4 | PC226 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 5 | SA203 Lecture hall | Trapezoidal/large | Linear | 3 | 6 | 3 | 18 | 1594 |
| 6 | SC203 Classroom | Rectangular/medium | Linear | 2 | 4 | 5 | 20 | 1592 |
| 7 | SE203 Classroom | Rectangular/large | Linear | 2 | 4 | 4 | 16 | 1760 |
| 8 | TB103 Classroom | Trapezoidal/large | Linear | 2 | 4 | 3 | 12 | 1184 |
| 9 | TC352 Meeting room | Rectangular/small | Circular | 1 | 2 | 9 | 18 | 6480 |

More details on the trajectory geometries can be found in the database info file (`measinfo.mat`).

## Recording formats

The array responses of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) model the directional response of each channel to a source incident from a direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$.

**For the first-order Ambisonics (FOA):**

\begin{eqnarray}
H_1(\phi, \theta, f) &=& 1 \\
H_2(\phi, \theta, f) &=& \sin(\phi) \cos(\theta) \\
H_3(\phi, \theta, f) &=& \sin(\theta) \\
H_4(\phi, \theta, f) &=& \cos(\phi) \cos(\theta)
\end{eqnarray}

The FOA format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent; this holds true up to around 9 kHz with this specific microphone array, while the actual encoded responses gradually deviate from the ideal ones above at higher frequencies. Routines that compute the matrix of encoding filters for spherical and general arrays, based on theoretical array models or measurements, can be found [here](https://github.com/polarch/Spherical-Array-Processing).
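As a quick sanity check of the formulas above, the ideal frequency-independent FOA steering vector can be evaluated directly (a minimal sketch; angles in radians, channel order $[H_1, H_2, H_3, H_4]$ as in the equations):

```python
import numpy as np

def foa_steering(azi, ele):
    """Ideal FOA response [H1..H4] for a DOA at azimuth `azi`, elevation `ele`."""
    return np.array([
        1.0,                        # H1: omnidirectional
        np.sin(azi) * np.cos(ele),  # H2
        np.sin(ele),                # H3
        np.cos(azi) * np.cos(ele),  # H4
    ])

print(foa_steering(0.0, 0.0))        # front: [1, 0, 0, 1]
print(foa_steering(np.pi / 2, 0.0))  # 90 deg to the left: ~[1, 1, 0, 0]
```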

**For the tetrahedral microphone array (MIC):**

The four microphones have the following positions, in spherical coordinates $(\phi, \theta, r)$:

\begin{eqnarray}
M1: &\quad(&45^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber\\
M2: &\quad(&-45^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M3: &\quad(&135^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M4: &\quad(&-135^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber
\end{eqnarray}

Since the microphones are mounted on an acoustically hard spherical baffle, an analytical expression for the directional array response is given by the expansion:
\begin{equation}
H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m))
\end{equation}

where $m$ is the channel number, $(\phi_m, \theta_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$ m is the array radius, $c = 343$ m/s is the speed of sound, $\cos(\gamma_m)$ is the cosine of the angle between the microphone position and the DOA, $P_n$ is the unnormalized Legendre polynomial of degree $n$, and $h_n'^{(2)}$ is the derivative, with respect to the argument, of the spherical Hankel function of the second kind. The expansion is limited to 30 terms, which gives negligible modeling error up to 20 kHz. Example routines that generate directional frequency and impulse responses of arrays based on the above formula can be found [here](https://github.com/polarch/Array-Response-Simulator).
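The expansion above can be evaluated numerically with `scipy.special` (a sketch, not the linked routines): since $h_n^{(2)} = j_n - i\,y_n$, its derivative follows from `spherical_jn`/`spherical_yn` with `derivative=True`. As a sanity check, at low frequencies the rigid baffle scatters little, so the magnitude response should approach the unit incident pressure.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

R, C = 0.042, 343.0  # array radius [m], speed of sound [m/s]

def sphere_response(f, cos_gamma, n_terms=30):
    """Directional response of a microphone on a rigid spherical baffle."""
    x = 2 * np.pi * f * R / C  # omega * R / c
    H = 0.0 + 0.0j
    for n in range(n_terms + 1):
        # derivative of the spherical Hankel function of the second kind
        dh2 = (spherical_jn(n, x, derivative=True)
               - 1j * spherical_yn(n, x, derivative=True))
        H += (1j ** (n - 1)) / dh2 * (2 * n + 1) * eval_legendre(n, cos_gamma)
    return H / x ** 2

# Low-frequency limit: |H| should be close to 1 for any incidence angle.
print(abs(sphere_response(100.0, 1.0)))
```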

## Reference directions-of-arrival

For each RIR extracted along a measurement trajectory there is an associated direction-of-arrival (DOA), which can be used as the reference direction for a sound source spatialized with that RIR, for training or evaluation purposes. The DOAs were determined acoustically from the extracted RIRs by windowing the direct-sound part and applying a broadband version of the MUSIC localization algorithm to the windowed multichannel signal.

The DOAs are provided as Cartesian components [x, y, z] of unit-length vectors.
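Converting between these unit vectors and azimuth/elevation angles follows the usual convention of the steering-vector formulas above (x toward azimuth 0, z up). A small helper, assuming angles in radians:

```python
import numpy as np

def doa_to_xyz(azi, ele):
    """Azimuth/elevation (radians) -> Cartesian unit vector [x, y, z]."""
    return np.array([np.cos(azi) * np.cos(ele),
                     np.sin(azi) * np.cos(ele),
                     np.sin(ele)])

def xyz_to_doa(v):
    """Cartesian unit vector -> (azimuth, elevation) in radians."""
    x, y, z = v
    return np.arctan2(y, x), np.arcsin(np.clip(z, -1.0, 1.0))

v = doa_to_xyz(np.pi / 4, np.pi / 6)
azi, ele = xyz_to_doa(v)
print(np.round([azi, ele], 6))  # recovers (pi/4, pi/6)
```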

## Scene generator

A set of routines, here termed the scene generator, is shared alongside the data; it can spatialize a bank of sound samples using the SRIRs and noise recordings of this library to emulate scenes in the two target formats. The code is the same as that used to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset, and has been ported to Python from the original version written in Matlab.

The generator can be found [**here**](https://github.com/danielkrause/DCASE2022-data-generator), along with more details on its use.

The generator is currently set up to work with the [NIGENS](https://zenodo.org/record/2535878) and [FSD50K](https://zenodo.org/record/4060432) sound event sample databases, but additional sample banks can be added with small modifications.

The dataset together with the generator has been used by the authors in the following public challenges:

- [DCASE 2019 Challenge Task 3](https://dcase.community/challenge2019/task-sound-event-localization-and-detection), to generate the **TAU Spatial Sound Events 2019** dataset ([development](https://doi.org/10.5281/zenodo.2599196)/[evaluation](https://doi.org/10.5281/zenodo.3377088))
- [DCASE 2020 Challenge Task 3](https://dcase.community/challenge2020/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2020**](https://doi.org/10.5281/zenodo.4064792) dataset
- [DCASE 2021 Challenge Task 3](https://dcase.community/challenge2021/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset
- [DCASE 2022 Challenge Task 3](https://dcase.community/challenge2022/task-sound-event-localization-and-detection), to generate additional [SELD synthetic mixtures for training the task baseline](https://doi.org/10.5281/zenodo.6406873)

> **NOTE**: The current version of the generator is a work in progress, with some code being quite "rough". If something does not work as intended or it is not clear what certain parts do, please contact [daniel.krause@tuni.fi](mailto:daniel.krause@tuni.fi) or [archontis.politis@tuni.fi](mailto:archontis.politis@tuni.fi).

## Dataset structure

The dataset contains a folder of SRIRs (`TAU-SRIR_DB`), with all the SRIRs for a room in a single _mat_ file, e.g. `rirs_09_tb103.mat`. This specific room had 4 trajectory groups measured at 3 different heights, hence the mat file contains a `rirs` array of 4x3 structures, each with the fields `mic` and `foa`. Selecting e.g. the 2nd trajectory and 3rd height with `rirs(2,3)` returns `mic` and `foa` fields, each holding an array of size `[7200x4x114]`. The array contains the SRIRs for the specific format, arranged as `[samples x channels x DOAs]`, meaning that 300 ms long (7200 samples at 24 kHz), 4-channel RIRs are extracted at 114 positions along that specific trajectory.
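These struct arrays can also be read from Python via `scipy.io.loadmat`. The sketch below only mimics the layout described above with a toy in-memory file and much smaller shapes than the real `[7200x4x114]` data; the field names (`rirs`, `mic`, `foa`) are taken from the description, and the real files should be indexed the same way.

```python
import io
import numpy as np
from scipy.io import loadmat, savemat

# Build a toy 'rirs' struct array (2 trajectory groups x 2 heights) standing
# in for a file such as rirs_09_tb103.mat, with tiny placeholder shapes.
toy = np.empty((2, 2), dtype=[('mic', 'O'), ('foa', 'O')])
for i in range(2):
    for j in range(2):
        toy[i, j] = (np.zeros((720, 4, 11)), np.zeros((720, 4, 11)))

buf = io.BytesIO()
savemat(buf, {'rirs': toy})
buf.seek(0)

data = loadmat(buf, struct_as_record=False, squeeze_me=True)
traj, height = 1, 2                           # MATLAB's rirs(1,2) ...
mic = data['rirs'][traj - 1, height - 1].mic  # ... is 0-based in Python
print(mic.shape)                              # [samples x channels x DOAs]
```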

The file `rirdata.mat` contains some general information such as the sample rate, format specifications, and, most importantly, the DOAs of every extracted SRIR. Those can be found in the `rirdata.room` field, which is itself an array of 9 structures, one per room. Checking for example `rirdata.room(8)` returns the name of the specific room (_tb103_), the year the measurements were done, the numbers of SRIRs extracted for each trajectory, and finally the DOAs of the extracted SRIRs. The DOAs of a certain trajectory can be retrieved as e.g. `rirdata.room(8).rirs(2,3).doa_xyz`, which returns an array of size `[114x3]`. These are the DOAs of the 114 SRIRs retrieved in the previous step for the 2nd trajectory, 3rd source height, of room `TB103`.

The file `measinfo.mat` contains measurement and recording information for each room. Those details are the name of each room, its dimensions for rectangular or trapezoidal shapes, start and end positions of the linear trajectories or distances from the center for the circular ones, the source heights for each trajectory group, the target formats, the trajectory type, the recording device, the A-weighted ambient sound pressure level, and the maximum and minimum A-weighted sound pressure levels of the measurement noise signal. Coordinates are defined with respect to the origin at the base of the microphone. Based on the information included in `measinfo.mat`, one can plot a 3D arrangement of the trajectories around the microphone; keep in mind, though, that these would be the ideal intended circular or linear trajectories, while the actual DOAs obtained from the acoustic analysis deviate somewhat around those ideal paths.

Finally, the dataset contains a folder of spatial ambient noise recordings (`TAU-SNoise_DB`), with one subfolder per room holding two audio recordings of the spatial ambience, one for each format, FOA or MIC. The recordings vary in length between rooms, ranging from about 20 to 30 minutes. Users of the dataset can segment these recordings and add them to spatialized sound samples at desired SNRs, or mix different segments to augment the ambience beyond the original recording duration. Such a use case is demonstrated in the scene generator examples.
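Mixing a segment of the ambience into a spatialized signal at a desired SNR amounts to a simple power-based scaling of the noise. A sketch (not part of the dataset tooling; both inputs as `[samples x channels]` arrays):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the signal-to-noise ratio is `snr_db`, then mix.

    Power is averaged over all samples and channels of each input.
    """
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + gain * noise

rng = np.random.default_rng(1)
sig = rng.standard_normal((24000, 4))          # spatialized sample (toy)
noise = 3.0 * rng.standard_normal((24000, 4))  # ambience segment (toy)
mix = mix_at_snr(sig, noise, snr_db=10.0)

resid = mix - sig
print(10 * np.log10(np.mean(sig ** 2) / np.mean(resid ** 2)))
# -> 10.0 (the gain enforces the power ratio exactly, up to float precision)
```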

## Download

The files `TAU-SRIR_DB.z01`, ..., `TAU-SRIR_DB.zip` contain the SRIRs and measurement info files.
The files `TAU-SNoise_DB.z01`, ..., `TAU-SNoise_DB.zip` contain the ambient noise recordings.

Download the zip files and use your preferred compression tool to unzip these split zip files. To extract a split zip archive (named as zip, z01, z02, ...), you could use, for example, the following syntax in a Linux or macOS terminal:

Combine the split archive into a single archive:
> zip -s 0 split.zip --out single.zip

Extract the single archive using unzip:
> unzip single.zip

## License

The database is published under a custom **open non-commercial with attribution** license. It can be found in the `LICENSE.txt` file that accompanies the data.