Commit 9ef8275, committed by jhauret (zinc75), 0 parent(s):

Super-squash branch 'main' using huggingface_hub

Co-authored-by: zinc75 <zinc75@users.noreply.huggingface.co>

Files changed (2):
  1. .gitattributes +55 -0
  2. README.md +615 -0
.gitattributes ADDED
@@ -0,0 +1,55 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,615 @@
---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- fr
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- audio-to-audio
- automatic-speech-recognition
- audio-classification
- text-to-speech
task_ids:
- speaker-identification
pretty_name: VibraVox
dataset_info:
- config_name: speech_clean
  features:
  - name: audio.headset_mic
    dtype: audio
  - name: audio.forehead_accelerometer
    dtype: audio
  - name: audio.soft_in_ear_mic
    dtype: audio
  - name: audio.rigid_in_ear_mic
    dtype: audio
  - name: audio.temple_vibration_pickup
    dtype: audio
  - name: audio.laryngophone
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: sentence_id
    dtype: int64
  - name: duration
    dtype: float64
  - name: raw_text
    dtype: string
  - name: normalized_text
    dtype: string
  - name: phonemized_text
    dtype: string
  splits:
  - name: train
    num_bytes: 117568373033.0
    num_examples: 21800
  - name: validation
    num_bytes: 14971040212.4
    num_examples: 2800
  - name: test
    num_bytes: 17201168816.504
    num_examples: 3209
  download_size: 149986969306
  dataset_size: 149740582061.904
- config_name: speech_noisy
  features:
  - name: audio.headset_mic
    dtype: audio
  - name: audio.forehead_accelerometer
    dtype: audio
  - name: audio.soft_in_ear_mic
    dtype: audio
  - name: audio.rigid_in_ear_mic
    dtype: audio
  - name: audio.temple_vibration_pickup
    dtype: audio
  - name: audio.laryngophone
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: sentence_id
    dtype: int64
  - name: duration
    dtype: float64
  - name: raw_text
    dtype: string
  - name: normalized_text
    dtype: string
  - name: phonemized_text
    dtype: string
  splits:
  - name: train
    num_bytes: 7088359015.472
    num_examples: 1242
  - name: validation
    num_bytes: 864199678.0
    num_examples: 148
  - name: test
    num_bytes: 1035602630.0
    num_examples: 181
  download_size: 8952630385
  dataset_size: 8988161323.472
- config_name: speechless_clean
  features:
  - name: audio.headset_mic
    dtype: audio
  - name: audio.forehead_accelerometer
    dtype: audio
  - name: audio.soft_in_ear_mic
    dtype: audio
  - name: audio.rigid_in_ear_mic
    dtype: audio
  - name: audio.temple_vibration_pickup
    dtype: audio
  - name: audio.laryngophone
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 9535038216.0
    num_examples: 153
  - name: validation
    num_bytes: 1184152856.0
    num_examples: 19
  - name: test
    num_bytes: 1308799894.0
    num_examples: 21
  download_size: 10936768066
  dataset_size: 12027990966.0
- config_name: speechless_noisy
  features:
  - name: audio.headset_mic
    dtype: audio
  - name: audio.forehead_accelerometer
    dtype: audio
  - name: audio.soft_in_ear_mic
    dtype: audio
  - name: audio.rigid_in_ear_mic
    dtype: audio
  - name: audio.temple_vibration_pickup
    dtype: audio
  - name: audio.laryngophone
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 25386889944.0
    num_examples: 153
  - name: validation
    num_bytes: 3152688128.0
    num_examples: 19
  - name: test
    num_bytes: 3484450558.0
    num_examples: 21
  download_size: 31701691011
  dataset_size: 32024028630.0
configs:
- config_name: speech_clean
  data_files:
  - split: train
    path: speech_clean/train-*
  - split: validation
    path: speech_clean/validation-*
  - split: test
    path: speech_clean/test-*
- config_name: speech_noisy
  data_files:
  - split: train
    path: speech_noisy/train-*
  - split: validation
    path: speech_noisy/validation-*
  - split: test
    path: speech_noisy/test-*
- config_name: speechless_clean
  data_files:
  - split: train
    path: speechless_clean/train-*
  - split: validation
    path: speechless_clean/validation-*
  - split: test
    path: speechless_clean/test-*
- config_name: speechless_noisy
  data_files:
  - split: train
    path: speechless_noisy/train-*
  - split: validation
    path: speechless_noisy/validation-*
  - split: test
    path: speechless_noisy/test-*
---

# Dataset Card for VibraVox

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65302a613ecbe51d6a6ddcec/zhB1fh-c0pjlj-Tr4Vpmr.png" style="object-fit:contain; width:280px; height:280px;" >
</p>

---

## DATASET SUMMARY

The [VibraVox dataset](https://vibravox.cnam.fr) is a general-purpose audio dataset of French speech captured with body-conduction transducers.
This dataset can be used for various audio machine learning tasks:
- **Automatic Speech Recognition (ASR)** (speech-to-text, speech-to-phoneme)
- **Audio Bandwidth Extension (BWE)**
- **Speaker Verification (SPKV)** / identification
- **Voice cloning**
- etc.

### Dataset usage

VibraVox contains 4 subsets, corresponding to different situations tailored for specific tasks. To load a specific subset, simply use the following command (`subset` can be any of `"speech_clean"`, `"speech_noisy"`, `"speechless_clean"`, or `"speechless_noisy"`):

```python
from datasets import load_dataset
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset)
```

The dataset is also compatible with the `streaming` mode:

```python
from datasets import load_dataset
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset, streaming=True)
```

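For instance, the first example of a streamed split can be inspected as follows (a minimal sketch assuming a recent version of `datasets`; the printing choices are purely illustrative, while the column names come from the dataset card below):

```python
from datasets import load_dataset

# Stream the speech_clean subset without downloading the full archives
vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", streaming=True)

# Grab the first example of the train split
sample = next(iter(vibravox["train"]))

# Each sensor column is a dict with "array" (mono waveform) and "sampling_rate"
headset = sample["audio.headset_mic"]
print(sample["normalized_text"])
print(headset["sampling_rate"], len(headset["array"]) / headset["sampling_rate"], "seconds")
```
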
### Citations, links and details

- **Homepage:** For more information about the project, visit our project page at [https://vibravox.cnam.fr](https://vibravox.cnam.fr)
- **Github repository:** [jhauret/vibravox](https://github.com/jhauret/vibravox): source code for Automatic Speech Recognition, Bandwidth Extension and Speaker Verification using the VibraVox dataset
- **Point of Contact:** [Eric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)
- **Curated by:** AVA Team of the [LMSSC Research Laboratory](https://lmssc.cnam.fr)
- **Funded by:** French [Agence Nationale Pour la Recherche / AHEAD Project](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-5aac4914c7/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=fa352121b44b60bf6a5917180d5205e6)
- **Language:** French
- **License:** Creative Commons Attribution 4.0 (CC BY 4.0)

If you use the VibraVox dataset for research, **please cite this paper**:

```bibtex
@article{jhauret-et-al-2024-vibravox,
  title   = "{V}ibravox: A general purpose dataset of speech captured with body-conduction microphones",
  author  = "Hauret, Julien and Olivier, Malo and Joubaud, Thomas and Langrenne, Christophe and
             Poirée, Sarah and Zimpfer, Véronique and Bavu, Éric",
  journal = "arXiv preprint / TODO : add arXiv reference",
  year    = "2024",
}
```

**and this repository**, which is linked to a DOI:

```bibtex
@misc{cnam-lmssc-2024-vibravox-dataset,
  title        = "{V}ibravox",
  author       = "Hauret, Julien and Olivier, Malo and Langrenne, Christophe and
                  Poirée, Sarah and Bavu, Éric",
  journal      = "Huggingface Datasets repository",
  year         = "2024",
  publisher    = "Huggingface",
  howpublished = {\url{https://huggingface.co/datasets/Cnam-LMSSC/vibravox}},
  doi          = "TODO: add doi"
}
```

---

## SUPPORTED TASKS
<!-- and Leaderboards -->

### Automatic-speech-recognition

- The model is presented with an audio file and asked to transcribe it to written text (either normalized text or phonemized text). The most common evaluation metrics are the word error rate (WER), character error rate (CER), or phoneme error rate (PER); see the sketch after this list.
- **Training code:** An example implementation of the speech-to-phoneme task using [wav2vec2.0](https://arxiv.org/abs/2006.11477) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
- **Trained models:** We also provide trained models for the speech-to-phoneme task for each of the 6 speech sensors of the VibraVox dataset on Huggingface at [Cnam-LMSSC/vibravox_phonemizers](https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers).

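As a quick illustration of these metrics (a minimal sketch, not the evaluation pipeline used in the VibraVox paper; it assumes the third-party `jiwer` package and treats each IPA character as a phoneme, so CER on `phonemized_text` serves as a proxy for PER):

```python
import jiwer

reference = "sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz"    # ground-truth phonemized_text
hypothesis = "sɛt memwa ytiliz lə ʃɑ̃ʒmɑ̃ də fas"    # hypothetical model output

# WER over space-separated tokens, CER over characters
print("WER:", jiwer.wer(reference, hypothesis))
print("PER (as CER):", jiwer.cer(reference, hypothesis))
```
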
### Bandwidth-extension

- Also known as audio super-resolution, this task is required to enhance the audio quality of body-conducted speech. The model is presented with a pair of audio clips (body-conducted speech and the corresponding clean, full-bandwidth airborne speech) and asked to enhance the audio by denoising it and regenerating the mid and high frequencies from the low-frequency content only (a pair-building sketch follows this list).
- **Training code:** An example implementation of this task using [Configurable EBEN](https://ieeexplore.ieee.org/document/10244161) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
- **Trained models:** We also provide trained models for the BWE task for each of the 6 speech sensors of the VibraVox dataset on Huggingface at [Cnam-LMSSC/vibravox_EBEN_bwe_models](https://huggingface.co/Cnam-LMSSC/vibravox_EBEN_bwe_models).
- **BWE-Enhanced dataset:** An EBEN-enhanced version of the `test` splits of the VibraVox dataset, generated using these 6 BWE models, is also available on Huggingface at [Cnam-LMSSC/vibravox_enhanced_by_EBEN](https://huggingface.co/datasets/Cnam-LMSSC/vibravox_enhanced_by_EBEN).

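Since all sensors are recorded simultaneously, (degraded, reference) training pairs can be built directly from any subset. The sketch below is only an illustration: it assumes the rigid in-ear microphone as the degraded input and the headset microphone as the full-bandwidth reference, and uses streaming mode with a recent version of `datasets`:

```python
from datasets import load_dataset

vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train", streaming=True)

for sample in vibravox.take(4):
    degraded = sample["audio.rigid_in_ear_mic"]["array"]   # band-limited, body-conducted input
    reference = sample["audio.headset_mic"]["array"]       # clean airborne target
    # Both waveforms share the same 48 kHz sampling rate and can be fed
    # to a BWE model such as EBEN as an (input, target) pair.
    print(degraded.shape, reference.shape)
```
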
### Speaker-verification

- Given an input audio clip and a reference audio clip of a known speaker, the model's objective is to compare the two clips and verify whether they come from the same individual. This often involves extracting embeddings from a deep neural network trained on a large dataset of voices. The model then measures the similarity between these feature sets using techniques like cosine similarity or a learned distance metric (see the sketch after this list). This task is crucial in applications requiring secure access control, such as biometric authentication systems, where a person's voice acts as a unique identifier.
- **Testing code:** An example implementation of this task using a pretrained [ECAPA2 model](https://arxiv.org/abs/2401.08342) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).

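The decision step of such a system boils down to thresholding a similarity score between two embeddings. A minimal sketch (the embedding extractor is left abstract and the 0.5 threshold is purely illustrative):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(emb_test: np.ndarray, emb_enroll: np.ndarray, threshold: float = 0.5) -> bool:
    # Accept the trial if the two embeddings are close enough in cosine similarity
    return cosine_similarity(emb_test, emb_enroll) >= threshold

# emb_test and emb_enroll would come from an embedding model (e.g. ECAPA2)
# applied to two audio clips captured by the same sensor.
```
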
### Adding your models for supported tasks or contributing new tasks

Feel free to contribute to the [Vibravox Github repository](https://github.com/jhauret/vibravox), following the [contributor guidelines](https://github.com/jhauret/vibravox/blob/main/CONTRIBUTING.md).

---

## DATASET DETAILS

### Dataset Description

VibraVox ([vibʁavɔks]) is a GDPR-compliant dataset scheduled for release in June 2024. It includes speech recorded simultaneously using multiple audio and vibration sensors (from top to bottom in the following figure):

- a forehead miniature vibration sensor (green)
- an in-ear comply foam-embedded microphone (red)
- an in-ear rigid earpiece-embedded microphone (blue)
- a temple vibration pickup (cyan)
- a headset microphone located near the mouth (purple)
- a laryngophone (orange)

The technology and references of each sensor are described and documented in [the dataset creation](#dataset-creation) section and at [https://vibravox.cnam.fr/documentation/hardware/](https://vibravox.cnam.fr/documentation/hardware).

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/P-_IWM3IMED5RBS3Lhydc.png" />
</p>

### Goals

The VibraVox speech corpus has been recorded with 200 participants under various acoustic conditions imposed by a [5th order ambisonics spatialization sphere](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html).

VibraVox aims to serve as a valuable resource for advancing the field of **body-conducted speech analysis** and facilitating the development of **robust communication systems for real-world applications**.

Unlike traditional microphones, which rely on airborne sound waves, body-conduction sensors capture speech signals directly from the body, offering advantages in noisy environments by eliminating the capture of ambient noise. Although body-conduction sensors have been available for decades, their limited bandwidth has restricted their widespread usage. However, this technology may now be reaching a wider public for speech capture and communication in noisy environments.

### Data / sensor mapping

Even if the column names in the VibraVox dataset are self-explanatory, here is the mapping, with information on the positioning of each sensor and its technology (a usage sketch follows the table):

| VibraVox dataset column name | Sensor | Location | Technology |
| ------------- | -------------------- | --------------------- | --------------------- |
| `audio.headset_mic` | Headset microphone | Near the mouth | Cardioid electrodynamic microphone |
| `audio.laryngophone` | Laryngophone | Throat / larynx | Piezoelectric sensor |
| `audio.soft_in_ear_mic` | In-ear soft foam-embedded microphone | Right ear canal | Omnidirectional electret condenser microphone |
| `audio.rigid_in_ear_mic` | In-ear rigid earpiece-embedded microphone | Left ear canal | Omnidirectional MEMS microphone |
| `audio.forehead_accelerometer` | Forehead vibration sensor | Frontal bone | One-axis accelerometer |
| `audio.temple_vibration_pickup` | Temple vibration pickup | Zygomatic bone | Figure-of-eight pre-polarized condenser transducer |

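To work with a single sensor, you can keep only the corresponding column and, if your model expects 16 kHz input, resample it on the fly. A minimal sketch (it assumes a recent version of `datasets`; the 16 kHz target and the choice of the laryngophone are illustrative only):

```python
from datasets import load_dataset, Audio

vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test")

# Keep only the laryngophone audio and the text annotations
laryngo = vibravox.select_columns(["audio.laryngophone", "normalized_text", "speaker_id"])

# Decode the audio at 16 kHz instead of the native 48 kHz
laryngo = laryngo.cast_column("audio.laryngophone", Audio(sampling_rate=16000))

print(laryngo[0]["audio.laryngophone"]["sampling_rate"])  # 16000
```
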
---

## DATASET STRUCTURE

### Subsets

Each of the 4 subsets contains **6 columns of audio data**, corresponding to the 5 different body-conduction sensors plus the standard headset microphone.

Recording was carried out simultaneously on all 6 sensors, with **audio files sampled at 48 kHz and encoded as .wav PCM32 files**.

The 4 subsets correspond to:

- **`speech_clean`**: the speaker reads sentences sourced from the French Wikipedia. This subset contains the most data and is intended for training on various tasks.

- **`speech_noisy`**: the speaker reads sentences sourced from the French Wikipedia, in a noisy environment based on ambisonic recordings replayed in a spatialization sphere equipped with 56 loudspeakers surrounding the speaker. It is primarily intended for testing the various systems (speech enhancement, automatic speech recognition, speaker verification) developed on the basis of the `speech_clean` recordings.

- **`speechless_clean`**: the wearers of the devices remain speechless in complete silence, but are free to move their bodies and faces, and can swallow and breathe naturally. These samples can be valuable for tasks such as heart-rate tracking or analyzing the noise properties of the various microphones, but also for generating synthetic datasets with realistic physiological (and sensor-inherent) noise captured by body-conduction sensors.

- **`speechless_noisy`**: the wearers of the devices remain speechless in a noisy environment created using [AudioSet](https://research.google.com/audioset/) noise samples. These samples have been selected from relevant classes, normalized in loudness, pseudo-spatialized, and played from random directions around the participant using a [5th order ambisonic 3D sound spatializer](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html) equipped with 56 loudspeakers. The objective of this subset is to gather background noises that can be combined with the `speech_clean` recordings while maintaining a clean reference. These samples can therefore be used for **realistic data augmentation** with noise captured by body-conduction sensors, including the inherent attenuation of each sensor on different device wearers.

### Splits

All the subsets are available in 3 splits (train, validation and test), with a standard 80 / 10 / 10 repartition and no speaker overlap between splits.

The speakers/participants assigned to a given split are the same across all subsets, which makes it possible to:

- use `speechless_noisy` for data augmentation, for example (see the sketch after this list);
- test models trained on the `speech_clean` train split on the `speech_noisy` test split, without having to worry that a speaker was already seen during training.

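For instance, a noisy body-conducted signal can be simulated by mixing a `speech_clean` clip with a `speechless_noisy` clip from the same sensor. A minimal sketch (the random cropping and the 10 dB signal-to-noise ratio are arbitrary choices, not a prescribed augmentation recipe):

```python
import numpy as np
from datasets import load_dataset

sensor = "audio.forehead_accelerometer"
speech = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train", streaming=True)
noise = load_dataset("Cnam-LMSSC/vibravox", "speechless_noisy", split="train", streaming=True)

clean = next(iter(speech))[sensor]["array"]
background = next(iter(noise))[sensor]["array"]

# Crop a noise segment of the same length and scale it to reach ~10 dB SNR
start = np.random.randint(0, len(background) - len(clean))
segment = background[start:start + len(clean)]
snr_db = 10.0
gain = np.sqrt(np.mean(clean ** 2) / (np.mean(segment ** 2) * 10 ** (snr_db / 10)))
noisy = clean + gain * segment
```
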
### Data Fields

In non-streaming mode (default), the `path` value of each `datasets.Audio` dictionary points to the locally extracted audio file. In streaming mode, the `path` is the relative path of the audio file inside its archive (as files are not downloaded and extracted locally).

**Common data fields for all subsets:**

* `audio.headset_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.forehead_accelerometer` (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
* `audio.soft_in_ear_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.rigid_in_ear_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.temple_vibration_pickup` (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
* `audio.laryngophone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
* `gender` (string) - gender of the speaker (`male` or `female`)
* `speaker_id` (string) - encrypted id of the speaker
* `duration` (float64) - the audio length in seconds.

**Extra data fields for the `speech_clean` and `speech_noisy` subsets:**

For the **speech** subsets, the dataset has columns corresponding to the pronounced sentences, which are absent from the **speechless** subsets:

* `sentence_id` (int) - id of the pronounced sentence
* `raw_text` (string) - audio segment text (cased and with punctuation preserved)
* `normalized_text` (string) - audio segment normalized text (lower-cased, no punctuation, diacritics replaced by the standard 26 letters of the French alphabet, plus 4 additional characters -- é, è, ê and ç -- which hold phonetic significance, and the space character, giving 31 possible characters: ` [' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'ç', 'è', 'é', 'ê'] `).
* `phonemized_text` (string) - audio segment phonemized text, using exclusively the 33 strict French IPA characters

### Phonemes list and tokenizer

- The strict French IPA characters used in VibraVox are: ` [' ', 'a', 'b', 'd', 'e', 'f', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 's', 't', 'u', 'v', 'w', 'y', 'z', 'ø', 'ŋ', 'œ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɡ', 'ɲ', 'ʁ', 'ʃ', 'ʒ', '̃'] `.
- For convenience and research reproducibility, we provide a tokenizer for speech-to-phoneme tasks that corresponds to these phonemes at [https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer](https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer) (see the sketch below).

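A minimal sketch of how this tokenizer could be used to turn a `phonemized_text` string into token ids (it assumes the tokenizer loads through the standard `transformers` auto class, which may differ from the exact class used in the training code):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Cnam-LMSSC/vibravox-phonemes-tokenizer")

phonemized_text = "sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz"
ids = tokenizer(phonemized_text).input_ids

print(ids)
print(tokenizer.decode(ids))
```
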
### Examples of data instances

#### `speech_clean` or `speech_noisy` subsets:

```python
{
  'audio.headset_mic': {
    'path': '02472_headset_mic.wav',
    'array': array([ 0.00045776,  0.00039673,  0.0005188 , ..., -0.00149536,
                    -0.00094604,  0.00036621]),
    'sampling_rate': 48000},
  'audio.forehead_accelerometer': {
    'path': '02472_forehead_accelerometer.wav',
    'array': array([ 0.0010376 , -0.00045776, -0.00085449, ..., -0.00491333,
                    -0.00524902, -0.00302124]),
    'sampling_rate': 48000},
  'audio.soft_in_ear_mic': {
    'path': '02472_soft_in_ear_mic.wav',
    'array': array([-0.06472778, -0.06384277, -0.06292725, ..., -0.02133179,
                    -0.0213623 , -0.02145386]),
    'sampling_rate': 48000},
  'audio.rigid_in_ear_mic': {
    'path': '02472_rigid_in_ear_mic.wav',
    'array': array([-0.01824951, -0.01821899, -0.01812744, ..., -0.00387573,
                    -0.00427246, -0.00439453]),
    'sampling_rate': 48000},
  'audio.temple_vibration_pickup': {
    'path': '02472_temple_vibration_pickup.wav',
    'array': array([-0.0177002 , -0.01791382, -0.01745605, ...,  0.01098633,
                     0.01260376,  0.01220703]),
    'sampling_rate': 48000},
  'audio.laryngophone': {
    'path': '02472_laryngophone.wav',
    'array': array([-2.44140625e-04, -3.05175781e-05,  2.13623047e-04, ...,
                     4.88281250e-04,  4.27246094e-04,  3.66210938e-04]),
    'sampling_rate': 48000},
  'gender': 'female',
  'speaker_id': 'qt4TPMEPwF',
  'sentence_id': 2472,
  'duration': 4.5,
  'raw_text': "Cette mémoire utilise le changement de phase du verre pour enregistrer l'information.",
  'normalized_text': 'cette mémoire utilise le changement de phase du verre pour enregistrer l information',
  'phonemized_text': 'sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz dy vɛʁ puʁ ɑ̃ʁʒistʁe lɛ̃fɔʁmasjɔ̃'
}
```

#### `speechless_clean` or `speechless_noisy` subsets

(thus missing the text-related fields)

```python
{
  'audio.headset_mic': {
    'path': 'jMngOy7BdQ_headset_mic.wav',
    'array': array([-1.92260742e-03, -2.44140625e-03, -2.99072266e-03, ...,
                     0.00000000e+00,  3.05175781e-05, -3.05175781e-05]),
    'sampling_rate': 48000},
  'audio.forehead_accelerometer': {
    'path': 'jMngOy7BdQ_forehead_accelerometer.wav',
    'array': array([-0.0032959 , -0.00259399,  0.00177002, ..., -0.00073242,
                    -0.00076294, -0.0005188 ]),
    'sampling_rate': 48000},
  'audio.soft_in_ear_mic': {
    'path': 'jMngOy7BdQ_soft_in_ear_mic.wav',
    'array': array([0.00653076, 0.00671387, 0.00683594, ..., 0.00045776, 0.00042725,
                    0.00042725]),
    'sampling_rate': 48000},
  'audio.rigid_in_ear_mic': {
    'path': 'jMngOy7BdQ_rigid_in_ear_mic.wav',
    'array': array([ 1.05895996e-02,  1.03759766e-02,  1.05590820e-02, ...,
                     0.00000000e+00, -3.05175781e-05, -9.15527344e-05]),
    'sampling_rate': 48000},
  'audio.temple_vibration_pickup': {
    'path': 'jMngOy7BdQ_temple_vibration_pickup.wav',
    'array': array([-0.00082397, -0.0020752 , -0.0012207 , ..., -0.00738525,
                    -0.00814819, -0.00579834]),
    'sampling_rate': 48000},
  'audio.laryngophone': {
    'path': 'jMngOy7BdQ_laryngophone.wav',
    'array': array([ 0.00000000e+00,  3.05175781e-05,  1.83105469e-04, ...,
                    -6.10351562e-05, -1.22070312e-04, -9.15527344e-05]),
    'sampling_rate': 48000},
  'gender': 'male',
  'speaker_id': 'jMngOy7BdQ',
  'duration': 54.097
}
```


---

## DATA STATISTICS

### Speakers gender balance

To increase the representativeness and inclusivity of the dataset, a deliberate effort was made to recruit a diverse and gender-balanced group of speakers: the overall male/female gender repartition, in terms of number of speakers included in the dataset, is 48.3% / 51.6% for all subsets.

### Speakers age balance

TODO : update values when final dataset is uploaded

| Quantity | Mean | Median | Min | Max |
|-----------------------|-------|--------|-------|--------|
| Age, all speakers (years) | 27.62 | 24.00 | 18.00 | 82.00 |
| Age, male speakers (years) | 31.00 | 27.00 | 18.00 | 82.00 |
| Age, female speakers (years) | 24.50 | 22.00 | 19.00 | 59.00 |


### Audio data

TODO : update values when final dataset is uploaded

| Subset / split | Audio duration | # of audio clips | Download size | # of speakers (M/F) | Gender repartition M/F (in audio duration) |
|:---:|:---:|:---:|:---:|:---:|:---:|
| `speech_clean`/`train` | 6 x 23.5 h | 6 x 18800 | 46.8 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| `speech_clean`/`validation` | 6 x 2.9 h | 6 x 2510 | 6.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| `speech_clean`/`test` | 6 x 2.8 h | 6 x 2787 | 6.8 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| `speech_noisy`/`train` | 6 x 1.1 h | 6 x 845 | 2.2 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| `speech_noisy`/`validation` | 6 x 0.2 h | 6 x 118 | 0.3 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| `speech_noisy`/`test` | 6 x 0.17 h | 6 x 97 | 0.25 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| `speechless_clean`/`train` | 6 x 2.35 h | 6 x 157 | 4.5 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| `speechless_clean`/`validation` | 6 x 0.3 h | 6 x 20 | 0.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| `speechless_clean`/`test` | 6 x 0.28 h | 6 x 19 | 0.5 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| `speechless_noisy`/`train` | 6 x 6.3 h | 6 x 157 | 12.1 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| `speechless_noisy`/`validation` | 6 x 0.8 h | 6 x 20 | 1.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| `speechless_noisy`/`test` | 6 x 0.76 h | 6 x 19 | 1.45 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| **Total** | 6 x 41.5 h | 6 x 25549 | 83.4 GiB | 196 (95 M, 101 F) | 48.3% / 51.6% |

### Audio clip durations

TODO : update values when final dataset is uploaded

| Subset / split | Mean | Median | Max | Min |
|:---:|:---:|:---:|:---:|:---:|
| `speech_clean`/`train` | 4.05 s | 3.96 s | 11.20 s | 0.90 s |
| `speech_clean`/`validation` | 4.24 s | 4.22 s | 8.66 s | 1.18 s |
| `speech_clean`/`test` | 4.05 s | 3.94 s | 9.66 s | 1.12 s |
| `speech_noisy`/`train` | 4.28 s | 4.17 s | 8.48 s | 0.82 s |
| `speech_noisy`/`validation` | 4.62 s | 4.57 s | 7.48 s | 1.16 s |
| `speech_noisy`/`test` | 4.30 s | 4.30 s | 7.94 s | 1.58 s |
| `speechless_clean`/`train` | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| `speechless_clean`/`validation` | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| `speechless_clean`/`test` | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| `speechless_noisy`/`train` | 144.04 s | 144.03 s | 144.05 s | 144.02 s |
| `speechless_noisy`/`validation` | 144.03 s | 144.03 s | 144.04 s | 144.03 s |
| `speechless_noisy`/`test` | 144.04 s | 144.03 s | 144.05 s | 144.03 s |

---

## DATASET CREATION

### Textual source data

The text read by all participants is collected from the French Wikipedia subset of Common Voice ([link1](https://github.com/common-voice/common-voice/blob/main/server/data/fr/wiki-1.fr.txt), [link2](https://github.com/common-voice/common-voice/blob/main/server/data/fr/wiki-2.fr.txt)). We applied some additional filters to these textual datasets in order to create a simplified dataset with a minimum number of tokens and to reduce the uncertainty of the pronunciation of some proper names. We therefore removed all proper names except common first names and the names of French towns. We also removed any utterances that contain numbers, Greek letters, or math symbols, or that are syntactically incorrect.

All lines of the Wikipedia-extracted textual source data have then been phonemized using [bootphon/phonemizer](https://github.com/bootphon/phonemizer) and manually edited to keep only strict French IPA characters.

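A minimal sketch of this phonemization step (the espeak backend and its options are assumptions; the actual pipeline also included manual edits to restrict the output to the strict French IPA characters):

```python
from phonemizer import phonemize

sentence = "Cette mémoire utilise le changement de phase du verre pour enregistrer l'information."

# French phonemization with the espeak-ng backend (requires espeak-ng installed)
phonemes = phonemize(
    sentence,
    language="fr-fr",
    backend="espeak",
    strip=True,
)
print(phonemes)
```
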
### Audio Data Collection

#### Sensor positioning and documentation

| **Sensor** | **Image** | **Transducer** | **Online documentation** |
|:---------------------------|:---------------------|:-------------|:----------------------------------------------------------------------------------------------------------------------|
| Reference headset microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/iVYX1_7wAdZb4oDrc9v6l.png) | Shure WH20 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/airborne/index.html) |
| In-ear comply foam-embedded microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/Uf1VOwx-kxPiYY1oMW5pz.png) | Knowles FG-23329-P07 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/soft_inear/index.html) |
| In-ear rigid earpiece-embedded microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/EBY9dIKFN8GDaDXUuhp7n.png) | Knowles SPH1642HT5H | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/rigid_inear/index.html) |
| Forehead miniature vibration sensor | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/2zHrN-7OpbH-zJTqASZ7J.png) | Knowles BU23173-000 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/forehead/index.html) |
| Temple vibration pickup | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/wAcTQlmzvl0O4kNyA3MnC.png) | AKG C411 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/temple/index.html) |
| Laryngophone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/4SGNSgXYc6hBJcI1cRXY_.png) | iXRadio XVTM822D-D35 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/throat/index.html) |


#### Recorded audio data post-processing

Across the sentences collected from the 200 participants, a small number of audio clips exhibited various shortcomings. Despite researchers monitoring and validating each recording individually, the process was not entirely foolproof: mispronounced sentences, sensors shifting from their initial positions, or more significant microphone malfunctions occasionally occurred. In instances where sensors were functional but not ideally positioned (such as when the participant's ear canal was too small for the rigid in-ear microphone to achieve proper acoustic sealing), we chose to retain samples where the bandwidth was slightly narrower than desired. This decision was made to enhance the robustness of our models against the effects of misplaced sensors.

To address these occasional shortcomings and offer a high-quality dataset, we implemented a series of 3 automatic filters to retain only the best audio from the `speech_clean` subset. We preserved only those sentences where all sensors were in optimal recording condition, adhering to predefined criteria defined in [link to the paper]() : TODO : add link to arxiv paper when uploaded

- The first filter uses a pre-trained ASR model run on the headset microphone data, which makes it possible to address discrepancies between the labeled transcription and the actual pronunciation, ensuring high-quality labels for the speech-to-phoneme task.
- The second filter confirms that the sensor is functioning correctly by verifying that speech exhibits higher energy than silence, thereby identifying potentially unreliable recordings with low vocal energy levels or sensor malfunction.
- The third filter detects sensitivity drift in the sensors, which can occur due to electronic malfunctions or mechanical blockages in the transducer.
- If an audio clip passes all filters, it is not immediately added to the dataset. Instead, VAD-generated timestamps from [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) are used, extending them by 0.3 seconds on both sides (see the sketch after this list). This method helps remove mouse clicks at audio boundaries and ensures the capture of vocal segments without excluding valid speech portions.

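A minimal sketch of this last trimming step (the variable names and the way timestamps are obtained are illustrative; only the 0.3 s padding comes from the description above):

```python
import numpy as np

def trim_with_padding(waveform: np.ndarray, start_s: float, end_s: float,
                      sampling_rate: int = 48000, padding_s: float = 0.3) -> np.ndarray:
    """Keep the vocal segment given by VAD timestamps, padded by 0.3 s on both sides."""
    start = max(0, int((start_s - padding_s) * sampling_rate))
    end = min(len(waveform), int((end_s + padding_s) * sampling_rate))
    return waveform[start:end]

# start_s / end_s would come from whisper-timestamped run on the headset microphone signal
```
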
### Personal and Sensitive Information

The VibraVox dataset does not contain any data that might be considered as personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.).

The `speaker_id` values were generated using the strong Fernet encryption algorithm, followed by the extraction of a subset of the encrypted id, guaranteeing strict anonymisation of the voice recordings while still allowing the dataset maintainers to delete the corresponding data under the right to be forgotten.

A [consent form](https://vibravox.cnam.fr/documentation/consent/index.html) has been signed by each participant in the VibraVox dataset. This consent form has been approved by the Cnam lawyer. All [CNIL](https://www.cnil.fr/en) requirements have been checked, including the right to be forgotten for 50 years.

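A minimal sketch of this kind of anonymisation (the key handling, the identity string and the 10-character truncation are assumptions for illustration only, not the exact procedure used for the dataset):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # kept private by the dataset maintainers
fernet = Fernet(key)

participant_identity = "participant-042"              # hypothetical internal identifier
token = fernet.encrypt(participant_identity.encode())

# A short, URL-safe subset of the encrypted token serves as the public speaker_id
speaker_id = token.decode()[16:26]
print(speaker_id)
```
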
---

## DATASET CARD AUTHORS

[Éric Bavu](https://huggingface.co/zinc75)

### Dataset Card Contact

[Eric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)