---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- fr
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- audio-to-audio
- automatic-speech-recognition
- audio-classification
- text-to-speech
task_ids:
- speaker-identification
pretty_name: Vibravox
viewer: true
dataset_info:
- config_name: speech_clean
  features:
  - name: audio.headset_microphone
    dtype: audio
  - name: audio.forehead_accelerometer
    dtype: audio
  - name: audio.soft_in_ear_microphone
    dtype: audio
  - name: audio.rigid_in_ear_microphone
    dtype: audio
  - name: audio.temple_vibration_pickup
    dtype: audio
  - name: audio.throat_microphone
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: sentence_id
    dtype: int64
  - name: duration
    dtype: float64
  - name: raw_text
    dtype: string
  - name: normalized_text
    dtype: string
  - name: phonemized_text
    dtype: string
  splits:
  - name: train
    num_bytes: 109247789463.0
    num_examples: 20981
  - name: validation
    num_bytes: 12896618986.0
    num_examples: 2523
  - name: test
    num_bytes: 15978915932.0
    num_examples: 3064
  download_size: 136955541722
  dataset_size: 138123324381.0
- config_name: speech_noisy
  features:
  - name: audio.headset_microphone
    dtype: audio
  - name: audio.forehead_accelerometer
    dtype: audio
  - name: audio.soft_in_ear_microphone
    dtype: audio
  - name: audio.rigid_in_ear_microphone
    dtype: audio
  - name: audio.temple_vibration_pickup
    dtype: audio
  - name: audio.throat_microphone
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: sentence_id
    dtype: int64
  - name: duration
    dtype: float64
  - name: raw_text
    dtype: string
  - name: normalized_text
    dtype: string
  - name: phonemized_text
    dtype: string
  splits:
  - name: train
    num_bytes: 6522270562.0
    num_examples: 1220
  - name: validation
    num_bytes: 706141725.0
    num_examples: 132
  - name: test
    num_bytes: 937186370.0
    num_examples: 175
  download_size: 8156941693
  dataset_size: 8165598657.0
- config_name: speechless_clean
  features:
  - name: audio.headset_microphone
    dtype: audio
  - name: audio.forehead_accelerometer
    dtype: audio
  - name: audio.soft_in_ear_microphone
    dtype: audio
  - name: audio.rigid_in_ear_microphone
    dtype: audio
  - name: audio.temple_vibration_pickup
    dtype: audio
  - name: audio.throat_microphone
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 9285823162.0
    num_examples: 149
  - name: validation
    num_bytes: 1121767128.0
    num_examples: 18
  - name: test
    num_bytes: 1308782974.0
    num_examples: 21
  download_size: 10651939843
  dataset_size: 11716373264.0
- config_name: speechless_noisy
  features:
  - name: audio.headset_microphone
    dtype: audio
  - name: audio.forehead_accelerometer
    dtype: audio
  - name: audio.soft_in_ear_microphone
    dtype: audio
  - name: audio.rigid_in_ear_microphone
    dtype: audio
  - name: audio.temple_vibration_pickup
    dtype: audio
  - name: audio.throat_microphone
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 24723250192.0
    num_examples: 149
  - name: validation
    num_bytes: 2986606278.0
    num_examples: 18
  - name: test
    num_bytes: 3484522468.0
    num_examples: 21
  download_size: 30881658818
  dataset_size: 31194378938.0
configs:
- config_name: speech_clean
  data_files:
  - split: train
    path: speech_clean/train-*
  - split: validation
    path: speech_clean/validation-*
  - split: test
    path: speech_clean/test-*
- config_name: speech_noisy
  data_files:
  - split: train
    path: speech_noisy/train-*
  - split: validation
    path: speech_noisy/validation-*
  - split: test
    path: speech_noisy/test-*
- config_name: speechless_clean
  data_files:
  - split: train
    path: speechless_clean/train-*
  - split: validation
    path: speechless_clean/validation-*
  - split: test
    path: speechless_clean/test-*
- config_name: speechless_noisy
  data_files:
  - split: train
    path: speechless_noisy/train-*
  - split: validation
    path: speechless_noisy/validation-*
  - split: test
    path: speechless_noisy/test-*
---



# Dataset Card for VibraVox

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65302a613ecbe51d6a6ddcec/zhB1fh-c0pjlj-Tr4Vpmr.png" style="object-fit:contain; width:280px; height:280px;" >
</p>

--- 

👀 While waiting for the [TooBigContentError issue](https://github.com/huggingface/dataset-viewer/issues/2215) to be resolved by the HuggingFace team, you can explore the dataset viewer of [vibravox-test](https://huggingface.co/datasets/Cnam-LMSSC/vibravox-test)
which has exactly the same architecture.

## DATASET SUMMARY

The [VibraVox dataset](https://vibravox.cnam.fr) is a general-purpose audio dataset of French speech captured with body-conduction transducers.
This dataset can be used for various audio machine learning tasks:
- **Automatic Speech Recognition (ASR)** (speech-to-text, speech-to-phoneme)
- **Audio Bandwidth Extension (BWE)**
- **Speaker Verification (SPKV)** / identification
- **Voice cloning**
- etc.


### Dataset usage

VibraVox contains 4 subsets, corresponding to different situations tailored for specific tasks. To load a specific subset, simply use the following command (```subset``` can be any of ```"speech_clean"```, ```"speech_noisy"```, ```"speechless_clean"``` or ```"speechless_noisy"```):

```python
from datasets import load_dataset
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset)
```

The dataset is also compatible with the `streaming` mode:

```python
from datasets import load_dataset
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset, streaming=True)
```
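In streaming mode, examples are fetched lazily as you iterate. A minimal sketch of consuming a streamed subset (field names follow the schema documented below):

```python
from datasets import load_dataset

# Streaming returns an IterableDatasetDict: nothing is downloaded up front,
# and each example is fetched on demand.
vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", streaming=True)
sample = next(iter(vibravox["train"]))  # first training example

audio = sample["audio.headset_microphone"]  # dict with 'path', 'array', 'sampling_rate'
print(sample["normalized_text"])
print(audio["sampling_rate"], len(audio["array"]))
```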

### Citations, links and details


- **Homepage:** For more information about the project, visit our project page on [https://vibravox.cnam.fr](https://vibravox.cnam.fr)
- **Github repository:** [jhauret/vibravox](https://github.com/jhauret/vibravox) : Source code for ASR, BWE and SPKV tasks using the Vibravox dataset
- **Point of Contact:** [Julien Hauret](https://www.linkedin.com/in/julienhauret/) and [Éric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)
- **Curated by:** [AVA Team](https://lmssc.cnam.fr/fr/recherche/identification-localisation-synthese-de-sources-acoustiques-et-vibratoires) of the [LMSSC Research Laboratory](https://lmssc.cnam.fr)
- **Funded by:** [Agence Nationale Pour la Recherche / AHEAD Project](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-5aac4914c7/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=fa352121b44b60bf6a5917180d5205e6)
- **Language:** French
- **Download size:** 186.64 GB
- **Total audio duration:** 38.31 hours (x6 audio channels)
- **Number of speech utterances:** 28,095
- **License:** Creative Commons Attribution 4.0 (CC BY 4.0)

If you use the Vibravox dataset for research, **please cite this paper**:

```bibtex
@article{jhauret-et-al-2024-vibravox,
      title={{Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors}},
      author={Hauret, Julien and Olivier, Malo and Joubaud, Thomas and Langrenne, Christophe and
        Poir{\'e}e, Sarah and Zimpfer, Véronique and Bavu, {\'E}ric},
      year={2024},
      eprint={2407.11828},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2407.11828}, 
}
```

**and this repository**, which is linked to a DOI:

```bibtex
@misc{cnamlmssc2024vibravoxdataset,
    author={Hauret, Julien and Olivier, Malo and Langrenne, Christophe and
        Poir{\'e}e, Sarah and Bavu, {\'E}ric},
    title        = { {Vibravox} (Revision 7990b7d) },
    year         = 2024,
    url          = { https://huggingface.co/datasets/Cnam-LMSSC/vibravox },
    doi          = { 10.57967/hf/2727 },
    publisher    = { Hugging Face }
}
```

--- 

## SUPPORTED TASKS
<!-- and Leaderboards -->

### Automatic-speech-recognition  

- The model is presented with an audio file and asked to transcribe it to written text (either normalized text or phonemized text). The most common evaluation metrics are the word error rate (WER), character error rate (CER), and phoneme error rate (PER); a minimal metric computation is sketched below.
- **Training code:** An example of implementation for the speech-to-phoneme task using [wav2vec2.0](https://arxiv.org/abs/2006.11477) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
- **Trained models:** We also provide trained models for the speech-to-phoneme task, for each of the 6 speech sensors of the Vibravox dataset, on Hugging Face at [Cnam-LMSSC/vibravox_phonemizers](https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers).
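
For illustration, a minimal sketch of metric computation using the `evaluate` library (not the official Vibravox evaluation code); the PER can be obtained by applying the WER metric to space-separated phoneme strings:

```python
import evaluate

wer_metric = evaluate.load("wer")  # word-level edit distance
cer_metric = evaluate.load("cer")  # character-level edit distance

references = ["cette mémoire utilise le changement de phase du verre"]
predictions = ["cette mémoire utilise le changement de phases du verre"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```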

### Bandwidth-extension

- Also known as audio super-resolution, this task is required to enhance the audio quality of body-conducted speech. The model is presented with a pair of audio clips (body-conducted speech and the corresponding clean, full-bandwidth airborne speech), and asked to enhance the audio by denoising and regenerating mid and high frequencies from the low-frequency content only (see the pairing sketch below).
- **Training code:** An example of implementation of this task using [Configurable EBEN](https://ieeexplore.ieee.org/document/10244161) ([arXiv link](https://arxiv.org/abs/2303.10008)) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
- **Trained models:** We also provide trained models for the BWE task for each of the 6 speech sensors of the Vibravox dataset on Huggingface at [Cnam-LMSSC/vibravox_EBEN_bwe_models](https://huggingface.co/Cnam-LMSSC/vibravox_EBEN_bwe_models).
- **BWE-Enhanced dataset:** An EBEN-enhanced version of the `test` splits of the Vibravox dataset, generated using these 6 BWE models, is also available on Hugging Face at [Cnam-LMSSC/vibravox_enhanced_by_EBEN](https://huggingface.co/datasets/Cnam-LMSSC/vibravox_enhanced_by_EBEN).
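
Because all 6 channels are recorded simultaneously, any body-conduction column can be paired with the headset microphone to build (degraded input, full-bandwidth target) training pairs. A hedged sketch (the choice of sensor is an arbitrary example):

```python
from datasets import load_dataset

vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
sample = next(iter(vibravox))

corrupted = sample["audio.throat_microphone"]["array"]   # band-limited body-conducted input
reference = sample["audio.headset_microphone"]["array"]  # clean airborne target
# Both channels were captured simultaneously at 48 kHz, so the pair is time-aligned.
```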

### Speaker-verification

  - Given an input audio clip and a reference audio clip of a known speaker, the model's objective is to compare the two clips and verify whether they come from the same individual. This often involves extracting embeddings from a deep neural network trained on a large dataset of voices; the model then measures the similarity between these embeddings using techniques like cosine similarity or a learned distance metric (a minimal scoring sketch follows). This task is crucial in applications requiring secure access control, such as biometric authentication systems, where a person's voice acts as a unique identifier.
  - **Testing code:** An example of implementation of this task using a pretrained [ECAPA2 model](https://arxiv.org/abs/2401.08342) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
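
A minimal scoring sketch for the verification step (generic cosine scoring, not the ECAPA2 pipeline; the embedding size and decision threshold are placeholders):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two speaker embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb_test = np.random.randn(192)      # embedding of the input clip (placeholder)
emb_enrolled = np.random.randn(192)  # embedding of the known speaker (placeholder)

same_speaker = cosine_similarity(emb_test, emb_enrolled) > 0.5  # threshold is application-dependent
```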


### Adding your models for supported tasks or contributing for new tasks

  Feel free to contribute at the [Vibravox Github repository](https://github.com/jhauret/vibravox), by following the [contributor guidelines](https://github.com/jhauret/vibravox/blob/main/CONTRIBUTING.md).

--- 

## DATASET DETAILS

### Dataset Description

VibraVox ([vibʁavɔks]) is a GDPR-compliant dataset released in June 2024. It includes speech recorded simultaneously using multiple audio and vibration sensors (from top to bottom in the following figure):

- a forehead miniature vibration sensor (green)
- an in-ear comply foam-embedded microphone (red)
- an in-ear rigid earpiece-embedded microphone (blue)
- a temple vibration pickup	(cyan)
- a headset microphone located near the mouth (purple)
- a laryngophone (orange)

The technology and references of each sensor are described and documented in [the dataset creation](#dataset-creation) section and at [https://vibravox.cnam.fr/documentation/hardware/](https://vibravox.cnam.fr/documentation/hardware).

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/P-_IWM3IMED5RBS3Lhydc.png" />
</p>

### Goals

The VibraVox speech corpus has been recorded with 200 participants under various acoustic conditions imposed by a [5th order ambisonics spatialization sphere](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html).

VibraVox aims to serve as a valuable resource for advancing the field of **body-conducted speech analysis** and for facilitating the development of **robust communication systems for real-world applications**.

Unlike traditional microphones, which rely on airborne sound waves, body-conduction sensors capture speech signals directly from the body, offering advantages in noisy environments by eliminating the capture of ambient noise. Although body-conduction sensors have been available for decades, their limited bandwidth has restricted their widespread use. Combined with modern bandwidth-extension techniques, however, this technology may now reach a wide public for speech capture and communication in noisy environments.

### Data / sensor mapping

Although the column names in the Vibravox dataset are self-explanatory, here is the mapping, with information on the positioning of each sensor and its technology:

| Vibravox dataset column name         |  Sensor                                    | Location         |  Technology                                        |
|:------------------------------------ |:------------------------------------------ |:---------------- |:-------------------------------------------------- |
|  ```audio.headset_microphone```      |  Headset microphone                        | Near the mouth   | Cardioid electrodynamic microphone                 |
|  ```audio.throat_microphone```       |  Laryngophone                              | Throat / Larynx  | Piezoelectric sensor                               |
|  ```audio.soft_in_ear_microphone```  |  In-ear soft foam-embedded microphone      | Right ear canal  | Omnidirectional electret condenser microphone      |
|  ```audio.rigid_in_ear_microphone``` |  In-ear rigid earpiece-embedded microphone | Left ear-canal   | Omnidirectional  MEMS microphone                   |
|  ```audio.forehead_accelerometer```  |  Forehead vibration sensor                 | Frontal bone     | One-axis accelerometer                             | 
|  ```audio.temple_vibration_pickup``` |  Temple vibration pickup                   | Zygomatic bone   | Figure-of-eight pre-polarized condenser transducer |


--- 

## DATASET STRUCTURE

### Subsets

Each of the 4 subsets contains **6 columns of audio data**, corresponding to the 5 different body-conduction sensors plus the standard headset microphone.

Recording was carried out simultaneously on all 6 sensors, **audio files being sampled at 48 kHz and encoded as .wav PCM32 files**.

The 4 subsets correspond to:

- **```speech_clean```**: the speaker reads sentences sourced from the French Wikipedia. This subset contains the most data and is primarily intended for training on the various tasks.

- **```speech_noisy```**: the speaker reads sentences sourced from the French Wikipedia, in a noisy environment based on ambisonic recordings replayed in a spatialization sphere equipped with 56 loudspeakers surrounding the speaker. It is primarily intended for testing the various systems (speech enhancement, automatic speech recognition, speaker verification) developed on the basis of the ```speech_clean``` recordings.

- **```speechless_clean```**: the wearers of the devices remain speechless in complete silence, but are free to move their bodies and faces, and can swallow and breathe naturally. These samples capture realistic physiological (and sensor-inherent) noise through the body-conduction sensors, and can be valuable for tasks such as heart-rate tracking, for analyzing the noise properties of the various microphones, or for generating synthetic datasets with realistic noise.

- **```speechless_noisy```**: the wearers of the devices remain speechless in a noisy environment created using [AudioSet](https://research.google.com/audioset/) noise samples. These samples have been selected from relevant classes, normalized in loudness, pseudo-spatialized, and played from random directions around the participant using a [5th order ambisonic 3D sound spatializer](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html) equipped with 56 loudspeakers. The objective of this subset is to gather background noises that can be combined with the `speech_clean` recordings, which remain available as a clean reference. This enables **realistic data augmentation** using noise captured by the body-conduction sensors themselves, with the inherent attenuation of each sensor on different device wearers.


### Splits

All the subsets are available in 3 splits (train, validation and test), with a standard 80% / 10% / 10% repartition and no speaker overlap between splits.

The speakers/participants assigned to each split are the same across all subsets, which allows you to:

- use `speechless_noisy` for data augmentation, for example (a mixing sketch follows this list)
- test models trained on the `speech_clean` train set against the `speech_noisy` test set, without having to worry that a speaker was already seen during training.
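
A hedged sketch of such data augmentation (the sensor column and SNR value are illustrative choices, not prescribed ones): mix a `speechless_noisy` background into a `speech_clean` utterance at a target signal-to-noise ratio.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    # Loop or trim the noise to the speech length, then scale it to the target SNR.
    noise = np.resize(noise, speech.shape)
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

# speech = clean_example["audio.rigid_in_ear_microphone"]["array"]
# noise  = noisy_example["audio.rigid_in_ear_microphone"]["array"]
# augmented = mix_at_snr(speech, noise, snr_db=5.0)
```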

### Data Fields

In non-streaming mode (default), the `path` value of each `datasets.Audio` dictionary points to the locally extracted audio file. In streaming mode, the `path` is the relative path of the audio file inside its archive, as files are not downloaded and extracted locally. A resampling sketch is given right after the field list below.

**Common Data Fields for all subsets :**

* `audio.headset_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.forehead_accelerometer` (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
* `audio.soft_in_ear_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.rigid_in_ear_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.temple_vibration_pickup` (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
* `audio.throat_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
* `gender` (string) - gender of the speaker (```male``` or ```female```)
* `speaker_id` (string) - encrypted id of speaker
* `duration` (float32) - the audio length in seconds.
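
All audio columns are standard `datasets.Audio` features sampled at 48 kHz, so they can be decoded at a different rate on the fly; a minimal resampling sketch (the 16 kHz target is an arbitrary example):

```python
from datasets import Audio, load_dataset

vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
# Ask the library to decode the headset channel at 16 kHz instead of the native 48 kHz.
vibravox = vibravox.cast_column("audio.headset_microphone", Audio(sampling_rate=16_000))
sample = next(iter(vibravox))
print(sample["audio.headset_microphone"]["sampling_rate"])  # 16000
```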


**Extra Data Fields for `speech_clean` and `speech_noisy` subsets:**

For the **speech** subsets, the dataset has extra columns corresponding to the pronounced sentences, which are absent from the **speechless** subsets:

* `sentence_id` (int) - id of the pronounced sentence
* `raw_text` (string) - audio segment text (cased and with punctuation preserved)
* `normalized_text` (string) - audio segment normalized text: lower-cased, with no punctuation, and with diacritics replaced by the standard 26 letters of the French alphabet, except four additional characters that hold phonetic significance (é, è, ê and ç). Together with the space character, this gives 31 possible characters: ``` [' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'ç', 'è', 'é', 'ê'] ```.
* `phonemized_text` (string) - audio segment phonemized text, using exclusively the 33 strict French IPA characters (plus the space character) listed below.


### Phonemes list and tokenizer

  - The strict French IPA characters used in Vibravox are: ``` [' ', 'a', 'b', 'd', 'e', 'f', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 's', 't', 'u', 'v', 'w', 'y', 'z', 'ø', 'ŋ', 'œ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɡ', 'ɲ', 'ʁ', 'ʃ', 'ʒ', '̃'] ```.
  - For convenience and research reproducibility, we provide a tokenizer for speech-to-phoneme tasks that matches these phonemes at [https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer](https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer) (a loading sketch follows).
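
A loading sketch, assuming the repository exposes a standard `transformers` tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Cnam-LMSSC/vibravox-phonemes-tokenizer")
ids = tokenizer("sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz").input_ids  # phoneme string -> token ids
print(tokenizer.decode(ids))
```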


### Examples of Data Instances

#### `speech_clean` or `speech_noisy` subsets:

```python
{
    'audio.headset_microphone': {
        'path': '02472_headset_mic.wav',
        'array': array([ 0.00045776,  0.00039673,  0.0005188 , ..., -0.00149536,
                        -0.00094604,  0.00036621]),
        'sampling_rate': 48000},
    'audio.forehead_accelerometer': {
        'path': '02472_forehead_accelerometer.wav',
        'array': array([ 0.0010376 , -0.00045776, -0.00085449, ..., -0.00491333,
                        -0.00524902, -0.00302124]),
        'sampling_rate': 48000},
    'audio.soft_in_ear_microphone': {
        'path': '02472_soft_in_ear_mic.wav',
        'array': array([-0.06472778, -0.06384277, -0.06292725, ..., -0.02133179,
                        -0.0213623 , -0.02145386]),
        'sampling_rate': 48000},
    'audio.rigid_in_ear_microphone': {
        'path': '02472_rigid_in_ear_mic.wav',
        'array': array([-0.01824951, -0.01821899, -0.01812744, ..., -0.00387573,
                        -0.00427246, -0.00439453]),
        'sampling_rate': 48000},
    'audio.temple_vibration_pickup':{
        'path': '02472_temple_vibration_pickup.wav',
        'array': array([-0.0177002 , -0.01791382, -0.01745605, ...,  0.01098633,
                        0.01260376,  0.01220703]),
        'sampling_rate': 48000},
    'audio.throat_microphone': {
        'path': '02472_laryngophone.wav',
        'array': array([-2.44140625e-04, -3.05175781e-05,  2.13623047e-04, ...,
                        4.88281250e-04,  4.27246094e-04,  3.66210938e-04]),
        'sampling_rate': 48000},
    'gender': 'female',
    'speaker_id': 'qt4TPMEPwF',
    'sentence_id': 2472,
    'duration': 4.5,
    'raw_text': "Cette mémoire utilise le changement de phase du verre pour enregistrer l'information.",
    'normalized_text': 'cette mémoire utilise le changement de phase du verre pour enregistrer l information',
    'phonemized_text': 'sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz dy vɛʁ puʁ ɑ̃ʁʒistʁe lɛ̃fɔʁmasjɔ̃'
}
```

#### `speechless_clean` or `speechless_noisy` subsets

 (thus missing the text-related fields)

```python
{
    'audio.headset_microphone': {
        'path': 'jMngOy7BdQ_headset_mic.wav',
        'array': array([-1.92260742e-03, -2.44140625e-03, -2.99072266e-03, ...,
                        0.00000000e+00,  3.05175781e-05, -3.05175781e-05]),
        'sampling_rate': 48000},
    'audio.forehead_accelerometer': {
        'path': 'jMngOy7BdQ_forehead_accelerometer.wav',
        'array': array([-0.0032959 , -0.00259399,  0.00177002, ..., -0.00073242,
                        -0.00076294, -0.0005188 ]),
        'sampling_rate': 48000},
    'audio.soft_in_ear_microphone': {
        'path': 'jMngOy7BdQ_soft_in_ear_mic.wav',
        'array': array([0.00653076, 0.00671387, 0.00683594, ..., 0.00045776, 0.00042725,
                       0.00042725]),
        'sampling_rate': 48000},
    'audio.rigid_in_ear_microphone': {
        'path': 'jMngOy7BdQ_rigid_in_ear_mic.wav',
        'array': array([ 1.05895996e-02,  1.03759766e-02,  1.05590820e-02, ...,
                        0.00000000e+00, -3.05175781e-05, -9.15527344e-05]),
        'sampling_rate': 48000},
    'audio.temple_vibration_pickup': {
        'path': 'jMngOy7BdQ_temple_vibration_pickup.wav',
        'array': array([-0.00082397, -0.0020752 , -0.0012207 , ..., -0.00738525,
                        -0.00814819, -0.00579834]),
        'sampling_rate': 48000},
    'audio.throat_microphone': {
        'path': 'jMngOy7BdQ_laryngophone.wav',
        'array': array([ 0.00000000e+00,  3.05175781e-05,  1.83105469e-04, ...,
                        -6.10351562e-05, -1.22070312e-04, -9.15527344e-05]),
        'sampling_rate': 48000},
    'gender': 'male',
    'speaker_id': 'jMngOy7BdQ',
    'duration': 54.097
}
```


--- 

## DATA STATISTICS

### Speakers gender balance

To increase the representativeness and inclusivity of the dataset, a deliberate effort was made to recruit a diverse and gender-balanced group of speakers. In terms of number of speakers, the overall gender distribution is **51.6% female participants / 48.4% male participants across all subsets**.

### Speakers age balance


| Gender      | Mean age (years) | Median age (years)  |  Min age (years)   |  Max age (years)    |
|:------------|:-----------------|:--------------------|:-------------------|:--------------------|
| Female      | 25.9             | 22                  | 19                 | 59                  |
| Male        | 31.4             | 27                  | 18                 | 82                  |
| **All**     | **28.55**        | **25**              | **18**             | **82**              |



### Audio data


| Subset             | Split                                  | Audio duration (hours)          | Number of audio clips              | Download size                       | Number of Speakers <br> (Female/Male)  | F/M Gender repartition <br> (audio duration)             | Mean audio duration (s)           | Median audio duration (s)        | Max audio duration (s)              | Min audio duration (s)         |
|:-------------------|:---------------------------------------|:--------------------------------|:-----------------------------------|:------------------------------------|:---------------------------------------|:---------------------------------------------------------|:----------------------------------|:---------------------------------|:------------------------------------|:-------------------------------|
| `speech_clean`     | `train` <br> `validation` <br> `test`  | 6x20.94 <br> 6x2.42 <br> 6x3.03 | 6x20,981 <br> 6x2,523 <br> 6x3,064 | 108.32GB <br> 12.79GB <br> 15.84GB | 77F/72M <br> 9F/9M <br> 11F/10M         | 52.46%/47.54% <br> 52.13%/47.87% <br> 55.74%/44.26%      | 3.59 <br> 3.46 <br> 3.56       | 3.50 <br> 3.38 <br> 3.48         | 12.20 <br> 9.44 <br> 9.58           | 0.52 <br> 0.66 <br> 0.58       |
| `speech_noisy`     | `train` <br> `validation` <br> `test`  | 6x1.26 <br> 6x0.13 <br> 6x0.18  | 6x1,220 <br> 6x132 <br> 6x175      | 6.52GB <br> 0.71GB <br> 0.94GB     | 77F/72M <br> 9F/9M <br> 11F/10M         | 54.31%/45.69% <br> 56.61%/43.39% <br> 55.54%/44.46%      | 3.71 <br> 3.67 <br> 3.66       | 3.64 <br> 3.47 <br> 3.70         | 8.66 <br> 7.36 <br> 6.88            | 0.46 <br> 1.10 <br> 1.00       |
| `speechless_clean` | `train` <br> `validation` <br> `test`  | 6x2.24 <br> 6x0.27 <br> 6x0.32  | 6x149 <br> 6x18 <br> 6x21          | 8.44GB <br> 1.02GB <br> 1.19GB     | 77F/72M <br> 9F/9M <br> 11F/10M         | 51.68%/48.32% <br> 50.00%/50.00% <br> 52.38%/47.62%      | 54.10 <br> 54.10 <br> 54.10    | 54.10 <br> 54.10 <br> 54.10      | 54.10 <br> 54.10 <br> 54.10         | 53.99 <br> 54.05 <br> 54.10    |
| `speechless_noisy` | `train` <br> `validation` <br> `test`  | 6x5.96 <br> 6x0.72 <br> 6x0.84  | 6x149 <br> 6x18 <br> 6x21          | 24.48GB <br> 2.96GB <br> 3.45GB    | 77F/72M <br> 9F/9M <br> 11F/10M         | 51.68%/48.32% <br> 50.00%/50.00% <br> 52.38%/47.62%      | 144.03 <br> 144.03 <br> 144.04 | 144.03 <br> 144.03 <br> 144.03   | 144.17 <br> 144.05 <br> 144.05      | 143.84 <br> 143.94 <br> 144.03 |
| **Total**          |                                        | **6x38.31**                     | **6x28,471**                       | **186.64GB**                       | **97F/91M**                             | **52.55%/47.45%**                                        |                                   |                                  |                                     |                                |


--- 

## DATASET CREATION

### Textual source data

The text read by all participants was collected from the French Wikipedia subset of Common Voice ([link1](https://github.com/common-voice/common-voice/blob/6e43e7e61318bf4605b59379e3f35ba5333d7a29/server/data/fr/wiki-1.fr.txt), [link2](https://github.com/common-voice/common-voice/blob/6e43e7e61318bf4605b59379e3f35ba5333d7a29/server/data/fr/wiki-2.fr.txt)). We applied some additional filters to these textual datasets in order to create a simplified dataset with a minimal number of tokens and to reduce uncertainty in the pronunciation of some proper names. We therefore removed all proper names, except common first names and names of French towns. We also removed any utterances that contain numbers, Greek letters, or math symbols, or that are syntactically incorrect.

Each line of the Wikipedia-extracted textual source data was then phonemized using the [bootphon/phonemizer](https://github.com/bootphon/phonemizer) and manually edited to keep only strict French IPA characters.
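
For illustration, a typical phonemizer call for French might look as follows (these parameters are our assumptions, not necessarily the exact settings used to build the dataset):

```python
from phonemizer import phonemize  # requires the espeak-ng backend to be installed

text = "Cette mémoire utilise le changement de phase du verre."
ipa = phonemize(text, language="fr-fr", backend="espeak", strip=True)
print(ipa)
```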

### Audio Data Collection


#### Sensor positioning and documentation


| **Sensor**                 | **Image** | **Transducer** |  **Online documentation**    |
|:---------------------------|:---------------------|:-------------|:----------------------------------------------------------------------------------------------------------------------|
| Reference headset microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/iVYX1_7wAdZb4oDrc9v6l.png) | Shure WH20 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/airborne/index.html) |
| In-ear comply foam-embedded microphone |![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/Uf1VOwx-kxPiYY1oMW5pz.png)|  Knowles FG-23329-P07  | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/soft_inear/index.html) |
| In-ear rigid earpiece-embedded microphone |![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/EBY9dIKFN8GDaDXUuhp7n.png)| Knowles SPH1642HT5H  | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/rigid_inear/index.html)  |
| Forehead miniature vibration sensor |![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/2zHrN-7OpbH-zJTqASZ7J.png)| Knowles BU23173-000   | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/forehead/index.html) |
| Temple vibration pickup |![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/wAcTQlmzvl0O4kNyA3MnC.png)| AKG C411   | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/temple/index.html) |
| Laryngophone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/4SGNSgXYc6hBJcI1cRXY_.png)| iXRadio XVTM822D-D35  | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/throat/index.html)  |


#### Recorded audio data post-processing

Across the sentences collected from the participants, a small number of audio clips exhibited various shortcomings. Despite researchers monitoring and validating each recording individually, the process was not entirely foolproof: mispronounced sentences, sensors shifting from their initial positions, or more significant microphone malfunctions occasionally occurred. In instances where sensors were functional but not ideally positioned (such as when the participant's ear canal was too small for the rigid in-ear microphone to achieve proper acoustic sealing), we chose to retain samples where the bandwidth was slightly narrower than desired. This decision was made to enhance the robustness of our models against the effects of misplaced sensors.

To address those occasional shortcomings and offer a high-quality dataset, we implemented a series of 3 automatic filters to retain only the best audio from the `speech_clean` subset. We preserved only those sentences where all sensors were in optimal recording condition, adhering to the predefined criteria defined in [our paper](https://arxiv.org/abs/2407.11828):


- The first filter runs a pre-trained ASR model on the headset microphone data, which addresses discrepancies between the labeled transcription and the actual pronunciation, ensuring high-quality labels for the speech-to-phoneme task.
- The second filter confirms that the sensor is functioning correctly by verifying that speech exhibits higher energy than silence, thereby identifying potentially unreliable recordings with low vocal energy levels or sensor malfunction.
- The third filter detects sensitivity drift in the sensors, which can occur due to electronic malfunctions or mechanical blockages in the transducer.
- If an audio clip passes all filters, it is not immediately added to the dataset. Instead, VAD-generated timestamps from [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) are used, extended by 0.3 seconds on both sides. This method helps remove mouse clicks at audio boundaries and ensures the capture of vocal segments without excluding valid speech portions (a toy illustration of this trimming follows).
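
A toy illustration of the trimming rule (the boundary values are hypothetical; `start` and `end` would come from whisper-timestamped):

```python
PAD_S, SAMPLE_RATE = 0.3, 48_000
start, end = 1.25, 4.80  # hypothetical VAD boundaries, in seconds

lo = max(0, int((start - PAD_S) * SAMPLE_RATE))  # extend 0.3 s before the first word
hi = int((end + PAD_S) * SAMPLE_RATE)            # and 0.3 s after the last one
# trimmed = audio_array[lo:hi]
```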

### Personal and Sensitive Information

The VibraVox dataset does not contain any data that might be considered as personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.).

Each `speaker_id` was generated using the Fernet encryption algorithm followed by the extraction of a subset of the encrypted id, guaranteeing a strict anonymisation of the voice recordings while still allowing the dataset maintainers to delete the corresponding data under the right to be forgotten.
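
A loose illustration of this anonymisation scheme (the actual key, slice position and id length are not disclosed):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # held privately by the dataset maintainers
token = Fernet(key).encrypt(b"participant-identity")
speaker_id = token.decode()[-10:]  # keep only a short subset of the encrypted token
```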

A [consent form](https://vibravox.cnam.fr/documentation/consent/index.html) has been signed by each participant in the VibraVox dataset. This consent form has been approved by the Cnam lawyer. All [Cnil](https://www.cnil.fr/en) requirements have been checked, including the right to be forgotten, which applies for 50 years.