---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- fr
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
---
### Goals

The VibraVox speech corpus has been recorded with 200 participants under various acoustic conditions imposed by a [5th order ambisonics spatialization sphere](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html). VibraVox aims to serve as a valuable resource for advancing the field of **body-conducted speech analysis** and for facilitating the development of **robust communication systems for real-world applications**. Unlike traditional microphones, which rely on airborne sound waves, body-conduction sensors capture speech signals directly from the body, offering an advantage in noisy environments since they do not pick up ambient noise. Although body-conduction sensors have been available for decades, their limited bandwidth has restricted their widespread use. Datasets such as VibraVox may help open up this technology to a wider public for speech capture and communication in noisy environments.

### Data / sensor mapping

Although the column names of the VibraVox dataset are self-explanatory, here is the mapping, with information on the positioning of each sensor and its technology:

| VibraVox dataset column name | Sensor | Location | Technology |
| ------------- | -------------------- | --------------------- | --------------------- |
| `audio.headset_mic` | Headset microphone | Near the mouth | Cardioid electrodynamic microphone |
| `audio.laryngophone` | Laryngophone | Throat / larynx | Piezoelectric sensor |
| `audio.soft_in_ear_mic` | In-ear soft foam-embedded microphone | Right ear canal | Omnidirectional electret condenser microphone |
| `audio.rigid_in_ear_mic` | In-ear rigid earpiece-embedded microphone | Left ear canal | Omnidirectional MEMS microphone |
| `audio.forehead_accelerometer` | Forehead vibration sensor | Frontal bone | One-axis accelerometer |
| `audio.temple_vibration_pickup` | Temple vibration pickup | Zygomatic bone | Figure-of-eight pre-polarized condenser transducer |

---

## DATASET STRUCTURE

### Subsets

Each of the 4 subsets contains **6 columns of audio data**, corresponding to the 5 body-conduction sensors plus the standard headset microphone. Recording was carried out simultaneously on all 6 sensors, **audio files being sampled at 48 kHz and encoded as .wav PCM32 files**. The 4 subsets, each of which can be loaded independently (see the loading sketch after this list), correspond to:

- **`speech_clean`**: the speaker reads sentences sourced from the French Wikipedia. This subset contains the most data and is intended for training on various tasks.
- **`speech_noisy`**: the speaker reads sentences sourced from the French Wikipedia, in a noisy environment based on ambisonic recordings replayed in a spatialization sphere equipped with 56 loudspeakers surrounding the speaker. It is primarily intended for testing the various systems (speech enhancement, automatic speech recognition, speaker verification) that will be developed on the basis of the recordings from `speech_clean`.
- **`speechless_clean`**: wearers of the devices remain speechless in complete silence, but are free to move their bodies and faces, and can swallow and breathe naturally. These samples can be valuable for tasks such as heart rate tracking or analyzing the noise properties of the various microphones, and can conveniently be used to generate synthetic datasets with realistic physiological (and sensor-inherent) noise as captured by body-conduction sensors.
- **`speechless_noisy`**: wearers of the devices remain speechless in a noisy environment created using [AudioSet](https://research.google.com/audioset/) noise samples. These samples were selected from relevant classes, normalized in loudness, pseudo-spatialized, and played from random directions around the participant using a [5th order ambisonic 3D sound spatializer](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html) equipped with 56 loudspeakers. The objective of this subset is to gather background noise that can be combined with the `speech_clean` recordings while keeping a clean reference. This allows **realistic data augmentation** using noise captured by body-conduction sensors, with the inherent attenuation of each sensor on different device wearers.
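For orientation, here is a minimal loading sketch using the `datasets` library. The repository id `Cnam-LMSSC/vibravox` and the configuration names matching the subset names above are assumptions based on this card:

```python
from datasets import load_dataset

# Load one subset; each row carries the six time-aligned audio columns.
speech_clean = load_dataset("Cnam-LMSSC/vibravox", "speech_clean")

# Streaming mode avoids downloading and extracting the archives locally.
streamed = load_dataset("Cnam-LMSSC/vibravox", "speech_noisy",
                        split="test", streaming=True)
sample = next(iter(streamed))
print(sample["audio.headset_mic"]["sampling_rate"])  # 48000
```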
### Splits

All the subsets are available in 3 splits (train, validation and test), with a standard 80 / 10 / 10 repartition and no speaker overlap between splits. The speakers / participants assigned to a given split are the same for every subset, which makes it possible to:

- use `speechless_noisy` for data augmentation, for example (a mixing sketch follows this list);
- test models trained on the `speech_clean` train split on the `speech_noisy` test split, without having to worry that a speaker was seen during training.
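As an illustration, here is a hedged sketch of the data augmentation enabled by this split design: a `speech_clean` clip is mixed with a `speechless_noisy` clip from the same sensor at a chosen signal-to-noise ratio. The repository id `Cnam-LMSSC/vibravox` and the `mix_at_snr` helper are illustrative assumptions, not part of the dataset:

```python
import numpy as np
from datasets import load_dataset

clean = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train")
noise = load_dataset("Cnam-LMSSC/vibravox", "speechless_noisy", split="train")

def mix_at_snr(speech: np.ndarray, background: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `background` so the speech-to-noise power ratio matches `snr_db`."""
    background = np.resize(background, speech.shape)  # loop/crop noise to speech length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(background ** 2) + 1e-12
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * background

# Mix the same body-conduction channel from both subsets, keeping the clean reference.
speech = np.asarray(clean[0]["audio.rigid_in_ear_mic"]["array"])
bg = np.asarray(noise[0]["audio.rigid_in_ear_mic"]["array"])
noisy = mix_at_snr(speech, bg, snr_db=5.0)
```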
### Data Fields

In non-streaming mode (default), the `path` value of all `datasets.Audio` dictionaries points to the locally extracted audio. In streaming mode, the `path` is the relative path of an audio file inside its archive (as files are not downloaded and extracted locally).

**Common data fields for all subsets:**

* `audio.headset_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.forehead_accelerometer` (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
* `audio.soft_in_ear_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.rigid_in_ear_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.temple_vibration_pickup` (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
* `audio.laryngophone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
* `gender` (string) - gender of the speaker (`male` or `female`)
* `speaker_id` (string) - encrypted id of the speaker
* `duration` (float32) - the audio length in seconds.

**Extra data fields for the `speech_clean` and `speech_noisy` subsets:**

The **speech** subsets have additional columns corresponding to the pronounced sentences, which are absent from the **speechless** subsets:

* `sentence_id` (int) - id of the pronounced sentence
* `raw_text` (string) - audio segment text (cased, with punctuation preserved)
* `normalized_text` (string) - audio segment normalized text (lower-cased, no punctuation, diacritics replaced by the standard 26 letters of the French alphabet, plus 4 additional characters which hold phonetic significance (é, è, ê and ç) and the space character, giving 31 possible characters: ``` [' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'ç', 'è', 'é', 'ê'] ```).
* `phonemized_text` (string) - audio segment phonemized text, using exclusively the 33 strict French IPA characters listed below

### Phonemes list and tokenizer

- The 33 strict French IPA characters used in VibraVox (plus the space character) are: ``` [' ', 'a', 'b', 'd', 'e', 'f', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 's', 't', 'u', 'v', 'w', 'y', 'z', 'ø', 'ŋ', 'œ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɡ', 'ɲ', 'ʁ', 'ʃ', 'ʒ', '̃'] ```.
- For convenience and research reproducibility, we provide a tokenizer for speech-to-phonemes tasks that corresponds to these phonemes at [https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer](https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer).

### Examples of data instances

#### `speech_clean` or `speech_noisy` subsets:

```python
{
    'audio.headset_mic': {
        'path': '02472_headset_mic.wav',
        'array': array([ 0.00045776,  0.00039673,  0.0005188 , ..., -0.00149536, -0.00094604,  0.00036621]),
        'sampling_rate': 48000},
    'audio.forehead_accelerometer': {
        'path': '02472_forehead_accelerometer.wav',
        'array': array([ 0.0010376 , -0.00045776, -0.00085449, ..., -0.00491333, -0.00524902, -0.00302124]),
        'sampling_rate': 48000},
    'audio.soft_in_ear_mic': {
        'path': '02472_soft_in_ear_mic.wav',
        'array': array([-0.06472778, -0.06384277, -0.06292725, ..., -0.02133179, -0.0213623 , -0.02145386]),
        'sampling_rate': 48000},
    'audio.rigid_in_ear_mic': {
        'path': '02472_rigid_in_ear_mic.wav',
        'array': array([-0.01824951, -0.01821899, -0.01812744, ..., -0.00387573, -0.00427246, -0.00439453]),
        'sampling_rate': 48000},
    'audio.temple_vibration_pickup': {
        'path': '02472_temple_vibration_pickup.wav',
        'array': array([-0.0177002 , -0.01791382, -0.01745605, ...,  0.01098633,  0.01260376,  0.01220703]),
        'sampling_rate': 48000},
    'audio.laryngophone': {
        'path': '02472_laryngophone.wav',
        'array': array([-2.44140625e-04, -3.05175781e-05,  2.13623047e-04, ...,  4.88281250e-04,  4.27246094e-04,  3.66210938e-04]),
        'sampling_rate': 48000},
    'gender': 'female',
    'speaker_id': 'qt4TPMEPwF',
    'sentence_id': 2472,
    'duration': 4.5,
    'raw_text': "Cette mémoire utilise le changement de phase du verre pour enregistrer l'information.",
    'normalized_text': 'cette mémoire utilise le changement de phase du verre pour enregistrer l information',
    'phonemized_text': 'sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz dy vɛʁ puʁ ɑ̃ʁʒistʁe lɛ̃fɔʁmasjɔ̃'
}
```
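The `phonemized_text` field above can be encoded with the phoneme tokenizer introduced earlier. A minimal usage sketch, assuming the tokenizer exposes the standard `transformers` interface:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Cnam-LMSSC/vibravox-phonemes-tokenizer")

ids = tokenizer("sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃")["input_ids"]
print(ids)                    # one token id per phoneme (plus the word delimiter)
print(tokenizer.decode(ids))  # decodes back to a phoneme string
```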
#### `speechless_clean` or `speechless_noisy` subsets (thus missing the text-related fields):

```python
{
    'audio.headset_mic': {
        'path': 'jMngOy7BdQ_headset_mic.wav',
        'array': array([-1.92260742e-03, -2.44140625e-03, -2.99072266e-03, ...,  0.00000000e+00,  3.05175781e-05, -3.05175781e-05]),
        'sampling_rate': 48000},
    'audio.forehead_accelerometer': {
        'path': 'jMngOy7BdQ_forehead_accelerometer.wav',
        'array': array([-0.0032959 , -0.00259399,  0.00177002, ..., -0.00073242, -0.00076294, -0.0005188 ]),
        'sampling_rate': 48000},
    'audio.soft_in_ear_mic': {
        'path': 'jMngOy7BdQ_soft_in_ear_mic.wav',
        'array': array([0.00653076, 0.00671387, 0.00683594, ..., 0.00045776, 0.00042725, 0.00042725]),
        'sampling_rate': 48000},
    'audio.rigid_in_ear_mic': {
        'path': 'jMngOy7BdQ_rigid_in_ear_mic.wav',
        'array': array([ 1.05895996e-02,  1.03759766e-02,  1.05590820e-02, ...,  0.00000000e+00, -3.05175781e-05, -9.15527344e-05]),
        'sampling_rate': 48000},
    'audio.temple_vibration_pickup': {
        'path': 'jMngOy7BdQ_temple_vibration_pickup.wav',
        'array': array([-0.00082397, -0.0020752 , -0.0012207 , ..., -0.00738525, -0.00814819, -0.00579834]),
        'sampling_rate': 48000},
    'audio.laryngophone': {
        'path': 'jMngOy7BdQ_laryngophone.wav',
        'array': array([ 0.00000000e+00,  3.05175781e-05,  1.83105469e-04, ..., -6.10351562e-05, -1.22070312e-04, -9.15527344e-05]),
        'sampling_rate': 48000},
    'gender': 'male',
    'speaker_id': 'jMngOy7BdQ',
    'duration': 54.097
}
```

---

## DATA STATISTICS

### Speakers gender balance

To increase the representativeness and inclusivity of the dataset, a deliberate effort was made to recruit a diverse and gender-balanced group of speakers: the overall male/female gender repartition, in terms of number of speakers included in the dataset, is 48.3% / 51.6% for all subsets.

### Speakers age balance

TODO: update values when final dataset is uploaded

| Quantity | Mean | Median | Min | Max |
|-----------------------|-------|--------|-------|--------|
| Age, all speakers (years) | 27.62 | 24.00 | 18.00 | 82.00 |
| Age, male speakers (years) | 31.00 | 27.00 | 18.00 | 82.00 |
| Age, female speakers (years) | 24.50 | 22.00 | 19.00 | 59.00 |

### Audio data

TODO: update values when final dataset is uploaded

| Subset / split | Audio duration | # of audio clips | Download size | # of speakers (M/F) | Gender repartition M/F (in audio duration) |
|:---:|:---:|:---:|:---:|:---:|:---:|
| `speech_clean`/`train` | 6 x 23.5 h | 6 x 18800 | 46.8 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| `speech_clean`/`validation` | 6 x 2.9 h | 6 x 2510 | 6.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| `speech_clean`/`test` | 6 x 2.8 h | 6 x 2787 | 6.8 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| `speech_noisy`/`train` | 6 x 1.1 h | 6 x 845 | 2.2 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| `speech_noisy`/`validation` | 6 x 0.2 h | 6 x 118 | 0.3 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| `speech_noisy`/`test` | 6 x 0.17 h | 6 x 97 | 0.25 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| `speechless_clean`/`train` | 6 x 2.35 h | 6 x 157 | 4.5 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| `speechless_clean`/`validation` | 6 x 0.3 h | 6 x 20 | 0.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| `speechless_clean`/`test` | 6 x 0.28 h | 6 x 19 | 0.5 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| `speechless_noisy`/`train` | 6 x 6.3 h | 6 x 157 | 12.1 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| `speechless_noisy`/`validation` | 6 x 0.8 h | 6 x 20 | 1.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| `speechless_noisy`/`test` | 6 x 0.76 h | 6 x 19 | 1.45 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| **Total** | 6 x 41.5 h | 6 x 25549 | 83.4 GiB | 196 (95 M, 101 F) | 48.3% / 51.6% |

### Audio clip durations

TODO: update values when final dataset is uploaded

| Subset / split | Mean | Median | Max | Min |
|:---:|:---:|:---:|:---:|:---:|
| `speech_clean`/`train` | 4.05 s | 3.96 s | 11.20 s | 0.90 s |
| `speech_clean`/`validation` | 4.24 s | 4.22 s | 8.66 s | 1.18 s |
| `speech_clean`/`test` | 4.05 s | 3.94 s | 9.66 s | 1.12 s |
| `speech_noisy`/`train` | 4.28 s | 4.17 s | 8.48 s | 0.82 s |
| `speech_noisy`/`validation` | 4.62 s | 4.57 s | 7.48 s | 1.16 s |
| `speech_noisy`/`test` | 4.30 s | 4.30 s | 7.94 s | 1.58 s |
| `speechless_clean`/`train` | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| `speechless_clean`/`validation` | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| `speechless_clean`/`test` | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| `speechless_noisy`/`train` | 144.04 s | 144.03 s | 144.05 s | 144.02 s |
| `speechless_noisy`/`validation` | 144.03 s | 144.03 s | 144.04 s | 144.03 s |
| `speechless_noisy`/`test` | 144.04 s | 144.03 s | 144.05 s | 144.03 s |
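These statistics can be recomputed directly from the `duration` column. A short sketch (again assuming the `Cnam-LMSSC/vibravox` repository id):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test")
durations = np.asarray(ds["duration"])
print(f"mean={durations.mean():.2f} s  median={np.median(durations):.2f} s  "
      f"max={durations.max():.2f} s  min={durations.min():.2f} s")
```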
---

## DATASET CREATION

### Textual source data

The text read by all participants is collected from the French Wikipedia subset of Common Voice ([link1](https://github.com/common-voice/common-voice/blob/main/server/data/fr/wiki-1.fr.txt), [link2](https://github.com/common-voice/common-voice/blob/main/server/data/fr/wiki-2.fr.txt)). We applied additional filters to these textual datasets in order to create a simplified dataset with a minimal number of tokens and to reduce the uncertainty in the pronunciation of some proper names. We therefore removed all proper names except common first names and the names of French towns. We also removed any utterances containing numbers, Greek letters or math symbols, as well as syntactically incorrect utterances. Each line of the Wikipedia-extracted textual source data was then phonemized using [bootphon/phonemizer](https://github.com/bootphon/phonemizer) and manually edited to keep only strict French IPA characters.
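For reference, phonemizing a French sentence with this library looks roughly as follows. This is a sketch using phonemizer's documented `phonemize` helper with the espeak backend; the exact options used for VibraVox, and the subsequent manual edits, are described above:

```python
from phonemizer import phonemize

text = "Cette mémoire utilise le changement de phase du verre."
# French espeak phonemization; the raw output was then manually edited
# to keep only the strict French IPA characters listed earlier.
phonemes = phonemize(text, language="fr-fr", backend="espeak", strip=True)
print(phonemes)
```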
### Audio Data Collection

#### Sensors positioning and documentation

| **Sensor** | **Image** | **Transducer** | **Online documentation** |
|:---------------------------|:---------------------|:-------------|:----------------------------|
| Reference headset microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/iVYX1_7wAdZb4oDrc9v6l.png) | Shure WH20 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/airborne/index.html) |
| In-ear comply foam-embedded microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/Uf1VOwx-kxPiYY1oMW5pz.png) | Knowles FG-23329-P07 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/soft_inear/index.html) |
| In-ear rigid earpiece-embedded microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/EBY9dIKFN8GDaDXUuhp7n.png) | Knowles SPH1642HT5H | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/rigid_inear/index.html) |
| Forehead miniature vibration sensor | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/2zHrN-7OpbH-zJTqASZ7J.png) | Knowles BU23173-000 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/forehead/index.html) |
| Temple vibration pickup | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/wAcTQlmzvl0O4kNyA3MnC.png) | AKG C411 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/temple/index.html) |
| Laryngophone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/4SGNSgXYc6hBJcI1cRXY_.png) | iXRadio XVTM822D-D35 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/microphones/throat/index.html) |

#### Recorded audio data post-processing

Across the sentences collected from the 200 participants, a small number of audio clips exhibited various shortcomings. Despite researchers monitoring and validating each recording individually, the process was not entirely foolproof: mispronounced sentences, sensors shifting from their initial positions, or more significant microphone malfunctions occasionally occurred. In instances where sensors were functional but not ideally positioned (such as when the participant's ear canal was too small for the rigid in-ear microphone to achieve proper acoustic sealing), we chose to retain samples whose bandwidth was slightly narrower than desired. This decision was made to enhance the robustness of our models against the effects of misplaced sensors.

To address those occasional shortcomings and offer a high-quality dataset, we implemented a series of 3 automatic filters to retain only the best audio from the `speech_clean` subset. We preserved only those sentences where all sensors were in optimal recording condition, adhering to predefined criteria defined in [link to the paper]() (TODO: add link to arXiv paper when uploaded):

- The first filter runs a pre-trained ASR model on the headset microphone data, which makes it possible to detect discrepancies between the labeled transcription and the actual pronunciation, ensuring high-quality labels for the speech-to-phoneme task.
- The second filter confirms that each sensor is functioning correctly by verifying that speech exhibits higher energy than silence, thereby identifying potentially unreliable recordings with low vocal energy levels or sensor malfunction.
- The third filter detects sensitivity drift in the sensors, which can occur due to electronic malfunctions or mechanical blockages in the transducer.

If an audio clip passes all filters, it is not immediately added to the dataset. Instead, VAD-generated timestamps from [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) are used, extended by 0.3 seconds on both sides. This method helps remove mouse clicks at the audio boundaries and ensures the capture of vocal segments without excluding valid speech portions (a trimming sketch is given below).
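A hedged sketch of that trimming step, following whisper-timestamped's documented interface. The 0.3 s margin comes from the description above; the file name and model size are placeholders:

```python
import whisper_timestamped as whisper

MARGIN = 0.3  # seconds kept on each side of the detected speech

# whisper.load_audio resamples to 16 kHz; timestamps are in seconds,
# so they can be mapped back to the original 48 kHz recordings.
audio = whisper.load_audio("02472_headset_mic.wav")  # placeholder file name
model = whisper.load_model("small", device="cpu")    # placeholder model size
result = whisper.transcribe(model, audio, language="fr")

# First and last word timestamps, extended by the margin on both sides.
words = [w for segment in result["segments"] for w in segment["words"]]
start = max(words[0]["start"] - MARGIN, 0.0)
end = words[-1]["end"] + MARGIN

sr = 16000
trimmed = audio[int(start * sr):int(end * sr)]
```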
### Personal and Sensitive Information

The VibraVox dataset does not contain any data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). Each `speaker_id` was generated using the Fernet encryption algorithm followed by the extraction of a subset of the encrypted id, guaranteeing strict anonymization of the voice recordings while allowing the dataset maintainers to delete the corresponding data under the right to be forgotten. A [consent form](https://vibravox.cnam.fr/documentation/consent/index.html) has been signed by each participant in the VibraVox dataset. This consent form has been approved by the Cnam lawyer. All [CNIL](https://www.cnil.fr/en) requirements have been checked, including the right to be forgotten, applicable for 50 years.

---

## DATASET CARD AUTHORS

[Éric Bavu](https://huggingface.co/zinc75)

### Dataset Card Contact

[Éric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)