---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- fr
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
---

👀 While waiting for the [TooBigContentError issue](https://github.com/huggingface/dataset-viewer/issues/2215) to be resolved by the Hugging Face team, you can explore the dataset viewer of [vibravox-test](https://huggingface.co/datasets/Cnam-LMSSC/vibravox-test), which has exactly the same architecture.

## DATASET SUMMARY

The [VibraVox dataset](https://vibravox.cnam.fr) is a general-purpose audio dataset of French speech captured with body-conduction transducers. This dataset can be used for various audio machine learning tasks:

- **Automatic Speech Recognition (ASR)** (speech-to-text, speech-to-phoneme)
- **Audio Bandwidth Extension (BWE)**
- **Speaker Verification (SPKV)** / identification
- **Voice cloning**
- etc.

### Dataset usage

VibraVox contains 4 subsets, corresponding to different situations tailored for specific tasks. To load a specific subset, simply use the following command (`subset` can be any of `"speech_clean"`, `"speech_noisy"`, `"speechless_clean"` or `"speechless_noisy"`):

```python
from datasets import load_dataset

subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset)
```

The dataset is also compatible with the `streaming` mode:

```python
from datasets import load_dataset

subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset, streaming=True)
```
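All audio is stored at 48 kHz, so models pretrained on 16 kHz speech (e.g. wav2vec2.0) need the audio columns to be resampled first. A minimal sketch using the standard `datasets.Audio` casting API (the column name is taken from the [data fields](#data-fields) described below):

```python
from datasets import Audio, load_dataset

vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test")

# Decode the headset microphone column at 16 kHz instead of the native 48 kHz
vibravox = vibravox.cast_column("audio.headset_microphone", Audio(sampling_rate=16000))

sample = vibravox[0]["audio.headset_microphone"]
print(sample["sampling_rate"])  # 16000
```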
### Citations, links and details

- **Homepage:** For more information about the project, visit our project page at [https://vibravox.cnam.fr](https://vibravox.cnam.fr)
- **Github repository:** [jhauret/vibravox](https://github.com/jhauret/vibravox): source code for the ASR, BWE and SPKV tasks using the Vibravox dataset
- **Points of contact:** [Julien Hauret](https://www.linkedin.com/in/julienhauret/) and [Éric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)
- **Curated by:** [AVA Team](https://lmssc.cnam.fr/fr/recherche/identification-localisation-synthese-de-sources-acoustiques-et-vibratoires) of the [LMSSC Research Laboratory](https://lmssc.cnam.fr)
- **Funded by:** [Agence Nationale Pour la Recherche / AHEAD Project](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-5aac4914c7/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=fa352121b44b60bf6a5917180d5205e6)
- **Language:** French
- **Download size:** 186.64 GB
- **Total audio duration:** 38.31 hours (x6 audio channels)
- **Number of speech utterances:** 28,095
- **License:** Creative Commons Attribution 4.0

If you use the Vibravox dataset for research, **cite this paper**:

```bibtex
@article{jhauret-et-al-2024-vibravox,
  title={{Vibravox: A Dataset of French Speech Captured with Body-conduction Audio Sensors}},
  author={Hauret, Julien and Olivier, Malo and Joubaud, Thomas and Langrenne, Christophe and Poir{\'e}e, Sarah and Zimpfer, V{\'e}ronique and Bavu, {\'E}ric},
  year={2024},
  eprint={2407.11828},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2407.11828},
}
```

**and this repository**, which is linked to a DOI:

```bibtex
@misc{cnamlmssc2024vibravoxdataset,
  author={Hauret, Julien and Olivier, Malo and Langrenne, Christophe and Poir{\'e}e, Sarah and Bavu, {\'E}ric},
  title={{Vibravox} (Revision 7990b7d)},
  year=2024,
  url={https://huggingface.co/datasets/Cnam-LMSSC/vibravox},
  doi={10.57967/hf/2727},
  publisher={Hugging Face}
}
```

---

## SUPPORTED TASKS

### Automatic-speech-recognition

- The model is presented with an audio file and asked to transcribe the audio file to written text (either normalized text or phonemized text). The most common evaluation metrics are the word error rate (WER), character error rate (CER), or phoneme error rate (PER).
- **Training code:** An example of implementation for the speech-to-phoneme task using [wav2vec2.0](https://arxiv.org/abs/2006.11477) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
- **Trained models:** We also provide trained models for the speech-to-phoneme task for each of the 6 speech sensors of the Vibravox dataset on Hugging Face at [Cnam-LMSSC/vibravox_phonemizers](https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers).
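To evaluate one of these phonemizers, the PER can be computed as a character error rate over the phonemized strings. A hedged sketch with `transformers` and `jiwer`, assuming the checkpoints follow the standard wav2vec2.0 CTC interface used in the training code (the per-sensor checkpoint id below is hypothetical; check the collection page for the actual ids):

```python
import torch
from datasets import Audio, load_dataset
from jiwer import cer  # PER computed as a character error rate over phoneme strings
from transformers import AutoModelForCTC, AutoProcessor

MODEL_ID = "Cnam-LMSSC/phonemizer_headset_microphone"  # hypothetical checkpoint id

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCTC.from_pretrained(MODEL_ID)

ds = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
ds = ds.cast_column("audio.headset_microphone", Audio(sampling_rate=16000))
sample = next(iter(ds))

inputs = processor(sample["audio.headset_microphone"]["array"], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
prediction = processor.batch_decode(torch.argmax(logits, dim=-1))[0]

print(prediction)
print("PER:", cer(sample["phonemes"], prediction))  # reference field documented in the data fields section below
```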
### Bandwidth-extension

- Also known as audio super-resolution, this task is required to enhance the audio quality of body-conducted speech. The model is presented with a pair of audio clips (one captured by a body-conduction sensor, and the corresponding clean, full-bandwidth airborne-captured speech), and asked to enhance the audio by denoising and regenerating mid and high frequencies from low-frequency content only.
- **Training code:** An example of implementation of this task using [Configurable EBEN](https://ieeexplore.ieee.org/document/10244161) ([arXiv link](https://arxiv.org/abs/2303.10008)) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
- **Trained models:** We also provide trained models for the BWE task for each of the 6 speech sensors of the Vibravox dataset on Hugging Face at [Cnam-LMSSC/vibravox_EBEN_bwe_models](https://huggingface.co/Cnam-LMSSC/vibravox_EBEN_bwe_models).
- **BWE-enhanced dataset:** An EBEN-enhanced version of the `test` splits of the Vibravox dataset, generated using these 6 BWE models, is also available on Hugging Face at [Cnam-LMSSC/vibravox_enhanced_by_EBEN](https://huggingface.co/datasets/Cnam-LMSSC/vibravox_enhanced_by_EBEN).

### Speaker-verification

- Given an input audio clip and a reference audio clip of a known speaker, the model's objective is to compare the two clips and verify whether they are from the same individual. This often involves extracting embeddings from a deep neural network trained on a large dataset of voices. The model then measures the similarity between these embeddings using techniques such as cosine similarity or a learned distance metric. This task is crucial in applications requiring secure access control, such as biometric authentication systems, where a person's voice acts as a unique identifier.
- **Testing code:** An example of implementation of this task using a pretrained [ECAPA2 model](https://arxiv.org/abs/2401.08342) is available on the [Vibravox Github repository](https://github.com/jhauret/vibravox).
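Once utterance embeddings have been extracted, the verification decision itself reduces to thresholding a similarity score. A minimal, model-agnostic sketch (the embeddings below are random placeholders; in practice they would come from a pretrained extractor such as ECAPA2, and the threshold would be tuned on a validation set, typically at the equal-error-rate operating point):

```python
import torch
import torch.nn.functional as F

def same_speaker(emb_a: torch.Tensor, emb_b: torch.Tensor, threshold: float = 0.5) -> bool:
    """Decide whether two utterance embeddings belong to the same speaker."""
    # Cosine similarity in [-1, 1]: higher means more likely the same speaker
    score = F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()
    return score >= threshold

# Placeholder 192-dimensional embeddings (a common size for ECAPA-style models)
emb_1, emb_2 = torch.randn(192), torch.randn(192)
print(same_speaker(emb_1, emb_2))
```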

### Adding your models for supported tasks or contributing to new tasks

Feel free to contribute on the [Vibravox Github repository](https://github.com/jhauret/vibravox), by following the [contributor guidelines](https://github.com/jhauret/vibravox/blob/main/CONTRIBUTING.md).

---

## DATASET DETAILS

### Dataset Description

VibraVox ([vibʁavɔks]) is a GDPR-compliant dataset released in June 2024. It includes speech recorded simultaneously using multiple audio and vibration sensors (from top to bottom in the following figure):

- a forehead miniature vibration sensor (green)
- an in-ear comply foam-embedded microphone (red)
- an in-ear rigid earpiece-embedded microphone (blue)
- a temple vibration pickup (cyan)
- a headset microphone located near the mouth (purple)
- a laryngophone (orange)

The technology and references of each sensor are described and documented in the [dataset creation](#dataset-creation) section and at [https://vibravox.cnam.fr/documentation/hardware/](https://vibravox.cnam.fr/documentation/hardware/).

### Goals

The VibraVox speech corpus has been recorded with 200 participants under various acoustic conditions imposed by a [5th order ambisonics spatialization sphere](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html).

VibraVox aims to serve as a valuable resource for advancing the field of **body-conducted speech analysis** and facilitating the development of **robust communication systems for real-world applications**. Unlike traditional microphones, which rely on airborne sound waves, body-conduction sensors capture speech signals directly from the body, offering advantages in noisy environments by eliminating the capture of ambient noise. Although body-conduction sensors have been available for decades, their limited bandwidth has restricted their widespread usage; datasets such as VibraVox may help open this technology to the general public for speech capture and communication in noisy environments.
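The bandwidth limitation mentioned above is easy to quantify by comparing the long-term spectra of a body-conduction channel and of the reference headset microphone on the same utterance. A minimal sketch with `scipy` (the 2 kHz cutoff is an arbitrary illustration, not a property of the sensors):

```python
import numpy as np
from datasets import load_dataset
from scipy.signal import welch

ds = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
sample = next(iter(ds))

for column in ["audio.headset_microphone", "audio.throat_microphone"]:
    audio = sample[column]
    # Long-term average spectrum of the utterance (Welch periodogram)
    freqs, psd = welch(np.asarray(audio["array"]), fs=audio["sampling_rate"], nperseg=4096)
    # Fraction of signal power above 2 kHz: expected to be much lower for body-conduction sensors
    high_band = psd[freqs > 2000].sum() / psd.sum()
    print(f"{column}: {100 * high_band:.1f}% of power above 2 kHz")
```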
### Data / sensor mapping

Even if the names of the columns in the Vibravox dataset are self-explanatory, here is the mapping, with information on the positioning of sensors and their technology:

| Vibravox dataset column name | Sensor | Location | Technology |
|:------------------------------------|:-------------------------------------------|:-----------------|:----------------------------------------------------|
| `audio.headset_microphone` | Headset microphone | Near the mouth | Cardioid electrodynamic microphone |
| `audio.throat_microphone` | Laryngophone | Throat / larynx | Piezoelectric sensor |
| `audio.soft_in_ear_microphone` | In-ear soft foam-embedded microphone | Right ear canal | Omnidirectional electret condenser microphone |
| `audio.rigid_in_ear_microphone` | In-ear rigid earpiece-embedded microphone | Left ear canal | Omnidirectional MEMS microphone |
| `audio.forehead_accelerometer` | Forehead vibration sensor | Frontal bone | One-axis accelerometer |
| `audio.temple_vibration_pickup` | Temple vibration pickup | Zygomatic bone | Figure-of-eight pre-polarized condenser transducer |

---

## DATASET STRUCTURE

### Subsets

Each of the 4 subsets contains **6 columns of audio data**, corresponding to the 5 different body-conduction sensors, plus the standard headset microphone. Recording was carried out simultaneously on all 6 sensors, **audio files being sampled at 48 kHz and encoded as .wav PCM32 files**.

The 4 subsets correspond to:

- **`speech_clean`**: the speaker reads sentences sourced from the French Wikipedia. This subset contains the most data and is intended for training on the various tasks.
- **`speech_noisy`**: the speaker reads sentences sourced from the French Wikipedia, in a noisy environment based on ambisonic recordings replayed in a spatialization sphere equipped with 56 loudspeakers surrounding the speaker. It is primarily intended for testing the various systems (speech enhancement, automatic speech recognition, speaker verification) developed on the basis of the recordings from `speech_clean`.
- **`speechless_clean`**: the wearers of the devices remain speechless in complete silence, but are free to move their bodies and faces, and can swallow and breathe naturally. These samples are valuable for tasks such as heart-rate tracking or simply analyzing the noise properties of the various microphones, but can also be conveniently used to generate synthetic datasets with realistic physiological (and sensor-inherent) noise captured by body-conduction sensors.
- **`speechless_noisy`**: the wearers of the devices remain speechless in a noisy environment created using [AudioSet](https://research.google.com/audioset/) noise samples. These samples have been selected from relevant classes, normalized in loudness, pseudo-spatialized and played from random directions around the participant using a [5th order ambisonic 3D sound spatializer](https://vibravox.cnam.fr/documentation/hardware/sphere/index.html) equipped with 56 loudspeakers. The objective of this subset is to gather background noises, captured by body-conduction sensors with the inherent attenuation of each sensor on different device wearers, that can be combined with the `speech_clean` recordings for **realistic data augmentation** while keeping a clean reference (see the mixing sketch after the Splits section below).

### Splits

All the subsets are available in 3 splits (`train`, `validation` and `test`), with a standard 80% / 10% / 10% repartition and no speaker overlap between splits. The split assignment of speakers / participants is the same for every subset, which allows you to:

- use `speechless_noisy` for data augmentation, for example;
- test models trained on the `speech_clean` train split directly on the `speech_noisy` test split, without worrying that a speaker was seen during training.
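As an illustration of this augmentation strategy, here is a minimal sketch that mixes a `speechless_noisy` clip into a `speech_clean` utterance from the same sensor at a chosen signal-to-noise ratio (the sensor and the 0 dB SNR are arbitrary choices; a real pipeline would randomize both and batch the operation):

```python
import numpy as np
from datasets import load_dataset

SENSOR = "audio.forehead_accelerometer"  # use the same sensor for speech and noise

speech_ds = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train", streaming=True)
noise_ds = load_dataset("Cnam-LMSSC/vibravox", "speechless_noisy", split="train", streaming=True)

speech = np.asarray(next(iter(speech_ds))[SENSOR]["array"])
noise = np.asarray(next(iter(noise_ds))[SENSOR]["array"])

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add body-conducted noise to clean speech at the requested SNR."""
    noise = noise[: len(speech)]  # speechless_noisy clips (~144 s) are longer than utterances
    speech_power = np.mean(speech**2)
    noise_power = np.mean(noise**2) + 1e-12  # avoid division by zero
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

noisy_speech = mix_at_snr(speech, noise, snr_db=0.0)  # the clean clip remains the reference
```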
### Data Fields

In non-streaming mode (default), the `path` value of every `datasets.Audio` dictionary points to the locally extracted audio file. In streaming mode, `path` is the relative path of the audio file inside its archive (as files are not downloaded and extracted locally).

**Common data fields for all subsets:**

* `audio.headset_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.forehead_accelerometer` (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
* `audio.soft_in_ear_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.rigid_in_ear_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.temple_vibration_pickup` (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
* `audio.throat_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
* `gender` (string) - gender of speaker (`male` or `female`)
* `speaker_id` (string) - encrypted id of speaker
* `duration` (float32) - the audio length in seconds

**Extra data fields for `speech_clean` and `speech_noisy` subsets:**

The **speech** subsets have additional columns corresponding to the pronounced sentences, which are absent from the **speechless** subsets:

* `sentence_id` (int) - id of the pronounced sentence
* `raw_text` (string) - audio segment text (cased and with punctuation preserved)
* `normalized_text` (string) - audio segment normalized text: lower-cased, no punctuation, diacritics replaced by the standard 26 letters of the French alphabet, except for é, è, ê and ç -- which hold phonetic significance -- plus the space character, which amounts to 31 possible characters: `[' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'ç', 'è', 'é', 'ê']`
* `phonemes` (string) - audio segment phonemized text, using exclusively the 33 strict French IPA characters listed below

### Phonemes list and tokenizer

- The 33 strict French IPA characters used in VibraVox (plus the space character) are: `[' ', 'a', 'b', 'd', 'e', 'f', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 's', 't', 'u', 'v', 'w', 'y', 'z', 'ø', 'ŋ', 'œ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɡ', 'ɲ', 'ʁ', 'ʃ', 'ʒ', '̃']`.
- For convenience and research reproducibility, we provide a tokenizer for speech-to-phoneme tasks that corresponds to those phonemes at [https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer](https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer).
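A hedged sketch of using this tokenizer, assuming it loads through the generic `transformers` tokenizer API (the example string is taken from the data instance shown below):

```python
from transformers import AutoTokenizer

# Assumption: the repository exposes standard tokenizer files loadable with AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Cnam-LMSSC/vibravox-phonemes-tokenizer")

ids = tokenizer("sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃")["input_ids"]
print(ids)
print(tokenizer.decode(ids))
# The vocabulary is expected to cover the 33 strict French IPA characters,
# the space delimiter, and the usual special tokens
print(tokenizer.vocab_size)
```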
### Examples of data instances

#### `speech_clean` or `speech_noisy` subsets:

```python
{
  'audio.headset_microphone': {
    'path': '02472_headset_mic.wav',
    'array': array([ 0.00045776,  0.00039673,  0.0005188 , ..., -0.00149536, -0.00094604,  0.00036621]),
    'sampling_rate': 48000},
  'audio.forehead_accelerometer': {
    'path': '02472_forehead_accelerometer.wav',
    'array': array([ 0.0010376 , -0.00045776, -0.00085449, ..., -0.00491333, -0.00524902, -0.00302124]),
    'sampling_rate': 48000},
  'audio.soft_in_ear_microphone': {
    'path': '02472_soft_in_ear_mic.wav',
    'array': array([-0.06472778, -0.06384277, -0.06292725, ..., -0.02133179, -0.0213623 , -0.02145386]),
    'sampling_rate': 48000},
  'audio.rigid_in_ear_microphone': {
    'path': '02472_rigid_in_ear_mic.wav',
    'array': array([-0.01824951, -0.01821899, -0.01812744, ..., -0.00387573, -0.00427246, -0.00439453]),
    'sampling_rate': 48000},
  'audio.temple_vibration_pickup': {
    'path': '02472_temple_vibration_pickup.wav',
    'array': array([-0.0177002 , -0.01791382, -0.01745605, ...,  0.01098633,  0.01260376,  0.01220703]),
    'sampling_rate': 48000},
  'audio.throat_microphone': {
    'path': '02472_laryngophone.wav',
    'array': array([-2.44140625e-04, -3.05175781e-05,  2.13623047e-04, ...,  4.88281250e-04,  4.27246094e-04,  3.66210938e-04]),
    'sampling_rate': 48000},
  'gender': 'female',
  'speaker_id': 'qt4TPMEPwF',
  'sentence_id': 2472,
  'duration': 4.5,
  'raw_text': "Cette mémoire utilise le changement de phase du verre pour enregistrer l'information.",
  'normalized_text': 'cette mémoire utilise le changement de phase du verre pour enregistrer l information',
  'phonemes': 'sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz dy vɛʁ puʁ ɑ̃ʁʒistʁe lɛ̃fɔʁmasjɔ̃'
}
```

#### `speechless_clean` or `speechless_noisy` subsets (thus missing the text-related fields):

```python
{
  'audio.headset_microphone': {
    'path': 'jMngOy7BdQ_headset_mic.wav',
    'array': array([-1.92260742e-03, -2.44140625e-03, -2.99072266e-03, ...,  0.00000000e+00,  3.05175781e-05, -3.05175781e-05]),
    'sampling_rate': 48000},
  'audio.forehead_accelerometer': {
    'path': 'jMngOy7BdQ_forehead_accelerometer.wav',
    'array': array([-0.0032959 , -0.00259399,  0.00177002, ..., -0.00073242, -0.00076294, -0.0005188 ]),
    'sampling_rate': 48000},
  'audio.soft_in_ear_microphone': {
    'path': 'jMngOy7BdQ_soft_in_ear_mic.wav',
    'array': array([0.00653076, 0.00671387, 0.00683594, ..., 0.00045776, 0.00042725, 0.00042725]),
    'sampling_rate': 48000},
  'audio.rigid_in_ear_microphone': {
    'path': 'jMngOy7BdQ_rigid_in_ear_mic.wav',
    'array': array([ 1.05895996e-02,  1.03759766e-02,  1.05590820e-02, ...,  0.00000000e+00, -3.05175781e-05, -9.15527344e-05]),
    'sampling_rate': 48000},
  'audio.temple_vibration_pickup': {
    'path': 'jMngOy7BdQ_temple_vibration_pickup.wav',
    'array': array([-0.00082397, -0.0020752 , -0.0012207 , ..., -0.00738525, -0.00814819, -0.00579834]),
    'sampling_rate': 48000},
  'audio.throat_microphone': {
    'path': 'jMngOy7BdQ_laryngophone.wav',
    'array': array([ 0.00000000e+00,  3.05175781e-05,  1.83105469e-04, ..., -6.10351562e-05, -1.22070312e-04, -9.15527344e-05]),
    'sampling_rate': 48000},
  'gender': 'male',
  'speaker_id': 'jMngOy7BdQ',
  'duration': 54.097
}
```

---

## DATA STATISTICS

### Speakers gender balance

To increase the representativeness and inclusivity of the dataset, a deliberate effort was made to recruit a diverse and gender-balanced group of speakers. The overall gender repartition in terms of number of speakers included in the dataset is **51.6% female participants / 48.4% male participants for all subsets**.

### Speakers age balance

| Gender  | Mean age (years) | Median age (years) | Min age (years) | Max age (years) |
|:--------|:-----------------|:-------------------|:----------------|:----------------|
| Female  | 25.9             | 22                 | 19              | 59              |
| Male    | 31.4             | 27                 | 18              | 82              |
| **All** | **28.55**        | **25**             | **18**          | **82**          |

### Audio data

| Subset | Split | Audio duration (hours) | Number of audio clips | Download size | Number of speakers<br/>(Female/Male) | F/M gender repartition<br/>(audio duration) | Mean audio duration (s) | Median audio duration (s) | Max audio duration (s) | Min audio duration (s) |
|:-------------------|:--------------------------------------|:----------------------------|:--------------------------------|:-------------------------------|:-------------------------------|:--------------------------------------------------|:-----------------------|:-----------------------|:-----------------------|:-----------------------|
| `speech_clean` | `train`<br/>`validation`<br/>`test` | 6x20.94<br/>6x2.42<br/>6x3.03 | 6x20,981<br/>6x2,523<br/>6x3,064 | 108.32GB<br/>12.79GB<br/>15.84GB | 77F/72M<br/>9F/9M<br/>11F/10M | 52.46%/47.54%<br/>52.13%/47.87%<br/>55.74%/44.26% | 3.59<br/>3.46<br/>3.56 | 3.50<br/>3.38<br/>3.48 | 12.20<br/>9.44<br/>9.58 | 0.52<br/>0.66<br/>0.58 |
| `speech_noisy` | `train`<br/>`validation`<br/>`test` | 6x1.26<br/>6x0.13<br/>6x0.18 | 6x1,220<br/>6x132<br/>6x175 | 6.52GB<br/>0.71GB<br/>0.94GB | 77F/72M<br/>9F/9M<br/>11F/10M | 54.31%/45.69%<br/>56.61%/43.39%<br/>55.54%/44.46% | 3.71<br/>3.67<br/>3.66 | 3.64<br/>3.47<br/>3.70 | 8.66<br/>7.36<br/>6.88 | 0.46<br/>1.10<br/>1.00 |
| `speechless_clean` | `train`<br/>`validation`<br/>`test` | 6x2.24<br/>6x0.27<br/>6x0.32 | 6x149<br/>6x18<br/>6x21 | 8.44GB<br/>1.02GB<br/>1.19GB | 77F/72M<br/>9F/9M<br/>11F/10M | 51.68%/48.32%<br/>50.00%/50.00%<br/>52.38%/47.62% | 54.10<br/>54.10<br/>54.10 | 54.10<br/>54.10<br/>54.10 | 54.10<br/>54.10<br/>54.10 | 53.99<br/>54.05<br/>54.10 |
| `speechless_noisy` | `train`<br/>`validation`<br/>`test` | 6x5.96<br/>6x0.72<br/>6x0.84 | 6x149<br/>6x18<br/>6x21 | 24.48GB<br/>2.96GB<br/>3.45GB | 77F/72M<br/>9F/9M<br/>11F/10M | 51.68%/48.32%<br/>50.00%/50.00%<br/>52.38%/47.62% | 144.03<br/>144.03<br/>144.04 | 144.03<br/>144.03<br/>144.03 | 144.17<br/>144.05<br/>144.05 | 143.84<br/>143.94<br/>144.03 |
| **Total** | | **6x38.31** | **6x28,471** | **186.64GB** | **97F/91M** | **52.55%/47.45%** | | | | |

---

## DATASET CREATION

### Textual source data

The text read by all participants was collected from the French Wikipedia subset of Common Voice ([link1](https://github.com/common-voice/common-voice/blob/6e43e7e61318bf4605b59379e3f35ba5333d7a29/server/data/fr/wiki-1.fr.txt), [link2](https://github.com/common-voice/common-voice/blob/6e43e7e61318bf4605b59379e3f35ba5333d7a29/server/data/fr/wiki-2.fr.txt)).

We applied some additional filters to these textual datasets in order to create a simplified dataset with a minimum number of tokens and to reduce the uncertainty of the pronunciation of some proper names. We therefore removed all proper names except common first names and the names of French towns. We also removed any utterance that contains numbers, Greek letters or math symbols, or that is syntactically incorrect.

All lines of the Wikipedia-extracted textual source data were then phonemized using [bootphon/phonemizer](https://github.com/bootphon/phonemizer) and manually edited to keep only strict French IPA characters.
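A minimal sketch of this phonemization step with the [phonemizer](https://github.com/bootphon/phonemizer) library (the exact backend and options used for VibraVox are not specified here; `espeak` with French is shown as a plausible configuration):

```python
from phonemizer import phonemize

sentence = "Cette mémoire utilise le changement de phase du verre pour enregistrer l'information."

# French phonemization with the espeak backend; for VibraVox, the output was
# additionally edited by hand to keep only the 33 strict French IPA characters.
ipa = phonemize(sentence, language="fr-fr", backend="espeak", strip=True)
print(ipa)
```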
### Audio Data Collection

#### Sensors positioning and documentation

| **Sensor** | **Image** | **Transducer** | **Online documentation** |
|:---------------------------|:---------------------|:-------------|:------------------------------------------------------------------------------------------------------------------------|
| Reference headset microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/iVYX1_7wAdZb4oDrc9v6l.png) | Shure WH20 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/airborne/index.html) |
| In-ear comply foam-embedded microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/Uf1VOwx-kxPiYY1oMW5pz.png) | Knowles FG-23329-P07 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/soft_inear/index.html) |
| In-ear rigid earpiece-embedded microphone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/EBY9dIKFN8GDaDXUuhp7n.png) | Knowles SPH1642HT5H | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/rigid_inear/index.html) |
| Forehead miniature vibration sensor | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/2zHrN-7OpbH-zJTqASZ7J.png) | Knowles BU23173-000 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/forehead/index.html) |
| Temple vibration pickup | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/wAcTQlmzvl0O4kNyA3MnC.png) | AKG C411 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/temple/index.html) |
| Laryngophone | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6390fc80e6d656eb421bab69/4SGNSgXYc6hBJcI1cRXY_.png) | iXRadio XVTM822D-D35 | [See documentation on vibravox.cnam.fr](https://vibravox.cnam.fr/documentation/hardware/sensors/throat/index.html) |

#### Recorded audio data post-processing

Across the sentences collected from the participants, a small number of audio clips exhibited various shortcomings. Despite researchers monitoring and validating each recording individually, the process was not entirely foolproof: mispronounced sentences, sensors shifting from their initial positions, or more significant microphone malfunctions occasionally occurred. In instances where sensors were functional but not ideally positioned -- such as when the participant's ear canal was too small for the rigid in-ear microphone to achieve proper acoustic sealing -- we chose to retain samples where the bandwidth was slightly narrower than desired. This decision was made to enhance the robustness of our models against the effects of misplaced sensors.

To address those occasional shortcomings and offer a high-quality dataset, we implemented a series of 3 automatic filters to retain only the best audio from the `speech_clean` subset. We preserved only those sentences where all sensors were in optimal recording condition, adhering to predefined criteria defined in [our paper](https://arxiv.org/abs/2407.11828):

- The first filter runs a pre-trained ASR model on the headset microphone data, which allows detecting discrepancies between the labeled transcription and the actual pronunciation, ensuring high-quality labels for the speech-to-phoneme task.
- The second filter confirms that each sensor is functioning correctly by verifying that speech exhibits higher energy than silence, thereby identifying potentially unreliable recordings with low vocal energy levels or sensor malfunction.
- The third filter detects sensitivity drift in the sensors, which can occur due to electronic malfunctions or mechanical blockages in the transducer.
- If an audio clip passes all filters, it is not immediately added to the dataset. Instead, VAD-generated timestamps from [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) are used, extended by 0.3 seconds on both sides. This method helps remove mouse clicks at audio boundaries and ensures the capture of vocal segments without excluding valid speech portions.

### Personal and Sensitive Information

The VibraVox dataset does not contain any data that might be considered as personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.).

Each `speaker_id` was generated using the Fernet encryption algorithm followed by the extraction of a subset of the encrypted id, guaranteeing a strict anonymisation of the voice recordings while allowing the dataset maintainers to delete the corresponding data under the right to be forgotten.

A [consent form](https://vibravox.cnam.fr/documentation/consent/index.html) has been signed by each participant to the VibraVox dataset. This consent form has been approved by the Cnam lawyer. All [CNIL](https://www.cnil.fr/en) requirements have been checked, including the right to be forgotten during 50 years.
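As a purely illustrative sketch of this anonymisation scheme (the key management, token slicing and id format actually used for VibraVox are not public, so everything below is an assumption), using the `cryptography` package:

```python
from cryptography.fernet import Fernet

# Hypothetical setup: the real key is held privately by the dataset maintainers
fernet = Fernet(Fernet.generate_key())

def anonymize(identity: str, id_length: int = 10) -> str:
    """Derive a public speaker_id by keeping a subset of a Fernet-encrypted identity."""
    token = fernet.encrypt(identity.encode()).decode()
    # Fernet tokens are not deterministic, so the maintainers must keep the
    # identity -> token mapping to honor deletion requests (right to be forgotten)
    return token.rstrip("=")[-id_length:]

print(anonymize("participant 042"))
```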