---
annotations_creators:
  - expert-generated
language_creators:
  - crowdsourced
  - expert-generated
language:
  - fr
license: cc-by-4.0
multilinguality:
  - monolingual
size_categories:
  - 100K<n<1M
source_datasets: []
task_categories:
  - audio-to-audio
  - automatic-speech-recognition
  - audio-classification
  - text-to-speech
task_ids:
  - speaker-identification
pretty_name: Vibravox
viewer: false
dataset_info:
  - config_name: speech_clean
    features:
      - name: audio.headset_microphone
        dtype: audio
      - name: audio.forehead_accelerometer
        dtype: audio
      - name: audio.soft_in_ear_microphone
        dtype: audio
      - name: audio.rigid_in_ear_microphone
        dtype: audio
      - name: audio.temple_vibration_pickup
        dtype: audio
      - name: audio.throat_microphone
        dtype: audio
      - name: gender
        dtype: string
      - name: speaker_id
        dtype: string
      - name: sentence_id
        dtype: int64
      - name: duration
        dtype: float64
      - name: raw_text
        dtype: string
      - name: normalized_text
        dtype: string
      - name: phonemized_text
        dtype: string
    splits:
      - name: train
        num_bytes: 114682286917
        num_examples: 22015
      - name: validation
        num_bytes: 14829386319
        num_examples: 2912
      - name: test
        num_bytes: 15225001521
        num_examples: 2875
    download_size: 143520155725
    dataset_size: 144736674757
  - config_name: speech_noisy
    features:
      - name: audio.headset_microphone
        dtype: audio
      - name: audio.forehead_accelerometer
        dtype: audio
      - name: audio.soft_in_ear_microphone
        dtype: audio
      - name: audio.rigid_in_ear_microphone
        dtype: audio
      - name: audio.temple_vibration_pickup
        dtype: audio
      - name: audio.throat_microphone
        dtype: audio
      - name: gender
        dtype: string
      - name: speaker_id
        dtype: string
      - name: sentence_id
        dtype: int64
      - name: duration
        dtype: float64
      - name: raw_text
        dtype: string
      - name: normalized_text
        dtype: string
      - name: phonemized_text
        dtype: string
    splits:
      - name: train
        num_bytes: 6647321819
        num_examples: 1245
      - name: validation
        num_bytes: 898599786
        num_examples: 169
      - name: test
        num_bytes: 859721538
        num_examples: 159
    download_size: 8396929848
    dataset_size: 8405643143
  - config_name: speechless_clean
    features:
      - name: audio.headset_microphone
        dtype: audio
      - name: audio.forehead_accelerometer
        dtype: audio
      - name: audio.soft_in_ear_microphone
        dtype: audio
      - name: audio.rigid_in_ear_microphone
        dtype: audio
      - name: audio.temple_vibration_pickup
        dtype: audio
      - name: audio.throat_microphone
        dtype: audio
      - name: gender
        dtype: string
      - name: speaker_id
        dtype: string
      - name: duration
        dtype: float64
    splits:
      - name: train
        num_bytes: 9535241448
        num_examples: 153
      - name: validation
        num_bytes: 1184141822
        num_examples: 19
      - name: test
        num_bytes: 1308607540
        num_examples: 21
    download_size: 10937505148
    dataset_size: 12027990810
  - config_name: speechless_noisy
    features:
      - name: audio.headset_microphone
        dtype: audio
      - name: audio.forehead_accelerometer
        dtype: audio
      - name: audio.soft_in_ear_microphone
        dtype: audio
      - name: audio.rigid_in_ear_microphone
        dtype: audio
      - name: audio.temple_vibration_pickup
        dtype: audio
      - name: audio.throat_microphone
        dtype: audio
      - name: gender
        dtype: string
      - name: speaker_id
        dtype: string
      - name: duration
        dtype: float64
    splits:
      - name: train
        num_bytes: 24723250192
        num_examples: 149
      - name: validation
        num_bytes: 2986606278
        num_examples: 18
      - name: test
        num_bytes: 3484522468
        num_examples: 21
    download_size: 30881658818
    dataset_size: 31194378938
configs:
  - config_name: speech_clean
    data_files:
      - split: train
        path: speech_clean/train-*
      - split: validation
        path: speech_clean/validation-*
      - split: test
        path: speech_clean/test-*
  - config_name: speech_noisy
    data_files:
      - split: train
        path: speech_noisy/train-*
      - split: validation
        path: speech_noisy/validation-*
      - split: test
        path: speech_noisy/test-*
  - config_name: speechless_clean
    data_files:
      - split: train
        path: speechless_clean/train-*
      - split: validation
        path: speechless_clean/validation-*
      - split: test
        path: speechless_clean/test-*
  - config_name: speechless_noisy
    data_files:
      - split: train
        path: speechless_noisy/train-*
      - split: validation
        path: speechless_noisy/validation-*
      - split: test
        path: speechless_noisy/test-*
---

Dataset Card for VibraVox


DATASET SUMMARY

The VibraVox dataset is a general-purpose audio dataset of French speech captured with body-conduction transducers. This dataset can be used for various audio machine learning tasks:

  • Automatic Speech Recognition (ASR) (Speech-to-Text, Speech-to-Phoneme)
  • Audio Bandwidth Extension (BWE)
  • Speaker Verification (SPKV) / identification
  • Voice cloning
  • etc.

Dataset usage

VibraVox contains 4 subsets, corresponding to different situations tailored for specific tasks. To load a specific subset, simply use the following command (subset can be any of: "speech_clean", "speech_noisy", "speechless_clean", "speechless_noisy"):

```python
from datasets import load_dataset
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset)
```

The dataset is also compatible with streaming mode:

```python
from datasets import load_dataset
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset, streaming=True)
```
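
For reference, here is a minimal sketch of iterating over a streamed subset and inspecting one example; the field names other than the "audio." columns are documented in the data-field section below:

```python
from datasets import load_dataset

# Stream the subset so nothing is downloaded ahead of time.
vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", streaming=True)
example = next(iter(vibravox["train"]))

# The six audio columns all share the "audio." prefix.
audio_columns = [name for name in example if name.startswith("audio.")]
first = example[audio_columns[0]]  # dict with "path", "array" and "sampling_rate"

print(audio_columns)
print(example["speaker_id"], example["duration"], example["normalized_text"])
print(first["sampling_rate"], len(first["array"]))
```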

Citations, links and details

If you use the Vibravox dataset for research, please cite this paper:

```bibtex
@article{jhauret-et-al-2024-vibravox,
    title = "{V}ibravox : A general purpose dataset of speech captured with body-conduction microphones",
    author = "Hauret, Julien and Olivier, Malo and Joubaud, Thomas and Langrenne, Christophe and
        Poirée, Sarah and Zimpfer, Véronique and Bavu, Éric",
    journal = "arXiv preprint / TODO : add arXiv reference",
    year = "2024",
}
```

and this repository, which is linked to a DOI:

```bibtex
@misc{cnam-lmssc-2024-vibravox-dataset,
    title = "{V}ibravox",
    author = "Hauret, Julien and Olivier, Malo and Langrenne, Christophe and
        Poirée, Sarah and Bavu, Éric",
    journal = "Huggingface Datasets repository",
    year = "2024",
    publisher = "Huggingface",
    howpublished = {\url{https://huggingface.co/datasets/Cnam-LMSSC/vibravox}},
    doi = "TODO: add doi"
}
```

SUPPORTED TASKS

Automatic-speech-recognition

  • The model is presented with an audio file and asked to transcribe it to written text (either normalized text or phonemized text). The most common evaluation metrics are the word error rate (WER), character error rate (CER), or phoneme error rate (PER).
  • Training code: An example implementation of the speech-to-phoneme task using wav2vec2.0 is available on the Vibravox Github repository.
  • Trained models: We also provide trained models for the speech-to-phoneme task for each of the 6 speech sensors of the Vibravox dataset on Huggingface at Cnam-LMSSC/vibravox_phonemizers (a minimal inference sketch is given after this list).
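
A minimal inference sketch using the transformers ASR pipeline. The checkpoint id below is a placeholder, not an actual repository name: pick one of the per-sensor models from the Cnam-LMSSC/vibravox_phonemizers collection. The audio column name follows this card's data-field documentation.

```python
from datasets import load_dataset
from transformers import pipeline

# Placeholder id: replace with an actual per-sensor checkpoint from the
# Cnam-LMSSC/vibravox_phonemizers collection on the Hub.
model_id = "Cnam-LMSSC/phonemizer-checkpoint-placeholder"
asr = pipeline("automatic-speech-recognition", model=model_id)

test = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
example = next(iter(test))
audio = example["audio.headset_mic"]  # column name as documented in this card

# The pipeline resamples the 48 kHz clip to the rate expected by the model.
prediction = asr({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]})
print(prediction["text"])          # predicted phoneme string
print(example["phonemized_text"])  # reference phonemes
```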

Bandwidth-extension

  • Also known as audio super-resolution, this task is required to enhance the audio quality of body-conducted speech. The model is presented with a pair of audio clips (body-conducted speech and the corresponding clean, full-bandwidth airborne-captured speech) and asked to enhance the audio by denoising and regenerating mid and high frequencies from low-frequency content only.
  • Training code: An example of implementation of this task using Configurable EBEN is available on the Vibravox Github repository.
  • Trained models: We also provide trained models for the BWE task for each of the 6 speech sensors of the Vibravox dataset on Huggingface at Cnam-LMSSC/vibravox_EBEN_bwe_models.
  • BWE-Enhanced dataset: An EBEN-enhanced version of the test splits of the Vibravox dataset, generated using these 6 BWE models, is also available on Huggingface at Cnam-LMSSC/vibravox_enhanced_by_EBEN. A minimal sketch of assembling paired clips for this task is given after this list.
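
As an illustration of the data needed for this task, here is a minimal sketch of assembling time-aligned (body-conduction, reference) pairs from the dataset; this is not the EBEN training pipeline itself, and the column names are those documented in this card:

```python
import numpy as np
from datasets import load_dataset

vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train", streaming=True)

pairs = []
for example in vibravox.take(8):
    # Band-limited body-conducted input and full-bandwidth airborne reference,
    # recorded simultaneously and therefore already time-aligned.
    x = np.asarray(example["audio.laryngophone"]["array"], dtype=np.float32)
    y = np.asarray(example["audio.headset_mic"]["array"], dtype=np.float32)
    pairs.append((x, y))
# ... feed the (x, y) pairs to a bandwidth-extension model such as EBEN
```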

Speaker-verification

  • Given an input audio clip and a reference audio clip of a known speaker, the model's objective is to compare the two clips and verify if they are from the same individual. This often involves extracting embeddings from a deep neural network trained on a large dataset of voices. The model then measures the similarity between these feature sets using techniques like cosine similarity or a learned distance metric. This task is crucial in applications requiring secure access control, such as biometric authentication systems, where a person's voice acts as a unique identifier.
  • Testing code: An example implementation of this task using a pretrained ECAPA2 model is available on the Vibravox Github repository (a minimal verification sketch is given below).
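
For illustration, a minimal verification sketch based on cosine similarity between speaker embeddings; `embed` stands for any pretrained embedding extractor (e.g. an ECAPA2 model) and is a placeholder, not something shipped with this dataset:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(embed, clip_a: np.ndarray, clip_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Decide whether two clips come from the same speaker.

    `embed` maps a waveform to a fixed-size embedding (placeholder for a
    pretrained model); `threshold` must be calibrated on a validation split.
    """
    return cosine_similarity(embed(clip_a), embed(clip_b)) >= threshold
```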

Adding your models for supported tasks or contributing for new tasks

Feel free to contribute at the Vibravox Github repository by following the contributor guidelines.


DATASET DETAILS

Dataset Description

VibraVox ([vibʁavɔks]) is a GDPR-compliant dataset scheduled for release in June 2024. It includes speech recorded simultaneously using multiple audio and vibration sensors (listed from top to bottom as worn on the head):

  • a forehead miniature vibration sensor (green)
  • an in-ear comply foam-embedded microphone (red)
  • an in-ear rigid earpiece-embedded microphone (blue)
  • a temple vibration pickup (cyan)
  • a headset microphone located near the mouth (purple)
  • a laryngophone (orange)

The technology and references of each sensor are described and documented in the dataset creation section and at https://vibravox.cnam.fr/documentation/hardware/.

Goals

The VibraVox speech corpus has been recorded with 200 participants under various acoustic conditions imposed by a 5th order ambisonics spatialization sphere.

VibraVox aims to serve as a valuable resource for advancing the field of body-conducted speech analysis and for facilitating the development of robust communication systems for real-world applications.

Unlike traditional microphones, which rely on airborne sound waves, body-conduction sensors capture speech signals directly from the body, offering an advantage in noisy environments by not capturing ambient noise. Although body-conduction sensors have been available for decades, their limited bandwidth has restricted their widespread use. However, this dataset may help bring this technology to a wider public for speech capture and communication in noisy environments.

Data / sensor mapping

Although the column names in the Vibravox dataset are self-explanatory, here is the mapping, with information on the position of each sensor and its technology:

| Vibravox dataset column name | Sensor | Location | Technology |
|---|---|---|---|
| audio.headset_mic | Headset microphone | Near the mouth | Cardioid electrodynamic microphone |
| audio.laryngophone | Laryngophone | Throat / larynx | Piezoelectric sensor |
| audio.soft_in_ear_mic | In-ear soft foam-embedded microphone | Right ear canal | Omnidirectional electret condenser microphone |
| audio.rigid_in_ear_mic | In-ear rigid earpiece-embedded microphone | Left ear canal | Omnidirectional MEMS microphone |
| audio.forehead_accelerometer | Forehead vibration sensor | Frontal bone | One-axis accelerometer |
| audio.temple_vibration_pickup | Temple vibration pickup | Zygomatic bone | Figure-of-eight pre-polarized condenser transducer |

DATASET STRUCTURE

Subsets

Each of the 4 subsets contains 6 columns of audio data, corresponding to the 5 different body-conduction sensors plus the standard headset microphone.

Recording was carried out simultaneously on all 6 sensors; audio files are sampled at 48 kHz and encoded as 32-bit PCM .wav files.

The 4 subsets correspond to:

  • speech_clean: the speaker reads sentences sourced from the French Wikipedia. This subset contains the most data and is intended for training on various tasks.

  • speech_noisy: the speaker reads sentences sourced from the French Wikipedia in a noisy environment, based on ambisonic recordings replayed in a spatialization sphere equipped with 56 loudspeakers surrounding the speaker. This subset is primarily intended for testing the various systems (speech enhancement, automatic speech recognition, speaker verification) developed on the basis of the recordings from speech_clean.

  • speechless_clean: wearers of the devices remain speechless in complete silence, but are free to move their bodies and faces, and can swallow and breathe naturally. These samples are valuable for tasks such as heart rate tracking or analyzing the noise properties of the various microphones, and can also be used to generate synthetic datasets with realistic physiological (and sensor-inherent) noise as captured by body-conduction sensors.

  • speechless_noisy: wearers of the devices remain speechless in a noisy environment created using AudioSet noise samples. These samples were selected from relevant classes, normalized in loudness, pseudo-spatialized and played from random directions around the participant using a 5th order ambisonic 3D sound spatializer equipped with 56 loudspeakers. The objective of this subset is to gather background noise that can be combined with the speech_clean recordings (which keep a clean reference), enabling realistic data augmentation with noise captured by the body-conduction sensors themselves, including the inherent attenuation of each sensor on different device wearers. A minimal mixing sketch is given after this list.
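
A minimal mixing sketch, assuming a simple energy-based gain to reach a target SNR (an illustration, not the authors' augmentation recipe); the same sensor column is used on both the speech and the noise side:

```python
import numpy as np
from datasets import load_dataset

speech = next(iter(load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train", streaming=True)))
noise = next(iter(load_dataset("Cnam-LMSSC/vibravox", "speechless_noisy", split="train", streaming=True)))

sensor = "audio.forehead_accelerometer"  # use the same sensor on both sides
s = np.asarray(speech[sensor]["array"], dtype=np.float32)
n = np.asarray(noise[sensor]["array"], dtype=np.float32)

# Crop a random noise segment of the same length and mix at a target SNR.
snr_db = 5.0
start = np.random.randint(0, len(n) - len(s))
n = n[start:start + len(s)]
gain = np.sqrt(np.mean(s ** 2) / (np.mean(n ** 2) * 10 ** (snr_db / 10)))
noisy = s + gain * n
```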

Splits

All the subsets are available in 3 splits (train, validation and test), with a standard 80% / 10% / 10% repartition and no speaker overlap between splits.

The speakers / participants assigned to a given split are the same for every subset, which makes it possible to:

  • use the speechless_noisy subset for data augmentation, for example
  • test models trained on the speech_clean train set on the speech_noisy test set, without having to worry that a speaker was seen during training (a quick disjointness check is sketched below)
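
A quick sketch of that disjointness check (note that non-streaming mode downloads the whole subset):

```python
from datasets import load_dataset

vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_noisy")  # relatively small subset

speakers = {split: set(vibravox[split]["speaker_id"]) for split in ("train", "validation", "test")}
assert speakers["train"].isdisjoint(speakers["validation"])
assert speakers["train"].isdisjoint(speakers["test"])
assert speakers["validation"].isdisjoint(speakers["test"])
```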

Data Fields

In non-streaming mode (default), the path value of each datasets.Audio dictionary points to the locally extracted audio file. In streaming mode, the path is the relative path of the audio file inside its archive (as files are not downloaded and extracted locally).

Common Data Fields for all subsets:

  • audio.headset_mic (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
  • audio.forehead_accelerometer (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
  • audio.soft_in_ear_mic (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
  • audio.rigid_in_ear_mic (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
  • audio.temple_vibration_pickup (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
  • audio.laryngophone (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
  • gender (string) - gender of the speaker (male or female)
  • speaker_id (string) - encrypted id of the speaker
  • duration (float64) - the audio length in seconds.
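
For example, the audio columns can be decoded at a different sampling rate with datasets' cast_column and the Audio feature; a usage sketch, assuming the native 48 kHz audio needs to be resampled to 16 kHz for a pretrained model (the column name follows this card's documentation):

```python
from datasets import Audio, load_dataset

vibravox = load_dataset("Cnam-LMSSC/vibravox", "speech_noisy", split="test")

# Decode the headset channel at 16 kHz instead of the native 48 kHz.
vibravox = vibravox.cast_column("audio.headset_mic", Audio(sampling_rate=16000))

print(vibravox[0]["audio.headset_mic"]["sampling_rate"])  # 16000
```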

Extra Data Fields for speech_clean and speech_noisy subsets:

For the speech subsets, the dataset has additional columns corresponding to the pronounced sentences, which are absent from the speechless subsets:

  • sentence_id (int) - id of the pronounced sentence
  • raw_text (string) - audio segment text (cased and with punctuation preserved)
  • normalized_text (string) - audio segment normalized text: lower-cased, no punctuation, diacritics replaced by the standard 26 letters of the French alphabet, plus 3 accented characters (é, è, ê) and ç, which hold phonetic significance, and the space character. This corresponds to 31 possible characters: [' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'ç', 'è', 'é', 'ê'].
  • phonemized_text (string) - audio segment phonemized text, using exclusively the strict French IPA characters listed below

Phonemes list and tokenizer

  • The strict French IPA characters used in Vibravox are: [' ', 'a', 'b', 'd', 'e', 'f', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 's', 't', 'u', 'v', 'w', 'y', 'z', 'ø', 'ŋ', 'œ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɡ', 'ɲ', 'ʁ', 'ʃ', 'ʒ', '̃'].
  • For convenience and research reproducibility, we provide a tokenizer for speech-to-phoneme tasks that corresponds to these phonemes at https://huggingface.co/Cnam-LMSSC/vibravox-phonemes-tokenizer (a loading sketch is given below).
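
A loading sketch, assuming the tokenizer repository can be loaded through transformers' AutoTokenizer:

```python
from transformers import AutoTokenizer

# Assumption: the repo hosts a CTC-style tokenizer whose vocabulary is the
# phoneme set listed above.
tokenizer = AutoTokenizer.from_pretrained("Cnam-LMSSC/vibravox-phonemes-tokenizer")

ids = tokenizer("sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz").input_ids
print(ids)
print(tokenizer.decode(ids))
```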

Examples of data Instances

speech_clean or speech_noisy subsets:

```python
{
    'audio.headset_mic': {
        'path': '02472_headset_mic.wav',
        'array': array([ 0.00045776,  0.00039673,  0.0005188 , ..., -0.00149536,
                        -0.00094604,  0.00036621]),
        'sampling_rate': 48000},
    'audio.forehead_accelerometer': {
        'path': '02472_forehead_accelerometer.wav',
        'array': array([ 0.0010376 , -0.00045776, -0.00085449, ..., -0.00491333,
                        -0.00524902, -0.00302124]),
        'sampling_rate': 48000},
    'audio.soft_in_ear_mic': {
        'path': '02472_soft_in_ear_mic.wav',
        'array': array([-0.06472778, -0.06384277, -0.06292725, ..., -0.02133179,
                        -0.0213623 , -0.02145386]),
        'sampling_rate': 48000},
    'audio.rigid_in_ear_mic': {
     'path': '02472_rigid_in_ear_mic.wav',
     'array': array([-0.01824951, -0.01821899, -0.01812744, ..., -0.00387573,
                     -0.00427246, -0.00439453]),
        'sampling_rate': 48000},
    'audio.temple_vibration_pickup':{
        'path': '02472_temple_vibration_pickup.wav',
        'array': array([-0.0177002 , -0.01791382, -0.01745605, ...,  0.01098633,
                        0.01260376,  0.01220703]),
        'sampling_rate': 48000},
    'audio.laryngophone': {
        'path': '02472_laryngophone.wav',
        'array': array([-2.44140625e-04, -3.05175781e-05,  2.13623047e-04, ...,
                        4.88281250e-04,  4.27246094e-04,  3.66210938e-04]),
        'sampling_rate': 48000},
    'gender': 'female',
    'speaker_id': 'qt4TPMEPwF',
    'sentence_id': 2472,
    'duration': 4.5,
    'raw_text': "Cette mémoire utilise le changement de phase du verre pour enregistrer l'information.",
    'normalized_text': 'cette mémoire utilise le changement de phase du verre pour enregistrer l information',
    'phonemized_text': 'sɛt memwaʁ ytiliz lə ʃɑ̃ʒmɑ̃ də faz dy vɛʁ puʁ ɑ̃ʁʒistʁe lɛ̃fɔʁmasjɔ̃'
}
```

speechless_clean or speechless_noisy subsets

(thus missing the text-related fields)

```python
{
    'audio.headset_mic': {
        'path': 'jMngOy7BdQ_headset_mic.wav',
        'array': array([-1.92260742e-03, -2.44140625e-03, -2.99072266e-03, ...,
                        0.00000000e+00,  3.05175781e-05, -3.05175781e-05]),
        'sampling_rate': 48000},
    'audio.forehead_accelerometer': {
        'path': 'jMngOy7BdQ_forehead_accelerometer.wav',
        'array': array([-0.0032959 , -0.00259399,  0.00177002, ..., -0.00073242,
                        -0.00076294, -0.0005188 ]),
        'sampling_rate': 48000},
    'audio.soft_in_ear_mic': {
        'path': 'jMngOy7BdQ_soft_in_ear_mic.wav',
        'array': array([0.00653076, 0.00671387, 0.00683594, ..., 0.00045776, 0.00042725,
                       0.00042725]),
        'sampling_rate': 48000},
    'audio.rigid_in_ear_mic': {
        'path': 'jMngOy7BdQ_rigid_in_ear_mic.wav',
        'array': array([ 1.05895996e-02,  1.03759766e-02,  1.05590820e-02, ...,
                        0.00000000e+00, -3.05175781e-05, -9.15527344e-05]),
        'sampling_rate': 48000},
    'audio.temple_vibration_pickup': {
        'path': 'jMngOy7BdQ_temple_vibration_pickup.wav',
        'array': array([-0.00082397, -0.0020752 , -0.0012207 , ..., -0.00738525,
                        -0.00814819, -0.00579834]), 'sampling_rate': 48000},
    'audio.laryngophone': {
        'path': 'jMngOy7BdQ_laryngophone.wav',
        'array': array([ 0.00000000e+00,  3.05175781e-05,  1.83105469e-04, ...,
                        -6.10351562e-05, -1.22070312e-04, -9.15527344e-05]),
        'sampling_rate': 48000},
    'gender': 'male',
    'speaker_id': 'jMngOy7BdQ',
    'duration': 54.097
}
```

DATA STATISTICS

Speakers gender balance

To increase the representativeness and inclusivity of the dataset, a deliberate effort was made to recruit a diverse and gender-balanced group of speakers: the overall male/female repartition, in terms of number of speakers included in the dataset, is 48.3% / 51.6% for all subsets.

Speakers age balance

TODO : update values when final dataset is uploaded

| Quantity | Mean | Median | Min | Max |
|---|---|---|---|---|
| Age, all speakers (years) | 27.62 | 24.00 | 18.00 | 82.00 |
| Age, male speakers (years) | 31.00 | 27.00 | 18.00 | 82.00 |
| Age, female speakers (years) | 24.50 | 22.00 | 19.00 | 59.00 |

Audio data

TODO : update values when final dataset is uploaded

| Subset / split | Audio duration | # of audio clips | Download size | # of speakers (M/F) | Gender repartition M/F (in audio duration) |
|---|---|---|---|---|---|
| speech_clean/train | 6 x 23.5 h | 6 x 18800 | 46.8 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| speech_clean/validation | 6 x 2.9 h | 6 x 2510 | 6.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| speech_clean/test | 6 x 2.8 h | 6 x 2787 | 6.8 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| speech_noisy/train | 6 x 1.1 h | 6 x 845 | 2.2 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| speech_noisy/validation | 6 x 0.2 h | 6 x 118 | 0.3 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| speech_noisy/test | 6 x 0.17 h | 6 x 97 | 0.25 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| speechless_clean/train | 6 x 2.35 h | 6 x 157 | 4.5 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| speechless_clean/validation | 6 x 0.3 h | 6 x 20 | 0.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| speechless_clean/test | 6 x 0.28 h | 6 x 19 | 0.5 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| speechless_noisy/train | 6 x 6.3 h | 6 x 157 | 12.1 GiB | 157 (76 M, 81 F) | 48.3% / 51.6% |
| speechless_noisy/validation | 6 x 0.8 h | 6 x 20 | 1.5 GiB | 20 (10 M, 10 F) | 48.3% / 51.6% |
| speechless_noisy/test | 6 x 0.76 h | 6 x 19 | 1.45 GiB | 19 (9 M, 10 F) | 48.3% / 51.6% |
| Total | 6 x 41.5 h | 6 x 25549 | 83.4 GiB | 196 (95 M, 101 F) | 48.3% / 51.6% |

Audio clip durations

TODO : update values when final dataset is uploaded

| Subset / split | Mean | Median | Max | Min |
|---|---|---|---|---|
| speech_clean/train | 4.05 s | 3.96 s | 11.20 s | 0.90 s |
| speech_clean/validation | 4.24 s | 4.22 s | 8.66 s | 1.18 s |
| speech_clean/test | 4.05 s | 3.94 s | 9.66 s | 1.12 s |
| speech_noisy/train | 4.28 s | 4.17 s | 8.48 s | 0.82 s |
| speech_noisy/validation | 4.62 s | 4.57 s | 7.48 s | 1.16 s |
| speech_noisy/test | 4.30 s | 4.30 s | 7.94 s | 1.58 s |
| speechless_clean/train | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| speechless_clean/validation | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| speechless_clean/test | 54.10 s | 54.10 s | 54.10 s | 54.10 s |
| speechless_noisy/train | 144.04 s | 144.03 s | 144.05 s | 144.02 s |
| speechless_noisy/validation | 144.03 s | 144.03 s | 144.04 s | 144.03 s |
| speechless_noisy/test | 144.04 s | 144.03 s | 144.05 s | 144.03 s |

DATASET CREATION

Textual source data

The text read by all participants is collected from the French Wikipedia subset of Common Voice (link1, link2). We applied additional filters to these textual datasets in order to create a simplified dataset with a minimum number of tokens and to reduce the uncertainty of the pronunciation of some proper names. We therefore removed all proper names except common first names and the names of French towns. We also removed any utterances containing numbers, Greek letters or math symbols, or that were syntactically incorrect.

All lines of the Wikipedia-extracted textual source data were then phonemized using bootphon/phonemizer and manually edited to keep only strict French IPA characters.
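
A minimal sketch of this phonemization step with bootphon/phonemizer; the backend and options below are assumptions, as the exact configuration is not documented here, and the output was then manually edited:

```python
from phonemizer import phonemize

text = "cette mémoire utilise le changement de phase du verre pour enregistrer l information"

# Assumed options: espeak backend with a French voice.
phonemes = phonemize(text, language="fr-fr", backend="espeak", strip=True)
print(phonemes)
```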

Audio Data Collection

Sensors positioning and documentation

| Sensor | Transducer | Online documentation |
|---|---|---|
| Reference headset microphone | Shure WH20 | See documentation on vibravox.cnam.fr |
| In-ear comply foam-embedded microphone | Knowles FG-23329-P07 | See documentation on vibravox.cnam.fr |
| In-ear rigid earpiece-embedded microphone | Knowles SPH1642HT5H | See documentation on vibravox.cnam.fr |
| Forehead miniature vibration sensor | Knowles BU23173-000 | See documentation on vibravox.cnam.fr |
| Temple vibration pickup | AKG C411 | See documentation on vibravox.cnam.fr |
| Laryngophone | iXRadio XVTM822D-D35 | See documentation on vibravox.cnam.fr |

Recorded audio data post-processing

Across the sentences collected from the 200 participants, a small number of audio clips exhibited various shortcomings. Despite researchers monitoring and validating each recording individually, the process was not entirely foolproof: mispronounced sentences, sensors shifting from their initial positions, or more significant microphone malfunctions occasionally occurred. In instances where sensors were functional but not ideally positioned (such as when the participant's ear canal was too small for the rigid in-ear microphone to achieve proper acoustic sealing), we chose to retain samples where the bandwidth was slightly narrower than desired. This decision was made to enhance the robustness of our models against the effects of misplaced sensors.

To address these occasional shortcomings and offer a high-quality dataset, we implemented a series of 3 automatic filters to retain only the best audio from the speech_clean subset. We preserved only those sentences where all sensors were in optimal recording condition, adhering to predefined criteria defined in the paper (TODO : add link to arxiv paper when uploaded):

  • The first filter runs a pre-trained ASR model on the headset microphone data, which makes it possible to detect discrepancies between the labeled transcription and the actual pronunciation, ensuring high-quality labels for the speech-to-phoneme task.
  • The second filter confirms that each sensor is functioning correctly by verifying that speech exhibits higher energy than silence, thereby identifying potentially unreliable recordings with low vocal energy levels or sensor malfunction (an illustrative sketch is given after this list).
  • The third filter detects sensitivity drift in the sensors, which can occur due to electronic malfunctions or mechanical blockages in the transducer.
  • If an audio clip passes all filters, it is not immediately added to the dataset. Instead, VAD-generated timestamps from whisper-timestamped are used, extending them by 0.3 seconds on both sides. This method helps remove mouse clicks at audio boundaries and ensures the capture of vocal segments without excluding valid speech portions.
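
As an illustration of the second filter's principle, here is a sketch of an energy check comparing speech and silence segments; the actual criteria and thresholds used for Vibravox are defined in the paper:

```python
import numpy as np

def passes_energy_check(signal: np.ndarray, speech_mask: np.ndarray, margin_db: float = 6.0) -> bool:
    """Illustrative check: speech samples should carry markedly more energy than silence.

    `speech_mask` is a boolean array flagging speech samples (e.g. from a VAD);
    `margin_db` is an arbitrary value chosen for this sketch.
    """
    speech_power = float(np.mean(signal[speech_mask] ** 2))
    silence_power = float(np.mean(signal[~speech_mask] ** 2)) + 1e-12
    return 10 * np.log10(speech_power / silence_power) >= margin_db
```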

Personal and Sensitive Information

The VibraVox dataset does not contain any data that might be considered as personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.).

The speaker_id values were generated using the Fernet encryption algorithm followed by the extraction of a subset of the encrypted id, guaranteeing strict anonymisation of the voice recordings while allowing the dataset maintainers to delete the corresponding data under the right to be forgotten.
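
A sketch of the idea behind this anonymisation, using the cryptography library's Fernet implementation; the actual key management and truncation scheme used for Vibravox are not disclosed here:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # kept private by the dataset maintainers
fernet = Fernet(key)

def anonymized_speaker_id(participant_identifier: str, length: int = 10) -> str:
    # Fernet encryption is randomized, so the mapping must be generated once
    # and stored by the maintainers to allow later deletion requests.
    token = fernet.encrypt(participant_identifier.encode()).decode()
    return token[-length:]  # keep only a subset of the encrypted id
```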

A consent form was signed by each participant in the VibraVox dataset. This consent form has been approved by the Cnam lawyer. All CNIL requirements have been checked, including the right to be forgotten over a period of 50 years.


DATASET CARD AUTHORS

Éric Bavu (https://huggingface.co/zinc75)

Dataset Card Contact

Eric Bavu