
Dataset Card for VibraVox

Dataset Summary

The VibraVox dataset is a general-purpose dataset of French speech captured with body-conduction transducers. This dataset can be used for various audio machine learning tasks: Automatic Speech Recognition (ASR) (Speech-to-Text, Speech-to-Phoneme), Audio Bandwidth Extension (BWE), speaker identification/recognition, voice cloning, etc.

Dataset Details

Dataset Description

VibraVox ([vibʁavɔks]) is a GDPR-compliant dataset scheduled for release in early 2024. It includes speech recorded simultaneously using multiple body-conduction and airborne sensors:

  • a forehead miniature accelerometer
  • an in-ear comply foam microphone
  • an in-ear rigid earpiece microphone
  • a mouth headworn reference microphone
  • a temple contact microphone
  • a throat piezoelectric sensor

Those sensors are described and documented in the dataset creation section.

The VibraVox speech corpus has been recorded with 200 participants under various acoustic conditions imposed by a 5th-order ambisonics spatialization sphere. VibraVox aims to serve as a valuable resource for advancing the field of body-conducted speech analysis and for facilitating the development of robust communication systems for real-world applications. Unlike traditional microphones, which rely on airborne sound waves, body-conduction microphones capture speech signals directly from the body, offering an advantage in noisy environments because they pick up very little ambient noise. Although body-conduction microphones have been available for decades, their limited bandwidth has restricted their widespread use. However, thanks to two tracks of improvement, this technology may now reach a wide public for speech capture and communication in noisy environments.

Example usage

VibraVox contains labelled data for 11 configurations tailored for specific tasks, mainly oriented towards Automatic Speech Recognition and Bandwidth Extension.

ASR Datasets

The ASR dataset configurations contain mono audio along with corresponding text transcriptions (cased and with punctuation) for French speech, using 6 different kinds of microphones. Recording was carried out simultaneously on all 6 sensors. The audio files were sampled at 48 kHz and encoded as .wav PCM32 files.

To load a specific configuration, simply use the following command:

```python
from datasets import load_dataset

config_name = "asr_mouth_headworn_reference_microphone"
vibravox_asr = load_dataset("Cnam-LMSSC/vibravox", config_name)
```

config_name can be any of the following: "asr_mouth_headworn_reference_microphone" (full-bandwidth microphone), "asr_in-ear_comply_foam_microphone" (body conduction), "asr_in-ear_rigid_earpiece_microphone" (body conduction), "asr_forehead_miniature_accelerometer" (body conduction, high frequency response), "asr_temple_contact_microphone" (body conduction, low SNR), "asr_throat_piezoelectric_sensor" (body conduction, high distortion rate).
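Since all six ASR config names follow the pattern "asr_" plus a sensor slug, they can be generated programmatically, e.g. when benchmarking every sensor in a loop. A minimal sketch (the variable names are illustrative):

```python
# The six sensor slugs listed above; prefixing with "asr_" yields the config names.
SENSORS = [
    "mouth_headworn_reference_microphone",
    "in-ear_comply_foam_microphone",
    "in-ear_rigid_earpiece_microphone",
    "forehead_miniature_accelerometer",
    "temple_contact_microphone",
    "throat_piezoelectric_sensor",
]
asr_configs = [f"asr_{sensor}" for sensor in SENSORS]
```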

BWE Datasets

The BWE dataset configurations contain stereo audio along with corresponding text transcriptions (cased and with punctuation) for French speech, using 5 different kinds of body-conduction microphones (first channel) and a standard reference microphone (second channel). Recording was carried out simultaneously on all 6 sensors; the 5 BWE configurations therefore only differ in the band-limited sensor used, and the reference microphone channel is identical to the data found in the "asr_mouth_headworn_reference_microphone" configuration.

The stereo audio files are sampled at 48 kHz and encoded as .wav PCM32 files. The label of the sensors on each channel is given in the sensor_id feature of the dataset.

To load a specific configuration, simply use the following command:

```python
from datasets import load_dataset

config_name = "bwe_in-ear_comply_foam_microphone"
vibravox_bwe = load_dataset("Cnam-LMSSC/vibravox", config_name)
```

config_name can be any of the following: "bwe_in-ear_comply_foam_microphone" (body conduction), "bwe_in-ear_rigid_earpiece_microphone" (body conduction), "bwe_forehead_miniature_accelerometer" (body conduction, high frequency response), "bwe_temple_contact_microphone" (body conduction, low SNR), "bwe_throat_piezoelectric_sensor" (body conduction, high distortion rate).

To load all the sensors in a single dataset (intersection of validated audio for each sensor), use the "bwe_all_sensors" config name, which contains 6-channel audio sampled at 48 kHz and encoded as .wav PCM32 files. The label of the sensor on each channel is given in the sensor_id feature of the dataset.

```python
vibravox_bwe_all = load_dataset("Cnam-LMSSC/vibravox", "bwe_all_sensors")
```

Dataset Structure

Data Instances

ASR datasets:

```python
{
  'audio': {
    'path': '/home/zinc/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/SiS_00004369_throat.wav',
    'array': array([-1.23381615e-04, -9.16719437e-05, -1.23262405e-04, ...,
                    -1.40666962e-05, -2.26497650e-05,  8.22544098e-06]),
    'sampling_rate': 48000
  },
  'audio_length': 5.5399791667,
  'transcription': "Le courant de sortie est toujours la valeur absolue du courant d'entrée.",
  'text': 'le courant de sortie est toujours la valeur absolue du courant d entrée',
  'phonemes': 'lə kuʁɑ̃ də sɔʁti ɛ tuʒuʁ la valœʁ absoly dy kuʁɑ̃ dɑ̃tʁe',
  'num_channels': 1,
  'sensor_id': ['Larynx piezoelectric transducer'],
  'speaker_id': '039',
  'gender': 'male',
  'is_speech': True,
  'is_noisy': False,
  'split': 'train',
  'sentence_id': '00004369'
}
```
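The fields of an instance are mutually consistent; in particular, audio_length equals the number of samples divided by the sampling rate. A minimal sketch checking this on a mock instance (the zero-filled array is a stand-in, not real dataset audio):

```python
import numpy as np

# Hypothetical stand-in for an ASR example; real instances carry actual audio.
example = {
    "audio": {"array": np.zeros(265919), "sampling_rate": 48000},
    "audio_length": 5.5399791667,
}

# Duration in seconds = sample count / sampling rate.
computed = example["audio"]["array"].shape[-1] / example["audio"]["sampling_rate"]
assert abs(computed - example["audio_length"]) < 1e-6
```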

BWE datasets:

```python
{
  'audio': {
    'path': '/home/zinc/.cache/huggingface/datasets/downloads/extracted/56cdda80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c632f/SiS_00012330_inearde_stereo.wav',
    'array': array([[-7.68899918e-04, -8.36610794e-04, -8.05854797e-04, ...,
                      1.35087967e-03,  1.31452084e-03,  1.27232075e-03],
                    [-3.21865082e-06,  8.18967819e-05,  8.13007355e-05, ...,
                      7.52210617e-05,  1.05500221e-04,  1.66416168e-04]]),
    'sampling_rate': 48000
  },
  'audio_length': 6.1,
  'transcription': 'En programmation informatique, une assertion est une expression qui doit être évaluée à vrai.',
  'text': 'en programmation informatique une assertion est une expression qui doit être évaluée a vrai',
  'is_gold_transcript': True,
  'num_channels': 2,
  'sensor_id': ['In-ear rigid earpiece microphone', 'Mouth headworn reference microphone'],
  'speaker_id': '092',
  'gender': 'female',
  'is_speech': True,
  'is_noisy': False,
  'split': 'train',
  'sentence_id': '00012330'
}
```

Data Fields

Common Data Fields for all datasets:

  • audio (datasets.Audio) - a dictionary containing the path to the audio, the decoded (mono) audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
  • audio_length (float32) - the audio length in seconds.
  • transcription (string) - audio segment text (cased and with punctuation preserved)
  • num_channels (int) - the number of audio channels in audio
  • sensor_id (string) - a list of sensors used in this audio, ordered by channel number
  • speaker_id (string) - id of speaker
  • gender (string) - gender of speaker (male or female)
  • is_speech (bool) - whether the audio contains speech (True) or the speaker remains silent (False)
  • is_noisy (bool) - whether the audio contains external environmental noise (True) or the speaker is in a quiet recording room (False)
  • split (string) - split (can be "train", "val", or "test")
  • sentence_id (string) - id of the pronounced sentence
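These boolean and categorical fields make it easy to select subsets, e.g. quiet speech only. With the real dataset this would typically be done with the datasets library's .filter() method; the sketch below mimics it on plain dicts (the examples are made up for illustration):

```python
# Hypothetical examples carrying the boolean fields described above.
examples = [
    {"speaker_id": "039", "is_speech": True, "is_noisy": False},
    {"speaker_id": "040", "is_speech": False, "is_noisy": False},  # silent
    {"speaker_id": "041", "is_speech": True, "is_noisy": True},    # noisy
]

# Keep only speech recorded in the quiet room, as ds.filter(...) would.
quiet_speech = [ex for ex in examples if ex["is_speech"] and not ex["is_noisy"]]
```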

Extra Data Fields for ASR datasets:

  • text (string) - audio segment normalized text (lower-cased, no punctuation, diacritics replaced by the standard 26 letters of the French alphabet, plus four characters that hold phonetic significance (ç, è, é and ê) and the space character, giving 31 possible characters: [' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'ç', 'è', 'é', 'ê']).
  • phonemes (string) - audio segment phonemized text, using exclusively the 33 strict French IPA characters plus the space character: [' ', 'a', 'b', 'd', 'e', 'f', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 's', 't', 'u', 'v', 'w', 'y', 'z', 'ø', 'ŋ', 'œ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɡ', 'ɲ', 'ʁ', 'ʃ', 'ʒ', '̃'].
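Because both character sets are small and closed, a tokenizer vocabulary (e.g. for CTC training) can be built directly from the labels. A minimal sketch, using the sentence from the ASR data instance above:

```python
# Build a sorted character vocabulary from normalized-text labels.
def build_vocab(texts):
    return sorted(set("".join(texts)))

texts = ["le courant de sortie est toujours la valeur absolue du courant d entrée"]
vocab = build_vocab(texts)  # a subset of the 31 possible characters
```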

Extra Data Fields for BWE datasets:

  • is_gold_transcript (bool) - whether the transcription has been validated using an external Whisper-large ASR pass to ensure the speech matches the transcription (True), or whether the WER obtained during automated post-processing is too high to consider that the audio and the transcription match (False).

Data Splits

Almost all configs contain data in three splits: train, validation, and test (with a standard 80 / 10 / 10 partition, and no speaker overlap between splits).

Noise-only configs (physiological noise and environmental noise, with is_speech = False), which are intended for data augmentation, contain only a train split.

The speech-in-noise (SiN) configuration (intended for extreme tests) contains only a test split.

Data statistics

The dataset has been prepared for the different available sensors, for two main tasks: Automatic Speech Recognition (Speech-to-Text and Speech-to-Phonemes) and speech Bandwidth Extension.

The ASR configs of VibraVox contain labelled audio data (transcribed with text, normalized text, and a phonemic transcription using the strict French IPA alphabet) for 6 sensors:

| Sensor | Transcribed Hours | Download size | Number of transcribed Speakers | Gender repartition M/F (in audio length) | Transcribed Tokens (normalized text, w/o spaces) | Transcribed Tokens (phonemes, w/o spaces) | Config name |
|---|---|---|---|---|---|---|---|
| Reference microphone (headset) | 17.85 | 11.63 GiB | 121 | 41.9 / 58.1 | 0.81 M | 0.65 M | asr_mouth_headworn_reference_microphone |
| In-ear microphone Type 1 (Comply Foam) | 17.55 | 11.42 GiB | 121 | 42.5 / 57.5 | 0.8 M | 0.64 M | asr_in-ear_comply_foam_microphone |
| In-ear microphone Type 2 (rigid earpiece) | 17.46 | 11.37 GiB | 120 | 42.4 / 57.6 | 0.79 M | 0.64 M | asr_in-ear_rigid_earpiece_microphone |
| Forehead (miniature accelerometer) | 16.72 | 10.89 GiB | 111 | 41.6 / 58.4 | 0.76 M | 0.61 M | asr_forehead_miniature_accelerometer |
| Temple contact microphone | 16.0 | 10.44 GiB | 119 | 45.3 / 54.7 | 0.73 M | 0.59 M | asr_temple_contact_microphone |
| Larynx piezoelectric sensor | 17.85 | 11.63 GiB | 121 | 41.9 / 58.1 | 0.81 M | 0.65 M | asr_throat_piezoelectric_sensor |

The BWE configs of VibraVox contain labelled audio data (transcribed with text) for 5 body-conduction sensors. Each audio file for the BWE configs is a stereo WAV file (the first channel corresponds to the body-conduction sensor, the second channel to the aligned reference audio, captured with the reference headworn microphone).

| Sensor | Recorded Hours | Download size | Number of Speakers | Gender repartition M/F (in audio length) | Config name |
|---|---|---|---|---|---|
| In-ear microphone Type 1 (Comply Foam) | 20.4 | 27.05 GiB | 121 | 46.6 / 53.4 | bwe_in-ear_comply_foam_microphone |
| In-ear microphone Type 2 (rigid earpiece) | 20.31 | 26.92 GiB | 120 | 46.4 / 53.6 | bwe_in-ear_rigid_earpiece_microphone |
| Forehead (miniature accelerometer) | 19.17 | 25.57 GiB | 111 | 45.6 / 54.4 | bwe_forehead_miniature_accelerometer |
| Temple contact microphone | 18.75 | 24.92 GiB | 119 | 49.5 / 50.5 | bwe_temple_contact_microphone |
| Larynx piezoelectric sensor | 20.7 | 27.49 GiB | 121 | 45.9 / 54.1 | bwe_throat_piezoelectric_sensor |
| All sensors | 16.81 | 67.27 GiB | 108 | 50.1 / 49.9 | bwe_all_sensors |
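For BWE training, each stereo example is typically split into a model input (body-conduction sensor, channel 0) and a target (reference microphone, channel 1), following the (channels, samples) layout shown in the data instance above. A minimal sketch, with random samples standing in for real audio:

```python
import numpy as np

# One second of hypothetical 48 kHz stereo audio, shaped (channels, samples).
stereo = np.random.randn(2, 48000)

# Channel 0: band-limited body-conduction input; channel 1: reference target.
body_conduction, reference = stereo[0], stereo[1]
```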

VibraVox's external noise configs contain audio data for all 6 sensors. In these configurations, the speakers remain silent (is_speech = False), and environmental noise is generated around them using a 5th-order ambisonic 3D sound spatializer (is_noisy = True). Wearers of the devices are free to move their bodies and faces, and can swallow and breathe naturally. This configuration can be conveniently used for realistic data augmentation with noise captured by body-conduction sensors, including the inherent attenuation of each sensor on different device wearers.

| Sensor | Recorded Hours | Download size | Number of individuals | Gender repartition M/F (in audio length) | Config name |
|---|---|---|---|---|---|
| Reference microphone (headset) | | | | | env_noise_mouth_headworn_reference_microphone |
| In-ear microphone Type 1 (Comply Foam) | | | | | env_noise_in-ear_comply_foam_microphone |
| In-ear microphone Type 2 (rigid earpiece) | | | | | env_noise_in-ear_rigid_earpiece_microphone |
| Forehead (miniature accelerometer) | | | | | env_noise_forehead_miniature_accelerometer |
| Temple contact microphone | | | | | env_noise_temple_contact_microphone |
| Larynx piezoelectric sensor | | | | | env_noise_throat_piezoelectric_sensor |

VibraVox's physiological noise configs contain audio data for all 6 sensors. In these configurations, the speakers remain silent (is_speech = False), and no extraneous noise is generated around them ( is_noisy = False). Wearers of the devices are free to move their bodies and faces, and can swallow and breathe naturally. This configuration can be conveniently used to generate synthetic datasets with realistic physiological (and sensor-inherent) noise captured by body-conduction sensors.

| Sensor | Recorded Hours | Download size | Number of individuals | Gender repartition M/F (in audio length) | Config name |
|---|---|---|---|---|---|
| Reference microphone (headset) | | | | | phys_noise_mouth_headworn_reference_microphone |
| In-ear microphone Type 1 (Comply Foam) | | | | | phys_noise_in-ear_comply_foam_microphone |
| In-ear microphone Type 2 (rigid earpiece) | | | | | phys_noise_in-ear_rigid_earpiece_microphone |
| Forehead (miniature accelerometer) | | | | | phys_noise_forehead_miniature_accelerometer |
| Temple contact microphone | | | | | phys_noise_temple_contact_microphone |
| Larynx piezoelectric sensor | | | | | phys_noise_throat_piezoelectric_sensor |


Supported Tasks and Leaderboards

  • automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).

  • bandwidth-extension: The dataset can be used to train a model for speech Bandwidth Extension (BWE). The model is presented with a band-limited body-conduction signal (first channel of the BWE configs) and asked to reconstruct the corresponding full-bandwidth speech, using the reference microphone channel as the target.

Dataset Creation

Textual source data

The text data is collected from the French Wikipedia (CITE SOURCE FILE IN COMMONVOICE).

Audio Data Collection

| Sensor | Transducer | Online documentation |
|---|---|---|
| Reference microphone (headset) | Shure WH20 | See documentation on vibravox.cnam.fr |
| In-ear microphone Type 1 (Comply Foam) | STMicroelectronics MP34DT01 | See documentation on vibravox.cnam.fr |
| In-ear microphone Type 2 (rigid earpiece) | Knowles SPH1642HT5H | See documentation on vibravox.cnam.fr |
| Forehead (miniature accelerometer) | Knowles BU23173-000 | See documentation on vibravox.cnam.fr |
| Temple contact microphone | AKG C411 | See documentation on vibravox.cnam.fr |
| Larynx piezoelectric sensor | iXRadio XVTM822D-D35 | See documentation on vibravox.cnam.fr |

Data Processing

[More Information Needed]

Who are the source language producers?

Speakers are TODO : explain

[More Information Needed]

Personal and Sensitive Information

The VibraVox dataset does not contain any data that might be considered as personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.).

A consent form has been signed by each participant in the VibraVox dataset. This consent form has been approved by the Cnam lawyer. All CNIL requirements have been checked, including the right to be forgotten.

TODO : describe the anonymization process.

Recommendations, Bias, Risks, and Limitations

[More Information Needed]

Citation

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Dataset Card Authors

Éric Bavu (https://huggingface.co/zinc75)

Dataset Card Contact

Eric Bavu
