new audio names
README.md
CHANGED
@@ -247,7 +247,7 @@ vibravox = load_dataset("Cnam-LMSSC/vibravox", subset, streaming=True)

- **Homepage:** For more information about the project, visit our project page at [https://vibravox.cnam.fr](https://vibravox.cnam.fr)
- **Github repository:** [jhauret/vibravox](https://github.com/jhauret/vibravox): source code for ASR, BWE and SPKV tasks using the Vibravox dataset
- **Point of Contact:** [Julien Hauret](https://www.linkedin.com/in/julienhauret/) and [Éric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)
- **Curated by:** [AVA Team](https://lmssc.cnam.fr/fr/recherche/identification-localisation-synthese-de-sources-acoustiques-et-vibratoires) of the [LMSSC Research Laboratory](https://lmssc.cnam.fr)
- **Funded by:** [Agence Nationale Pour la Recherche / AHEAD Project](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-5aac4914c7/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=fa352121b44b60bf6a5917180d5205e6)
- **Language:** French
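The hunk header above shows the `load_dataset` call used throughout this card. As a quick orientation, here is a minimal sketch of loading one subset in streaming mode and peeking at a single example; the subset name `speech_clean` and the field names follow the tables further down this card, so treat them as assumptions to adjust if the layout differs.

```python
from datasets import load_dataset

# Minimal sketch, assuming the "speech_clean" subset and the column names
# documented later in this card.
subset = "speech_clean"
vibravox = load_dataset("Cnam-LMSSC/vibravox", subset, streaming=True)

# Pull one example from the training split without downloading the full archives.
sample = next(iter(vibravox["train"]))
print(sample["speaker_id"], sample["gender"], sample["duration"])
print(sample["audio.headset_microphone"]["sampling_rate"])
```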
@@ -341,14 +341,14 @@ Unlike traditional microphones, which rely on airborne sound waves, body-conduct

Even if the names of the columns in the Vibravox dataset are self-explanatory, here is the mapping, with information on the positioning of the sensors and their technology:

| Vibravox dataset column name        | Sensor                                    | Location        | Technology                                         |
| ----------------------------------- | ----------------------------------------- | --------------- | -------------------------------------------------- |
| ```audio.headset_microphone```      | Headset microphone                        | Near the mouth  | Cardioid electrodynamic microphone                 |
| ```audio.throat_microphone```       | Laryngophone                              | Throat / larynx | Piezoelectric sensor                                |
| ```audio.soft_in_ear_microphone```  | In-ear soft foam-embedded microphone      | Right ear canal | Omnidirectional electret condenser microphone      |
| ```audio.rigid_in_ear_microphone``` | In-ear rigid earpiece-embedded microphone | Left ear canal  | Omnidirectional MEMS microphone                    |
| ```audio.forehead_accelerometer```  | Forehead vibration sensor                 | Frontal bone    | One-axis accelerometer                             |
| ```audio.temple_vibration_pickup``` | Temple vibration pickup                   | Zygomatic bone  | Figure-of-eight pre-polarized condenser transducer |

---
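To make the column-to-sensor mapping above concrete, the sketch below iterates over the six audio channels of a single example. The column names come from the table above; the subset and split names are assumptions taken from the rest of this card.

```python
from datasets import load_dataset

# The six audio columns listed in the mapping table above.
AUDIO_COLUMNS = [
    "audio.headset_microphone",
    "audio.throat_microphone",
    "audio.soft_in_ear_microphone",
    "audio.rigid_in_ear_microphone",
    "audio.forehead_accelerometer",
    "audio.temple_vibration_pickup",
]

dataset = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train", streaming=True)
sample = next(iter(dataset))

# Each column decodes to a dictionary with "path", "array" and "sampling_rate".
for column in AUDIO_COLUMNS:
    audio = sample[column]
    print(f"{column}: {len(audio['array'])} samples at {audio['sampling_rate']} Hz")
```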
@@ -376,23 +376,23 @@ The 4 subsets correspond to :

All the subsets are available in 3 splits (train, validation and test), with a standard 80% / 10% / 10% distribution and no speaker overlap between splits.

The speakers / participants in a given split are the same for every subset, thus allowing you to:

- use the `speechless_noisy` subset for data augmentation, for example
- test models trained on the `speech_clean` train set on the `speech_noisy` test set without having to worry that a speaker was already seen during the training phase (see the sketch after the field list below).

### Data Fields

In non-streaming mode (default), the `path` value of each `datasets.Audio` dictionary points to the locally extracted audio file. In streaming mode, the `path` is the relative path of the audio inside its archive (as files are not downloaded and extracted locally).

**Common Data Fields for all subsets:**

* `audio.headset_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.forehead_accelerometer` (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
* `audio.soft_in_ear_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.rigid_in_ear_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
* `audio.temple_vibration_pickup` (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
* `audio.throat_microphone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
* `gender` (string) - gender of the speaker (```male``` or ```female```)
* `speaker_id` (string) - encrypted id of the speaker
* `duration` (float32) - the audio length in seconds.
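As referenced in the bullet above, here is a hedged sketch of the cross-subset evaluation pattern this split design enables: training data from `speech_clean`, test data from `speech_noisy`, and a sanity check that no `speaker_id` leaks between them. The subset and split names are taken from this card; the check itself is illustrative.

```python
from datasets import load_dataset

# Load the training split of "speech_clean" and the test split of "speech_noisy"
# (non-streaming, so columns can be read directly).
train = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train")
test = load_dataset("Cnam-LMSSC/vibravox", "speech_noisy", split="test")

# Because splits are speaker-disjoint across all subsets, the two sets of
# encrypted speaker ids should not intersect.
train_speakers = set(train["speaker_id"])
test_speakers = set(test["speaker_id"])
assert train_speakers.isdisjoint(test_speakers), "unexpected speaker overlap"

print(f"{len(train_speakers)} train speakers, {len(test_speakers)} test speakers, no overlap")
```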
@@ -589,7 +589,7 @@ All lines of the textual source data from Wikipedia-extracted textual dataset ha

#### Recorded audio data post-processing

Across the sentences collected from the 200 participants, a small number of audio clips exhibited various shortcomings. Despite researchers monitoring and validating each recording individually, the process was not entirely foolproof: mispronounced sentences, sensors shifting from their initial positions, or more significant microphone malfunctions occasionally occurred. In instances where sensors were functional but not ideally positioned (such as when the participant's ear canal was too small for the rigid in-ear microphone to achieve proper acoustic sealing), we chose to retain samples whose bandwidth was slightly narrower than desired. This decision was made to enhance the robustness of our models against the effects of misplaced sensors.

To address those occasional shortcomings and offer a high-quality dataset, we implemented a series of 3 automatic filters to retain only the best audio from the `speech_clean` subset. We preserved only those sentences where all sensors were in optimal recording condition, adhering to predefined criteria defined in [link to the paper]() : TODO : add link to arxiv paper when uploaded
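The actual criteria behind the 3 filters are defined in the paper referenced above. Purely as an illustration of how similar per-clip filtering can be reproduced downstream with `datasets.Dataset.filter`, here is a sketch with a hypothetical minimum-duration criterion; neither the threshold nor the criterion comes from this card.

```python
from datasets import load_dataset

# Hypothetical illustration only: the real Vibravox filters are defined in the
# paper. Here we simply drop clips shorter than one second using the `duration`
# field documented above.
MIN_DURATION_S = 1.0  # hypothetical threshold

dataset = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="train")
filtered = dataset.filter(lambda example: example["duration"] >= MIN_DURATION_S)

print(f"kept {len(filtered)} of {len(dataset)} clips")
```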
@@ -607,13 +607,3 @@ The `speaker_id` were generated using a powerful Fernet encryption algorithm, an

A [consent form](https://vibravox.cnam.fr/documentation/consent/index.html) has been signed by each participant in the VibraVox dataset. This consent form has been approved by the Cnam lawyer. All [Cnil](https://www.cnil.fr/en) requirements have been met, including the right to be forgotten for 50 years.

- ---
-
- ## DATASET CARD AUTHORS
-
- Éric Bavu (https://huggingface.co/zinc75)
-
- ### Dataset Card Contact
-
- [Eric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)
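The truncated hunk header above mentions that `speaker_id` values were generated with a Fernet encryption algorithm. Purely as an illustration of that mechanism, and not of the authors' actual anonymisation pipeline or key management, here is how a symmetric Fernet token can be produced and reversed with the `cryptography` package:

```python
from cryptography.fernet import Fernet

# Illustration of Fernet symmetric encryption (not the actual Vibravox pipeline):
# a secret key turns a raw participant identifier into an opaque, reversible token.
key = Fernet.generate_key()  # in practice the key would be stored securely
fernet = Fernet(key)

raw_id = b"participant-042"            # hypothetical raw identifier
encrypted_id = fernet.encrypt(raw_id)  # an opaque token similar in spirit to `speaker_id`
print(encrypted_id.decode())

# Only the key holder can map the token back to the original identifier.
assert fernet.decrypt(encrypted_id) == raw_id
```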