Datasets:

- Modalities: Audio, Text
- Formats: parquet
- Languages: French
- Libraries: Datasets, Dask
jhauret committed
Commit 24aa8ac
1 Parent(s): 2fb0e2a

minor changes

Files changed (1):
  1. README.md +9 -9
README.md CHANGED
@@ -64,10 +64,10 @@ vibravox = load_dataset("Cnam-LMSSC/vibravox", subset, streaming=True)
 
 
   - **Homepage:** For more information about the project, visit our project page on [https://vibravox.cnam.fr](https://vibravox.cnam.fr)
- - **Github repository:** [jhauret/vibravox](https://github.com/jhauret/vibravox) : Source code for Automatic Speech Recognition, Bandwidth Extension and Speaker Verification using the Vibravox dataset
  - **Point of Contact:** [Eric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)
- - **Curated by:** AVA Team of the [LMSSC Research Laboratory](https://lmssc.cnam.fr)
- - **Funded by:** French [Agence Nationale Pour la Recherche / AHEAD Project](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-5aac4914c7/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=fa352121b44b60bf6a5917180d5205e6)
  - **Language:** French
  - **License:** Creative Commons Attribution 4.0
 
@@ -192,7 +192,7 @@ The 4 subsets correspond to :
 
 
  ### Splits
 
- All the subsets are available in 3 splits (train, validation and test), with a standard 80 / 10 / 10 repartition, without overlapping any speaker in each split.
 
  The speakers / participants in specific splits are the same for each subset, thus allowing to
 
@@ -207,10 +207,10 @@ In non-streaming mode (default), the path value of all dataset.Audio dictionary
 
 
  * `audio.headset_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
  * `audio.forehead_accelerometer` (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
- * `audio.soft_in_ear_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
- * `audio.rigid_in_ear_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
- * `audio.temple_vibration_pickup` (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
- * `audio.laryngophone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
  * `gender` (string) - gender of speaker (```male``` or ```female```)
  * `speaker_id` (string) - encrypted id of speaker
  * `duration` (float32) - the audio length in seconds.
@@ -385,7 +385,7 @@ TODO : update values when final dataset is uploaded
 
 
  ### Textual source data
 
- The text read by all participants is collected from the French Wikipedia subset of Common Voice ( [link1](https://github.com/common-voice/common-voice/blob/main/server/data/fr/wiki-1.fr.txt) [link2](https://github.com/common-voice/common-voice/blob/main/server/data/fr/wiki-2.fr.txt) ). We applied additional filters to these textual datasets in order to create a simplified dataset with a minimum number of tokens and to reduce the uncertainty of the pronunciation of some proper names. We therefore removed all proper names except common first names and the names of French towns. We also removed any utterances that contain numbers, Greek letters, math symbols, or that are syntactically incorrect.
 
  All lines of the Wikipedia-extracted textual source data have then been phonemized using the [bootphon/phonemizer](https://github.com/bootphon/phonemizer) and manually edited to keep only strict French IPA characters.
 
 
 
 
   - **Homepage:** For more information about the project, visit our project page on [https://vibravox.cnam.fr](https://vibravox.cnam.fr)
+ - **Github repository:** [jhauret/vibravox](https://github.com/jhauret/vibravox) : Source code for ASR, BWE and SPKV tasks using the Vibravox dataset
  - **Point of Contact:** [Eric Bavu](https://acoustique.cnam.fr/contacts/bavu/en/#contact)
+ - **Curated by:** [AVA Team](https://lmssc.cnam.fr/fr/recherche/identification-localisation-synthese-de-sources-acoustiques-et-vibratoires) of the [LMSSC Research Laboratory](https://lmssc.cnam.fr)
+ - **Funded by:** [Agence Nationale Pour la Recherche / AHEAD Project](https://anr.fr/en/funded-projects-and-impact/funded-projects/project/funded/project/b2d9d3668f92a3b9fbbf7866072501ef-5aac4914c7/?tx_anrprojects_funded%5Bcontroller%5D=Funded&cHash=fa352121b44b60bf6a5917180d5205e6)
  - **Language:** French
  - **License:** Creative Commons Attribution 4.0
 
 
 
 
  ### Splits
 
+ All the subsets are available in 3 splits (train, validation and test), with a standard 80% / 10% / 10% repartition, without overlapping any speaker in each split.
 
  The speakers / participants in specific splits are the same for each subset, thus allowing to
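
The speaker-disjoint 80% / 10% / 10% policy described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual partitioning code; the speaker IDs and the shuffle seed are made up:

```python
import random

def split_speakers(speaker_ids, seed=0):
    """Partition speakers (not utterances) into train/validation/test
    so that no speaker appears in more than one split."""
    speakers = sorted(set(speaker_ids))
    random.Random(seed).shuffle(speakers)
    n = len(speakers)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return {
        "train": set(speakers[:n_train]),
        "validation": set(speakers[n_train:n_train + n_val]),
        "test": set(speakers[n_train + n_val:]),
    }

splits = split_speakers([f"spk_{i:03d}" for i in range(100)])
# No speaker is shared between any two splits:
assert not splits["train"] & splits["validation"]
assert not splits["validation"] & splits["test"]
```

Utterances are then routed to the split that owns their speaker, which is what keeps evaluation speaker-independent.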
 
 
 
 
  * `audio.headset_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the headset microphone, the decoded (mono) audio array, and the sampling rate.
  * `audio.forehead_accelerometer` (datasets.Audio) - a dictionary containing the path to the audio recorded by the forehead miniature accelerometer, the decoded (mono) audio array, and the sampling rate.
+ * `audio.soft_in_ear_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear soft foam-embedded microphone, the decoded (mono) audio array, and the sampling rate.
+ * `audio.rigid_in_ear_mic` (datasets.Audio) - a dictionary containing the path to the audio recorded by the in-ear rigid earpiece-embedded microphone, the decoded (mono) audio array, and the sampling rate.
+ * `audio.temple_vibration_pickup` (datasets.Audio) - a dictionary containing the path to the audio recorded by the temple vibration pickup, the decoded (mono) audio array, and the sampling rate.
+ * `audio.laryngophone` (datasets.Audio) - a dictionary containing the path to the audio recorded by the piezoelectric laryngophone, the decoded (mono) audio array, and the sampling rate.
  * `gender` (string) - gender of speaker (```male``` or ```female```)
  * `speaker_id` (string) - encrypted id of speaker
  * `duration` (float32) - the audio length in seconds.
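
Each `datasets.Audio` field above decodes to a dictionary with `path`, `array`, and `sampling_rate` keys, and `duration` is just the array length divided by the sampling rate. A dependency-free sketch with synthetic data (the real dataset yields NumPy arrays; the 48 kHz rate, path, and speaker id here are only illustrative):

```python
# Synthetic stand-in for one decoded row of the dataset.
sample = {
    "audio.headset_mic": {
        "path": "example.wav",      # path to the source file (illustrative)
        "array": [0.0] * 48_000,    # one second of (mono) silence
        "sampling_rate": 48_000,
    },
    "gender": "female",
    "speaker_id": "0a1b2c",         # encrypted speaker id (made up)
}

audio = sample["audio.headset_mic"]
duration = len(audio["array"]) / audio["sampling_rate"]
print(duration)  # 1.0
```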
 
 
 
  ### Textual source data
 
+ The text read by all participants is collected from the French Wikipedia subset of Common Voice ( [link1](https://github.com/common-voice/common-voice/blob/6e43e7e61318bf4605b59379e3f35ba5333d7a29/server/data/fr/wiki-1.fr.txt) [link2](https://github.com/common-voice/common-voice/blob/6e43e7e61318bf4605b59379e3f35ba5333d7a29/server/data/fr/wiki-2.fr.txt) ). We applied additional filters to these textual datasets in order to create a simplified dataset with a minimum number of tokens and to reduce the uncertainty of the pronunciation of some proper names. We therefore removed all proper names except common first names and the names of French towns. We also removed any utterances that contain numbers, Greek letters, math symbols, or that are syntactically incorrect.
 
  All lines of the Wikipedia-extracted textual source data have then been phonemized using the [bootphon/phonemizer](https://github.com/bootphon/phonemizer) and manually edited to keep only strict French IPA characters.
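
The filtering step described above (dropping utterances that contain numbers, Greek letters, or math symbols) can be approximated with a single regex pass. The character classes below are illustrative guesses, not the exact filters the authors applied:

```python
import re

# Reject any utterance containing a digit, a letter from the Greek
# Unicode block, or a handful of common math symbols.
FORBIDDEN = re.compile(r"[0-9\u0370-\u03FF+=<>%^*/]")

def keep_utterance(text: str) -> bool:
    return FORBIDDEN.search(text) is None

sentences = [
    "Le chat dort sur le canapé.",
    "Il y avait 3 pommes.",       # contains a digit -> dropped
    "La lettre α est grecque.",   # Greek letter -> dropped
]
kept = [s for s in sentences if keep_utterance(s)]
print(kept)  # ['Le chat dort sur le canapé.']
```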