---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ar
- de
- es
- fr
- hu
- ko
- nl
- pl
- pt
- ru
- tr
- vi
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
---

# Speech-MASSIVE

| lang | split | # of utterances | # of hours | # of speakers (Male/Female/Unidentified) |
|:---:|:---:|:---:|:---:|:---:|
| ar-SA | validation | 2033 | 2.12 | 36 (22/14/0) |
| | test | 2974 | 3.23 | 37 (15/17/5) |
| | train_115 | 115 | 0.14 | 8 (4/4/0) |
| de-DE | validation | 2033 | 2.33 | 68 (35/32/1) |
| | test | 2974 | 3.41 | 82 (36/36/10) |
| | train | 11514 | 12.61 | 117 (50/63/4) |
| | train_115 | 115 | 0.15 | 7 (3/4/0) |
| es-ES | validation | 2033 | 2.53 | 109 (51/53/5) |
| | test | 2974 | 3.61 | 85 (37/33/15) |
| | train_115 | 115 | 0.13 | 7 (3/4/0) |
| fr-FR | validation | 2033 | 2.20 | 55 (26/26/3) |
| | test | 2974 | 2.65 | 75 (31/35/9) |
| | train | 11514 | 12.42 | 103 (50/52/1) |
| | train_115 | 115 | 0.12 | 103 (50/52/1) |
| hu-HU | validation | 2033 | 2.27 | 69 (33/33/3) |
| | test | 2974 | 3.30 | 55 (25/24/6) |
| | train_115 | 115 | 0.12 | 8 (3/4/1) |
| ko-KR | validation | 2033 | 2.12 | 21 (8/13/0) |
| | test | 2974 | 2.66 | 31 (10/18/3) |
| | train_115 | 115 | 0.14 | 8 (4/4/0) |
| nl-NL | validation | 2033 | 2.14 | 37 (17/19/1) |
| | test | 2974 | 3.30 | 100 (48/49/3) |
| | train_115 | 115 | 0.12 | 7 (3/4/0) |
| pl-PL | validation | 2033 | 2.24 | 105 (50/52/3) |
| | test | 2974 | 3.21 | 151 (73/71/7) |
| | train_115 | 115 | 0.10 | 7 (3/4/0) |
| pt-PT | validation | 2033 | 2.20 | 107 (51/53/3) |
| | test | 2974 | 3.25 | 102 (48/50/4) |
| | train_115 | 115 | 0.12 | 8 (4/4/0) |
| ru-RU | validation | 2033 | 2.25 | 40 (7/31/2) |
| | test | 2974 | 3.44 | 51 (25/23/3) |
| | train_115 | 115 | 0.12 | 7 (3/4/0) |
| tr-TR | validation | 2033 | 2.17 | 71 (36/34/1) |
| | test | 2974 | 3.00 | 42 (17/18/7) |
| | train_115 | 115 | 0.11 | 6 (3/3/0) |
| vi-VN | validation | 2033 | 2.10 | 28 (13/14/1) |
| | test | 2974 | 3.23 | 30 (11/14/5) |
| | train_115 | 115 | 0.11 | 7 (2/4/1) |

## How to use
### How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared locally with a single call to the `load_dataset` function. For example, to download the French config, simply specify the corresponding language config name (i.e., "fr-FR" for French):

```python
from datasets import load_dataset

speech_massive_fr_train = load_dataset("FBK-MT/Speech-MASSIVE", "fr-FR", split="train")
```

If you don't have enough disk space on your machine, you can stream the dataset by adding `streaming=True` to the `load_dataset` call. Loading a dataset in streaming mode fetches individual samples one at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

speech_massive_de_train = load_dataset("FBK-MT/Speech-MASSIVE", "de-DE", split="train", streaming=True)
list(speech_massive_de_train.take(2))
```

You can also load all the available languages and splits at once, and then access each split:

```python
from datasets import load_dataset

speech_massive = load_dataset("FBK-MT/Speech-MASSIVE", "all")
multilingual_validation = speech_massive['validation']
```
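Once loaded, each example's `audio.array` holds raw samples at 16 kHz, so clip durations (and per-split hour totals like those in the statistics table above) can be recomputed from the array lengths alone. A self-contained sketch, with synthetic arrays standing in for real audio:

```python
SAMPLING_RATE = 16_000  # Speech-MASSIVE audio is stored at 16 kHz

# Synthetic stand-ins for example["audio"]["array"]: 2 s and 3 s of silence.
clips = [[0.0] * (2 * SAMPLING_RATE), [0.0] * (3 * SAMPLING_RATE)]

# Per-clip duration in seconds, and the total in hours
# (how a split's "# of hours" figure can be recomputed).
durations_s = [len(clip) / SAMPLING_RATE for clip in clips]
total_hours = sum(durations_s) / 3600

print(durations_s)  # [2.0, 3.0]
```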
Or, to keep languages separate, you can load all of a language's splits at once and combine languages yourself:

```python
from datasets import load_dataset, interleave_datasets, concatenate_datasets

# creating a full train set by interleaving between German and French
speech_massive_de = load_dataset("FBK-MT/Speech-MASSIVE", "de-DE")
speech_massive_fr = load_dataset("FBK-MT/Speech-MASSIVE", "fr-FR")
speech_massive_train_de_fr = interleave_datasets([speech_massive_de['train'], speech_massive_fr['train']])

# creating a train_115 few-shot set by concatenating Korean and Russian
speech_massive_ko = load_dataset("FBK-MT/Speech-MASSIVE", "ko-KR")
speech_massive_ru = load_dataset("FBK-MT/Speech-MASSIVE", "ru-RU")
speech_massive_train_115_ko_ru = concatenate_datasets([speech_massive_ko['train_115'], speech_massive_ru['train_115']])
```

## Dataset Structure

### Data configs

- `all`: load all the 12 languages in one single dataset instance
- `lang`: load only `lang` in the dataset instance, by specifying one of the languages below:
  ```ar-SA, de-DE, es-ES, fr-FR, hu-HU, ko-KR, nl-NL, pl-PL, pt-PT, ru-RU, tr-TR, vi-VN```

### Data Splits

- `validation`: validation (dev) split, available for all the 12 languages
- `train_115`: few-shot (115 samples) split, available for all the 12 languages
- `train`: train split, available only for French (fr-FR) and German (de-DE)

> [!WARNING]
> The `test` split is uploaded as a separate dataset on HF to prevent possible data contamination.

- ⚠️ `test`: available **_only_** in the separate HF dataset repository.
  - ⚠️ [https://huggingface.co/datasets/FBK-MT/Speech-MASSIVE-test](https://huggingface.co/datasets/FBK-MT/Speech-MASSIVE-test)

### Data Instances

```json
{
  // Start of the data collected in Speech-MASSIVE
  'audio': {
    'path': 'train/2b12a21ca64a729ccdabbde76a8f8d56.wav',
    'array': array([-7.80913979e-...7259e-03]),
    'sampling_rate': 16000
  },
  'path': '/path/to/wav/file.wav',
  'is_transcript_reported': False,
  'is_validated': True,
  'speaker_id': '60fcc09cb546eee814672f44',
  'speaker_sex': 'Female',
  'speaker_age': '25',
  'speaker_ethnicity_simple': 'White',
  'speaker_country_of_birth': 'France',
  'speaker_country_of_residence': 'Ireland',
  'speaker_nationality': 'France',
  'speaker_first_language': 'French',
  // End of the data collected in Speech-MASSIVE

  // Start of the data extracted from MASSIVE
  // (https://huggingface.co/datasets/AmazonScience/massive/blob/main/README.md#data-instances)
  'id': '7509',
  'locale': 'fr-FR',
  'partition': 'train',
  'scenario': 2,
  'scenario_str': 'calendar',
  'intent_idx': 32,
  'intent_str': 'calendar_query',
  'utt': 'après les cours de natation quoi d autre sur mon calendrier mardi',
  'annot_utt': 'après les cours de natation quoi d autre sur mon calendrier [date : mardi]',
  'worker_id': '22',
  'slot_method': {'slot': ['date'], 'method': ['translation']},
  'judgments': {
    'worker_id': ['22', '19', '0'],
    'intent_score': [1, 2, 1],
    'slots_score': [1, 1, 1],
    'grammar_score': [4, 4, 4],
    'spelling_score': [2, 1, 2],
    'language_identification': ['target', 'target', 'target']
  },
  'tokens': ['après', 'les', 'cours', 'de', 'natation', 'quoi', 'd', 'autre', 'sur', 'mon', 'calendrier', 'mardi'],
  'labels': ['Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'date']
  // End of the data extracted from MASSIVE
}
```

### Data Fields

`audio.path`: Original audio file name

`audio.array`: Audio samples read at a sampling rate of 16,000 Hz

`audio.sampling_rate`: Sampling rate

`path`: Full path to the original audio file
`is_transcript_reported`: Whether the transcript was reported as 'syntactically wrong' by a crowd-source worker

`is_validated`: Whether the recorded audio was validated by a crowd-source worker to check that it matches the transcript exactly

`speaker_id`: Unique hash id of the crowd-source speaker

`speaker_sex`: Speaker's sex information provided by the crowd-source platform ([Prolific](http://prolific.com))
- Male
- Female
- Unidentified: Information not available from Prolific

`speaker_age`: Speaker's age information provided by Prolific
- age value (`str`)
- Unidentified: Information not available from Prolific

`speaker_ethnicity_simple`: Speaker's ethnicity information provided by Prolific
- ethnicity value (`str`)
- Unidentified: Information not available from Prolific

`speaker_country_of_birth`: Speaker's country of birth information provided by Prolific
- country value (`str`)
- Unidentified: Information not available from Prolific

`speaker_country_of_residence`: Speaker's country of residence information provided by Prolific
- country value (`str`)
- Unidentified: Information not available from Prolific

`speaker_nationality`: Speaker's nationality information provided by Prolific
- nationality value (`str`)
- Unidentified: Information not available from Prolific

`speaker_first_language`: Speaker's first language information provided by Prolific
- language value (`str`)
- Unidentified: Information not available from Prolific

### Limitations

As Speech-MASSIVE is constructed from the MASSIVE dataset, it inherently retains certain grammatical errors present in the original MASSIVE text. Correcting these errors was outside the scope of our project. However, the `is_transcript_reported` attribute in Speech-MASSIVE lets users of the dataset know where such errors were flagged.

## License

All datasets are licensed under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
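To illustrate how the fields above fit together, here is a self-contained sketch over toy records mirroring the schema (values taken from the Data Instances example; audio omitted, and the second record is invented for contrast). It keeps only validated recordings whose transcript was not reported, as the Limitations section suggests, and rebuilds `annot_utt` from `tokens`/`labels`; the `[slot : words]` bracket format is assumed from the example above.

```python
# Toy records mirroring the Speech-MASSIVE schema; NOT real dataset rows.
records = [
    {
        "is_validated": True,
        "is_transcript_reported": False,
        "tokens": ["après", "les", "cours", "de", "natation", "quoi", "d",
                   "autre", "sur", "mon", "calendrier", "mardi"],
        "labels": ["Other"] * 11 + ["date"],
    },
    {"is_validated": False, "is_transcript_reported": True,
     "tokens": ["hello"], "labels": ["Other"]},
]

# Keep only validated recordings with no reported transcript issues.
clean = [r for r in records
         if r["is_validated"] and not r["is_transcript_reported"]]

def to_annot_utt(tokens, labels):
    """Rebuild the bracketed `annot_utt` form from `tokens`/`labels`,
    grouping consecutive tokens that share a non-'Other' slot label."""
    out, i = [], 0
    while i < len(tokens):
        if labels[i] == "Other":
            out.append(tokens[i])
            i += 1
        else:
            j = i
            while j < len(tokens) and labels[j] == labels[i]:
                j += 1
            out.append(f"[{labels[i]} : {' '.join(tokens[i:j])}]")
            i = j
    return " ".join(out)

print(to_annot_utt(clean[0]["tokens"], clean[0]["labels"]))
# après les cours de natation quoi d autre sur mon calendrier [date : mardi]
```

With the real dataset, the same predicate can be passed to `datasets`' `.filter()` instead of a list comprehension.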
### Citation Information

Speech-MASSIVE was accepted at INTERSPEECH 2024 (Kos, Greece). You can access the [Speech-MASSIVE paper on arXiv](https://arxiv.org/abs/2408.03900). Please cite the paper when referencing the Speech-MASSIVE corpus as:

```
@misc{lee2024speechmassivemultilingualspeechdataset,
      title={Speech-MASSIVE: A Multilingual Speech Dataset for SLU and Beyond},
      author={Beomseok Lee and Ioan Calapodescu and Marco Gaido and Matteo Negri and Laurent Besacier},
      year={2024},
      eprint={2408.03900},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.03900},
}
```