---
annotations_creators:
- expert-generated
language:
- en
- de
- es
- fr
- it
license:
- cc-by-4.0
- mpl-2.0
multilinguality:
- multilingual
dataset_info:
- config_name: config
  features:
  - name: audio_id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
---

# MOCKS: Multilingual Open Custom Keyword Spotting Testset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Paper:** [MOCKS 1.0: Multilingual Open Custom Keyword Spotting Testset](https://www.isca-speech.org/archive/pdfs/interspeech_2023/pudo23_interspeech.pdf)

### Dataset Summary

The Multilingual Open Custom Keyword Spotting Testset (MOCKS) is a comprehensive audio testset for the evaluation and benchmarking of Open-Vocabulary Keyword Spotting (OV-KWS) models. It supports multiple OV-KWS problems: both text-based and audio-based keyword spotting, as well as offline and online (streaming) modes. It is based on the LibriSpeech and Mozilla Common Voice datasets and contains almost 50,000 keywords, with audio data available in English, French, German, Italian, and Spanish.

The testset was generated using automatically obtained alignments, which were used to extract parts of the recordings and split them into keywords and test samples. MOCKS contains both positive and negative examples selected based on phonetic transcriptions; the examples are deliberately challenging and should allow for in-depth OV-KWS model evaluation.

Please refer to our [paper](https://www.isca-speech.org/archive/pdfs/interspeech_2023/pudo23_interspeech.pdf) for further details.

### Supported Tasks and Leaderboards

The MOCKS dataset can be used for the Open-Vocabulary Keyword Spotting (OV-KWS) task. It supports two OV-KWS types:
- Query-by-Text, where the keyword is provided as text and needs to be detected in the audio stream,
- Query-by-Example, where the keyword is provided as an enrollment audio sample for detection in the audio stream.

It also allows for:
- offline keyword detection, where the test audio is trimmed to contain only the keyword of interest,
- online (streaming) keyword detection, where the test audio has past and future context besides the keyword of interest.

### Languages

MOCKS covers 5 languages:
- English - the primary and largest test set,
- German,
- Spanish,
- French,
- Italian.
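The front matter above declares each example with an `audio_id`, a 16 kHz `audio` signal, and a `text` transcription. The snippet below is a minimal sketch of iterating over such a split with the Hugging Face `datasets` library; the repository ID, configuration name, and split name are placeholders (assumptions), not values confirmed by this card.

```python
from datasets import load_dataset

# Placeholder repository/config/split names (assumptions) - replace them with
# the actual Hugging Face Hub path and configuration of the MOCKS release.
ds = load_dataset("<org>/MOCKS", name="config", split="test")

for example in ds.select(range(3)):
    audio = example["audio"]  # typically decoded to {"array", "sampling_rate", "path"}
    print(example["audio_id"], audio["sampling_rate"], len(audio["array"]), example["text"])
```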
## Dataset Structure

The MOCKS testset is split by language, source dataset, and OV-KWS type:

```
MOCKS
│
└───de
│   └───MCV
│   │   └───test
│   │   │   └───offline
│   │   │   │   │   all.pair.different.tsv
│   │   │   │   │   all.pair.positive.tsv
│   │   │   │   │   all.pair.similar.tsv
│   │   │   │   │   data.tar.gz
│   │   │   │   │   subset.pair.different.tsv
│   │   │   │   │   subset.pair.positive.tsv
│   │   │   │   │   subset.pair.similar.tsv
│   │   │   │
│   │   │   └───online
│   │   │   │   │   all.pair.different.tsv
│   │   │   │   │   ...
│   │   │   │   data.offline.transcription.tsv
│   │   │   │   data.online.transcription.tsv
│
└───en
│   └───LS-clean
│   │   └───test
│   │   │   └───offline
│   │   │   │   │   all.pair.different.tsv
│   │   │   │   │   ...
│   │   │   │   ...
│   │
│   └───LS-other
│   │   └───test
│   │   │   └───offline
│   │   │   │   │   all.pair.different.tsv
│   │   │   │   │   ...
│   │   │   │   ...
│   │
│   └───MCV
│   │   └───test
│   │   │   └───offline
│   │   │   │   │   all.pair.different.tsv
│   │   │   │   │   ...
│   │   │   │   ...
│
└───...
```

Each split is divided into:
- positive examples (`all.pair.positive.tsv`) - test examples containing the true keyword, 5000-8000 keywords in each subset,
- similar examples (`all.pair.similar.tsv`) - test examples with phrases similar to the keyword, selected based on phonetic transcription distance,
- different examples (`all.pair.different.tsv`) - test examples with completely different phrases.

All of these files contain the following tab-separated columns:
- `keyword_path` - path to the audio containing the keyword phrase,
- `adversary_keyword_path` - path to the test audio,
- `adversary_keyword_timestamp_start` - start time in seconds of the phrase of interest corresponding to the keyword from `keyword_path`; this field is only available in the **offline** split,
- `adversary_keyword_timestamp_end` - end time in seconds of the phrase of interest corresponding to the keyword from `keyword_path`; this field is only available in the **offline** split,
- `label` - whether `adversary_keyword_path` contains the keyword from `keyword_path` (1 - contains the keyword, 0 - does not contain the keyword).

Each split also contains a subset of the whole data with the same field structure to allow faster evaluation (`subset.pair.*.tsv`); a minimal loading sketch for these pair lists is shown at the end of this section.

Transcriptions are also provided for each audio file in:
- `data.offline.transcription.tsv` - transcriptions for the **offline** examples and for the `keyword_path` files from the **online** scenario,
- `data.online.transcription.tsv` - transcriptions for the adversary (test) examples from the **online** scenario.

Three columns are present in each file:
- `path_to_keyword`/`path_to_adversary_keyword` - path to the audio file,
- `keyword_transcription`/`adversary_keyword_transcription` - audio transcription,
- `keyword_phonetic_transcription`/`adversary_keyword_phonetic_transcription` - phonetic transcription of the audio.

## Dataset Creation

The MOCKS testset was created from the publicly available LibriSpeech and Mozilla Common Voice (MCV) datasets. To create it:
- the [Montreal Forced Aligner (MFA)](https://mfa-models.readthedocs.io/en/latest/acoustic/index.html) with publicly available models was used to extract word-level alignments,
- an internally developed, rule-based grapheme-to-phoneme (G2P) algorithm was used to prepare phonetic transcriptions for each sample.

The data is stored in 16-bit, single-channel WAV format. A 16 kHz sampling rate is used for the LibriSpeech-based testset and a 48 kHz sampling rate for the MCV-based testset. The offline testset contains an additional 0.1 seconds at the beginning and end of each extracted audio sample to mitigate the cut-speech effect. The online version contains approximately 1 additional second at the beginning and end of each extracted audio sample. The MOCKS testset is gender balanced.
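The pair lists described in the Dataset Structure section are plain tab-separated files, so they can be consumed directly without a dedicated loader. Below is a minimal sketch using pandas; the local root directory `MOCKS/` and the choice of the `en/LS-clean` offline subset are assumptions about where and how the data was extracted, not something prescribed by this card.

```python
import pandas as pd

# Minimal sketch: read one of the pair lists described above.
# The local root "MOCKS/" is an assumption about where the archives were extracted.
pairs = pd.read_csv("MOCKS/en/LS-clean/test/offline/subset.pair.positive.tsv", sep="\t")

for _, row in pairs.head(5).iterrows():
    keyword_wav = row["keyword_path"]          # audio containing the keyword phrase
    test_wav = row["adversary_keyword_path"]   # test audio to be scored by the model
    label = row["label"]                       # 1 = keyword present, 0 = keyword absent
    # Timestamp columns are present in the offline pair lists only.
    start = row.get("adversary_keyword_timestamp_start")
    end = row.get("adversary_keyword_timestamp_end")
    print(keyword_wav, test_wav, label, start, end)
```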
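Because the LibriSpeech-based audio is stored at 16 kHz while the MCV-based audio is stored at 48 kHz, models that expect a fixed input rate may need to resample the MCV recordings. The helper below is a hedged sketch using torchaudio (an assumption; any audio I/O library would work), and the commented-out path is a placeholder rather than a real file name.

```python
import torch
import torchaudio

TARGET_SR = 16_000


def load_16k(path: str) -> torch.Tensor:
    """Load a MOCKS WAV file and resample it to 16 kHz if needed."""
    waveform, sr = torchaudio.load(path)  # 16-bit mono WAV -> float tensor of shape (1, num_samples)
    if sr != TARGET_SR:  # e.g. 48 kHz for the MCV-based subsets
        waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=TARGET_SR)
    return waveform


# Placeholder path (assumption): any WAV file extracted from the corresponding data.tar.gz.
# audio = load_16k("MOCKS/de/MCV/test/offline/data/<some_file>.wav")
```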
## Citation Information

```bibtex
@inproceedings{pudo23_interspeech,
  author={Mikołaj Pudo and Mateusz Wosik and Adam Cieślak and Justyna Krzywdziak and Bożena Łukasiak and Artur Janicki},
  title={{MOCKS} 1.0: Multilingual Open Custom Keyword Spotting Testset},
  year={2023},
  booktitle={Proc. Interspeech 2023},
}
```