Update README.md after loading script refactoring
README.md
CHANGED
@@ -160,6 +160,60 @@ three columns are present within each file:
- `keyword_transcription`/`adversary_keyword_transcription` - audio transcription,
- `keyword_phonetic_transcription`/`adversary_keyword_phonetic_transcription` - audio phonetic transcription.
163 |
+
## Using the Dataset
|
164 |
+
|
165 |
+
The dataset can be used by:
|
166 |
+
- downloading the archive and constructing all the test cases based on the provided `tsv` files,
|
167 |
+
- `datasets` package.
In the latter case, the following should work:

```python
from datasets import load_dataset

dataset = load_dataset(path="voiceintelligenceresearch/MOCKS", name="en.LS-clean", split="offline")
```
The allowed values for `name` are:
- `en.LS-{clean,other}`,
- `en.LS-{clean,other}.positive`,
- `en.LS-{clean,other}.similar`,
- `en.LS-{clean,other}.different`,
- `en.LS-{clean,other}.subset`,
- `en.LS-{clean,other}.positive_subset`,
- `en.LS-{clean,other}.similar_subset`,
- `en.LS-{clean,other}.different_subset`,
- `{de,en,es,fr,it}.MCV.positive`,
- `{de,en,es,fr,it}.MCV.positive.similar`,
- `{de,en,es,fr,it}.MCV.positive.different`,
- `{de,en,es,fr,it}.MCV.positive.subset`,
- `{de,en,es,fr,it}.MCV.positive.positive_subset`,
- `{de,en,es,fr,it}.MCV.positive.similar_subset`,
- `{de,en,es,fr,it}.MCV.positive.different_subset`.
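The brace notation above is shorthand: `en.LS-{clean,other}.positive`, for example, stands for the two concrete names `en.LS-clean.positive` and `en.LS-other.positive`. A small helper to expand such patterns (purely illustrative, not part of the dataset tooling):

```python
import itertools
import re

def expand(pattern):
    # Split on {a,b,...} groups; odd-indexed pieces are the captured options.
    parts = re.split(r"\{([^}]*)\}", pattern)
    options = [p.split(",") if i % 2 else [p] for i, p in enumerate(parts)]
    # Cartesian product of all option lists yields the concrete names.
    return ["".join(combo) for combo in itertools.product(*options)]

assert expand("en.LS-{clean,other}.positive") == [
    "en.LS-clean.positive",
    "en.LS-other.positive",
]
```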
The allowed values for `split` are:
- `offline`,
- `online`.
`load_dataset` provides a list of dictionary objects with the following contents:

```
{
	"keyword_id": datasets.Value("string"),
	"keyword_transcription": datasets.Value("string"),
	"test_id": datasets.Value("string"),
	"test_transcription": datasets.Value("string"),
	"test_audio": datasets.Audio(sampling_rate=16000),
	"label": datasets.Value("bool"),
}
```
Each element of this list represents a single test case for QbyT KWS:
- `keyword_id` - the name of the keyword audio file in `data.tar.gz` (not used in QbyT KWS),
- `keyword_transcription` - transcription of the keyword,
- `test_id` - the name of the test audio file in `data.tar.gz`,
- `test_transcription` - transcription of the test sample,
- `test_audio` - raw data of the test audio,
- `label` - `True` if the test case is positive (`keyword_transcription` is a substring of `test_transcription`), `False` otherwise (`similar` and `different` subsets).
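The `label` semantics can be illustrated with a couple of hypothetical test cases (the dictionaries below mirror the schema, but the values are made up and the audio fields are omitted):

```python
# Hypothetical MOCKS-style test cases (audio fields omitted).
test_cases = [
    {"keyword_transcription": "left", "test_transcription": "turn left now", "label": True},
    {"keyword_transcription": "left", "test_transcription": "turn right now", "label": False},
]

def expected_label(case):
    # A test case is positive when the keyword transcription
    # appears as a substring of the test transcription.
    return case["keyword_transcription"] in case["test_transcription"]

for case in test_cases:
    assert expected_label(case) == case["label"]
```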
Note that each test case can be extended to QbyE KWS by reading the proper `keyword_id` file. Unfortunately, there is no easy way to do that in the loading script.
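One way to do the QbyE extension manually is to pull the `keyword_id` member straight out of the downloaded archive. A minimal sketch with the standard `tarfile` module — the member path used here is a made-up placeholder, not the actual layout of `data.tar.gz`:

```python
import io
import os
import tarfile
import tempfile

def read_member(archive_path, member_name):
    # Pull a single member's raw bytes out of a .tar.gz archive.
    with tarfile.open(archive_path, mode="r:gz") as tar:
        return tar.extractfile(member_name).read()

# Self-contained demo: build a tiny stand-in for data.tar.gz.
demo_path = os.path.join(tempfile.mkdtemp(), "data.tar.gz")
payload = b"fake-wav-bytes"
with tarfile.open(demo_path, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="keywords/kw_0001.wav")  # hypothetical member path
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

assert read_member(demo_path, "keywords/kw_0001.wav") == payload
```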
All the test files are provided in 16 kHz, even though the `{de,en,es,fr,it}.MCV` files are stored at their original sampling rate (usually 48 kHz) in the `data.tar.gz` archives.
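The 48 kHz to 16 kHz conversion is a clean 3:1 ratio. As a toy illustration of why the ratio matters — real resamplers (including the one behind `datasets.Audio`) apply an anti-aliasing low-pass filter first, so do not use this on actual audio:

```python
def decimate(samples, factor):
    # Naive decimation: keep every `factor`-th sample.
    # Real resampling low-pass filters the signal before dropping samples.
    return samples[::factor]

# 48 kHz -> 16 kHz is a factor-of-3 reduction in sample count.
src_rate, dst_rate = 48_000, 16_000
factor = src_rate // dst_rate  # 3
one_second_48k = list(range(src_rate))
one_second_16k = decimate(one_second_48k, factor)
assert len(one_second_16k) == dst_rate
```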
## Dataset Creation

The MOCKS testset was created from the publicly available LibriSpeech and Mozilla Common Voice (MCV) datasets. To create it: