polinaeterna (HF staff) committed
Commit
9590440
1 Parent(s): d237cba

update readme

Files changed (1): README.md +17 -4
README.md CHANGED
@@ -116,13 +116,26 @@ automation. This dataset is generated by applying forced alignment on crowd-sour
 audio to produce per-word timing estimates for extraction.
 All alignments are included in the dataset.
 
-Data is provided in two formats: `wav` (16 kHz) and `opus` (48 kHz). To download a specific format, pass it to the `format`
-argument (the default format is `wav`):
+Data is provided in two formats: `wav` (16 kHz) and `opus` (48 kHz). Default configurations are named
+`"{lang}_{format}"`, so to load, for example, Tatar in wav format, do:
 
 ```python
-ds = load_dataset("polinaeterna/ml_spoken_words", languages="tt", format="opus")
+ds = load_dataset("polinaeterna/ml_spoken_words", "tt_wav")
 ```
+
+To download multiple languages in a single dataset, pass a list of languages to the `languages` argument:
+```python
+ds = load_dataset("polinaeterna/ml_spoken_words", languages=["ar", "tt", "br"])
+```
+
+To download a specific format, pass it to the `format` argument (the default format is `wav`):
+```python
+ds = load_dataset("polinaeterna/ml_spoken_words", languages=["ar", "tt", "br"], format="opus")
+```
+Note that each time you provide a different set of languages, examples are generated from scratch,
+even if you already provided one or several of them before, because a custom configuration is
+created each time (the data is **not** redownloaded, though).
+
 ### Supported Tasks and Leaderboards
 
 Keyword spotting, Spoken term search
@@ -213,7 +226,7 @@ High-resourced (>100 hours):
 
 ### Data Fields
 
-* file: string, relative audio path inside the archive **#TODO: change according to the new local path schema?**
+* file: string, relative audio path inside the archive
 * is_valid: if a sample is valid
 * language: language of an instance. Makes sense only when providing multiple languages to the
 dataset loader (for example, `load_dataset("ml_spoken_words", languages=["ar", "tt"])`)
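
The `"{lang}_{format}"` config-naming scheme the diff introduces can be sketched with a small helper; this is an illustrative assumption, not a function from the dataset script, and it only mirrors the naming convention described above:

```python
# Hypothetical helper mirroring the "{lang}_{format}" config naming
# described in the README diff; not part of the dataset loading script.
def config_name(lang: str, fmt: str = "wav") -> str:
    if fmt not in ("wav", "opus"):  # the two formats the dataset provides
        raise ValueError(f"unknown format: {fmt!r}")
    return f"{lang}_{fmt}"

# Tatar in the default wav format:
print(config_name("tt"))          # -> tt_wav
# Breton in opus:
print(config_name("br", "opus"))  # -> br_opus
```

The resulting string would be passed as the second positional argument of `load_dataset`, e.g. `load_dataset("polinaeterna/ml_spoken_words", config_name("tt"))`.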