Update README.md
README.md (CHANGED)
@@ -155,17 +155,18 @@ See the [project website](https://audioshake.github.io/jam-alt/) for details.
 
 ```python
 from datasets import load_dataset
-dataset = load_dataset("audioshake/jam-alt")
+dataset = load_dataset("audioshake/jam-alt", split="test")
 ```
 
 A subset is defined for each language (`en`, `fr`, `de`, `es`);
 for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs.
 
-By default, the dataset comes with audio. To skip loading the audio, use `with_audio=False`.
 To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`.
 Useful arguments to `datasets.Audio()` are:
 - `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
-- `decode=False` to skip decoding the audio and just get the MP3 file paths.
+- `decode=False` to skip decoding the audio and just get the MP3 file paths and contents.
+
+The `load_dataset` function also accepts a `columns` parameter, which can be useful for example if you want to skip downloading the audio (see the example below).
 
 ## Running the benchmark
 
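For reference, a minimal sketch of the audio handling described in the hunk above; the 16 kHz mono target is an illustrative assumption, not something the dataset card prescribes:

```python
from datasets import Audio, load_dataset

dataset = load_dataset("audioshake/jam-alt", split="test")

# Decode audio to 16 kHz mono on access (illustrative settings).
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000, mono=True))

# Or skip decoding entirely and keep the raw MP3 paths/bytes.
dataset_raw = dataset.cast_column("audio", Audio(decode=False))
```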
@@ -174,14 +175,14 @@ The evaluation is implemented in our [`alt-eval` package](https://github.com/aud
 from datasets import load_dataset
 from alt_eval import compute_metrics
 
-dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")
+dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", split="test")
 # transcriptions: list[str]
 compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
 ```
 
 For example, the following code can be used to evaluate Whisper:
 ```python
-dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")
+dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", split="test")
 dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # Get the raw audio file, let Whisper decode it
 
 model = whisper.load_model("tiny")
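The hunk above stops partway through the Whisper example; the transcription loop between the hunks is not shown in this diff. As a rough, self-contained sketch of how the pieces might fit together, assuming the loop simply transcribes each file and joins the segment texts:

```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", split="test")
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # let Whisper decode the MP3s

model = whisper.load_model("tiny")
transcriptions = [
    # Assumption: join per-segment texts with newlines to approximate lyric lines.
    "\n".join(segment["text"].strip() for segment in model.transcribe(audio["path"])["segments"])
    for audio in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```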
@@ -191,9 +192,9 @@ transcriptions = [
 ]
 compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
 ```
-Alternatively, if you already have transcriptions, you might prefer to skip loading the audio:
+Alternatively, if you already have transcriptions, you might prefer to skip loading the `audio` column:
 ```python
-dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)
+dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", split="test", columns=["name", "text", "language", "license_type"])
 ```
 
 ## Citation
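Combining the audio-free loading shown above with the evaluation call, a possible end-to-end sketch (`my_transcriptions` is a hypothetical mapping from song name to transcription, not part of the dataset):

```python
from datasets import load_dataset
from alt_eval import compute_metrics

# Download only the text-related columns; the audio files are never fetched.
dataset = load_dataset(
    "audioshake/jam-alt",
    revision="v1.0.0",
    split="test",
    columns=["name", "text", "language", "license_type"],
)

# Hypothetical: look up your own transcriptions by song name.
transcriptions = [my_transcriptions[name] for name in dataset["name"]]

compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```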
@@ -221,4 +222,4 @@ When using the benchmark, please cite [our paper](https://www.arxiv.org/abs/2408
 address={Rhodes Island, Greece},
 doi={10.1109/ICASSP49357.2023.10096725}
 }
-```
+```