cifkao committed
Commit: 265a800
Parent: f746307

Update README.md

Files changed (1):
  README.md (+22 -2)
README.md CHANGED
@@ -40,10 +40,30 @@ Other arguments can be specified to control audio loading:
 - `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
 - `decode_audio=False` to skip decoding the audio and just get the MP3 file paths.
 
-## Running evaluation
+## Running the benchmark
 
-Use the [`alt-eval`](https://github.com/audioshake/alt-eval) package for evaluation:
+The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval):
 ```python
+from datasets import load_dataset
 from alt_eval import compute_metrics
+
+dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"]
+# transcriptions: list[str]
+compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
+```
+
+By default, the dataset includes the audio, allowing you to run transcription directly.
+For example, the following code can be used to evaluate Whisper:
+```python
+dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", decode_audio=False)["test"]
+model = whisper.load_model("tiny")
+transcriptions = [
+    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
+    for a in dataset["audio"]
+]
 compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
+```
+Alternatively, if you already have transcriptions, you might prefer to skip loading the audio:
+```python
+dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"]
 ```
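
For reference, here is a self-contained sketch of the Whisper run added in this commit, with the imports the snippet leaves implicit filled in. The "tiny" model, the "v1.0.0" revision, and the loader arguments are taken from the diff above; everything else (e.g. printing the result) is an assumption, not part of the new README.
```python
# Minimal end-to-end sketch of the benchmark run shown in the diff above.
# Assumes the `datasets`, `openai-whisper`, and `alt-eval` packages are installed.
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

# Skip audio decoding so Whisper can read the MP3 files directly from disk.
dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", decode_audio=False)["test"]

model = whisper.load_model("tiny")
transcriptions = [
    # Join Whisper's segments into line-separated lyrics for each track.
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]

# compute_metrics is assumed here to return a mapping of metric names to values.
results = compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
print(results)
```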
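
If the transcriptions were produced ahead of time, the `with_audio=False` variant from the diff avoids downloading the audio altogether. The sketch below assumes one plain-text transcript per example stored in dataset order; the `transcriptions/` file layout is purely illustrative and not part of the dataset card.
```python
from pathlib import Path

from datasets import load_dataset
from alt_eval import compute_metrics

# Load only the reference lyrics and metadata; `with_audio=False` comes from the diff above.
dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"]

# Hypothetical layout: transcriptions/0.txt, transcriptions/1.txt, ... in dataset order.
transcriptions = [
    Path(f"transcriptions/{i}.txt").read_text(encoding="utf-8")
    for i in range(len(dataset))
]

compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```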