Commit 3b46101 (parent 305afbb): Update README.md
https://groups.inf.ed.ac.uk/ami/corpus/

To be filled!

**Note**: This dataset corresponds to the data-processing of [KALDI's AMI S5 recipe](https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5).
This means the text is normalized and the audio data is chunked according to the scripts above!
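The exact normalization rules live in the Kaldi recipe scripts linked above. As a rough, hypothetical sketch of what such text normalization looks like (the function name and rules below are illustrative assumptions, not the recipe's actual implementation):

```python
import re

# Hypothetical sketch of Kaldi-style AMI text normalization (illustrative only;
# the real rules are defined in the s5 recipe scripts): uppercase everything,
# keep letters and apostrophes, and collapse whitespace.
def normalize(text: str) -> str:
    text = text.upper()
    text = re.sub(r"[^A-Z' ]", " ", text)  # drop punctuation and digits
    return " ".join(text.split())          # collapse repeated spaces

print(normalize("Okay, let's start the meeting."))  # OKAY LET'S START THE MEETING
```

This is consistent with transcripts in the dataset appearing as uppercase strings such as `'OKAY'`.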
To make the user experience as simple as possible, we provide the already-chunked data here so that the following can be done:
```python
from datasets import load_dataset

ds = load_dataset("edinburghcstr/ami", "ihm")

print(ds)
```
gives:

```
DatasetDict({
    train: Dataset({
        features: ['segment_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
        num_rows: 108502
    })
    validation: Dataset({
        features: ['segment_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
        num_rows: 13098
    })
    test: Dataset({
        features: ['segment_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
        num_rows: 12643
    })
})
```
```py
ds["train"][0]
```

automatically loads the audio into memory:

```
{'segment_id': 'EN2001a',
 'audio_id': 'AMI_EN2001a_H00_MEE068_0000557_0000594',
 'text': 'OKAY',
 'audio': {'path': '/home/patrick_huggingface_co/.cache/huggingface/datasets/downloads/extracted/2d75d5b3e8a91f44692e2973f08b4cac53698f92c2567bd43b41d19c313a5280/EN2001a/train_ami_en2001a_h00_mee068_0000557_0000594.wav',
  'array': array([0.        , 0.        , 0.        , ..., 0.00033569, 0.00030518,
         0.00030518], dtype=float32),
  'sampling_rate': 16000},
 'begin_time': 5.570000171661377,
 'end_time': 5.940000057220459,
 'microphone_id': 'H00',
 'speaker_id': 'MEE068'}
```
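Since the `audio` field decodes to a float array at a known sampling rate, a segment's duration follows directly from the array length. A minimal sketch with toy values matching the example above (the array here is synthetic zeros standing in for real AMI audio; 0.37 s matches `end_time - begin_time` for this segment):

```python
import numpy as np

# Toy stand-in for the `audio` field shown above: the example segment spans
# roughly 0.37 s at 16 kHz, i.e. 5920 samples.
sampling_rate = 16000
array = np.zeros(5920, dtype=np.float32)

# Duration in seconds follows from the array length and the sampling rate.
duration = len(array) / sampling_rate
print(duration)  # 0.37
```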
The dataset was tested for correctness by fine-tuning a Wav2Vec2-Large model on it, more specifically [the `wav2vec2-large-lv60` checkpoint](https://huggingface.co/facebook/wav2vec2-large-lv60).

As can be seen in these experiments, training the model for less than 2 epochs gives

*Result (WER)*:

| "dev" | "eval" |
|---|---|
| 25.27 | 25.21 |

as can be seen [here]( ).

You can run [run.sh]( ) to reproduce the result.
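The WER numbers above come from the linked experiment. As a self-contained sketch of how word error rate itself is computed (a plain word-level edit-distance implementation for illustration, not the scoring code used in that experiment):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One deletion ("so") and one substitution ("the" -> "a") over 7 reference words.
print(f"{100 * wer('okay so we should start the meeting', 'okay we should start a meeting'):.2f}")  # 28.57
```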