This model is a fine-tuned version of distil-small.en on the mozilla-foundation/common_voice_6_fy_NL dataset. It achieves the following results on the evaluation set (the final evaluation at step 3000 in the training results table below):
- Loss: 4.1946
- Wer: 89.3637
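For reference, here is a minimal inference sketch using the transformers pipeline API. The repo id is a placeholder, since this card does not state the model's Hub id; substitute the actual id when loading.

```python
# Minimal inference sketch for a fine-tuned Whisper/Distil-Whisper checkpoint.
# "your-username/distil-small.en-fy-NL" is a hypothetical repo id; replace it
# with this model's actual Hub id (or a local checkpoint path).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/distil-small.en-fy-NL",  # hypothetical id
)

# Transcribe a local audio file; the pipeline resamples to 16 kHz as needed.
result = asr("sample.wav")
print(result["text"])
```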
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
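Since the concrete values are not listed in this copy of the card, the sketch below only illustrates how such a run maps onto transformers' Seq2SeqTrainingArguments. All values are assumptions except max_steps and the 500-step evaluation cadence, which can be read off the results table.

```python
# Hedged configuration sketch; learning rate, batch size, and output_dir are
# placeholders, not the values used for this model. max_steps=3000 and
# eval_steps=500 match the Step column of the results table below.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./distil-small.en-fy-NL",  # hypothetical
    per_device_train_batch_size=16,        # placeholder
    learning_rate=1e-5,                    # placeholder
    max_steps=3000,                        # final step in the results table
    eval_strategy="steps",                 # evaluate every eval_steps
    eval_steps=500,
    logging_steps=500,                     # training loss is logged every 500 steps
    predict_with_generate=True,            # decode with generate() so WER can be scored
)
```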
### Training results

| Training Loss | Epoch    | Step | Validation Loss | WER (%)  |
|:-------------:|:--------:|:----:|:---------------:|:--------:|
| 0.1154        | 33.3333  | 500  | 3.7440          | 105.5534 |
| 0.0107        | 66.6667  | 1000 | 4.0814          | 88.1911  |
| 0.0           | 100.0    | 1500 | 4.1314          | 90.7432  |
| 0.0           | 133.3333 | 2000 | 4.1700          | 89.8663  |
| 0.0           | 166.6667 | 2500 | 4.1878          | 89.6061  |
| 0.0           | 200.0    | 3000 | 4.1946          | 89.3637  |
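A note on the WER column: word error rate is (substitutions + deletions + insertions) divided by the number of reference words, so it can exceed 100%, as in the 500-step row. A quick sketch of the computation with the evaluate library (the Frisian strings are made-up examples, not data from this evaluation set):

```python
# Word error rate with the evaluate library; scores are returned as a
# fraction, so multiply by 100 to match the table. Because insertions count
# as errors, WER can exceed 100 when the model emits many extra words.
import evaluate

wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(
    predictions=["de kat sit op it dak hjir"],  # hypothetical model output
    references=["de kat siet op it dak"],       # hypothetical reference
)
print(f"WER: {wer:.2f}")  # 1 substitution + 1 insertion over 6 words -> 33.33
```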