# openai/whisper-small
This model is a fine-tuned version of openai/whisper-small on the Hanhpt23/ChineseMed dataset. It achieves the following results on the evaluation set:
- Loss: 4.9830
- Wer: 123.0681
## Model description
More information needed
## Intended uses & limitations
More information needed
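The card itself documents no usage, but the checkpoint can be loaded for transcription with the `transformers` pipeline API. The sketch below makes assumptions: `path/to/checkpoint` is a placeholder for the published repo id or a local checkpoint directory, and `sample.wav` is a hypothetical input file.

```python
# Sketch: transcribing audio with the fine-tuned checkpoint via the
# transformers `pipeline` API.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="path/to/checkpoint",  # placeholder, not a real repo id
)

# Whisper expects 16 kHz audio; for file-path inputs the pipeline decodes
# and resamples automatically (requires ffmpeg).
result = asr("sample.wav")
print(result["text"])
```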
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a matching configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
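As a sketch, these settings map onto a `transformers` `Seq2SeqTrainingArguments` configuration roughly as follows. The `output_dir` is a placeholder; the Adam betas and epsilon are spelled out even though they match the library defaults.

```python
# Sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-chinesemed",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,    # default
    adam_beta2=0.999,  # default
    adam_epsilon=1e-8, # default
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
)
```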
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer      |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0264        | 1.0   | 2222  | 2.9653          | 115.7413 |
| 2.3821        | 2.0   | 4444  | 2.9087          | 114.9971 |
| 1.5873        | 3.0   | 6666  | 3.3147          | 107.5558 |
| 0.9969        | 4.0   | 8888  | 3.7880          | 119.2330 |
| 0.6546        | 5.0   | 11110 | 4.1111          | 106.9834 |
| 0.5117        | 6.0   | 13332 | 4.2925          | 107.2696 |
| 0.4367        | 7.0   | 15554 | 4.4602          | 106.0675 |
| 0.3898        | 8.0   | 17776 | 4.5509          | 105.8958 |
| 0.3962        | 9.0   | 19998 | 4.6185          | 127.8191 |
| 0.3297        | 10.0  | 22220 | 4.6620          | 118.8323 |
| 0.3308        | 11.0  | 24442 | 4.7870          | 116.3137 |
| 0.304         | 12.0  | 26664 | 4.8033          | 106.2393 |
| 0.306         | 13.0  | 28886 | 4.8275          | 124.8426 |
| 0.2777        | 14.0  | 31108 | 4.8636          | 106.1248 |
| 0.329         | 15.0  | 33330 | 4.8876          | 105.5524 |
| 0.2666        | 16.0  | 35552 | 4.8984          | 110.6468 |
| 0.2713        | 17.0  | 37774 | 4.9296          | 105.2089 |
| 0.2834        | 18.0  | 39996 | 4.9481          | 123.6978 |
| 0.2202        | 19.0  | 42218 | 4.9403          | 122.8964 |
| 0.225         | 20.0  | 44440 | 4.9830          | 123.0681 |
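The Wer column is a percentage, and values above 100 are possible because insertion errors are counted against the reference length; for unsegmented Chinese transcripts, word-level WER is computed over space-delimited tokens, which can inflate the score. As a sketch, the metric can be reproduced with the `evaluate` library (the transcripts below are hypothetical; a real run would use decoder output and references from Hanhpt23/ChineseMed):

```python
# Sketch: computing WER with the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical example transcripts.
predictions = ["the patient reports mild chest pain"]
references = ["the patient reported mild chest pain"]

# `compute` returns a fraction; multiply by 100 to match the table's scale.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```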
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1