# Whisper Small Ru ORD 0.7 - Mizoru
This model is a fine-tuned version of openai/whisper-small on the ORD_0.7 dataset. It achieves the following results on the evaluation set:
- Loss: 1.2786
- Wer: 69.8870
- Cer: 37.3459
- Clean Wer: 59.1663
- Clean Cer: 29.9311
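
Below is a minimal inference sketch using the `transformers` automatic-speech-recognition pipeline. The repository id is a placeholder (the exact Hub repo name is not stated in this card) and the audio file name and language hint are illustrative assumptions.

```python
# Minimal ASR inference sketch with the transformers pipeline.
# The model id below is a placeholder; replace it with this model's actual repo id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<namespace>/whisper-small-ru-ORD_0.7",  # placeholder repo id, not from the card
)

# Whisper expects 16 kHz audio; the pipeline decodes and resamples common formats.
result = asr(
    "sample.wav",  # illustrative file name
    generate_kwargs={"language": "russian", "task": "transcribe"},
)
print(result["text"])
```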
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
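
Assuming the standard `transformers` Seq2SeqTrainer setup, the list above corresponds roughly to the following `Seq2SeqTrainingArguments`. This is a reconstruction sketch, not the exact training script: the output path is a placeholder, `fp16=True` stands in for "Native AMP", and the Adam betas/epsilon match the library defaults.

```python
# Configuration sketch mirroring the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ru-ord",  # placeholder path, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # Native AMP mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)
```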
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer     | Clean Wer | Clean Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:---------:|:---------:|
| 1.4233        | 1.0   | 196  | 1.3027          | 78.2041 | 42.0168 | 63.1339   | 35.6190   |
| 1.0985        | 2.0   | 392  | 1.1179          | 73.8741 | 38.0220 | 60.4978   | 30.3044   |
| 0.9609        | 3.0   | 588  | 1.0756          | 70.6593 | 35.9327 | 59.5915   | 29.1679   |
| 0.7698        | 4.0   | 784  | 1.0846          | 71.1564 | 38.0252 | 57.5893   | 29.8950   |
| 0.6445        | 5.0   | 980  | 1.1128          | 68.3353 | 35.9205 | 57.1834   | 28.0442   |
| 0.53          | 6.0   | 1176 | 1.1503          | 66.4836 | 35.4504 | 57.6763   | 28.5774   |
| 0.4199        | 7.0   | 1372 | 1.2154          | 68.9370 | 37.0868 | 58.5459   | 28.9185   |
| 0.3219        | 8.0   | 1568 | 1.2786          | 69.8870 | 37.3459 | 59.1663   | 29.9311   |
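
WER and CER in the table are word- and character-error rates reported as percentages. A minimal sketch of how such scores can be computed with the Hugging Face `evaluate` library follows; the example transcripts are illustrative, and how the "clean" variants are normalized is not specified in this card.

```python
# Compute WER and CER (as percentages) with the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["пример эталонной расшифровки"]      # ground-truth transcripts (illustrative)
predictions = ["пример распознанной расшифровки"]  # model outputs (illustrative)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%  CER: {cer:.2f}%")
```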
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2