# Whisper Small Ru ORD 0.3 - Mizoru
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ORD_0.3 dataset; a minimal inference sketch follows the metrics below. It achieves the following results on the evaluation set:
- Loss: 1.1209
- Wer: 47.3113
- Cer: 27.5860
- Clean Wer: 38.3506
- Clean Cer: 21.9986
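
A minimal inference sketch using the standard `transformers` ASR pipeline. The repo id below is a placeholder pointing at the base model; substitute this fine-tuned checkpoint's Hub identifier, and note that the file name and `generate_kwargs` are illustrative assumptions, not part of this card:

```python
from transformers import pipeline

# Placeholder: replace with this fine-tuned checkpoint's Hub repo id.
MODEL_ID = "openai/whisper-small"

asr = pipeline(
    "automatic-speech-recognition",
    model=MODEL_ID,
    # Force Russian transcription rather than letting Whisper auto-detect.
    generate_kwargs={"language": "russian", "task": "transcribe"},
)

# Transcribe a local audio file; the pipeline decodes and resamples
# the input to Whisper's expected 16 kHz.
print(asr("sample.wav")["text"])
```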
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows this list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
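
A sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments` in the usual `Seq2SeqTrainer`-based Whisper fine-tuning setup; the `output_dir` and the surrounding data/collator wiring are assumptions, not taken from this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ru-ord",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
    # Trainer's default AdamW settings, so no override is needed.
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```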
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer     | Clean Wer | Clean Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:---------:|:---------:|
| 1.4661        | 1.0   | 573  | 1.2056          | 51.6886 | 30.0006 | 41.6769   | 24.2065   |
| 1.2937        | 2.0   | 1146 | 1.1368          | 48.8463 | 28.6297 | 39.8844   | 23.1391   |
| 1.2072        | 3.0   | 1719 | 1.1185          | 48.9176 | 28.0655 | 39.2747   | 22.5662   |
| 1.1072        | 4.0   | 2292 | 1.1209          | 47.3113 | 27.5860 | 38.3506   | 21.9986   |
| 1.0422        | 5.0   | 2865 | 1.1310          | 47.7069 | 27.7457 | 38.4329   | 21.8445   |
| 0.9742        | 6.0   | 3438 | 1.1386          | 47.5932 | 27.5753 | 38.1885   | 21.7741   |
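
The WER and CER figures above can be reproduced with the Hugging Face `evaluate` implementations of these metrics, as sketched below; the transcripts here are illustrative only, and the "Clean" variants presumably apply additional text normalization that this sketch does not reproduce:

```python
import evaluate

# Load the standard word- and character-error-rate metrics
# (the "wer" metric requires the jiwer package).
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["пример расшифровки"]   # ground-truth transcripts
predictions = ["пример рашифровки"]   # model outputs

# Multiply by 100 to match the percentage scale reported in this card.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```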
### Framework versions
- Transformers 4.39.2
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2