# my_medium_wspr
This model is a fine-tuned version of openai/whisper-medium on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0939
- Wer: 11.9372
## Model description
More information needed
## Intended uses & limitations
More information needed
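As a Whisper fine-tune, the model should be usable for automatic speech recognition through the transformers `pipeline` API. A minimal sketch (the repo id comes from this card; the audio file path is hypothetical):

```python
MODEL_ID = "AkylaiBva/my_medium_wspr"  # this repository

def build_asr_pipeline(model_id: str = MODEL_ID):
    # Lazy import: transformers is heavy and only needed at inference time.
    from transformers import pipeline
    # chunk_length_s=30 splits long audio into 30-second windows,
    # matching Whisper's fixed input context.
    return pipeline("automatic-speech-recognition", model=model_id, chunk_length_s=30)

if __name__ == "__main__":
    asr = build_asr_pipeline()
    # "sample.wav" is a placeholder path, not a file shipped with the model.
    print(asr("sample.wav")["text"])
```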
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3
- training_steps: 100
- mixed_precision_training: Native AMP
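Restating the values above in plain Python (keyed by the corresponding transformers `Seq2SeqTrainingArguments` names; a sketch, not the exact training script) shows how the reported total train batch size of 4 arises:

```python
# Hyperparameters as reported above, keyed by Trainer argument names.
config = {
    "learning_rate": 1e-4,
    "per_device_train_batch_size": 1,
    "per_device_eval_batch_size": 1,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "warmup_steps": 3,
    "max_steps": 100,
    "fp16": True,  # "Native AMP" mixed precision
}

# Effective batch size = per-device batch size x gradient accumulation steps.
effective_batch = (
    config["per_device_train_batch_size"] * config["gradient_accumulation_steps"]
)
print(effective_batch)  # 4, the reported total_train_batch_size
```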
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------|:------|:-----|:----------------|:----|
| 2.6756 | 0.0020 | 3 | 2.8119 | 123.2984 |
| 2.7362 | 0.0041 | 6 | 2.1522 | 102.6702 |
| 1.0677 | 0.0061 | 9 | 1.2918 | 79.4764 |
| 1.1569 | 0.0081 | 12 | 0.8556 | 59.3717 |
| 1.0248 | 0.0102 | 15 | 0.7711 | 230.4712 |
| 0.5058 | 0.0122 | 18 | 0.8485 | 264.3979 |
| 0.4122 | 0.0143 | 21 | 0.7497 | 244.7644 |
| 0.4989 | 0.0163 | 24 | 0.7179 | 53.7173 |
| 0.7726 | 0.0183 | 27 | 0.8902 | 60.0524 |
| 0.793 | 0.0204 | 30 | 0.7948 | 52.2513 |
| 0.7562 | 0.0224 | 33 | 0.5553 | 41.5183 |
| 1.0738 | 0.0244 | 36 | 0.3708 | 36.7016 |
| 0.2565 | 0.0265 | 39 | 0.4904 | 40.5236 |
| 0.4824 | 0.0285 | 42 | 0.4613 | 44.7644 |
| 0.4291 | 0.0305 | 45 | 0.4332 | 56.0733 |
| 0.1382 | 0.0326 | 48 | 0.4353 | 47.6440 |
| 0.2113 | 0.0346 | 51 | 0.4286 | 37.7487 |
| 0.2521 | 0.0367 | 54 | 0.4604 | 38.4293 |
| 0.9556 | 0.0387 | 57 | 0.4387 | 43.4031 |
| 0.5493 | 0.0407 | 60 | 0.2983 | 30.8901 |
| 0.1505 | 0.0428 | 63 | 0.2391 | 26.5969 |
| 0.0664 | 0.0448 | 66 | 0.2358 | 27.6963 |
| 0.2826 | 0.0468 | 69 | 0.2045 | 27.9058 |
| 0.2103 | 0.0489 | 72 | 0.1674 | 20.1571 |
| 0.3009 | 0.0509 | 75 | 0.1725 | 17.9581 |
| 0.2875 | 0.0530 | 78 | 0.1275 | 18.1675 |
| 0.0258 | 0.0550 | 81 | 0.1209 | 16.6492 |
| 0.0935 | 0.0570 | 84 | 0.1545 | 19.0052 |
| 0.26 | 0.0591 | 87 | 0.1516 | 18.7435 |
| 0.0844 | 0.0611 | 90 | 0.1500 | 19.7906 |
| 0.0386 | 0.0631 | 93 | 0.1495 | 20.2618 |
| 0.403 | 0.0652 | 96 | 0.1036 | 12.9319 |
| 0.0091 | 0.0672 | 99 | 0.0939 | 11.9372 |
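The Wer column is the word error rate in percent: word-level edit distance (substitutions, deletions, and insertions) divided by the number of reference words, which is why early values can exceed 100. A minimal pure-Python sketch of the metric (the training run most likely used a packaged implementation such as `evaluate`/`jiwer`, not this code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent, via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))     # 0.0
print(wer("the cat sat", "a cat sat down"))  # 2 edits over 3 reference words
```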
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1