# whisper_end22
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the training and evaluation sets:
- Train Loss: 0.1061
- Train Accuracy: 0.0341
- Validation Loss: 0.5635
- Validation Accuracy: 0.0314
- Epoch: 22
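The card does not yet include a usage snippet, so here is a minimal inference sketch using the TensorFlow Whisper classes that ship with transformers. The repo id `your-username/whisper_end22` and the silent `audio` array are hypothetical placeholders, not details from this card; substitute the real hub id (or a local checkpoint path) and actual 16 kHz speech.

```python
import numpy as np
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

model_id = "your-username/whisper_end22"  # hypothetical placeholder hub id

processor = WhisperProcessor.from_pretrained(model_id)
model = TFWhisperForConditionalGeneration.from_pretrained(model_id)

# Whisper expects 16 kHz mono audio; one second of silence stands in for real speech here.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")

generated_ids = model.generate(inputs.input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
```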
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
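For reference, the optimizer in the dict above can be reconstructed with the `AdamWeightDecay` class that transformers provides for TensorFlow training; this is a sketch, and the `decay: 0.0` and `amsgrad: False` entries are the Keras defaults, so they need no explicit arguments.

```python
from transformers import AdamWeightDecay

# Mirrors the hyperparameter dict above; decay=0.0 and amsgrad=False are
# already the defaults, so only the remaining values are set explicitly.
optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    weight_decay_rate=0.01,
)
# model.compile(optimizer=optimizer) would then plug it into a Keras training loop.
```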
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 5.0856     | 0.0116         | 4.4440          | 0.0123              | 0     |
| 4.3149     | 0.0131         | 4.0521          | 0.0142              | 1     |
| 3.9260     | 0.0146         | 3.7264          | 0.0153              | 2     |
| 3.5418     | 0.0160         | 3.3026          | 0.0174              | 3     |
| 2.7510     | 0.0198         | 2.0157          | 0.0241              | 4     |
| 1.6782     | 0.0250         | 1.3567          | 0.0273              | 5     |
| 1.1705     | 0.0274         | 1.0678          | 0.0286              | 6     |
| 0.9126     | 0.0287         | 0.9152          | 0.0294              | 7     |
| 0.7514     | 0.0296         | 0.8057          | 0.0299              | 8     |
| 0.6371     | 0.0302         | 0.7409          | 0.0302              | 9     |
| 0.5498     | 0.0307         | 0.6854          | 0.0306              | 10    |
| 0.4804     | 0.0312         | 0.6518          | 0.0307              | 11    |
| 0.4214     | 0.0316         | 0.6200          | 0.0310              | 12    |
| 0.3713     | 0.0319         | 0.5947          | 0.0311              | 13    |
| 0.3281     | 0.0322         | 0.5841          | 0.0311              | 14    |
| 0.2891     | 0.0325         | 0.5700          | 0.0313              | 15    |
| 0.2550     | 0.0328         | 0.5614          | 0.0313              | 16    |
| 0.2237     | 0.0331         | 0.5572          | 0.0313              | 17    |
| 0.1959     | 0.0333         | 0.5563          | 0.0314              | 18    |
| 0.1698     | 0.0335         | 0.5530          | 0.0314              | 19    |
| 0.1455     | 0.0337         | 0.5590          | 0.0314              | 20    |
| 0.1242     | 0.0339         | 0.5743          | 0.0313              | 21    |
| 0.1061     | 0.0341         | 0.5635          | 0.0314              | 22    |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Tokenizers 0.13.2