whisper_new_split_0020

This model is a fine-tuned version of openai/whisper-tiny on an unknown dataset. It achieves the following results on the training and evaluation sets:

  • Train Loss: 0.0899
  • Train Accuracy: 0.0330
  • Train Wermet: 24.8354
  • Validation Loss: 0.4715
  • Validation Accuracy: 0.0313
  • Validation Wermet: 21.1618
  • Epoch: 19
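
Since this is a Whisper fine-tune trained with TensorFlow, transcription follows the standard transformers Whisper API. Below is a minimal sketch; the repo id `whisper_new_split_0020` is an assumption based on this card's title, so swap in the actual hub path before running.

```python
# Minimal transcription sketch. The repo id "whisper_new_split_0020" is an
# assumption taken from this card's title; replace it with the real hub path.
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration
from datasets import load_dataset

processor = WhisperProcessor.from_pretrained("whisper_new_split_0020")
model = TFWhisperForConditionalGeneration.from_pretrained("whisper_new_split_0020")

# Any 16 kHz mono waveform works; here we borrow a sample from a public test set.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="tf")
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```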

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
  • training_precision: float32
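
The optimizer dictionary above maps directly onto the `AdamWeightDecay` class from transformers' TensorFlow utilities. A sketch of reconstructing it (the `decay` entry was 0.0 in the original config, so it is omitted here):

```python
# Reconstructing the training optimizer from the hyperparameter dict above.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
# `model` is assumed to be a TF Whisper model prepared for Keras training.
model.compile(optimizer=optimizer)
```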

Training results

| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.1027     | 0.0113         | 52.5530      | 4.4267          | 0.0121              | 41.4796           | 0     |
| 4.3285     | 0.0126         | 38.6893      | 3.9835          | 0.0145              | 33.6050           | 1     |
| 3.4573     | 0.0168         | 30.7714      | 2.5568          | 0.0215              | 31.7559           | 2     |
| 2.0878     | 0.0226         | 20.5131      | 1.5738          | 0.0257              | 21.2159           | 3     |
| 1.3529     | 0.0258         | 17.4367      | 1.1712          | 0.0276              | 17.7695           | 4     |
| 0.9953     | 0.0275         | 18.7308      | 0.9389          | 0.0287              | 20.5259           | 5     |
| 0.7852     | 0.0286         | 18.5731      | 0.8074          | 0.0294              | 17.6576           | 6     |
| 0.6428     | 0.0293         | 18.2945      | 0.7219          | 0.0298              | 19.9850           | 7     |
| 0.5384     | 0.0299         | 18.9258      | 0.6610          | 0.0301              | 18.9327           | 8     |
| 0.4565     | 0.0304         | 19.0749      | 0.6117          | 0.0304              | 21.9796           | 9     |
| 0.3901     | 0.0308         | 19.2099      | 0.5693          | 0.0306              | 18.0965           | 10    |
| 0.3348     | 0.0312         | 20.4777      | 0.5449          | 0.0307              | 19.9518           | 11    |
| 0.2877     | 0.0315         | 20.3181      | 0.5232          | 0.0309              | 20.4017           | 12    |
| 0.2471     | 0.0318         | 19.2073      | 0.5057          | 0.0310              | 18.7612           | 13    |
| 0.2120     | 0.0320         | 19.0961      | 0.4925          | 0.0311              | 22.3187           | 14    |
| 0.1809     | 0.0323         | 20.7944      | 0.4849          | 0.0311              | 27.2314           | 15    |
| 0.1539     | 0.0325         | 22.0951      | 0.4787          | 0.0312              | 25.2171           | 16    |
| 0.1299     | 0.0327         | 22.7652      | 0.4733          | 0.0312              | 22.7492           | 17    |
| 0.1087     | 0.0329         | 25.2223      | 0.4701          | 0.0312              | 28.9044           | 18    |
| 0.0899     | 0.0330         | 24.8354      | 0.4715          | 0.0313              | 21.1618           | 19    |
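
Note that training loss falls monotonically while validation loss bottoms out at 0.4701 (epoch 18) and ticks up to 0.4715 at epoch 19, and the validation Wermet column stays noisy throughout. The card does not document the exact Wermet implementation; for sanity-checking transcriptions against references, a plain word error rate can be computed with the jiwer library as a stand-in:

```python
# Stand-in WER check with jiwer; the card's "Wermet" metric implementation
# is not documented here, so treat this only as an approximate comparison.
import jiwer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"
print(jiwer.wer(reference, hypothesis))  # 2 word errors / 9 words ≈ 0.2222
```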

Framework versions

  • Transformers 4.27.0.dev0
  • TensorFlow 2.11.0
  • Tokenizers 0.13.2