
checkpoints

This model is a fine-tuned version of openai/whisper-tiny.en on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6298
  • WER: 23.5927
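The WER figure above is the word error rate: word-level substitutions, deletions, and insertions divided by the number of reference words, reported here as a percentage. A minimal self-contained sketch of the metric (illustrative only; it is not the implementation used by the training script):

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over word
# sequences, expressed as a percentage of the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # One-row dynamic-programming edit distance.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]
        d[0] = i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,          # deletion
                       d[j - 1] + 1,      # insertion
                       prev + (r != h))   # substitution (free if words match)
            prev = cur
    return 100.0 * d[len(hyp)] / len(ref)

# One dropped word out of six -> 16.67% WER.
print(round(wer("the cat sat on the mat", "the cat sat on mat"), 2))  # -> 16.67
```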

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 1000
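The warmup and scheduler settings above imply a learning rate that ramps linearly from 0 to 1e-05 over the first 500 steps, then decays linearly back to 0 at step 1000. A small sketch of that shape (mirroring the behaviour of `get_linear_schedule_with_warmup` in transformers, not the trainer's actual code):

```python
# Linear warmup + linear decay schedule implied by the hyperparameters:
# lr=1e-05, 500 warmup steps, 1000 total training steps.
def linear_warmup_lr(step: int, base_lr: float = 1e-05,
                     warmup_steps: int = 500, total_steps: int = 1000) -> float:
    if step < warmup_steps:
        # Ramp linearly from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(250))   # halfway through warmup -> 5e-06
print(linear_warmup_lr(500))   # peak -> 1e-05
print(linear_warmup_lr(750))   # halfway through decay -> 5e-06
```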

Training results

| Training Loss | Epoch | Step | Validation Loss | WER |
|---|---|---|---|---|
| 3.0151 | 0.2778 | 10 | 3.0286 | 56.4200 |
| 2.9308 | 0.5556 | 20 | 2.9485 | 48.8615 |
| 2.8354 | 0.8333 | 30 | 2.8120 | 47.7230 |
| 2.5887 | 1.1111 | 40 | 2.6141 | 45.8571 |
| 2.3781 | 1.3889 | 50 | 2.3393 | 44.5604 |
| 2.0871 | 1.6667 | 60 | 1.9773 | 43.1056 |
| 1.6094 | 1.9444 | 70 | 1.5469 | 46.4896 |
| 1.2354 | 2.2222 | 80 | 1.1572 | 56.1037 |
| 0.8648 | 2.5 | 90 | 0.9093 | 35.5787 |
| 0.7721 | 2.7778 | 100 | 0.8162 | 34.0291 |
| 0.6662 | 3.0556 | 110 | 0.7634 | 33.0487 |
| 0.6899 | 3.3333 | 120 | 0.7265 | 31.3093 |
| 0.6348 | 3.6111 | 130 | 0.6990 | 30.3922 |
| 0.6225 | 3.8889 | 140 | 0.6766 | 29.0323 |
| 0.5557 | 4.1667 | 150 | 0.6587 | 27.9886 |
| 0.5236 | 4.4444 | 160 | 0.6436 | 27.7356 |
| 0.5057 | 4.7222 | 170 | 0.6299 | 27.2929 |
| 0.5335 | 5.0 | 180 | 0.6176 | 26.9450 |
| 0.5073 | 5.2778 | 190 | 0.6050 | 26.6603 |
| 0.4444 | 5.5556 | 200 | 0.5965 | 26.2176 |
| 0.4432 | 5.8333 | 210 | 0.5872 | 25.6167 |
| 0.4254 | 6.1111 | 220 | 0.5813 | 25.2688 |
| 0.4108 | 6.3889 | 230 | 0.5729 | 25.1423 |
| 0.3889 | 6.6667 | 240 | 0.5672 | 25.2688 |
| 0.381 | 6.9444 | 250 | 0.5624 | 25.2688 |
| 0.3638 | 7.2222 | 260 | 0.5561 | 24.6679 |
| 0.3247 | 7.5 | 270 | 0.5508 | 24.6047 |
| 0.3051 | 7.7778 | 280 | 0.5465 | 24.4466 |
| 0.3347 | 8.0556 | 290 | 0.5405 | 24.0038 |
| 0.2621 | 8.3333 | 300 | 0.5404 | 23.9722 |
| 0.2869 | 8.6111 | 310 | 0.5362 | 24.3517 |
| 0.3108 | 8.8889 | 320 | 0.5306 | 23.7508 |
| 0.2519 | 9.1667 | 330 | 0.5283 | 24.1303 |
| 0.2571 | 9.4444 | 340 | 0.5296 | 23.9089 |
| 0.2308 | 9.7222 | 350 | 0.5297 | 23.6243 |
| 0.2582 | 10.0 | 360 | 0.5256 | 23.1499 |
| 0.1937 | 10.2778 | 370 | 0.5274 | 22.9285 |
| 0.1937 | 10.5556 | 380 | 0.5256 | 23.5927 |
| 0.2092 | 10.8333 | 390 | 0.5276 | 30.2657 |
| 0.1701 | 11.1111 | 400 | 0.5272 | 23.0550 |
| 0.1519 | 11.3889 | 410 | 0.5322 | 23.3713 |
| 0.1466 | 11.6667 | 420 | 0.5307 | 23.1183 |
| 0.1558 | 11.9444 | 430 | 0.5300 | 23.1499 |
| 0.1181 | 12.2222 | 440 | 0.5354 | 22.9602 |
| 0.1225 | 12.5 | 450 | 0.5338 | 23.5294 |
| 0.1308 | 12.7778 | 460 | 0.5337 | 22.9918 |
| 0.1235 | 13.0556 | 470 | 0.5360 | 23.4662 |
| 0.0819 | 13.3333 | 480 | 0.5467 | 29.6015 |
| 0.0805 | 13.6111 | 490 | 0.5518 | 23.5610 |
| 0.0937 | 13.8889 | 500 | 0.5495 | 23.3713 |
| 0.0657 | 14.1667 | 510 | 0.5500 | 23.0867 |
| 0.0616 | 14.4444 | 520 | 0.5599 | 23.2448 |
| 0.0648 | 14.7222 | 530 | 0.5605 | 23.7824 |
| 0.0671 | 15.0 | 540 | 0.5591 | 23.4662 |
| 0.0443 | 15.2778 | 550 | 0.5750 | 22.9918 |
| 0.0472 | 15.5556 | 560 | 0.5701 | 23.0550 |
| 0.0407 | 15.8333 | 570 | 0.5826 | 23.5610 |
| 0.0371 | 16.1111 | 580 | 0.5775 | 23.7824 |
| 0.0276 | 16.3889 | 590 | 0.5823 | 23.2764 |
| 0.035 | 16.6667 | 600 | 0.5821 | 22.6755 |
| 0.0353 | 16.9444 | 610 | 0.5810 | 23.3713 |
| 0.0228 | 17.2222 | 620 | 0.5944 | 23.6875 |
| 0.019 | 17.5 | 630 | 0.5957 | 23.5294 |
| 0.0195 | 17.7778 | 640 | 0.5962 | 23.0867 |
| 0.0196 | 18.0556 | 650 | 0.5968 | 23.5610 |
| 0.0146 | 18.3333 | 660 | 0.5978 | 23.0867 |
| 0.0148 | 18.6111 | 670 | 0.6028 | 23.5927 |
| 0.0147 | 18.8889 | 680 | 0.6033 | 24.0354 |
| 0.012 | 19.1667 | 690 | 0.6062 | 23.5294 |
| 0.0121 | 19.4444 | 700 | 0.6091 | 23.4345 |
| 0.0112 | 19.7222 | 710 | 0.6096 | 23.7508 |
| 0.0121 | 20.0 | 720 | 0.6121 | 23.3713 |
| 0.0093 | 20.2778 | 730 | 0.6150 | 23.5927 |
| 0.0092 | 20.5556 | 740 | 0.6140 | 23.4978 |
| 0.0095 | 20.8333 | 750 | 0.6142 | 23.2448 |
| 0.0091 | 21.1111 | 760 | 0.6177 | 23.3713 |
| 0.0085 | 21.3889 | 770 | 0.6185 | 23.4978 |
| 0.0086 | 21.6667 | 780 | 0.6189 | 23.4029 |
| 0.0086 | 21.9444 | 790 | 0.6201 | 23.5610 |
| 0.0072 | 22.2222 | 800 | 0.6211 | 23.6243 |
| 0.0072 | 22.5 | 810 | 0.6222 | 23.4662 |
| 0.0083 | 22.7778 | 820 | 0.6229 | 23.4662 |
| 0.0073 | 23.0556 | 830 | 0.6233 | 23.4978 |
| 0.0076 | 23.3333 | 840 | 0.6241 | 23.2132 |
| 0.0075 | 23.6111 | 850 | 0.6253 | 23.5610 |
| 0.0065 | 23.8889 | 860 | 0.6261 | 23.5294 |
| 0.0067 | 24.1667 | 870 | 0.6267 | 23.4662 |
| 0.0069 | 24.4444 | 880 | 0.6270 | 23.4978 |
| 0.0077 | 24.7222 | 890 | 0.6272 | 23.3713 |
| 0.0063 | 25.0 | 900 | 0.6276 | 23.3080 |
| 0.0064 | 25.2778 | 910 | 0.6277 | 23.5610 |
| 0.0062 | 25.5556 | 920 | 0.6283 | 23.5610 |
| 0.0065 | 25.8333 | 930 | 0.6285 | 23.5610 |
| 0.0054 | 26.1111 | 940 | 0.6288 | 23.5610 |
| 0.0065 | 26.3889 | 950 | 0.6291 | 23.5610 |
| 0.0062 | 26.6667 | 960 | 0.6294 | 23.5610 |
| 0.0061 | 26.9444 | 970 | 0.6296 | 23.5294 |
| 0.0066 | 27.2222 | 980 | 0.6298 | 23.5927 |
| 0.0062 | 27.5 | 990 | 0.6298 | 23.5927 |
| 0.0063 | 27.7778 | 1000 | 0.6298 | 23.5927 |

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1.dev0
  • Tokenizers 0.19.1
Model stats

  • Downloads last month: 6
  • Model size: 37.8M params (Safetensors)
  • Tensor type: F32
  • Fine-tuned from: openai/whisper-tiny.en