whisper-tiny-finetune

This model is a fine-tuned version of openai/whisper-tiny.en on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5196
  • WER: 19.8880
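WER (word error rate) is the word-level Levenshtein distance between the model's transcript and the reference, divided by the number of reference words; the values reported here are percentages. A minimal sketch of the metric (a from-scratch illustration, not the evaluation code used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of four reference words:
print(wer("the quick brown fox", "the quick brown dog"))  # 0.25
```

Multiply by 100 to get the percentage form used in this card (e.g. 19.8880).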

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 1000
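In the Transformers Trainer API, these hyperparameters map onto `Seq2SeqTrainingArguments`. A hypothetical reconstruction (the `output_dir` is a placeholder, and other settings such as mixed precision or gradient accumulation are unknown):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-finetune",
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1000,
)
```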

Training results

Training Loss | Epoch | Step | Validation Loss | WER
3.9796 0.2778 10 3.9148 37.1304
3.8883 0.5556 20 3.8228 36.4457
3.8393 0.8333 30 3.6743 39.0912
3.5454 1.1111 40 3.4770 43.2306
3.3763 1.3889 50 3.2317 35.9477
3.1017 1.6667 60 2.9296 42.9816
2.7298 1.9444 70 2.5452 33.4578
2.2923 2.2222 80 2.0397 33.5512
1.739 2.5 90 1.4515 34.2670
1.148 2.7778 100 0.9843 34.1737
0.846 3.0556 110 0.7598 30.2210
0.7269 3.3333 120 0.6819 27.3265
0.6914 3.6111 130 0.6347 26.2372
0.6225 3.8889 140 0.6027 24.9611
0.5939 4.1667 150 0.5781 24.5565
0.5677 4.4444 160 0.5582 23.2493
0.5611 4.7222 170 0.5420 22.6268
0.5285 5.0 180 0.5271 22.0044
0.4771 5.2778 190 0.5158 21.8176
0.5003 5.5556 200 0.5068 21.7554
0.4232 5.8333 210 0.4965 21.4441
0.4232 6.1111 220 0.4888 20.6038
0.3522 6.3889 230 0.4824 20.2303
0.405 6.6667 240 0.4753 20.0436
0.4333 6.9444 250 0.4694 23.0937
0.3314 7.2222 260 0.4623 19.9813
0.3282 7.5 270 0.4581 19.8257
0.3463 7.7778 280 0.4567 19.6390
0.3215 8.0556 290 0.4518 19.0476
0.3049 8.3333 300 0.4496 18.7986
0.2792 8.6111 310 0.4463 18.8920
0.3031 8.8889 320 0.4426 18.7053
0.2353 9.1667 330 0.4451 18.5496
0.2618 9.4444 340 0.4433 18.8920
0.2405 9.7222 350 0.4449 18.4563
0.2609 10.0 360 0.4408 18.1450
0.1956 10.2778 370 0.4374 18.3940
0.2079 10.5556 380 0.4382 18.2695
0.2149 10.8333 390 0.4383 18.3940
0.1791 11.1111 400 0.4400 18.4563
0.1778 11.3889 410 0.4401 18.5185
0.1571 11.6667 420 0.4390 18.5185
0.1602 11.9444 430 0.4376 18.0205
0.1168 12.2222 440 0.4418 18.4251
0.1353 12.5 450 0.4418 18.6430
0.1156 12.7778 460 0.4433 18.3318
0.1148 13.0556 470 0.4422 17.8960
0.0895 13.3333 480 0.4478 18.1139
0.0903 13.6111 490 0.4492 18.7364
0.0981 13.8889 500 0.4522 19.1721
0.0669 14.1667 510 0.4570 19.0787
0.0723 14.4444 520 0.4612 18.5808
0.0677 14.7222 530 0.4603 18.9231
0.066 15.0 540 0.4600 19.1410
0.0393 15.2778 550 0.4696 18.6741
0.052 15.5556 560 0.4730 19.3589
0.0414 15.8333 570 0.4728 18.8609
0.0486 16.1111 580 0.4756 19.3589
0.0329 16.3889 590 0.4822 19.4522
0.0285 16.6667 600 0.4864 19.1410
0.0291 16.9444 610 0.4814 19.6078
0.0234 17.2222 620 0.4861 19.7012
0.0197 17.5 630 0.4928 19.7323
0.0191 17.7778 640 0.4927 19.7323
0.0187 18.0556 650 0.4914 19.7012
0.0167 18.3333 660 0.4961 19.8568
0.0152 18.6111 670 0.4998 19.9191
0.0125 18.8889 680 0.4983 20.1369
0.0116 19.1667 690 0.5016 19.7946
0.0107 19.4444 700 0.5022 19.7012
0.0126 19.7222 710 0.5032 19.9191
0.0112 20.0 720 0.5042 20.1369
0.0102 20.2778 730 0.5054 20.0436
0.0097 20.5556 740 0.5089 19.7946
0.01 20.8333 750 0.5074 19.9502
0.0092 21.1111 760 0.5099 19.9502
0.009 21.3889 770 0.5100 20.0436
0.008 21.6667 780 0.5119 19.9813
0.0087 21.9444 790 0.5125 19.9502
0.0083 22.2222 800 0.5111 20.0436
0.0083 22.5 810 0.5119 19.9813
0.0076 22.7778 820 0.5127 20.0124
0.0074 23.0556 830 0.5150 20.0124
0.0076 23.3333 840 0.5150 19.8568
0.007 23.6111 850 0.5162 20.0124
0.0074 23.8889 860 0.5165 19.9813
0.0066 24.1667 870 0.5157 19.7946
0.0068 24.4444 880 0.5163 19.9502
0.0065 24.7222 890 0.5173 19.8257
0.0074 25.0 900 0.5182 19.9813
0.0067 25.2778 910 0.5184 19.8568
0.006 25.5556 920 0.5186 20.0124
0.0063 25.8333 930 0.5187 19.7946
0.0071 26.1111 940 0.5190 19.8257
0.0058 26.3889 950 0.5193 19.8880
0.0063 26.6667 960 0.5195 19.8880
0.0059 26.9444 970 0.5195 19.8880
0.0056 27.2222 980 0.5195 19.8880
0.006 27.5 990 0.5196 19.8880
0.0068 27.7778 1000 0.5196 19.8880
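With `lr_scheduler_type: linear` and the step counts above, the learning rate ramps linearly from 0 to 1e-05 over the first 500 steps, then decays linearly back to 0 by step 1000. A sketch of that schedule (assumed to follow the standard linear-with-warmup shape):

```python
def linear_schedule_lr(step, base_lr=1e-5, warmup_steps=500, total_steps=1000):
    """Learning rate at a given step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0, total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_lr(250))   # 5e-06 (halfway through warmup)
print(linear_schedule_lr(500))   # 1e-05 (peak)
print(linear_schedule_lr(1000))  # 0.0
```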

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1.dev0
  • Tokenizers 0.19.1
Safetensors

  • Model size: 37.8M params
  • Tensor type: F32