
Whisper tiny Book

This model is a fine-tuned version of openai/whisper-tiny.en on the Book dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0251
  • Wer: 1.9268 (%)
  • Cer: 0.3553 (%)
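The Wer and Cer figures above are percentages: word error rate counts word-level edits against the reference transcript, and character error rate does the same at the character level. As a rough illustration only (not the evaluation script used for this card), a minimal pure-Python sketch:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single rolling row)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def error_rate(references, hypotheses, tokenize=str.split):
    """WER with tokenize=str.split; CER with tokenize=list. Returns percent."""
    errors = total = 0
    for ref, hyp in zip(references, hypotheses):
        ref_t, hyp_t = tokenize(ref), tokenize(hyp)
        errors += edit_distance(ref_t, hyp_t)
        total += len(ref_t)
    return 100.0 * errors / total
```

For example, `error_rate(["the cat sat"], ["the hat sat"])` is one substitution over three reference words, i.e. about 33.3%.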

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 500
  • mixed_precision_training: Native AMP
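With this configuration, the learning rate ramps linearly from 0 to the peak over the 100 warmup steps, then decays linearly back to 0 at step 500. A small sketch of that rule, using the values listed above (this mirrors the behavior of a standard linear-warmup scheduler, not the exact Trainer internals):

```python
def linear_warmup_lr(step, peak_lr=1e-5, warmup_steps=100, total_steps=500):
    """Linear warmup to peak_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

So the peak learning rate of 1e-05 is reached exactly at step 100 and the rate is halved again by step 300.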

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     | Cer     |
|:--------------|:-------|:-----|:----------------|:--------|:--------|
| 3.9929        | 0.0709 | 10   | 3.3716          | 29.5761 | 10.8629 |
| 3.9386        | 0.1418 | 20   | 3.0177          | 28.4200 | 10.2030 |
| 3.375         | 0.2128 | 30   | 2.6123          | 27.7457 | 9.8477  |
| 3.0007        | 0.2837 | 40   | 2.2620          | 26.1079 | 9.0186  |
| 2.5946        | 0.3546 | 50   | 1.9080          | 23.6031 | 7.8849  |
| 2.1729        | 0.4255 | 60   | 1.5682          | 20.2312 | 6.7682  |
| 1.8715        | 0.4965 | 70   | 1.3811          | 18.5934 | 6.0914  |
| 1.6468        | 0.5674 | 80   | 1.2722          | 17.2447 | 5.7868  |
| 1.5396        | 0.6383 | 90   | 1.1741          | 15.7033 | 5.0254  |
| 1.4669        | 0.7092 | 100  | 1.0837          | 14.5472 | 4.6024  |
| 1.3851        | 0.7801 | 110  | 0.9828          | 11.4644 | 3.7394  |
| 1.2107        | 0.8511 | 120  | 0.9004          | 9.9229  | 3.0457  |
| 1.1375        | 0.9220 | 130  | 0.8206          | 9.7303  | 3.2318  |
| 0.9685        | 0.9929 | 140  | 0.7487          | 8.9595  | 2.7750  |
| 0.8194        | 1.0638 | 150  | 0.6772          | 8.2852  | 2.5719  |
| 0.7976        | 1.1348 | 160  | 0.6092          | 7.8998  | 2.5550  |
| 0.7104        | 1.2057 | 170  | 0.5373          | 7.5145  | 2.2335  |
| 0.6255        | 1.2766 | 180  | 0.4742          | 7.0328  | 2.0981  |
| 0.5621        | 1.3475 | 190  | 0.4145          | 6.5511  | 1.9966  |
| 0.5151        | 1.4184 | 200  | 0.3615          | 5.3950  | 1.5905  |
| 0.4697        | 1.4894 | 210  | 0.3107          | 5.3950  | 1.4890  |
| 0.4032        | 1.5603 | 220  | 0.2671          | 5.2023  | 1.4382  |
| 0.3962        | 1.6312 | 230  | 0.2254          | 4.6243  | 1.2183  |
| 0.3278        | 1.7021 | 240  | 0.1901          | 4.1426  | 1.1337  |
| 0.2661        | 1.7730 | 250  | 0.1614          | 3.9499  | 1.0660  |
| 0.2927        | 1.8440 | 260  | 0.1362          | 3.6609  | 0.9475  |
| 0.2197        | 1.9149 | 270  | 0.1165          | 3.5645  | 0.9306  |
| 0.1859        | 1.9858 | 280  | 0.0991          | 3.4682  | 0.9137  |
| 0.1446        | 2.0567 | 290  | 0.0885          | 3.6609  | 0.8799  |
| 0.13          | 2.1277 | 300  | 0.0770          | 3.8536  | 0.9475  |
| 0.1376        | 2.1986 | 310  | 0.0675          | 3.5645  | 0.8629  |
| 0.1107        | 2.2695 | 320  | 0.0610          | 3.2755  | 0.8291  |
| 0.1187        | 2.3404 | 330  | 0.0568          | 2.9865  | 0.7614  |
| 0.1173        | 2.4113 | 340  | 0.0521          | 2.8902  | 0.7445  |
| 0.0859        | 2.4823 | 350  | 0.0476          | 2.8902  | 0.7445  |
| 0.0933        | 2.5532 | 360  | 0.0440          | 2.4085  | 0.5922  |
| 0.0787        | 2.6241 | 370  | 0.0397          | 2.0231  | 0.4399  |
| 0.0901        | 2.6950 | 380  | 0.0363          | 1.7341  | 0.3215  |
| 0.0833        | 2.7660 | 390  | 0.0337          | 1.9268  | 0.4230  |
| 0.0751        | 2.8369 | 400  | 0.0325          | 2.0231  | 0.3723  |
| 0.071         | 2.9078 | 410  | 0.0311          | 2.0231  | 0.3723  |
| 0.068         | 2.9787 | 420  | 0.0298          | 1.9268  | 0.3553  |
| 0.0724        | 3.0496 | 430  | 0.0286          | 2.1195  | 0.4569  |
| 0.0484        | 3.1206 | 440  | 0.0277          | 2.1195  | 0.4569  |
| 0.0422        | 3.1915 | 450  | 0.0272          | 1.9268  | 0.3553  |
| 0.0536        | 3.2624 | 460  | 0.0266          | 2.0231  | 0.4399  |
| 0.0443        | 3.3333 | 470  | 0.0262          | 2.0231  | 0.4399  |
| 0.0465        | 3.4043 | 480  | 0.0257          | 1.9268  | 0.3553  |
| 0.0555        | 3.4752 | 490  | 0.0253          | 1.9268  | 0.3553  |
| 0.0467        | 3.5461 | 500  | 0.0251          | 1.9268  | 0.3553  |
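The Epoch column implies roughly 141 optimizer steps per epoch (step 10 lands at epoch 0.0709). Assuming train_batch_size of 16 with no gradient accumulation (an assumption; accumulation is not stated above), that would correspond to a training split of about 2,256 examples:

```python
# Step 10 corresponds to epoch 0.0709 in the results table.
steps_per_epoch = round(10 / 0.0709)   # ~141 optimizer steps per epoch
examples = steps_per_epoch * 16        # assumes batch size 16, no accumulation
print(steps_per_epoch, examples)       # 141 optimizer steps, ~2256 examples
```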

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
Model size
  • 37.8M params (F32, Safetensors)

Model tree

  • kuan2/whisper-tiny-en-book, fine-tuned from openai/whisper-tiny.en