la-whisper-small-covost2

This model is a fine-tuned version of openai/whisper-small, presumably trained on the CoVoST 2 dataset given the model name (the dataset is not specified in the card). It achieves the following results on the evaluation set, with a minimal usage sketch after the list:

  • Loss: 1.5845
  • SacreBLEU: 2090.6716
  • WER: 73.0006
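
The card was generated without usage examples, so the following is a minimal inference sketch. It assumes the checkpoint is published on the Hub under the hypothetical repo id `your-username/la-whisper-small-covost2` and that a local audio file `sample.wav` exists; adjust both to your setup.

```python
# Minimal inference sketch (the repo id below is hypothetical).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/la-whisper-small-covost2",  # hypothetical repo id
)

# Given a file path, the pipeline decodes and resamples the audio via ffmpeg
# to Whisper's native 16 kHz before feature extraction.
result = asr("sample.wav")  # placeholder audio path
print(result["text"])
```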

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the Seq2SeqTrainingArguments sketch after this list):

  • learning_rate: 3e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 2000
  • mixed_precision_training: Native AMP
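
As a rough guide, these settings map onto `transformers` training arguments as sketched below. The `output_dir` and the `fp16` flag for Native AMP are assumptions, not taken from the card; the Adam betas and epsilon listed above are the library defaults.

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./la-whisper-small-covost2",  # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 2 * 16 = 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=2000,
    fp16=True,  # assumed flag for "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```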

Training results

| Training Loss | Epoch | Step | Validation Loss | SacreBLEU | WER      |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|
| 2.6943        | 0.11  | 50   | 2.1667          | 118.2640  | 686.6897 |
| 1.5505        | 0.23  | 100  | 1.6016          | 259.9307  | 165.6116 |
| 1.4093        | 0.34  | 150  | 1.5858          | 496.7335  | 197.0106 |
| 1.3209        | 0.45  | 200  | 1.5648          | 724.2491  | 121.8795 |
| 1.2941        | 0.56  | 250  | 1.5596          | 820.1241  | 161.7159 |
| 1.2078        | 0.68  | 300  | 1.5074          | 1022.0043 | 140.3875 |
| 1.1532        | 0.79  | 350  | 1.4972          | 174.8350  | 610.3716 |
| 1.0167        | 0.9   | 400  | 1.4551          | 1904.0921 | 82.7635  |
| 0.8842        | 1.01  | 450  | 1.4296          | 1883.6113 | 81.3906  |
| 0.5619        | 1.13  | 500  | 1.4333          | 1817.9440 | 84.9312  |
| 0.5523        | 1.24  | 550  | 1.4237          | 1517.1744 | 104.0918 |
| 0.4881        | 1.35  | 600  | 1.4413          | 1650.1807 | 97.2067  |
| 0.471         | 1.46  | 650  | 1.3961          | 1885.0014 | 82.2664  |
| 0.4412        | 1.58  | 700  | 1.3986          | 2145.9786 | 72.0469  |
| 0.4625        | 1.69  | 750  | 1.3885          | 1837.7812 | 87.4472  |
| 0.4195        | 1.8   | 800  | 1.4095          | 1909.2655 | 78.6920  |
| 0.4532        | 1.91  | 850  | 1.3891          | 1925.2238 | 82.0162  |
| 0.3201        | 2.03  | 900  | 1.4415          | 1919.2020 | 80.4437  |
| 0.1955        | 2.14  | 950  | 1.4410          | 1540.5046 | 101.0145 |
| 0.2111        | 2.25  | 1000 | 1.4345          | 1735.9648 | 90.9269  |
| 0.1981        | 2.36  | 1050 | 1.4597          | 1730.3250 | 91.5356  |
| 0.2052        | 2.48  | 1100 | 1.4439          | 2143.3630 | 72.4933  |
| 0.1886        | 2.59  | 1150 | 1.4702          | 1965.5005 | 77.7519  |
| 0.1918        | 2.7   | 1200 | 1.4518          | 2057.4517 | 75.4929  |
| 0.1755        | 2.81  | 1250 | 1.4788          | 1954.2237 | 78.2997  |
| 0.1769        | 2.93  | 1300 | 1.4588          | 1774.1464 | 91.9279  |
| 0.1104        | 3.04  | 1350 | 1.5281          | 1838.1999 | 84.7317  |
| 0.0718        | 3.15  | 1400 | 1.5133          | 2058.0955 | 76.0306  |
| 0.0855        | 3.26  | 1450 | 1.5271          | 1720.1072 | 89.1346  |
| 0.0717        | 3.38  | 1500 | 1.5289          | 2007.5163 | 75.9291  |
| 0.0707        | 3.49  | 1550 | 1.5366          | 2149.6478 | 71.9523  |
| 0.0704        | 3.6   | 1600 | 1.5355          | 2179.5147 | 69.8759  |
| 0.0676        | 3.71  | 1650 | 1.5393          | 2086.2197 | 73.2474  |
| 0.0748        | 3.83  | 1700 | 1.5398          | 1879.1610 | 80.7277  |
| 0.0695        | 3.94  | 1750 | 1.5351          | 2001.8476 | 78.8306  |
| 0.033         | 4.05  | 1800 | 1.5807          | 1892.0435 | 82.2630  |
| 0.0317        | 4.16  | 1850 | 1.5843          | 1967.1172 | 78.7765  |
| 0.0302        | 4.28  | 1900 | 1.5848          | 1969.6753 | 79.1248  |
| 0.0337        | 4.39  | 1950 | 1.5808          | 2062.9546 | 74.1537  |
| 0.0306        | 4.5   | 2000 | 1.5845          | 2090.6716 | 73.0006  |
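
Note that the SacreBLEU figures fall well outside the metric's usual 0-100 range, which suggests an extra scaling factor in the logged values; the WER column likewise appears to be reported as a percentage. The card does not include the evaluation code, but a plausible setup with the `evaluate` library is sketched below, with placeholder prediction/reference lists.

```python
# Hedged sketch of the metric computation; the card's actual evaluation code
# is not published, and the lists below are placeholders.
import evaluate

sacrebleu = evaluate.load("sacrebleu")
wer_metric = evaluate.load("wer")

predictions = ["a decoded model output"]    # placeholder hypothesis
references = ["the reference translation"]  # placeholder reference

# SacreBLEU expects a list of reference lists (one or more per prediction).
bleu = sacrebleu.compute(
    predictions=predictions,
    references=[[r] for r in references],
)["score"]

# `evaluate`'s WER returns a fraction; the table appears to report WER * 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)

print(f"SacreBLEU: {bleu:.4f}  WER: {wer:.4f}")
```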

Framework versions

  • Transformers 4.28.0.dev0
  • Pytorch 2.0.0
  • Datasets 2.10.1
  • Tokenizers 0.13.2