Whisper Medium GA-EN Speech Translation

This model is a fine-tuned version of openai/whisper-medium on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5839
  • Bleu: 26.1
  • Chrf: 41.83
  • Wer: 74.6511
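To make the reported metrics concrete: BLEU scores translation quality by overlap of word n-grams with the reference, scaled 0–100. Cards like this one typically compute it with sacrebleu; the following is only a minimal single-sentence sketch of the idea in plain Python, not the exact scorer used here.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hypothesis, reference, max_n=4):
    """Simplified BLEU for one sentence pair: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty, scaled to 0-100.
    Illustrative only; real evaluation uses sacrebleu's corpus-level BLEU."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_grams, ref_grams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_grams & ref_grams).values())  # clipped matches
        precisions.append(overlap / max(sum(hyp_grams.values()), 1))
    if min(precisions) == 0:
        return 0.0  # no smoothing: any empty precision zeroes the score
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return 100 * bp * geo_mean
```

chrF is the analogous F-score over character n-grams, which is more forgiving of morphological variation — relevant for Irish-to-English translation.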

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 0.03
  • training_steps: 4000
  • mixed_precision_training: Native AMP
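The warmup value 0.03 reads like a ratio rather than a step count (0.03 × 4000 = 120 steps). Under that assumption, the linear schedule above behaves like this sketch, which mirrors the shape of transformers' linear schedule with warmup:

```python
def linear_schedule_lr(step, base_lr=1e-4, total_steps=4000, warmup_ratio=0.03):
    """Linear warmup from 0 to base_lr, then linear decay back to 0.
    Assumes the card's 0.03 warmup value is a ratio (i.e. 120 of 4000 steps);
    if it is literally 0.03 steps, warmup is effectively skipped."""
    warmup_steps = int(total_steps * warmup_ratio)  # 120 under this assumption
    if step < warmup_steps:
        return base_lr * step / max(warmup_steps, 1)
    remaining = (total_steps - step) / max(total_steps - warmup_steps, 1)
    return base_lr * max(0.0, remaining)
```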

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu  | Chrf  | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:-----:|:--------:|
| 3.5292        | 0.03  | 100  | 3.0255          | 0.07  | 10.93 | 296.9383 |
| 3.2384        | 0.05  | 200  | 2.9567          | 0.63  | 9.99  | 99.5047  |
| 2.9326        | 0.08  | 300  | 2.7918          | 1.29  | 14.61 | 159.8829 |
| 2.8396        | 0.11  | 400  | 2.7288          | 3.7   | 17.65 | 120.2611 |
| 3.3706        | 0.13  | 500  | 2.5552          | 6.06  | 18.15 | 110.8510 |
| 2.2285        | 0.16  | 600  | 2.6213          | 2.9   | 17.34 | 180.4593 |
| 2.452         | 0.19  | 700  | 2.3512          | 7.91  | 22.0  | 101.3958 |
| 2.3773        | 0.22  | 800  | 2.2666          | 9.88  | 24.57 | 105.7632 |
| 2.4751        | 0.24  | 900  | 2.1982          | 7.46  | 21.25 | 138.2710 |
| 1.9366        | 0.27  | 1000 | 2.1982          | 7.76  | 25.49 | 133.8136 |
| 1.9036        | 0.3   | 1100 | 2.0519          | 12.02 | 28.01 | 95.8127  |
| 2.072         | 0.32  | 1200 | 2.0200          | 12.5  | 29.48 | 105.4480 |
| 1.5439        | 0.35  | 1300 | 1.9203          | 15.36 | 31.45 | 93.3363  |
| 1.8711        | 0.38  | 1400 | 1.8251          | 9.7   | 30.29 | 144.3044 |
| 1.6849        | 0.4   | 1500 | 1.7918          | 16.5  | 33.04 | 92.7510  |
| 1.3787        | 0.43  | 1600 | 1.7533          | 17.32 | 34.29 | 87.4381  |
| 1.4595        | 0.46  | 1700 | 1.6776          | 19.31 | 36.88 | 86.5376  |
| 1.3406        | 0.49  | 1800 | 1.6488          | 19.96 | 36.71 | 81.6299  |
| 1.5607        | 0.51  | 1900 | 1.6280          | 21.46 | 38.7  | 78.3881  |
| 1.3687        | 0.54  | 2000 | 1.6109          | 19.94 | 37.49 | 84.5565  |
| 1.3761        | 0.57  | 2100 | 1.7206          | 19.55 | 35.47 | 83.8361  |
| 1.4642        | 0.59  | 2200 | 1.7014          | 20.57 | 37.87 | 82.9356  |
| 1.2263        | 0.62  | 2300 | 1.6773          | 20.35 | 36.62 | 82.1252  |
| 1.2511        | 0.65  | 2400 | 1.7131          | 15.65 | 36.0  | 111.1661 |
| 1.0471        | 0.67  | 2500 | 1.6460          | 20.56 | 39.75 | 83.5209  |
| 1.2797        | 0.7   | 2600 | 1.6122          | 19.69 | 39.6  | 90.8600  |
| 1.1708        | 0.73  | 2700 | 1.5805          | 22.3  | 41.05 | 79.4237  |
| 1.0686        | 0.76  | 2800 | 1.5689          | 22.72 | 40.5  | 78.2981  |
| 0.933         | 0.78  | 2900 | 1.5499          | 20.89 | 41.06 | 90.4548  |
| 0.9654        | 0.81  | 3000 | 1.5438          | 21.94 | 40.62 | 81.4048  |
| 1.1812        | 0.84  | 3100 | 1.5872          | 20.99 | 39.88 | 88.0234  |
| 1.0603        | 0.86  | 3200 | 1.6039          | 18.9  | 38.69 | 96.3530  |
| 0.964         | 0.89  | 3300 | 1.5899          | 23.92 | 41.25 | 76.0018  |
| 0.8625        | 0.92  | 3400 | 1.6323          | 19.96 | 39.15 | 87.7082  |
| 0.9115        | 0.94  | 3500 | 1.5222          | 23.64 | 41.67 | 83.1157  |
| 0.9932        | 0.97  | 3600 | 1.5477          | 24.09 | 41.78 | 81.8550  |
| 1.1792        | 1.0   | 3700 | 1.5282          | 24.85 | 41.57 | 78.7033  |
| 0.4436        | 1.02  | 3800 | 1.5823          | 24.1  | 40.77 | 80.3692  |
| 0.3277        | 1.05  | 3900 | 1.5852          | 24.6  | 41.34 | 78.5682  |
| 0.372         | 1.08  | 4000 | 1.5839          | 26.1  | 41.83 | 74.6511  |
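Note that several WER values in the table exceed 100 (e.g. 296.94 at step 100): WER divides substitutions + insertions + deletions by the number of reference words, so a hypothesis much longer than the reference can accumulate more insertions than the reference has words. A minimal stdlib sketch of the metric (real evaluations typically use a library such as jiwer):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + insertions + deletions) divided by
    the number of reference words, as a percentage. Can exceed 100 when the
    hypothesis is much longer than the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance, computed one row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return 100 * prev[-1] / len(ref)
```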

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2