
Whisper Small GA-EN Speech Translation

This model is a fine-tuned version of openai/whisper-small, trained on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia datasets plus augmented data. It achieves the following results on the evaluation set (a short usage example follows the metrics):

  • Loss: 1.3641
  • BLEU: 28.44
  • ChrF: 43.55
  • WER: 72.6249
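
As a quick illustration, the model can be loaded through the transformers pipeline API. This is a minimal sketch, not the authors' inference setup: the audio file name is a placeholder, and the input is assumed to be a 16 kHz Irish-language recording.

```python
# Minimal usage sketch (assumes transformers and torch are installed;
# "sample_ga.wav" is a placeholder for a 16 kHz Irish-language recording).
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-small-ga2en-v5.4-r",
)

result = pipe("sample_ga.wav")
print(result["text"])  # English translation of the spoken Irish input
```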

Model description

This model translates Irish (GA) speech into English (EN) text. It is a fine-tuned version of openai/whisper-small, a 242M-parameter encoder-decoder speech model.

Intended uses & limitations

The model is intended for Irish-to-English speech translation. Its limitations are not yet documented.

Training and evaluation data

The model was trained on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia datasets, plus augmented data (see above).

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch in code follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • warmup_steps: 0
  • training_steps: 3000
  • mixed_precision_training: Native AMP
  • generation_max_length: 128
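
For reference, here is a sketch of how the settings above map onto Seq2SeqTrainingArguments from transformers. The original training script is not part of this card, so the output_dir value is a placeholder; the listed Adam betas and epsilon match the library's defaults.

```python
# Sketch of the hyperparameters above as Seq2SeqTrainingArguments
# (output_dir is a placeholder; the actual training script is not included in this card).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-ga2en",  # hypothetical output directory
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",               # Adam-style optimizer; betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=0,
    max_steps=3000,
    fp16=True,                         # Native AMP mixed-precision training
    generation_max_length=128,
    predict_with_generate=True,
)
```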

Training results

| Training Loss | Epoch  | Step | Validation Loss | BLEU  | ChrF  | WER      |
|:-------------:|:------:|:----:|:---------------:|:-----:|:-----:|:--------:|
| 2.3595        | 0.0438 | 100  | 1.7944          | 9.69  | 26.37 | 114.4529 |
| 1.9008        | 0.0876 | 200  | 1.5391          | 14.89 | 32.44 | 93.6065  |
| 1.535         | 0.1313 | 300  | 1.3972          | 18.24 | 33.57 | 81.9901  |
| 1.3307        | 0.1751 | 400  | 1.3684          | 21.34 | 37.37 | 72.8050  |
| 1.1263        | 0.2189 | 500  | 1.3284          | 19.33 | 39.83 | 91.8955  |
| 0.9805        | 0.2627 | 600  | 1.3301          | 23.67 | 38.68 | 78.3881  |
| 0.8989        | 0.3065 | 700  | 1.3123          | 20.32 | 36.94 | 76.3170  |
| 0.7557        | 0.3503 | 800  | 1.2717          | 25.74 | 40.16 | 72.4448  |
| 0.7216        | 0.3940 | 900  | 1.3090          | 22.34 | 37.79 | 78.9284  |
| 0.6131        | 0.4378 | 1000 | 1.2566          | 24.36 | 41.49 | 74.5160  |
| 0.5032        | 0.4816 | 1100 | 1.2742          | 21.69 | 41.12 | 83.3859  |
| 0.4567        | 0.5254 | 1200 | 1.2893          | 24.33 | 40.05 | 70.8690  |
| 0.3968        | 0.5692 | 1300 | 1.3000          | 26.97 | 41.45 | 69.6083  |
| 0.3353        | 0.6130 | 1400 | 1.2784          | 27.51 | 43.97 | 63.9352  |
| 0.2826        | 0.6567 | 1500 | 1.3165          | 24.36 | 39.83 | 70.6439  |
| 0.2643        | 0.7005 | 1600 | 1.3317          | 24.98 | 40.01 | 68.6628  |
| 0.2047        | 0.7443 | 1700 | 1.2905          | 28.01 | 42.72 | 65.8262  |
| 0.1946        | 0.7881 | 1800 | 1.2820          | 26.17 | 42.46 | 64.9257  |
| 0.1588        | 0.8319 | 1900 | 1.3172          | 26.9  | 43.02 | 63.5299  |
| 0.1322        | 0.8757 | 2000 | 1.3248          | 27.78 | 43.53 | 63.8001  |
| 0.1134        | 0.9194 | 2100 | 1.3198          | 28.98 | 45.27 | 72.7600  |
| 0.1031        | 0.9632 | 2200 | 1.3502          | 29.18 | 44.77 | 68.3476  |
| 0.0518        | 1.0070 | 2300 | 1.3433          | 28.6  | 42.96 | 69.0230  |
| 0.0481        | 1.0508 | 2400 | 1.3715          | 29.01 | 44.46 | 69.6983  |
| 0.0367        | 1.0946 | 2500 | 1.3696          | 26.94 | 42.39 | 73.6605  |
| 0.0309        | 1.1384 | 2600 | 1.3665          | 28.12 | 43.32 | 70.3737  |
| 0.0302        | 1.1821 | 2700 | 1.3836          | 29.6  | 44.56 | 67.2220  |
| 0.0302        | 1.2259 | 2800 | 1.3667          | 29.0  | 44.33 | 67.2220  |
| 0.0252        | 1.2697 | 2900 | 1.3633          | 29.07 | 44.09 | 70.6889  |
| 0.0257        | 1.3135 | 3000 | 1.3641          | 28.44 | 43.55 | 72.6249  |
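
To reproduce the BLEU, ChrF, and WER numbers on your own predictions, the evaluate library can be used as sketched below. This assumes the evaluate, sacrebleu, and jiwer packages are installed; the example strings are illustrative only and are not taken from the evaluation set.

```python
# Metric computation sketch with the evaluate library
# (pip install evaluate sacrebleu jiwer; example strings are illustrative only).
import evaluate

bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")
wer = evaluate.load("wer")

predictions = ["the cat sat on the mat"]
references = ["the cat is sitting on the mat"]

# sacrebleu and chrf expect a list of reference lists per prediction
print(bleu.compute(predictions=predictions, references=[[r] for r in references])["score"])
print(chrf.compute(predictions=predictions, references=[[r] for r in references])["score"])
# wer returns a fraction; multiply by 100 to match the percentages reported above
print(wer.compute(predictions=predictions, references=references) * 100)
```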

Framework versions

  • Transformers 4.40.2
  • PyTorch 2.2.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
