---
language:
- fr
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-small
  results: []
---

# openai/whisper-small

This model is a fine-tuned version of openai/whisper-small on the pphuc25/FrenchMed dataset. It achieves the following results on the evaluation set:

- Loss: 1.4691
- Wer: 40.9824
- Cer: 28.8290

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
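The linear schedule warms the learning rate up from 0 to 0.0001 over the first 100 steps, then decays it linearly to 0 by the end of training (4300 steps here, per the results table: 215 steps per epoch over 20 epochs). A minimal sketch of that shape, mirroring the behavior of the Transformers linear scheduler:

```python
def linear_lr(step, base_lr=1e-4, warmup_steps=100, total_steps=4300):
    # Linear warmup from 0 to base_lr over warmup_steps,
    # then linear decay from base_lr back to 0 at total_steps.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

So the peak learning rate of 1e-4 is reached at step 100 and the rate falls below half its peak shortly after step 2200, roughly epoch 10.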

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.054         | 1.0   | 215  | 1.0167          | 52.2727 | 33.9067 |
| 0.6007        | 2.0   | 430  | 1.0289          | 57.0381 | 35.2828 |
| 0.3026        | 3.0   | 645  | 1.1406          | 40.3226 | 27.7418 |
| 0.1617        | 4.0   | 860  | 1.2025          | 40.0293 | 25.8291 |
| 0.1133        | 5.0   | 1075 | 1.2893          | 42.0088 | 28.0583 |
| 0.0903        | 6.0   | 1290 | 1.2941          | 42.0088 | 29.6821 |
| 0.0821        | 7.0   | 1505 | 1.3255          | 41.5689 | 28.0033 |
| 0.0511        | 8.0   | 1720 | 1.4011          | 43.5484 | 28.2923 |
| 0.0391        | 9.0   | 1935 | 1.4531          | 39.3695 | 27.4804 |
| 0.0283        | 10.0  | 2150 | 1.4935          | 43.0352 | 29.9436 |
| 0.0181        | 11.0  | 2365 | 1.4941          | 43.0352 | 29.1042 |
| 0.0152        | 12.0  | 2580 | 1.4768          | 43.3284 | 31.3609 |
| 0.0191        | 13.0  | 2795 | 1.4495          | 43.9150 | 30.8380 |
| 0.0055        | 14.0  | 3010 | 1.4699          | 42.1554 | 29.2418 |
| 0.0032        | 15.0  | 3225 | 1.4493          | 39.8094 | 28.0859 |
| 0.0041        | 16.0  | 3440 | 1.4503          | 40.8358 | 28.7739 |
| 0.0018        | 17.0  | 3655 | 1.4598          | 41.6422 | 29.3931 |
| 0.0015        | 18.0  | 3870 | 1.4654          | 41.2023 | 28.7051 |
| 0.0002        | 19.0  | 4085 | 1.4676          | 40.8358 | 28.7051 |
| 0.0005        | 20.0  | 4300 | 1.4691          | 40.9824 | 28.8290 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1