---
language:
  - fr
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-small
    results: []
---

# openai/whisper-small

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the pphuc25/FrenchMed dataset. It achieves the following results on the evaluation set:

- Loss: 1.2159
- Wer: 71.8475
- Cer: 60.7403
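
The Wer and Cer figures above are word- and character-level edit distances normalized by reference length, reported as percentages. A minimal sketch of how they are computed (pure Python; the function names here are illustrative, not the card's actual evaluation code, which typically uses a library such as `jiwer` or `evaluate`):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (rolling 1-D table)."""
    # dp[j] holds the distance between the processed prefix of ref and hyp[:j]
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (free when tokens match)
            )
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count, in %."""
    ref = reference.split()
    return 100.0 * edit_distance(ref, hypothesis.split()) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: char-level edit distance / reference length, in %."""
    return 100.0 * edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Note that WER can exceed 100% (as in several rows of the training table below) when the hypothesis contains more errors than the reference has words.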

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
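
The linear schedule with warmup ramps the learning rate from 0 to 1e-4 over the first 100 steps, then decays it linearly back to 0 at the last training step (4300 here, i.e. 20 epochs × 215 steps per epoch, as seen in the results table). A minimal sketch of that shape (the function name and defaults are illustrative; `transformers` provides this as `get_linear_schedule_with_warmup`):

```python
def linear_schedule_lr(step, base_lr=1e-4, warmup_steps=100, total_steps=4300):
    """Learning rate at a given step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        # ramp from 0 at step 0 to base_lr at the end of warmup
        return base_lr * step / warmup_steps
    # decay from base_lr at the end of warmup to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```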

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      | Cer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.8706        | 1.0   | 215  | 0.8600          | 178.2258 | 102.3256 |
| 0.6025        | 2.0   | 430  | 0.8573          | 114.1496 | 71.0059  |
| 0.3962        | 3.0   | 645  | 0.8968          | 88.9296  | 67.1529  |
| 0.2239        | 4.0   | 860  | 0.9412          | 96.1877  | 60.4651  |
| 7.0487        | 5.0   | 1075 | 6.4281          | 779.6188 | 546.2915 |
| 5.2135        | 6.0   | 1290 | 4.5960          | 95.0147  | 76.2626  |
| 0.2569        | 7.0   | 1505 | 1.0109          | 228.7390 | 163.0246 |
| 0.2082        | 8.0   | 1720 | 1.0642          | 215.1026 | 163.8090 |
| 0.1511        | 9.0   | 1935 | 1.0990          | 162.3167 | 156.4882 |
| 0.0247        | 10.0  | 2150 | 1.1002          | 160.6305 | 182.1660 |
| 0.0155        | 11.0  | 2365 | 1.1253          | 59.9707  | 62.7081  |
| 0.007         | 12.0  | 2580 | 1.1525          | 134.8240 | 131.4298 |
| 0.0042        | 13.0  | 2795 | 1.1656          | 164.0762 | 154.8507 |
| 0.0024        | 14.0  | 3010 | 1.1838          | 118.9883 | 105.3530 |
| 0.0027        | 15.0  | 3225 | 1.1876          | 87.5367  | 71.3362  |
| 0.0015        | 16.0  | 3440 | 1.1978          | 57.2581  | 44.8190  |
| 0.0017        | 17.0  | 3655 | 1.1999          | 72.2874  | 60.8504  |
| 0.0012        | 18.0  | 3870 | 1.2119          | 71.8475  | 60.7541  |
| 0.0011        | 19.0  | 4085 | 1.2144          | 71.8475  | 60.7403  |
| 0.0011        | 20.0  | 4300 | 1.2159          | 71.8475  | 60.7403  |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1