
openai/whisper-small

This model is a fine-tuned version of openai/whisper-small on the pphuc25/ChiMed dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3249
  • CER: 27.8520
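
CER (character error rate) is the Levenshtein edit distance between the predicted and reference transcripts at the character level, divided by the reference length and reported as a percentage; values above 100 are possible when the hypothesis is much longer than the reference, as in the first epoch of the results table below. A minimal pure-Python sketch of the metric (libraries such as `evaluate` or `jiwer` are commonly used in practice):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance between the character
    sequences, divided by the reference length, as a percentage."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # edit distances against the empty reference prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(
                prev[j] + 1,      # deletion
                cur[j - 1] + 1,   # insertion
                prev[j - 1] + (reference[i - 1] != hypothesis[j - 1]),  # substitution
            )
        prev = cur
    return 100.0 * prev[n] / m
```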

Model description

More information needed

Intended uses & limitations

More information needed
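
As a hedged usage sketch (the Hub id below is a placeholder, not this card's actual repository name), the checkpoint can be loaded through the Transformers automatic-speech-recognition pipeline:

```python
# Placeholder Hub id -- substitute the actual repository name of this checkpoint.
MODEL_ID = "your-username/whisper-small-chimed"

def transcribe(audio_path: str, model_id: str = MODEL_ID) -> str:
    # Build an automatic-speech-recognition pipeline on the fine-tuned checkpoint
    # and return the transcribed text for the given audio file.
    from transformers import pipeline
    asr = pipeline("automatic-speech-recognition", model=model_id)
    return asr(audio_path)["text"]

if __name__ == "__main__":
    print(transcribe("sample.wav"))
```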

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
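
The linear schedule with 100 warmup steps ramps the learning rate from 0 up to 1e-4 over the first 100 optimizer steps, then decays it linearly to 0 by the final step (3220 total: 161 steps/epoch × 20 epochs, per the results table). A small sketch mirroring the semantics of Transformers' linear schedule with warmup (not the library call itself):

```python
BASE_LR = 1e-4      # learning_rate from the hyperparameters above
WARMUP_STEPS = 100  # lr_scheduler_warmup_steps
TOTAL_STEPS = 3220  # 161 steps/epoch x 20 epochs, from the results table

def linear_warmup_lr(step: int) -> float:
    """Learning rate at a given optimizer step: linear warmup, then linear decay."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))
```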

Training results

| Training Loss | Epoch | Step | Validation Loss | CER      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.776         | 1.0   | 161  | 0.9761          | 346.2567 |
| 0.4431        | 2.0   | 322  | 0.9469          | 55.4813  |
| 0.2522        | 3.0   | 483  | 1.1108          | 47.6381  |
| 0.1548        | 4.0   | 644  | 1.1770          | 33.2219  |
| 0.1216        | 5.0   | 805  | 1.2831          | 34.9822  |
| 0.0914        | 6.0   | 966  | 1.2727          | 31.5285  |
| 0.0799        | 7.0   | 1127 | 1.2785          | 33.9795  |
| 0.0503        | 8.0   | 1288 | 1.3214          | 41.1542  |
| 0.0577        | 9.0   | 1449 | 1.3421          | 31.4394  |
| 0.0316        | 10.0  | 1610 | 1.3284          | 35.1381  |
| 0.0249        | 11.0  | 1771 | 1.3602          | 30.2139  |
| 0.0245        | 12.0  | 1932 | 1.3494          | 32.3752  |
| 0.0199        | 13.0  | 2093 | 1.3304          | 30.7041  |
| 0.0126        | 14.0  | 2254 | 1.3625          | 30.6818  |
| 0.0039        | 15.0  | 2415 | 1.3302          | 29.3895  |
| 0.003         | 16.0  | 2576 | 1.3131          | 29.1444  |
| 0.0003        | 17.0  | 2737 | 1.3168          | 28.3422  |
| 0.0003        | 18.0  | 2898 | 1.3202          | 28.0526  |
| 0.0007        | 19.0  | 3059 | 1.3235          | 27.9857  |
| 0.0003        | 20.0  | 3220 | 1.3249          | 27.8520  |

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model size

  • 242M params (Safetensors, F32)