Hanhpt23/whisper-small-vietmed-free_ED3-11

This model is a fine-tuned version of openai/whisper-small on the pphuc25/VietMed-split-8-2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8783
  • WER: 84.0710
  • CER: 69.8088
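
To try the checkpoint, the sketch below loads it through the generic transformers speech-recognition pipeline. This is a hedged usage example rather than an official recipe: the repository id Hanhpt23/whisper-small-vietmed-free_ED3-11 is this card's repo, while the audio file name and the chunking setting are placeholders.

```python
# Hedged usage sketch: load this checkpoint via the ASR pipeline.
# The audio path below is a placeholder, not a file shipped with the model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-small-vietmed-free_ED3-11",
)

# chunk_length_s lets the pipeline split recordings longer than 30 s.
result = asr("example_recording.wav", chunk_length_s=30)
print(result["text"])
```

Given the evaluation WER reported above, transcripts should be reviewed before any downstream use.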

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
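
The card does not document the data beyond the dataset id given in the summary, pphuc25/VietMed-split-8-2. The sketch below only loads and prints the dataset so its splits and columns can be inspected; loading with default arguments is an assumption not confirmed by the card.

```python
# Hedged sketch: inspect the dataset named in this card.
# Split names and column layout are not documented here, so print the
# returned DatasetDict to discover them.
from datasets import load_dataset

vietmed = load_dataset("pphuc25/VietMed-split-8-2")
print(vietmed)
```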

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
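
As a rough guide, the sketch below maps these values onto transformers.Seq2SeqTrainingArguments. Only the listed hyperparameters come from this card; the output directory, evaluation strategy, and predict_with_generate flag are assumptions added so the example is self-contained.

```python
# Hedged configuration sketch; only the listed hyperparameters are from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-vietmed",   # assumed name, not stated in the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    evaluation_strategy="epoch",          # assumed: the results table is per epoch
    predict_with_generate=True,           # assumed: needed to compute WER/CER
)
```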

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     | CER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.5701        | 1.0   | 569   | 0.5608          | 46.2200 | 38.1533 |
| 0.4525        | 2.0   | 1138  | 0.5395          | 65.0522 | 58.3991 |
| 0.3578        | 3.0   | 1707  | 0.5448          | 46.9632 | 42.0497 |
| 0.267         | 4.0   | 2276  | 0.5691          | 40.8786 | 36.0588 |
| 0.1655        | 5.0   | 2845  | 0.6143          | 74.4609 | 62.7468 |
| 0.0965        | 6.0   | 3414  | 0.6592          | 66.0443 | 49.6775 |
| 0.0465        | 7.0   | 3983  | 0.7106          | 64.5982 | 55.2552 |
| 0.0222        | 8.0   | 4552  | 0.7337          | 75.1455 | 63.4091 |
| 0.0162        | 9.0   | 5121  | 0.7609          | 57.3787 | 49.1906 |
| 0.0071        | 10.0  | 5690  | 0.7812          | 65.8429 | 53.4811 |
| 0.0048        | 11.0  | 6259  | 0.7982          | 49.3355 | 40.4222 |
| 0.0048        | 12.0  | 6828  | 0.8064          | 53.4249 | 43.6294 |
| 0.0023        | 13.0  | 7397  | 0.8291          | 55.4347 | 44.3160 |
| 0.0032        | 14.0  | 7966  | 0.8303          | 63.7196 | 56.2369 |
| 0.0013        | 15.0  | 8535  | 0.8443          | 83.7342 | 69.0423 |
| 0.0034        | 16.0  | 9104  | 0.8523          | 88.0871 | 73.9847 |
| 0.0009        | 17.0  | 9673  | 0.8752          | 85.7148 | 69.7958 |
| 0.0004        | 18.0  | 10242 | 0.8713          | 90.1885 | 72.8172 |
| 0.0004        | 19.0  | 10811 | 0.8762          | 82.5773 | 68.7490 |
| 0.0003        | 20.0  | 11380 | 0.8783          | 84.0710 | 69.8088 |
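
The WER and CER columns above are on a 0–100 scale. The sketch below shows how such scores are typically computed with the Hugging Face evaluate library; the card does not state the exact metric code, and the example strings are illustrative only.

```python
# Hedged metric sketch: percent-scale WER/CER with the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["benh nhan bi sot nhe"]   # illustrative model output
references = ["bệnh nhân bị sốt nhẹ"]    # illustrative reference transcript

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```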

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1