
Hanhpt23/whisper-medium-vietmed-v1

This model is a fine-tuned version of openai/whisper-medium on the pphuc25/VietMed-split-8-2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9216
  • WER: 20.0549
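
For quick use, here is a minimal inference sketch with the Transformers ASR pipeline. The checkpoint ID is taken from this card; the audio path and chunking setting are placeholders, not part of the original setup.

```python
# Minimal sketch: transcribe Vietnamese audio with this fine-tuned checkpoint.
# Assumes `transformers` and `torch` are installed; "sample.wav" is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-medium-vietmed-v1",
)

# Whisper operates on 30 s windows; chunk_length_s lets the pipeline
# split longer recordings automatically.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```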

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
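
These values map onto `Seq2SeqTrainingArguments` roughly as sketched below. The `output_dir`, `eval_strategy`, and `predict_with_generate` settings are assumptions for illustration; Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer, so it needs no explicit flag.

```python
# Hedged sketch of the training configuration listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-vietmed-v1",  # assumed name, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    eval_strategy="epoch",        # assumed: the table reports one eval per epoch
    predict_with_generate=True,   # assumed: needed to compute WER during eval
)
```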

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.5881        | 1.0   | 569   | 0.5795          | 27.0181 |
| 0.3919        | 2.0   | 1138  | 0.5413          | 21.8451 |
| 0.2496        | 3.0   | 1707  | 0.6104          | 22.6213 |
| 0.1324        | 4.0   | 2276  | 0.6854          | 23.5695 |
| 0.0898        | 5.0   | 2845  | 0.7554          | 22.0245 |
| 0.0642        | 6.0   | 3414  | 0.7739          | 21.6804 |
| 0.0366        | 7.0   | 3983  | 0.8119          | 21.2667 |
| 0.026         | 8.0   | 4552  | 0.8180          | 21.4058 |
| 0.0243        | 9.0   | 5121  | 0.8560          | 22.1893 |
| 0.0168        | 10.0  | 5690  | 0.8636          | 21.3326 |
| 0.0203        | 11.0  | 6259  | 0.8314          | 20.8091 |
| 0.0101        | 12.0  | 6828  | 0.8892          | 21.3106 |
| 0.0093        | 13.0  | 7397  | 0.8793          | 21.2594 |
| 0.0046        | 14.0  | 7966  | 0.8985          | 20.4613 |
| 0.0026        | 15.0  | 8535  | 0.8907          | 20.5272 |
| 0.0021        | 16.0  | 9104  | 0.9033          | 20.3807 |
| 0.0029        | 17.0  | 9673  | 0.9153          | 20.2014 |
| 0.0005        | 18.0  | 10242 | 0.9080          | 20.2746 |
| 0.0001        | 19.0  | 10811 | 0.9195          | 20.1135 |
| 0.0002        | 20.0  | 11380 | 0.9216          | 20.0549 |
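
The WER column appears to be reported as a percentage. For context, here is a sketch of how such a score is typically computed with the `evaluate` library; the example strings are toy placeholders, not samples from the VietMed evaluation set.

```python
# Sketch: word error rate with the `evaluate` library (toy inputs).
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["xin chào bác sĩ"]    # hypothetical model output
references = ["xin chào bác sĩ ạ"]   # hypothetical ground truth

# compute() returns a fraction; scale by 100 to match the table's convention.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```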

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1