---
language:
  - en
license: apache-2.0
base_model: openai/whisper-medium
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-medium
    results: []
---

# openai/whisper-medium

This model is a fine-tuned version of openai/whisper-medium on the pphuc25/EngMed dataset. It achieves the following results on the evaluation set:

- Loss: 1.3174
- Wer: 16.8776
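
Wer above is the word error rate in percent: word-level edit distance between the model transcript and the reference, divided by the reference length. As a minimal sketch (not the `jiwer`/`evaluate` implementation typically used when training Whisper, and the example sentences are invented):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance
    (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between the first i-1 reference words
    # and the first j hypothesis words
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,               # deletion
                         cur[j - 1] + 1,            # insertion
                         prev[j - 1] + (r != h))    # substitution
        prev = cur
    return 100.0 * prev[-1] / len(ref)

print(wer("the patient reports chest pain",
          "the patient report chest pain"))  # one substitution in five words: 20.0
```

A Wer of 16.8776 therefore means roughly one word-level error per six reference words on the evaluation set.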

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
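
With a linear scheduler and 100 warmup steps over 6460 total steps (20 epochs × 323 steps per epoch, per the results table), the learning rate ramps to 1e-4 and then decays linearly to zero. A small sketch of that schedule (the function name `linear_lr` is ours; it mirrors the shape of `transformers.get_linear_schedule_with_warmup`):

```python
def linear_lr(step: int, base_lr: float = 1e-4,
              warmup_steps: int = 100, total_steps: int = 6460) -> float:
    """Linear warmup from 0 to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(50))    # halfway through warmup: 5e-05
print(linear_lr(100))   # peak learning rate: 0.0001
print(linear_lr(6460))  # end of training: 0.0
```

Note that warmup covers well under half of the first epoch, so the model trains at or near the peak rate almost throughout.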

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5925        | 1.0   | 323  | 0.7699          | 21.1498 |
| 0.3437        | 2.0   | 646  | 0.8143          | 36.4451 |
| 0.1856        | 3.0   | 969  | 0.8772          | 20.0422 |
| 0.123         | 4.0   | 1292 | 0.8963          | 22.3629 |
| 0.0975        | 5.0   | 1615 | 0.9559          | 20.4641 |
| 0.0746        | 6.0   | 1938 | 0.9947          | 20.7806 |
| 0.0594        | 7.0   | 2261 | 1.0422          | 18.5654 |
| 0.0405        | 8.0   | 2584 | 1.1209          | 18.9873 |
| 0.0352        | 9.0   | 2907 | 1.1780          | 23.7869 |
| 0.0313        | 10.0  | 3230 | 1.1915          | 19.5148 |
| 0.0213        | 11.0  | 3553 | 1.2019          | 16.8776 |
| 0.0154        | 12.0  | 3876 | 1.1514          | 17.8797 |
| 0.0047        | 13.0  | 4199 | 1.1908          | 19.6730 |
| 0.0091        | 14.0  | 4522 | 1.1906          | 18.7764 |
| 0.0017        | 15.0  | 4845 | 1.2421          | 17.9852 |
| 0.0012        | 16.0  | 5168 | 1.2742          | 18.0907 |
| 0.0003        | 17.0  | 5491 | 1.3038          | 17.5105 |
| 0.0003        | 18.0  | 5814 | 1.3091          | 17.4578 |
| 0.0002        | 19.0  | 6137 | 1.3155          | 16.9304 |
| 0.0           | 20.0  | 6460 | 1.3174          | 16.8776 |
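
The best Wer (16.8776) first appears at epoch 11, while training loss approaches zero and validation loss keeps rising after epoch 12 — a typical overfitting pattern. The card does not say whether `load_best_model_at_end` was used, so as an illustrative sketch of checkpoint selection (values copied from the results above):

```python
# (epoch, step, validation_loss, wer) copied from the training results table
results = [
    (1, 323, 0.7699, 21.1498), (2, 646, 0.8143, 36.4451),
    (3, 969, 0.8772, 20.0422), (4, 1292, 0.8963, 22.3629),
    (5, 1615, 0.9559, 20.4641), (6, 1938, 0.9947, 20.7806),
    (7, 2261, 1.0422, 18.5654), (8, 2584, 1.1209, 18.9873),
    (9, 2907, 1.1780, 23.7869), (10, 3230, 1.1915, 19.5148),
    (11, 3553, 1.2019, 16.8776), (12, 3876, 1.1514, 17.8797),
    (13, 4199, 1.1908, 19.6730), (14, 4522, 1.1906, 18.7764),
    (15, 4845, 1.2421, 17.9852), (16, 5168, 1.2742, 18.0907),
    (17, 5491, 1.3038, 17.5105), (18, 5814, 1.3091, 17.4578),
    (19, 6137, 1.3155, 16.9304), (20, 6460, 1.3174, 16.8776),
]
best_wer = min(results, key=lambda r: r[3])        # ties keep the earliest epoch
best_loss = min(results, key=lambda r: r[2])
print(best_wer)   # (11, 3553, 1.2019, 16.8776)
print(best_loss)  # (1, 323, 0.7699, 21.1498)
```

By either metric the final checkpoint (epoch 20) is not the one with the lowest validation loss, so early stopping or checkpoint selection may be worth considering when reproducing this run.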

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1