---
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-tiny
  results: []
---

# openai/whisper-tiny

This model is a fine-tuned version of openai/whisper-tiny on the pphuc25/EngMed dataset. It achieves the following results on the evaluation set:

- Loss: 1.4835
- Wer: 32.2972
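The reported Wer is the word error rate in percent. As a minimal, dependency-free sketch of how it is computed (a hypothetical `wer` helper for illustration; real evaluations typically use the `evaluate` or `jiwer` packages):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance
    (substitutions + insertions + deletions) over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One substitution ("has"->"had") and one deletion ("a") over 5 words -> 40.0
print(wer("the patient has a fever", "the patient had fever"))
```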

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
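Putting the schedule settings together: with 3491 steps per epoch and 20 epochs, training runs 69820 optimizer steps. A dependency-free sketch of the implied schedule (assuming it mirrors the linear warmup/decay behavior of `transformers`' linear scheduler; the function name is illustrative):

```python
def linear_lr_with_warmup(step: int,
                          base_lr: float = 1e-4,
                          warmup_steps: int = 100,
                          total_steps: int = 69820) -> float:
    """Linear warmup from 0 to base_lr over warmup_steps,
    then linear decay back to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# LR peaks at step 100, then decays linearly to 0 by step 69820.
print(linear_lr_with_warmup(100))
```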

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.8402        | 1.0   | 3491  | 0.9453          | 45.4910 |
| 0.5871        | 2.0   | 6982  | 0.9525          | 54.3790 |
| 0.3779        | 3.0   | 10473 | 0.9838          | 38.5673 |
| 0.2907        | 4.0   | 13964 | 1.0268          | 38.9408 |
| 0.2155        | 5.0   | 17455 | 1.1086          | 47.6326 |
| 0.1331        | 6.0   | 20946 | 1.1735          | 37.2778 |
| 0.1051        | 7.0   | 24437 | 1.2287          | 43.8694 |
| 0.0862        | 8.0   | 27928 | 1.2749          | 38.3380 |
| 0.051         | 9.0   | 31419 | 1.3181          | 38.1879 |
| 0.0505        | 10.0  | 34910 | 1.3519          | 37.6607 |
| 0.0235        | 11.0  | 38401 | 1.3838          | 34.7355 |
| 0.0172        | 12.0  | 41892 | 1.4131          | 34.8962 |
| 0.0145        | 13.0  | 45383 | 1.4257          | 34.5925 |
| 0.0102        | 14.0  | 48874 | 1.4460          | 34.5535 |
| 0.0063        | 15.0  | 52365 | 1.4482          | 33.0453 |
| 0.0023        | 16.0  | 55856 | 1.4666          | 32.8515 |
| 0.0017        | 17.0  | 59347 | 1.4708          | 32.4284 |
| 0.004         | 18.0  | 62838 | 1.4847          | 32.8149 |
| 0.0002        | 19.0  | 66329 | 1.4768          | 32.1459 |
| 0.0001        | 20.0  | 69820 | 1.4835          | 32.2972 |
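Note from the log that the lowest validation WER (32.1459) occurs at epoch 19, slightly better than the final epoch-20 checkpoint reported above. A small snippet to pick the best epoch from these logged values:

```python
# (epoch, validation WER) pairs copied from the training-results table above
wer_by_epoch = {
    1: 45.4910, 2: 54.3790, 3: 38.5673, 4: 38.9408, 5: 47.6326,
    6: 37.2778, 7: 43.8694, 8: 38.3380, 9: 38.1879, 10: 37.6607,
    11: 34.7355, 12: 34.8962, 13: 34.5925, 14: 34.5535, 15: 33.0453,
    16: 32.8515, 17: 32.4284, 18: 32.8149, 19: 32.1459, 20: 32.2972,
}

# Epoch with the lowest validation WER
best_epoch = min(wer_by_epoch, key=wer_by_epoch.get)
print(best_epoch, wer_by_epoch[best_epoch])
```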

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1