---
language:
  - vi
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-small
    results: []
---

# openai/whisper-small

This model is a fine-tuned version of openai/whisper-small on the pphuc25/VietMed-split-8-2 dataset. It achieves the following results on the evaluation set:

- Loss: 0.8268
- Wer: 39.2934
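The checkpoint can be loaded for Vietnamese transcription with the `transformers` ASR pipeline. A minimal, hedged sketch — the `model_id` default is a placeholder; substitute this model's actual Hub repo id:

```python
# Usage sketch (not from the model card): transcribe one audio file with
# the transformers ASR pipeline. Requires `transformers` and `torch`.
def transcribe(audio_path, model_id="openai/whisper-small"):
    """Run automatic speech recognition on a single audio file."""
    from transformers import pipeline  # lazy import so the module loads without transformers
    asr = pipeline(
        "automatic-speech-recognition",
        model=model_id,
        # Whisper accepts decoding hints; "vi" matches this fine-tune's language
        generate_kwargs={"language": "vi", "task": "transcribe"},
    )
    return asr(audio_path)["text"]

# Example call (downloads the checkpoint on first use):
# text = transcribe("sample.wav", model_id="<this-repo-id>")
```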

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
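The linear scheduler with warmup implied by these hyperparameters (peak lr 1e-4, 100 warmup steps, 11380 total steps over 20 epochs) can be sketched in plain Python — this mirrors the behaviour of `transformers`' `get_linear_schedule_with_warmup`:

```python
# Sketch of the linear-with-warmup learning-rate schedule used above.
# Totals are taken from the training log (569 steps/epoch x 20 epochs = 11380).
def linear_lr(step, base_lr=1e-4, warmup_steps=100, total_steps=11380):
    """Learning rate at a given optimizer step."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up linearly from 0
    remaining = total_steps - step
    # decay linearly from base_lr down to 0 at the final step
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))

print(linear_lr(50))     # halfway through warmup -> 5e-05
print(linear_lr(100))    # peak -> 0.0001
print(linear_lr(11380))  # end of training -> 0.0
```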

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer      |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5567        | 1.0   | 569   | 0.5442          | 110.6681 |
| 0.3927        | 2.0   | 1138  | 0.5414          | 79.2641  |
| 0.2763        | 3.0   | 1707  | 0.5484          | 81.6145  |
| 0.1841        | 4.0   | 2276  | 0.5896          | 70.0897  |
| 0.1059        | 5.0   | 2845  | 0.6430          | 59.4838  |
| 0.0807        | 6.0   | 3414  | 0.6636          | 63.3132  |
| 0.0459        | 7.0   | 3983  | 0.7000          | 71.0086  |
| 0.0249        | 8.0   | 4552  | 0.7402          | 59.5643  |
| 0.0188        | 9.0   | 5121  | 0.7577          | 70.7926  |
| 0.0182        | 10.0  | 5690  | 0.7692          | 52.9599  |
| 0.0049        | 11.0  | 6259  | 0.7727          | 77.1664  |
| 0.0054        | 12.0  | 6828  | 0.7771          | 41.3619  |
| 0.0019        | 13.0  | 7397  | 0.7908          | 49.6650  |
| 0.0023        | 14.0  | 7966  | 0.8098          | 34.6403  |
| 0.001         | 15.0  | 8535  | 0.8050          | 41.5669  |
| 0.0005        | 16.0  | 9104  | 0.8027          | 35.4970  |
| 0.0004        | 17.0  | 9673  | 0.8123          | 39.3520  |
| 0.0007        | 18.0  | 10242 | 0.8224          | 43.5914  |
| 0.0002        | 19.0  | 10811 | 0.8250          | 39.2532  |
| 0.0001        | 20.0  | 11380 | 0.8268          | 39.2934  |
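Note that training loss approaches zero while validation loss climbs steadily, a sign of overfitting, and the final epoch is not the best one by WER. A short sketch selecting the best checkpoint from the (epoch, validation loss, WER) rows above:

```python
# Pick the best checkpoint from the evaluation log by lowest WER.
# Rows are (epoch, validation_loss, wer), copied from the table above.
RESULTS = [
    (1, 0.5442, 110.6681), (2, 0.5414, 79.2641), (3, 0.5484, 81.6145),
    (4, 0.5896, 70.0897), (5, 0.6430, 59.4838), (6, 0.6636, 63.3132),
    (7, 0.7000, 71.0086), (8, 0.7402, 59.5643), (9, 0.7577, 70.7926),
    (10, 0.7692, 52.9599), (11, 0.7727, 77.1664), (12, 0.7771, 41.3619),
    (13, 0.7908, 49.6650), (14, 0.8098, 34.6403), (15, 0.8050, 41.5669),
    (16, 0.8027, 35.4970), (17, 0.8123, 39.3520), (18, 0.8224, 43.5914),
    (19, 0.8250, 39.2532), (20, 0.8268, 39.2934),
]
best_epoch, best_loss, best_wer = min(RESULTS, key=lambda row: row[2])
print(best_epoch, best_wer)  # -> 14 34.6403 (better than the final 39.2934)
```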

## Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1