---
language:
  - zh
license: apache-2.0
base_model: openai/whisper-medium
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-medium
    results: []
---

# openai/whisper-medium

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Hanhpt23/ChineseMed dataset. It achieves the following results on the evaluation set:

- Loss: 5.2602
- Wer: 128.5060

Note that WER is reported here as a percentage, so values above 100 are possible when the hypotheses contain more errors than the references have words.

## Model description

More information needed

## Intended uses & limitations

More information needed
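
The card does not specify a usage recipe, but since this is a Whisper checkpoint it can be loaded with the standard `transformers` Whisper classes. Below is a minimal transcription sketch; the checkpoint path is a placeholder, since the card does not state the published repo id:

```python
# Minimal inference sketch; the checkpoint id below is a placeholder, not the
# card's published repo id.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

checkpoint = "path/to/fine-tuned-whisper-medium"  # placeholder
processor = WhisperProcessor.from_pretrained(checkpoint)
model = WhisperForConditionalGeneration.from_pretrained(checkpoint)
model.eval()

def transcribe(audio, sampling_rate=16000):
    """Transcribe a 16 kHz mono waveform (a 1-D float array) to Chinese text."""
    inputs = processor(audio, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        predicted_ids = model.generate(
            inputs.input_features,
            language="zh",       # matches the card's language metadata
            task="transcribe",
        )
    return processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```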

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
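
These settings map directly onto the `transformers` training arguments. The following is a hedged reconstruction assuming the standard `Seq2SeqTrainer` workflow; the `output_dir` and evaluation settings are assumptions, since the card does not include the training script:

```python
# Hedged reconstruction of the training configuration, assuming the standard
# transformers Seq2SeqTrainer was used; argument names follow Transformers 4.41.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-chinesemed",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    eval_strategy="epoch",        # assumed: the table below reports one eval per epoch
    predict_with_generate=True,   # assumed: needed so WER is computed on generated text
)
```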

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer      |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.1376        | 1.0   | 2222  | 3.0423          | 114.8827 |
| 2.4455        | 2.0   | 4444  | 3.0159          | 122.3240 |
| 1.5794        | 3.0   | 6666  | 3.4702          | 109.4448 |
| 0.885         | 4.0   | 8888  | 4.0015          | 122.7247 |
| 0.6963        | 5.0   | 11110 | 4.3781          | 109.8454 |
| 0.4926        | 6.0   | 13332 | 4.5503          | 105.9531 |
| 0.4615        | 7.0   | 15554 | 4.6635          | 108.9296 |
| 0.3744        | 8.0   | 17776 | 4.8384          | 113.3371 |
| 0.395         | 9.0   | 19998 | 4.9563          | 113.7378 |
| 0.3312        | 10.0  | 22220 | 4.9508          | 109.5592 |
| 0.3324        | 11.0  | 24442 | 5.0560          | 120.2633 |
| 0.3027        | 12.0  | 26664 | 4.9751          | 117.0006 |
| 0.3127        | 13.0  | 28886 | 5.1136          | 124.7853 |
| 0.2792        | 14.0  | 31108 | 5.1324          | 111.6772 |
| 0.3383        | 15.0  | 33330 | 5.1389          | 110.5896 |
| 0.2695        | 16.0  | 35552 | 5.1925          | 111.6199 |
| 0.2773        | 17.0  | 37774 | 5.1852          | 111.9061 |
| 0.2829        | 18.0  | 39996 | 5.2192          | 121.5226 |
| 0.2195        | 19.0  | 42218 | 5.2098          | 126.5598 |
| 0.2254        | 20.0  | 44440 | 5.2602          | 128.5060 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1