whisper-kor3_de_2

This model is a fine-tuned version of openai/whisper-small on the whisper-kor3_de_2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3685
  • WER: 23.2718
  • CER: 10.5941
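The WER and CER values above are on a 0–100 scale (word and character error rate, respectively). As an illustration of what these metrics measure, here is a minimal pure-Python sketch; `edit_distance`, `wer`, and `cer` are illustrative helpers, not the exact scoring code used for this model:

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein distance between two token sequences,
    # computed with a single rolling row of the DP table.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(
                d[j] + 1,            # deletion
                d[j - 1] + 1,        # insertion
                prev + (r != h),     # substitution (free if tokens match)
            )
    return d[-1]

def wer(reference, hypothesis):
    # Word error rate: word-level edit distance / reference word count, as a percentage.
    ref = reference.split()
    return 100.0 * edit_distance(ref, hypothesis.split()) / len(ref)

def cer(reference, hypothesis):
    # Character error rate: the same computation at character level.
    return 100.0 * edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("a b c", "a x c")` is one substitution out of three reference words, i.e. about 33.33.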

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • training_steps: 1000
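With these settings the learning rate ramps linearly from 0 to 1e-05 over the first 200 warmup steps, then decays linearly to 0 at step 1000. A minimal sketch of that schedule, assuming the standard linear-with-warmup behaviour (`linear_schedule_lr` is an illustrative helper, not taken from the training code):

```python
BASE_LR = 1e-05       # learning_rate from the hyperparameters above
WARMUP_STEPS = 200    # lr_scheduler_warmup_steps
TOTAL_STEPS = 1000    # training_steps

def linear_schedule_lr(step):
    # Linear warmup from 0 to BASE_LR, then linear decay to 0 at TOTAL_STEPS.
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * max(0, TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)
```

So the rate peaks at exactly step 200 and is halfway back down by step 600.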

Training results

Training Loss   Epoch   Step   Validation Loss   WER       CER
0.3178          0.21      50   0.3333            23.0343   11.1119
0.2684          0.42     100   0.3293            22.9024   10.6660
0.2790          0.64     150   0.3264            23.1662   10.8458
0.2813          0.85     200   0.3314            28.3905   14.9597
0.2363          1.06     250   0.3325            31.3720   16.7002
0.1909          1.27     300   0.3333            26.4908   13.0538
0.1710          1.48     350   0.3384            24.4063   12.1044
0.1699          1.69     400   0.3330            22.6649   10.3711
0.1824          1.91     450   0.3336            23.3509   10.8242
0.0784          2.12     500   0.3425            22.7441   10.4574
0.0694          2.33     550   0.3492            23.2982   10.7379
0.0946          2.54     600   0.3442            24.2744   11.4212
0.0785          2.75     650   0.3486            22.6913   10.4646
0.0838          2.97     700   0.3466            22.7441   10.5941
0.0423          3.18     750   0.3600            24.2480   11.3780
0.0448          3.39     800   0.3615            23.0871   10.5509
0.0492          3.60     850   0.3640            23.3509   10.6228
0.0400          3.81     900   0.3649            23.4565   10.5941
0.0385          4.03     950   0.3635            24.1689   11.2558
0.0276          4.24    1000   0.3685            23.2718   10.5941

Framework versions

  • Transformers 4.33.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.13.3