
whisper-synthesized-turkish-4-hour-llr

This model is a fine-tuned version of openai/whisper-small on a synthesized Turkish speech dataset of roughly 4 hours (per the model name; the data is not otherwise documented in this card). It achieves the following results on the evaluation set:

  • Loss: 0.2356
  • WER: 14.9179
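
As a quick way to try the checkpoint, the snippet below is a minimal sketch using the transformers automatic-speech-recognition pipeline. The repository namespace (`<user>/...`) and the audio file name are placeholders, not details from this card.

```python
# Minimal sketch, assuming the checkpoint is published as
# "<user>/whisper-synthesized-turkish-4-hour-llr" (placeholder namespace)
# and that "sample_turkish.wav" is a short mono audio file on disk.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-synthesized-turkish-4-hour-llr",
)

result = asr("sample_turkish.wav")
print(result["text"])
```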

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 2000
  • mixed_precision_training: Native AMP
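
The same settings can be expressed with `Seq2SeqTrainingArguments`; the sketch below is illustrative only. The output directory, the evaluation cadence, dataset loading, the data collator, and the `Seq2SeqTrainer` call are assumptions or omissions, not details documented in this card.

```python
# Minimal sketch of the reported hyperparameters as Seq2SeqTrainingArguments.
# output_dir is an assumed placeholder; only the listed hyperparameters
# come from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-synthesized-turkish-4-hour-llr",  # assumed
    learning_rate=1e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2000,
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="steps",
    eval_steps=100,               # matches the 100-step cadence in the results table
)
```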

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.8278        | 1.04  | 100  | 1.5357          | 41.2110 |
| 1.2048        | 2.08  | 200  | 0.8059          | 86.3165 |
| 0.6206        | 3.12  | 300  | 0.5751          | 94.1612 |
| 0.4927        | 4.17  | 400  | 0.5024          | 80.1568 |
| 0.4277        | 5.21  | 500  | 0.4451          | 48.6607 |
| 0.3502        | 6.25  | 600  | 0.3999          | 32.7490 |
| 0.2948        | 7.29  | 700  | 0.3650          | 21.5961 |
| 0.24          | 8.33  | 800  | 0.3344          | 17.7756 |
| 0.2079        | 9.38  | 900  | 0.3010          | 15.2512 |
| 0.1416        | 10.42 | 1000 | 0.2402          | 13.8687 |
| 0.1076        | 11.46 | 1100 | 0.2357          | 14.4427 |
| 0.0905        | 12.5  | 1200 | 0.2350          | 14.0415 |
| 0.0759        | 13.54 | 1300 | 0.2348          | 14.0230 |
| 0.0819        | 14.58 | 1400 | 0.2345          | 14.0600 |
| 0.0608        | 15.62 | 1500 | 0.2346          | 14.3563 |
| 0.0712        | 16.67 | 1600 | 0.2350          | 14.2575 |
| 0.0608        | 17.71 | 1700 | 0.2355          | 14.8562 |
| 0.0632        | 18.75 | 1800 | 0.2354          | 14.9241 |
| 0.0544        | 19.79 | 1900 | 0.2356          | 14.9303 |
| 0.0529        | 20.83 | 2000 | 0.2356          | 14.9179 |
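
The card does not state how WER was computed. For illustration only, the sketch below uses the `evaluate` package (not listed under Framework versions, so an assumption) with toy Turkish strings; it also shows the fraction-to-percentage scaling implied by values such as 14.9179.

```python
# Minimal sketch of a WER computation with the `evaluate` package (assumed).
# The strings are toy examples, not data from the evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["merhaba dünya"]
references = ["merhaba dünya nasılsın"]

# evaluate returns WER as a fraction; multiply by 100 to match the card's
# percentage-style reporting. Here: 1 deletion / 3 reference words ≈ 33.3333.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```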

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.0.0+cu118
  • Datasets 2.11.0
  • Tokenizers 0.13.3