
whisper-small-r22-e

This model is a fine-tuned version of openai/whisper-small; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.2918
  • WER: 21.3875
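
The checkpoint can be loaded like any other fine-tuned Whisper model via the transformers ASR pipeline. A minimal sketch, assuming the checkpoint is published on the Hub under the hypothetical repo id `your-username/whisper-small-r22-e` (substitute the real path):

```python
# Minimal transcription sketch using the transformers ASR pipeline.
# The repo id below is a placeholder, not confirmed by this card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-r22-e",  # hypothetical repo id
)

# Transcribe a local audio file (any format ffmpeg can decode).
result = asr("sample.wav")
print(result["text"])
```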

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the code sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 5
  • training_steps: 150
  • mixed_precision_training: Native AMP
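
These settings map onto a standard Seq2SeqTrainingArguments configuration. A hedged reconstruction follows; the original training script is not included in this card, so everything beyond the listed values (output_dir, evaluation settings) is an assumption:

```python
# Reconstruction of the reported hyperparameters as Seq2SeqTrainingArguments.
# Values mirror the list above; the card reports Adam with the
# transformers-default betas=(0.9, 0.999) and epsilon=1e-08.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-r22-e",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=5,
    max_steps=150,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="steps",      # assumed: the table evaluates every 10 steps
    eval_steps=10,                    # assumed from the results table
)
```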

Training results

Training Loss  Epoch  Step  Validation Loss  WER
0.3822         0.09     10  0.4255           23.2826
0.2636         0.18     20  0.3321           22.4196
0.2037         0.27     30  0.3279           23.8071
0.1943         0.36     40  0.3177           22.3858
0.2203         0.45     50  0.3109           22.4873
0.1930         0.54     60  0.3071           22.9272
0.2096         0.63     70  0.2990           22.6565
0.2140         0.72     80  0.3029           22.4873
0.2375         0.81     90  0.2927           21.7259
0.2238         0.90    100  0.2918           22.4196
0.2119         0.99    110  0.2919           22.7580
0.1362         1.08    120  0.2897           22.0135
0.0997         1.17    130  0.2915           21.3029
0.0824         1.26    140  0.2920           21.4382
0.0923         1.35    150  0.2918           21.3875
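
The WER values above are on a 0–100 scale. The exact evaluation code is not included in this card; a minimal sketch of how such a score is typically computed with the evaluate library:

```python
# Illustrative WER computation; not the actual evaluation code behind
# the table above, which this card does not include.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# evaluate's "wer" returns a fraction; multiply by 100 to match the
# percentage-style numbers reported in the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 16.6667 for this toy pair
```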

Framework versions

  • Transformers 4.35.0.dev0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.7.dev0
  • Tokenizers 0.14.1