
Whisper Small English

This model is a fine-tuned version of openai/whisper-small on the English (en) subset of the mozilla-foundation/common_voice_11_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3266
  • WER: 13.0386
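
For reference, transcription with this checkpoint can be sketched using the transformers pipeline API. This is a minimal sketch, assuming transformers and torch are installed; the repository ID lorenzoncina/whisper-small-en-no-8bitoptm and the audio file path are assumptions, not part of the original card:

```python
# Minimal inference sketch; assumes the checkpoint is published as
# lorenzoncina/whisper-small-en-no-8bitoptm (assumed repo ID).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lorenzoncina/whisper-small-en-no-8bitoptm",  # assumed repo ID
    chunk_length_s=30,  # Whisper operates on 30-second audio windows
)

# "sample.wav" is a placeholder path to any English audio file.
print(asr("sample.wav")["text"])
```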

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
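
Expressed as Hugging Face Seq2SeqTrainingArguments, the hyperparameters above would look roughly as follows. This is a sketch, not the author's actual training script; output_dir is a placeholder, and fp16=True is assumed as the mapping for "Native AMP":

```python
# Sketch of the training configuration listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-en",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,  # assumed equivalent of "Native AMP" mixed precision
)
```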

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|---------------|-------|-------|-----------------|---------|
| 0.1529        | 0.1   | 1000  | 0.4381          | 17.7766 |
| 0.2372        | 0.2   | 2000  | 0.3988          | 15.9201 |
| 0.1706        | 0.3   | 3000  | 0.3841          | 15.5069 |
| 0.2781        | 0.4   | 4000  | 0.3697          | 14.8122 |
| 0.2167        | 0.5   | 5000  | 0.3576          | 14.2563 |
| 0.3609        | 0.6   | 6000  | 0.4041          | 18.0670 |
| 0.2455        | 0.7   | 7000  | 0.3372          | 13.4813 |
| 0.2502        | 0.8   | 8000  | 0.3393          | 13.5810 |
| 0.2564        | 0.9   | 9000  | 0.3303          | 13.1041 |
| 0.2394        | 1.0   | 10000 | 0.3266          | 13.0386 |
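
The WER column is a percentage. With the evaluate library, the metric can be reproduced along these lines; this is a sketch, and the prediction and reference lists are purely illustrative:

```python
# evaluate.load("wer") returns a fraction, so it is scaled by 100
# to match the percentages in the table above.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]  # illustrative model outputs
references = ["the cat sat on a mat"]     # illustrative ground truth
print(100 * wer_metric.compute(predictions=predictions, references=references))
```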

Framework versions

  • Transformers 4.28.0.dev0
  • Pytorch 2.0.0+cu117
  • Datasets 2.11.1.dev0
  • Tokenizers 0.13.2