
whisper-small-en

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0947
  • WER: 11.3030
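
Since the usage sections below are still placeholders, here is a minimal inference sketch using the transformers automatic-speech-recognition pipeline. The repo id is hypothetical (the card only gives the name whisper-small-en, not a namespace), and the audio path is a placeholder:

```python
# Minimal inference sketch. "your-namespace/whisper-small-en" is a
# hypothetical repo id; replace it with the actual hub id or a local path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-namespace/whisper-small-en",
)

# "sample.wav" is a placeholder for any speech audio file; the pipeline
# handles decoding and resampling to the model's expected 16 kHz input.
result = asr("sample.wav")
print(result["text"])
```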

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 200
  • mixed_precision_training: Native AMP
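
For reference, these hyperparameters map onto transformers Seq2SeqTrainingArguments roughly as sketched below. The output_dir and the 100-step evaluation cadence (inferred from the results table) are assumptions, not stated in the card:

```python
# Sketch of training arguments matching the listed hyperparameters.
# output_dir and the eval/logging cadence are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-en",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=200,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="steps",      # assumed; the table evaluates every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```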

Training results

| Training Loss | Epoch    | Step | Validation Loss | WER     |
|:-------------:|:--------:|:----:|:---------------:|:-------:|
| 0.0           | 7.6923   | 100  | 0.0560          | 11.6170 |
| 0.0           | 15.3846  | 200  | 0.0666          | 13.9717 |
| 0.0           | 23.0769  | 300  | 0.0754          | 13.8148 |
| 0.0           | 30.7692  | 400  | 0.0850          | 14.1287 |
| 0.0           | 38.4615  | 500  | 0.0945          | 13.1868 |
| 0.0           | 46.1538  | 600  | 0.1042          | 13.0298 |
| 0.0           | 53.8462  | 700  | 0.1147          | 13.8148 |
| 0.0           | 61.5385  | 800  | 0.1256          | 14.1287 |
| 0.0           | 69.2308  | 900  | 0.1361          | 14.7567 |
| 0.0           | 76.9231  | 1000 | 0.1487          | 13.8148 |
| 0.0           | 84.6154  | 1100 | 0.1619          | 17.1115 |
| 0.0           | 92.3077  | 1200 | 0.1759          | 17.2684 |
| 0.0           | 100.0    | 1300 | 0.1866          | 17.1115 |
| 0.0           | 107.6923 | 1400 | 0.1979          | 17.1115 |
| 0.0043        | 115.3846 | 1500 | 0.0933          | 10.2041 |
| 0.0           | 123.0769 | 1600 | 0.0901          | 10.9890 |
| 0.0           | 130.7692 | 1700 | 0.0914          | 11.3030 |
| 0.0           | 138.4615 | 1800 | 0.0922          | 11.3030 |
| 0.0           | 146.1538 | 1900 | 0.0929          | 11.3030 |
| 0.0           | 153.8462 | 2000 | 0.0933          | 11.3030 |
| 0.0           | 161.5385 | 2100 | 0.0938          | 11.3030 |
| 0.0           | 169.2308 | 2200 | 0.0942          | 11.3030 |
| 0.0           | 176.9231 | 2300 | 0.0943          | 11.3030 |
| 0.0           | 184.6154 | 2400 | 0.0945          | 11.3030 |
| 0.0           | 192.3077 | 2500 | 0.0947          | 11.3030 |
| 0.0           | 200.0    | 2600 | 0.0947          | 11.3030 |
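
The WER column appears to be reported in percent. A typical way to compute it, sketched below with the Hugging Face evaluate library (an assumption; the card does not state how WER was measured):

```python
# Sketch of WER computation with the evaluate library (an assumption;
# the card does not say which tool was used). evaluate returns a
# fraction, so multiply by 100 to match the percentages in the table.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello world"]  # model transcripts (placeholder)
references = ["hello world"]   # ground-truth transcripts (placeholder)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```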

Framework versions

  • Transformers 4.40.2
  • PyTorch 2.2.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1