Whisper Tiny Hu v5

This model is a fine-tuned version of openai/whisper-tiny on the Hungarian (hu) subset of the Common Voice 16.0 dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows these results):

  • Loss: 0.1835
  • WER Ortho: 14.8079
  • WER: 13.5339
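
The sketch below shows one way to run the model for Hungarian transcription with the 🤗 Transformers pipeline API; the repository id and audio file name are placeholders rather than confirmed values.

```python
# Minimal inference sketch for this fine-tuned Whisper checkpoint.
# Assumptions: the repository id and the audio file name below are placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sarpba/whisper-tiny-hu-v5",  # placeholder repo id; replace with the actual model id
)

# Transcribe a Hungarian recording (Whisper expects 16 kHz mono audio).
result = asr(
    "sample_hu.wav",  # placeholder path
    generate_kwargs={"language": "hungarian", "task": "transcribe"},
)
print(result["text"])
```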

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a corresponding training-arguments sketch follows the list):

  • learning_rate: 3.75e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
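
A sketch of how these settings map onto transformers' Seq2SeqTrainingArguments; only the values listed above are taken from this card, while output_dir and every unlisted option are assumptions.

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
# Only the listed values come from this card; output_dir and anything not
# listed (e.g. evaluation/save cadence) are placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-hu-v5",   # placeholder
    learning_rate=3.75e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,                            # "Native AMP" mixed precision
)
```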

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER Ortho | WER     |
|---------------|-------|-------|-----------------|-----------|---------|
| 0.4291        | 0.67  | 1000  | 0.4821          | 47.5702   | 44.3878 |
| 0.271         | 1.34  | 2000  | 0.3431          | 35.7913   | 33.0685 |
| 0.2015        | 2.01  | 3000  | 0.2665          | 28.8089   | 26.0777 |
| 0.1559        | 2.68  | 4000  | 0.2355          | 24.7712   | 22.3006 |
| 0.0934        | 3.35  | 5000  | 0.2089          | 21.6879   | 19.7658 |
| 0.0542        | 4.02  | 6000  | 0.1921          | 18.6950   | 16.7003 |
| 0.061         | 4.69  | 7000  | 0.1895          | 17.2558   | 15.6122 |
| 0.0356        | 5.35  | 8000  | 0.1866          | 16.5302   | 14.9867 |
| 0.0225        | 6.02  | 9000  | 0.1815          | 15.8708   | 14.4115 |
| 0.0318        | 6.69  | 10000 | 0.1835          | 14.8079   | 13.5339 |
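
WER Ortho is presumably the word error rate on raw (orthographic) text and WER the rate after text normalization, both reported as percentages. The sketch below shows one way to compute such metrics with the evaluate library; the use of BasicTextNormalizer is an assumption, not something stated in this card.

```python
# Sketch: orthographic vs. normalized WER with the `evaluate` library.
# Assumption: "WER" in this card uses a text normalizer such as BasicTextNormalizer.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

predictions = ["Jó reggelt kívánok"]      # toy example
references = ["jó reggelt kívánok!"]

wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"WER Ortho: {wer_ortho:.2f}  WER: {wer:.2f}")
```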

Framework versions

  • Transformers 4.37.2
  • PyTorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0