
whisper-small-ar-12hrsdarijadata-April21-params1

This model is a fine-tuned version of openai/whisper-small. The auto-generated card lists the dataset as "None"; judging from the model name, it was fine-tuned on roughly 12 hours of Darija (Moroccan Arabic) speech, but the dataset itself is not documented. It achieves the following results on the evaluation set:

  • Loss: 0.6189
  • WER: 57.5010
  • CER: 22.7156

Model description

More information needed

Intended uses & limitations

More information needed
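
Until the author adds details, here is a minimal inference sketch using the transformers automatic-speech-recognition pipeline. The repository id below is a placeholder (the Hub namespace is not shown in this card), and the audio filename is hypothetical:

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-ar-12hrsdarijadata-April21-params1",
)

# Transcribe a local audio file; Whisper expects 16 kHz audio, and
# chunking handles recordings longer than 30 seconds.
result = asr("sample_darija.wav", chunk_length_s=30)
print(result["text"])
```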

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 300
  • training_steps: 4000
  • mixed_precision_training: Native AMP
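
The training script is not included in the card. As a rough sketch, these values map onto Seq2SeqTrainingArguments in the usual Hugging Face Whisper fine-tuning recipe as follows (an assumption, not the author's actual script):

```python
from transformers import Seq2SeqTrainingArguments

# Assumed mapping of the hyperparameters above onto the standard
# Seq2SeqTrainer setup; output_dir and the eval cadence are inferred.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-ar-12hrsdarijadata-April21-params1",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=250,  # matches the 250-step cadence in the results table
    predict_with_generate=True,  # decode text so WER/CER can be computed
)
# The default AdamW optimizer already uses betas=(0.9, 0.999) and
# epsilon=1e-08, matching the values listed above.
```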

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     | CER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.0603        | 0.38  | 250  | 0.8352          | 73.3652 | 36.2817 |
| 0.7041        | 0.75  | 500  | 0.6119          | 66.8656 | 29.7437 |
| 0.4434        | 1.13  | 750  | 0.5534          | 56.4001 | 24.9692 |
| 0.4344        | 1.51  | 1000 | 0.5341          | 55.8827 | 22.2788 |
| 0.3683        | 1.89  | 1250 | 0.5171          | 53.8931 | 23.3545 |
| 0.2964        | 2.26  | 1500 | 0.5245          | 53.2697 | 21.4447 |
| 0.2126        | 2.64  | 1750 | 0.5267          | 51.0545 | 19.9624 |
| 0.2095        | 3.02  | 2000 | 0.5428          | 55.1267 | 21.9697 |
| 0.1452        | 3.39  | 2250 | 0.5636          | 59.2386 | 23.4962 |
| 0.1123        | 3.77  | 2500 | 0.5598          | 54.0257 | 21.7816 |
| 0.077         | 4.15  | 2750 | 0.5809          | 53.5747 | 23.3614 |
| 0.0624        | 4.52  | 3000 | 0.5897          | 53.6941 | 22.1417 |
| 0.0492        | 4.9   | 3250 | 0.5973          | 54.0921 | 20.3922 |
| 0.0571        | 5.28  | 3500 | 0.6128          | 54.3043 | 21.2704 |
| 0.0637        | 5.66  | 3750 | 0.6139          | 54.6890 | 22.2114 |
| 0.0288        | 6.03  | 4000 | 0.6189          | 57.5010 | 22.7156 |
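
The WER and CER columns are percentages. The evaluation script is not provided; a plausible reconstruction, assuming the metrics come from the evaluate library, looks like this:

```python
import evaluate

# Hypothetical metric computation on decoded predictions and references.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["placeholder decoded hypothesis"]   # model outputs
references = ["placeholder reference transcript"]  # ground truth

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```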

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.0.0+cu118
  • Datasets 2.11.1.dev0
  • Tokenizers 0.13.3