
Whisper Base Pashto

This model is a fine-tuned version of openai/whisper-base on the google/fleurs ps_af dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8714
  • WER: 60.0560
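
As a quick, hedged usage sketch (not part of the original card): the checkpoint can be loaded with the transformers automatic-speech-recognition pipeline. The audio file path below is a placeholder.

```python
# Minimal inference sketch for this checkpoint (assumes transformers
# and ffmpeg are installed; the audio path is a placeholder).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ihanif/whisper-base-ps",
)

# Transcribe a local audio file and print the predicted text.
result = asr("sample_pashto.wav")
print(result["text"])
```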

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
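
The card names google/fleurs with config ps_af above. As a minimal sketch (field names assume the standard FLEURS schema), the split can be inspected with the datasets library:

```python
# Sketch: streaming the Pashto FLEURS split named in this card,
# so no full download is required just to inspect samples.
from datasets import load_dataset

fleurs_ps = load_dataset("google/fleurs", "ps_af", split="train", streaming=True)
sample = next(iter(fleurs_ps))
print(sample["transcription"])  # "transcription" per the FLEURS schema
```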

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-07
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1300
  • mixed_precision_training: Native AMP
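
As a hedged sketch, the hyperparameters above map onto transformers Seq2SeqTrainingArguments roughly as follows; output_dir is a placeholder, and the Adam betas and epsilon are the optimizer defaults, so they need not be set explicitly.

```python
# Sketch only: the listed hyperparameters as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ps",   # placeholder path
    learning_rate=1e-7,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,    # effective train batch size 64
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1300,
    fp16=True,                        # "Native AMP" mixed precision
)
```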

Training results

Training Loss   Epoch   Step   Validation Loss   WER
0.9153           2.5     100   1.0240            68.9864
0.6865           5.0     200   0.8968            61.7660
0.5474           7.5     300   0.8744            60.5554
0.4646          10.0     400   0.8710            60.0560
0.4557          12.5     500   0.8732            59.4658
0.3882          15.0     600   0.8819            59.0648
0.3346          17.5     700   0.9032            59.4809
0.2947          20.0     800   0.9144            59.7685
0.2724          22.5     900   0.9289            58.9815
0.2785          25.0    1000   0.9339            59.2010
0.2454          27.5    1100   0.9439            59.1934
0.2297          30.0    1200   0.9485            59.0421
0.2383          33.33   1300   0.9529            59.0799
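
The WER column above is the word error rate in percent. A minimal sketch of computing such a score with the evaluate library (the strings below are illustrative placeholders, not data from this run):

```python
# Sketch: computing WER with evaluate (pip install evaluate jiwer).
import evaluate

wer_metric = evaluate.load("wer")
references = ["a reference transcription"]    # placeholder ground truth
predictions = ["a predicted transcription"]   # placeholder model output
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```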

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.8.1.dev0
  • Tokenizers 0.13.2