whisper-medium-ar-original

This model is a fine-tuned version of openai/whisper-medium, trained on an audio dataset loaded via the audiofolder format. It achieves the following results on the evaluation set:

  • Loss: 0.1852
  • WER: 14.1086

Model description

More information needed

Intended uses & limitations

More information needed
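
As a starting point, the checkpoint can be loaded with the Transformers automatic-speech-recognition pipeline. The snippet below is a minimal sketch: the model identifier is a placeholder for this repository's Hub id or a local checkpoint path, and the audio filename is illustrative.

```python
# Minimal inference sketch using the transformers ASR pipeline.
# "whisper-medium-ar-original" is a placeholder; replace it with the full
# Hub id of this repository or a local checkpoint directory.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="whisper-medium-ar-original",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# "sample.wav" is an illustrative filename for a local audio recording.
result = asr("sample.wav")
print(result["text"])
```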

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 24
  • eval_batch_size: 24
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 8000
  • mixed_precision_training: Native AMP
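
These values map naturally onto Seq2SeqTrainingArguments from Transformers. The sketch below is a hedged reconstruction, not the original training script: only the hyperparameters listed above come from this card, while the output directory, evaluation cadence, and generation setting are illustrative (the 400-step evaluation interval is inferred from the results table below).

```python
# Hedged reconstruction of the training configuration; only the values listed
# above come from the model card, everything else is an illustrative guess.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-ar-original",  # illustrative path
    learning_rate=1e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=8000,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # assumed; the results table evaluates every 400 steps
    eval_steps=400,
    save_steps=400,
    predict_with_generate=True,   # typical for Whisper fine-tuning; an assumption
)

# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults,
# so no explicit optimizer arguments are needed for those values.
```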

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|--------------:|------:|-----:|----------------:|--------:|
| 0.115         | 1.01  | 400  | 0.1204          | 18.6541 |
| 0.0774        | 2.02  | 800  | 0.1074          | 15.5844 |
| 0.0438        | 3.03  | 1200 | 0.1160          | 16.4699 |
| 0.0233        | 4.04  | 1600 | 0.1279          | 15.1122 |
| 0.0131        | 5.05  | 2000 | 0.1350          | 15.5254 |
| 0.0051        | 6.06  | 2400 | 0.1455          | 14.9941 |
| 0.0035        | 7.07  | 2800 | 0.1464          | 14.1677 |
| 0.0032        | 8.08  | 3200 | 0.1545          | 14.8170 |
| 0.0013        | 9.09  | 3600 | 0.1623          | 13.8725 |
| 0.0013        | 10.1  | 4000 | 0.1543          | 13.4002 |
| 0.0006        | 11.11 | 4400 | 0.1653          | 14.1677 |
| 0.0006        | 12.12 | 4800 | 0.1699          | 13.7544 |
| 0.0003        | 13.13 | 5200 | 0.1705          | 13.4593 |
| 0.0001        | 14.14 | 5600 | 0.1733          | 13.6954 |
| 0.0002        | 15.15 | 6000 | 0.1768          | 13.8725 |
| 0.0001        | 16.16 | 6400 | 0.1786          | 13.7544 |
| 0.0           | 17.17 | 6800 | 0.1826          | 13.9906 |
| 0.0           | 18.18 | 7200 | 0.1839          | 14.0496 |
| 0.0           | 19.19 | 7600 | 0.1848          | 14.0496 |
| 0.0           | 20.2  | 8000 | 0.1852          | 14.1086 |
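
The WER column is the word error rate in percent. A common way to compute it, likely matching how these figures were produced, is the wer metric from the evaluate library; the strings below are dummy examples used only to illustrate the call.

```python
# Sketch of WER computation with the `evaluate` library; the reference and
# prediction strings are dummies, not data from this model's evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["a hypothesis transcript produced by the model"]
references = ["the ground-truth transcript"]

# compute() returns a fraction; multiply by 100 to get the percentage-style
# figures reported in the table above (e.g. 14.1086).
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```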

Framework versions

  • Transformers 4.25.1
  • PyTorch 1.12.1
  • Datasets 2.8.0
  • Tokenizers 0.13.2