
whisper-small-mn-7

This model is a fine-tuned version of openai/whisper-small on Mongolian speech data: the mn subset of mozilla-foundation/common_voice_11_0, the mn_mn subset of google/fleurs, and bayartsogt/ulaanbal-v0 (see the training script below). It achieves the following results on the evaluation set, the Common Voice 11.0 Mongolian test split (best-WER checkpoint, step 8000); a minimal inference sketch follows the list:

  • Loss: 0.3061
  • WER: 32.6469
  • CER: 11.2319
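
To transcribe Mongolian audio with this checkpoint, the standard transformers ASR pipeline can be used. The snippet below is a minimal sketch, not taken from the original repository; "audio.wav" is a placeholder path.

# Minimal inference sketch (not part of the original training code).
# "audio.wav" is a placeholder path; replace it with a real Mongolian recording.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bayartsogt/whisper-small-mn-7")

# Force Mongolian transcription rather than translation to English.
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="mongolian", task="transcribe"
)

print(asr("audio.wav")["text"])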

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch of the corresponding Seq2SeqTrainingArguments follows the list:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 15000
  • mixed_precision_training: Native AMP
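
These settings map onto the standard transformers Seq2SeqTrainingArguments. The sketch below shows one plausible configuration; output_dir is a placeholder, fp16=True stands in for Native AMP, and the Adam betas and epsilon listed above are the library defaults.

# Plausible Seq2SeqTrainingArguments matching the hyperparameters above.
# output_dir is a placeholder; Adam betas/epsilon are the transformers defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-mn-7",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=15000,
    fp16=True,  # Native AMP mixed precision
)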

Training script

python train.py \
    --train_datasets "mozilla-foundation/common_voice_11_0|mn|train+validation,google/fleurs|mn_mn|train+validation,bayartsogt/ulaanbal-v0||train" \
    --eval_datasets "mozilla-foundation/common_voice_11_0|mn|test" \
    --whisper-size "small" \
    --language "mn,Mongolian" \
    --keep-chars " абвгдеёжзийклмноөпрстуүфхцчшъыьэюя.,?!" \
    --train-batch-size 32 \
    --eval-batch-size 32 \
    --max-steps 15000 \
    --num-workers 8 \
    --version 7

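Each entry in --train_datasets / --eval_datasets follows a "dataset|config|splits" pattern. The sketch below shows how such a spec could be parsed and loaded with the datasets library; parse_dataset_spec is an illustrative helper, not the actual code in train.py.

# Illustrative parser for the "dataset|config|splits" spec used above.
# parse_dataset_spec is a hypothetical helper, not taken from train.py.
from datasets import load_dataset

def parse_dataset_spec(spec: str):
    loaded = []
    for entry in spec.split(","):
        name, config, splits = entry.split("|")
        # An empty config ("||") means the dataset has no named configuration.
        loaded.append(load_dataset(name, config or None, split=splits))
    # In practice the column names (audio, transcription) would be aligned
    # before the datasets are merged for training.
    return loaded

train_spec = (
    "mozilla-foundation/common_voice_11_0|mn|train+validation,"
    "google/fleurs|mn_mn|train+validation,"
    "bayartsogt/ulaanbal-v0||train"
)
train_sets = parse_dataset_spec(train_spec)
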
Training results

Training Loss   Epoch   Step    Validation Loss   WER       CER
0.3416          0.61    1000    0.4335            51.0979   17.8608
0.2266          1.22    2000    0.3383            39.5346   13.6468
0.2134          1.83    3000    0.2994            35.6565   12.1677
0.1650          2.43    4000    0.2927            34.1927   11.4602
0.1205          3.04    5000    0.2879            33.5209   11.3002
0.1284          3.65    6000    0.2884            32.7507   10.9885
0.0893          4.26    7000    0.3022            33.0894   11.2075
0.0902          4.87    8000    0.3061            32.6469   11.2319
0.0650          5.48    9000    0.3233            32.8163   11.1595
0.0436          6.09    10000   0.3372            32.6852   11.1384
0.0469          6.70    11000   0.3481            32.8272   11.2867
0.0292          7.30    12000   0.3643            33.0784   11.3785
0.0277          7.91    13000   0.3700            33.1877   11.3600
0.0196          8.52    14000   0.3806            33.3734   11.4273
0.0160          9.13    15000   0.3844            33.3188   11.4248
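
WER and CER above are reported as percentages. They can be recomputed with the evaluate library, as in the hedged sketch below; the example strings are placeholders, not data from the evaluation set.

# Recomputing WER / CER with the evaluate library; strings are placeholders.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["сайн байна уу"]  # hypothetical model transcriptions
references = ["сайн байна уу"]   # hypothetical reference transcripts

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")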

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.0+cu117
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2