
Mal_ASR_Whisper_small_imasc_1000

This model is a fine-tuned version of openai/whisper-small. It achieves the following results on the evaluation set:

  • Loss: 0.0642
  • WER: 52.2853

Model description

More information needed. The model name suggests Whisper-small fine-tuned for Malayalam automatic speech recognition, presumably on a subset of the IMaSC corpus, but this is not confirmed in the card.

Intended uses & limitations

More information needed
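Although the intended uses are not documented, the checkpoint loads with the standard transformers automatic-speech-recognition pipeline. A minimal inference sketch, assuming the Hub repository id shown on this page and a placeholder audio file:

```python
# Minimal inference sketch using the standard transformers ASR pipeline.
# "sample.wav" is a placeholder path; the pipeline resamples audio input
# to the 16 kHz rate Whisper expects.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="leenag/Mal_ASR_Whisper_small_imasc_1000",
)

result = asr("sample.wav")
print(result["text"])
```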

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 2000
  • mixed_precision_training: Native AMP
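As a point of reference, these settings roughly map onto Seq2SeqTrainingArguments as sketched below. Only the values listed above come from the card; the output directory name is an illustrative assumption.

```python
# Sketch of the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# Only the values from the list above are taken from the card; the
# output_dir name is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="Mal_ASR_Whisper_small_imasc_1000",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```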

Training results

| Training Loss | Epoch | Step | Validation Loss | WER      |
|---------------|-------|------|-----------------|----------|
| 0.3098        | 0.74  | 200  | 0.2613          | 200.6810 |
| 0.1009        | 1.48  | 400  | 0.0988          | 54.5952  |
| 0.0559        | 2.22  | 600  | 0.0722          | 44.6184  |
| 0.0518        | 2.96  | 800  | 0.0608          | 39.1631  |
| 0.0285        | 3.7   | 1000 | 0.0573          | 46.0858  |
| 0.0166        | 4.44  | 1200 | 0.0567          | 46.7036  |
| 0.0082        | 5.19  | 1400 | 0.0589          | 50.9513  |
| 0.0075        | 5.93  | 1600 | 0.0590          | 65.6252  |
| 0.0031        | 6.67  | 1800 | 0.0629          | 57.2913  |
| 0.0018        | 7.41  | 2000 | 0.0642          | 52.2853  |
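WER values above are percentages; WER can exceed 100 early in training (as in the first row) because insertion errors are counted against the reference length. A minimal sketch of how such a score is typically computed with the evaluate library, using made-up strings:

```python
# Minimal WER computation sketch with the evaluate library; the strings
# below are made-up examples, not drawn from the actual evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

references = ["this is a test"]
predictions = ["this is the test"]

# compute() returns a fraction; multiply by 100 for a percentage.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```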

Framework versions

  • Transformers 4.36.0.dev0
  • Pytorch 2.0.1+cu117
  • Datasets 2.12.0
  • Tokenizers 0.14.0