
whisper-tiny-fluers_V2_telugu_Augmentation_full_datset_V2_e5

This model is a fine-tuned version of openai/whisper-tiny on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3374
  • Wer: 61.2000
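
As a rough usage sketch, transcription can be run with the transformers automatic-speech-recognition pipeline. The repo id and the audio file name below are assumptions (the repo id is taken from the model name shown above) and may need to be adjusted:

```python
# Minimal inference sketch; the repo id below is an assumption based on the
# model name above, and "sample_telugu.wav" is a hypothetical local audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="whisper-tiny-fluers_V2_telugu_Augmentation_full_datset_V2_e5",  # assumed repo id
)

# Whisper expects 16 kHz mono audio; the pipeline resamples file inputs automatically.
result = asr("sample_telugu.wav")
print(result["text"])
```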

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1.5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
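
For reference, a minimal sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` in transformers. The output directory and the evaluation cadence are assumptions, not values stated in this card:

```python
# Sketch only: mirrors the hyperparameters listed above.
# output_dir, evaluation_strategy, and eval_steps are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-telugu",   # assumed
    learning_rate=1.5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,        # effective train batch size: 4 * 8 = 32
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                            # "Native AMP" mixed precision
    evaluation_strategy="steps",          # assumed: the results table reports every 300 steps
    eval_steps=300,                       # assumed
)
```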

Training results

Training Loss   Epoch   Step   Validation Loss   Wer
1.2926          0.09     300   1.3993            129.5000
1.0948          0.18     600   1.2674            109.1500
0.6591          0.28     900   0.5519             81.5500
0.5326          0.37    1200   0.4361             72.5500
0.4737          0.46    1500   0.4036             69.2000
0.4239          0.55    1800   0.3793             63.6000
0.4011          0.64    2100   0.3625             62.2500
0.3687          0.73    2400   0.3651             62.5000
0.3712          0.83    2700   0.3491             59.9000
0.3686          0.92    3000   0.3438             60.6500
0.3381          1.01    3300   0.3391             58.2500
0.3483          1.10    3600   0.3385             60.0500
0.3410          1.19    3900   0.3374             61.2000
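
The Wer column reports word error rate as a percentage, following the usual convention. A minimal sketch of that computation with the `evaluate` library (the strings below are placeholders, not data from this training run):

```python
# Sketch of WER-as-a-percentage; predictions and references are placeholders.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["transcribed hypothesis text"]   # model outputs (placeholder)
references = ["ground truth reference text"]    # reference transcripts (placeholder)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```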

Framework versions

  • Transformers 4.28.0.dev0
  • PyTorch 1.12.1
  • Datasets 2.11.0
  • Tokenizers 0.13.3