
whisper_result

This model is a fine-tuned version of openai/whisper-medium on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6586
  • Wer Ortho: 52.0665
  • Wer: 49.1825
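
The card does not include a usage example; as a minimal inference sketch (assuming the checkpoint is published under a hypothetical Hub id such as your-username/whisper_result), the model can be loaded through the transformers ASR pipeline:

```python
# Minimal inference sketch, not an official usage snippet.
# "your-username/whisper_result" is a hypothetical repo id; substitute
# the actual checkpoint path or Hub id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper_result",  # hypothetical id
)

# Transcribe a local audio file; chunking handles inputs longer than 30 s.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```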

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • total_eval_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 10
  • mixed_precision_training: Native AMP
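
As a rough sketch (an assumption, not the actual training script), these settings map onto Seq2SeqTrainingArguments as follows; the two-GPU setup would come from the launcher (e.g. torchrun --nproc_per_node=2) rather than from an argument here:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper_result",
    learning_rate=1e-5,
    per_device_train_batch_size=4,   # x 2 GPUs x 2 accumulation steps = 16 effective
    per_device_eval_batch_size=4,    # x 2 GPUs = 8 effective
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    num_train_epochs=10,
    fp16=True,                       # Native AMP mixed precision
    adam_beta1=0.9,                  # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```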

Training results

Training Loss   Epoch   Step    Validation Loss   Wer Ortho   Wer
0.6179          0.03    1000    0.9762            60.3624     62.0464
0.505           0.07    2000    0.8327            54.8387     57.7117
0.4921          0.1     3000    0.7555            59.6111     63.5585
0.576           0.13    4000    0.7034            57.6226     58.9214
0.4169          0.17    5000    0.6763            44.5426     46.5726
0.3827          0.2     6000    0.6462            44.9403     47.0766
0.3509          0.23    7000    0.6331            46.4870     48.8407
0.4012          0.26    8000    0.6170            46.8847     49.3952
0.3634          0.3     9000    0.6864            47.4294     45.3822
0.3721          0.33    10000   0.6659            49.2944     46.4870
0.3198          0.36    11000   0.6586            52.0665     49.1825
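
The final row matches the evaluation-set results above. Wer Ortho is the word error rate on the raw orthographic text, while Wer is computed after text normalization; a plausible reconstruction (an assumption based on the common Whisper fine-tuning recipe, not code from this card) using the evaluate library and Whisper's BasicTextNormalizer:

```python
# Plausible reconstruction of the two metrics: WER on raw (orthographic)
# text vs. WER after basic text normalization. Toy strings for illustration.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

predictions = ["hello, World!"]
references = ["hello world"]

wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"Wer Ortho: {wer_ortho:.4f}, Wer: {wer:.4f}")
```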

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.13.3