
torgo-whisper-lg-3-Nov3

This model is a fine-tuned version of openai/whisper-large-v3 on an unspecified dataset (the model name suggests the TORGO dysarthric speech corpus, though the card does not confirm this). It achieves the following results on the evaluation set:

  • Loss: 0.0877
  • Wer: 5.6146
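
Since this is a Whisper checkpoint in the standard Transformers layout, it should load with the automatic-speech-recognition pipeline. A minimal inference sketch (the audio path is a placeholder, not part of this card):

```python
# Minimal inference sketch; assumes the checkpoint follows the standard
# Whisper layout on the Hugging Face Hub.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sqrk/torgo-whisper-lg-3-Nov3",
)

# "sample.wav" is a placeholder path; the pipeline decodes the file with
# ffmpeg and resamples it to the model's expected 16 kHz.
print(asr("sample.wav")["text"])
```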

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (reconstructed as code in the sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 100
  • mixed_precision_training: Native AMP
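
A hedged sketch of the corresponding Seq2SeqTrainingArguments; the original training script is not published, so output_dir is an assumption, and the Adam settings above are the Transformers defaults:

```python
# Approximate reconstruction of the training configuration listed above.
# Treat this as a sketch, not the author's actual script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="torgo-whisper-lg-3-Nov3",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size: 8 * 2 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed precision
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the default optimizer.
)
```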

Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 1.2844        | 0.3190  | 100  | 0.3944          | 28.6798 |
| 0.2992        | 0.6380  | 200  | 0.2182          | 17.4507 |
| 0.197         | 0.9569  | 300  | 0.1986          | 13.6571 |
| 0.1299        | 1.2759  | 400  | 0.1599          | 12.5190 |
| 0.1001        | 1.5949  | 500  | 0.1700          | 12.2155 |
| 0.1038        | 1.9139  | 600  | 0.1407          | 9.1806  |
| 0.0728        | 2.2329  | 700  | 0.1351          | 9.6358  |
| 0.0652        | 2.5518  | 800  | 0.1090          | 7.9666  |
| 0.0529        | 2.8708  | 900  | 0.1168          | 8.1942  |
| 0.0338        | 3.1898  | 1000 | 0.1132          | 6.8285  |
| 0.0358        | 3.5088  | 1100 | 0.0980          | 6.9044  |
| 0.0381        | 3.8278  | 1200 | 0.0820          | 6.7527  |
| 0.0245        | 4.1467  | 1300 | 0.0862          | 5.2352  |
| 0.0299        | 4.4657  | 1400 | 0.1068          | 5.9181  |
| 0.0261        | 4.7847  | 1500 | 0.0937          | 6.4492  |
| 0.0205        | 5.1037  | 1600 | 0.1019          | 7.2838  |
| 0.017         | 5.4226  | 1700 | 0.0990          | 5.6904  |
| 0.0115        | 5.7416  | 1800 | 0.0842          | 5.6146  |
| 0.018         | 6.0606  | 1900 | 0.1041          | 5.6904  |
| 0.0112        | 6.3796  | 2000 | 0.1135          | 7.1320  |
| 0.0174        | 6.6986  | 2100 | 0.0939          | 5.0835  |
| 0.0117        | 7.0175  | 2200 | 0.1092          | 5.9181  |
| 0.0121        | 7.3365  | 2300 | 0.0931          | 5.4628  |
| 0.0075        | 7.6555  | 2400 | 0.0974          | 5.6146  |
| 0.013         | 7.9745  | 2500 | 0.1142          | 5.7663  |
| 0.0063        | 8.2935  | 2600 | 0.1108          | 6.1457  |
| 0.0122        | 8.6124  | 2700 | 0.0929          | 5.6146  |
| 0.0125        | 8.9314  | 2800 | 0.0905          | 5.6146  |
| 0.0132        | 9.2504  | 2900 | 0.1202          | 6.1457  |
| 0.0132        | 9.5694  | 3000 | 0.0925          | 5.0835  |
| 0.0087        | 9.8884  | 3100 | 0.1028          | 5.0835  |
| 0.0043        | 10.2073 | 3200 | 0.0997          | 5.6146  |
| 0.004         | 10.5263 | 3300 | 0.0955          | 5.0835  |
| 0.007         | 10.8453 | 3400 | 0.0915          | 5.1593  |
| 0.0072        | 11.1643 | 3500 | 0.0814          | 5.3869  |
| 0.0069        | 11.4833 | 3600 | 0.0887          | 4.7800  |
| 0.0082        | 11.8022 | 3700 | 0.0940          | 5.6904  |
| 0.0105        | 12.1212 | 3800 | 0.0936          | 5.2352  |
| 0.0039        | 12.4402 | 3900 | 0.0966          | 5.3869  |
| 0.002         | 12.7592 | 4000 | 0.0820          | 4.7800  |
| 0.0032        | 13.0781 | 4100 | 0.0916          | 5.0835  |
| 0.0009        | 13.3971 | 4200 | 0.0877          | 5.6146  |
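
The Wer column (and the headline 5.6146 above) is word error rate, apparently expressed as a percentage. A minimal sketch of how it is typically computed with the evaluate library (the strings below are illustrative only):

```python
# Word error rate as a percentage, computed with the `evaluate` library.
# The prediction/reference pair is illustrative, not from this model's data.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 20.0000 for this toy pair (1 deletion / 5 words)
```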

Framework versions

  • Transformers 4.43.4
  • Pytorch 2.4.1
  • Datasets 3.0.0
  • Tokenizers 0.19.1
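
A trivial sketch for checking that a local environment matches these pins:

```python
# Verify local library versions against the ones listed above.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.43.4
print(torch.__version__)         # expected: 2.4.1
print(datasets.__version__)      # expected: 3.0.0
print(tokenizers.__version__)    # expected: 0.19.1
```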
