---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: whisper-large-v3-atco2-asr-atcosim
    results: []
---

# whisper-large-v3-atco2-asr-atcosim

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3). It achieves the following results on the evaluation set:

- Loss: 0.1039
- WER: 22.2698
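
The reported WER is the corpus-level word error rate scaled to a percentage. As a minimal sketch of how such a score is computed with the `evaluate` library (the transcripts below are invented for illustration):

```python
import evaluate  # requires: pip install evaluate jiwer

# Load the word-error-rate metric; jiwer performs the edit-distance alignment.
wer_metric = evaluate.load("wer")

# Hypothetical reference transcripts and model predictions.
references = ["lufthansa four five six climb flight level three four zero"]
predictions = ["lufthansa four five six climb to flight level three four zero"]

# WER = (substitutions + deletions + insertions) / reference word count,
# multiplied by 100 to match the percentage scale used in this card.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # one insertion over ten words -> 10.0000
```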

## Model description

More information needed

## Intended uses & limitations

More information needed
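
Pending documentation from the author, here is a minimal inference sketch using the transformers ASR pipeline; the repository ID `jlvdoorn/whisper-large-v3-atco2-asr-atcosim` is inferred from the model name above, and the audio path is a placeholder:

```python
import torch
from transformers import pipeline

# Repository ID inferred from the model name; adjust if it differs.
asr = pipeline(
    "automatic-speech-recognition",
    model="jlvdoorn/whisper-large-v3-atco2-asr-atcosim",
    torch_dtype=torch.float16,
    device="cuda:0",  # remove for CPU inference
)

# Placeholder audio file; the pipeline decodes and resamples common formats.
result = asr("atc_recording.wav", chunk_length_s=30)
print(result["text"])
```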

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 12644
- mixed_precision_training: Native AMP
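
Expressed as transformers training arguments, these settings correspond roughly to the sketch below. This is an assumed reconstruction, not the author's actual script: `output_dir` and the evaluation cadence are guesses, while the per-device batch sizes of 16/8 across 4 GPUs yield the listed totals of 64/32, and the Adam betas and epsilon match the `Seq2SeqTrainingArguments` defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above (transformers 4.35 API).
# output_dir, evaluation_strategy, and eval_steps are assumptions; the
# 250-step cadence is read off the results table below.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-atco2-asr-atcosim",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=16,  # x4 GPUs = total batch size 64
    per_device_eval_batch_size=8,    # x4 GPUs = total batch size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=12644,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=250,
)
```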

### Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.049         | 1.97  | 250   | 0.0613          | 41.3521 |
| 0.0168        | 3.94  | 500   | 0.0656          | 25.3775 |
| 0.0076        | 5.91  | 750   | 0.0703          | 16.7505 |
| 0.0028        | 7.87  | 1000  | 0.0722          | 23.0540 |
| 0.001         | 9.84  | 1250  | 0.0727          | 21.6365 |
| 0.0008        | 11.81 | 1500  | 0.0728          | 24.0815 |
| 0.0012        | 13.78 | 1750  | 0.0712          | 36.9653 |
| 0.0025        | 15.75 | 2000  | 0.0701          | 21.1248 |
| 0.0005        | 17.72 | 2250  | 0.0745          | 10.2458 |
| 0.0006        | 19.69 | 2500  | 0.0781          | 26.3169 |
| 0.0013        | 21.65 | 2750  | 0.0760          | 15.4127 |
| 0.0073        | 23.62 | 3000  | 0.0790          | 85.4764 |
| 0.0038        | 25.59 | 3250  | 0.0724          | 44.4682 |
| 0.0003        | 27.56 | 3500  | 0.0772          | 37.4056 |
| 0.0003        | 29.53 | 3750  | 0.0778          | 31.2238 |
| 0.0           | 31.5  | 4000  | 0.0806          | 22.4040 |
| 0.0           | 33.46 | 4250  | 0.0831          | 20.6886 |
| 0.0           | 35.43 | 4500  | 0.0847          | 20.3322 |
| 0.0           | 37.4  | 4750  | 0.0860          | 20.7935 |
| 0.0           | 39.37 | 5000  | 0.0871          | 20.3657 |
| 0.0           | 41.34 | 5250  | 0.0880          | 20.5293 |
| 0.0           | 43.31 | 5500  | 0.0889          | 20.7977 |
| 0.0           | 45.28 | 5750  | 0.0898          | 20.4957 |
| 0.0           | 47.24 | 6000  | 0.0906          | 20.9612 |
| 0.0           | 49.21 | 6250  | 0.0914          | 20.8564 |
| 0.0           | 51.18 | 6500  | 0.0921          | 21.1919 |
| 0.0           | 53.15 | 6750  | 0.0928          | 20.7809 |
| 0.0           | 55.12 | 7000  | 0.0934          | 21.1793 |
| 0.0           | 57.09 | 7250  | 0.0941          | 21.2087 |
| 0.0           | 59.06 | 7500  | 0.0947          | 21.2255 |
| 0.0           | 61.02 | 7750  | 0.0953          | 21.4142 |
| 0.0           | 62.99 | 8000  | 0.0959          | 21.1961 |
| 0.0           | 64.96 | 8250  | 0.0966          | 21.1080 |
| 0.0           | 66.93 | 8500  | 0.0972          | 21.0955 |
| 0.0           | 68.9  | 8750  | 0.0978          | 21.4226 |
| 0.0           | 70.87 | 9000  | 0.0983          | 21.3681 |
| 0.0           | 72.83 | 9250  | 0.0988          | 21.6532 |
| 0.0           | 74.8  | 9500  | 0.0994          | 21.6155 |
| 0.0           | 76.77 | 9750  | 0.0999          | 21.5107 |
| 0.0           | 78.74 | 10000 | 0.1005          | 21.3974 |
| 0.0           | 80.71 | 10250 | 0.1010          | 21.6407 |
| 0.0           | 82.68 | 10500 | 0.1014          | 21.7120 |
| 0.0           | 84.65 | 10750 | 0.1019          | 21.8755 |
| 0.0           | 86.61 | 11000 | 0.1023          | 21.9510 |
| 0.0           | 88.58 | 11250 | 0.1027          | 21.9636 |
| 0.0           | 90.55 | 11500 | 0.1030          | 22.0223 |
| 0.0           | 92.52 | 11750 | 0.1033          | 22.0265 |
| 0.0           | 94.49 | 12000 | 0.1036          | 22.3536 |
| 0.0           | 96.46 | 12250 | 0.1038          | 22.3956 |
| 0.0           | 98.43 | 12500 | 0.1039          | 22.2698 |

### Framework versions

- Transformers 4.35.0
- PyTorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1