---
language:
  - eu
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_13_0
metrics:
  - wer
model-index:
  - name: Whisper Large-V3 Basque
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: mozilla-foundation/common_voice_13_0 eu
          type: mozilla-foundation/common_voice_13_0
          config: eu
          split: test
          args: eu
        metrics:
          - name: Wer
            type: wer
            value: 10.620114220908098
---

# Whisper Large-V3 Basque

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the mozilla-foundation/common_voice_13_0 `eu` dataset. It achieves the following results on the evaluation set:

- Loss: 0.3803
- Wer: 10.6201

## Model description

This model is [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) fine-tuned for Basque automatic speech recognition on the Basque (`eu`) subset of Common Voice 13.0.

## Intended uses & limitations

The model is intended for transcribing Basque (`eu`) speech. It inherits the limitations of the base Whisper model and of the Common Voice training data, so accuracy may be lower on domains, accents, or recording conditions not represented there. A minimal inference sketch is shown below.
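The card itself does not include usage code; the sketch below assumes the checkpoint is published under the repo id `zuazo/whisper-large-v3-eu` (substitute the actual repo id or a local path if it differs) and uses the standard `transformers` ASR pipeline:

```python
import torch
from transformers import pipeline

# Assumed repo id; replace with the actual Hub id or a local checkpoint path.
MODEL_ID = "zuazo/whisper-large-v3-eu"

asr = pipeline(
    task="automatic-speech-recognition",
    model=MODEL_ID,
    device=0 if torch.cuda.is_available() else -1,
)

# Force Basque transcription; Whisper otherwise tries to auto-detect the language.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "basque", "task": "transcribe"},
)
print(result["text"])
```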

## Training and evaluation data

The model was fine-tuned on the Basque (`eu`) configuration of [mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0); the WER reported above is measured on its `test` split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
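The card lists hyperparameters but not the training script; as a rough sketch, they map onto `transformers` `Seq2SeqTrainingArguments` as follows (the output directory, evaluation cadence, and fp16 flag are assumptions inferred from the "Native AMP" note and the results table below):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-eu",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,       # 32 x 2 = effective train batch size 64
    warmup_steps=500,
    max_steps=20000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                           # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=1000,                     # matches the 1000-step cadence in the results table
    predict_with_generate=True,          # needed to compute WER during evaluation
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default in this `transformers` version, so it needs no explicit argument here.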

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0326        | 4.85  | 1000  | 0.2300          | 13.3278 |
| 0.004         | 9.71  | 2000  | 0.2723          | 12.2038 |
| 0.0058        | 14.56 | 3000  | 0.2771          | 12.4246 |
| 0.003         | 19.42 | 4000  | 0.2838          | 12.2119 |
| 0.003         | 24.27 | 5000  | 0.2740          | 11.7704 |
| 0.0014        | 29.13 | 6000  | 0.2936          | 11.5436 |
| 0.0015        | 33.98 | 7000  | 0.2911          | 11.5193 |
| 0.0012        | 38.83 | 8000  | 0.2939          | 11.3674 |
| 0.0009        | 43.69 | 9000  | 0.3039          | 11.4140 |
| 0.0002        | 48.54 | 10000 | 0.3063          | 10.9624 |
| 0.0009        | 53.4  | 11000 | 0.3014          | 11.3350 |
| 0.0011        | 58.25 | 12000 | 0.3052          | 11.0474 |
| 0.0001        | 63.11 | 13000 | 0.3204          | 10.8692 |
| 0.0           | 67.96 | 14000 | 0.3413          | 10.7092 |
| 0.0           | 72.82 | 15000 | 0.3524          | 10.6647 |
| 0.0           | 77.67 | 16000 | 0.3607          | 10.6566 |
| 0.0           | 82.52 | 17000 | 0.3675          | 10.6120 |
| 0.0           | 87.38 | 18000 | 0.3737          | 10.6140 |
| 0.0           | 92.23 | 19000 | 0.3782          | 10.6181 |
| 0.0           | 97.09 | 20000 | 0.3803          | 10.6201 |
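The Wer column is the word error rate in percent. As a minimal sketch, it can be reproduced with the `evaluate` library (any text normalization used in the original run is not documented here, so this computes WER on raw transcripts):

```python
import evaluate

wer_metric = evaluate.load("wer")  # requires the jiwer package

# Toy example with made-up Basque transcripts.
references = ["kaixo mundua", "egun on denoi"]
predictions = ["kaixo mundua", "egun on denei"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")  # reported as a percentage, as in the table above
```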

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1