---
language:
  - tr
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_16_1
metrics:
  - wer
model-index:
  - name: Whisper Large TR - Özgün Tosun
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Common Voice 16.1
          type: mozilla-foundation/common_voice_16_1
          config: tr
          split: test
          args: 'config: tr, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 11.727918051936383
---

# Whisper Large TR - Özgün Tosun

This model is a fine-tuned version of openai/whisper-large-v3 on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set:

- Loss: 0.1323
- Wer: 11.7279
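
WER here is the word error rate, reported in percent. As an illustration, the same metric can be computed with the `evaluate` library (a minimal sketch with made-up transcripts, not the card's actual evaluation code):

```python
import evaluate

# WER in percent: 100 * (substitutions + deletions + insertions) / reference words.
wer_metric = evaluate.load("wer")

# Placeholder transcripts for illustration only.
references = ["merhaba dünya nasılsın"]
predictions = ["merhaba dünya nasilsin"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # one wrong word out of three -> 33.3333
```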

## Model description

This model adapts openai/whisper-large-v3 to Turkish automatic speech recognition by fine-tuning on the Turkish subset of Common Voice 16.1, reaching a test-set WER of 11.73%.

## Intended uses & limitations

The model is intended for transcribing Turkish speech. Because it was fine-tuned and evaluated only on Common Voice read speech, accuracy under other conditions (spontaneous conversation, heavy background noise, code-switching) is untested and may fall short of the reported WER.
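
As a usage illustration, the checkpoint can be loaded with the `transformers` ASR pipeline. A minimal sketch; the Hub id `ozguntosun/whisper-large-v3-tr` is assumed from the repository name and may differ:

```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ozguntosun/whisper-large-v3-tr",  # assumed Hub id
    torch_dtype=torch.float16,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Transcribe a local audio file; Whisper resamples input to 16 kHz internally.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "turkish", "task": "transcribe"},
)
print(result["text"])
```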

## Training and evaluation data

Fine-tuning used the Turkish configuration (`tr`) of mozilla-foundation/common_voice_16_1; per the model-index metadata, the reported loss and WER were measured on its test split.
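
For reference, the evaluation split can be loaded with the `datasets` library (a sketch, not the author's preprocessing; Common Voice 16.1 is a gated dataset, so Hub authentication is required):

```python
from datasets import load_dataset

# Accept the dataset's terms on the Hub and authenticate
# (e.g. `huggingface-cli login`) before loading.
test = load_dataset(
    "mozilla-foundation/common_voice_16_1",
    "tr",          # Turkish configuration, as in the model-index metadata
    split="test",
)
print(test[0]["sentence"])  # reference transcript of the first example
```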

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
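
As referenced above, these values map onto `transformers`' `Seq2SeqTrainingArguments` roughly as follows; a sketch assuming the standard Whisper fine-tuning setup, not the author's exact script (the output directory is hypothetical):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of a configuration matching the listed hyperparameters.
# The Adam betas/epsilon above are the optimizer defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-tr",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=1000,            # matches the 1000-step cadence in the table below
    predict_with_generate=True, # generate transcripts for WER during evaluation
)
```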

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1372        | 0.3652 | 1000 | 0.1810          | 16.0805 |
| 0.1103        | 0.7305 | 2000 | 0.1628          | 14.5458 |
| 0.0563        | 1.0957 | 3000 | 0.1513          | 12.9302 |
| 0.0657        | 1.4609 | 4000 | 0.1383          | 12.4198 |
| 0.0444        | 1.8262 | 5000 | 0.1323          | 11.7279 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1