---
language:
  - mn
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_16_1
  - google/fleurs
metrics:
  - wer
model-index:
  - name: Whisper Large MN - Ankhbayasgalan Davaadorj
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Common Voice 16.1 & FLEURS
          type: mozilla-foundation/common_voice_16_1
          config: mn
          split: None
          args: 'config: mn, split: test+validation'
        metrics:
          - name: Wer
            type: wer
            value: 31.994939772289754
---

# Whisper Large MN - Ankhbayasgalan Davaadorj

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 16.1 & FLEURS datasets. It achieves the following results on the evaluation set:

- Loss: 0.5662
- Wer: 31.9949
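
The card does not include a usage snippet, so here is a minimal inference sketch using the `transformers` pipeline API. The model id and audio file name below are placeholders, not values taken from this card:

```python
# Minimal inference sketch; the model id is a hypothetical stand-in for this
# repository's actual Hub id, and sample_mn.wav is a placeholder audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="warmestman/whisper-large-mn",  # hypothetical repo id
)
result = asr(
    "sample_mn.wav",
    generate_kwargs={"language": "mongolian", "task": "transcribe"},
)
print(result["text"])
```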

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
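
While the card leaves this section empty, the metadata names the datasets and the model-index lists config `mn` with split `test+validation`. A sketch of loading them with the `datasets` library follows; the FLEURS config name and splits are assumptions based on that metadata, and Common Voice on the Hub is gated, so you must accept its terms and authenticate first:

```python
# Sketch of loading the datasets named in the card metadata; splits/configs
# are inferred from the model-index entry, not confirmed by the card body.
from datasets import load_dataset

common_voice = load_dataset(
    "mozilla-foundation/common_voice_16_1", "mn", split="test+validation"
)
fleurs = load_dataset("google/fleurs", "mn_mn", split="test")  # mn_mn = Mongolian
```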

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
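
As a sketch, these values map onto `Seq2SeqTrainingArguments` roughly as shown below; `output_dir` is hypothetical, and `fp16=True` is an assumption standing in for "Native AMP":

```python
# Sketch mapping the listed hyperparameters onto Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-mn",  # hypothetical
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",              # Trainer's default AdamW matches the stated betas/epsilon
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,                        # assumption for "Native AMP" mixed precision
)
```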

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0691        | 5.99  | 1000  | 0.4597          | 41.5049 |
| 0.0183        | 11.98 | 2000  | 0.4996          | 38.2982 |
| 0.012         | 17.96 | 3000  | 0.5328          | 38.5402 |
| 0.0091        | 23.95 | 4000  | 0.5619          | 38.1277 |
| 0.004         | 29.94 | 5000  | 0.5439          | 35.2236 |
| 0.0019        | 35.93 | 6000  | 0.5731          | 35.3941 |
| 0.001         | 41.92 | 7000  | 0.5309          | 33.3755 |
| 0.0002        | 47.9  | 8000  | 0.5391          | 32.3140 |
| 0.0           | 53.89 | 9000  | 0.5543          | 32.1984 |
| 0.0           | 59.88 | 10000 | 0.5662          | 31.9949 |
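
The Wer column is word error rate as a percentage. A minimal sketch of computing it with the `evaluate` library (the strings below are placeholder transcripts, not data from the evaluation set):

```python
# Minimal WER computation sketch; predictions/references are placeholders.
import evaluate

wer_metric = evaluate.load("wer")
score = 100 * wer_metric.compute(
    predictions=["сайн байна уу"],  # model output (placeholder)
    references=["сайн байна уу"],   # ground truth (placeholder)
)
print(f"WER: {score:.4f}")
```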

### Framework versions

- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2