---
language:
  - uz
license: apache-2.0
base_model: openai/whisper-small
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_16_1
metrics:
  - wer
model-index:
  - name: Whisper Small Uz - Bahriddin Mo'minov
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Common Voice 16.1
          type: mozilla-foundation/common_voice_16_1
          config: uz
          split: test
          args: 'config: uz, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 37.07903050585018
---

# Whisper Small Uz - Bahriddin Mo'minov

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set:

- Loss: 0.3759
- Wer: 37.0790

## Model description

This is the Whisper Small checkpoint fine-tuned for Uzbek (uz) automatic speech recognition on the Common Voice 16.1 corpus.

## Intended uses & limitations

The model is intended for transcribing Uzbek speech. As with other Whisper Small fine-tunes, accuracy will be lower on noisy audio, domain-specific vocabulary, and languages other than Uzbek; at a test WER of roughly 37%, transcripts will typically need manual review.
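
A minimal transcription sketch with the Transformers `pipeline` is shown below. The repository id `mrmuminov/whisper-small-uz` and the audio file path are assumptions for illustration, not confirmed by this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the repo id below is an assumption,
# replace it with the actual Hub id of this model.
asr = pipeline(
    "automatic-speech-recognition",
    model="mrmuminov/whisper-small-uz",
)

# Transcribe a local audio file (hypothetical path); chunking lets the
# pipeline handle audio longer than Whisper's 30-second window.
result = asr("audio_uz.wav", chunk_length_s=30)
print(result["text"])
```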

## Training and evaluation data

The model was trained and evaluated on the Uzbek (uz) subset of Mozilla Common Voice 16.1; the WER reported above is measured on its test split.
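
A sketch of loading the evaluation split with the Datasets library (Common Voice is a gated dataset, so accepting its terms on the Hub and logging in are assumed):

```python
from datasets import Audio, load_dataset

# Load the Uzbek test split of Common Voice 16.1 (gated dataset:
# requires accepting the terms on the Hub and an access token).
cv_test = load_dataset(
    "mozilla-foundation/common_voice_16_1", "uz", split="test"
)

# Whisper feature extraction expects 16 kHz audio.
cv_test = cv_test.cast_column("audio", Audio(sampling_rate=16_000))
print(cv_test[0]["sentence"])
```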

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
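
These settings correspond roughly to a `Seq2SeqTrainingArguments` configuration like the sketch below. This is a reconstruction from the list above, not the author's original training script; the output directory and the evaluation schedule are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above; not the original script.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-uz",   # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,     # effective train batch size 16
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
    evaluation_strategy="steps",       # assumption: evaluate every 1000 steps
    eval_steps=1000,
    predict_with_generate=True,
)
```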

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6057        | 0.26  | 1000 | 0.5283          | 46.5667 |
| 0.436         | 0.53  | 2000 | 0.4354          | 42.1575 |
| 0.4144        | 0.79  | 3000 | 0.3925          | 38.4788 |
| 0.3194        | 1.06  | 4000 | 0.3759          | 37.0790 |
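
Wer above is the standard word error rate in percent. A minimal sketch of computing it with the `evaluate` library; the reference and prediction strings are purely illustrative:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative strings; in the actual evaluation, predictions come from
# model generation on the Common Voice uz test split.
references = ["salom dunyo", "bu bir sinov"]
predictions = ["salom dunyo", "bu sinov"]

# compute() returns a fraction; multiply by 100 for the percentage reported above.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%")
```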

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2