---
language:
  - sr
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_11_0,google/fleurs
metrics:
  - wer
model-index:
  - name: Whisper medium Serbian El Greco
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0,google/fleurs sr,sr_rs
          type: mozilla-foundation/common_voice_11_0,google/fleurs
          config: sr
          split: None
        metrics:
          - name: Wer
            type: wer
            value: 12.140833670578713
---

# Whisper medium Serbian El Greco

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 (`sr`) and google/fleurs (`sr_rs`) datasets. It achieves the following results on the evaluation set:

- Loss: 0.4868
- Wer: 12.1408
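
A minimal inference sketch for loading this checkpoint with the `transformers` ASR pipeline. The repository id `emilios/whisper-md-sr` is inferred from the model page and the audio file name is a placeholder; adjust both to your setup.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint (repo id assumed from the model page).
asr = pipeline(
    "automatic-speech-recognition",
    model="emilios/whisper-md-sr",
    chunk_length_s=30,  # Whisper operates on 30-second audio windows
)

# Transcribe a Serbian audio file (placeholder path).
result = asr("audio_sr.wav")
print(result["text"])
```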

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

- learning_rate: 3e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
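
The hyperparameters above map onto `transformers.Seq2SeqTrainingArguments` roughly as sketched below. The `output_dir`, the mixed-precision flag, and the evaluation/save interval of 1000 steps (read off the results table) are assumptions, not stated in this card.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: values mirror the list above; anything marked "assumed"
# is not documented in this model card.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-sr",   # assumed placeholder
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=3e-6,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    seed=42,
    fp16=True,                          # assumed for multi-GPU Whisper fine-tuning
    evaluation_strategy="steps",
    eval_steps=1000,                    # interval matching the results table
    save_steps=1000,
    predict_with_generate=True,         # needed to compute WER during evaluation
)
```

The Adam betas and epsilon listed above are the optimizer defaults, so they are not set explicitly.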

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0222        | 2.72  | 1000  | 0.3442          | 14.0834 |
| 0.0032        | 5.43  | 2000  | 0.4106          | 14.5285 |
| 0.0011        | 8.15  | 3000  | 0.4331          | 12.8693 |
| 0.0029        | 10.87 | 4000  | 0.3948          | 12.6265 |
| 0.0012        | 13.59 | 5000  | 0.4512          | 12.6669 |
| 0.0009        | 16.3  | 6000  | 0.4890          | 12.7479 |
| 0.001         | 19.02 | 7000  | 0.4868          | 12.1408 |
| 0.0016        | 21.74 | 8000  | 0.4780          | 12.7074 |
| 0.0002        | 24.46 | 9000  | 0.4902          | 12.2218 |
| 0.0012        | 27.17 | 10000 | 0.5059          | 12.6669 |
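
The WER values above are percentages. Below is a minimal sketch of computing the same metric with the `evaluate` library; the prediction and reference strings are placeholders, not drawn from the evaluation set.

```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["ovo je primer transkripcije"]  # placeholder model outputs
references = ["ovo je primer transkripcije"]   # placeholder ground truth

# evaluate returns a fraction; multiply by 100 to match the card's percentage style.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```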

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221216+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2