---
language:
  - el
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_11_0
  - google/fleurs
metrics:
  - wer
model-index:
  - name: Whisper Medium El Greco
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0,google/fleurs el,el_gr
          type: mozilla-foundation/common_voice_11_0,google/fleurs
          config: null
          split: None
        metrics:
          - name: Wer
            type: wer
            value: 11.199851411589897
---

# Whisper Medium El Greco

This model is a fine-tuned version of emilios/whisper-medium-el on the mozilla-foundation/common_voice_11_0 (el) and google/fleurs (el_gr) datasets. It achieves the following results on the evaluation set:

- Loss: 0.3801
- Wer: 11.1999
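
For quick reference, here is a minimal transcription sketch using the transformers `pipeline` API (assumptions: the checkpoint is available on the Hub under this repo id, and `sample_greek_audio.wav` is a placeholder path):

```python
# Minimal ASR sketch; the repo id and audio path are assumptions, not from the card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="emilios/whisper-medium-el",
)

# chunk_length_s lets the pipeline handle clips longer than Whisper's 30 s window.
result = asr("sample_greek_audio.wav", chunk_length_s=30)
print(result["text"])
```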

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
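
The card does not document preprocessing or how the corpora were combined. As a rough sketch, the two datasets named in the metadata can be loaded as follows (assumptions: plain `datasets.load_dataset` calls and the `train` split; common_voice_11_0 is gated and requires accepting its terms on the Hub):

```python
# Sketch of loading the two corpora named in the card; how they were actually
# combined and filtered for this run is not documented.
from datasets import load_dataset

common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "el", split="train")
fleurs = load_dataset("google/fleurs", "el_gr", split="train")
```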

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
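
These values map onto transformers' `Seq2SeqTrainingArguments` roughly as below (assumptions: the run used `Seq2SeqTrainer`, and `output_dir` is a placeholder; the Adam betas and epsilon listed above are the optimizer defaults):

```python
# Hedged reconstruction of the training configuration from the listed values.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-el",   # placeholder, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                          # "Native AMP" mixed precision
)
```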

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0176        | 2.49  | 1000 | 0.2945          | 12.6114 |
| 0.0064        | 4.98  | 2000 | 0.3423          | 12.2307 |
| 0.0022        | 7.46  | 3000 | 0.3632          | 11.5899 |
| 0.0014        | 9.95  | 4000 | 0.3788          | 11.2556 |
| 0.0008        | 12.44 | 5000 | 0.3801          | 11.1999 |
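
The Wer column is word error rate in percent. A sketch of how such a score is typically computed with the `evaluate` library (assumption: the jiwer-backed `wer` metric; the strings are illustrative only):

```python
# WER = (substitutions + insertions + deletions) / reference word count.
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["γεια σου κοσμε"],  # illustrative hypothesis (one wrong word)
    references=["γεια σου κόσμε"],   # illustrative reference
)
print(f"WER: {100 * wer:.2f}%")      # evaluate returns a fraction; the card reports percent
```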

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2