---
base_model: openai/whisper-medium
datasets:
  - b-brave/speech_disorders_voice
language:
  - it
library_name: peft
license: apache-2.0
metrics:
  - wer
tags:
  - generated_from_trainer
model-index:
  - name: Whisper Large v3
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: b-brave/speech_disorders_voice
          type: b-brave/speech_disorders_voice
          config: default
          split: train
          args: default
        metrics:
          - type: wer
            value: 13.701431492842536
            name: Wer
---

# Whisper Large v3

This model is a fine-tuned version of openai/whisper-medium on the b-brave/speech_disorders_voice dataset. It achieves the following results on the evaluation set:

- Loss: 0.3056
- WER: 13.7014
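As a quick-start reference, here is a minimal inference sketch. It assumes the adapter was saved in PEFT format (the adapter repo id below is a placeholder, not the actual repository) and that the input audio is 16 kHz mono:

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base checkpoint the adapter was trained from.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")

# Attach the PEFT adapter; "<user>/<adapter-repo>" is a placeholder id.
model = PeftModel.from_pretrained(base, "<user>/<adapter-repo>")
model.eval()

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

# Placeholder audio: one second of silence at 16 kHz; replace with real speech.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(
        input_features=inputs.input_features,
        language="it",   # the card lists Italian as the model language
        task="transcribe",
    )
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```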

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of the matching training arguments follows this list):

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 128
- training_steps: 384
- mixed_precision_training: Native AMP
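These settings correspond roughly to the following Seq2SeqTrainingArguments sketch (the output_dir is hypothetical; the Adam betas and epsilon above are the library defaults, so they need no explicit arguments):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-peft",  # hypothetical path
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,       # 32 x 2 = effective batch size 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=128,
    max_steps=384,
    fp16=True,                           # native AMP mixed precision
)
```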

### Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 2.6048        | 0.9481 | 64   | 0.5505          | 22.0859 |
| 0.2149        | 1.8963 | 128  | 0.3104          | 67.8937 |
| 0.0653        | 2.8444 | 192  | 0.3124          | 15.5419 |
| 0.0255        | 3.7926 | 256  | 0.3050          | 15.1329 |
| 0.013         | 4.7407 | 320  | 0.3051          | 14.1104 |
| 0.009         | 5.6889 | 384  | 0.3056          | 13.7014 |
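The WER figures above are percentages (word error rate × 100). A minimal sketch of how such a score can be computed with the evaluate library, using invented example strings:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Invented toy strings; real evaluation compares model transcriptions
# against reference transcripts from the evaluation split.
predictions = ["il gatto dorme sul divano"]
references = ["il gatto dorme sul tavolo"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```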

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- PyTorch 2.2.0
- Datasets 2.21.0
- Tokenizers 0.19.1