---
library_name: transformers
language:
  - en
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
datasets:
  - stillerman/libristutter-4.7k
metrics:
  - wer
model-index:
  - name: Whisper Large V3 Stutter - Ariel Cerda
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Libristutter 4.7k
          type: stillerman/libristutter-4.7k
          args: 'config: en, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 19.625834127740703
---

Whisper Large V3 Stutter - Ariel Cerda

This model is a fine-tuned version of openai/whisper-large-v3 on the Libristutter 4.7k dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the list):

  • Loss: 0.4927
  • WER: 19.6258
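
As a quick usage sketch (not part of the original card), the model can be loaded with the standard transformers automatic-speech-recognition pipeline. The repository id below is an assumption based on the card title; substitute the actual model id.

```python
# Minimal inference sketch using the transformers ASR pipeline.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="arielcerdap/whisper-large-v3-stutter",  # assumed repo id; replace with the real one
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local audio file; Whisper expects 16 kHz audio, and the
# pipeline resamples other sample rates automatically.
result = asr("sample.wav")
print(result["text"])
```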

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto Seq2SeqTrainingArguments follows the list:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
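
As a reproducibility aid, the hyperparameters above map onto transformers' Seq2SeqTrainingArguments roughly as sketched below. The original training script is not included in the card, so output_dir is a placeholder and the evaluation cadence is inferred from the results table (one eval per 1000 steps).

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers
# defaults, so they need no explicit arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-stutter",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                    # "Native AMP" mixed-precision training
    eval_strategy="steps",        # inferred: the table reports one eval per 1000 steps
    eval_steps=1000,
    predict_with_generate=True,   # required to compute WER during evaluation
)
```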

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0358        | 3.7453  | 1000 | 0.2987          | 17.1711 |
| 0.0027        | 7.4906  | 2000 | 0.3993          | 18.0231 |
| 0.0004        | 11.2360 | 3000 | 0.4394          | 19.9476 |
| 0.0002        | 14.9813 | 4000 | 0.4826          | 20.0786 |
| 0.0001        | 18.7266 | 5000 | 0.4927          | 19.6258 |
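
For reference, the WER values in this card are on a 0–100 scale. A hedged sketch of computing WER with the evaluate library (the transcripts below are hypothetical):

```python
import evaluate

# evaluate.load("wer") returns a fraction in [0, 1]; multiply by 100
# to match the scale reported above.
wer_metric = evaluate.load("wer")

predictions = ["the quick brown fox"]       # hypothetical model transcript
references = ["the quick brown fox jumps"]  # hypothetical ground truth

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 20.0000: one deleted word out of five reference words
```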

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1