---
base_model: openai/whisper-large-v3
datasets:
  - Gabi00/english-mistakes
language:
  - eng
library_name: peft
license: apache-2.0
metrics:
  - wer
tags:
  - generated_from_trainer
model-index:
  - name: Whisper Small Eng - Gabriel Mora
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: English-mistakes
          type: Gabi00/english-mistakes
          config: default
          split: validation
        args: 'config: eng, split: validation'
        metrics:
          - type: wer
            value: 12.985346941102685
            name: Wer
---

# Whisper Small Eng - Gabriel Mora

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the [Gabi00/english-mistakes](https://huggingface.co/datasets/Gabi00/english-mistakes) dataset. It achieves the following results on the evaluation set:

- Loss: 0.3644
- Wer: 12.9853
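
Since this is a PEFT (LoRA) adapter rather than a full checkpoint, inference loads the base model first and then attaches the adapter. The sketch below is a minimal example, not the card's own script: the adapter repo id is a placeholder, and `audio` is assumed to be a 16 kHz mono float array.

```python
# Minimal inference sketch for a PEFT adapter on top of Whisper.
# NOTE: ADAPTER_ID is a placeholder; substitute the actual Hub repo id.
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE_ID = "openai/whisper-large-v3"
ADAPTER_ID = "Gabi00/whisper-eng-mistakes"  # placeholder repo id

processor = WhisperProcessor.from_pretrained(BASE_ID)
model = WhisperForConditionalGeneration.from_pretrained(BASE_ID)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # attach the LoRA weights
model.eval()

# `audio` is assumed: a 1-D float array sampled at 16 kHz.
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    predicted_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```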

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3.0
- mixed_precision_training: Native AMP
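
For reference, these settings map onto transformers' `Seq2SeqTrainingArguments` roughly as sketched below. The Adam betas and epsilon listed above are the library defaults; `output_dir` and the evaluation/saving cadence are assumptions, since the training script is not part of this card.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed hyperparameters; output_dir is a placeholder
# and logging/eval cadence is not documented in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-eng-mistakes",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3.0,
    fp16=True,  # mixed_precision_training: Native AMP
)
```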

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.9139        | 0.1270 | 500   | 0.6388          | 24.1376 |
| 0.5572        | 0.2541 | 1000  | 0.4884          | 17.9087 |
| 0.5416        | 0.3811 | 1500  | 0.4371          | 15.2460 |
| 0.5542        | 0.5081 | 2000  | 0.4156          | 13.7921 |
| 0.6599        | 0.6352 | 2500  | 0.4036          | 13.4956 |
| 0.6117        | 0.7622 | 3000  | 0.3960          | 13.2676 |
| 0.5569        | 0.8892 | 3500  | 0.3890          | 13.1336 |
| 0.537         | 1.0163 | 4000  | 0.3850          | 12.5292 |
| 0.4677        | 1.1433 | 4500  | 0.3815          | 12.6261 |
| 0.5017        | 1.2703 | 5000  | 0.3792          | 12.4836 |
| 0.5346        | 1.3974 | 5500  | 0.3761          | 12.3126 |
| 0.4858        | 1.5244 | 6000  | 0.3735          | 12.2926 |
| 0.5478        | 1.6514 | 6500  | 0.3715          | 12.4009 |
| 0.5277        | 1.7785 | 7000  | 0.3699          | 12.2327 |
| 0.5153        | 1.9055 | 7500  | 0.3693          | 12.1643 |
| 0.5825        | 2.0325 | 8000  | 0.3681          | 12.1387 |
| 0.6049        | 2.1596 | 8500  | 0.3670          | 12.3211 |
| 0.5248        | 2.2866 | 9000  | 0.3662          | 12.1501 |
| 0.554         | 2.4136 | 9500  | 0.3653          | 12.0645 |
| 0.5031        | 2.5407 | 10000 | 0.3654          | 12.9312 |
| 0.5253        | 2.6677 | 10500 | 0.3647          | 12.9739 |
| 0.5132        | 2.7947 | 11000 | 0.3641          | 12.9511 |
| 0.5789        | 2.9217 | 11500 | 0.3644          | 12.9853 |
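
The Wer column is a word error rate expressed as a percentage. A minimal sketch of how such a score is typically computed with the `evaluate` library follows; the example strings are illustrative only, as the real score comes from the validation split of Gabi00/english-mistakes.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative strings only, not from the actual dataset.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# compute() returns a fraction; multiply by 100 to match the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```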

### Framework versions

- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.1.0+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1