---
base_model: openai/whisper-small
datasets:
  - lord-reso/inbrowser-proctor-dataset
language:
  - en
library_name: peft
license: apache-2.0
metrics:
  - wer
tags:
  - generated_from_trainer
model-index:
  - name: Whisper-Small-Inbrowser-Proctor-LORA
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: Inbrowser Proctor Dataset
          type: lord-reso/inbrowser-proctor-dataset
          args: 'config: en, split: test'
        metrics:
          - type: wer
            value: 18.158649251353935
            name: Wer
---

Whisper-Small-Inbrowser-Proctor-LORA

This model is a fine-tuned version of openai/whisper-small on the Inbrowser Proctor Dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3646
  • Wer: 18.1586
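
This repository contains a PEFT LoRA adapter rather than full model weights, so inference requires loading openai/whisper-small and attaching the adapter on top of it. A minimal sketch follows; the adapter repo id and the 16 kHz mono input are assumptions, not details stated in this card:

```python
# Minimal inference sketch: load the base Whisper model and attach the LoRA adapter.
# The adapter repo id below is an assumption; replace it with the actual model id.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base_id = "openai/whisper-small"
adapter_id = "lord-reso/Whisper-Small-Inbrowser-Proctor-LORA"  # assumed repo id

processor = WhisperProcessor.from_pretrained(base_id, language="en", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # wrap the base model with the adapter
model.eval()

# `audio_array` is a 1-D float waveform assumed to be sampled at 16 kHz
# (e.g. loaded with the datasets or librosa libraries).
def transcribe(audio_array, sampling_rate=16000):
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        generated_ids = model.generate(input_features=inputs.input_features)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```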

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • training_steps: 250
  • mixed_precision_training: Native AMP
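
A minimal sketch of how these settings might map onto Seq2SeqTrainingArguments together with a PEFT LoRA configuration. The LoRA rank, alpha, dropout, target modules, and output directory are assumptions, not values reported in this card:

```python
# Sketch of the training configuration implied by the hyperparameters above.
# LoRA rank, alpha, dropout, and target modules are assumptions (not stated in this card).
from transformers import Seq2SeqTrainingArguments, WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=32,                                 # assumed rank
    lora_alpha=64,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # common choice for Whisper attention layers
)

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = get_peft_model(base, lora_config)  # only the LoRA weights are trainable

training_args = Seq2SeqTrainingArguments(
    output_dir="Whisper-Small-Inbrowser-Proctor-LORA",  # assumed output directory
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=250,
    fp16=True,              # Native AMP mixed precision
    eval_strategy="steps",
    eval_steps=25,          # matches the 25-step evaluation interval in the results table
)
```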

Training results

Training Loss   Epoch    Step   Validation Loss   Wer
0.7817          0.8929   25     0.7456            31.6502
0.3905          1.7857   50     0.4646            29.4043
0.2194          2.6786   75     0.3988            20.3090
0.1697          3.5714   100    0.3776            16.1357
0.1246          4.4643   125    0.3744            18.7639
0.1062          5.3571   150    0.3698            19.9267
0.0862          6.25     175    0.3698            19.9108
0.0701          7.1429   200    0.3651            18.0153
0.0647          8.0357   225    0.3659            18.4613
0.056           8.9286   250    0.3646            18.1586
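
The Wer column above is a word error rate expressed as a percentage. A short sketch of how such values are typically computed with the evaluate library; the example transcripts are hypothetical:

```python
# Sketch of a WER computation with the `evaluate` library.
# `predictions` and `references` are lists of decoded hypothesis and reference transcripts.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the exam is being proctored in the browser"]  # hypothetical example
references = ["the exam is being proctored in browser"]       # hypothetical example

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # word error rate as a percentage, as reported in the table
```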

Framework versions

  • PEFT 0.12.1.dev0
  • Transformers 4.45.0.dev0
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1