---
library_name: peft
language:
  - en
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
  - wft
  - whisper
  - automatic-speech-recognition
  - audio
  - speech
  - generated_from_trainer
datasets:
  - ntnu-smil/lttc-rebalanced-1-split
metrics:
  - wer
model-index:
  - name: whisper-large-v3-turbo-score-5-rebalanced-2
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: ntnu-smil/lttc-rebalanced-1-split
          type: ntnu-smil/lttc-rebalanced-1-split
        metrics:
          - type: wer
            value: 36.52802893309223
            name: Wer
---

# whisper-large-v3-turbo-score-5-rebalanced-2

This model is a fine-tuned version of openai/whisper-large-v3-turbo on the ntnu-smil/lttc-rebalanced-1-split dataset. It achieves the following results on the evaluation set:

- Loss: 3.7199
- Wer: 36.5280
- Cer: 25.0
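
The Wer and Cer values above are word- and character-error rates: the Levenshtein edit distance between the reference and the hypothesis, divided by the reference length, times 100. A minimal sketch of the computation (not the exact implementation used during training, which typically comes from the `evaluate`/`jiwer` libraries):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, single-row DP."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                        # deletion
                dp[j - 1] + 1,                    # insertion
                prev + (ref[i - 1] != hyp[j - 1]) # substitution (or match)
            )
            prev = cur
    return dp[n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage; split on words for CER use characters."""
    ref_words = reference.split()
    return 100.0 * edit_distance(ref_words, hypothesis.split()) / len(ref_words)
```

CER is the same computation applied to character sequences instead of word sequences.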

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
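
With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from 0.0005 at step 0 toward zero at the final step. A sketch of that schedule (assuming 360 total steps, i.e. 20 epochs × 18 steps per epoch as in the results table):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-4) -> float:
    """Linear decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)
```

Halfway through training (step 180 of 360) the learning rate under this schedule is 0.00025.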

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.0489        | 1.0   | 18   | 3.2509          | 37.2514 | 23.4504 |
| 0.1246        | 2.0   | 36   | 3.6744          | 35.6239 | 23.5709 |
| 0.0011        | 3.0   | 54   | 3.6182          | 36.7089 | 22.9855 |
| 0.0075        | 4.0   | 72   | 3.7182          | 37.1609 | 22.6240 |
| 0.0002        | 5.0   | 90   | 3.7643          | 37.7939 | 23.6398 |
| 0.0028        | 6.0   | 108  | 3.6117          | 36.7089 | 23.8809 |
| 0.0003        | 7.0   | 126  | 3.5535          | 36.8897 | 24.6556 |
| 0.0001        | 8.0   | 144  | 3.6586          | 37.7939 | 25.1033 |
| 0.0003        | 9.0   | 162  | 3.6168          | 36.8897 | 24.7934 |
| 0.0001        | 10.0  | 180  | 3.6500          | 37.1609 | 25.1033 |
| 0.0002        | 11.0  | 198  | 3.6934          | 37.4322 | 25.3960 |
| 0.0001        | 12.0  | 216  | 3.6901          | 36.9801 | 25.2410 |
| 0.0001        | 13.0  | 234  | 3.6980          | 36.7993 | 25.2238 |
| 0.0001        | 14.0  | 252  | 3.6990          | 36.9801 | 25.1377 |
| 0.0002        | 15.0  | 270  | 3.7110          | 36.9801 | 25.2755 |
| 0.0001        | 16.0  | 288  | 3.7139          | 36.7993 | 25.1894 |
| 0.0001        | 17.0  | 306  | 3.7175          | 36.7089 | 25.1722 |
| 0.0001        | 18.0  | 324  | 3.7202          | 36.9801 | 25.3444 |
| 0.0001        | 19.0  | 342  | 3.7210          | 36.8897 | 24.9828 |
| 0.0002        | 20.0  | 360  | 3.7199          | 36.5280 | 25.0    |
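
Validation WER bottoms out at epoch 2 rather than at the final checkpoint, which, together with the near-zero training loss from epoch 3 onward, suggests the adapter overfits the small training set. A quick sanity check over the table values:

```python
# Validation WER per epoch, transcribed from the training-results table above
wer_by_epoch = {
    1: 37.2514, 2: 35.6239, 3: 36.7089, 4: 37.1609, 5: 37.7939,
    6: 36.7089, 7: 36.8897, 8: 37.7939, 9: 36.8897, 10: 37.1609,
    11: 37.4322, 12: 36.9801, 13: 36.7993, 14: 36.9801, 15: 36.9801,
    16: 36.7993, 17: 36.7089, 18: 36.9801, 19: 36.8897, 20: 36.5280,
}

# Epoch with the lowest validation WER
best_epoch = min(wer_by_epoch, key=wer_by_epoch.get)
```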

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3