---
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - kul-speech-lab/CGN
metrics:
  - wer
model-index:
  - name: Whisper Small CGN
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: kul-speech-lab/CGN
          type: kul-speech-lab/CGN
          config: cgn-dev.py
          split: test
        metrics:
          - name: Wer
            type: wer
            value: 15.197170132057957
---

# Whisper Small CGN

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the kul-speech-lab/CGN dataset. It achieves the following results on the evaluation set:

- Loss: 0.3386
- WER: 15.1972
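As a usage illustration (not part of the original card): a fine-tuned Whisper checkpoint like this one can be loaded through the `transformers` automatic-speech-recognition pipeline. The repo id below is a placeholder set to the base model, since the card does not state this model's Hub id; substitute the actual one.

```python
# Hedged usage sketch, not from the original card. MODEL_ID is a placeholder
# (the base checkpoint); replace it with this fine-tuned model's Hub repo id.
MODEL_ID = "openai/whisper-small"

def build_asr(model_id: str = MODEL_ID):
    """Build a transformers ASR pipeline for a Whisper checkpoint."""
    from transformers import pipeline  # deferred import; requires `transformers`
    return pipeline(task="automatic-speech-recognition", model=model_id)

if __name__ == "__main__":
    asr = build_asr()
    # Transcribe a local audio file (decoding of common formats needs ffmpeg).
    print(asr("audio.wav")["text"])
```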

## Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 15000
- mixed_precision_training: Native AMP
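Two things implied by these hyperparameters: the effective batch size is `train_batch_size × gradient_accumulation_steps` (16 × 4 = 64), and the linear scheduler ramps the learning rate up over the first 500 steps, then decays it linearly to zero at step 15000. A small sketch of that schedule (mirroring the formula of `transformers`' linear warmup schedule; a sketch, not the trainer's actual code):

```python
# Sketch of the linear warmup + linear decay schedule implied by the
# hyperparameters above (lr_scheduler_type: linear, 500 warmup steps,
# 15000 training steps, base learning rate 1e-05).

BASE_LR = 1e-05
WARMUP_STEPS = 500
TOTAL_STEPS = 15_000

def learning_rate(step: int) -> float:
    """Learning rate at a given optimizer step under linear warmup/decay."""
    if step < WARMUP_STEPS:
        # Linear warmup from 0 to BASE_LR over the first WARMUP_STEPS steps.
        return BASE_LR * step / WARMUP_STEPS
    # Linear decay from BASE_LR down to 0 at TOTAL_STEPS.
    return BASE_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

# Effective batch size: per-device batch size x gradient accumulation steps.
effective_batch = 16 * 4  # = total_train_batch_size of 64
```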

## Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1967        | 1.01  | 1000  | 0.4085          | 21.8459 |
| 0.1355        | 2.03  | 2000  | 0.3752          | 18.6212 |
| 0.2952        | 3.04  | 3000  | 0.3535          | 18.5841 |
| 0.1876        | 4.05  | 4000  | 0.3464          | 17.5097 |
| 0.1037        | 6.01  | 5000  | 0.3396          | 16.7360 |
| 0.0473        | 7.02  | 6000  | 0.3526          | 16.4131 |
| 0.1605        | 8.04  | 7000  | 0.3284          | 16.4012 |
| 0.0537        | 9.05  | 8000  | 0.3386          | 15.9454 |
| 0.0928        | 11.01 | 9000  | 0.3315          | 15.9568 |
| 0.0144        | 12.02 | 10000 | 0.3532          | 15.5387 |
| 0.0267        | 13.04 | 11000 | 0.3261          | 15.7577 |
| 0.0936        | 14.05 | 12000 | 0.3155          | 15.3380 |
| 0.0825        | 16.01 | 13000 | 0.3198          | 15.2653 |
| 0.0498        | 17.02 | 14000 | 0.3386          | 15.1972 |
| 0.0338        | 18.03 | 15000 | 0.3413          | 15.1972 |
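The WER column is the word error rate in percent: word-level edit distance divided by the number of reference words. As a hedged illustration (not the evaluation code behind this card), a minimal pure-Python WER:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over word sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1] / len(ref)

# A table entry of 15.1972 corresponds to wer(reference, hypothesis) * 100.
```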

## Framework versions

- Transformers 4.26.0.dev0
- PyTorch 1.13.0
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2

Whisper small model fine-tuned by Jakob Poncelet on the Flemish part of the Corpus Gesproken Nederlands (CGN).