---
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - common_voice_11_0
metrics:
  - wer
model-index:
  - name: whisper-small-bn
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: common_voice_11_0
          type: common_voice_11_0
          config: bn
          split: test
          args: bn
        metrics:
          - name: Wer
            type: wer
            value: 31.772098978338693
---

# whisper-small-bn

This model is a fine-tuned version of openai/whisper-small on the Bengali (`bn`) subset of the common_voice_11_0 dataset. It achieves the following results on the evaluation set:

- Loss: 0.1134
- Wer: 31.7721
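
The WER values above are word error rates on a 0–100 scale. As a minimal, self-contained sketch of the metric (word-level edit distance divided by the number of reference words; the card's actual evaluation script is not shown, so this is an illustration, not the exact code used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in three reference words -> WER of 1/3 (33.33 on this card's scale).
print(round(100 * wer("the cat sat", "the bat sat"), 2))  # → 33.33
```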

## Model description

More information needed

## Intended uses & limitations

More information needed
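
For Bengali transcription, a usage sketch with the `transformers` ASR pipeline is below. The repository id is an assumption inferred from the card title and uploader name, not something this card states; verify it before use.

```python
from transformers import pipeline

# Hypothetical repository id ("<uploader>/whisper-small-bn"); confirm before relying on it.
asr = pipeline(
    "automatic-speech-recognition",
    model="bezaisingh/whisper-small-bn",
)

# Forcing language and task keeps Whisper decoding in Bengali transcription mode
# rather than auto-detecting the language or translating.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "bengali", "task": "transcribe"},
)
print(result["text"])
```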

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
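
The `constant_with_warmup` schedule ramps the learning rate linearly from 0 to the base rate over the first 500 steps, then holds it constant. A minimal sketch of that shape (not the Trainer's internal implementation):

```python
def constant_with_warmup_lr(step: int,
                            base_lr: float = 1e-05,
                            warmup_steps: int = 500) -> float:
    """Linear warmup from 0 to base_lr over warmup_steps, then constant."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(constant_with_warmup_lr(250))   # halfway through warmup: half the base rate
print(constant_with_warmup_lr(3000))  # after warmup: the full base rate
```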

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2262        | 0.3183 | 500  | 0.2355          | 61.4003 |
| 0.1542        | 0.6365 | 1000 | 0.1628          | 47.9933 |
| 0.1166        | 0.9548 | 1500 | 0.1391          | 43.0509 |
| 0.0803        | 1.2731 | 2000 | 0.1252          | 38.7278 |
| 0.0793        | 1.5913 | 2500 | 0.1147          | 36.4728 |
| 0.0834        | 1.9096 | 3000 | 0.1079          | 34.6947 |
| 0.0463        | 2.2279 | 3500 | 0.1097          | 33.4679 |
| 0.0541        | 2.5461 | 4000 | 0.1084          | 32.9231 |
| 0.0487        | 2.8644 | 4500 | 0.1042          | 32.5743 |
| 0.0281        | 3.1827 | 5000 | 0.1136          | 32.6657 |
| 0.0313        | 3.5010 | 5500 | 0.1127          | 32.2032 |
| 0.0286        | 3.8192 | 6000 | 0.1134          | 31.7721 |

### Framework versions

- Transformers 4.42.4
- PyTorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1