---
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - common_voice_11_0
metrics:
  - wer
model-index:
  - name: whisper-small-sw
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: common_voice_11_0
          type: common_voice_11_0
          config: rw
          split: None
          args: rw
        metrics:
          - name: Wer
            type: wer
            value: 282.7586206896552
---

# whisper-small-sw

This model is a fine-tuned version of openai/whisper-small on the common_voice_11_0 dataset. It achieves the following results on the evaluation set:

- Loss: 3.1784
- Wer: 282.7586
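A WER above 100% is possible because word error rate counts insertions as well as substitutions and deletions: if the model hallucinates many extra words, the edit distance can exceed the reference length. A minimal pure-Python sketch of the metric (the reported number was computed by the `evaluate`/`jiwer` tooling, not this function):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: word-level Levenshtein distance
    divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# Repeated/extra words push WER past 100%:
# 3 reference words, 4 insertions -> 4/3 = 133.33%
print(wer("habari ya asubuhi",
          "habari habari ya ya asubuhi asubuhi tena"))
```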

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 400
- mixed_precision_training: Native AMP
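The linear schedule with warmup ramps the learning rate from 0 to 1e-05 over the first 50 steps, then decays it linearly back to 0 at step 400. A standalone sketch of that shape (the actual run used the Transformers scheduler, not this function):

```python
def linear_warmup_lr(step: int,
                     base_lr: float = 1e-05,
                     warmup_steps: int = 50,
                     total_steps: int = 400) -> float:
    """Learning rate at a given optimizer step for a linear schedule
    with linear warmup (lr_scheduler_type: linear, warmup_steps: 50)."""
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Linear decay from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

for s in (0, 25, 50, 225, 400):
    print(s, linear_warmup_lr(s))
```

With only 400 training steps (about 57 epochs over a very small split, per the results table), the schedule spends an eighth of training in warmup.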

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer      |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0997        | 14.2857 | 100  | 2.9052          | 195.6897 |
| 0.0013        | 28.5714 | 200  | 3.1173          | 310.3448 |
| 0.0008        | 42.8571 | 300  | 3.1651          | 281.8966 |
| 0.0007        | 57.1429 | 400  | 3.1784          | 282.7586 |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1