
Whisper Small - Singlish

This model is a fine-tuned version of openai/whisper-small on the National Speech Corpus (partial) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2020
  • Wer: 5.3795
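The checkpoint can be used for transcription through the standard transformers ASR pipeline. The sketch below is illustrative: the repository id rngzhi/cs3264-project is taken from this card's metadata, and the audio path sample.wav is an assumed placeholder.

```python
# Minimal transcription sketch (assumes ffmpeg is available for audio decoding
# and that "sample.wav" exists; both are illustrative assumptions).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rngzhi/cs3264-project",  # this fine-tuned Whisper Small checkpoint
)

print(asr("sample.wav")["text"])
```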

Model description

This model is openai/whisper-small fine-tuned for automatic speech recognition of Singlish (colloquial Singaporean English), trained on a partial subset of the National Speech Corpus.

Intended uses & limitations

More information needed

Training and evaluation data

A partial subset of the National Speech Corpus, a corpus of Singaporean English speech, was used for both fine-tuning and evaluation. Details of the train/evaluation split are not documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
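For readers reproducing the run, the list above maps onto transformers' Seq2SeqTrainingArguments roughly as follows. This is a sketch, not the exact training script; output_dir is an assumed placeholder, and the Adam betas/epsilon listed above are the library defaults, so they need not be set explicitly.

```python
# Hedged reconstruction of the training configuration from the values above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-singlish",  # assumed; not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # mixed-precision training (Native AMP)
)
```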

Training results

Training Loss   Epoch   Step   Validation Loss   Wer
0.0068          5.01    500    0.1508            5.4137
0.001           11.01   1000   0.1691            5.0832
0.0003          16.02   1500   0.1769            5.1060
0.0006          22.01   2000   0.1840            5.0946
0.0005          28.0    2500   0.1891            5.1174
0.0003          33.02   3000   0.1933            5.2086
0.0005          39.01   3500   0.1962            5.2997
0.0002          45.0    4000   0.1991            5.3339
0.0002          50.02   4500   0.2010            5.3681
0.0003          56.01   5000   0.2020            5.3795
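WER figures like those in the table are typically computed with the evaluate library's wer metric, scaled by 100 (a common convention in Whisper fine-tuning). A minimal sketch follows; the prediction and reference sentences are made-up examples.

```python
# Word error rate computation sketch using the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["can lah no problem one"]  # illustrative model output
references = ["can lah no problem one"]   # illustrative ground truth
print(100 * wer_metric.compute(predictions=predictions, references=references))
```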

Framework versions

  • Transformers 4.40.0.dev0
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.1.dev0
  • Tokenizers 0.15.2