
whisper-tiny-oshiwambo-speech

This model is a fine-tuned version of openai/whisper-tiny on an unknown dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the metrics list):

  • Loss: 0.1409
  • WER: 44.7619
  • CER: 30.8962
  • Word Accuracy: 64.4444
  • Sentence Accuracy: 2.8571
  • Precision: 0.6444
  • Recall: 0.5524
  • F1 Score: 0.5949
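
As a quick usage reference, below is a minimal transcription sketch using the transformers pipeline API. The repo id is a placeholder for wherever this checkpoint is hosted, and the audio file name is hypothetical.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint for speech recognition.
# "<user>/whisper-tiny-oshiwambo-speech" is a placeholder repo id; point it
# at the actual Hub repo or a local directory containing this model.
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-tiny-oshiwambo-speech",
)

# Transcribe a local audio file (16 kHz mono works best for Whisper models);
# "sample.wav" is a hypothetical file name.
result = asr("sample.wav")
print(result["text"])
```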

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 10000
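
For orientation, the values above map onto transformers Seq2SeqTrainingArguments roughly as sketched below; output_dir is an illustrative placeholder and nothing here is taken from the original training script beyond the listed hyperparameters.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: the hyperparameters listed above expressed as
# Seq2SeqTrainingArguments. output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-oshiwambo-speech",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    warmup_steps=100,
    max_steps=10000,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the default optimizer.
)
```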

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER     | CER     | Word Acc | Sent Acc | Precision | Recall | F1 Score |
|---------------|---------|-------|-----------------|---------|---------|----------|----------|-----------|--------|----------|
| 0.0098        | 117.65  | 1000  | 0.0976          | 37.1429 | 29.0094 | 66.6667  | 8.5714   | 0.6538    | 0.6476 | 0.6507   |
| 0.0105        | 235.29  | 2000  | 0.1061          | 41.9048 | 33.0189 | 63.6364  | 2.8571   | 0.6238    | 0.6    | 0.6117   |
| 0.0105        | 352.94  | 3000  | 0.1134          | 37.1429 | 26.8868 | 66.6667  | 5.7143   | 0.6667    | 0.6286 | 0.6471   |
| 0.0091        | 470.59  | 4000  | 0.1222          | 37.1429 | 25.7075 | 66.6667  | 5.7143   | 0.6667    | 0.6286 | 0.6471   |
| 0.0098        | 588.24  | 5000  | 0.1265          | 40.0    | 28.3019 | 65.625   | 2.8571   | 0.6562    | 0.6    | 0.6269   |
| 0.0094        | 705.88  | 6000  | 0.1314          | 42.8571 | 30.8962 | 64.5161  | 2.8571   | 0.6452    | 0.5714 | 0.6061   |
| 0.0093        | 823.53  | 7000  | 0.1366          | 42.8571 | 29.2453 | 64.5161  | 2.8571   | 0.6452    | 0.5714 | 0.6061   |
| 0.0094        | 941.18  | 8000  | 0.1360          | 45.7143 | 31.8396 | 63.3333  | 0.0      | 0.6333    | 0.5429 | 0.5846   |
| 0.01          | 1058.82 | 9000  | 0.1394          | 44.7619 | 30.8962 | 64.4444  | 2.8571   | 0.6444    | 0.5524 | 0.5949   |
| 0.0087        | 1176.47 | 10000 | 0.1409          | 44.7619 | 30.8962 | 64.4444  | 2.8571   | 0.6444    | 0.5524 | 0.5949   |
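
The card does not state which implementation produced the WER/CER numbers above; one common way to compute them (in percent, matching the scale of the table) is the evaluate library, sketched below with hypothetical transcripts.

```python
import evaluate

# Sketch: compute WER and CER in percent, matching the scale of the table above.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["this is a test sentence"]   # hypothetical reference transcript
predictions = ["this is test sentense"]    # hypothetical model output

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER (%): {wer:.4f}")
print(f"CER (%): {cer:.4f}")
```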

Framework versions

  • Transformers 4.30.0.dev0
  • PyTorch 2.0.0
  • Datasets 2.12.0
  • Tokenizers 0.13.3