
Whisper Small ko

This model is a fine-tuned version of openai/whisper-small for Korean automatic speech recognition, trained on the customdata dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0498
  • CER: 1.1070
  • WER: 0.8157
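
The card does not include a usage snippet, so here is a minimal inference sketch using the transformers pipeline API. It assumes the checkpoint is published under the repo ID GGarri/whisper_finetuned_ver241113_1 and that a local 16 kHz audio file sample.wav exists (ffmpeg is needed for decoding); the language/task hints are an assumption based on the card's title.

```python
# Minimal inference sketch (not part of the original card).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="GGarri/whisper_finetuned_ver241113_1",
)

# "sample.wav" is a hypothetical local file; the pipeline decodes and
# resamples it via the bundled Whisper feature extractor.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```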

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
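
For reference, these hyperparameters map onto transformers' Seq2SeqTrainingArguments roughly as follows. This is a reconstruction sketch, not the author's actual training script; output_dir and the 100-step evaluation cadence (inferred from the results table below) are assumptions.

```python
# Sketch of training arguments mirroring the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ko",  # hypothetical output path
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=5000,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    fp16=True,               # "Native AMP" mixed-precision training
    eval_strategy="steps",   # assumed: table reports eval every 100 steps
    eval_steps=100,
)
```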

Training results

| Training Loss | Epoch   | Step | Validation Loss | CER     | WER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|
| 1.1429        | 1.5625  | 100  | 0.8829          | 14.7984 | 14.5304 |
| 0.3401        | 3.125   | 200  | 0.2637          | 2.0625  | 1.7828  |
| 0.0413        | 4.6875  | 300  | 0.0599          | 1.5498  | 1.3167  |
| 0.0163        | 6.25    | 400  | 0.0462          | 1.2818  | 0.9904  |
| 0.0127        | 7.8125  | 500  | 0.0517          | 1.5265  | 1.1885  |
| 0.0065        | 9.375   | 600  | 0.0402          | 1.5031  | 1.0487  |
| 0.0028        | 10.9375 | 700  | 0.0396          | 1.7012  | 1.3167  |
| 0.001         | 12.5    | 800  | 0.0406          | 1.5148  | 1.1186  |
| 0.0004        | 14.0625 | 900  | 0.0405          | 1.4216  | 1.0371  |
| 0.0005        | 15.625  | 1000 | 0.0424          | 1.5847  | 1.1885  |
| 0.0001        | 17.1875 | 1100 | 0.0425          | 1.2701  | 0.9788  |
| 0.0001        | 18.75   | 1200 | 0.0429          | 1.3051  | 1.0137  |
| 0.0001        | 20.3125 | 1300 | 0.0432          | 1.2701  | 0.9788  |
| 0.0001        | 21.875  | 1400 | 0.0436          | 1.2818  | 0.9904  |
| 0.0001        | 23.4375 | 1500 | 0.0439          | 1.2934  | 1.0021  |
| 0.0001        | 25.0    | 1600 | 0.0441          | 1.2934  | 1.0021  |
| 0.0001        | 26.5625 | 1700 | 0.0443          | 1.2934  | 1.0021  |
| 0.0001        | 28.125  | 1800 | 0.0446          | 1.2934  | 1.0021  |
| 0.0001        | 29.6875 | 1900 | 0.0448          | 1.2818  | 0.9904  |
| 0.0001        | 31.25   | 2000 | 0.0449          | 1.2002  | 0.9089  |
| 0.0001        | 32.8125 | 2100 | 0.0454          | 1.2002  | 0.9089  |
| 0.0001        | 34.375  | 2200 | 0.0458          | 1.2002  | 0.9089  |
| 0.0           | 35.9375 | 2300 | 0.0461          | 1.2002  | 0.9089  |
| 0.0           | 37.5    | 2400 | 0.0463          | 1.1769  | 0.8856  |
| 0.0           | 39.0625 | 2500 | 0.0465          | 1.1769  | 0.8856  |
| 0.0           | 40.625  | 2600 | 0.0467          | 1.1536  | 0.8623  |
| 0.0           | 42.1875 | 2700 | 0.0469          | 1.1303  | 0.8390  |
| 0.0           | 43.75   | 2800 | 0.0471          | 1.1536  | 0.8623  |
| 0.0           | 45.3125 | 2900 | 0.0473          | 1.1536  | 0.8623  |
| 0.0           | 46.875  | 3000 | 0.0474          | 1.1536  | 0.8623  |
| 0.0           | 48.4375 | 3100 | 0.0476          | 1.1536  | 0.8623  |
| 0.0           | 50.0    | 3200 | 0.0477          | 1.1303  | 0.8390  |
| 0.0           | 51.5625 | 3300 | 0.0478          | 1.1419  | 0.8506  |
| 0.0           | 53.125  | 3400 | 0.0479          | 1.1186  | 0.8273  |
| 0.0           | 54.6875 | 3500 | 0.0481          | 1.1186  | 0.8273  |
| 0.0           | 56.25   | 3600 | 0.0482          | 1.1186  | 0.8273  |
| 0.0           | 57.8125 | 3700 | 0.0483          | 1.1186  | 0.8273  |
| 0.0           | 59.375  | 3800 | 0.0484          | 1.1070  | 0.8157  |
| 0.0           | 60.9375 | 3900 | 0.0485          | 1.1070  | 0.8157  |
| 0.0           | 62.5    | 4000 | 0.0487          | 1.1070  | 0.8157  |
| 0.0           | 64.0625 | 4100 | 0.0490          | 1.1070  | 0.8157  |
| 0.0           | 65.625  | 4200 | 0.0492          | 1.1070  | 0.8157  |
| 0.0           | 67.1875 | 4300 | 0.0494          | 1.1070  | 0.8157  |
| 0.0           | 68.75   | 4400 | 0.0495          | 1.1070  | 0.8157  |
| 0.0           | 70.3125 | 4500 | 0.0496          | 1.1070  | 0.8157  |
| 0.0           | 71.875  | 4600 | 0.0497          | 1.1070  | 0.8157  |
| 0.0           | 73.4375 | 4700 | 0.0497          | 1.1070  | 0.8157  |
| 0.0           | 75.0    | 4800 | 0.0497          | 1.1070  | 0.8157  |
| 0.0           | 76.5625 | 4900 | 0.0498          | 1.1070  | 0.8157  |
| 0.0           | 78.125  | 5000 | 0.0498          | 1.1070  | 0.8157  |
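
The CER/WER values above appear to be percentages. A minimal sketch of computing them with Hugging Face's evaluate library (requires `evaluate` and `jiwer`; the predictions and references below are placeholders, not the customdata evaluation set):

```python
# Sketch of CER/WER computation matching the metrics in the table above.
import evaluate

cer_metric = evaluate.load("cer")
wer_metric = evaluate.load("wer")

predictions = ["안녕하세요"]  # hypothetical model transcriptions
references = ["안녕하세요"]   # hypothetical ground-truth transcripts

# compute() returns a fraction; multiply by 100 assuming the table
# reports percentages.
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"CER: {cer:.4f}  WER: {wer:.4f}")
```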

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.4.0
  • Datasets 2.18.0
  • Tokenizers 0.20.3
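
A quick sanity check that a local environment matches these versions (an illustrative sketch, not part of the original card):

```python
# Compare locally installed versions against the ones this card was built with.
import datasets, tokenizers, torch, transformers

expected = {
    "transformers": "4.46.2",
    "torch": "2.4.0",
    "datasets": "2.18.0",
    "tokenizers": "0.20.3",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: installed {installed[name]}, card used {want}")
```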