whisper-small-ko-1159h

This model is a fine-tuned version of openai/whisper-small on 1,159 hours of Korean speech from AI-HUB (see Training and evaluation data below). It achieves the following results on the evaluation set:

  • Loss: 0.1752
  • Wer: 10.4449

Model description

The model was trained to transcribe Korean audio into text.
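
As a minimal usage sketch with the transformers pipeline API (the model id "whisper-small-ko-1159h" and the file name "example.wav" are placeholders for your own checkpoint path and audio file):

```python
# Minimal inference sketch using the transformers ASR pipeline.
# Model id and audio path are placeholders; adjust to your setup.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="whisper-small-ko-1159h",  # placeholder: actual hub id or local path
)

# Whisper is multilingual; forcing Korean decoding avoids
# language auto-detection errors on short or noisy clips.
transcriber.model.config.forced_decoder_ids = transcriber.tokenizer.get_decoder_prompt_ids(
    language="korean", task="transcribe"
)

result = transcriber("example.wav")  # placeholder audio file
print(result["text"])
```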

Intended uses & limitations

More information needed

Training and evaluation data

I downloaded all data from AI-HUB (https://aihub.or.kr/). Two datasets in particular caught my attention: the "Instruction Audio Set" and the "Noisy Conversation Audio Set". I gathered 796 hours of audio from the first and 363 hours from the second, 1,159 hours in total, hence the "1159h" in the model name. (These figures cover the training data only and exclude the validation data.)
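
The AI-HUB corpora are not hosted on the Hugging Face Hub, so downloading them is a manual step. As a hedged sketch of how a single clip could be prepared, Whisper expects 16 kHz mono input, which the processor converts to log-Mel features; the file name "clip.wav" and the transcript string are placeholders:

```python
# Hedged preprocessing sketch: resample to 16 kHz mono, then
# extract Whisper input features and tokenize the reference text.
import librosa
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# "clip.wav" is a placeholder for one AI-HUB recording.
audio, sr = librosa.load("clip.wav", sr=16_000)

inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
labels = processor.tokenizer(
    "안녕하세요",  # placeholder reference transcript (Korean for "hello")
    return_tensors="pt",
).input_ids
```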

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 18483
  • mixed_precision_training: Native AMP
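
For reference, a hedged sketch of Seq2SeqTrainingArguments matching the list above (output_dir is an assumption not stated in this card; the Adam betas and epsilon listed are the transformers defaults, so they need no explicit arguments):

```python
# Hedged reconstruction of the training configuration above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ko-1159h",  # assumption: not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=18483,
    fp16=True,  # "Native AMP" mixed-precision training
)
```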

Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0953        | 0.33  | 2053  | 0.2155          | 13.0432 |
| 0.0803        | 0.67  | 4106  | 0.1951          | 12.0399 |
| 0.0746        | 1.0   | 6159  | 0.1836          | 11.3995 |
| 0.0509        | 1.33  | 8212  | 0.1819          | 11.0396 |
| 0.0525        | 1.67  | 10265 | 0.1782          | 10.9039 |
| 0.0493        | 2.0   | 12318 | 0.1743          | 10.7255 |
| 0.0340        | 2.33  | 14371 | 0.1784          | 10.7377 |
| 0.0326        | 2.67  | 16424 | 0.1765          | 10.5471 |
| 0.0293        | 3.0   | 18477 | 0.1752          | 10.4449 |
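
The Wer column is a percentage. As a hedged sketch of how such values are computed with the evaluate library (the prediction and reference lists below are placeholders):

```python
# Hedged sketch of the WER computation; evaluate's "wer" metric returns
# a fraction, so multiply by 100 to match the percentage values above.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["예측된 전사"]  # placeholder: model outputs
references = ["정답 전사"]     # placeholder: ground-truth transcripts

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```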

Framework versions

  • Transformers 4.28.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.11.0
  • Tokenizers 0.13.2