
whisper-tiny-kor-430k-hf-ep100

This model is a fine-tuned version of openai/whisper-tiny; the training dataset is not documented in this card. It achieves the following results on the evaluation set:

  • Loss: 0.4802
  • CER: 7.0810
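CER (character error rate) is the character-level Levenshtein distance between hypothesis and reference, divided by the reference length; the figure above is reported as a percentage. A minimal self-contained sketch (the Korean example strings are illustrative, not drawn from the evaluation set):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate as a percentage: edit distance / len(reference) * 100."""
    r, h = list(reference), list(hypothesis)
    # Dynamic-programming table for character-level Levenshtein distance.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(r)][len(h)] / len(r)

print(cer("안녕하세요", "안냥하세요"))  # one substitution over five characters → 20.0
```

In practice the same metric is available as the `cer` metric in the Hugging Face `evaluate` library; the hand-rolled version above just makes the definition explicit.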

Model description

A fine-tuned checkpoint of openai/whisper-tiny for automatic speech recognition. Judging by the repository name, it targets Korean (kor), was trained on roughly 430k examples, and ran for 100 epochs; the training data itself is not documented here.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 128
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 100
  • mixed_precision_training: Native AMP
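The linear scheduler with 500 warmup steps means the learning rate ramps from 0 to 5e-05 over the first 500 steps, then decays linearly to 0 at the final step (296,900 here). A small sketch of that schedule (the function name is ours; the behavior mirrors `transformers`' `get_linear_schedule_with_warmup`):

```python
def linear_warmup_lr(step: int, base_lr: float = 5e-5,
                     warmup_steps: int = 500, total_steps: int = 296_900) -> float:
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr at warmup_steps down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(250))      # halfway through warmup: 2.5e-05
print(linear_warmup_lr(500))      # peak learning rate: 5e-05
print(linear_warmup_lr(296_900))  # end of training: 0.0
```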

Training results

| Training Loss | Epoch | Step   | Validation Loss | CER     |
|---------------|-------|--------|-----------------|---------|
| 0.0276        | 1.0   | 2969   | 0.1394          | 10.5081 |
| 0.0115        | 2.0   | 5938   | 0.1684          | 9.0941  |
| 0.0082        | 3.0   | 8907   | 0.1932          | 8.4145  |
| 0.0063        | 4.0   | 11876  | 0.2197          | 7.8659  |
| 0.0052        | 5.0   | 14845  | 0.2516          | 7.7668  |
| 0.0047        | 6.0   | 17814  | 0.2314          | 8.0565  |
| 0.004         | 7.0   | 20783  | 0.2270          | 8.3521  |
| 0.0037        | 8.0   | 23752  | 0.2540          | 7.8283  |
| 0.0033        | 9.0   | 26721  | 0.2586          | 7.9098  |
| 0.0028        | 10.0  | 29690  | 0.2891          | 7.4437  |
| 0.0029        | 11.0  | 32659  | 0.2796          | 7.7976  |
| 0.0025        | 12.0  | 35628  | 0.2630          | 8.2731  |
| 0.0025        | 13.0  | 38597  | 0.2955          | 7.8518  |
| 0.0025        | 14.0  | 41566  | 0.2812          | 7.4797  |
| 0.002         | 15.0  | 44535  | 0.2859          | 7.9954  |
| 0.0023        | 16.0  | 47504  | 0.3172          | 7.2374  |
| 0.002         | 17.0  | 50473  | 0.3382          | 7.5966  |
| 0.0018        | 18.0  | 53442  | 0.3320          | 7.6383  |
| 0.0018        | 19.0  | 56411  | 0.3197          | 7.6900  |
| 0.0015        | 20.0  | 59380  | 0.3305          | 8.3678  |
| 0.0016        | 21.0  | 62349  | 0.3409          | 7.5117  |
| 0.0015        | 22.0  | 65318  | 0.3382          | 7.8556  |
| 0.0016        | 23.0  | 68287  | 0.3282          | 7.5863  |
| 0.0015        | 24.0  | 71256  | 0.3220          | 8.2449  |
| 0.0013        | 25.0  | 74225  | 0.3272          | 7.7731  |
| 0.0015        | 26.0  | 77194  | 0.3557          | 7.8019  |
| 0.0014        | 27.0  | 80163  | 0.3807          | 7.3311  |
| 0.0012        | 28.0  | 83132  | 0.3398          | 7.8117  |
| 0.0013        | 29.0  | 86101  | 0.3892          | 7.6089  |
| 0.001         | 30.0  | 89070  | 0.3876          | 7.7875  |
| 0.0011        | 31.0  | 92039  | 0.3942          | 7.3922  |
| 0.0012        | 32.0  | 95008  | 0.3836          | 8.0308  |
| 0.0011        | 33.0  | 97977  | 0.3745          | 7.9775  |
| 0.001         | 34.0  | 100946 | 0.3605          | 8.0117  |
| 0.001         | 35.0  | 103915 | 0.3615          | 7.4853  |
| 0.001         | 36.0  | 106884 | 0.3563          | 7.5916  |
| 0.0009        | 37.0  | 109853 | 0.3469          | 7.4750  |
| 0.0009        | 38.0  | 112822 | 0.3940          | 7.5919  |
| 0.0009        | 39.0  | 115791 | 0.3771          | 7.5443  |
| 0.0009        | 40.0  | 118760 | 0.3392          | 7.6593  |
| 0.0009        | 41.0  | 121729 | 0.3498          | 7.6393  |
| 0.0009        | 42.0  | 124698 | 0.3705          | 7.4474  |
| 0.0008        | 43.0  | 127667 | 0.3758          | 7.2274  |
| 0.0008        | 44.0  | 130636 | 0.3944          | 7.6919  |
| 0.0009        | 45.0  | 133605 | 0.3885          | 7.5565  |
| 0.0008        | 46.0  | 136574 | 0.3830          | 7.4628  |
| 0.0008        | 47.0  | 139543 | 0.3972          | 7.8546  |
| 0.0008        | 48.0  | 142512 | 0.3875          | 7.4916  |
| 0.0007        | 49.0  | 145481 | 0.3438          | 7.2606  |
| 0.0007        | 50.0  | 148450 | 0.3540          | 7.1581  |
| 0.0008        | 51.0  | 151419 | 0.3768          | 7.1712  |
| 0.0007        | 52.0  | 154388 | 0.4050          | 7.2286  |
| 0.0007        | 53.0  | 157357 | 0.3785          | 7.4637  |
| 0.0008        | 54.0  | 160326 | 0.4145          | 7.4800  |
| 0.0008        | 55.0  | 163295 | 0.4042          | 7.3791  |
| 0.0006        | 56.0  | 166264 | 0.3885          | 7.6994  |
| 0.0006        | 57.0  | 169233 | 0.4153          | 7.5440  |
| 0.0006        | 58.0  | 172202 | 0.4111          | 7.3408  |
| 0.0006        | 59.0  | 175171 | 0.4147          | 7.2872  |
| 0.0006        | 60.0  | 178140 | 0.4209          | 7.6270  |
| 0.0006        | 61.0  | 181109 | 0.4041          | 7.4258  |
| 0.0006        | 62.0  | 184078 | 0.4032          | 7.5324  |
| 0.0006        | 63.0  | 187047 | 0.4214          | 7.3687  |
| 0.0005        | 64.0  | 190016 | 0.3991          | 7.2750  |
| 0.0005        | 65.0  | 192985 | 0.3885          | 7.1731  |
| 0.0006        | 66.0  | 195954 | 0.4087          | 7.5063  |
| 0.0005        | 67.0  | 198923 | 0.3760          | 7.4913  |
| 0.0005        | 68.0  | 201892 | 0.3929          | 7.3314  |
| 0.0005        | 69.0  | 204861 | 0.4044          | 7.5173  |
| 0.0005        | 70.0  | 207830 | 0.4075          | 7.2712  |
| 0.0005        | 71.0  | 210799 | 0.4170          | 7.2415  |
| 0.0005        | 72.0  | 213768 | 0.4148          | 7.1142  |
| 0.0005        | 73.0  | 216737 | 0.4271          | 7.3020  |
| 0.0005        | 74.0  | 219706 | 0.4281          | 7.1863  |
| 0.0004        | 75.0  | 222675 | 0.4202          | 7.1543  |
| 0.0005        | 76.0  | 225644 | 0.4320          | 7.2910  |
| 0.0005        | 77.0  | 228613 | 0.4328          | 7.3995  |
| 0.0005        | 78.0  | 231582 | 0.4304          | 7.2255  |
| 0.0005        | 79.0  | 234551 | 0.4537          | 7.0023  |
| 0.0005        | 80.0  | 237520 | 0.4544          | 7.2048  |
| 0.0004        | 81.0  | 240489 | 0.4485          | 7.2167  |
| 0.0005        | 82.0  | 243458 | 0.4564          | 7.1794  |
| 0.0004        | 83.0  | 246427 | 0.4608          | 7.2145  |
| 0.0004        | 84.0  | 249396 | 0.4724          | 7.2098  |
| 0.0004        | 85.0  | 252365 | 0.4726          | 7.1424  |
| 0.0004        | 86.0  | 255334 | 0.4754          | 7.2832  |
| 0.0005        | 87.0  | 258303 | 0.4765          | 7.1709  |
| 0.0004        | 88.0  | 261272 | 0.4610          | 7.1358  |
| 0.0004        | 89.0  | 264241 | 0.4697          | 7.0797  |
| 0.0004        | 90.0  | 267210 | 0.4717          | 7.0913  |
| 0.0004        | 91.0  | 270179 | 0.4756          | 7.1017  |
| 0.0004        | 92.0  | 273148 | 0.4766          | 7.2089  |
| 0.0004        | 93.0  | 276117 | 0.4763          | 7.1057  |
| 0.0004        | 94.0  | 279086 | 0.4764          | 7.1101  |
| 0.0004        | 95.0  | 282055 | 0.4759          | 7.2170  |
| 0.0004        | 96.0  | 285024 | 0.4772          | 7.1104  |
| 0.0004        | 97.0  | 287993 | 0.4781          | 7.0819  |
| 0.0004        | 98.0  | 290962 | 0.4798          | 7.0897  |
| 0.0004        | 99.0  | 293931 | 0.4800          | 7.0872  |
| 0.0004        | 100.0 | 296900 | 0.4802          | 7.0810  |
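Note that validation loss rises steadily after the first few epochs while CER keeps creeping down, so checkpoint selection depends on which metric you optimize. A small sketch that scans (epoch, validation loss, CER) tuples, with a few rows copied from the table above, and picks the best epoch under each criterion:

```python
# (epoch, validation_loss, cer) — a few rows copied from the results table.
rows = [
    (1, 0.1394, 10.5081),
    (50, 0.3540, 7.1581),
    (79, 0.4537, 7.0023),
    (100, 0.4802, 7.0810),
]

# Lowest CER and lowest validation loss point at very different checkpoints.
best_by_cer = min(rows, key=lambda r: r[2])
best_by_loss = min(rows, key=lambda r: r[1])
print(best_by_cer[0])   # → 79 (lowest CER in this subset)
print(best_by_loss[0])  # → 1 (lowest validation loss)
```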

Framework versions

  • Transformers 4.36.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.16.1
  • Tokenizers 0.15.0

Model size

  • 37.8M parameters (Safetensors, F32 tensors)
