
exp17-F03-both

This model is a fine-tuned version of yongjian/wav2vec2-large-a on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9268
  • WER: 0.9485
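
WER is the word error rate: word-level substitutions, insertions, and deletions divided by the number of reference words, so lower is better, and 0.9485 means nearly one error per reference word. The evaluation code is not included in this card; below is a minimal sketch of the metric using the jiwer library (an assumption, not the original evaluation script):

```python
import jiwer

# Toy strings only -- the actual evaluation set for this model is not documented.
reference = "the quick brown fox jumps"
hypothesis = "the quick brown box"
# WER = (substitutions + insertions + deletions) / number of reference words
print(jiwer.wer(reference, hypothesis))  # 0.4: one substitution + one deletion over five words
```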

Model description

More information needed

Intended uses & limitations

More information needed
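
Until the card is filled in, here is a minimal transcription sketch, assuming the checkpoint exposes the standard Wav2Vec2 CTC interface of its base model (the repo id and audio path below are placeholders):

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "exp17-F03-both"  # placeholder: substitute the full Hub repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec2 expects 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```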

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for the equivalent Trainer configuration):

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 30
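
For reference, here is how these values map onto transformers TrainingArguments. This is reconstructed from the list above, not the original training script, and output_dir is a placeholder:

```python
from transformers import TrainingArguments

# Reconstructed from the reported hyperparameters; not the original script.
training_args = TrainingArguments(
    output_dir="exp17-F03-both",      # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
)
```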

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 47.4704       | 0.36  | 500   | 3.3075          | 1.0131 |
| 3.1649        | 0.71  | 1000  | 3.3442          | 1.0    |
| 2.9674        | 1.07  | 1500  | 2.6986          | 1.0    |
| 2.7514        | 1.42  | 2000  | 2.5789          | 1.1299 |
| 2.6045        | 1.78  | 2500  | 2.3025          | 1.2529 |
| 2.373         | 2.14  | 3000  | 2.2169          | 1.2698 |
| 2.1632        | 2.49  | 3500  | 1.9883          | 1.2667 |
| 2.0942        | 2.85  | 4000  | 1.9294          | 1.2567 |
| 1.9239        | 3.2   | 4500  | 1.9799          | 1.2467 |
| 1.7549        | 3.56  | 5000  | 1.7485          | 1.2252 |
| 1.6973        | 3.91  | 5500  | 1.6799          | 1.2283 |
| 1.5823        | 4.27  | 6000  | 1.6847          | 1.2267 |
| 1.4761        | 4.63  | 6500  | 1.6971          | 1.1968 |
| 1.4381        | 4.98  | 7000  | 1.6280          | 1.2052 |
| 1.2509        | 5.34  | 7500  | 1.6657          | 1.2060 |
| 1.3112        | 5.69  | 8000  | 1.5618          | 1.1783 |
| 1.1851        | 6.05  | 8500  | 1.6555          | 1.1783 |
| 1.1112        | 6.41  | 9000  | 1.6586          | 1.1752 |
| 1.0463        | 6.76  | 9500  | 1.6135          | 1.1683 |
| 1.041         | 7.12  | 10000 | 1.5444          | 1.1522 |
| 0.9451        | 7.47  | 10500 | 1.5561          | 1.1622 |
| 0.9454        | 7.83  | 11000 | 1.5044          | 1.1483 |
| 0.8496        | 8.19  | 11500 | 1.6724          | 1.1330 |
| 0.825         | 8.54  | 12000 | 1.5950          | 1.1414 |
| 0.8291        | 8.9   | 12500 | 1.6023          | 1.1384 |
| 0.7279        | 9.25  | 13000 | 1.6319          | 1.1314 |
| 0.7394        | 9.61  | 13500 | 1.5478          | 1.1337 |
| 0.7079        | 9.96  | 14000 | 1.7564          | 1.1453 |
| 0.609         | 10.32 | 14500 | 1.7671          | 1.1245 |
| 0.6639        | 10.68 | 15000 | 1.7471          | 1.1314 |
| 0.648         | 11.03 | 15500 | 1.7694          | 1.2160 |
| 0.577         | 11.39 | 16000 | 1.6149          | 1.1760 |
| 0.577         | 11.74 | 16500 | 1.9288          | 1.1238 |
| 0.5695        | 12.1  | 17000 | 1.7503          | 1.1253 |
| 0.5326        | 12.46 | 17500 | 1.5635          | 1.1376 |
| 0.5423        | 12.81 | 18000 | 1.7083          | 1.1668 |
| 0.4775        | 13.17 | 18500 | 1.7054          | 1.1245 |
| 0.4772        | 13.52 | 19000 | 1.6455          | 1.1045 |
| 0.4737        | 13.88 | 19500 | 1.5996          | 1.0968 |
| 0.4529        | 14.23 | 20000 | 1.9847          | 1.1653 |
| 0.4461        | 14.59 | 20500 | 1.6845          | 1.1084 |
| 0.4497        | 14.95 | 21000 | 1.6465          | 1.0938 |
| 0.4096        | 15.3  | 21500 | 1.5919          | 1.0769 |
| 0.3897        | 15.66 | 22000 | 1.5637          | 1.0761 |
| 0.4234        | 16.01 | 22500 | 1.6360          | 1.0953 |
| 0.3659        | 16.37 | 23000 | 1.7573          | 1.0830 |
| 0.3352        | 16.73 | 23500 | 1.8474          | 1.0976 |
| 0.3886        | 17.08 | 24000 | 1.9115          | 1.0953 |
| 0.3255        | 17.44 | 24500 | 1.8820          | 1.0815 |
| 0.3405        | 17.79 | 25000 | 1.6862          | 1.0346 |
| 0.3205        | 18.15 | 25500 | 1.6912          | 1.0500 |
| 0.322         | 18.51 | 26000 | 1.6253          | 1.0615 |
| 0.296         | 18.86 | 26500 | 1.7924          | 1.0546 |
| 0.2869        | 19.22 | 27000 | 1.8204          | 1.0899 |
| 0.269         | 19.57 | 27500 | 1.7558          | 1.0292 |
| 0.2844        | 19.93 | 28000 | 1.6038          | 1.0131 |
| 0.2543        | 20.28 | 28500 | 1.7935          | 1.0161 |
| 0.3025        | 20.64 | 29000 | 1.8706          | 1.0423 |
| 0.2707        | 21.0  | 29500 | 2.0011          | 1.0208 |
| 0.2401        | 21.35 | 30000 | 1.9058          | 1.0161 |
| 0.2609        | 21.71 | 30500 | 1.7555          | 1.0015 |
| 0.2403        | 22.06 | 31000 | 1.9301          | 1.0085 |
| 0.2538        | 22.42 | 31500 | 1.8586          | 0.9969 |
| 0.2334        | 22.78 | 32000 | 1.8588          | 0.9985 |
| 0.2013        | 23.13 | 32500 | 1.9307          | 1.0108 |
| 0.2122        | 23.49 | 33000 | 1.8830          | 0.9908 |
| 0.2242        | 23.84 | 33500 | 1.8133          | 0.9754 |
| 0.188         | 24.2  | 34000 | 1.8435          | 0.9800 |
| 0.2142        | 24.56 | 34500 | 1.8491          | 0.9792 |
| 0.2059        | 24.91 | 35000 | 1.8005          | 0.9754 |
| 0.1794        | 25.27 | 35500 | 1.8845          | 0.9700 |
| 0.185         | 25.62 | 36000 | 1.8620          | 0.9731 |
| 0.1843        | 25.98 | 36500 | 1.8461          | 0.9539 |
| 0.1717        | 26.33 | 37000 | 1.8100          | 0.9639 |
| 0.164         | 26.69 | 37500 | 1.8192          | 0.9547 |
| 0.1888        | 27.05 | 38000 | 1.8005          | 0.9470 |
| 0.1792        | 27.4  | 38500 | 1.8901          | 0.9562 |
| 0.1708        | 27.76 | 39000 | 1.8306          | 0.9547 |
| 0.1508        | 28.11 | 39500 | 1.8934          | 0.9508 |
| 0.1751        | 28.47 | 40000 | 1.8956          | 0.9523 |
| 0.1541        | 28.83 | 40500 | 1.9360          | 0.9416 |
| 0.1611        | 29.18 | 41000 | 1.9346          | 0.9454 |
| 0.1684        | 29.54 | 41500 | 1.9247          | 0.9470 |
| 0.1463        | 29.89 | 42000 | 1.9268          | 0.9485 |

Framework versions

  • Transformers 4.23.1
  • PyTorch 1.12.1+cu113
  • Datasets 1.18.3
  • Tokenizers 0.13.2