
ED_small_cv_en_continue2

This model was fine-tuned on the common_voice_13_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1534
  • Cer: 0.0838
  • Wer: 0.1978
  • Mer: 0.1928
  • Wil: 0.3161
  • Wip: 0.6839
  • Hits: 122778
  • Substitutions: 22066
  • Deletions: 3337
  • Insertions: 3914
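
The word-level metrics above follow directly from the hit/substitution/deletion/insertion counts via the standard WER/MER/WIL/WIP formulas. A minimal sketch in plain Python (variable names are illustrative, not taken from the training code):

```python
# Word-level error counts reported on the evaluation set above.
hits, subs, dels, ins = 122778, 22066, 3337, 3914

ref_len = hits + subs + dels   # words in the reference transcripts
hyp_len = hits + subs + ins    # words in the model's hypotheses

wer = (subs + dels + ins) / ref_len                      # ~0.1978
mer = (subs + dels + ins) / (hits + subs + dels + ins)   # ~0.1928
wip = (hits / ref_len) * (hits / hyp_len)                # ~0.6839
wil = 1.0 - wip                                          # ~0.3161

print(f"WER={wer:.4f} MER={mer:.4f} WIL={wil:.4f} WIP={wip:.4f}")
```

CER is the same error-rate computation carried out over characters instead of words.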

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 256
  • eval_batch_size: 7
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • total_train_batch_size: 512
  • total_eval_batch_size: 14
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50.0
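
For orientation, the list above maps onto a transformers.TrainingArguments configuration roughly as sketched below. The per-device batch sizes are inferred from the reported totals and the 2-GPU setup, and the actual training script may set further options not reflected in this card:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the listed hyperparameters, not the full training setup.
training_args = TrainingArguments(
    output_dir="ED_small_cv_en_continue2",
    learning_rate=5e-4,
    per_device_train_batch_size=256,  # x2 GPUs -> total_train_batch_size 512
    per_device_eval_batch_size=7,     # x2 GPUs -> total_eval_batch_size 14
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```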

Training results

Training Loss Epoch Step Cer Deletions Hits Insertions Validation Loss Mer Substitutions Wer Wil Wip
1.3588 5.0 7445 0.1570 5345 104522 7446 1.5216 0.3284 38314 0.3449 0.5094 0.4906
1.285 6.0 8934 0.1497 6362 105386 5691 1.4842 0.3151 36433 0.3272 0.4919 0.5081
1.7562 7.0 10423 0.1487 6144 106299 5993 1.4710 0.3105 35738 0.3231 0.4849 0.5151
1.5766 8.0 11912 0.1343 5075 110239 5997 1.3866 0.2850 32867 0.2965 0.4500 0.5500
1.478 9.0 13401 0.1193 4513 113519 5389 1.3274 0.2608 30149 0.2703 0.4166 0.5834
1.4494 10.0 14890 0.1141 4925 114772 4845 1.2920 0.2500 28484 0.2582 0.3998 0.6002
1.4086 11.0 16379 0.1063 4113 116948 4863 1.2627 0.2359 27120 0.2436 0.3803 0.6197
1.375 12.0 17868 0.1017 3817 118153 4921 1.2363 0.2283 26211 0.2359 0.3689 0.6311
1.3304 13.0 19357 0.0977 3489 119548 4862 1.2181 0.2189 25144 0.2260 0.3551 0.6449
1.3215 14.0 20846 0.0928 3994 120102 3969 1.1973 0.2106 24085 0.2163 0.3430 0.6570
1.2824 15.0 22335 0.0894 3388 121469 4429 1.1777 0.2041 23324 0.2102 0.3327 0.6673
1.2535 16.0 23824 0.0857 3131 122436 4283 1.1625 0.1970 22614 0.2026 0.3226 0.6774
1.2096 17.0 25313 0.0817 3242 123261 3842 1.1429 0.1892 21678 0.1941 0.3109 0.6891
1.1749 18.0 26802 0.0795 3384 123650 3604 1.1330 0.1854 21147 0.1899 0.3047 0.6953
1.1528 19.0 28291 0.0770 3262 124432 3579 1.1220 0.1801 20487 0.1844 0.2964 0.7036
1.1373 20.0 29780 0.0762 3197 124623 3517 1.1168 0.1785 20361 0.1827 0.2942 0.7058
1.2751 21.0 31269 0.0921 3291 120871 4681 1.1934 0.2093 24019 0.2159 0.3408 0.6592
1.2585 22.0 32758 0.0884 3044 122013 4751 1.1727 0.2022 23124 0.2087 0.3297 0.6703
1.2612 23.0 34247 0.0863 3285 122169 4260 1.1634 0.1986 22727 0.2043 0.3247 0.6753
1.2389 24.0 35736 0.0851 3212 122473 4220 1.1574 0.1964 22496 0.2020 0.3215 0.6785
1.2422 25.0 37225 0.0838 3337 122778 3914 1.1534 0.1928 22066 0.1978 0.3161 0.6839
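
For a quick view of the progression in the table, the sketch below plots validation WER and CER at a few of the epochs listed above (values copied from the table; the plotting code is illustrative and not part of the training pipeline):

```python
import matplotlib.pyplot as plt

# Validation WER/CER at selected epochs, taken from the training-results table.
epochs = [5, 10, 15, 20, 21, 25]
wer = [0.3449, 0.2582, 0.2102, 0.1827, 0.2159, 0.1978]
cer = [0.1570, 0.1141, 0.0894, 0.0762, 0.0921, 0.0838]

plt.plot(epochs, wer, marker="o", label="WER")
plt.plot(epochs, cer, marker="o", label="CER")
plt.xlabel("Epoch")
plt.ylabel("Error rate")
plt.title("ED_small_cv_en_continue2: validation error rates")
plt.legend()
plt.savefig("validation_error_rates.png", dpi=150)
```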

Framework versions

  • Transformers 4.40.0.dev0
  • Pytorch 2.2.0+rocm5.6
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Wandb run

https://wandb.ai/butspeechfit/decred_commonvoice_en/runs/ED_small_cv_en_continue2

Model size

  • 35.8M parameters (F32, safetensors)