---
base_model: Samuael/geez_15k_sc_mt5
tags:
- generated_from_trainer
metrics:
- wer
- bleu
model-index:
- name: geez_15k_sc_mt5
  results: []
---

# geez_15k_sc_mt5

This model is a fine-tuned version of [Samuael/geez_15k_sc_mt5](https://huggingface.co/Samuael/geez_15k_sc_mt5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8105
- Wer: 0.2849
- Cer: 0.1373
- Bleu: 57.6756

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    | Bleu    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|
| 0.9418        | 1.0   | 57   | 0.8984          | 0.4204 | 0.1872 | 42.5936 |
| 0.6386        | 2.0   | 114  | 0.6856          | 0.3535 | 0.1563 | 48.7013 |
| 0.6169        | 3.0   | 171  | 0.6101          | 0.3137 | 0.1256 | 52.0856 |
| 0.428         | 4.0   | 228  | 0.5921          | 0.3125 | 0.1365 | 53.1328 |
| 0.354         | 5.0   | 285  | 0.5692          | 0.2807 | 0.1096 | 55.5660 |
| 0.2604        | 6.0   | 342  | 0.5947          | 0.2818 | 0.1098 | 55.6749 |
| 0.2797        | 7.0   | 399  | 0.5874          | 0.2896 | 0.1203 | 55.5685 |
| 0.2372        | 8.0   | 456  | 0.5978          | 0.2829 | 0.1206 | 56.3162 |
| 0.1863        | 9.0   | 513  | 0.6046          | 0.2808 | 0.1188 | 56.7817 |
| 0.1595        | 10.0  | 570  | 0.6295          | 0.2615 | 0.1006 | 58.1387 |
| 0.1269        | 11.0  | 627  | 0.6548          | 0.2721 | 0.1157 | 57.4783 |
| 0.1421        | 12.0  | 684  | 0.6572          | 0.2734 | 0.1164 | 57.7264 |
| 0.1475        | 13.0  | 741  | 0.6673          | 0.2809 | 0.1204 | 57.1626 |
| 0.1201        | 14.0  | 798  | 0.6996          | 0.2835 | 0.1271 | 57.0274 |
| 0.0777        | 15.0  | 855  | 0.7227          | 0.2634 | 0.1071 | 58.3844 |
| 0.075         | 16.0  | 912  | 0.7295          | 0.2607 | 0.1050 | 58.7280 |
| 0.092         | 17.0  | 969  | 0.7404          | 0.2633 | 0.1062 | 58.7387 |
| 0.0795        | 18.0  | 1026 | 0.7437          | 0.2842 | 0.1297 | 57.4888 |
| 0.0933        | 19.0  | 1083 | 0.7513          | 0.2765 | 0.1207 | 57.8567 |
| 0.0794        | 20.0  | 1140 | 0.7620          | 0.2718 | 0.1165 | 58.1063 |
| 0.0559        | 21.0  | 1197 | 0.7543          | 0.2526 | 0.0999 | 59.4966 |
| 0.0616        | 22.0  | 1254 | 0.7885          | 0.2664 | 0.1126 | 58.7126 |
| 0.0666        | 23.0  | 1311 | 0.7734          | 0.2774 | 0.1215 | 57.7048 |
| 0.0746        | 24.0  | 1368 | 0.7832          | 0.2764 | 0.1225 | 58.0553 |
| 0.0644        | 25.0  | 1425 | 0.7872          | 0.2799 | 0.1273 | 57.9207 |
| 0.0596        | 26.0  | 1482 | 0.8184          | 0.2721 | 0.1197 | 58.1888 |
| 0.0507        | 27.0  | 1539 | 0.8053          | 0.2765 | 0.1226 | 58.1358 |
| 0.0585        | 28.0  | 1596 | 0.8091          | 0.2806 | 0.1293 | 57.9993 |
| 0.0595        | 29.0  | 1653 | 0.8147          | 0.2888 | 0.1416 | 57.4399 |
| 0.0469        | 30.0  | 1710 | 0.8105          | 0.2849 | 0.1373 | 57.6756 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
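
## Example usage

A minimal inference sketch with the `transformers` library, assuming the checkpoint loads through the standard seq2seq auto classes. The input text and generation settings below are placeholders, not values documented in this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Samuael/geez_15k_sc_mt5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Replace with an input sentence in the format this model expects
# (the expected input format is not documented in this card).
text = "..."

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Metrics of the kind reported above (WER, CER, BLEU) could be computed with the `evaluate` library along the lines sketched below; whether this matches the exact evaluation setup used during training is not stated in the card, and the prediction/reference lists are placeholders.

```python
import evaluate

# Placeholder lists; replace with decoded model outputs and gold targets.
predictions = ["example prediction"]
references = ["example reference"]

wer = evaluate.load("wer")
cer = evaluate.load("cer")
sacrebleu = evaluate.load("sacrebleu")

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
# sacreBLEU expects one list of references per prediction and reports a 0-100 score.
print("BLEU:", sacrebleu.compute(predictions=predictions,
                                 references=[[r] for r in references])["score"])
```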