---
language:
- tok
- en
- multilingual
license: apache-2.0
tags:
- generated_from_trainer
- translation
metrics:
- bleu
widget:
- text: toki! mi jan Ton. mi lon ma Tawan.
- text: soweli li toki ala toki e toki Inli?
model-index:
- name: toki-en-mt
  results: []
---

# toki-en-mt

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ROMANCE-en](https://huggingface.co/Helsinki-NLP/opus-mt-ROMANCE-en) on an unknown dataset. Judging from the model name, language tags, and widget examples, it translates Toki Pona (`tok`) into English.
It achieves the following results on the evaluation set:
- Loss: 1.2840
- Bleu: 26.7612
- Gen Len: 9.0631

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.7228        | 1.0   | 1260  | 1.4572          | 19.9464 | 9.2177  |
| 1.3182        | 2.0   | 2520  | 1.3356          | 22.4628 | 9.1263  |
| 1.1241        | 3.0   | 3780  | 1.3028          | 23.5152 | 9.0462  |
| 0.9995        | 4.0   | 5040  | 1.2784          | 23.9526 | 9.1697  |
| 0.8945        | 5.0   | 6300  | 1.2739          | 24.7707 | 9.0914  |
| 0.8331        | 6.0   | 7560  | 1.2725          | 25.3477 | 9.0518  |
| 0.7641        | 7.0   | 8820  | 1.2770          | 26.165  | 9.0245  |
| 0.7163        | 8.0   | 10080 | 1.2809          | 25.8053 | 9.0933  |
| 0.6886        | 9.0   | 11340 | 1.2799          | 26.5752 | 9.0669  |
| 0.6627        | 10.0  | 12600 | 1.2840          | 26.7612 | 9.0631  |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
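
The card does not show how to load the model, so here is a minimal inference sketch. It assumes the checkpoint is reachable as `toki-en-mt` (the exact Hub repository id or local path is not stated above, so substitute your own) and uses the Marian classes from `transformers`, since the base model `Helsinki-NLP/opus-mt-ROMANCE-en` is a MarianMT checkpoint. The example sentences are the two widget inputs from the metadata.

```python
# Minimal inference sketch. The model id "toki-en-mt" is an assumption taken
# from the card title; substitute the actual Hub repository id or local path.
from transformers import MarianMTModel, MarianTokenizer

model_name = "toki-en-mt"  # hypothetical id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Toki Pona source sentences (taken from the widget examples above).
sentences = [
    "toki! mi jan Ton. mi lon ma Tawan.",
    "soweli li toki ala toki e toki Inli?",
]

batch = tokenizer(sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch, max_length=64)
for translation in tokenizer.batch_decode(generated, skip_special_tokens=True):
    print(translation)
```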
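
As a rough guide to reproducing the setup, the hyperparameters listed under "Training hyperparameters" can be expressed as `Seq2SeqTrainingArguments`. This is only a sketch under assumptions: the output directory is hypothetical, the Adam betas/epsilon shown above are the library defaults, and the per-epoch evaluation and `predict_with_generate` settings are inferred from the results table rather than stated in the card.

```python
# Sketch of Seq2SeqTrainingArguments matching the hyperparameters listed above.
# output_dir is a placeholder; evaluation_strategy and predict_with_generate
# are assumptions inferred from the per-epoch BLEU / Gen Len columns.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="toki-en-mt",         # hypothetical output directory
    learning_rate=2e-5,              # learning_rate: 2e-05
    per_device_train_batch_size=16,  # train_batch_size: 16
    per_device_eval_batch_size=16,   # eval_batch_size: 16
    seed=42,                         # seed: 42
    num_train_epochs=10,             # num_epochs: 10
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    fp16=True,                       # mixed_precision_training: Native AMP
    evaluation_strategy="epoch",     # assumption: one evaluation per epoch
    predict_with_generate=True,      # assumption: needed for BLEU / Gen Len
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```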
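
The "Bleu" and "Gen Len" columns look like the `sacrebleu` score and mean generated length reported by the standard `transformers` translation examples. The card does not include the metric code, so the following `compute_metrics` sketch is an assumption based on that common pattern; the tokenizer id is hypothetical.

```python
# Sketch of a compute_metrics function that would produce "Bleu" and "Gen Len"
# values like those in the table above. This mirrors the common translation
# fine-tuning pattern; the card's actual metric code is not shown.
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("toki-en-mt")  # hypothetical id
sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Replace label padding (-100) before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = sacrebleu.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    # Mean number of non-padding tokens in the generated sequences.
    gen_len = np.mean(
        [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
    )
    return {"bleu": result["score"], "gen_len": gen_len}
```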