---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-rw
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-kin
  results: []
---

# marian-finetuned-multi-en-to-kin

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-rw](https://huggingface.co/Helsinki-NLP/opus-mt-en-rw) on a combined English–Kinyarwanda parallel corpus (see "Training and Evaluation Data" below).
It achieves the following results on the evaluation set:
- Loss: 2.0842
- Bleu: 28.1477

## Model Description

The model has been fine-tuned to perform machine translation from English to Kinyarwanda.

## Intended Uses & Limitations

The primary intended use of this model is for research purposes. An illustrative inference snippet is included at the end of this card.

## Training and Evaluation Data

The model was fine-tuned using a combination of datasets from the following sources:
- [Digital Umuganda](https://huggingface.co/datasets/DigitalUmuganda/kinyarwanda-english-machine-translation-dataset/tree/main)
- [Masakhane MAFAND (en–kin)](https://huggingface.co/datasets/masakhane/mafand/viewer/en-kin/validation)
- [FLORES-200 (Muennighoff)](https://huggingface.co/datasets/Muennighoff/flores200)

Before training, the dataset underwent the following preprocessing steps (a minimal sketch of these steps appears at the end of this card):
- Text was converted to lowercase
- Digits were removed

The combined dataset was divided into training and validation sets, with 90% used for training and 10% for validation.

## Training Procedure

### Training hyperparameters

The following hyperparameters were used during training (a corresponding training-arguments sketch appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

Final evaluation results (loss 2.0842, BLEU 28.1477) are reported at the top of this card.

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
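
## Example Code Sketches

The snippet below is a minimal sketch of the preprocessing and data split described under "Training and Evaluation Data": lowercasing, digit removal, and a 90/10 train/validation split. The column names (`en`, `rw`), the toy sentence pairs, and the split seed are illustrative assumptions, not values taken from the original training run.

```python
import re

from datasets import Dataset


def normalize(text: str) -> str:
    """Lowercase the text and strip digits, as described in this card."""
    text = text.lower()
    return re.sub(r"\d+", "", text)


# Placeholder for the combined English-Kinyarwanda pairs gathered from the
# three sources listed above; the column names are assumptions.
pairs = Dataset.from_dict(
    {
        "en": ["Good morning.", "Thank you.", "See chapter 3."],
        "rw": ["Mwaramutse.", "Murakoze.", "Reba igice cya 3."],
    }
)

# Apply the two preprocessing steps to both sides of each pair.
cleaned = pairs.map(lambda ex: {"en": normalize(ex["en"]), "rw": normalize(ex["rw"])})

# 90% train / 10% validation, as stated in the card; the seed here is an assumption.
splits = cleaned.train_test_split(test_size=0.1, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
```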
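Continuing from the split above, the following sketch shows how the listed hyperparameters could map onto `Seq2SeqTrainingArguments`. The Adam betas and epsilon listed in the card are the `Trainer` defaults, so they are not set explicitly; the tokenization settings (`max_length=128`) and `predict_with_generate=True` are assumptions not stated in the card.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "Helsinki-NLP/opus-mt-en-rw"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)


def tokenize(batch):
    # Column names follow the preprocessing sketch above; max_length is an assumption.
    return tokenizer(batch["en"], text_target=batch["rw"], truncation=True, max_length=128)


tokenized_train = train_ds.map(tokenize, batched=True, remove_columns=["en", "rw"])
tokenized_valid = valid_ds.map(tokenize, batched=True, remove_columns=["en", "rw"])

training_args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-multi-en-to-kin",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    predict_with_generate=True,  # assumption: lets evaluation use generate() for BLEU
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_valid,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```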
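Finally, a minimal inference sketch consistent with the intended research use described above. The hub id below is a placeholder; because the training text was lowercased and digits were removed, lowercasing the input may match the training distribution more closely.

```python
from transformers import pipeline

# Placeholder hub id; substitute the actual repository name of this model.
translator = pipeline("translation", model="your-username/marian-finetuned-multi-en-to-kin")

# Training text was lowercased, so lowercasing the input is likely a safer match.
source = "Good morning, how are you?".lower()
print(translator(source)[0]["translation_text"])
```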