---
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-50-many-to-many-mmt-finetuned-pt-to-en
  results: []
---

# mbart-large-50-many-to-many-mmt-finetuned-pt-to-en

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified Portuguese-to-English dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0116
- Bleu: 3.9795
- Gen Len: 46.1292

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent `Seq2SeqTrainingArguments` configuration is included at the end of this card):
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.981         | 1.0   | 989  | 2.6925          | 2.0264 | 53.1604 |
| 2.1381        | 2.0   | 1978 | 2.6576          | 2.6268 | 47.2153 |
| 1.6501        | 3.0   | 2967 | 2.7743          | 2.9549 | 45.5091 |
| 1.2518        | 4.0   | 3956 | 2.9517          | 3.1862 | 49.9585 |
| 0.9188        | 5.0   | 4945 | 3.1741          | 3.3758 | 46.0227 |
| 0.6615        | 6.0   | 5934 | 3.3934          | 3.4223 | 42.5911 |
| 0.4714        | 7.0   | 6923 | 3.6090          | 3.8522 | 48.1263 |
| 0.3283        | 8.0   | 7912 | 3.8027          | 3.7817 | 46.4619 |
| 0.2291        | 9.0   | 8901 | 3.9314          | 3.9939 | 47.2651 |
| 0.163         | 10.0  | 9890 | 4.0116          | 3.9795 | 46.1292 |

Validation loss rises steadily after epoch 2 while training loss keeps falling, which suggests overfitting; BLEU nonetheless improves slowly, peaking around epoch 9.

### Framework versions

- Transformers 4.40.2
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
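
## Training configuration sketch

The snippet below is a sketch that mirrors the hyperparameters listed under "Training procedure"; it is not the original training script. Dataset loading and tokenization are omitted because the card does not name the dataset, and `evaluation_strategy="epoch"` and `predict_with_generate=True` are assumptions inferred from the per-epoch Validation Loss, Bleu, and Gen Len columns above.

```python
from transformers import (
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

base = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(base, src_lang="pt_XX", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(base)

# Hyperparameters copied from the "Training hyperparameters" section above.
# The Trainer's default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.
args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-50-many-to-many-mmt-finetuned-pt-to-en",
    learning_rate=2e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",   # assumption: the results table reports metrics once per epoch
    predict_with_generate=True,    # assumption: required to compute BLEU and Gen Len
)

# train_dataset / eval_dataset are placeholders: the card does not identify the data used.
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=None,  # replace with the tokenized pt-en training split
    eval_dataset=None,   # replace with the tokenized pt-en validation split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
# trainer.train()
```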
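
## How to use

The snippet below is a minimal inference sketch rather than part of the original training code: the checkpoint identifier and the example sentence are placeholders, and the mBART-50 language codes (`pt_XX` for Portuguese, `en_XX` for English) follow the base model's convention.

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Placeholder identifier: point this at wherever the fine-tuned checkpoint is stored.
checkpoint = "mbart-large-50-many-to-many-mmt-finetuned-pt-to-en"

tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint)
model = MBartForConditionalGeneration.from_pretrained(checkpoint)

# mBART-50 is multilingual: set the source language on the tokenizer and force
# the target-language token as the first generated token.
tokenizer.src_lang = "pt_XX"
inputs = tokenizer("O modelo traduz do português para o inglês.", return_tensors="pt")

generated_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    max_length=64,
)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

The same pattern works for batched inputs; the only mBART-specific details are setting `tokenizer.src_lang` before encoding and forcing the `en_XX` token at the start of generation.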