---
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-50-many-to-many-mmt-finetuned-pt-to-en
  results: []
---

# mbart-large-50-many-to-many-mmt-finetuned-pt-to-en

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8212
- Bleu: 5.1667
- Gen Len: 53.5411

A usage sketch appears at the end of this card.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch at the end of this card):
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log        | 1.0   | 154  | 2.9955          | 1.7894 | 44.0952 |
| No log        | 2.0   | 308  | 2.8970          | 2.326  | 46.8317 |
| No log        | 3.0   | 462  | 2.9511          | 3.0767 | 55.8505 |
| 2.5276        | 4.0   | 616  | 3.0830          | 3.3555 | 48.737  |
| 2.5276        | 5.0   | 770  | 3.2703          | 3.5714 | 47.0602 |
| 2.5276        | 6.0   | 924  | 3.4241          | 4.3465 | 52.7744 |
| 1.0364        | 7.0   | 1078 | 3.5665          | 4.4909 | 52.705  |
| 1.0364        | 8.0   | 1232 | 3.6942          | 4.6399 | 50.4717 |
| 1.0364        | 9.0   | 1386 | 3.7852          | 5.2082 | 54.6205 |
| 0.3319        | 10.0  | 1540 | 3.8212          | 5.1667 | 53.5411 |

Note that validation loss rises steadily after epoch 2 while training loss keeps falling, which suggests overfitting, even though BLEU continues to improve through epoch 9.

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
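
## Example usage

Since the card does not yet include an inference snippet, below is a minimal sketch of Portuguese-to-English translation using the standard mBART-50 API from `transformers`. The checkpoint path is a placeholder (the Hub namespace for this model is not given above), and the input sentence is purely illustrative.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Placeholder path; substitute the actual Hub location of this checkpoint.
model_name = "mbart-large-50-many-to-many-mmt-finetuned-pt-to-en"

tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="pt_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

text = "O gato está sentado no tapete."  # illustrative example sentence
inputs = tokenizer(text, return_tensors="pt")

# mBART-50 selects the target language by forcing its language code
# as the first generated token.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```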
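## Training arguments sketch

For reference, the hyperparameters listed under "Training hyperparameters" map onto `Seq2SeqTrainingArguments` roughly as follows. The `output_dir`, `evaluation_strategy`, and `predict_with_generate` settings are assumptions, not stated in this card; the Adam betas and epsilon match the library defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-50-many-to-many-mmt-finetuned-pt-to-en",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",   # assumed; the table reports one eval per epoch
    predict_with_generate=True,    # assumed; required to compute BLEU and Gen Len
)
```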