# mbart-neutralization
This model is a fine-tuned version of facebook/mbart-large-50 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0108
- Bleu: 98.1545
- Gen Len: 18.8229
## Model description
Disclaimer: this model is part of a practical exercise carried out for the university course "Machine Translation" of the Master's Degree in Language Processing and Applied AI to Linguistics at Universidad de La Rioja.
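Since the card does not document the task or data, the following inference example is a minimal sketch under assumptions: that the model rewrites Spanish text into a neutralized form, and that it keeps mBART-50's `es_XX` language code. The input sentence is hypothetical.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("feserrm/mbart-neutralization")
tokenizer = MBart50TokenizerFast.from_pretrained("feserrm/mbart-neutralization")

# Assumption: Spanish source text, Spanish (neutralized) output.
tokenizer.src_lang = "es_XX"
text = "Los profesores asistieron a la reunión."  # hypothetical input

inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    # mBART-50 generation requires forcing the target language token.
    forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"],
    max_length=64,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```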
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
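A minimal sketch of how these hyperparameters could map onto `Seq2SeqTrainingArguments`. The toy dataset and preprocessing below are assumptions (the actual training pairs are not documented), not the author's confirmed code.

```python
from datasets import Dataset
from transformers import (
    DataCollatorForSeq2Seq,
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")

# Hypothetical toy pair; the real training data is not documented.
raw = Dataset.from_dict({
    "input_text": ["Los trabajadores protestaron."],
    "target_text": ["El personal protestó."],
})

def preprocess(batch):
    model_inputs = tokenizer(batch["input_text"], truncation=True, max_length=64)
    labels = tokenizer(text_target=batch["target_text"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

# The reported hyperparameters, mapped one-to-one.
training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-neutralization",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",         # AdamW; betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=2,
    predict_with_generate=True,  # needed so evaluation can report BLEU / gen length
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    eval_dataset=tokenized,      # placeholder; a real run would use a separate split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```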
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 440  | 0.0220          | 98.1628 | 18.8229 |
| 0.2273        | 2.0   | 880  | 0.0108          | 98.1545 | 18.8229 |
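The Bleu and Gen Len columns are consistent with a `compute_metrics` callback of the kind used in the transformers translation examples. The sketch below is an assumption (sacrebleu via the `evaluate` library, measuring generation length as non-padding tokens), not the card author's confirmed code; it would be passed to the trainer as `compute_metrics=compute_metrics`.

```python
import evaluate
import numpy as np
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")
bleu = evaluate.load("sacrebleu")  # assumption: sacrebleu as the BLEU metric

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Replace label padding (-100) before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = bleu.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
    # Mean generated length in tokens, excluding padding.
    gen_len = np.mean(
        [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    )
    return {"bleu": round(result["score"], 4), "gen_len": round(float(gen_len), 4)}
```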
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0