mbart-large-50-many-to-many-mmt-finetuned-test2

This model is a fine-tuned version of facebook/mbart-large-50-many-to-many-mmt on an unknown dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the list):

  • Loss: 1.9283
  • Rouge1: 28.015
  • Rouge2: 11.5757
  • RougeL: 23.4706
  • RougeLsum: 27.303
  • Gen Len: 48.8231 (average generated sequence length)
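
Because the fine-tuning dataset and task are not documented, the snippet below is only a minimal loading/generation sketch based on how the base facebook/mbart-large-50-many-to-many-mmt checkpoint is normally used. The Hub repository id, the language codes, and the input sentence are placeholders, not documented properties of this fine-tune.

```python
# Minimal sketch, assuming the checkpoint is used like the base mBART-50 MMT model.
# The repository id, language codes, and input text are placeholders.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "your-username/mbart-large-50-many-to-many-mmt-finetuned-test2"  # placeholder
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

# mBART-50 conditions generation on language codes: set the source language on the
# tokenizer and force the target language code as the first generated token.
tokenizer.src_lang = "en_XX"  # assumed source language
inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # assumed target language
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```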

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto Seq2SeqTrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
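
A minimal sketch of how the values above map onto transformers' Seq2SeqTrainingArguments, assuming the standard Seq2SeqTrainer workflow; the actual training script is not published, so output_dir, evaluation_strategy, and predict_with_generate are assumptions.

```python
# Hedged sketch only: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-50-many-to-many-mmt-finetuned-test2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 4 * 4 = 16
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",     # assumed; the results table reports one eval per epoch
    predict_with_generate=True,      # assumed; needed to compute ROUGE and Gen Len
    # adam_beta1, adam_beta2, and adam_epsilon keep their defaults (0.9, 0.999, 1e-08)
)
```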

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1456        | 1.0   | 2130 | 2.0695          | 24.7024 | 9.6769  | 20.91   | 24.0793   | 45.739  |
| 1.8957        | 2.0   | 4261 | 1.9656          | 25.8455 | 10.4715 | 21.9971 | 25.1654   | 50.9408 |
| 1.7359        | 3.0   | 6391 | 1.9281          | 26.9356 | 11.0235 | 22.709  | 26.197    | 51.5054 |
| 1.6207        | 4.0   | 8520 | 1.9283          | 28.015  | 11.5757 | 23.4706 | 27.303    | 48.8231 |
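
The ROUGE columns are reported on a 0-100 scale. For context, here is a hedged sketch of how such scores are commonly computed with the `evaluate` library; the evaluation code for this run is not published, and the predictions and references below are placeholders.

```python
# Illustrative only: ROUGE computed with the `evaluate` library, scaled to 0-100
# to match the table above.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["decoded model output for one example"]   # placeholder
references = ["reference target text for that example"]  # placeholder
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print({k: round(v * 100, 4) for k, v in scores.items()})  # rouge1, rouge2, rougeL, rougeLsum
```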

Framework versions

  • Transformers 4.27.2
  • Pytorch 1.13.0+cu117
  • Datasets 2.7.1
  • Tokenizers 0.13.2