
# t5-base_fr-finetuned-en-to-it

This model is a fine-tuned version of j0hngou/t5-base-finetuned-en-to-fr on the ccmatrix dataset. It achieves the following results on the evaluation set:

- Loss: 1.4677
- Bleu: 20.3152
- Gen Len: 51.4433
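Although the card does not yet document usage, inference with a T5 translation checkpoint like this one can be sketched as follows. Note the Hub repo id and the task prefix are assumptions (the card does not state them); adjust both to the actual published model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed Hub repo id for this card -- replace with the actual path.
model_id = "j0hngou/t5-base_fr-finetuned-en-to-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 checkpoints are usually conditioned on a task prefix; the exact
# prefix used during fine-tuning is an assumption here.
text = "translate English to Italian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```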

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
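The hyperparameters above map roughly onto a `Seq2SeqTrainingArguments` configuration like the sketch below. This is a hedged reconstruction, not the original training script; `output_dir` is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed hyperparameters; not the actual script used.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base_fr-finetuned-en-to-it",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=40,
    lr_scheduler_type="linear",
    # Adam defaults match the card: betas=(0.9, 0.999), eps=1e-8.
    fp16=True,  # Native AMP mixed precision (requires a CUDA device)
)
```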

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 282   | 2.0344          | 6.8826  | 64.574  |
| 2.3997        | 2.0   | 564   | 1.9371          | 7.9377  | 64.274  |
| 2.3997        | 3.0   | 846   | 1.8740          | 9.2364  | 59.8673 |
| 2.145         | 4.0   | 1128  | 1.8240          | 9.8068  | 60.566  |
| 2.145         | 5.0   | 1410  | 1.7813          | 10.3961 | 60.106  |
| 2.0183        | 6.0   | 1692  | 1.7476          | 11.2005 | 59.032  |
| 2.0183        | 7.0   | 1974  | 1.7152          | 11.8127 | 58.1673 |
| 1.9185        | 8.0   | 2256  | 1.6872          | 12.4843 | 57.5787 |
| 1.8414        | 9.0   | 2538  | 1.6643          | 13.4338 | 55.502  |
| 1.8414        | 10.0  | 2820  | 1.6459          | 13.7847 | 55.6753 |
| 1.7755        | 11.0  | 3102  | 1.6273          | 14.6959 | 53.838  |
| 1.7755        | 12.0  | 3384  | 1.6121          | 15.2948 | 53.4127 |
| 1.7224        | 13.0  | 3666  | 1.5967          | 15.878  | 53.0733 |
| 1.7224        | 14.0  | 3948  | 1.5809          | 16.3788 | 52.778  |
| 1.6751        | 15.0  | 4230  | 1.5689          | 16.7415 | 52.8    |
| 1.6358        | 16.0  | 4512  | 1.5580          | 17.0318 | 52.854  |
| 1.6358        | 17.0  | 4794  | 1.5509          | 17.6302 | 52.0947 |
| 1.5921        | 18.0  | 5076  | 1.5389          | 17.4239 | 52.71   |
| 1.5921        | 19.0  | 5358  | 1.5317          | 17.9003 | 52.3427 |
| 1.5696        | 20.0  | 5640  | 1.5253          | 17.769  | 52.928  |
| 1.5696        | 21.0  | 5922  | 1.5172          | 18.2974 | 51.8173 |
| 1.5344        | 22.0  | 6204  | 1.5117          | 18.5755 | 52.012  |
| 1.5344        | 23.0  | 6486  | 1.5046          | 18.5362 | 52.1447 |
| 1.5136        | 24.0  | 6768  | 1.5034          | 18.7394 | 51.9887 |
| 1.4968        | 25.0  | 7050  | 1.4968          | 19.1622 | 51.736  |
| 1.4968        | 26.0  | 7332  | 1.4947          | 19.1636 | 51.8467 |
| 1.472         | 27.0  | 7614  | 1.4886          | 19.3845 | 51.774  |
| 1.472         | 28.0  | 7896  | 1.4844          | 19.5481 | 51.458  |
| 1.4575        | 29.0  | 8178  | 1.4827          | 19.739  | 51.4593 |
| 1.4575        | 30.0  | 8460  | 1.4791          | 19.818  | 51.62   |
| 1.4435        | 31.0  | 8742  | 1.4763          | 19.904  | 51.5167 |
| 1.4336        | 32.0  | 9024  | 1.4750          | 19.9507 | 51.3787 |
| 1.4336        | 33.0  | 9306  | 1.4742          | 20.0704 | 51.3527 |
| 1.4236        | 34.0  | 9588  | 1.4717          | 20.2553 | 51.1967 |
| 1.4236        | 35.0  | 9870  | 1.4705          | 20.3014 | 51.156  |
| 1.4188        | 36.0  | 10152 | 1.4697          | 20.2263 | 51.4173 |
| 1.4188        | 37.0  | 10434 | 1.4687          | 20.244  | 51.394  |
| 1.412         | 38.0  | 10716 | 1.4681          | 20.2699 | 51.5993 |
| 1.412         | 39.0  | 10998 | 1.4676          | 20.2758 | 51.4473 |
| 1.4087        | 40.0  | 11280 | 1.4677          | 20.3152 | 51.4433 |

### Framework versions

- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0