
T5_wmt14_En_Fr_1million

This model is a fine-tuned version of google-t5/t5-small on a 1-million-pair subset of the WMT14 English-French dataset. It achieves the following results on the evaluation set (a sketch of how these metrics are computed follows the list):

  • Loss: 1.3618
  • Bleu: 8.7934
  • Gen Len: 17.9953
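
BLEU values like these are typically the sacrebleu score computed through the evaluate library, and Gen Len is the mean generated length in tokens, both reported by a Seq2SeqTrainer compute_metrics hook. A minimal sketch of that conventional setup; the original training script is not published, so the function and variable names below are illustrative:

```python
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # -100 marks ignored label positions; restore pad ids before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = sacrebleu.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
    # "Gen Len": mean number of non-pad tokens in the generated sequences
    gen_len = np.mean(
        [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
    )
    return {"bleu": result["score"], "gen_len": gen_len}
```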

Model description

T5_wmt14_En_Fr_1million is google-t5/t5-small fine-tuned for English-to-French machine translation. It keeps T5's text-to-text format, so inputs typically carry the task prefix "translate English to French: ".

Intended uses & limitations

The model can be used for English-to-French translation of general-domain text, either through the translation pipeline (see the usage sketch below) or by calling model.generate directly. Note that validation loss increases steadily after epoch 1 while BLEU trends downward (8.79 at epoch 20 vs. 9.30 at epoch 1; see the training results), so the final checkpoint is likely overfit relative to earlier ones.
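
A minimal inference sketch, assuming the checkpoint is loadable from the Hub under the repo id sriram-sanjeev9s/T5_wmt14_En_Fr_1million and that it keeps t5-small's task_specific_params, in which case the pipeline adds the task prefix automatically:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a translation pipeline.
translator = pipeline(
    "translation_en_to_fr",
    model="sriram-sanjeev9s/T5_wmt14_En_Fr_1million",
)

result = translator("The weather is nice today.")
print(result[0]["translation_text"])
```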

Training and evaluation data

The model was trained on the wmt14 (fr-en) dataset from the Hugging Face Hub; per the model name, a 1-million-pair subset of the English-French training data was used. The exact subset selection and validation split are not documented.
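
A hedged sketch of preparing such a subset with the datasets library; the shuffle and seed below are assumptions, since the original selection is unknown:

```python
from datasets import load_dataset

# Load the WMT14 French-English pairs and take a 1M-example subset.
raw = load_dataset("wmt14", "fr-en")
train = raw["train"].shuffle(seed=42).select(range(1_000_000))

# Each example is a dict with "en" and "fr" fields.
pair = train[0]["translation"]
print(pair["en"], "->", pair["fr"])
```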

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 60
  • eval_batch_size: 60
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
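
A minimal sketch of a matching Seq2SeqTrainingArguments configuration; output_dir and evaluation_strategy are assumptions, and the Adam betas/epsilon listed above are the optimizer defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="T5_wmt14_En_Fr_1million",
    learning_rate=1e-3,
    per_device_train_batch_size=60,
    per_device_eval_batch_size=60,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",   # matches the per-epoch rows in the results table
    predict_with_generate=True,    # required to compute BLEU/Gen Len on generations
)
```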

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.0796        | 1.0   | 1667  | 1.1872          | 9.2959 | 18.0253 |
| 1.01          | 2.0   | 3334  | 1.2029          | 9.1594 | 18.0187 |
| 0.9686        | 3.0   | 5001  | 1.2114          | 9.2836 | 18.0123 |
| 0.9366        | 4.0   | 6668  | 1.2261          | 9.18   | 17.995  |
| 0.8999        | 5.0   | 8335  | 1.2319          | 9.2754 | 17.9793 |
| 0.8769        | 6.0   | 10002 | 1.2413          | 9.1705 | 18.026  |
| 0.8536        | 7.0   | 11669 | 1.2502          | 9.036  | 17.9987 |
| 0.8273        | 8.0   | 13336 | 1.2633          | 9.2003 | 18.006  |
| 0.8125        | 9.0   | 15003 | 1.2740          | 9.0991 | 18.009  |
| 0.7905        | 10.0  | 16670 | 1.2835          | 8.9005 | 18.007  |
| 0.774         | 11.0  | 18337 | 1.2943          | 9.0676 | 17.9967 |
| 0.76          | 12.0  | 20004 | 1.3023          | 9.0644 | 18.0227 |
| 0.7358        | 13.0  | 21671 | 1.3125          | 8.9858 | 18.0027 |
| 0.7238        | 14.0  | 23338 | 1.3204          | 9.0178 | 18.0073 |
| 0.7143        | 15.0  | 25005 | 1.3317          | 8.9826 | 18.015  |
| 0.6988        | 16.0  | 26672 | 1.3402          | 8.9224 | 18.0073 |
| 0.6829        | 17.0  | 28339 | 1.3500          | 8.9307 | 17.996  |
| 0.6776        | 18.0  | 30006 | 1.3517          | 8.8798 | 17.9987 |
| 0.6695        | 19.0  | 31673 | 1.3585          | 8.895  | 17.9967 |
| 0.6637        | 20.0  | 33340 | 1.3618          | 8.7934 | 17.9953 |

Framework versions

  • Transformers 4.32.1
  • PyTorch 1.12.1
  • Datasets 2.18.0
  • Tokenizers 0.13.2