
t5-base-finetuned-en-to-tr

This model is a fine-tuned version of t5-base on the setimes dataset. It achieves the following results on the evaluation set:

  • Loss: 4.7522
  • Bleu: 13.0464
  • Gen Len: 17.5633
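
As a quick illustration (not part of the original card), the model can be loaded for inference with the standard Transformers seq2seq API. The repository id Justice0893/t5-base-finetuned-en-to-tr comes from this card; the "translate English to Turkish:" task prefix is an assumption based on the usual T5 translation setup and may not match how the model was actually fine-tuned.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Repository id taken from the card; the task prefix below is an assumption
# based on the standard T5 translation convention.
model_id = "Justice0893/t5-base-finetuned-en-to-tr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "translate English to Turkish: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```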

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
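
For reference, a minimal sketch of how these settings might map onto Seq2SeqTrainingArguments is shown below. This is not the original training script; output_dir, evaluation_strategy, and predict_with_generate are placeholders and assumptions, not values stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: maps the hyperparameters listed above onto the Trainer API.
# Unlisted options (output_dir, evaluation/generation settings) are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-en-to-tr",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch eval
    predict_with_generate=True,   # assumption: required to compute BLEU / Gen Len
)
```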

Training results

| Training Loss | Epoch | Step   | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 7.6012        | 1.0   | 12851  | 7.4685          | 2.2376  | 18.1521 |
| 7.0962        | 2.0   | 25702  | 6.8819          | 4.4861  | 18.0448 |
| 6.6712        | 3.0   | 38553  | 6.4648          | 6.1268  | 18.014  |
| 6.3473        | 4.0   | 51404  | 6.1421          | 7.6084  | 17.9027 |
| 6.1161        | 5.0   | 64255  | 5.8969          | 8.4021  | 17.7949 |
| 5.9178        | 6.0   | 77106  | 5.6935          | 9.37    | 17.8392 |
| 5.7331        | 7.0   | 89957  | 5.5226          | 9.8004  | 17.8893 |
| 5.5981        | 8.0   | 102808 | 5.3886          | 10.3562 | 17.8955 |
| 5.4867        | 9.0   | 115659 | 5.2807          | 10.876  | 17.7434 |
| 5.3722        | 10.0  | 128510 | 5.1751          | 11.1864 | 17.7313 |
| 5.2739        | 11.0  | 141361 | 5.0924          | 11.6223 | 17.6476 |
| 5.2339        | 12.0  | 154212 | 5.0033          | 11.8264 | 17.6996 |
| 5.1754        | 13.0  | 167063 | 4.9500          | 12.1915 | 17.6447 |
| 5.0981        | 14.0  | 179914 | 4.8958          | 12.4578 | 17.5782 |
| 5.0478        | 15.0  | 192765 | 4.8458          | 12.6398 | 17.5753 |
| 4.9778        | 16.0  | 205616 | 4.8142          | 12.6034 | 17.5681 |
| 4.9689        | 17.0  | 218467 | 4.7840          | 12.807  | 17.5816 |
| 4.9368        | 18.0  | 231318 | 4.7680          | 13.038  | 17.5614 |
| 4.9829        | 19.0  | 244169 | 4.7572          | 13.0403 | 17.5407 |
| 4.9434        | 20.0  | 257020 | 4.7522          | 13.0464 | 17.5633 |
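
The Bleu and Gen Len columns are presumably computed over generated translations at each evaluation epoch. A minimal sketch of such a metric function is shown below; the use of the evaluate library with sacrebleu, and the exact definition of gen_len, are assumptions rather than details stated in the card.

```python
import evaluate
import numpy as np

# Assumption: BLEU via sacrebleu and gen_len as the mean generated length,
# mirroring the columns in the results table above.
sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds, tokenizer):
    preds, labels = eval_preds
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # Replace -100 (ignored label positions) with the pad token id before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = sacrebleu.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
    gen_len = np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
    return {"bleu": result["score"], "gen_len": gen_len}
```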

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.2.1+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1