
# t5-large-finetuned-amazon-test_2

This model is a fine-tuned version of t5-large on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set:

- Loss: 3.3001
- Rouge1: 15.7824
- Rouge2: 5.363
- RougeL: 15.6395
- RougeLsum: 15.4975
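A checkpoint published under this name can be loaded for summarization with the standard `transformers` pipeline API. The sketch below is illustrative: the sample review text is invented, and the generation length limits are arbitrary choices, not values from this card.

```python
# Minimal inference sketch using the Hugging Face pipeline API.
# The checkpoint name is taken from this card; the review text is a
# made-up example input.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="NICFRU/t5-large-finetuned-amazon-test_2",
)

review = (
    "I bought this kettle a month ago. It boils quickly and the handle "
    "stays cool, but the lid is a bit stiff to open."
)

# max_length / min_length bound the generated summary length in tokens.
summary = summarizer(review, max_length=30, min_length=5)[0]["summary_text"]
print(summary)
```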

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
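The hyperparameters above map onto a `Seq2SeqTrainingArguments` configuration roughly like the following sketch. The `output_dir` is a placeholder, and `evaluation_strategy="epoch"` and `predict_with_generate=True` are inferred from the per-epoch results table and the ROUGE metrics, not stated explicitly in this card.

```python
# Sketch of the training configuration implied by the hyperparameters above.
# The Adam betas/epsilon shown are transformers' defaults, so they are spelled
# out here only for clarity; output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-large-finetuned-amazon-test_2",  # placeholder path
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    evaluation_strategy="epoch",  # inferred: the table reports one eval per epoch
    predict_with_generate=True,   # inferred: ROUGE requires generated summaries
)
```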

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | RougeL  | RougeLsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 1.7536        | 1.0   | 1314  | 2.6889          | 15.4715 | 7.1041 | 15.012  | 15.0539   |
| 1.9856        | 2.0   | 2628  | 2.5947          | 15.5517 | 7.607  | 15.2437 | 15.1322   |
| 1.7233        | 3.0   | 3942  | 2.7304          | 16.8701 | 7.3701 | 16.6526 | 16.4333   |
| 1.5162        | 4.0   | 5256  | 2.8605          | 16.278  | 6.2082 | 16.114  | 16.0542   |
| 1.3537        | 5.0   | 6570  | 2.9747          | 15.8483 | 5.8158 | 15.5124 | 15.4393   |
| 1.2302        | 6.0   | 7884  | 3.1345          | 15.6894 | 5.516  | 15.5049 | 15.2841   |
| 1.141         | 7.0   | 9198  | 3.2224          | 15.7304 | 6.131  | 15.7835 | 15.5706   |
| 1.0733        | 8.0   | 10512 | 3.3001          | 15.7824 | 5.363  | 15.6395 | 15.4975   |

### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3