
t5-base-finetuned-multi-news

This model is a fine-tuned version of t5-base on the multi_news dataset. It achieves the following results on the evaluation set:

  • Loss: 2.2612
  • Rouge1: 16.6322
  • Rouge2: 5.7556
  • RougeL: 12.4728
  • RougeLsum: 14.4814
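
For quick experimentation, the checkpoint can be loaded with the transformers summarization pipeline. The snippet below is a minimal sketch; the repository id is a placeholder and should be replaced with wherever this checkpoint is actually hosted.

```python
# Minimal usage sketch; "your-username/t5-base-finetuned-multi-news" is a
# placeholder repository id, not the confirmed location of this checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="your-username/t5-base-finetuned-multi-news",
)

article = "Long news article text goes here ..."
summary = summarizer(article, max_length=128, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```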

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto training arguments follows the list):

  • learning_rate: 5.6e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
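
As a point of reference, here is a minimal sketch of how these values map onto Seq2SeqTrainingArguments, assuming the standard transformers Seq2SeqTrainer API was used; output_dir, evaluation_strategy, and predict_with_generate are assumptions, not values reported above.

```python
# Sketch only: mirrors the hyperparameters listed above under the assumption
# that the standard transformers Seq2SeqTrainer / Seq2SeqTrainingArguments
# API was used. output_dir, evaluation_strategy, and predict_with_generate
# are assumptions, not reported values.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-multi-news",
    learning_rate=5.6e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    evaluation_strategy="epoch",   # assumed: the results table reports one evaluation per epoch
    predict_with_generate=True,    # assumed: needed to compute ROUGE during evaluation
)
```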

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | RougeL  | RougeLsum |
|---------------|-------|-------|-----------------|---------|--------|---------|-----------|
| 2.5641        | 1.0   | 1250  | 2.2636          | 16.6762 | 5.7127 | 12.4648 | 14.5499   |
| 2.3542        | 2.0   | 2500  | 2.2439          | 16.7381 | 5.7345 | 12.5515 | 14.5785   |
| 2.2487        | 3.0   | 3750  | 2.2388          | 16.8879 | 5.8792 | 12.6417 | 14.8011   |
| 2.1705        | 4.0   | 5000  | 2.2413          | 16.5921 | 5.7804 | 12.4539 | 14.4865   |
| 2.1083        | 5.0   | 6250  | 2.2459          | 16.6878 | 5.8593 | 12.5132 | 14.5473   |
| 2.0622        | 6.0   | 7500  | 2.2495          | 16.7267 | 5.7825 | 12.4800 | 14.5309   |
| 2.0297        | 7.0   | 8750  | 2.2581          | 16.6330 | 5.7480 | 12.4418 | 14.4796   |
| 2.0084        | 8.0   | 10000 | 2.2612          | 16.6322 | 5.7556 | 12.4728 | 14.4814   |
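
ROUGE scores like those in the table are commonly computed with the evaluate library; the snippet below is an illustrative sketch, not the exact evaluation script used for this model. Note that evaluate returns fractions, whereas the table above reports scores scaled by 100.

```python
# Illustrative sketch of computing ROUGE with the evaluate library; not the
# exact script used to produce the table above.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the generated summary"],   # model outputs
    references=["the reference summary"],    # gold summaries
    use_stemmer=True,
)
# Keys: rouge1, rouge2, rougeL, rougeLsum; values are fractions in [0, 1],
# whereas the table above reports them multiplied by 100.
print(scores)
```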

Framework versions

  • Transformers 4.28.1
  • PyTorch 2.0.0+cu118
  • Datasets 2.11.0
  • Tokenizers 0.13.3