
t5-small-finetuned-cnn-v2

This model is a fine-tuned version of t5-small on the cnn_dailymail dataset for abstractive summarization. It achieves the following results on the evaluation set (a brief usage sketch follows the metrics):

  • Loss: 1.5474
  • Rouge1: 35.154
  • Rouge2: 18.683
  • RougeL: 30.8481
  • RougeLsum: 32.9638
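
As a usage illustration only (the card does not document an inference recipe), the checkpoint can be loaded through the Transformers seq2seq API. The `summarize:` prefix and the generation settings below follow the common T5 summarization recipe and are assumptions, not values taken from this card.

```python
# Minimal inference sketch; prefix and generation settings are assumed, not from the card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ubikpt/t5-small-finetuned-cnn-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Your news article text goes here."
# "summarize: " is the usual T5 task prefix; truncation keeps inputs within the encoder limit.
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```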

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a hedged sketch of how they map onto the Trainer API follows the list:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
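
The sketch below shows one way these values map onto `Seq2SeqTrainingArguments`. Only the hyperparameters listed above come from the card; `output_dir` and anything omitted here are illustrative assumptions, not the author's actual training script.

```python
# Hedged sketch: only the listed hyperparameter values are from the card; the rest is assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-cnn-v2",  # assumed output directory name
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```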

Training results

| Training Loss | Epoch | Step   | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
|---------------|-------|--------|-----------------|---------|---------|---------|-----------|
| 1.8823        | 1.0   | 35890  | 1.5878          | 34.9676 | 18.4927 | 30.6753 | 32.7702   |
| 1.7871        | 2.0   | 71780  | 1.5709          | 34.9205 | 18.5556 | 30.6514 | 32.745    |
| 1.7507        | 3.0   | 107670 | 1.5586          | 34.9825 | 18.4964 | 30.6724 | 32.7644   |
| 1.7253        | 4.0   | 143560 | 1.5584          | 35.074  | 18.6171 | 30.8007 | 32.9132   |
| 1.705         | 5.0   | 179450 | 1.5528          | 35.023  | 18.5787 | 30.7014 | 32.8396   |
| 1.6894        | 6.0   | 215340 | 1.5518          | 35.0583 | 18.6754 | 30.791  | 32.8814   |
| 1.6776        | 7.0   | 251230 | 1.5468          | 35.2236 | 18.6812 | 30.8944 | 33.0362   |
| 1.6687        | 8.0   | 287120 | 1.5474          | 35.154  | 18.683  | 30.8481 | 32.9638   |
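
The exact evaluation script is not documented in the card; the ROUGE values above appear to be F-measures scaled to 0-100. The sketch below shows one common way to compute comparable scores with the `evaluate` library; the predictions and references are placeholders.

```python
# Illustrative ROUGE computation; the card's exact evaluation setup is not documented.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["generated summary text"]   # placeholder model outputs
references = ["reference summary text"]    # placeholder CNN/DailyMail highlights
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # F-measures in [0, 1]; multiply by 100 to compare with the table above
```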

Framework versions

  • Transformers 4.14.0
  • Pytorch 1.5.0
  • Datasets 2.3.2
  • Tokenizers 0.10.3