
t5-base-finetuned-summscreen-bestval-100-genlen-10-epochs

This model is a fine-tuned version of t5-base on the SummScreen dataset. It achieves the following results on the evaluation set:

  • Loss: 2.9499
  • Rouge1: 29.2769
  • Rouge2: 5.5288
  • Rougel: 17.5141
  • Rougelsum: 25.345
  • Gen Len: 86.7596
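
The checkpoint can be loaded with the standard transformers Seq2Seq classes. The sketch below is illustrative only: the repository id is a placeholder for wherever this checkpoint is hosted, and the 100-token generation cap is inferred from the "100-genlen" part of the model name rather than stated elsewhere in the card.

```python
# Minimal usage sketch. The repo id is a placeholder; substitute the actual
# Hugging Face repository hosting this checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-username/t5-base-finetuned-summscreen-bestval-100-genlen-10-epochs"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 summarization checkpoints are usually prompted with the "summarize: " prefix.
text = "summarize: " + "<episode transcript goes here>"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# max_length=100 mirrors the generation length suggested by the model name (assumption).
summary_ids = model.generate(**inputs, max_length=100, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```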

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
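
For reference, the hyperparameters above roughly correspond to a `Seq2SeqTrainingArguments` configuration like the sketch below. Only the listed values come from this card; the output directory, evaluation strategy, `predict_with_generate`, and generation length are assumptions.

```python
# Hedged sketch of how the listed hyperparameters map onto
# transformers.Seq2SeqTrainingArguments. Values not listed in the card are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-summscreen-bestval-100-genlen-10-epochs",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumed: the results table reports metrics once per epoch
    predict_with_generate=True,   # assumed: required to compute ROUGE during evaluation
    generation_max_length=100,    # assumed from the "100-genlen" model name
)
```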

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.2858        | 0.99  | 3500  | 3.1088          | 27.292  | 4.7003 | 16.4176 | 23.7005   | 84.6259 |
| 3.2085        | 1.99  | 7000  | 3.0425          | 28.3997 | 5.0233 | 17.0582 | 24.5815   | 86.8322 |
| 3.107         | 2.98  | 10500 | 3.0110          | 28.7042 | 5.3326 | 17.429  | 24.7691   | 84.8549 |
| 3.074         | 3.98  | 14000 | 2.9886          | 28.8975 | 5.371  | 17.302  | 25.0658   | 86.6327 |
| 2.9899        | 4.97  | 17500 | 2.9769          | 29.0185 | 5.6415 | 17.6407 | 24.7669   | 82.8435 |
| 2.9857        | 5.97  | 21000 | 2.9647          | 29.5476 | 5.5332 | 17.4855 | 25.2605   | 87.3152 |
| 2.9542        | 6.96  | 24500 | 2.9586          | 29.4713 | 5.5729 | 17.5815 | 25.2393   | 88.0295 |
| 2.9301        | 7.96  | 28000 | 2.9536          | 29.8483 | 5.7355 | 17.8895 | 25.774    | 87.195  |
| 2.9118        | 8.95  | 31500 | 2.9503          | 29.3014 | 5.5802 | 17.5983 | 25.3476   | 86.0476 |
| 2.9033        | 9.95  | 35000 | 2.9499          | 29.2769 | 5.5288 | 17.5141 | 25.345    | 86.7596 |
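
The ROUGE columns above are the metrics typically produced by a `compute_metrics` hook during evaluation. A minimal scoring sketch follows; the use of the `evaluate` library is an assumption, since the card does not state how the scores were computed.

```python
# Minimal sketch of ROUGE scoring for generated summaries, assuming the
# `evaluate` library; the exact evaluation code is not specified in this card.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the gang argues about the apartment"]                           # example model output
references = ["the gang has a lengthy argument about who gets the apartment"]   # example reference

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Keys include rouge1, rouge2, rougeL, and rougeLsum, matching the columns above
# (reported on a 0-100 scale in the table).
print({k: round(v * 100, 4) for k, v in scores.items()})
```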

Framework versions

  • Transformers 4.26.0
  • PyTorch 1.13.1
  • Datasets 2.9.0
  • Tokenizers 0.13.2