
t5-base-finetuned-p7_V2

This model is a fine-tuned version of t5-base on an unspecified dataset (the dataset is not documented in this card). It achieves the following results on the evaluation set (a minimal loading sketch follows the results):

  • Loss: 0.5882
  • Rouge1: 53.6225
  • Rouge2: 43.8285
  • Rougel: 51.4683
  • Rougelsum: 51.8848
  • Gen Len: 18.2368
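
The checkpoint can be loaded with the transformers library. The snippet below is a minimal sketch only: the full hub repository id (namespace prefix), the downstream task, and the "summarize:" prompt prefix are assumptions, since none of them are documented in this card.

  # Minimal loading sketch; repository id and task are assumptions.
  from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

  model_id = "t5-base-finetuned-p7_V2"  # hypothetical id; prepend the actual namespace
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

  # T5 models are usually prompted with a task prefix; "summarize:" is an assumption here.
  inputs = tokenizer("summarize: <your input text>", return_tensors="pt")
  outputs = model.generate(**inputs, max_length=20)  # Gen Len above averages ~18 tokens
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))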

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction as training arguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
  • mixed_precision_training: Native AMP
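
The values above map directly onto the standard Seq2SeqTrainingArguments used with the transformers Trainer. The sketch below is a reconstruction under that assumption, not the original training script; the output directory and the evaluation/generation settings are guesses consistent with the per-epoch results reported below.

  # Hedged reconstruction of the listed hyperparameters; not the original script.
  from transformers import Seq2SeqTrainingArguments

  training_args = Seq2SeqTrainingArguments(
      output_dir="t5-base-finetuned-p7_V2",  # assumed
      learning_rate=2e-5,
      per_device_train_batch_size=1,
      per_device_eval_batch_size=1,
      seed=42,
      adam_beta1=0.9,
      adam_beta2=0.999,
      adam_epsilon=1e-8,
      lr_scheduler_type="linear",
      num_train_epochs=3,
      fp16=True,                     # "Native AMP" mixed precision
      evaluation_strategy="epoch",   # assumed; matches the per-epoch results below
      predict_with_generate=True,    # assumed; needed for ROUGE and Gen Len at eval time
  )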

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| 0.7305        | 1.0   | 910  | 0.6238          | 52.767  | 42.9846 | 50.5281 | 50.9489   | 18.3125 |
| 0.4819        | 2.0   | 1820 | 0.5835          | 53.6526 | 44.0044 | 51.6286 | 51.9821   | 18.2533 |
| 0.4193        | 3.0   | 2730 | 0.5882          | 53.6225 | 43.8285 | 51.4683 | 51.8848   | 18.2368 |
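
The ROUGE columns (Rouge1, Rouge2, Rougel, Rougelsum) are in the 0-100 format produced by scaling the evaluate library's rouge metric by 100; the sketch below shows how scores in this form are typically computed. The actual evaluation code is not part of this card, so the predictions/references and the use_stemmer setting are assumptions.

  # Hedged sketch of how ROUGE scores in this format are usually obtained;
  # requires the "evaluate" and "rouge_score" packages.
  import evaluate

  rouge = evaluate.load("rouge")
  predictions = ["the generated summary"]   # placeholder model outputs
  references = ["the reference summary"]    # placeholder ground-truth summaries
  scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
  print({k: round(v * 100, 4) for k, v in scores.items()})
  # -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}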

Framework versions

  • Transformers 4.29.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3