textGeneration_06

This model is a fine-tuned version of t5-small on the xsum dataset. It achieves the following results on the evaluation set:

  • Loss: 3.7405
  • Rouge1: 12.1154
  • Rouge2: 1.7291
  • Rougel: 9.4055
  • Rougelsum: 11.035
  • Gen Len: 937.368
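
Since this card does not yet document intended uses, here is a minimal, hypothetical inference sketch. It assumes the checkpoint is available on the Hub as Seungjun/textGeneration_06 (the repo id associated with this card); the input text and generation settings are placeholders, not values taken from the evaluation above:

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint from the Hub (repo id assumed from this card).
summarizer = pipeline("summarization", model="Seungjun/textGeneration_06")

article = (
    "Placeholder news article text goes here. In practice this would be a "
    "full-length document, since the model was fine-tuned on xsum articles."
)

# Generate a short summary; these generation settings are illustrative defaults,
# not the settings used to produce the evaluation numbers above.
print(summarizer(article, max_length=64, min_length=8, do_sample=False)[0]["summary_text"])
```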

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
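
As an illustration, the values above correspond roughly to a Seq2SeqTrainingArguments configuration like the sketch below, assuming the standard Transformers Seq2SeqTrainer summarization setup. The actual training script is not included in this card, and the output directory plus the evaluation/generation settings are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="textGeneration_06",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",      # assumption: the table below reports per-epoch metrics
    predict_with_generate=True,       # needed so ROUGE can be computed from generated summaries
)
```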

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|---------|--------|--------|-----------|---------|
| 4.2168        | 1.0   | 1250 | 3.8405          | 12.1695 | 1.7457 | 9.3821 | 11.0907   | 896.12  |
| 4.1005        | 2.0   | 2500 | 3.7840          | 11.933  | 1.7034 | 9.3269 | 10.8944   | 938.399 |
| 4.0678        | 3.0   | 3750 | 3.7579          | 12.0066 | 1.7388 | 9.3301 | 10.9558   | 936.662 |
| 4.0411        | 4.0   | 5000 | 3.7445          | 12.0542 | 1.7188 | 9.4032 | 11.0116   | 932.645 |
| 4.0359        | 5.0   | 6250 | 3.7405          | 12.1154 | 1.7291 | 9.4055 | 11.035    | 937.368 |
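
The ROUGE scores in this card are on a 0–100 scale. Below is a minimal sketch of computing such scores with the evaluate library; the library choice is an assumption, as this card does not state how the metrics were produced:

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy prediction/reference pair; during training these would be the model's
# generated summaries and the xsum reference summaries.
scores = rouge.compute(
    predictions=["the storm caused major damage in the town"],
    references=["severe flooding damaged large parts of the town"],
)

# Scale to the 0-100 range used in the tables above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```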

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu118
  • Datasets 2.11.0
  • Tokenizers 0.13.3