
flan-t5-base-samsum

This model is a fine-tuned version of google/flan-t5-base on the samsum dialogue-summarization dataset. It achieves the following results on the evaluation set (these numbers match the epoch-3 checkpoint in the training results below):

  • Loss: 1.3721
  • Rouge1: 47.5
  • Rouge2: 23.9237
  • Rougel: 40.0646
  • Rougelsum: 43.6387
  • Gen Len: 17.2405
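As a quick-start, a summarization pipeline can load the model directly from the Hub. This is a minimal usage sketch, assuming the `transformers` library is installed; the example dialogue is illustrative, not taken from samsum.

```python
# Minimal usage sketch: summarize a chat dialogue with this fine-tuned model.
# Assumes `transformers` (and a backend such as PyTorch) is installed.
from transformers import pipeline

summarizer = pipeline("summarization", model="yingzwang/flan-t5-base-samsum")

# Illustrative dialogue in the samsum chat format (speaker: utterance per line).
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

print(summarizer(dialogue)[0]["summary_text"])
```

Generated summaries are short by design; the Gen Len metric above suggests outputs around 17 tokens on average.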

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
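The list above can be sketched as a `Seq2SeqTrainingArguments` configuration. This is a hedged reconstruction, not the author's original training script; the `output_dir` name is illustrative, and the Adam settings shown are the `Trainer` defaults, which match the values listed.

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# output_dir is illustrative; Adam betas/epsilon are the Trainer defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum",   # illustrative name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Optimizer: Adam with betas=(0.9, 0.999), epsilon=1e-08 (Trainer default).
)
```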

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4398        | 1.0   | 1842 | 1.3823          | 47.2415 | 23.7419 | 39.5142 | 43.4177   | 17.0354 |
| 1.3564        | 2.0   | 3684 | 1.3747          | 46.833  | 23.308  | 39.2838 | 42.9821   | 17.3077 |
| 1.2776        | 3.0   | 5526 | 1.3721          | 47.5    | 23.9237 | 40.0646 | 43.6387   | 17.2405 |
| 1.2345        | 4.0   | 7368 | 1.3744          | 47.5599 | 23.9714 | 40.06   | 43.8107   | 17.2454 |
| 1.194         | 5.0   | 9210 | 1.3760          | 47.7868 | 24.0949 | 40.2021 | 43.789    | 17.2466 |
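ROUGE scores like those in the table can be reproduced with the `evaluate` library. This is a hedged sketch under the assumption that metrics were computed with the standard `rouge` metric; the prediction/reference pair below is invented for illustration, and the exact numbers will differ from the table.

```python
# Sketch: computing ROUGE scores with the `evaluate` library.
# The prediction/reference pair is illustrative, not from samsum.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["amanda baked cookies and will bring jerry some tomorrow"]
references = ["Amanda baked cookies and will bring Jerry some tomorrow."]

scores = rouge.compute(predictions=predictions, references=references)
# Scale from 0-1 to the 0-100 range used in the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```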

Framework versions

  • Transformers 4.27.4
  • Pytorch 2.0.0+cu117
  • Datasets 2.11.0
  • Tokenizers 0.13.3
