flan-t5

This model is a fine-tuned version of google/flan-t5-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6990
  • ROUGE-1: 67.5685
  • ROUGE-2: 60.4456
  • ROUGE-L: 66.1518
  • ROUGE-Lsum: 66.4684
  • Gen Len: 18.9254 (average length of generated sequences, in tokens)

Model description

More information needed

Intended uses & limitations

More information needed
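
The card does not document a concrete use case, but the ROUGE and Gen Len metrics point to a sequence-to-sequence task such as summarization. Below is a minimal inference sketch; the local checkpoint path ./flan-t5, the prompt, and the generation settings are illustrative assumptions, not taken from the card.

```python
# Minimal inference sketch. The checkpoint path "./flan-t5" and the
# summarization-style prompt are assumptions; the card does not specify them.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_path = "./flan-t5"  # hypothetical path to this fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

text = "summarize: The quick brown fox jumped over the lazy dog ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# max_new_tokens=20 roughly mirrors the reported Gen Len; adjust for your task.
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```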

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative code sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
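
A hedged reconstruction of these settings as Hugging Face Seq2SeqTrainingArguments is sketched below. Only the listed hyperparameters come from the card; the output directory, evaluation strategy, and predict_with_generate flag are assumptions, and the stated Adam configuration matches the Trainer's default optimizer.

```python
# Sketch of the reported hyperparameters as HF training arguments.
# Only the values from the list above are from the card; everything
# else (output_dir, evaluation_strategy, predict_with_generate) is assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5",            # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    predict_with_generate=True,      # assumed, since ROUGE/Gen Len are reported
    evaluation_strategy="epoch",     # assumed from the per-epoch results table
)
# The optimizer (Adam with betas=(0.9, 0.999), epsilon=1e-8) matches the
# Trainer's default AdamW configuration, so no extra settings are needed.
```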

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|--------------:|------:|-----:|----------------:|--------:|--------:|--------:|-----------:|--------:|
| No log        | 1.0   | 135  | 0.7189          | 66.6756 | 59.3564 | 65.186  | 65.5624    | 18.8657 |
| No log        | 2.0   | 270  | 0.7073          | 67.6204 | 60.4625 | 66.1983 | 66.5708    | 18.9254 |
| No log        | 3.0   | 405  | 0.7045          | 67.4291 | 60.2768 | 65.9877 | 66.3244    | 18.9254 |
| 0.6373        | 4.0   | 540  | 0.6990          | 67.5685 | 60.4456 | 66.1518 | 66.4684    | 18.9254 |
| 0.6373        | 5.0   | 675  | 0.7041          | 67.5685 | 60.4456 | 66.1518 | 66.4684    | 18.9254 |
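
For reference, ROUGE numbers like those above are typically produced with the evaluate library (scores scaled to percentages, as in the table; Gen Len is the mean token length of the generated predictions). A minimal sketch with made-up predictions and references, assuming the evaluate and rouge_score packages are installed:

```python
# Sketch of computing ROUGE the way Trainer-based seq2seq scripts do.
# The predictions/references are made-up; real scores come from the eval set.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

scores = rouge.compute(predictions=predictions, references=references)
# Scale to percentages to match the table (rouge1, rouge2, rougeL, rougeLsum).
print({k: round(v * 100, 4) for k, v in scores.items()})
```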

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2