t5-xl-v2

This model is a PEFT adapter fine-tuned from google/flan-t5-xl on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.8350
  • Rouge1: 29.9535
  • Rouge2: 11.4157
  • Rougel: 21.3830
  • Rougelsum: 21.3793
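
The ROUGE scores above are F-measures over n-gram overlap between generated and reference text. A minimal pure-Python sketch of ROUGE-1 F1 for intuition (the real metric, e.g. the rouge_score package used by evaluate, additionally applies stemming and bootstrap aggregation):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each token counts at most as often as it appears in both.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(100 * rouge1_f1("the cat sat on the mat",
                            "the cat lay on the mat"), 2))  # → 83.33
```

Rouge2 is the same computation over bigrams, and RougeL/RougeLsum score the longest common subsequence instead of fixed n-grams.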

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 24
  • eval_batch_size: 32
  • seed: 7
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2
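
The linear scheduler decays the learning rate from its peak to zero over the course of training. A minimal sketch of that decay, assuming zero warmup steps (the warmup count is not listed on this card; the step counts below are illustrative only):

```python
def linear_lr(step: int, total_steps: int, peak_lr: float = 4e-05) -> float:
    """Linearly decay the learning rate from peak_lr at step 0 to 0 at
    total_steps, matching a zero-warmup `linear` schedule."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return peak_lr * remaining

total = 1500                  # illustrative step count, not from this card
print(linear_lr(0, total))    # → 4e-05 (start of training)
print(linear_lr(750, total))  # → 2e-05 (halfway through)
print(linear_lr(total, total))  # → 0.0 (final step)
```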

Training results

Training Loss  Epoch  Step  Validation Loss  Rouge1   Rouge2   Rougel   Rougelsum
2.1401         0.34    300  1.8438           30.0703  11.5959  21.6066  21.6053
2.1300         0.67    600  1.8369           29.9460  11.6006  21.6609  21.6625
2.1364         1.01    900  1.8262           30.2597  11.8455  21.7202  21.7212
2.1301         1.35   1200  1.8301           30.1492  11.6922  21.6115  21.6151
2.1460         1.68   1500  1.8350           29.9535  11.4157  21.3830  21.3793

Framework versions

  • PEFT 0.10.0
  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
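
Because this checkpoint is a PEFT adapter rather than a full set of model weights, it is loaded on top of the base model. A hedged sketch with the library versions listed above (the adapter id below is a placeholder for wherever this card's weights are hosted, and imports are deferred so nothing downloads until the function is called):

```python
BASE_MODEL = "google/flan-t5-xl"  # base model named on this card
ADAPTER_ID = "t5-xl-v2"           # placeholder: replace with this adapter's full hub id

def load_adapted_model():
    # Deferred imports: require peft and transformers (versions listed above);
    # calling this downloads the base weights and attaches the adapter.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForSeq2SeqLM.from_pretrained(BASE_MODEL)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach adapter weights
    return tokenizer, model
```

Once loaded, the returned model is used like any Seq2Seq model, e.g. via model.generate on tokenized input.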
