flan_t5_small_finetuned

This model is a fine-tuned version of google/flan-t5-small on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3412
  • Rouge1: 44.9038
  • Rouge2: 22.7667
  • Rougel: 38.8789
  • Rougelsum: 41.7196
  • Gen Len: 16.92
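For readers unfamiliar with the metrics above: Rouge1 measures unigram overlap between generated and reference summaries, reported here as an F1 score scaled to 0–100. A minimal pure-Python sketch of that computation (whitespace tokenization assumed; the actual evaluation likely used the `rouge_score` package, which also applies stemming and normalization):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand_counts = Counter(candidate.split())
    ref_counts = Counter(reference.split())
    # Clipped overlap: each word counts at most as often as it appears in both.
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat", "the cat sat on the mat")
```

A reported Rouge1 of 44.9038 therefore corresponds to an average F1 of roughly 0.449 over the evaluation set.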

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
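With a linear scheduler and no warmup steps listed, the learning rate decays linearly from 5e-05 to 0 over the run. A small sketch of that schedule, assuming 35 total optimization steps (5 epochs × 7 steps per epoch, inferred from the results table below):

```python
def linear_lr(step: int, total_steps: int = 35, base_lr: float = 5e-05) -> float:
    """Linearly decay the learning rate from base_lr to 0 (no warmup assumed)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Learning rate after the first epoch (7 of 35 steps done): 80% of base_lr.
lr_after_epoch_1 = linear_lr(7)
```

This mirrors what the Transformers `get_linear_schedule_with_warmup` helper does when the warmup step count is zero.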

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 7    | 1.4537          | 43.8958 | 21.4426 | 37.7876 | 41.0621   | 17.4    |
| No log        | 2.0   | 14   | 1.4025          | 44.291  | 21.7752 | 37.6394 | 41.0092   | 17.1    |
| No log        | 3.0   | 21   | 1.3687          | 44.5803 | 22.5489 | 38.5959 | 41.6202   | 17.0    |
| No log        | 4.0   | 28   | 1.3487          | 44.6139 | 22.5884 | 38.6786 | 41.5311   | 16.94   |
| No log        | 5.0   | 35   | 1.3412          | 44.9038 | 22.7667 | 38.8789 | 41.7196   | 16.92   |

Framework versions

  • Transformers 4.30.2
  • Pytorch 1.13.1+cpu
  • Datasets 2.13.1
  • Tokenizers 0.13.3