flan_t5_small_finetuned_anirbanrc

This model is a fine-tuned version of google/flan-t5-small on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5172
  • Rouge1: 43.2639
  • Rouge2: 20.726
  • Rougel: 37.0774
  • Rougelsum: 39.6232
  • Gen Len: 16.92
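Since this checkpoint is hosted on the Hub under the repo id shown on this card, it can be loaded directly with the `transformers` summarization pipeline. A minimal sketch (the example dialogue is illustrative, in the style of the samsum dataset):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
# (repo id taken from this card; adjust if you host the model elsewhere).
summarizer = pipeline(
    "summarization",
    model="AnirbanRC/flan_t5_small_finetuned_anirbanrc",
)

# samsum-style chat dialogue (illustrative input, not from the dataset)
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you tomorrow :-)"
)

summary = summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"]
print(summary)
```

Note that the average generated length on the evaluation set was about 17 tokens, so short `max_length` values are reasonable for this model.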

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 7    | 1.6379          | 42.0058 | 18.6227 | 35.3019 | 38.6413   | 17.36   |
| No log        | 2.0   | 14   | 1.5869          | 43.938  | 20.3595 | 36.876  | 40.0421   | 17.14   |
| No log        | 3.0   | 21   | 1.5483          | 43.3723 | 20.3935 | 36.9286 | 39.6476   | 17.0    |
| No log        | 4.0   | 28   | 1.5255          | 43.9774 | 21.5464 | 37.8954 | 40.5009   | 16.9    |
| No log        | 5.0   | 35   | 1.5172          | 43.2639 | 20.726  | 37.0774 | 39.6232   | 16.92   |
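The Rouge1 column above is a unigram-overlap F-measure scaled by 100 (the card's scores come from the standard ROUGE implementation, which also applies stemming). A simplified pure-Python sketch of ROUGE-1, to show what the metric measures:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F-measure: F1 over clipped unigram overlap
    (whitespace tokenization, no stemming)."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum((ref_counts & cand_counts).values())  # clipped counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat sat"))  # → 0.666...
```

Rouge2 is the same computation over bigrams, and RougeL/RougeLsum use longest-common-subsequence overlap instead of n-gram counts.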

Framework versions

  • Transformers 4.30.2
  • Pytorch 1.13.1+cpu
  • Datasets 2.13.1
  • Tokenizers 0.13.3