
flan-t5-base-finetuned-scope-summarization

This model is a fine-tuned version of google/flan-t5-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2068
  • Rouge1: 21.1277
  • Rouge2: 12.8385
  • RougeL: 19.2508
  • RougeLsum: 19.1904
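
As a quick orientation, here is a minimal inference sketch. It assumes the checkpoint is published on the Hugging Face Hub as nandavikas16/flan-t5-base-finetuned-scope-summarization; adjust the model ID if you use a local copy.

```python
# Minimal usage sketch for this summarization checkpoint.
# Assumption: the model ID below matches the Hub repository name.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="nandavikas16/flan-t5-base-finetuned-scope-summarization",
)

document = "..."  # replace with the scope text to be summarized
result = summarizer(document, max_new_tokens=128)
print(result[0]["summary_text"])
```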

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
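
For reproducibility, the list above maps onto Seq2SeqTrainingArguments roughly as follows. This is a hedged sketch, not the original training script: output_dir is a placeholder, and per-epoch evaluation with predict_with_generate is inferred from the per-epoch ROUGE results below.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters; the Adam betas/epsilon above are the
# Transformers AdamW defaults, so they need no explicit arguments here.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-finetuned-scope-summarization",  # placeholder
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch table below
    predict_with_generate=True,   # assumption: needed to compute ROUGE during eval
)
```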

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.7131        | 1.0   | 40   | 0.3103          | 13.5236 | 5.6576  | 11.5554 | 11.5235   |
| 0.3577        | 2.0   | 80   | 0.2444          | 20.2029 | 12.8573 | 18.8596 | 18.7919   |
| 0.3116        | 3.0   | 120  | 0.2315          | 20.1102 | 12.5261 | 18.5794 | 18.6565   |
| 0.3041        | 4.0   | 160  | 0.2235          | 19.7317 | 12.0446 | 18.1138 | 18.1158   |
| 0.2856        | 5.0   | 200  | 0.2166          | 19.9465 | 12.3127 | 18.2483 | 18.1644   |
| 0.2972        | 6.0   | 240  | 0.2128          | 20.5461 | 12.4766 | 18.5225 | 18.5724   |
| 0.2787        | 7.0   | 280  | 0.2101          | 20.3830 | 12.8677 | 19.0210 | 18.9993   |
| 0.2837        | 8.0   | 320  | 0.2087          | 21.0603 | 12.7582 | 19.2214 | 19.1966   |
| 0.2803        | 9.0   | 360  | 0.2074          | 20.9823 | 12.7617 | 19.1207 | 19.0656   |
| 0.2696        | 10.0  | 400  | 0.2068          | 21.1277 | 12.8385 | 19.2508 | 19.1904   |
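
The ROUGE columns are the standard rouge1/rouge2/rougeL/rougeLsum scores scaled by 100. Below is a hedged sketch of how such numbers are typically computed with the evaluate library; it is not the exact evaluation code used for this card.

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["decoded model summary ..."]  # hypothetical model outputs
references = ["reference summary ..."]       # hypothetical gold summaries
scores = rouge.compute(predictions=predictions, references=references)
# evaluate returns fractions in [0, 1]; the table reports them scaled by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```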

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1
