
flan-t5-large-field-short

This model is a fine-tuned version of google/flan-t5-large on an unspecified dataset. It achieves the following results on the evaluation set (an illustrative scoring sketch follows the metric list):

  • Loss: 0.3511
  • Rouge1: 75.9188
  • Rouge2: 65.0316
  • RougeL: 74.1965
  • RougeLsum: 74.2928
  • Gen Len: 18.5368
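
The ROUGE figures are on the 0–100 scale that the Transformers Seq2SeqTrainer reports. The evaluation script for this model is not published; the following is a minimal sketch, assuming the Hugging Face `evaluate` library, of how such scores are typically produced (the prediction/reference strings are placeholders):

```python
# Illustrative only: the card does not publish its evaluation code.
# Sketch of computing ROUGE on the 0-100 scale with the `evaluate` library.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["a model-generated summary"]   # placeholder outputs
references = ["the reference summary"]        # placeholder gold targets

scores = rouge.compute(predictions=predictions, references=references)
# `evaluate` returns fractions in [0, 1]; scale by 100 to match the
# Rouge1/Rouge2/RougeL/RougeLsum figures reported above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```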

Model description

More information needed

Intended uses & limitations

More information needed
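
Although no usage details are published, the model can presumably be loaded with the standard Transformers seq2seq API. A minimal inference sketch, assuming the Hub id from this card's model tree (Sirshendu3e01/flan-t5-large-field-short) and a placeholder input:

```python
# Minimal inference sketch; the input text is a placeholder, since the
# target task/dataset is not documented on this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Sirshendu3e01/flan-t5-large-field-short"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Replace with an input from the task this model was fine-tuned for."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Average Gen Len on the eval set is ~18.5 tokens, so a modest
# max_new_tokens budget should cover typical outputs.
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```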

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative Trainer configuration follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
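
The training script itself is not published; the listed values map naturally onto Seq2SeqTrainingArguments as sketched below. The `output_dir` and the per-epoch evaluation/generation flags are assumptions inferred from the results table, not confirmed settings:

```python
# Illustrative reconstruction of the listed hyperparameters; the actual
# training script is not published, and `output_dir` is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-field-short",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    # Adam settings matching the betas/epsilon listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    # Assumptions: the results table shows one eval per epoch with
    # ROUGE/Gen Len, which requires generation during evaluation.
    eval_strategy="epoch",
    predict_with_generate=True,
)
```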

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 225  | 0.4477          | 70.3244 | 54.48   | 67.539  | 67.5367   | 18.5368 |
| No log        | 2.0   | 450  | 0.3952          | 71.2109 | 55.6557 | 68.192  | 68.0612   | 18.4211 |
| 0.5212        | 3.0   | 675  | 0.3703          | 72.5832 | 59.067  | 70.4191 | 70.3352   | 18.4526 |
| 0.5212        | 4.0   | 900  | 0.3427          | 73.1074 | 58.472  | 70.4641 | 70.4554   | 18.4947 |
| 0.3084        | 5.0   | 1125 | 0.3303          | 74.4224 | 61.8209 | 72.1663 | 72.1812   | 18.5158 |
| 0.3084        | 6.0   | 1350 | 0.3265          | 74.6821 | 62.8891 | 72.8496 | 72.9381   | 18.4842 |
| 0.2453        | 7.0   | 1575 | 0.3250          | 74.8631 | 63.6981 | 73.088  | 73.0406   | 18.5789 |
| 0.2453        | 8.0   | 1800 | 0.3283          | 75.7529 | 64.1127 | 73.7844 | 73.6245   | 18.5158 |
| 0.2037        | 9.0   | 2025 | 0.3425          | 75.0953 | 63.3301 | 73.4542 | 73.4519   | 18.5053 |
| 0.2037        | 10.0  | 2250 | 0.3387          | 75.6678 | 64.5649 | 73.939  | 73.8801   | 18.5368 |
| 0.2037        | 11.0  | 2475 | 0.3343          | 75.2038 | 64.2915 | 73.673  | 73.5405   | 18.4947 |
| 0.1746        | 12.0  | 2700 | 0.3443          | 74.5103 | 62.7511 | 72.5728 | 72.5861   | 18.4947 |
| 0.1746        | 13.0  | 2925 | 0.3520          | 75.7764 | 64.4211 | 73.9112 | 73.9166   | 18.5368 |
| 0.1551        | 14.0  | 3150 | 0.3530          | 75.385  | 64.0909 | 73.6109 | 73.7228   | 18.5368 |
| 0.1551        | 15.0  | 3375 | 0.3511          | 75.9188 | 65.0316 | 74.1965 | 74.2928   | 18.5368 |

Framework versions

  • Transformers 4.43.2
  • PyTorch 2.2.0a0+81ea7a4
  • Datasets 2.20.0
  • Tokenizers 0.19.1

Model tree for Sirshendu3e01/flan-t5-large-field-short

  • Finetuned from google/flan-t5-large