
flan-t5-base-srbd

This model is a fine-tuned version of google/flan-t5-base on the srbd1_v2_annotated_segmented dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2061
  • Rouge1: 73.9926
  • Rouge2: 65.6762
  • Rougel: 73.0659
  • Rougelsum: 73.9428
  • Gen Len: 15.7754

Model description

More information needed

Intended uses & limitations

More information needed
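
While the intended uses are not yet documented, the checkpoint can be loaded like any Transformers seq2seq model. A minimal sketch, assuming a text-to-text prompt (the input string below is a hypothetical placeholder; the actual prompt format depends on the srbd1_v2_annotated_segmented dataset, which is not described here):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("Lancelot53/flan-t5-base-srbd")
model = AutoModelForSeq2SeqLM.from_pretrained("Lancelot53/flan-t5-base-srbd")

# Hypothetical input text; the real prompt format is not documented here
inputs = tokenizer("your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```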

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • num_epochs: 10
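
For reference, these values map onto a Seq2SeqTrainingArguments configuration roughly as sketched below. This is a reconstruction from the list above, not the original training script; output_dir and the evaluation settings are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameters listed above (Transformers 4.32.x);
# output_dir and the evaluation cadence are assumptions, not from the run.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-srbd",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=10,
    evaluation_strategy="steps",  # the results table reports eval every 200 steps
    eval_steps=200,
    predict_with_generate=True,   # needed to compute ROUGE and Gen Len at eval
)
```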

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 11.6626       | 0.86  | 200  | 0.5153          | 51.3405 | 41.8375 | 50.9028 | 51.2567   | 7.5616  |
| 0.4412        | 1.72  | 400  | 0.2881          | 73.5628 | 64.0434 | 72.4555 | 73.42     | 15.7495 |
| 0.3127        | 2.59  | 600  | 0.2365          | 73.3305 | 64.0389 | 72.3218 | 73.2283   | 15.6328 |
| 0.2683        | 3.45  | 800  | 0.2397          | 73.6254 | 64.704  | 72.6818 | 73.5721   | 15.8272 |
| 0.2462        | 4.31  | 1000 | 0.2197          | 73.8635 | 65.1757 | 72.8344 | 73.6793   | 15.7235 |
| 0.2206        | 5.17  | 1200 | 0.2142          | 74.0454 | 65.5104 | 73.1022 | 73.9321   | 15.8553 |
| 0.2073        | 6.03  | 1400 | 0.2087          | 73.8199 | 65.2791 | 72.8211 | 73.7232   | 15.743  |
| 0.1951        | 6.9   | 1600 | 0.2109          | 74.0189 | 65.1189 | 73.0456 | 73.9537   | 15.6782 |
| 0.1946        | 7.76  | 1800 | 0.2078          | 74.1418 | 65.7362 | 73.3347 | 74.0834   | 15.7775 |
| 0.1885        | 8.62  | 2000 | 0.2062          | 74.3159 | 65.8017 | 73.2801 | 74.0865   | 15.7905 |
| 0.1793        | 9.48  | 2200 | 0.2061          | 73.9926 | 65.6762 | 73.0659 | 73.9428   | 15.7754 |
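
Scores of this kind are typically produced with the evaluate library's rouge metric on decoded predictions and references; a minimal sketch (the example strings are placeholders, and evaluate returns fractions in [0, 1] while the table reports them scaled by 100):

```python
import evaluate

rouge = evaluate.load("rouge")  # requires the rouge_score package

# Placeholder decoded outputs and references
predictions = ["the decoded model output"]
references = ["the reference text"]

scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v * 100, 4) for k, v in scores.items()})
```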

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.0.0
  • Datasets 2.1.0
  • Tokenizers 0.13.3