---
license: apache-2.0
base_model: google/flan-t5-base
tags:
  - generated_from_trainer
metrics:
  - rouge
model-index:
  - name: flan-t5-base-finetuned-FOMC
    results: []
---

# flan-t5-base-finetuned-FOMC

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.2531
- Rouge1: 33.7697
- Rouge2: 20.9968
- RougeL: 30.2984
- RougeLsum: 30.5446
- Gen Len: 19.0
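
The checkpoint can be loaded with the standard Transformers seq2seq API. A minimal inference sketch, assuming the model is published on the Hub under `Sandipan1994/flan-t5-base-finetuned-FOMC` (inferred from this card, not confirmed):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id, inferred from the model name on this card.
model_id = "Sandipan1994/flan-t5-base-finetuned-FOMC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# FLAN-T5 is a text-to-text model; prefix the input with a task instruction.
text = "summarize: The Federal Open Market Committee decided to maintain the target range ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Gen Len of 19.0 above is consistent with generate()'s default max_length of 20.
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```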

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto the Trainer API):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
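
As a point of reference, the hyperparameters above map onto `Seq2SeqTrainingArguments` from Transformers 4.33 roughly as follows; this is a sketch, with the dataset, model setup, and any arguments not listed on this card omitted or marked as assumptions:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-finetuned-FOMC",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    # Assumption: evaluation once per epoch, matching the per-epoch rows
    # in the results table below.
    evaluation_strategy="epoch",
    # Needed so ROUGE can be computed on generated text during evaluation.
    predict_with_generate=True,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default
# optimizer configuration, so it needs no explicit arguments here.
```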

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 10   | 2.6231          | 31.9634 | 19.4463 | 28.9802 | 29.3026   | 19.0    |
| No log        | 2.0   | 20   | 2.5645          | 31.6125 | 19.0593 | 28.572  | 28.9666   | 19.0    |
| No log        | 3.0   | 30   | 2.5120          | 31.9437 | 19.4395 | 28.9629 | 29.2817   | 19.0    |
| No log        | 4.0   | 40   | 2.4752          | 30.874  | 18.8778 | 27.8613 | 28.3752   | 19.0    |
| No log        | 5.0   | 50   | 2.4449          | 30.4001 | 18.425  | 27.2624 | 27.939    | 19.0    |
| No log        | 6.0   | 60   | 2.4177          | 31.0542 | 19.1125 | 27.9874 | 28.6255   | 19.0    |
| No log        | 7.0   | 70   | 2.3935          | 31.0542 | 19.1125 | 27.9874 | 28.6255   | 19.0    |
| No log        | 8.0   | 80   | 2.3778          | 31.0542 | 19.1125 | 27.9874 | 28.6255   | 19.0    |
| No log        | 9.0   | 90   | 2.3565          | 31.0542 | 19.1125 | 27.9874 | 28.6255   | 19.0    |
| No log        | 10.0  | 100  | 2.3415          | 31.0542 | 19.1125 | 27.9874 | 28.6255   | 19.0    |
| No log        | 11.0  | 110  | 2.3296          | 32.2319 | 19.728  | 29.1471 | 29.4452   | 19.0    |
| No log        | 12.0  | 120  | 2.3206          | 32.5462 | 19.9463 | 29.5345 | 29.6243   | 19.0    |
| No log        | 13.0  | 130  | 2.3123          | 32.5462 | 19.9463 | 29.5345 | 29.6243   | 19.0    |
| No log        | 14.0  | 140  | 2.3034          | 32.5462 | 19.9463 | 29.5345 | 29.3859   | 19.0    |
| No log        | 15.0  | 150  | 2.2966          | 32.5462 | 19.9463 | 29.5345 | 29.3859   | 19.0    |
| No log        | 16.0  | 160  | 2.2882          | 32.5462 | 19.9463 | 29.5345 | 29.3859   | 19.0    |
| No log        | 17.0  | 170  | 2.2813          | 32.5462 | 19.9463 | 29.5345 | 29.3859   | 19.0    |
| No log        | 18.0  | 180  | 2.2772          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 19.0  | 190  | 2.2728          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 20.0  | 200  | 2.2683          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 21.0  | 210  | 2.2643          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 22.0  | 220  | 2.2627          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 23.0  | 230  | 2.2615          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 24.0  | 240  | 2.2586          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 25.0  | 250  | 2.2573          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 26.0  | 260  | 2.2560          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 27.0  | 270  | 2.2552          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 28.0  | 280  | 2.2542          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 29.0  | 290  | 2.2533          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
| No log        | 30.0  | 300  | 2.2531          | 33.7697 | 20.9968 | 30.2984 | 30.5446   | 19.0    |
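
The ROUGE columns above follow the convention of the Hugging Face `evaluate` library's rouge metric, scaled to percentages. A minimal sketch of that computation, using placeholder predictions and references since the evaluation set is not described on this card:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder texts; the actual evaluation data is unknown.
predictions = ["the committee decided to raise the target range by 25 basis points"]
references = ["the committee raised the federal funds target range by 25 basis points"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Model cards report these scores multiplied by 100.
print({key: round(value * 100, 4) for key, value in scores.items()})
```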

### Framework versions

- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3