
summarization_t5base_en_to_kjven

This model is a fine-tuned version of t5-base on an unspecified dataset. It achieves the following results on the evaluation set (a loading sketch follows the metrics):

  • Loss: 0.8324
  • BLEU: 21.2143
  • Gen Len: 18.1685
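
A minimal loading sketch with the Transformers library, assuming the checkpoint's hub id matches the card title; the exact `owner/repo` path, and whether a T5 task prefix was used at fine-tuning time, are assumptions not documented on this card:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# NOTE: the hub id below is an assumption based on the card title;
# replace it with the actual `owner/repo` path of this checkpoint.
model_id = "summarization_t5base_en_to_kjven"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 checkpoints are usually task-prefixed; the prefix used during
# fine-tuning is not documented here, so plain input is passed as-is.
inputs = tokenizer(
    "In the beginning God created the heaven and the earth.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_length=20)  # Gen Len averaged ~18 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```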

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
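
For reference, a hedged sketch of equivalent `Seq2SeqTrainingArguments`; only the values listed above come from the card, while `output_dir`, the per-epoch evaluation cadence, and `predict_with_generate` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="summarization_t5base_en_to_kjven",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    # Adam betas/epsilon match the optimizer line above (also the
    # Trainer defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,                     # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",   # assumed: the card reports per-epoch metrics
    predict_with_generate=True,    # assumed: needed to compute BLEU / Gen Len
)
```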

Training results

| Training Loss | Epoch | Step  | Validation Loss | BLEU    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.0735        | 1.0   | 2860  | 0.9479          | 21.3913 | 18.1219 |
| 0.9776        | 2.0   | 5720  | 0.8750          | 22.1711 | 18.1307 |
| 0.918         | 3.0   | 8580  | 0.8317          | 22.6915 | 18.1381 |
| 0.8741        | 4.0   | 11440 | 0.8039          | 23.0856 | 18.1468 |
| 0.8489        | 5.0   | 14300 | 0.7841          | 23.3573 | 18.1455 |
| 0.8169        | 6.0   | 17160 | 0.7664          | 23.5073 | 18.1493 |
| 0.7965        | 7.0   | 20020 | 0.7532          | 23.6919 | 18.1495 |
| 0.78          | 8.0   | 22880 | 0.7411          | 23.8445 | 18.1461 |
| 0.7568        | 9.0   | 25740 | 0.7338          | 23.86   | 18.155  |
| 0.7496        | 10.0  | 28600 | 0.7228          | 23.953  | 18.1511 |
| 0.7411        | 11.0  | 31460 | 0.7175          | 24.0327 | 18.1511 |
| 0.8376        | 12.0  | 34320 | 0.8114          | 23.311  | 18.1319 |
| 1.1918        | 13.0  | 37180 | 0.9686          | 21.5339 | 18.1185 |
| 1.0929        | 14.0  | 40040 | 0.8978          | 21.561  | 18.1455 |
| 1.0373        | 15.0  | 42900 | 0.8617          | 21.4942 | 18.1542 |
| 1.0165        | 16.0  | 45760 | 0.8432          | 21.3962 | 18.1595 |
| 0.9973        | 17.0  | 48620 | 0.8340          | 21.2558 | 18.166  |
| 0.9889        | 18.0  | 51480 | 0.8326          | 21.2238 | 18.1687 |
| 0.9909        | 19.0  | 54340 | 0.8325          | 21.2216 | 18.1688 |
| 0.9942        | 20.0  | 57200 | 0.8324          | 21.2143 | 18.1685 |
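
Validation loss and BLEU were best at epoch 11 (0.7175 / 24.0327) and regressed over the remaining epochs, so the epoch-20 checkpoint reported above is not the strongest one from this run. The BLEU and Gen Len columns are typically produced by a `compute_metrics` hook; below is a sketch assuming the common `evaluate`/sacrebleu setup (not confirmed by this card), with Gen Len approximated by whitespace tokens:

```python
import numpy as np
import evaluate

# Assumption: metrics computed with evaluate's sacrebleu wrapper, as in the
# stock translation examples; this card does not document the exact setup.
sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(predictions, references):
    """Corpus BLEU plus mean generation length over decoded strings."""
    bleu = sacrebleu.compute(
        predictions=predictions,
        references=[[ref] for ref in references],  # sacrebleu expects nested refs
    )
    # Rough stand-in: the stock scripts count non-padding token ids
    # rather than whitespace-split words.
    gen_len = float(np.mean([len(pred.split()) for pred in predictions]))
    return {"bleu": bleu["score"], "gen_len": gen_len}
```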

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu118
  • Datasets 2.11.0
  • Tokenizers 0.13.3