
t5-base-question-answer-summarization

This model is a fine-tuned version of google-t5/t5-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1424
  • Rouge1: 85.4974
  • Rouge2: 77.0571
  • Rougel: 82.4125
  • Rougelsum: 82.4757
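For reference, the Rouge1/Rouge2 values above are ROUGE-N F1 scores, which measure n-gram overlap between a generated summary and a reference. The sketch below is a simplified illustration of that computation; the scores reported in this card come from the standard `rouge_score`/`evaluate` implementation, which additionally applies stemming and computes Rougel/Rougelsum via longest common subsequence:

```python
from collections import Counter

def rouge_n_f1(prediction: str, reference: str, n: int = 1) -> float:
    """Simplified ROUGE-N F1: n-gram overlap between prediction and reference.

    Illustration only; the official rouge_score package also stems tokens
    and handles sentence splitting for Rougelsum.
    """
    def ngrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    pred, ref = ngrams(prediction), ngrams(reference)
    overlap = sum((pred & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge_n_f1("the cat sat on the mat", "the cat lay on the mat"), 4))  # → 0.8333
```

Scores are conventionally reported scaled by 100, so the 0.8333 here corresponds to the ~85 Rouge1 range shown above.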

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
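Assuming the usual `transformers` Seq2Seq fine-tuning setup, the hyperparameters above would correspond roughly to the following `Seq2SeqTrainingArguments`. The `output_dir` is a placeholder and `predict_with_generate` is an assumption (it is typically enabled so ROUGE can be computed during evaluation); the Adam betas and epsilon listed above are the optimizer defaults:

```python
# Hedged sketch of training arguments matching the listed hyperparameters.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-question-answer-summarization",  # placeholder
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
    predict_with_generate=True,  # assumption: needed for ROUGE at eval time
)
```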

Training results

Training Loss   Epoch   Step   Validation Loss   Rouge1    Rouge2    Rougel    Rougelsum
0.3381          1.0      526   0.1310            85.4136   77.2307   82.5493   82.5887
0.1221          2.0     1052   0.1291            85.5109   77.3495   82.5035   82.5448
0.1008          3.0     1578   0.1293            85.7918   77.3841   82.5218   82.5855
0.0861          4.0     2104   0.1312            85.8164   77.5711   82.5025   82.5955
0.0750          5.0     2630   0.1358            85.7690   77.3766   82.6532   82.6910
0.0690          6.0     3156   0.1361            85.4170   76.9087   82.3970   82.4857
0.0625          7.0     3682   0.1404            85.5539   77.0784   82.4147   82.4450
0.0595          8.0     4208   0.1424            85.4974   77.0571   82.4125   82.4757
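A quick sanity check on the table: 526 optimizer steps per epoch at a train batch size of 8 bounds the size of the (otherwise undocumented) training set:

```python
# 526 steps/epoch at batch size 8 implies ceil(N / 8) == 526, i.e. the
# training set has between 4201 and 4208 examples (the final batch of an
# epoch may be smaller than 8).
steps_per_epoch = 526
train_batch_size = 8
print(steps_per_epoch * train_batch_size)  # → 4208
```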

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1
