t5-base-finetuned-summarize-news-finetuned-xsum

This model is a fine-tuned version of mrm8488/t5-base-finetuned-summarize-news on an unspecified dataset (the model name suggests XSum). It achieves the following results on the evaluation set:

  • Loss: 1.8286
  • Rouge1: 36.1835
  • Rouge2: 19.3429
  • RougeL: 35.3053
  • RougeLsum: 35.4019
  • Gen Len: 18.9615
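
Since the card documents a fine-tuned T5 summarization checkpoint, a minimal inference sketch follows. It assumes the checkpoint is published on the Hugging Face Hub under the repo id named on this page (ibtissam369/t5-base-finetuned-summarize-news-finetuned-xsum); the input text and generation lengths are placeholders, not values from the card.

```python
# Minimal usage sketch: load the fine-tuned checkpoint via the summarization pipeline.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ibtissam369/t5-base-finetuned-summarize-news-finetuned-xsum",
)

# Placeholder article text; replace with the news article you want to summarize.
article = "Replace this string with the article you want to summarize."

# max_length/min_length are illustrative; the evaluation above produced ~19-token summaries.
print(summarizer(article, max_length=30, min_length=10, do_sample=False)[0]["summary_text"])
```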

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
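
A minimal sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments` with the Trainer API. The base checkpoint and hyperparameter values are taken from this card; the dataset, preprocessing, and trainer wiring are assumptions, since the card does not specify them.

```python
# Sketch of a Seq2SeqTrainer setup matching the hyperparameters listed above.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "mrm8488/t5-base-finetuned-summarize-news"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-summarize-news-finetuned-xsum",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    evaluation_strategy="epoch",   # matches the per-epoch validation results reported below
    predict_with_generate=True,
)
# The Adam betas/epsilon listed above are the Trainer defaults, so no extra optimizer args are needed.

# train_dataset / eval_dataset are placeholders: the card does not state which data was used.
# trainer = Seq2SeqTrainer(
#     model=model,
#     args=args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```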

Training results

Training Loss   Epoch   Step   Validation Loss   Rouge1    Rouge2    RougeL    RougeLsum   Gen Len
No log          1.0     26     3.0615            19.3248   6.3763    17.2502   17.2526     19.0
No log          2.0     52     2.5465            34.32     18.9536   32.8364   33.0664     19.0
No log          3.0     78     2.2511            34.606    18.9098   33.9115   33.9428     19.0
No log          4.0     104    2.0548            36.568    20.0592   35.5504   35.5845     18.9615
No log          5.0     130    1.9450            36.6344   19.5431   35.6034   35.6426     18.9615
No log          6.0     156    1.8820            36.1835   19.3429   35.3053   35.4019     18.9615
No log          7.0     182    1.8411            36.1835   19.3429   35.3053   35.4019     18.9615
No log          8.0     208    1.8286            36.1835   19.3429   35.3053   35.4019     18.9615
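
ROUGE scores of the kind reported above are typically computed with the `evaluate` library. The sketch below shows the metric call only; the prediction and reference strings are placeholders, not data from this card, and the exact scores depend on the evaluation split and generation settings.

```python
# Sketch of a ROUGE computation with the `evaluate` library (requires the rouge_score package).
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the model-generated summary goes here"]   # placeholder
references = ["the reference summary goes here"]          # placeholder

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Returns rouge1, rouge2, rougeL, rougeLsum: the same metrics listed in the table above.
print(scores)
```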

Framework versions

  • Transformers 4.36.0
  • Pytorch 2.0.0
  • Datasets 2.1.0
  • Tokenizers 0.15.0
