
pegasus-cnn_dailymail-v4-e1-e4-feedback

This model is a fine-tuned version of theojolliffe/pegasus-cnn_dailymail-v4-e1 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 5.1815
  • ROUGE-1: 59.7329
  • ROUGE-2: 42.429
  • ROUGE-L: 53.5945
  • ROUGE-Lsum: 52.9219
  • Gen Len: 37.8636

Model description

More information needed

Intended uses & limitations

More information needed
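
Pending a fuller description of intended uses, the sketch below shows one way to load the model for abstractive summarisation. It assumes the fine-tuned checkpoint is published on the Hugging Face Hub under the repository id theojolliffe/pegasus-cnn_dailymail-v4-e1-e4-feedback (inferred from the card title); adjust the id if it differs.

```python
# Minimal, unofficial usage sketch. The repository id below is an assumption
# inferred from the card title and may need to be adjusted.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "theojolliffe/pegasus-cnn_dailymail-v4-e1-e4-feedback"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Replace this placeholder with the document you want to summarise."
inputs = tokenizer(text, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```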

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
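
For reference, the sketch below shows how these hyperparameters would map onto a Seq2SeqTrainingArguments configuration in Transformers. The output directory, evaluation strategy, and generation setting are assumptions; dataset preparation and the trainer itself are omitted because they are not documented on this card.

```python
# Illustrative configuration only, mirroring the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-cnn_dailymail-v4-e1-e4-feedback",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",   # assumption: the results table reports one evaluation per epoch
    predict_with_generate=True,    # assumption: needed to produce ROUGE and Gen Len during evaluation
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults
# (adam_beta1, adam_beta2, adam_epsilon), so no explicit optimizer settings are required.
```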

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|------------|---------|
| No log        | 1.0   | 90   | 5.5674          | 60.4652 | 43.8057 | 54.5759 | 52.5243    | 39.5    |
| No log        | 2.0   | 180  | 5.3261          | 60.248  | 43.9953 | 54.9492 | 53.6372    | 38.6364 |
| No log        | 3.0   | 270  | 5.2167          | 60.4887 | 43.6915 | 54.3825 | 53.5206    | 37.1364 |
| No log        | 4.0   | 360  | 5.1815          | 59.7329 | 42.429  | 53.5945 | 52.9219    | 37.8636 |
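
The ROUGE columns above are typically mid F-measure scores scaled to 0-100. As an illustration, the sketch below shows how such scores are commonly computed with the Datasets version listed under "Framework versions"; the predictions and references are placeholders, not outputs of this model.

```python
# Sketch of a typical ROUGE computation; requires the rouge_score package.
from datasets import load_metric

rouge = load_metric("rouge")
predictions = ["the cat sat on the mat"]          # placeholder model outputs
references = ["the cat was sitting on the mat"]   # placeholder reference summaries

result = rouge.compute(predictions=predictions, references=references)
# Each value is an AggregateScore; cards like this one usually report the mid F1 x 100.
print({name: round(score.mid.fmeasure * 100, 4) for name, score in result.items()})
```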

Framework versions

  • Transformers 4.12.3
  • PyTorch 1.9.0
  • Datasets 1.18.0
  • Tokenizers 0.10.3