
distilbart-cnn-6-6-finetuned-summscreen-10-epochs

This model is a fine-tuned version of sshleifer/distilbart-cnn-6-6 on the SummScreen dataset. It achieves the following results on the evaluation set (a sketch of how such ROUGE scores are typically computed follows the list):

  • Loss: 3.4962
  • Rouge1: 26.3499
  • Rouge2: 7.3999
  • RougeL: 18.6087
  • RougeLsum: 23.17
  • Gen Len: 49.8609
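
ROUGE figures like the ones above are commonly produced with Hugging Face's evaluate library. The snippet below is a minimal sketch with placeholder strings, not the exact evaluation code used for this card:

```python
import evaluate

# Load the ROUGE metric (requires: pip install evaluate rouge_score).
rouge = evaluate.load("rouge")

# Placeholder prediction/reference pair; the real evaluation compared
# model-generated recaps against SummScreen reference recaps.
scores = rouge.compute(
    predictions=["ross and rachel argue about the letter ."],
    references=["ross and rachel fight over the eighteen-page letter ."],
)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```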

Model description

More information needed

Intended uses & limitations

More information needed
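
Although the card leaves this section empty, the model is presumably intended for abstractive summarization of TV-series transcripts, since that is what SummScreen contains. Below is a minimal usage sketch with the transformers pipeline API; the Hub repository ID is a placeholder for wherever this checkpoint is actually hosted:

```python
from transformers import pipeline

# Placeholder repo ID: substitute the actual Hub path of this checkpoint.
summarizer = pipeline(
    "summarization",
    model="your-username/distilbart-cnn-6-6-finetuned-summscreen-10-epochs",
)

transcript = "MONICA: ... (a long episode transcript goes here) ..."
result = summarizer(transcript, max_length=60, min_length=20, truncation=True)
print(result[0]["summary_text"])
```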

Training and evaluation data

More information needed
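
The card does not describe the data preparation either, but a typical seq2seq preprocessing step for SummScreen-style transcript/recap pairs looks like the sketch below. The field names transcript and recap and the length caps are assumptions, not documented by this card:

```python
from transformers import AutoTokenizer

# The base checkpoint's tokenizer; the fine-tuned model shares its vocabulary.
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-6-6")

def preprocess(batch):
    # Field names and max lengths are assumptions, not documented in this card.
    model_inputs = tokenizer(batch["transcript"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["recap"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```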

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
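
For reproduction, these settings map roughly onto Seq2SeqTrainingArguments as sketched below. The Adam betas and epsilon listed above are the optimizer defaults, and the output directory, evaluation cadence, and predict_with_generate flag are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="distilbart-cnn-6-6-finetuned-summscreen-10-epochs",  # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch eval
    predict_with_generate=True,   # assumption: needed to compute ROUGE during eval
)
```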

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.1229        | 1.0   | 3673  | 3.1271          | 26.6959 | 7.4401 | 18.8303 | 23.7132   | 49.9763 |
| 2.8872        | 2.0   | 7346  | 3.0482          | 26.6447 | 7.5599 | 18.5921 | 23.2786   | 49.8195 |
| 2.5733        | 3.0   | 11019 | 3.0292          | 27.425  | 7.9963 | 19.3544 | 24.1281   | 49.8757 |
| 2.3886        | 4.0   | 14692 | 3.0625          | 27.1291 | 7.5541 | 18.9375 | 23.8729   | 49.8905 |
| 2.215         | 5.0   | 18365 | 3.1118          | 27.1773 | 7.551  | 19.0524 | 24.1015   | 49.9142 |
| 2.0377        | 6.0   | 22038 | 3.2086          | 27.2237 | 7.8821 | 19.2136 | 24.0477   | 49.784  |
| 1.9358        | 7.0   | 25711 | 3.3405          | 26.7555 | 7.6628 | 18.8609 | 23.5264   | 49.8343 |
| 1.8292        | 8.0   | 29384 | 3.4124          | 26.7741 | 7.4529 | 18.9276 | 23.5827   | 49.8757 |
| 1.7702        | 9.0   | 33057 | 3.4457          | 26.6281 | 7.4415 | 18.7932 | 23.4608   | 49.8639 |
| 1.7443        | 10.0  | 36730 | 3.4962          | 26.3499 | 7.3999 | 18.6087 | 23.17     | 49.8609 |

Framework versions

  • Transformers 4.26.0
  • PyTorch 1.13.1
  • Datasets 2.9.0
  • Tokenizers 0.13.2