
distilbart-cnn-12-6-sec

This model is a fine-tuned version of sshleifer/distilbart-cnn-12-6 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0798
  • ROUGE-1: 72.1665
  • ROUGE-2: 62.2601
  • ROUGE-L: 67.8376
  • ROUGE-Lsum: 71.1407
  • Gen Len: 121.62

Model description

This model is a fine-tuned checkpoint of sshleifer/distilbart-cnn-12-6, a distilled BART summarization model with 12 encoder layers and 6 decoder layers, originally trained on CNN/DailyMail. No further details about this fine-tuned variant have been provided.

Intended uses & limitations

More information needed
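As a summarization checkpoint, the model can be loaded with the standard transformers pipeline. Below is a minimal usage sketch; the repository id is a placeholder, not the confirmed location of these weights:

```python
from transformers import pipeline

# Placeholder repository id; substitute the actual Hub id or a local path
# to the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="your-username/distilbart-cnn-12-6-sec")

text = (
    "The company reported total revenue of $1.2 billion for the quarter, "
    "an increase of 8% year over year, driven primarily by growth in its "
    "subscription segment. Operating expenses rose 5% over the same period."
)

# max_length/min_length bound the generated summary in tokens; the evaluation
# Gen Len of ~122 tokens above suggests summaries of roughly that length.
result = summarizer(text, max_length=142, min_length=56, truncation=True)
print(result[0]["summary_text"])
```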

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent Seq2SeqTrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
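The values above map directly onto transformers Seq2SeqTrainingArguments. A minimal sketch of an equivalent configuration, assuming a tokenized seq2seq dataset is already prepared (dataset loading is omitted and the dataset variable names are placeholders):

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "sshleifer/distilbart-cnn-12-6"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Mirrors the hyperparameters listed above; Adam betas (0.9, 0.999) and
# epsilon 1e-08 are the optimizer defaults, so they need no explicit setting.
args = Seq2SeqTrainingArguments(
    output_dir="distilbart-cnn-12-6-sec",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch eval
    predict_with_generate=True,   # needed to compute ROUGE/Gen Len during evaluation
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: tokenized training split
    eval_dataset=eval_dataset,    # placeholder: tokenized evaluation split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```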

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log        | 1.0   | 99   | 0.3526          | 53.3978 | 38.6395 | 45.6271 | 51.0477    | 111.48  |
| No log        | 2.0   | 198  | 0.1961          | 55.7397 | 43.6293 | 50.9595 | 54.0764    | 111.46  |
| No log        | 3.0   | 297  | 0.1483          | 66.9443 | 54.8966 | 62.6678 | 65.6787    | 118.64  |
| No log        | 4.0   | 396  | 0.1218          | 67.2661 | 56.1852 | 63.1339 | 65.8066    | 124.92  |
| No log        | 5.0   | 495  | 0.1139          | 67.2097 | 55.8694 | 62.7508 | 65.9706    | 123.02  |
| 0.4156        | 6.0   | 594  | 0.0940          | 71.607  | 60.6697 | 66.7873 | 70.339     | 122.84  |
| 0.4156        | 7.0   | 693  | 0.0888          | 71.3792 | 61.8326 | 68.25   | 70.5113    | 124.4   |
| 0.4156        | 8.0   | 792  | 0.0870          | 72.7472 | 62.6968 | 68.2853 | 71.5789    | 124.34  |
| 0.4156        | 9.0   | 891  | 0.0799          | 73.4438 | 63.5966 | 68.8737 | 72.3014    | 119.88  |
| 0.4156        | 10.0  | 990  | 0.0798          | 72.1665 | 62.2601 | 67.8376 | 71.1407    | 121.62  |
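ROUGE figures on this scale (F1 × 100) can be computed with the rouge metric from the datasets version listed below. A minimal sketch, with hypothetical prediction/reference strings standing in for real model outputs:

```python
# Requires the rouge_score package (pip install rouge_score).
from datasets import load_metric

rouge = load_metric("rouge")

# Hypothetical decoded summaries and reference summaries.
predictions = ["the company reported higher quarterly revenue"]
references = ["the company reported an increase in quarterly revenue"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)

# Each value is an AggregateScore; the card-style numbers are the mid
# F-measure scaled to 0-100 (e.g. ROUGE-1: 72.1665 in the final row above).
for key in ["rouge1", "rouge2", "rougeL", "rougeLsum"]:
    print(key, round(scores[key].mid.fmeasure * 100, 4))
```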

Framework versions

  • Transformers 4.20.1
  • PyTorch 1.11.0
  • Datasets 2.1.0
  • Tokenizers 0.12.1