
bart-large-cnn-finetuned-roundup-4

This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2573
  • Rouge1: 49.0193
  • Rouge2: 28.6311
  • Rougel: 31.3363
  • Rougelsum: 46.1408
  • Gen Len: 142.0
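
Since this is a summarization fine-tune of facebook/bart-large-cnn, it can be loaded through the standard transformers summarization pipeline. A minimal sketch, assuming the hub id matches the card title (in practice it is namespaced under the owner's account):

```python
from transformers import pipeline

# Assumed hub id taken from the card title; in practice it is namespaced,
# e.g. "<owner>/bart-large-cnn-finetuned-roundup-4".
summarizer = pipeline("summarization", model="bart-large-cnn-finetuned-roundup-4")

text = "Replace this with the long document to summarize."
# max_length=142 matches the Gen Len reported above (also the BART-CNN default);
# min_length=56 is the facebook/bart-large-cnn default.
result = summarizer(text, max_length=142, min_length=56, do_sample=False)
print(result[0]["summary_text"])
```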

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a code sketch mapping them onto transformers follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
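
These settings correspond to a standard Seq2SeqTrainingArguments configuration in transformers 4.18. A minimal sketch under that assumption (model and dataset wiring omitted; the output_dir name is illustrative):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above. The Adam betas/epsilon and the
# linear LR schedule are the transformers defaults, so they need no flags.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-roundup-4",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=4,
    fp16=True,  # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: per-epoch eval, consistent with the results table
)
```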

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| No log        | 1.0   | 132  | 1.3178          | 48.4526 | 28.6361 | 30.2875 | 45.4822   | 142.0   |
| No log        | 2.0   | 264  | 1.2404          | 48.139  | 28.2459 | 29.3584 | 45.0785   | 142.0   |
| No log        | 3.0   | 396  | 1.2389          | 49.74   | 29.7834 | 33.143  | 46.8147   | 142.0   |
| 0.9855        | 4.0   | 528  | 1.2573          | 49.0193 | 28.6311 | 31.3363 | 46.1408   | 142.0   |
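
The ROUGE columns are F-measures scaled to 0-100. With the Datasets 2.1.0 version listed below, scores in this format can be reproduced with the built-in rouge metric; a minimal sketch with placeholder predictions and references:

```python
from datasets import load_metric

rouge = load_metric("rouge")

predictions = ["the generated summary"]  # model outputs (placeholders)
references = ["the reference summary"]   # gold summaries (placeholders)

result = rouge.compute(predictions=predictions, references=references)
# Each entry is an AggregateScore; the card reports mid F-measures times 100.
scores = {k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()}
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```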

Framework versions

  • Transformers 4.18.0
  • Pytorch 1.11.0+cu113
  • Datasets 2.1.0
  • Tokenizers 0.12.1