bart-large-cnn-finetuned-roundup-2

This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 1.2605
  • Rouge1: 49.3582
  • Rouge2: 29.7017
  • RougeL: 30.6996
  • RougeLsum: 46.3736
  • Gen Len: 142.0
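
A minimal usage sketch with the transformers summarization pipeline. The Hub repository id is a hypothetical placeholder (this card does not state the namespace), and the sample text is a toy input, not from the evaluation set:

```python
from transformers import pipeline

# Hypothetical repository id; replace with the actual Hub namespace.
MODEL_ID = "<user>/bart-large-cnn-finetuned-roundup-2"

summarizer = pipeline("summarization", model=MODEL_ID)

text = (
    "The roundup covered several stories this week, including a product "
    "launch, a funding announcement, and two research papers on summarization."
)
# max_length matches the Gen Len reported above; truncation guards long inputs.
result = summarizer(text, max_length=142, truncation=True)
print(result[0]["summary_text"])
```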

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; the sketch after the list shows how they map onto Seq2SeqTrainingArguments:

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2
  • mixed_precision_training: Native AMP
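
A sketch of how these hyperparameters map onto transformers.Seq2SeqTrainingArguments (the API as of Transformers 4.18); the output_dir is a placeholder, not stated in this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-roundup-2",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # mixed_precision_training: Native AMP
)
```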

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 132  | 1.3168          | 49.5253 | 30.0497 | 31.3982 | 46.9568   | 142.0   |
| No log        | 2.0   | 264  | 1.2605          | 49.3582 | 29.7017 | 30.6996 | 46.3736   | 142.0   |
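
Scores like those in the table can be reproduced with the ROUGE metric from the datasets library (load_metric is the Datasets 2.1.0 API). A minimal sketch; the prediction and reference strings are toy placeholders:

```python
from datasets import load_metric

rouge = load_metric("rouge")  # requires the rouge_score package
scores = rouge.compute(
    predictions=["the council approved the new budget on tuesday"],
    references=["on tuesday the council approved the new budget"],
)
# The card reports mid F-measures scaled to percentages.
print({name: round(agg.mid.fmeasure * 100, 4) for name, agg in scores.items()})
```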

Framework versions

  • Transformers 4.18.0
  • Pytorch 1.11.0+cu113
  • Datasets 2.1.0
  • Tokenizers 0.12.1
