
bart-large-cnn-dc

This model is a fine-tuned version of facebook/bart-large-cnn on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.7411
  • Rouge1: 32.6259
  • Rouge2: 13.8436
  • RougeL: 24.1807
  • RougeLsum: 25.5363
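
Since the base checkpoint is a BART summarization model, the fine-tuned model can presumably be used through the standard transformers summarization pipeline. The sketch below is illustrative only; the generation settings (max_length, min_length) are assumptions, not values documented in this card.

```python
# Minimal inference sketch (assumed usage; generation settings are illustrative).
from transformers import pipeline

summarizer = pipeline("summarization", model="czartur/bart-large-cnn-dc")

text = "Paste the long document to summarize here..."
result = summarizer(text, max_length=142, min_length=56, do_sample=False)
print(result[0]["summary_text"])
```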

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
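
For reference, a sketch of how these hyperparameters might map onto Seq2SeqTrainingArguments in the Transformers Trainer API is shown below; output_dir, evaluation_strategy, and predict_with_generate are assumptions, since the actual training script is not included in this card.

```python
# Hypothetical reconstruction of the training configuration (not the author's script).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-dc",   # assumed output directory
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",      # assumed from the per-epoch results below
    predict_with_generate=True,       # assumed; needed to compute ROUGE during evaluation
)
```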

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.913         | 1.0   | 2676  | 1.7099          | 31.961  | 13.1769 | 22.9039 | 24.4001   |
| 1.4454        | 2.0   | 5352  | 1.5883          | 32.4628 | 13.6901 | 23.9072 | 25.1181   |
| 1.1456        | 3.0   | 8028  | 1.5655          | 32.4881 | 13.8212 | 23.8344 | 25.0851   |
| 0.8904        | 4.0   | 10704 | 1.6124          | 32.7249 | 13.7468 | 24.0745 | 25.5324   |
| 0.6868        | 5.0   | 13380 | 1.7411          | 32.6259 | 13.8436 | 24.1807 | 25.5363   |
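
The ROUGE columns above likely correspond to the scores produced by the evaluate library's "rouge" metric. A minimal sketch of how such scores are computed is shown below; the predictions and references are placeholders, since the evaluation dataset is not specified in this card.

```python
# Sketch of ROUGE scoring with the `evaluate` library (placeholder data).
import evaluate

rouge = evaluate.load("rouge")
predictions = ["a model-generated summary"]
references = ["the reference summary"]

# Returns rouge1, rouge2, rougeL and rougeLsum as fractions in [0, 1];
# the table above reports them scaled by 100.
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)
```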

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model size: 406M parameters (safetensors, F32)
