bart-large-cnn-finetuned-scope1-summarization

This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0612
  • ROUGE-1: 55.9874
  • ROUGE-2: 41.0458
  • ROUGE-L: 47.6072
  • ROUGE-Lsum: 47.5635

Model description

More information needed

Intended uses & limitations

More information needed
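
While full documentation is pending, the checkpoint can be exercised as a standard summarization model. Below is a minimal usage sketch with the `transformers` pipeline API; the input text is a hypothetical example of the Scope 1 emissions-reporting domain suggested by the model name:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
summarizer = pipeline(
    "summarization",
    model="nandavikas16/bart-large-cnn-finetuned-scope1-summarization",
)

# Hypothetical input; replace with a real document from your own data.
text = (
    "The company operates a fleet of delivery vehicles and several gas-fired "
    "boilers. Direct fuel combustion accounted for the majority of reported "
    "Scope 1 emissions this year."
)

print(summarizer(text, max_length=128, min_length=16, do_sample=False)[0]["summary_text"])
```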

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5.6e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
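
As a sketch of how these settings map onto `Seq2SeqTrainingArguments`: the dataset, output directory, and tokenization lengths below are hypothetical placeholders, since the actual training data is not documented.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

# Hypothetical stand-in for the undocumented training/evaluation data.
raw = Dataset.from_dict(
    {
        "text": ["Placeholder source document one.", "Placeholder source document two."],
        "summary": ["Placeholder summary one.", "Placeholder summary two."],
    }
)

def preprocess(batch):
    model_inputs = tokenizer(batch["text"], max_length=1024, truncation=True)
    model_inputs["labels"] = tokenizer(
        text_target=batch["summary"], max_length=128, truncation=True
    )["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-scope1-summarization",  # hypothetical path
    learning_rate=5.6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumption: the results table shows one eval per epoch
    predict_with_generate=True,   # assumption: needed to compute ROUGE during evaluation
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # hypothetical: real train/eval splits are not documented
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```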

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|
| No log        | 1.0   | 17   | 0.1238          | 46.7806 | 30.4394 | 36.8259 | 36.8757    |
| 0.4762        | 2.0   | 34   | 0.1058          | 49.4907 | 32.4075 | 39.352  | 39.161     |
| 0.4762        | 3.0   | 51   | 0.0899          | 54.1557 | 35.6198 | 41.6488 | 41.4013    |
| 0.1104        | 4.0   | 68   | 0.0867          | 53.237  | 36.766  | 42.8508 | 42.7151    |
| 0.1104        | 5.0   | 85   | 0.0773          | 57.4084 | 39.3354 | 45.068  | 44.9505    |
| 0.0914        | 6.0   | 102  | 0.0736          | 56.9111 | 41.3118 | 48.1607 | 47.9965    |
| 0.0914        | 7.0   | 119  | 0.0699          | 58.6135 | 42.3985 | 48.7923 | 48.4873    |
| 0.0785        | 8.0   | 136  | 0.0673          | 59.5593 | 43.9205 | 51.7275 | 51.5617    |
| 0.0785        | 9.0   | 153  | 0.0618          | 62.0583 | 47.3928 | 53.3198 | 53.1472    |
| 0.0702        | 10.0  | 170  | 0.0612          | 55.9874 | 41.0458 | 47.6072 | 47.5635    |
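
The ROUGE columns are F-measures scaled by 100. A minimal sketch of how such scores can be computed with the `evaluate` library, using hypothetical predictions and references:

```python
import evaluate

rouge = evaluate.load("rouge")

# Hypothetical prediction/reference pair for illustration only.
predictions = ["Direct fuel combustion dominated the reported Scope 1 emissions."]
references = ["Scope 1 emissions were dominated by direct fuel combustion."]

# `compute` returns rouge1 / rouge2 / rougeL / rougeLsum as F-measures in [0, 1].
scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v * 100, 4) for k, v in scores.items()})
```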

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1