
CS685-text-summarizer-2

This model is a fine-tuned version of facebook/bart-base on the billsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.7651
  • Rouge1: 17.1607
  • Rouge2: 13.943
  • Rougel: 16.6793
  • Rougelsum: 16.8422
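
As a quick illustration, the checkpoint can be loaded with the Transformers summarization pipeline. This is a minimal sketch, not part of the original card: the model identifier is a placeholder for wherever this checkpoint is stored (local output directory or Hub repo id), and the generation settings are illustrative choices.

```python
from transformers import pipeline

# Placeholder identifier: point this at the directory or Hub repo that actually
# holds the CS685-text-summarizer-2 checkpoint.
summarizer = pipeline("summarization", model="CS685-text-summarizer-2")

bill_text = "..."  # a (long) legislative bill text, as in billsum
# max_length / min_length are illustrative values, not settings from this card.
summary = summarizer(bill_text, max_length=128, min_length=30, truncation=True)
print(summary[0]["summary_text"])
```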

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

The model was fine-tuned and evaluated on the billsum dataset referenced above; the specific splits and preprocessing used are not documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training-setup sketch follows this list):

  • learning_rate: 5.6e-05
  • train_batch_size: 6
  • eval_batch_size: 6
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
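
For reference, the hyperparameters above map onto a standard Seq2SeqTrainer setup roughly as follows. This is a hedged reconstruction rather than the exact training script: the billsum splits, tokenization lengths, and evaluation settings are assumptions, since the card does not document them.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# The card does not say which billsum splits were used; "train"/"test" is an assumption.
billsum = load_dataset("billsum")

def preprocess(batch):
    # Truncation lengths are illustrative, not documented in the card.
    model_inputs = tokenizer(batch["text"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = billsum.map(preprocess, batched=True, remove_columns=billsum["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="CS685-text-summarizer-2",
    learning_rate=5.6e-5,            # learning_rate
    per_device_train_batch_size=6,   # train_batch_size
    per_device_eval_batch_size=6,    # eval_batch_size
    num_train_epochs=5,              # num_epochs
    lr_scheduler_type="linear",      # lr_scheduler_type
    seed=42,                         # seed
    evaluation_strategy="epoch",     # assumption, matching the per-epoch results below
    predict_with_generate=True,
    # The default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-8,
    # matching the optimizer settings listed above.
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```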

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.4547        | 1.0   | 569  | 1.9895          | 16.6343 | 13.0432 | 16.1262 | 16.2449   |
| 2.0246        | 2.0   | 1138 | 1.8688          | 16.939  | 13.4711 | 16.4359 | 16.5797   |
| 1.818         | 3.0   | 1707 | 1.8075          | 17.1388 | 13.827  | 16.6136 | 16.7574   |
| 1.6831        | 4.0   | 2276 | 1.7744          | 17.2292 | 13.9353 | 16.6961 | 16.8786   |
| 1.5956        | 5.0   | 2845 | 1.7651          | 17.1607 | 13.943  | 16.6793 | 16.8422   |
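
The ROUGE columns above can be reproduced in spirit with the `evaluate` library. The sketch below scores a small sample and is only illustrative: the checkpoint path, evaluation split, sample size, and generation settings are assumptions, since the card does not document them. The table reports ROUGE scaled by 100.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Placeholder path/repo id for the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="CS685-text-summarizer-2")
rouge = evaluate.load("rouge")

# Small sample for speed; the evaluation split actually used is not documented.
sample = load_dataset("billsum", split="test").select(range(100))
predictions = [
    summarizer(text, truncation=True)[0]["summary_text"] for text in sample["text"]
]

scores = rouge.compute(predictions=predictions, references=sample["summary"])
# `evaluate` returns fractions in [0, 1]; multiply by 100 to match the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```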

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.0.0+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3
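
A small sanity check, assuming the same packages are installed locally, to confirm the environment matches the versions listed above:

```python
import datasets
import tokenizers
import torch
import transformers

# Compare against the versions this model was trained with.
print(transformers.__version__)  # expected 4.28.0
print(torch.__version__)         # expected 2.0.0+cu118
print(datasets.__version__)      # expected 2.12.0
print(tokenizers.__version__)    # expected 0.13.3
```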