
BART_pretrained_on_billsum_finetuned_on_small_SCOTUS_extracted_dataset_2

This model is a fine-tuned version of bheshaj/bart-large-billsum-epochs20 on a small extracted SCOTUS dataset (per the model name; the dataset is not otherwise documented). It achieves the following results on the evaluation set:

  • Loss: 9.5858
  • ROUGE-1: 0.06
  • ROUGE-2: 0.0
  • ROUGE-L: 0.06
  • ROUGE-Lsum: 0.0598
  • Gen Len: 20.0
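
For reference, ROUGE scores like those above are typically computed with the evaluate library's rouge metric. The snippet below is a minimal sketch with made-up example strings, not the actual SCOTUS evaluation data:

```python
import evaluate

# Minimal sketch: the inputs here are illustrative placeholders,
# not the evaluation data used for the scores above.
rouge = evaluate.load("rouge")
predictions = ["the court affirmed the judgment below"]
references = ["the supreme court affirmed the judgment of the lower court"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```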

Model description

More information needed

Intended uses & limitations

More information needed
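
Pending a fuller description, a minimal summarization sketch follows. The repository id is assumed from the model name and may need adjusting; max_length=20 mirrors the reported Gen Len:

```python
from transformers import pipeline

# Assumed Hub repo id, inferred from the model name; adjust if the
# checkpoint lives under a different path.
summarizer = pipeline(
    "summarization",
    model="bheshaj/BART_pretrained_on_billsum_finetuned_on_small_SCOTUS_extracted_dataset_2",
)

opinion = "The Supreme Court of the United States held that ..."  # placeholder text
print(summarizer(opinion, max_length=20, truncation=True)[0]["summary_text"])
```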

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.02
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
  • mixed_precision_training: Native AMP
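
These settings map roughly onto Seq2SeqTrainingArguments as sketched below; output_dir and predict_with_generate are assumptions, and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch reconstructing the reported hyperparameters; output_dir and
# predict_with_generate are assumptions, not values from the original run.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-scotus-finetune",   # hypothetical
    learning_rate=2e-2,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,       # 8 * 8 = 64 effective train batch size
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,                           # Native AMP mixed-precision training
    predict_with_generate=True,          # needed for ROUGE/Gen Len during eval
)
```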

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|------------|---------|
| 23.3274       | 0.98  | 10   | 35.8778         | 0.0289  | 0.0     | 0.0288  | 0.0289     | 20.0    |
| 43.999        | 1.98  | 20   | 39.9056         | 0.0475  | 0.0045  | 0.0453  | 0.0453     | 20.0    |
| 32.0042       | 2.98  | 30   | 25.4613         | 0.0516  | 0.0005  | 0.0499  | 0.0499     | 20.0    |
| 21.3151       | 3.98  | 40   | 17.6485         | 0.0021  | 0.0     | 0.0021  | 0.0021     | 20.0    |
| 15.4017       | 4.98  | 50   | 12.6187         | 0.06    | 0.0     | 0.06    | 0.0598     | 20.0    |
| 10.9491       | 5.98  | 60   | 9.5858          | 0.06    | 0.0     | 0.06    | 0.0598     | 20.0    |

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2