
pubhealth-expanded-1

This model is a fine-tuned version of facebook/bart-base on the clupubhealth dataset. It achieves the following results on the evaluation set (an inference sketch follows the metrics):

  • Loss: 2.3198
  • Rouge1: 28.6755
  • Rouge2: 9.2869
  • Rougel: 21.9675
  • Rougelsum: 22.2946
  • Gen Len: 19.85
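
The snippet below is a minimal inference sketch for trying the checkpoint on a public health claim. The model path and the sample input are assumptions, not taken from this card; substitute the Hub repo id or local directory where the checkpoint is actually hosted.

```python
# Minimal inference sketch. The model path below is a placeholder; replace it
# with the actual Hub repo id or local directory holding this checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pubhealth-expanded-1",  # hypothetical path, not a confirmed Hub id
)

claim = (
    "A widely shared post claims that drinking hot water prevents viral "
    "infection. Health authorities say there is no evidence for this claim."
)

# Gen Len ~20 in the results above suggests short summaries; cap generation accordingly.
print(summarizer(claim, max_length=20, min_length=5, do_sample=False))
```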

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 12
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 10
  • total_train_batch_size: 120
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
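
As a rough guide, the hyperparameters above map onto Seq2SeqTrainingArguments (Transformers 4.31.0) as sketched below. The output directory, evaluation cadence, and generation flag are assumptions inferred from the results table, not documented settings.

```python
# Sketch only: reconstructs the listed hyperparameters as Seq2SeqTrainingArguments.
# output_dir, evaluation_strategy/eval_steps, and predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pubhealth-expanded-1",   # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=10,      # 12 x 10 = effective batch size of 120 on one device
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",         # the results table reports eval every 40 steps
    eval_steps=40,
    predict_with_generate=True,          # required to compute ROUGE during evaluation
)
```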

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.6788        | 0.08  | 40   | 2.3758          | 29.5273 | 9.3588 | 22.4799 | 22.6212   | 19.835  |
| 3.4222        | 0.15  | 80   | 2.3484          | 29.0821 | 9.1988 | 22.3907 | 22.5996   | 19.88   |
| 3.3605        | 0.23  | 120  | 2.3500          | 29.2893 | 9.296  | 22.1247 | 22.4075   | 19.94   |
| 3.3138        | 0.31  | 160  | 2.3504          | 29.039  | 8.907  | 21.9631 | 22.2506   | 19.91   |
| 3.2678        | 0.39  | 200  | 2.3461          | 29.678  | 9.4429 | 22.3439 | 22.6962   | 19.92   |
| 3.2371        | 0.46  | 240  | 2.3267          | 28.535  | 9.1858 | 21.3721 | 21.6634   | 19.915  |
| 3.204         | 0.54  | 280  | 2.3330          | 29.0796 | 9.4283 | 21.8953 | 22.1867   | 19.885  |
| 3.1881        | 0.62  | 320  | 2.3164          | 29.1456 | 9.1919 | 21.9529 | 22.235    | 19.945  |
| 3.1711        | 0.69  | 360  | 2.3208          | 29.3212 | 9.4823 | 22.1643 | 22.4159   | 19.895  |
| 3.1752        | 0.77  | 400  | 2.3239          | 29.0408 | 9.3615 | 21.8007 | 22.0795   | 19.945  |
| 3.1591        | 0.85  | 440  | 2.3218          | 28.6336 | 9.2799 | 21.5843 | 21.9422   | 19.845  |
| 3.1663        | 0.93  | 480  | 2.3198          | 28.6755 | 9.2869 | 21.9675 | 22.2946   | 19.85   |
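
The ROUGE columns are consistent with the `evaluate` library's rouge metric scaled to 0-100; a hedged sketch of that computation with placeholder texts is shown below. The exact metric configuration used for this card is not documented.

```python
# Approximate reproduction of the ROUGE columns with the `evaluate` library.
# The prediction/reference strings are placeholders, not data from the card.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["generated summary of a public health claim"]
references = ["reference fact-check summary written by experts"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Keys: rouge1, rouge2, rougeL, rougeLsum; scale to 0-100 to match the table.
print({k: round(v * 100, 4) for k, v in scores.items()})
```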

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu117
  • Datasets 2.7.1
  • Tokenizers 0.13.2