
bart-large-cnn-finetuned-pubmed-finetuned-roundup-e8

This model is a fine-tuned version of theojolliffe/bart-large-cnn-finetuned-pubmed on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 2.1034
  • Rouge1: 48.4605
  • Rouge2: 28.5961
  • Rougel: 32.5389
  • Rougelsum: 45.7358
  • Gen Len: 142.0
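
Below is a minimal usage sketch. The Hub repo id is assumed from the card title (the card itself does not state where the checkpoint is hosted), so adjust it if the model lives elsewhere.

```python
# Minimal usage sketch; the repo id is an assumption inferred from the
# card title and may not match the actual Hub location.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-large-cnn-finetuned-pubmed-finetuned-roundup-e8",
)

article = "Replace this with the long document you want summarised."
summary = summarizer(article, truncation=True)[0]["summary_text"]
print(summary)
```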

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
  • mixed_precision_training: Native AMP
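
The sketch below reconstructs this configuration with Seq2SeqTrainingArguments. The output directory, dataset wiring, and Trainer setup are assumptions, since the card does not include the training script; only the listed hyperparameter values are taken from the card.

```python
# Hedged reconstruction of the listed training setup; dataset hooks are
# placeholders because the fine-tuning data is not documented.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_id = "theojolliffe/bart-large-cnn-finetuned-pubmed"  # base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)

args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-pubmed-finetuned-roundup-e8",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",  # linear decay, as listed
    fp16=True,                   # "Native AMP" mixed precision
    evaluation_strategy="epoch",
    predict_with_generate=True,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default
# optimizer, so no explicit optimizer override is needed.

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=args,
#     train_dataset=...,  # unknown dataset, per the card
#     eval_dataset=...,
#     tokenizer=tokenizer,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```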

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 25   | 1.4278          | 47.952  | 29.4059 | 34.273  | 45.7244   | 142.0   |
| No log        | 2.0   | 50   | 1.4351          | 48.7561 | 29.4049 | 30.631  | 46.4074   | 142.0   |
| No log        | 3.0   | 75   | 1.5375          | 50.0069 | 31.4237 | 32.0834 | 47.679    | 142.0   |
| No log        | 4.0   | 100  | 1.6647          | 49.6919 | 28.8821 | 31.9357 | 47.0396   | 142.0   |
| No log        | 5.0   | 125  | 1.8070          | 47.8472 | 26.6979 | 30.7049 | 44.5848   | 142.0   |
| No log        | 6.0   | 150  | 1.9981          | 47.8352 | 27.0966 | 31.4529 | 46.5251   | 142.0   |
| No log        | 7.0   | 175  | 2.0904          | 48.6272 | 30.5493 | 32.7827 | 46.8462   | 142.0   |
| No log        | 8.0   | 200  | 2.1034          | 48.4605 | 28.5961 | 32.5389 | 45.7358   | 142.0   |
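
For reference, here is a hedged sketch of how ROUGE scores like those in the table are commonly computed. The `evaluate` library and the toy strings are assumptions, not the card author's actual evaluation code.

```python
# Hedged sketch of a typical ROUGE computation; not the card's
# actual evaluation script.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the model's generated summary"],
    references=["the reference summary written by a human"],
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```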

Framework versions

  • Transformers 4.18.0
  • Pytorch 1.11.0+cu113
  • Datasets 2.1.0
  • Tokenizers 0.12.1