bart-paraphrase-finetuned-xsum-v3

This model is a fine-tuned version of eugenesiow/bart-paraphrase on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics list):

  • Loss: 0.1881
  • Rouge1: 99.9251
  • Rouge2: 99.9188
  • RougeL: 99.9251
  • RougeLsum: 99.9251
  • Gen Len: 10.17

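Since the card does not yet include a usage example, here is a minimal sketch of loading the model for paraphrase generation with the transformers API. The repo id and generation parameters (`max_length`, `num_beams`) are assumptions, not values stated in the card; adjust the model path to the actual Hub location.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumed repo id, taken from the card title; replace with the full Hub path.
model_name = "bart-paraphrase-finetuned-xsum-v3"

tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

# The eval Gen Len of ~10 tokens suggests short outputs; max_length is a guess.
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```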
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent training arguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
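
The hyperparameters above match the defaults and options of the transformers Seq2SeqTrainer API, so a minimal reconstruction might look like the sketch below. The `output_dir`, `evaluation_strategy`, and `predict_with_generate` settings are assumptions (the per-epoch eval table and ROUGE/Gen Len metrics suggest them); the Adam betas and epsilon listed above are the library defaults.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-paraphrase-finetuned-xsum-v3",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumed from the per-epoch results table
    predict_with_generate=True,   # assumed; needed to compute ROUGE and Gen Len
)
```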

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| No log        | 1.0   | 100  | 0.2702          | 99.9251 | 99.9188 | 99.9251 | 99.9251   | 10.38   |
| No log        | 2.0   | 200  | 0.2773          | 99.9251 | 99.9188 | 99.9251 | 99.9251   | 11.45   |
| No log        | 3.0   | 300  | 0.2178          | 99.8148 | 99.7051 | 99.8208 | 99.8148   | 11.19   |
| No log        | 4.0   | 400  | 0.3649          | 99.9251 | 99.9188 | 99.9251 | 99.9251   | 12.32   |
| 0.1561        | 5.0   | 500  | 0.2532          | 99.8957 | 99.8875 | 99.8957 | 99.8918   | 10.375  |
| 0.1561        | 6.0   | 600  | 0.2050          | 99.9251 | 99.9188 | 99.9251 | 99.9251   | 11.15   |
| 0.1561        | 7.0   | 700  | 0.2364          | 99.8957 | 99.8875 | 99.8957 | 99.8918   | 10.18   |
| 0.1561        | 8.0   | 800  | 0.2006          | 99.9251 | 99.9188 | 99.9251 | 99.9251   | 10.17   |
| 0.1561        | 9.0   | 900  | 0.1628          | 99.9251 | 99.9188 | 99.9251 | 99.9251   | 10.23   |
| 0.1538        | 10.0  | 1000 | 0.1881          | 99.9251 | 99.9188 | 99.9251 | 99.9251   | 10.17   |
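
The ROUGE values in this table appear to be F1 scores scaled to 0-100, the convention of the `datasets` ROUGE metric (version 2.2.2 is listed below) used by the standard summarization training scripts. A minimal sketch of that computation, with illustrative predictions and references that are not from the card:

```python
from datasets import load_metric  # requires the rouge_score package

rouge = load_metric("rouge")

predictions = ["The cat sat on the mat."]
references = ["The cat is sitting on the mat."]

result = rouge.compute(predictions=predictions, references=references)
# Scale the mid F1 scores to 0-100, matching the convention in the table above.
print({k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()})
```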

Framework versions

  • Transformers 4.19.2
  • Pytorch 1.11.0+cu113
  • Datasets 2.2.2
  • Tokenizers 0.12.1