bart-paraphrase-v4-e1-feedback-e4

This model is a fine-tuned version of theojolliffe/bart-paraphrase-v4-e1-feedback on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9640
  • Rouge1: 61.6305
  • Rouge2: 41.9892
  • RougeL: 57.0694
  • RougeLsum: 58.3816
  • Gen Len: 19.0
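The ROUGE scores above measure n-gram overlap between generated paraphrases and reference texts. As a rough illustration only (not the evaluation code used for this card, which would typically rely on the `rouge_score` package with stemming and bootstrap aggregation), a minimal unigram-overlap F1 can be sketched in plain Python:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap between two strings.

    Sketch for illustration; real ROUGE adds tokenization rules,
    stemming, and aggregation over a whole evaluation set.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped matches per unigram
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat", "a cat sat on a mat"), 4))
```

Note that the reported scores (e.g. Rouge1: 61.6305) are on a 0–100 scale, i.e. this fraction multiplied by 100.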

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
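With a linear scheduler, the learning rate decays from 2e-05 toward 0 over the 136 total optimizer steps (34 steps per epoch × 4 epochs, per the results table below). A sketch of the per-step rate, assuming zero warmup steps since no warmup setting is listed on this card:

```python
def linear_lr(step: int, base_lr: float = 2e-5, total_steps: int = 136,
              warmup_steps: int = 0) -> float:
    """Learning rate under a linear decay schedule with optional warmup.

    warmup_steps=0 is an assumption; the card does not list a warmup value.
    """
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)  # linear ramp-up
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# Learning rate at the end of each epoch (34 optimizer steps per epoch)
for epoch in range(1, 5):
    print(f"epoch {epoch}: lr = {linear_lr(34 * epoch):.2e}")
```

By the final step (136) the rate has decayed to zero, which is consistent with the flattening validation loss in the results table.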

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 34   | 2.8512          | 67.5001 | 46.2823 | 62.2247 | 63.3811   | 18.875  |
| No log        | 2.0   | 68   | 2.3116          | 62.1089 | 43.432  | 57.564  | 58.8003   | 19.0    |
| No log        | 3.0   | 102  | 2.0519          | 61.2025 | 40.9901 | 56.3369 | 57.5829   | 19.0    |
| No log        | 4.0   | 136  | 1.9640          | 61.6305 | 41.9892 | 57.0694 | 58.3816   | 19.0    |

Framework versions

  • Transformers 4.12.3
  • PyTorch 1.9.0
  • Datasets 1.18.0
  • Tokenizers 0.10.3