bart-paraphrase-pubmed

This model is a fine-tuned version of facebook/bart-base on an unknown dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the results list):

  • Loss: 0.6340
  • Rouge2 Precision: 0.83
  • Rouge2 Recall: 0.6526
  • Rouge2 Fmeasure: 0.7144
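
A minimal generation sketch using transformers; the model id below is a placeholder for this repository's actual Hub path or a local checkpoint directory:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Placeholder id: substitute this repo's Hub path or a local checkpoint.
model_name = "bart-paraphrase-pubmed"

tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

text = "Aspirin inhibits platelet aggregation and reduces the risk of myocardial infarction."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Beam search is a reasonable default for paraphrase generation; tune as needed.
output_ids = model.generate(**inputs, num_beams=4, max_length=128, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```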

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent Seq2SeqTrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40
  • mixed_precision_training: Native AMP
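
As a rough sketch only, not the author's actual training script, the listed settings map onto transformers' Seq2SeqTrainingArguments roughly as follows (output_dir is a placeholder):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch: mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-paraphrase-pubmed",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    fp16=True,  # "Native AMP" mixed-precision training
    predict_with_generate=True,  # assumption: needed to compute ROUGE at eval time
)
```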

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|---------------|-------|-------|-----------------|------------------|---------------|-----------------|
| 0.6613        | 1.0   | 663   | 0.4750          | 0.8321           | 0.6552        | 0.7167          |
| 0.4993        | 2.0   | 1326  | 0.4404          | 0.8366           | 0.6583        | 0.7203          |
| 0.443         | 3.0   | 1989  | 0.4261          | 0.8319           | 0.6562        | 0.7176          |
| 0.3482        | 4.0   | 2652  | 0.4198          | 0.8348           | 0.6571        | 0.7191          |
| 0.3206        | 5.0   | 3315  | 0.4233          | 0.8344           | 0.656         | 0.7183          |
| 0.294         | 6.0   | 3978  | 0.4334          | 0.835            | 0.657         | 0.719           |
| 0.2404        | 7.0   | 4641  | 0.4437          | 0.8334           | 0.6559        | 0.7178          |
| 0.2228        | 8.0   | 5304  | 0.4438          | 0.8348           | 0.6565        | 0.7187          |
| 0.211         | 9.0   | 5967  | 0.4516          | 0.8329           | 0.6549        | 0.717           |
| 0.1713        | 10.0  | 6630  | 0.4535          | 0.8332           | 0.6547        | 0.7169          |
| 0.1591        | 11.0  | 7293  | 0.4763          | 0.8349           | 0.6561        | 0.7184          |
| 0.1555        | 12.0  | 7956  | 0.4824          | 0.8311           | 0.6534        | 0.7153          |
| 0.1262        | 13.0  | 8619  | 0.4883          | 0.8322           | 0.655         | 0.7167          |
| 0.1164        | 14.0  | 9282  | 0.5025          | 0.8312           | 0.6539        | 0.7158          |
| 0.1108        | 15.0  | 9945  | 0.5149          | 0.8321           | 0.6535        | 0.7157          |
| 0.0926        | 16.0  | 10608 | 0.5340          | 0.8315           | 0.6544        | 0.7159          |
| 0.0856        | 17.0  | 11271 | 0.5322          | 0.8306           | 0.6518        | 0.7142          |
| 0.0785        | 18.0  | 11934 | 0.5346          | 0.8324           | 0.6549        | 0.7167          |
| 0.071         | 19.0  | 12597 | 0.5488          | 0.8311           | 0.652         | 0.714           |
| 0.0635        | 20.0  | 13260 | 0.5624          | 0.8287           | 0.6517        | 0.7132          |
| 0.0608        | 21.0  | 13923 | 0.5612          | 0.8299           | 0.6527        | 0.7141          |
| 0.0531        | 22.0  | 14586 | 0.5764          | 0.8283           | 0.6498        | 0.7119          |
| 0.0486        | 23.0  | 15249 | 0.5832          | 0.8298           | 0.6532        | 0.7148          |
| 0.0465        | 24.0  | 15912 | 0.5866          | 0.83             | 0.6522        | 0.7142          |
| 0.0418        | 25.0  | 16575 | 0.5825          | 0.83             | 0.6523        | 0.7141          |
| 0.0391        | 26.0  | 17238 | 0.5997          | 0.8306           | 0.6545        | 0.716           |
| 0.0376        | 27.0  | 17901 | 0.5894          | 0.8315           | 0.6546        | 0.7164          |
| 0.035         | 28.0  | 18564 | 0.6045          | 0.8306           | 0.6529        | 0.7149          |
| 0.0316        | 29.0  | 19227 | 0.6168          | 0.8311           | 0.6546        | 0.7162          |
| 0.0314        | 30.0  | 19890 | 0.6203          | 0.8311           | 0.6552        | 0.7164          |
| 0.0292        | 31.0  | 20553 | 0.6173          | 0.8315           | 0.6548        | 0.7163          |
| 0.0265        | 32.0  | 21216 | 0.6226          | 0.832            | 0.6548        | 0.7166          |
| 0.0274        | 33.0  | 21879 | 0.6264          | 0.8314           | 0.6538        | 0.7155          |
| 0.0247        | 34.0  | 22542 | 0.6254          | 0.8289           | 0.6515        | 0.7132          |
| 0.0238        | 35.0  | 23205 | 0.6254          | 0.8307           | 0.6519        | 0.7142          |
| 0.0232        | 36.0  | 23868 | 0.6295          | 0.8287           | 0.6515        | 0.7133          |
| 0.0215        | 37.0  | 24531 | 0.6326          | 0.8293           | 0.6523        | 0.7138          |
| 0.0212        | 38.0  | 25194 | 0.6332          | 0.8295           | 0.6522        | 0.714           |
| 0.0221        | 39.0  | 25857 | 0.6335          | 0.8305           | 0.6528        | 0.7147          |
| 0.0202        | 40.0  | 26520 | 0.6340          | 0.83             | 0.6526        | 0.7144          |
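
The Rouge2 precision/recall/F-measure columns can be computed per prediction/reference pair with the rouge_score package. The exact aggregation used to produce the table above is not documented here, so this is only a sketch with made-up sentences:

```python
from rouge_score import rouge_scorer

# Score one generated paraphrase (hypothetical) against its reference.
scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
scores = scorer.score(
    "aspirin reduces the risk of heart attack",            # reference
    "aspirin lowers the risk of myocardial infarction",    # model output
)
r2 = scores["rouge2"]
print(f"P={r2.precision:.4f} R={r2.recall:.4f} F={r2.fmeasure:.4f}")
```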

Framework versions

  • Transformers 4.12.3
  • Pytorch 1.9.0+cu111
  • Datasets 1.15.1
  • Tokenizers 0.10.3
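
A quick way to confirm a local environment matches these pins (a convenience sketch, not part of the original card):

```python
import datasets
import tokenizers
import torch
import transformers

# Compare installed versions against the pins listed above.
for mod in (transformers, torch, datasets, tokenizers):
    print(mod.__name__, mod.__version__)
```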