
plbart-finetuned-unitTest-v1

This model is a fine-tuned version of uclanlp/plbart-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4949
  • Bleu: 0.0000
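
A minimal usage sketch, assuming the checkpoint is published under a repository id like the placeholder below (the repository id and the example input are hypothetical, not taken from this card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repository id; replace with the actual location of this checkpoint.
model_id = "your-username/plbart-finetuned-unitTest-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: a focal method for which a unit test should be generated.
source = "public int add(int a, int b) { return a + b; }"
inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```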

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
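
As a rough sketch (not the authors' actual training script), these settings map onto a Seq2SeqTrainingArguments configuration; the output directory, evaluation strategy, and generation-based evaluation are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch matching the listed hyperparameters. The Adam betas/epsilon above are
# the library defaults, so they are not set explicitly here.
training_args = Seq2SeqTrainingArguments(
    output_dir="plbart-finetuned-unitTest-v1",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the results table reports one evaluation per epoch
    predict_with_generate=True,   # assumed: required to compute BLEU from generated sequences
)
```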

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu   |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2819        | 1.0   | 918   | 0.8663          | 0.0000 |
| 0.7575        | 2.0   | 1836  | 0.8740          | 0.0000 |
| 0.6118        | 3.0   | 2754  | 0.8570          | 0.0000 |
| 0.506         | 4.0   | 3672  | 0.9254          | 0.0000 |
| 0.4114        | 5.0   | 4590  | 0.9183          | 0.0000 |
| 0.3554        | 6.0   | 5508  | 0.9901          | 0.0000 |
| 0.274         | 7.0   | 6426  | 1.0385          | 0.0000 |
| 0.248         | 8.0   | 7344  | 1.0427          | 0.0000 |
| 0.2008        | 9.0   | 8262  | 1.0427          | 0.0000 |
| 0.1883        | 10.0  | 9180  | 1.1100          | 0.0000 |
| 0.1572        | 11.0  | 10098 | 1.1253          | 0.0000 |
| 0.1349        | 12.0  | 11016 | 1.1110          | 0.0000 |
| 0.1139        | 13.0  | 11934 | 1.1399          | 0.0000 |
| 0.1012        | 14.0  | 12852 | 1.1812          | 0.0000 |
| 0.0924        | 15.0  | 13770 | 1.2108          | 0.0000 |
| 0.078         | 16.0  | 14688 | 1.2008          | 0.0000 |
| 0.0738        | 17.0  | 15606 | 1.2454          | 0.0000 |
| 0.0627        | 18.0  | 16524 | 1.2557          | 0.0000 |
| 0.0559        | 19.0  | 17442 | 1.2442          | 0.0000 |
| 0.0502        | 20.0  | 18360 | 1.2724          | 0.0000 |
| 0.0478        | 21.0  | 19278 | 1.2731          | 0.0000 |
| 0.044         | 22.0  | 20196 | 1.2850          | 0.0000 |
| 0.038         | 23.0  | 21114 | 1.2929          | 0.0000 |
| 0.0357        | 24.0  | 22032 | 1.3371          | 0.0000 |
| 0.0316        | 25.0  | 22950 | 1.3470          | 0.0000 |
| 0.0285        | 26.0  | 23868 | 1.3976          | 0.0000 |
| 0.0269        | 27.0  | 24786 | 1.3808          | 0.0000 |
| 0.0258        | 28.0  | 25704 | 1.4028          | 0.0000 |
| 0.0228        | 29.0  | 26622 | 1.4032          | 0.0000 |
| 0.0219        | 30.0  | 27540 | 1.4082          | 0.0000 |
| 0.0207        | 31.0  | 28458 | 1.4250          | 0.0000 |
| 0.0186        | 32.0  | 29376 | 1.4383          | 0.0000 |
| 0.0162        | 33.0  | 30294 | 1.4060          | 0.0000 |
| 0.0151        | 34.0  | 31212 | 1.4416          | 0.0000 |
| 0.015         | 35.0  | 32130 | 1.4397          | 0.0000 |
| 0.0146        | 36.0  | 33048 | 1.4644          | 0.0000 |
| 0.0107        | 37.0  | 33966 | 1.4560          | 0.0000 |
| 0.0116        | 38.0  | 34884 | 1.4565          | 0.0000 |
| 0.0116        | 39.0  | 35802 | 1.4707          | 0.0000 |
| 0.0105        | 40.0  | 36720 | 1.4684          | 0.0000 |
| 0.0084        | 41.0  | 37638 | 1.4776          | 0.0000 |
| 0.0098        | 42.0  | 38556 | 1.4885          | 0.0000 |
| 0.0078        | 43.0  | 39474 | 1.4916          | 0.0000 |
| 0.0071        | 44.0  | 40392 | 1.4891          | 0.0000 |
| 0.0071        | 45.0  | 41310 | 1.4883          | 0.0000 |
| 0.0071        | 46.0  | 42228 | 1.4880          | 0.0000 |
| 0.0069        | 47.0  | 43146 | 1.4867          | 0.0000 |
| 0.0068        | 48.0  | 44064 | 1.4936          | 0.0000 |
| 0.0061        | 49.0  | 44982 | 1.4943          | 0.0000 |
| 0.0062        | 50.0  | 45900 | 1.4949          | 0.0000 |
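
For reference, a minimal sketch of how a BLEU score in this format can be computed with the evaluate library; the prediction and reference strings below are placeholders, not taken from the actual evaluation data:

```python
import evaluate

bleu = evaluate.load("bleu")

# Placeholder generated test and reference test (the evaluation set is not described in this card).
predictions = ["assertEquals(3, calculator.add(1, 2));"]
references = [["assertEquals(3, Calculator.add(1, 2));"]]

result = bleu.compute(predictions=predictions, references=references)
print(f"Bleu: {result['bleu']:.4f}")
```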

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3