
codet5p-770m-py-sanitized-codebleu-1-True-1e-07-0.1-prefix-tuning

This model is a fine-tuned version of Salesforce/codet5p-770m-py on the mbpp dataset. It achieves the following results on the evaluation set:

  • Loss: 7.8250
  • Codebleu: 0.0215
  • Ngram Match Score: 0.0004
  • Weighted Ngram Match Score: 0.0003
  • Syntax Match Score: 0.0013
  • Dataflow Match Score: 0.0522
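
CodeBLEU (Ren et al., 2020) is a weighted combination of the four component scores listed above. As a minimal sketch of that aggregation, with illustrative equal weights (the weights actually used for this card are not recorded here, and the reported 0.0215 implies a different weighting):

```python
# Hedged sketch: CodeBLEU is a weighted sum of its four components
# (Ren et al., 2020). The equal weights below are illustrative only;
# the weighting used to produce the 0.0215 reported above is not
# recorded in this card.
def codebleu(ngram, weighted_ngram, syntax, dataflow,
             weights=(0.25, 0.25, 0.25, 0.25)):
    a, b, g, d = weights
    return a * ngram + b * weighted_ngram + g * syntax + d * dataflow

# Component scores from the evaluation set above:
print(codebleu(0.0004, 0.0003, 0.0013, 0.0522))  # ~0.0136 with equal weights
```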

Model description

More information needed

Intended uses & limitations

More information needed
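
Until this section is filled in, the following is a hypothetical sketch of how one might load this checkpoint for inference. It assumes the repository contains a PEFT prefix-tuning adapter on top of the base model (consistent with the model name); if the full model was saved instead, load it directly with `AutoModelForSeq2SeqLM`.

```python
# Hypothetical usage sketch: assumes this repo holds a PEFT
# prefix-tuning adapter for Salesforce/codet5p-770m-py.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "Salesforce/codet5p-770m-py"
adapter_id = "vichyt/codet5p-770m-py-sanitized-codebleu-1-True-1e-07-0.1-prefix-tuning"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter

prompt = "Write a Python function to add two numbers."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```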

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-07
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 50
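
As a sketch, the configuration above maps onto `transformers` training arguments roughly as follows (assuming the standard `Seq2SeqTrainingArguments`; `output_dir` is a placeholder):

```python
# Sketch reconstructing the listed hyperparameters; not the exact
# training script used for this model.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="codet5p-770m-py-prefix-tuning",  # placeholder
    learning_rate=1e-7,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=50,
)
# Optimizer: Adam with betas=(0.9, 0.999) and eps=1e-8, i.e. the
# Trainer's default AdamW settings.
```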

Training results

| Training Loss | Epoch | Step | Validation Loss | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score |
|---|---|---|---|---|---|---|---|---|
| 7.9988 | 1.0 | 8 | 7.8277 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 8.0077 | 2.0 | 16 | 7.8276 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 8.0173 | 3.0 | 24 | 7.8276 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 7.996 | 4.0 | 32 | 7.8276 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 8.0369 | 5.0 | 40 | 7.8276 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 8.0406 | 6.0 | 48 | 7.8276 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 8.0162 | 7.0 | 56 | 7.8275 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 7.9996 | 8.0 | 64 | 7.8274 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 7.9955 | 9.0 | 72 | 7.8274 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 8.015 | 10.0 | 80 | 7.8273 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 8.0264 | 11.0 | 88 | 7.8272 | 0.0231 | 0.0004 | 0.0004 | 0.0013 | 0.0562 |
| 8.0091 | 12.0 | 96 | 7.8270 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0184 | 13.0 | 104 | 7.8269 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0261 | 14.0 | 112 | 7.8268 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9791 | 15.0 | 120 | 7.8267 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0159 | 16.0 | 128 | 7.8265 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9996 | 17.0 | 136 | 7.8264 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0404 | 18.0 | 144 | 7.8263 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9689 | 19.0 | 152 | 7.8262 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0391 | 20.0 | 160 | 7.8261 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9905 | 21.0 | 168 | 7.8260 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0239 | 22.0 | 176 | 7.8259 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0125 | 23.0 | 184 | 7.8258 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9988 | 24.0 | 192 | 7.8257 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9675 | 25.0 | 200 | 7.8257 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0127 | 26.0 | 208 | 7.8256 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0141 | 27.0 | 216 | 7.8255 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0028 | 28.0 | 224 | 7.8255 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0304 | 29.0 | 232 | 7.8254 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0433 | 30.0 | 240 | 7.8254 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0184 | 31.0 | 248 | 7.8253 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0456 | 32.0 | 256 | 7.8253 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0378 | 33.0 | 264 | 7.8252 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0035 | 34.0 | 272 | 7.8252 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0212 | 35.0 | 280 | 7.8252 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0033 | 36.0 | 288 | 7.8251 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0311 | 37.0 | 296 | 7.8251 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0264 | 38.0 | 304 | 7.8251 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9892 | 39.0 | 312 | 7.8251 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0047 | 40.0 | 320 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0111 | 41.0 | 328 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0124 | 42.0 | 336 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0408 | 43.0 | 344 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9969 | 44.0 | 352 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0283 | 45.0 | 360 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0449 | 46.0 | 368 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0017 | 47.0 | 376 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0328 | 48.0 | 384 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 7.9923 | 49.0 | 392 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |
| 8.0266 | 50.0 | 400 | 7.8250 | 0.0215 | 0.0004 | 0.0003 | 0.0013 | 0.0522 |

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1
  • Datasets 2.14.4
  • Tokenizers 0.13.3