codet5p-770m-py-sanitized-codebleu-1-True-5e-05-0.1-prefix-tuning-10

This model is a fine-tuned version of Salesforce/codet5p-770m-py on the mbpp dataset. It achieves the following results on the evaluation set:

  • Loss: 6.9322
  • Codebleu: 0.4
  • Ngram Match Score: 0
  • Weighted Ngram Match Score: 0
  • Syntax Match Score: 0.0
  • Dataflow Match Score: 0.0

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 50
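The warmup-then-linear-decay schedule implied by these settings can be sketched as follows. The total of 400 optimizer steps comes from the training table below (8 steps per epoch × 50 epochs); the function name is illustrative, not the transformers API:

```python
def linear_warmup_lr(step: int, base_lr: float = 5e-05,
                     warmup_steps: int = 100, total_steps: int = 400) -> float:
    """Learning rate at a given optimizer step: linear ramp from 0 to
    base_lr over warmup_steps, then linear decay back to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Note that with only 400 total steps, a quarter of training is spent in warmup.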

Training results

| Training Loss | Epoch | Step | Validation Loss | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|
| 8.4569 | 1.0 | 8 | 8.3211 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 8.3909 | 2.0 | 16 | 8.3110 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 8.4352 | 3.0 | 24 | 8.2963 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 8.4135 | 4.0 | 32 | 8.2770 | 0.4011 | 0 | 0 | 0.0026 | 0.0 |
| 8.3584 | 5.0 | 40 | 8.2522 | 0.4011 | 0 | 0 | 0.0026 | 0.0 |
| 8.3913 | 6.0 | 48 | 8.2214 | 0.4011 | 0 | 0 | 0.0026 | 0.0 |
| 8.3619 | 7.0 | 56 | 8.1818 | 0.4011 | 0 | 0 | 0.0026 | 0.0 |
| 8.3378 | 8.0 | 64 | 8.1403 | 0.4011 | 0 | 0 | 0.0026 | 0.0 |
| 8.3079 | 9.0 | 72 | 8.0977 | 0.4011 | 0 | 0 | 0.0026 | 0.0 |
| 8.283 | 10.0 | 80 | 8.0445 | 0.4011 | 0 | 0 | 0.0026 | 0.0 |
| 8.2672 | 11.0 | 88 | 7.9846 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 8.1667 | 12.0 | 96 | 7.9191 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 8.1512 | 13.0 | 104 | 7.8567 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 8.0747 | 14.0 | 112 | 7.7911 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 8.0491 | 15.0 | 120 | 7.7305 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 8.0468 | 16.0 | 128 | 7.6730 | 0.0013 | 0.0000 | 0.0002 | 0.0013 | 0.0020 |
| 8.0098 | 17.0 | 136 | 7.6252 | 0.0013 | 0 | 0 | 0.0013 | 0.0020 |
| 7.9435 | 18.0 | 144 | 7.5634 | 0.0013 | 0 | 0 | 0.0013 | 0.0020 |
| 7.9474 | 19.0 | 152 | 7.5135 | 0.0027 | 0 | 0 | 0.0026 | 0.0040 |
| 7.9001 | 20.0 | 160 | 7.4649 | 0.0035 | 0.0000 | 0.0002 | 0.0026 | 0.0060 |
| 7.8258 | 21.0 | 168 | 7.4191 | 0.0053 | 0 | 0 | 0.0013 | 0.0120 |
| 7.8187 | 22.0 | 176 | 7.3800 | 0.0086 | 0 | 0 | 0.0013 | 0.0201 |
| 7.8188 | 23.0 | 184 | 7.3427 | 0.0045 | 0 | 0 | 0.0013 | 0.0100 |
| 7.757 | 24.0 | 192 | 7.3078 | 0.0045 | 0 | 0 | 0.0013 | 0.0100 |
| 7.779 | 25.0 | 200 | 7.2778 | 0.0037 | 0 | 0 | 0.0013 | 0.0080 |
| 7.7325 | 26.0 | 208 | 7.2472 | 0.0021 | 0 | 0 | 0.0013 | 0.0040 |
| 7.6905 | 27.0 | 216 | 7.2038 | 0.0021 | 0 | 0 | 0.0013 | 0.0040 |
| 7.7171 | 28.0 | 224 | 7.1760 | 0.0037 | 0 | 0 | 0.0013 | 0.0080 |
| 7.6543 | 29.0 | 232 | 7.1521 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.6224 | 30.0 | 240 | 7.1293 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.6304 | 31.0 | 248 | 7.1081 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.6372 | 32.0 | 256 | 7.0884 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.6063 | 33.0 | 264 | 7.0699 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.5567 | 34.0 | 272 | 7.0523 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.5592 | 35.0 | 280 | 7.0367 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.5703 | 36.0 | 288 | 7.0225 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.5456 | 37.0 | 296 | 7.0091 | 0.4005 | 0 | 0 | 0.0013 | 0.0 |
| 7.5418 | 38.0 | 304 | 6.9977 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.5104 | 39.0 | 312 | 6.9866 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.5034 | 40.0 | 320 | 6.9773 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4976 | 41.0 | 328 | 6.9687 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.5127 | 42.0 | 336 | 6.9609 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4824 | 43.0 | 344 | 6.9542 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4834 | 44.0 | 352 | 6.9486 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4707 | 45.0 | 360 | 6.9436 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4733 | 46.0 | 368 | 6.9396 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4736 | 47.0 | 376 | 6.9364 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4883 | 48.0 | 384 | 6.9341 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4565 | 49.0 | 392 | 6.9327 | 0.4 | 0 | 0 | 0.0 | 0.0 |
| 7.4785 | 50.0 | 400 | 6.9322 | 0.4 | 0 | 0 | 0.0 | 0.0 |

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1
  • Datasets 2.14.4
  • Tokenizers 0.13.3