
h0

This model is a fine-tuned version of distilgpt2 on the HearthStone dataset; the training code is available in the GitHub repo. It achieves the following results on the evaluation set (a sketch for recomputing the plain-text metrics follows the list):

  • Loss: 0.3117
  • Exact Match: 0.1970
  • BLEU: 0.9085
  • CodeBLEU: 0.7341
  • N-gram Match Score: 0.7211
  • Weighted N-gram Match Score: 0.7299
  • Syntax Match Score: 0.7536
  • Dataflow Match Score: 0.7317
  • chrF: 92.8689
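
For reference, a minimal sketch of recomputing the plain-text metrics with the evaluate library; the predictions and references below are hypothetical placeholders, and CodeBLEU with its sub-scores requires a separate implementation that is not shown here:

```python
import evaluate

# Hypothetical placeholders; real predictions come from decoding the
# evaluation split, real references from the dataset's target code.
predictions = ["class Fireball(SpellCard): pass"]
references = [["class Fireball(SpellCard): pass"]]

bleu = evaluate.load("bleu")  # corpus BLEU on tokenized text
chrf = evaluate.load("chrf")  # character n-gram F-score, scaled 0-100

print(bleu.compute(predictions=predictions, references=references)["bleu"])
print(chrf.compute(predictions=predictions, references=references)["score"])

# Exact match: fraction of predictions identical to their reference.
em = sum(p == r[0] for p, r in zip(predictions, references)) / len(predictions)
print(em)
```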

Model description

DistilGPT2 fine-tuned on the HearthStone dataset for 200 epochs.

Intended uses & limitations

Code synthesis for HearthStone cards: generating a card's Python implementation from its textual description.
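
A minimal usage sketch, assuming the model is published as dvitel/h0 and that prompts follow the card-attribute serialization used by the training script in the GitHub repo; the prompt below is a hypothetical illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dvitel/h0")
model = AutoModelForCausalLM.from_pretrained("dvitel/h0")

# Hypothetical card description; the real serialization of card
# attributes is defined by the preprocessing in the GitHub repo.
prompt = "NAME: Fireball; TYPE: spell; COST: 4; DESCRIPTION: Deal 6 damage."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding for reproducible synthesis
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```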

Training and evaluation data

See the train/validation/test splits of the HearthStone dataset.
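
A minimal sketch of inspecting the splits with the datasets library; the dataset id dvitel/hearthstone is an assumption based on this card's dataset link:

```python
from datasets import load_dataset

# Dataset id is an assumption; adjust to the id linked from this card.
ds = load_dataset("dvitel/hearthstone")
print(ds)  # shows the available splits and the number of examples in each
```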

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 17
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 200
  • mixed_precision_training: Native AMP
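
A minimal sketch of how these values map onto Hugging Face TrainingArguments; the Adam betas and epsilon listed above match the library defaults, and fp16 (Native AMP) assumes a CUDA device:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above. adam_beta1, adam_beta2, and
# adam_epsilon are left at their defaults (0.9, 0.999, 1e-8).
args = TrainingArguments(
    output_dir="h0",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=17,
    lr_scheduler_type="cosine",
    num_train_epochs=200,
    fp16=True,  # Native AMP; requires a CUDA device
)
```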

Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match | BLEU | CodeBLEU | N-gram Match | Weighted N-gram Match | Syntax Match | Dataflow Match | chrF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.543 | 11.94 | 1600 | 0.2701 | 0.0152 | 0.8552 | 0.6144 | 0.6027 | 0.6136 | 0.6431 | 0.5982 | 89.0280 |
| 0.1459 | 23.88 | 3200 | 0.2408 | 0.0909 | 0.8841 | 0.6733 | 0.6610 | 0.6719 | 0.7210 | 0.6393 | 91.2517 |
| 0.0801 | 35.82 | 4800 | 0.2498 | 0.1515 | 0.8966 | 0.6999 | 0.6954 | 0.7054 | 0.7326 | 0.6662 | 92.1356 |
| 0.0498 | 47.76 | 6400 | 0.2569 | 0.1818 | 0.9012 | 0.7015 | 0.7022 | 0.7114 | 0.7428 | 0.6496 | 92.4668 |
| 0.0323 | 59.7 | 8000 | 0.2732 | 0.1667 | 0.9044 | 0.7241 | 0.7025 | 0.7123 | 0.7551 | 0.7266 | 92.5429 |
| 0.0214 | 71.64 | 9600 | 0.2896 | 0.1667 | 0.9034 | 0.7228 | 0.7101 | 0.7195 | 0.7670 | 0.6945 | 92.4258 |
| 0.015 | 83.58 | 11200 | 0.2870 | 0.1667 | 0.9046 | 0.7292 | 0.7137 | 0.7228 | 0.7667 | 0.7137 | 92.5979 |
| 0.0121 | 95.52 | 12800 | 0.2907 | 0.1667 | 0.9075 | 0.7287 | 0.7198 | 0.7297 | 0.7696 | 0.6958 | 92.7074 |
| 0.0093 | 107.46 | 14400 | 0.2976 | 0.1667 | 0.9073 | 0.7365 | 0.7134 | 0.7238 | 0.7732 | 0.7356 | 92.8347 |
| 0.0073 | 119.4 | 16000 | 0.3037 | 0.1818 | 0.9085 | 0.7326 | 0.7154 | 0.7241 | 0.7529 | 0.7381 | 92.8343 |
| 0.006 | 131.34 | 17600 | 0.3047 | 0.1970 | 0.9104 | 0.7410 | 0.7230 | 0.7312 | 0.7667 | 0.7433 | 92.8286 |
| 0.005 | 143.28 | 19200 | 0.3080 | 0.1970 | 0.9088 | 0.7377 | 0.7232 | 0.7316 | 0.7746 | 0.7214 | 92.8035 |
| 0.0044 | 155.22 | 20800 | 0.3071 | 0.1970 | 0.9076 | 0.7343 | 0.7196 | 0.7283 | 0.7783 | 0.7112 | 92.7742 |
| 0.004 | 167.16 | 22400 | 0.3097 | 0.1970 | 0.9082 | 0.7440 | 0.7236 | 0.7334 | 0.7601 | 0.7587 | 92.8117 |
| 0.0035 | 179.1 | 24000 | 0.3111 | 0.1970 | 0.9080 | 0.7355 | 0.7204 | 0.7295 | 0.7616 | 0.7304 | 92.7990 |
| 0.0036 | 191.04 | 25600 | 0.3117 | 0.1970 | 0.9085 | 0.7341 | 0.7211 | 0.7299 | 0.7536 | 0.7317 | 92.8689 |

Framework versions

  • Transformers 4.24.0
  • PyTorch 1.13.0
  • Datasets 2.6.1
  • Tokenizers 0.13.1
