---
license: apache-2.0
tags:
- CodeGPT-small-py
- hearthstone
metrics:
- bleu
- dvitel/codebleu
- exact_match
- chrf
datasets:
- dvitel/hearthstone
model-index:
- name: h0-1
  results:
  - task:
      type: text-generation
      name: Python Code Synthesis
    dataset:
      type: dvitel/hearthstone
      name: HearthStone
      split: test
    metrics:
      - type: exact_match
        value: 0.21212121212121213
        name: Exact Match
      - type: bleu
        value: 0.8954467480979604
        name: BLEU        
      - type: dvitel/codebleu
        value: 0.6976253554171774
        name: CodeBLEU                
      - type: chrf
        value: 91.42413429212283
        name: chrF                        

---

# h0-1

This model is a fine-tuned version of [microsoft/CodeGPT-small-py](https://huggingface.co/microsoft/CodeGPT-small-py) on the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
The training script is available in the [GitHub repo](https://github.com/dvitel/nlp-sem-parsing/blob/master/h0-1.py).
It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):
- Loss: 0.3622
- Exact Match: 0.1970
- BLEU: 0.9193
- CodeBLEU: 0.7686
- chrF: 93.5686
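
The exact evaluation script lives in the GitHub repo linked above; below is a minimal sketch of computing the same metrics with the `evaluate` library. The toy prediction/reference pair is a placeholder, and loading `dvitel/codebleu` as a Hub metric (and its `compute` signature) is an assumption:

```python
import evaluate

# Placeholder prediction/reference pair; real ones come from the test split.
predictions = ["class FooCard(Card):\n    pass"]
references = ["class FooCard(Card):\n    pass"]

exact_match = evaluate.load("exact_match")
bleu = evaluate.load("bleu")
chrf = evaluate.load("chrf")
codebleu = evaluate.load("dvitel/codebleu")  # community metric from the Hub (assumed id)

print(exact_match.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
print(chrf.compute(predictions=predictions, references=[[r] for r in references]))
print(codebleu.compute(predictions=predictions, references=references))  # signature assumed
```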

## Model description

CodeGPT-small-py fine-tuned on the HearthStone dataset for 200 epochs.

## Intended uses & limitations

HearthStone card code synthesis: generating the Python class that implements a card from its description.
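
A minimal usage sketch, assuming this card's Hub repo id is `dvitel/h0-1`; the prompt below is a placeholder, since real inputs must follow the card-description encoding used by the dataset preprocessing in the GitHub repo:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Repo id assumed from the card name; adjust if the model lives elsewhere.
tokenizer = AutoTokenizer.from_pretrained("dvitel/h0-1")
model = AutoModelForCausalLM.from_pretrained("dvitel/h0-1")

# Placeholder: replace with a card description in the dataset's input format.
prompt = "<encoded HearthStone card description>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding; decoding settings were not reported on this card
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (the synthesized code).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```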

## Training and evaluation data

See the splits of the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
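
The dataset can be loaded directly from the Hub; a minimal sketch (the `test` split is referenced by this card's metrics, other split names follow the dataset card):

```python
from datasets import load_dataset

ds = load_dataset("dvitel/hearthstone")
print(ds)             # shows the available splits and their sizes
print(ds["test"][0])  # one card-description/code pair (field names per the dataset card)
```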

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 200
- mixed_precision_training: Native AMP
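
A sketch of these settings as transformers `TrainingArguments`; the output directory is an assumption, and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="h0-1",             # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=17,
    adam_beta1=0.9,                # transformers defaults, listed for completeness
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=200,
    fp16=True,                     # "Native AMP" mixed precision
)
```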

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Exact Match | BLEU   | CodeBLEU | chrF    |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-------:|
| 0.2482        | 11.94  | 1600  | 0.2828          | 0.1364      | 0.9012 | 0.7012   | 92.2247 |
| 0.0203        | 23.88  | 3200  | 0.2968          | 0.1970      | 0.9114 | 0.7298   | 93.0236 |
| 0.0082        | 35.82  | 4800  | 0.3049          | 0.1970      | 0.9125 | 0.7480   | 93.1997 |
| 0.0049        | 47.76  | 6400  | 0.3190          | 0.1818      | 0.9125 | 0.7526   | 93.0967 |
| 0.0038        | 59.7   | 8000  | 0.3289          | 0.1818      | 0.9117 | 0.7348   | 93.1293 |
| 0.0024        | 71.64  | 9600  | 0.3358          | 0.1970      | 0.9142 | 0.7555   | 93.0747 |
| 0.0022        | 83.58  | 11200 | 0.3379          | 0.1970      | 0.9164 | 0.7642   | 93.2931 |
| 0.0013        | 95.52  | 12800 | 0.3444          | 0.2121      | 0.9189 | 0.7700   | 93.4456 |
| 0.0009        | 107.46 | 14400 | 0.3408          | 0.1970      | 0.9188 | 0.7655   | 93.4808 |
| 0.0006        | 119.4  | 16000 | 0.3522          | 0.1970      | 0.9177 | 0.7510   | 93.4061 |
| 0.0003        | 131.34 | 17600 | 0.3589          | 0.2121      | 0.9178 | 0.7614   | 93.3980 |
| 0.0002        | 143.28 | 19200 | 0.3562          | 0.2121      | 0.9179 | 0.7634   | 93.5130 |
| 0.0002        | 155.22 | 20800 | 0.3624          | 0.1970      | 0.9208 | 0.7699   | 93.6707 |
| 0.0001        | 167.16 | 22400 | 0.3608          | 0.1970      | 0.9193 | 0.7703   | 93.6082 |
| 0.0001        | 179.1  | 24000 | 0.3620          | 0.1970      | 0.9190 | 0.7667   | 93.5154 |
| 0.0001        | 191.04 | 25600 | 0.3622          | 0.1970      | 0.9193 | 0.7686   | 93.5686 |


### Framework versions

- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1