yuvraj17 committed
Commit 22aeb4c
Parent: bdd2978

End of training

Files changed (2)
  1. README.md +224 -0
  2. adapter_model.bin +3 -0
README.md ADDED
---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: EvolCodeLlama-3.1-8B-Instruct
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
is_llama_derived_model: true
hub_model_id: EvolCodeLlama-3.1-8B-Instruct

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: mlabonne/Evol-Instruct-Python-1k
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 2048
sample_packing: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: "<|end_of_text|>"

```

</details><br>

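For readers not using axolotl, the QLoRA portion of this config translates roughly into plain `transformers` + `peft` as sketched below. This is a sketch, not a verified reproduction: the 4-bit quantization details (nf4, double quantization) are common QLoRA defaults and an assumption here, since the YAML only sets `load_in_4bit: true`.

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: true
    bnb_4bit_quant_type="nf4",              # assumed QLoRA default
    bnb_4bit_use_double_quant=True,         # assumed QLoRA default
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches bf16: true
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # flash_attention: true
)
model = prepare_model_for_kbit_training(model)  # prep quantized weights for training

lora_config = LoraConfig(
    r=32,                         # lora_r
    lora_alpha=16,                # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    target_modules="all-linear",  # lora_target_linear: true
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable adapter params vs. total
```
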
# EvolCodeLlama-3.1-8B-Instruct

This model is a QLoRA fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the [mlabonne/Evol-Instruct-Python-1k](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4057

## Model description

A PEFT (QLoRA) adapter for Meta-Llama-3.1-8B-Instruct, trained with axolotl using the config above. This repository contains only the LoRA adapter weights; the base model is loaded separately.

## Intended uses & limitations

Intended for Python code generation and coding instruction following, in line with the training data; a loading sketch follows below. Use is subject to the Llama 3.1 license of the base model, the adapter inherits the base model's limitations and biases, and no benchmark results are reported.

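A minimal inference sketch with `peft` is below. The repo id `yuvraj17/EvolCodeLlama-3.1-8B-Instruct` is an assumption (committer name plus the `hub_model_id` from the config above), and the tokenizer is assumed to have been pushed alongside the adapter, as axolotl typically does. Because the dataset uses `type: alpaca`, prompts in the alpaca instruction template should match training most closely.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "yuvraj17/EvolCodeLlama-3.1-8B-Instruct"  # assumed repo id

# AutoPeftModelForCausalLM reads the base model recorded in the adapter
# config (meta-llama/Meta-Llama-3.1-8B-Instruct) and attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)  # assumed pushed with adapter

# Alpaca-style prompt, matching the `type: alpaca` dataset format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that checks whether "
    "a string is a palindrome.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
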
## Training and evaluation data

Fine-tuned on [mlabonne/Evol-Instruct-Python-1k](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k) in the alpaca format, with 2% of the examples held out for evaluation (`val_set_size: 0.02` in the config above).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: paged AdamW (32-bit) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

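Expressed as `transformers` `TrainingArguments`, as a sketch for reproducing the run outside axolotl (values are copied from the list above; everything unlisted stays at library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./qlora-out",
    per_device_train_batch_size=2,  # train_batch_size
    per_device_eval_batch_size=2,   # eval_batch_size
    gradient_accumulation_steps=4,  # total_train_batch_size = 2 * 4 = 8
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=3,
    optim="paged_adamw_32bit",      # paged AdamW, as in the axolotl config
    weight_decay=0.0,
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=1,
    save_strategy="epoch",
    seed=42,
)
```
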
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.388         | 0.0120 | 1    | 0.4443          |
| 0.3646        | 0.0359 | 3    | 0.4441          |
| 0.3216        | 0.0719 | 6    | 0.4439          |
| 0.3628        | 0.1078 | 9    | 0.4435          |
| 0.2506        | 0.1437 | 12   | 0.4417          |
| 0.2855        | 0.1796 | 15   | 0.4379          |
| 0.2472        | 0.2156 | 18   | 0.4310          |
| 0.3146        | 0.2515 | 21   | 0.4243          |
| 0.2829        | 0.2874 | 24   | 0.4185          |
| 0.2926        | 0.3234 | 27   | 0.4139          |
| 0.3832        | 0.3593 | 30   | 0.4099          |
| 0.3           | 0.3952 | 33   | 0.4069          |
| 0.2759        | 0.4311 | 36   | 0.4051          |
| 0.341         | 0.4671 | 39   | 0.4017          |
| 0.2268        | 0.5030 | 42   | 0.3989          |
| 0.3938        | 0.5389 | 45   | 0.3971          |
| 0.3478        | 0.5749 | 48   | 0.3951          |
| 0.2745        | 0.6108 | 51   | 0.3935          |
| 0.2623        | 0.6467 | 54   | 0.3920          |
| 0.3743        | 0.6826 | 57   | 0.3903          |
| 0.3205        | 0.7186 | 60   | 0.3898          |
| 0.332         | 0.7545 | 63   | 0.3897          |
| 0.268         | 0.7904 | 66   | 0.3876          |
| 0.2842        | 0.8263 | 69   | 0.3873          |
| 0.3677        | 0.8623 | 72   | 0.3868          |
| 0.212         | 0.8982 | 75   | 0.3857          |
| 0.2656        | 0.9341 | 78   | 0.3854          |
| 0.2499        | 0.9701 | 81   | 0.3844          |
| 0.3512        | 1.0060 | 84   | 0.3850          |
| 0.3069        | 1.0269 | 87   | 0.3848          |
| 0.3037        | 1.0629 | 90   | 0.3856          |
| 0.2785        | 1.0988 | 93   | 0.3864          |
| 0.206         | 1.1347 | 96   | 0.3873          |
| 0.3354        | 1.1707 | 99   | 0.3912          |
| 0.3281        | 1.2066 | 102  | 0.3882          |
| 0.3452        | 1.2425 | 105  | 0.3849          |
| 0.3153        | 1.2784 | 108  | 0.3851          |
| 0.3846        | 1.3144 | 111  | 0.3851          |
| 0.2847        | 1.3503 | 114  | 0.3842          |
| 0.3128        | 1.3862 | 117  | 0.3842          |
| 0.282         | 1.4222 | 120  | 0.3866          |
| 0.2186        | 1.4581 | 123  | 0.3876          |
| 0.2122        | 1.4940 | 126  | 0.3862          |
| 0.2877        | 1.5299 | 129  | 0.3837          |
| 0.2771        | 1.5659 | 132  | 0.3822          |
| 0.3518        | 1.6018 | 135  | 0.3820          |
| 0.302         | 1.6377 | 138  | 0.3829          |
| 0.2653        | 1.6737 | 141  | 0.3833          |
| 0.3281        | 1.7096 | 144  | 0.3832          |
| 0.2933        | 1.7455 | 147  | 0.3821          |
| 0.1959        | 1.7814 | 150  | 0.3824          |
| 0.2013        | 1.8174 | 153  | 0.3830          |
| 0.1909        | 1.8533 | 156  | 0.3824          |
| 0.2321        | 1.8892 | 159  | 0.3812          |
| 0.2695        | 1.9251 | 162  | 0.3798          |
| 0.2516        | 1.9611 | 165  | 0.3796          |
| 0.2148        | 1.9970 | 168  | 0.3796          |
| 0.2233        | 2.0180 | 171  | 0.3802          |
| 0.234         | 2.0539 | 174  | 0.3844          |
| 0.2615        | 2.0898 | 177  | 0.3938          |
| 0.1582        | 2.1257 | 180  | 0.4031          |
| 0.218         | 2.1617 | 183  | 0.4071          |
| 0.2438        | 2.1976 | 186  | 0.4072          |
| 0.1822        | 2.2335 | 189  | 0.4050          |
| 0.2163        | 2.2695 | 192  | 0.4028          |
| 0.1513        | 2.3054 | 195  | 0.4021          |
| 0.1898        | 2.3413 | 198  | 0.4031          |
| 0.1857        | 2.3772 | 201  | 0.4059          |
| 0.1909        | 2.4132 | 204  | 0.4075          |
| 0.1119        | 2.4491 | 207  | 0.4092          |
| 0.1794        | 2.4850 | 210  | 0.4091          |
| 0.1188        | 2.5210 | 213  | 0.4081          |
| 0.1525        | 2.5569 | 216  | 0.4073          |
| 0.1897        | 2.5928 | 219  | 0.4069          |
| 0.1785        | 2.6287 | 222  | 0.4064          |
| 0.169         | 2.6647 | 225  | 0.4064          |
| 0.1518        | 2.7006 | 228  | 0.4060          |
| 0.1896        | 2.7365 | 231  | 0.4052          |
| 0.1675        | 2.7725 | 234  | 0.4055          |
| 0.2193        | 2.8084 | 237  | 0.4055          |
| 0.1887        | 2.8443 | 240  | 0.4057          |
| 0.1639        | 2.8802 | 243  | 0.4055          |
| 0.1701        | 2.9162 | 246  | 0.4058          |
| 0.2019        | 2.9521 | 249  | 0.4057          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- PyTorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
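
To serve the model without a runtime `peft` dependency, the adapter can be merged into the base weights. A minimal sketch, again assuming the hypothetical repo id from above; merging needs the base model loaded unquantized (roughly 16 GB in bf16 for an 8B model):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "yuvraj17/EvolCodeLlama-3.1-8B-Instruct"  # assumed repo id

# Load base + adapter in bf16 (not 4-bit), fold the LoRA deltas into the
# base weights, and save a standalone checkpoint.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16)
merged = model.merge_and_unload()
merged.save_pretrained("EvolCodeLlama-3.1-8B-Instruct-merged")
AutoTokenizer.from_pretrained(adapter_id).save_pretrained("EvolCodeLlama-3.1-8B-Instruct-merged")
```
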
adapter_model.bin ADDED (Git LFS pointer; the adapter weights themselves are ~336 MB)
version https://git-lfs.github.com/spec/v1
oid sha256:0629cb0f17dd639dbc5a071ba7abc0b7234e8d275d6873339267b721e47c4d93
size 335706186