superiort committed on
Commit 1ff2a38
1 Parent(s): b039f87

End of training

Files changed (2)
  1. README.md +241 -0
  2. adapter_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,241 @@
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: nlpai-lab/KULLM3
model-index:
- name: kullm3_finetuning_test_4300QA_10epochs
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: nlpai-lab/KULLM3
base_model_config: nlpai-lab/KULLM3
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
hub_model_id: kullm3_finetuning_test_4300QA_10epochs

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: superiort/multiplechoice-4300
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./kullm3_finetuning_test_4300QA_10epochs

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: false

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 10
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
  pad_token: "</s>"  # EOS and PAD share the same token

```

</details><br>
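
With axolotl 0.4.0, a run like this is typically reproduced by saving the YAML above and invoking the axolotl CLI (e.g. `accelerate launch -m axolotl.cli.train config.yml`). For readers who prefer the raw `transformers`/`peft` APIs, the sketch below shows roughly what the quantization and LoRA settings correspond to; it is not taken from the original card, and the target-module list is an assumption about how axolotl expands `lora_target_linear: true` for a Llama-derived model.

```python
# Rough equivalent of the QLoRA settings in the config above
# (a sketch, not the exact axolotl internals).
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: true
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16: true
)

lora_config = LoraConfig(
    r=32,                # lora_r
    lora_alpha=16,       # lora_alpha
    lora_dropout=0.05,   # lora_dropout
    task_type="CAUSAL_LM",
    # Assumed expansion of `lora_target_linear: true` for a Llama-style model;
    # the card does not list the target modules explicitly.
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```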

# kullm3_finetuning_test_4300QA_10epochs

This model is a fine-tuned version of [nlpai-lab/KULLM3](https://huggingface.co/nlpai-lab/KULLM3) on the [superiort/multiplechoice-4300](https://huggingface.co/datasets/superiort/multiplechoice-4300) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4754

## Model description

This repository holds a QLoRA adapter (`lora_r: 32`, `lora_alpha: 16`) trained on top of [nlpai-lab/KULLM3](https://huggingface.co/nlpai-lab/KULLM3). Only the adapter weights (`adapter_model.bin`) are stored here, so the base model must be loaded alongside it.

## Intended uses & limitations

More information needed
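
A minimal inference sketch (not part of the auto-generated card): it assumes the adapter is published as `superiort/kullm3_finetuning_test_4300QA_10epochs` (inferred from `hub_model_id` and the committing user) and mirrors the 4-bit loading used during training, which is optional at inference time.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "nlpai-lab/KULLM3"
adapter_id = "superiort/kullm3_finetuning_test_4300QA_10epochs"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
    ),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# Example Korean prompt; the adapter was trained on Alpaca-format multiple-choice data.
prompt = "대한민국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```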

## Training and evaluation data

According to the axolotl config, training used the [superiort/multiplechoice-4300](https://huggingface.co/datasets/superiort/multiplechoice-4300) dataset in Alpaca format, with 2% of the examples (`val_set_size: 0.02`) held out as the evaluation set.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 10

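For reference, the derived batch sizes are consistent with the config: total_train_batch_size = micro_batch_size (2) × gradient_accumulation_steps (4) × num_devices (4) = 32, and total_eval_batch_size = eval_batch_size (2) × num_devices (4) = 8.
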
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4883 | 0.01 | 1 | 0.3229 |
| 0.4139 | 0.11 | 14 | 0.2783 |
| 0.3475 | 0.21 | 28 | 0.2473 |
| 0.3427 | 0.32 | 42 | 0.2353 |
| 0.303 | 0.43 | 56 | 0.2297 |
| 0.2902 | 0.53 | 70 | 0.2334 |
| 0.288 | 0.64 | 84 | 0.2271 |
| 0.2856 | 0.74 | 98 | 0.2233 |
| 0.3035 | 0.85 | 112 | 0.2182 |
| 0.2829 | 0.96 | 126 | 0.2161 |
| 0.2986 | 1.06 | 140 | 0.2219 |
| 0.2552 | 1.17 | 154 | 0.2269 |
| 0.2489 | 1.28 | 168 | 0.2223 |
| 0.2523 | 1.38 | 182 | 0.2248 |
| 0.2481 | 1.49 | 196 | 0.2220 |
| 0.235 | 1.59 | 210 | 0.2209 |
| 0.2661 | 1.7 | 224 | 0.2165 |
| 0.2522 | 1.81 | 238 | 0.2231 |
| 0.2775 | 1.91 | 252 | 0.2190 |
| 0.1825 | 2.02 | 266 | 0.2228 |
| 0.1836 | 2.13 | 280 | 0.2331 |
| 0.1655 | 2.23 | 294 | 0.2378 |
| 0.1604 | 2.34 | 308 | 0.2376 |
| 0.1766 | 2.44 | 322 | 0.2356 |
| 0.1897 | 2.55 | 336 | 0.2344 |
| 0.1756 | 2.66 | 350 | 0.2375 |
| 0.1616 | 2.76 | 364 | 0.2387 |
| 0.1436 | 2.87 | 378 | 0.2371 |
| 0.166 | 2.98 | 392 | 0.2341 |
| 0.0828 | 3.08 | 406 | 0.2602 |
| 0.0893 | 3.19 | 420 | 0.2747 |
| 0.079 | 3.29 | 434 | 0.2760 |
| 0.0843 | 3.4 | 448 | 0.2780 |
| 0.0815 | 3.51 | 462 | 0.2812 |
| 0.0948 | 3.61 | 476 | 0.2828 |
| 0.0845 | 3.72 | 490 | 0.2766 |
| 0.1025 | 3.83 | 504 | 0.2772 |
| 0.0763 | 3.93 | 518 | 0.2813 |
| 0.0322 | 4.04 | 532 | 0.3309 |
| 0.031 | 4.14 | 546 | 0.3221 |
| 0.028 | 4.25 | 560 | 0.3348 |
| 0.031 | 4.36 | 574 | 0.3374 |
| 0.0309 | 4.46 | 588 | 0.3355 |
| 0.0331 | 4.57 | 602 | 0.3344 |
| 0.034 | 4.68 | 616 | 0.3384 |
| 0.0324 | 4.78 | 630 | 0.3420 |
| 0.0301 | 4.89 | 644 | 0.3350 |
| 0.0327 | 4.99 | 658 | 0.3387 |
| 0.0111 | 5.1 | 672 | 0.4010 |
| 0.0089 | 5.21 | 686 | 0.3917 |
| 0.0075 | 5.31 | 700 | 0.3925 |
| 0.0106 | 5.42 | 714 | 0.3911 |
| 0.0091 | 5.53 | 728 | 0.3937 |
| 0.0109 | 5.63 | 742 | 0.3985 |
| 0.009 | 5.74 | 756 | 0.4044 |
| 0.0095 | 5.84 | 770 | 0.3949 |
| 0.0075 | 5.95 | 784 | 0.3984 |
| 0.0036 | 6.06 | 798 | 0.4133 |
| 0.0031 | 6.16 | 812 | 0.4424 |
| 0.0026 | 6.27 | 826 | 0.4525 |
| 0.0034 | 6.38 | 840 | 0.4519 |
| 0.0019 | 6.48 | 854 | 0.4513 |
| 0.0018 | 6.59 | 868 | 0.4517 |
| 0.0023 | 6.69 | 882 | 0.4520 |
| 0.0016 | 6.8 | 896 | 0.4534 |
| 0.0018 | 6.91 | 910 | 0.4528 |
| 0.001 | 7.01 | 924 | 0.4537 |
| 0.0011 | 7.12 | 938 | 0.4581 |
| 0.0009 | 7.23 | 952 | 0.4631 |
| 0.0009 | 7.33 | 966 | 0.4662 |
| 0.0013 | 7.44 | 980 | 0.4680 |
| 0.0008 | 7.54 | 994 | 0.4700 |
| 0.001 | 7.65 | 1008 | 0.4711 |
| 0.0009 | 7.76 | 1022 | 0.4720 |
| 0.0011 | 7.86 | 1036 | 0.4727 |
| 0.0009 | 7.97 | 1050 | 0.4731 |
| 0.0011 | 8.08 | 1064 | 0.4735 |
| 0.001 | 8.18 | 1078 | 0.4739 |
| 0.001 | 8.29 | 1092 | 0.4741 |
| 0.001 | 8.39 | 1106 | 0.4746 |
| 0.0011 | 8.5 | 1120 | 0.4744 |
| 0.0012 | 8.61 | 1134 | 0.4751 |
| 0.0011 | 8.71 | 1148 | 0.4748 |
| 0.001 | 8.82 | 1162 | 0.4747 |
| 0.0009 | 8.93 | 1176 | 0.4754 |
| 0.0011 | 9.03 | 1190 | 0.4752 |
| 0.0013 | 9.14 | 1204 | 0.4751 |
| 0.0009 | 9.24 | 1218 | 0.4749 |
| 0.001 | 9.35 | 1232 | 0.4750 |
| 0.0017 | 9.46 | 1246 | 0.4750 |
| 0.0012 | 9.56 | 1260 | 0.4749 |
| 0.0008 | 9.67 | 1274 | 0.4747 |
| 0.0008 | 9.78 | 1288 | 0.4749 |
| 0.0011 | 9.88 | 1302 | 0.4754 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e65d771e02f48165d23eb11947936bb8a8670081c8713acfe9d65fabc5ccf293
size 251901130