KaraKaraWitch committed on
Commit 43ea476 · verified · 1 Parent(s): 33049a7

End of training

Files changed (1):
  1. README.md (+184 −0)

README.md (added)

---
library_name: peft
base_model: KaraKaraWitch/CavesOfQwen3-8b
tags:
- axolotl
- base_model:adapter:KaraKaraWitch/CavesOfQwen3-8b
- lora
- transformers
datasets:
- train.jsonl
pipeline_tag: text-generation
model-index:
- name: crossing-field-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.13.0.dev0`
```yaml
base_model: KaraKaraWitch/CavesOfQwen3-8b
hub_model_id: KaraKaraWitch/crossing-field-4

load_in_8bit: true
load_in_4bit: false

chat_template: qwen3
datasets:
  - path: train.jsonl
    type: chat_template
    field_messages: conversation
    train_on_eos: all
    message_property_mappings:
      role: from
      content: content
    roles:
      assistant:
        - gpt
        - model
        - assistant
      user:
        - human
        - user
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: lora-out

adapter: lora
lora_model_dir:

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true

lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: azure-edge
wandb_entity:
wandb_watch:
wandb_name: crossing-field-4
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 6
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

bf16: auto
tf32: false

gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_steps: 50
evals_per_epoch: 1
saves_per_epoch: 4
weight_decay: 0.0
special_tokens:
  eos_token: <|im_end|>

# save_first_step: true # uncomment this to validate checkpoint saving works with your config
```

</details><br>

# crossing-field-4

This model is a fine-tuned version of [KaraKaraWitch/CavesOfQwen3-8b](https://huggingface.co/KaraKaraWitch/CavesOfQwen3-8b) on the train.jsonl dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3543
- Memory/max mem active: 20.87 GiB
- Memory/max mem allocated: 20.87 GiB
- Memory/device mem reserved: 21.53 GiB

## Model description

crossing-field-4 is a LoRA adapter (rank 64, alpha 32, dropout 0.05, targeting the q/k/v/o attention projections and the gate/up/down MLP projections) trained with Axolotl on top of [KaraKaraWitch/CavesOfQwen3-8b](https://huggingface.co/KaraKaraWitch/CavesOfQwen3-8b). Training used the qwen3 chat template and a sequence length of 8192 with sample packing, with the base model loaded in 8-bit.

## Intended uses & limitations

The adapter is intended for chat-style text generation with the qwen3 chat template; the training configuration sets `<|im_end|>` as the EOS token and trains on all EOS positions. Beyond what can be inferred from the configuration above, intended uses and limitations have not been documented. A minimal loading sketch follows.

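The snippet below is a minimal inference sketch, not part of the original card. It assumes the adapter is published under the `hub_model_id` from the config (`KaraKaraWitch/crossing-field-4`), that the base model's tokenizer ships the qwen3 chat template, and that `accelerate` is installed for `device_map="auto"`.

```python
# Minimal sketch: load the base model, attach the LoRA adapter, and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "KaraKaraWitch/CavesOfQwen3-8b"
adapter_id = "KaraKaraWitch/crossing-field-4"  # assumed from hub_model_id in the config

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
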
## Training and evaluation data

The adapter was trained on `train.jsonl`, a conversational dataset formatted for Axolotl's `chat_template` loader: each record carries a `conversation` list whose turns use `from`/`content` keys, with `human`/`user` mapped to the user role and `gpt`/`model`/`assistant` mapped to the assistant role. 5% of the data (`val_set_size: 0.05`) was held out as the evaluation set. The dataset itself is not published with this card; a hypothetical record matching the configured field mappings is sketched below.

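The exact contents of `train.jsonl` are not included here; this snippet only illustrates the record shape implied by `field_messages: conversation` and the `role: from` / `content: content` mappings in the config. The conversation text is invented purely for illustration.

```python
# Hypothetical example of one train.jsonl record matching the config's mappings.
import json

record = {
    "conversation": [
        {"from": "human", "content": "Write a short haiku about caves."},
        {"from": "gpt", "content": "Stone throat swallows light,\ndrip by drip the dark is carved,\nechoes learn to sing."},
    ]
}

# Append the record as one JSON line.
with open("train.jsonl", "a", encoding="utf-8") as fp:
    fp.write(json.dumps(record, ensure_ascii=False) + "\n")
```
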
## Training procedure

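Training was driven by Axolotl using the YAML shown at the top of this card. As a rough pointer for reproduction (not part of the original card): recent Axolotl releases expose an `axolotl train <config>` CLI entry point, so a run could be launched roughly as below once the YAML is saved to a file; the exact invocation and multi-GPU launch flags for 0.13.0.dev0 should be checked against the Axolotl documentation.

```python
# Hedged reproduction sketch: hands a saved copy of the YAML config to the
# Axolotl CLI. Assumes `axolotl` is installed and on PATH, and that the config
# above has been saved as crossing-field-4.yaml (hypothetical file name).
import subprocess

subprocess.run(["axolotl", "train", "crossing-field-4.yaml"], check=True)
```
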
### Training hyperparameters

The following hyperparameters were used during training (a quick consistency check of the derived values follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 4212

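A small sanity check (not from the original card) that the derived values above are consistent with the per-device settings and with the step counts in the results table:

```python
# Effective batch size and steps per epoch implied by the hyperparameters above.
micro_batch_size = 2        # train_batch_size per device
gradient_accumulation = 4
num_devices = 2
num_epochs = 6
training_steps = 4212

total_train_batch_size = micro_batch_size * gradient_accumulation * num_devices
steps_per_epoch = training_steps // num_epochs

print(total_train_batch_size)  # 16, matching total_train_batch_size above
print(steps_per_epoch)         # 702, matching the per-epoch steps in the results table
```
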
### Training results

| Training Loss | Epoch | Step | Validation Loss | Mem Active (GiB) | Mem Allocated (GiB) | Mem Reserved (GiB) |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------------:|:------------------:|
| No log        | 0     | 0    | 1.7247          | 18.3             | 12.95               | 18.5               |
| 1.5809        | 1.0   | 702  | 1.4699          | 20.87            | 20.87               | 21.43              |
| 1.4682        | 2.0   | 1404 | 1.4264          | 20.87            | 20.87               | 21.53              |
| 1.3153        | 3.0   | 2106 | 1.3886          | 20.87            | 20.87               | 21.53              |
| 1.2031        | 4.0   | 2808 | 1.3615          | 20.87            | 20.87               | 21.53              |
| 1.1377        | 5.0   | 3510 | 1.3515          | 20.87            | 20.87               | 21.53              |
| 1.1198        | 6.0   | 4212 | 1.3543          | 20.87            | 20.87               | 21.53              |

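Validation loss bottoms out at epoch 5 (1.3515) and ticks up slightly at epoch 6 (1.3543), so the epoch-5 checkpoint may be marginally preferable to the final adapter. A hedged sketch for loading it locally, assuming the usual Hugging Face Trainer `checkpoint-<step>` directory layout under `output_dir: lora-out` (the actual directory name on disk should be checked):

```python
# Hypothetical: load the epoch-5 adapter checkpoint from the local training
# output directory instead of the final (epoch-6) adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("KaraKaraWitch/CavesOfQwen3-8b", device_map="auto")
model = PeftModel.from_pretrained(base, "lora-out/checkpoint-3510")  # assumed checkpoint path
```
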
### Framework versions

- PEFT 0.17.0
- Transformers 4.55.2
- PyTorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4