---
license: apache-2.0
datasets:
- abacusai/SystemChat-1.1
language:
- en
library_name: transformers
tags:
- llama-factory
- unsloth
---
# h2o-danube2 with ChatML template

This is the danube2 base model fine-tuned with [BAdam](https://arxiv.org/abs/2404.02827 "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models") and [LoRA+](https://arxiv.org/abs/2402.12354 "LoRA+: Efficient Low Rank Adaptation of Large Models"). It uses the ChatML template and was trained on the [SystemChat-1.1](https://huggingface.co/datasets/abacusai/SystemChat-1.1) dataset from [Abacus.AI](https://huggingface.co/abacusai).

## Quants

Thank you [mradermacher](https://huggingface.co/mradermacher)!

- [mradermacher/danube2-1.8b-SystemChat-1.1-GGUF](https://huggingface.co/mradermacher/danube2-1.8b-SystemChat-1.1-GGUF)

## Template

```jinja
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
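For reference, here is a minimal prompt-building sketch with `transformers`, assuming the tokenizer ships with this ChatML template as its chat template; `path/to/model` is a placeholder, not a published repo id.

```python
# Minimal sketch: build a ChatML prompt via the tokenizer's chat template.
# "path/to/model" is a placeholder; substitute the actual checkpoint path.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/model")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What does a system prompt do?"},
]

# add_generation_prompt=True appends the opening <|im_start|>assistant tag.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```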

## BAdam

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: descending
badam_switch_interval: 50
badam_start_block: 22
badam_mask_mode: scatter
badam_verbose: 1
seed: 314

### dataset
dataset: systemchat11
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: systemchat11-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.00002
num_train_epochs: 3
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
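The config above is LLaMA-Factory YAML. As a rough illustration of what the `badam_*` options mean, here is a toy block-coordinate-descent loop in PyTorch: only one transformer block is trainable at a time, starting from `badam_start_block` and switching in `descending` order every `badam_switch_interval` optimizer steps. This is a conceptual sketch under those assumptions, not LLaMA-Factory's implementation.

```python
# Toy sketch of BAdam-style block coordinate descent (illustrative only).
import torch
from torch import nn

blocks = nn.ModuleList([nn.Linear(8, 8) for _ in range(24)])  # stand-in for transformer blocks
switch_interval = 50   # badam_switch_interval
active = 22            # badam_start_block

def set_active_block(active_idx):
    # Only the active block keeps gradients; everything else stays frozen.
    for i, block in enumerate(blocks):
        for p in block.parameters():
            p.requires_grad_(i == active_idx)

def make_optimizer():
    # Optimizer state exists only for the active block, which is where the memory saving comes from.
    return torch.optim.AdamW(
        (p for p in blocks.parameters() if p.requires_grad), lr=2e-5
    )

set_active_block(active)
optimizer = make_optimizer()

for step in range(1, 151):
    x = torch.randn(4, 8)
    for block in blocks:          # full forward pass through every block
        x = block(x)
    loss = x.pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step % switch_interval == 0:
        active = (active - 1) % len(blocks)  # "descending" switch order
        set_active_block(active)
        optimizer = make_optimizer()
```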

### BAdam Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0062        | 0.8324 | 1000 | 0.9837          |
| 0.8484        | 1.6648 | 2000 | 0.9388          |
| 0.7834        | 2.4971 | 3000 | 0.9309          |


## QLoRA+

```yaml
### model
model_name_or_path: systemchat11-chatml-badam

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
loraplus_lr_ratio: 16.0
lora_rank: 8
lora_alpha: 16
use_unsloth: true
quantization_bit: 4
upcast_layernorm: true
seed: 31415

### dataset
dataset: systemchat11
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: systemchat11-chatml-badam/loraplus
logging_steps: 1
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 4
gradient_accumulation_steps: 4
learning_rate: 0.0001
num_train_epochs: 2.0
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.02
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
```
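`loraplus_lr_ratio: 16.0` is the LoRA+ trick: the LoRA `B` matrices are trained with a learning rate 16x higher than the `A` matrices. Below is a minimal sketch of how such parameter groups could be built, assuming PEFT's usual `lora_A`/`lora_B` parameter naming; it is illustrative, not LLaMA-Factory's code.

```python
# Sketch: split trainable LoRA parameters into two AdamW groups so that
# lora_B gets loraplus_lr_ratio times the base learning rate (LoRA+).
import torch

def loraplus_param_groups(model, lr=1e-4, lr_ratio=16.0):
    a_params, b_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        (b_params if "lora_B" in name else a_params).append(param)
    return [
        {"params": a_params, "lr": lr},             # lora_A and any other trainables
        {"params": b_params, "lr": lr * lr_ratio},  # lora_B at the boosted rate
    ]

# Usage (hypothetical PEFT-wrapped model):
# optimizer = torch.optim.AdamW(loraplus_param_groups(peft_model, lr=1e-4, lr_ratio=16.0))
```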

### QLoRA+ Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8591        | 0.4204 | 500  | 0.8457          |
| 0.9098        | 0.8409 | 1000 | 0.8251          |
| 0.735         | 1.2613 | 1500 | 0.8304          |
| 0.6811        | 1.6817 | 2000 | 0.8252          |