---
license: apache-2.0
datasets:
- abacusai/SystemChat-1.1
language:
- en
library_name: transformers
tags:
- llama-factory
- unsloth
---
# h2o-danube2 with ChatML template

This is a danube2 base model fine-tuned first with BAdam (full fine-tune) and then with QLoRA+. It uses the ChatML template and was trained on the SystemChat-1.1 dataset from Abacus.AI.
## Quants

Thank you mradermacher!
## Template

```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
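Below is a minimal inference sketch, assuming the tokenizer for this repository ships the ChatML chat template shown above (if it does not, the prompt can be built manually in the same format). The repo id is a placeholder, not the actual model name.

```python
# Minimal inference sketch; the repo id below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/danube2-systemchat-chatml"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a system prompt is."},
]

# apply_chat_template renders the messages with the ChatML markers
# (<|im_start|> ... <|im_end|>) and appends the assistant header.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```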
## BAdam

```yaml
### model
model_name_or_path: danube2-base-chatml
### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: descending
badam_switch_interval: 50
badam_start_block: 22
badam_mask_mode: scatter
badam_verbose: 1
seed: 314
### dataset
dataset: systemchat11
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12
### output
output_dir: systemchat11-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false
### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.00002
num_train_epochs: 3
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2
### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
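For context on what the `badam_*` options above control: BAdam updates one transformer block at a time, switching the active block every `badam_switch_interval` optimizer steps; with `badam_switch_mode: descending` it walks downward through the blocks, here starting around block 22 (`badam_start_block`). The toy PyTorch sketch below only illustrates that switching pattern under those assumptions; it is not LLaMA-Factory's or the BAdam paper's implementation.

```python
# Toy illustration of block-wise switching (not the real BAdam implementation):
# only one block is trainable at a time, and the active block changes every
# `switch_interval` steps, walking downward from `start_block`.
import torch
from torch import nn

class TinyBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.ff = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.ff(x))

blocks = nn.ModuleList([TinyBlock() for _ in range(24)])  # stand-in for 24 layers
switch_interval = 50   # mirrors badam_switch_interval
start_block = 22       # mirrors badam_start_block (assumed starting index)
active = start_block

def set_active_block(idx):
    # Freeze everything, then unfreeze only the active block.
    for i, blk in enumerate(blocks):
        for p in blk.parameters():
            p.requires_grad = (i == idx)

set_active_block(active)
optimizer = torch.optim.AdamW(blocks.parameters(), lr=2e-5)

for step in range(1, 201):
    x = torch.randn(8, 64)
    out = x
    for blk in blocks:
        out = blk(out)
    loss = out.pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)

    # "descending" switch mode: move to the next lower block every interval.
    if step % switch_interval == 0:
        active = (active - 1) % len(blocks)
        set_active_block(active)
```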
### BAdam training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 1.0062        | 0.8324 | 1000 | 0.9837          |
| 0.8484        | 1.6648 | 2000 | 0.9388          |
| 0.7834        | 2.4971 | 3000 | 0.9309          |
## QLoRA+

```yaml
### model
model_name_or_path: systemchat11-chatml-badam
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
loraplus_lr_ratio: 16.0
lora_rank: 8
lora_alpha: 16
use_unsloth: true
quantization_bit: 4
upcast_layernorm: true
seed: 31415
### dataset
dataset: systemchat11
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12
### output
output_dir: systemchat11-chatml-badam/loraplus
logging_steps: 1
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false
### train
per_device_train_batch_size: 4
gradient_accumulation_steps: 4
learning_rate: 0.0001
num_train_epochs: 2.0
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2
### eval
val_size: 0.02
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
```
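This second stage trains a LoRA+ adapter on top of the BAdam checkpoint loaded in 4-bit. A sketch of attaching (and optionally merging) that adapter with PEFT is shown below; the directory names are taken from the `output_dir` values above and the merged output path is hypothetical, so adjust them to your local layout.

```python
# Sketch of attaching / merging the LoRA+ adapter onto the BAdam-tuned base.
# Paths are assumptions based on the output_dir values in the configs above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_dir = "systemchat11-chatml-badam"              # BAdam full fine-tune
adapter_dir = "systemchat11-chatml-badam/loraplus"  # LoRA+ adapter output_dir

tokenizer = AutoTokenizer.from_pretrained(base_dir)
base = AutoModelForCausalLM.from_pretrained(base_dir, torch_dtype="auto", device_map="auto")

# Attach the adapter for inference...
model = PeftModel.from_pretrained(base, adapter_dir)

# ...or fold the LoRA weights into the base to get a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("danube2-systemchat-chatml-merged")      # hypothetical path
tokenizer.save_pretrained("danube2-systemchat-chatml-merged")
```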
### QLoRA+ training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 0.8591        | 0.4204 | 500  | 0.8457          |
| 0.9098        | 0.8409 | 1000 | 0.8251          |
| 0.7350        | 1.2613 | 1500 | 0.8304          |
| 0.6811        | 1.6817 | 2000 | 0.8252          |