# h2o-danube2 with ChatML template

This model was first fine-tuned with BAdam on cgato/SlimOrcaDedupCleaned using LLaMA-Factory.

## Template

```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
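
Filling in the placeholders by hand makes the format concrete. Below is a minimal sketch in Python; `render_chatml` is a hypothetical helper, not part of this repo, and the newline placement follows the listing above.

```python
# Minimal sketch of one fully rendered training example. The function name
# render_chatml is illustrative only; newlines follow the template listing.
def render_chatml(system: str, instruction: str, response: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        f"<|im_start|>assistant\n{response}<|im_end|>"
    )

print(render_chatml(
    "You are a helpful assistant.",
    "What does ChatML look like?",
    "Each turn is wrapped in <|im_start|>role ... <|im_end|> tags.",
))
```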

## BAdam config

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 13
seed: 314

### dataset
dataset: slimorca_dedup_cleaned
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: slim-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 0.000005
num_train_epochs: 1
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 2000
```
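
With LLaMA-Factory installed, a config like this is normally launched with `llamafactory-cli train <config>.yaml` (the filename is up to you). The `badam_*` keys select block-coordinate optimization: only one transformer block receives optimizer updates at a time, starting at block 13 and advancing to the next block ("ascending") every 50 steps. The sketch below illustrates that schedule under stated assumptions; it is not LLaMA-Factory's implementation, and `num_blocks=24` reflects h2o-danube2-1.8b's 24 hidden layers, with wrap-around behavior assumed.

```python
# Illustration of the ascending block-switch schedule implied by the config
# (badam_start_block: 13, badam_switch_interval: 50). Assumptions: the model
# has 24 blocks, and the schedule wraps around after the last block.
def active_block(step: int, start_block: int = 13,
                 switch_interval: int = 50, num_blocks: int = 24) -> int:
    """Index of the block receiving updates at a given optimizer step."""
    return (start_block + step // switch_interval) % num_blocks

assert active_block(0) == 13     # first 50 steps train block 13
assert active_block(50) == 14    # then block 14, and so on
assert active_block(550) == 0    # assumed wrap-around past block 23
```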

## BAdam training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.8535        | 0.0889 | 2000  | 0.8340          |
| 0.8735        | 0.1778 | 4000  | 0.8128          |
| 0.8054        | 0.2668 | 6000  | 0.8008          |
| 0.7907        | 0.3557 | 8000  | 0.8002          |
| 0.8749        | 0.4446 | 10000 | 0.7972          |
| 0.7463        | 0.5335 | 12000 | 0.7899          |
| 0.7762        | 0.6225 | 14000 | 0.7870          |
| 0.8231        | 0.7114 | 16000 | 0.7854          |
| 0.8686        | 0.8003 | 18000 | 0.7801          |
| 0.9159        | 0.8892 | 20000 | 0.7877          |
| 0.8281        | 0.9782 | 22000 | 0.7786          |
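
For completeness, here is a minimal generation sketch with `transformers`. It assumes the repo's tokenizer bundles the ChatML template shown above (check `tokenizer_config.json` to confirm); the generation settings are illustrative, not the author's recommendations.

```python
# Minimal sketch; assumes the tokenizer ships the ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "trollek/danube2-1.8b-SlimOrcaDedup"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarise what BAdam does in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```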