
# h2o-danube2 with ChatML template

This model was first fine-tuned with BAdam, a block coordinate descent variant of Adam that updates one block of layers at a time to make full-parameter fine-tuning memory efficient, on migtissera/Tess-v1.5 using LLaMA-Factory.

## Quants

Thanks to mradermacher for providing quantized versions of this model!

## Template

```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
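
As a minimal usage sketch, the snippet below formats a conversation with this template and generates a reply. It assumes the tokenizer ships with the ChatML chat template above; the prompts are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trollek/danube2-1.8b-Tess-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of Denmark?"},
]

# apply_chat_template renders the ChatML format shown above and, with
# add_generation_prompt=True, appends the "<|im_start|>assistant" header
# so the model continues as the assistant.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0, inputs.shape[-1]:], skip_special_tokens=True))
```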

## BAdam config

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 6
seed: 720

### dataset
dataset: tess15
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: tess15-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 0.00001
num_train_epochs: 1
lr_scheduler_type: constant_with_warmup
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
## BAdam training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.8017        | 0.0643 | 1000  | 0.6820          |
| 0.6167        | 0.1287 | 2000  | 0.6610          |
| 0.6161        | 0.1930 | 3000  | 0.6496          |
| 0.6322        | 0.2574 | 4000  | 0.6423          |
| 0.5127        | 0.3217 | 5000  | 0.6366          |
| 0.61          | 0.3860 | 6000  | 0.6312          |
| 0.6758        | 0.4504 | 7000  | 0.6266          |
| 0.5901        | 0.5147 | 8000  | 0.6215          |
| 0.5163        | 0.5791 | 9000  | 0.6197          |
| 0.6043        | 0.6434 | 10000 | 0.6175          |
| 0.5056        | 0.7077 | 11000 | 0.6153          |
| 0.5772        | 0.7721 | 12000 | 0.6126          |
| 0.6692        | 0.8364 | 13000 | 0.6107          |
| 0.5262        | 0.9008 | 14000 | 0.6066          |
| 0.6386        | 0.9651 | 15000 | 0.6056          |