
Training hyperparameters:

```python
max_steps = 1000
learning_rate = 5e-7
label_smoothing = 0.2  # somewhere between 0 and 0.5
warmup_ratio = 0.1
dpo_beta = 0.01
use_rslora = False
use_loftq = False
lora_rank = 16
lora_alpha = 16
lora_dropout = 0.05
load_separate_reference_model = False
max_seq_length = 2048
eval_steps = 200
train_split = 0.008
```
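As a sketch of what these settings imply, the snippet below collects them into a plain dict and computes two derived values: the warmup step count (`warmup_ratio * max_steps`) and the standard LoRA scaling factor (`lora_alpha / lora_rank`). The dict and variable names are illustrative; the mapping onto an actual trainer (e.g. `trl`'s `DPOTrainer` with a `peft` `LoraConfig`) is assumed, not shown.

```python
# Illustrative collection of the card's hyperparameters (names assumed).
config = {
    "max_steps": 1000,
    "learning_rate": 5e-7,
    "label_smoothing": 0.2,
    "warmup_ratio": 0.1,
    "dpo_beta": 0.01,          # DPO temperature; 0.01 keeps the policy close to the reference
    "use_rslora": False,
    "use_loftq": False,
    "lora_rank": 16,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "load_separate_reference_model": False,
    "max_seq_length": 2048,
    "eval_steps": 200,
    "train_split": 0.008,
}

# Derived values implied by the settings above:
# - linear-warmup step count over the first 10% of training
warmup_steps = round(config["warmup_ratio"] * config["max_steps"])
# - with use_rslora=False, LoRA updates are scaled by alpha / rank
lora_scaling = config["lora_alpha"] / config["lora_rank"]

print(warmup_steps)   # 100
print(lora_scaling)   # 1.0
```

With `alpha == rank`, the LoRA scaling factor is 1.0, i.e. adapter updates are applied at their raw magnitude.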

Model size: 7.24B params
Tensor type: BF16
Format: Safetensors

Finetuned from

Dataset used to train andysalerno/openchat-nectar-0.19