
open-llama-2-ko based model fine-tuned with a modified DPO dataset

This is a Korean model based on

  • [beomi/open-llama-2-ko-7b]

The dataset is a modified version of

  • [SJ-Donald/orca-dpo-pairs-ko]
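
A minimal inference sketch is shown below. It assumes the model is published on the Hugging Face Hub and loaded with transformers; the repository ID used here is a placeholder, not this model's actual name.

```python
# Minimal inference sketch. "your-username/open-llama-2-ko-7b-dpo" is a
# hypothetical placeholder repository ID, not the actual model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/open-llama-2-ko-7b-dpo"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example Korean prompt: "Where is the capital of Korea?"
prompt = "한국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```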

Parameters

learning_rate: float = 3e-4
lr_scheduler: str = "cosine"
warmup_ratio: float = 0.1
lora_r: int = 16
lora_alpha: int = 16
lora_dropout: float = 0.05
optim: str = "paged_adamw_32bit"
bf16: bool = True
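
The sketch below shows one way these hyperparameters could be wired together with PEFT LoRA and TRL's DPOTrainer. It is an assumption about the training setup, not the author's exact script; the output directory is hypothetical, the actual training data was a modified version of the listed dataset, and the DPOTrainer keyword arguments vary between trl versions.

```python
# Hedged training sketch: PEFT LoRA + TRL DPOTrainer with the parameters above.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "beomi/open-llama-2-ko-7b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA settings from the Parameters section.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Training settings from the Parameters section.
# (Older trl versions used transformers.TrainingArguments here instead of DPOConfig.)
training_args = DPOConfig(
    output_dir="open-llama-2-ko-dpo",  # hypothetical output directory
    learning_rate=3e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="paged_adamw_32bit",
    bf16=True,
)

# The card states the actual training data was modified from this dataset;
# the unmodified dataset is loaded here only as an illustration.
train_dataset = load_dataset("SJ-Donald/orca-dpo-pairs-ko", split="train")

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older trl versions use tokenizer=tokenizer
    peft_config=peft_config,
)
trainer.train()
```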