
Developed by chPark

Training Strategy

This model was fine-tuned from yanolja/KoSOLAR-10.7B-v0.1. We applied DPO (Direct Preference Optimization) to the SFT checkpoint realPCH/kosolra-kullm.
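
For reference, below is a minimal sketch of what a DPO run with the trl library can look like. This is not the authors' actual training script: the preference dataset id is a placeholder, and the exact DPOTrainer arguments vary between trl versions.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the SFT checkpoint named on this card.
base_id = "realPCH/kosolra-kullm"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# DPO expects a preference dataset with "prompt", "chosen", and "rejected" columns.
# "your/preference-dataset" is a placeholder, not the dataset used for this model.
train_dataset = load_dataset("your/preference-dataset", split="train")

args = DPOConfig(output_dir="kosolra-dpo", beta=0.1)  # beta is the standard DPO temperature
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer)  # tokenizer= in older trl versions
trainer.train()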

Run the model

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub.
model_id = "realPCH/kosolra_SFT_DPO_v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Instructions are wrapped in [INST] ... [/INST] tags.
text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt")

# Generate a short completion; raise max_new_tokens for longer outputs.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
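
For a model of this size (~10.9B parameters), full-precision CPU inference is slow. A commonly used variant, assuming a CUDA GPU with sufficient memory and the accelerate package installed (not prescribed by this card), is:

import torch
from transformers import AutoModelForCausalLM

# Load in half precision and let accelerate place the weights on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))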
Model details

Format: Safetensors
Model size: 10.9B params
Tensor type: F32
