Model Generation

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer; device_map="auto" spreads weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24-DPO_v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24-DPO_v0.1", use_fast=True)

text = "μ•ˆλ…•ν•˜μ„Έμš”."  # Korean for "Hello." -- a sample prompt for this Korean-tuned model
outputs = model.generate(
  **tokenizer(
    f"### Instruction: {text}\n\n### output:",  # instruction-style prompt template
    return_tensors='pt'
  ).to(model.device),  # move inputs to the same device as the model
  max_new_tokens=256,
  temperature=0.2,
  top_p=1,
  do_sample=True
)
print(tokenizer.decode(outputs[0]))
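
Note that decoding `outputs[0]` directly echoes the prompt along with the completion. A minimal sketch (reusing the `model`, `tokenizer`, and `text` objects from above; the slicing approach is an assumption, not part of the original card) that prints only the newly generated text:

# Tokenize once so we know the prompt length in tokens.
inputs = tokenizer(
  f"### Instruction: {text}\n\n### output:",
  return_tensors='pt'
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.2, top_p=1, do_sample=True)
prompt_len = inputs["input_ids"].shape[1]  # number of prompt tokens
completion = tokenizer.decode(
  outputs[0][prompt_len:],       # slice off the echoed prompt
  skip_special_tokens=True       # drop special tokens such as <s> and </s>
)
print(completion)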