How to use
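The snippet below is a minimal chat-style inference example using the `transformers` library; the `model_id` value is a placeholder standing in for this repository's id on the Hugging Face Hub.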
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id: substitute this repository's id on the Hub.
model_id = "<model-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = '''너는 누구야?'''  # "Who are you?"
messages = [
    # System prompt: "You are a Korean AI model. You must answer using your abilities to the fullest."
    {"role": "system", "content": "당신은 한국어 ai 모델입니다. 당신의 능력을 최대한 사용하여 답변해야 합니다."},
    {"role": "user", "content": prompt},
]

# Render the conversation into a single prompt string via the chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# Tokenize and move the inputs onto the model's device.
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    attention_mask=model_inputs.attention_mask,
    max_new_tokens=2048,
)
# Keep only the newly generated tokens, dropping the echoed prompt.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
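After decoding, `response` holds only the model's reply, for example:

```python
print(response)
```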