## Prompt Template

It follows the Alpaca format; the section markers are Korean (질문 = "question", 답변 = "answer").
```
### 질문: {instruction}
### 답변: {output}
```
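Filled in with the sample question used below ("안녕하세요?", i.e. "Hello?"), the template produces the exact single-line string that the `gen()` helper in the next section passes to the tokenizer:

```
### 질문: 안녕하세요? ### 답변: 
```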
## Implementation Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Ja3ck/Mistral-instruct-IPO-Y24-v1",
    return_dict=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Ja3ck/Mistral-instruct-IPO-Y24-v1", use_fast=True)

# Mistral has no dedicated pad token; reuse the unk token and pad on the left
# so generation continues directly from the end of the prompt.
tokenizer.pad_token = tokenizer.unk_token
tokenizer.pad_token_id = tokenizer.unk_token_id
tokenizer.padding_side = "left"

def gen(x):
    # Wrap the raw question in the Alpaca-style prompt template.
    x_ = f"### 질문: {x.strip()} ### 답변: "
    inputs = tokenizer(x_, return_tensors='pt')
    input_ids = inputs['input_ids'].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        pad_token_id=tokenizer.pad_token_id,
        temperature=0.1,
        top_p=1,
        top_k=50,
        num_beams=1,
        repetition_penalty=1.13,
        do_sample=True,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=1024,
    )
    for seq in generation_output.sequences:
        output = tokenizer.decode(seq)
        # Keep only the text after the answer marker.
        print(output.split("### 답변: ")[1].strip())

gen("안녕하세요?")
```
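Because the tokenizer is configured for left padding above, the same template extends naturally to batched inference. The sketch below is our own illustration, not part of the original card; the helper name `batch_gen` and the second sample question are assumptions:

```python
def batch_gen(questions):
    # Render every question with the same Alpaca-style template.
    prompts = [f"### 질문: {q.strip()} ### 답변: " for q in questions]
    # Left padding (set above) keeps each prompt flush against the
    # generated tokens, so all sequences continue from their prompt ends.
    inputs = tokenizer(prompts, return_tensors='pt', padding=True).to(model.device)
    output_ids = model.generate(
        **inputs,
        pad_token_id=tokenizer.pad_token_id,
        do_sample=True,
        temperature=0.1,
        repetition_penalty=1.13,
        max_new_tokens=1024,
    )
    for seq in output_ids:
        text = tokenizer.decode(seq, skip_special_tokens=True)
        print(text.split("### 답변: ")[1].strip())

# Hypothetical batch; the second question ("How is the weather today?") is ours.
batch_gen(["안녕하세요?", "오늘 날씨 어때요?"])
```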