
Prompt Template

It follows the Alpaca prompt format, with Korean markers in place of the usual English headers: ### ์งˆ๋ฌธ: ("Question") carries the instruction and ### ๋‹ต๋ณ€: ("Answer") prefixes the model's response.

### ์งˆ๋ฌธ: {instruction}
### ๋‹ต๋ณ€: {output}
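
A minimal sketch (not part of the original card) of rendering this template; build_prompt is a hypothetical helper, and the two Korean markers must appear verbatim because they are the literal strings the model was tuned on:

def build_prompt(instruction: str) -> str:
  # Hypothetical helper: wraps a raw instruction in the Alpaca-style template.
  return f"### ์งˆ๋ฌธ: {instruction.strip()} ### ๋‹ต๋ณ€: "

print(build_prompt("์•ˆ๋…•ํ•˜์„ธ์š”?"))  # -> "### ์งˆ๋ฌธ: ์•ˆ๋…•ํ•˜์„ธ์š”? ### ๋‹ต๋ณ€: "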

Implementation Code

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Ja3ck/Mistral-instruct-IPO-Y24-v1", return_dict=True, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Ja3ck/Mistral-instruct-IPO-Y24-v1", use_fast=True)
# The tokenizer has no dedicated pad token, so reuse UNK and left-pad
tokenizer.pad_token = tokenizer.unk_token
tokenizer.pad_token_id = tokenizer.unk_token_id
tokenizer.padding_side = "left"

def gen(x):
  # Wrap the raw instruction in the Alpaca-style template above
  x_ = f"### ์งˆ๋ฌธ: {x.strip()} ### ๋‹ต๋ณ€: "
  inputs = tokenizer(x_, return_tensors='pt')
  input_ids = inputs['input_ids'].cuda()
  generation_output = model.generate(
      input_ids=input_ids,
      pad_token_id=tokenizer.pad_token_id,
      temperature=0.1,
      top_p=1,
      top_k=50,
      num_beams=1,
      repetition_penalty=1.13,
      do_sample=True,
      return_dict_in_generate=True,
      output_scores=True,
      max_new_tokens=1024
  )
  for seq in generation_output.sequences:
    output = tokenizer.decode(seq)
    # Keep only the text after the answer marker
    print(output.split("### ๋‹ต๋ณ€: ")[1].strip())

gen("์•ˆ๋…•ํ•˜์„ธ์š”?")