
Model Card for gamzadole/llama3_instruct_preview_alpaca_finetuning

Base model: beomi/Llama-3-Open-Ko-8B-Instruct-preview

Dataset: Bingsu/ko_alpaca_data
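
To inspect the training data, the dataset can be loaded with the Hugging Face datasets library. This is a minimal sketch; the split name and field names are assumptions based on the standard Alpaca layout, not stated in the original card:

from datasets import load_dataset

# Korean Alpaca-style instruction dataset used for finetuning
dataset = load_dataset("Bingsu/ko_alpaca_data", split="train")
print(dataset[0])  # rows are expected to contain "instruction", "input", and "output" fields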

Model inference

Prompt template

# Korean Alpaca-style prompt. In English: "Below is an instruction (question) and an
# input providing additional context. Please generate an appropriate response."
alpaca_prompt = """์•„๋ž˜๋Š” ์งˆ๋ฌธ instruction ๊ณผ ์ถ”๊ฐ€์ •๋ณด๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” input ์ž…๋‹ˆ๋‹ค. ์ ์ ˆํ•œ response๋ฅผ ์ƒ์„ฑํ•ด์ฃผ์„ธ์š”.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{response}"""

Inference code
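
The function below assumes tokenizer and model are already defined. Here is a minimal loading sketch, assuming this repository is a PEFT adapter on top of the base model; the dtype and device settings are illustrative choices, not taken from the original card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "beomi/Llama-3-Open-Ko-8B-Instruct-preview"
adapter_id = "gamzadole/llama3_instruct_preview_alpaca_finetuning"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the finetuned adapter weights
model.eval()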


def generate_response(prompt, model):
    # Fill the Alpaca template with the user's instruction (no additional input)
    prompt = alpaca_prompt.format(instruction=prompt, input="", response="")
    # System prompt (Korean): "As a friendly chatbot, answer the request as thoroughly
    # and kindly as possible. Always respond in Korean."
    messages = [
        {"role": "system", "content": "์นœ์ ˆํ•œ ์ฑ—๋ด‡์œผ๋กœ์„œ ์ƒ๋Œ€๋ฐฉ์˜ ์š”์ฒญ์— ์ตœ๋Œ€ํ•œ ์ž์„ธํ•˜๊ณ  ์นœ์ ˆํ•˜๊ฒŒ ๋‹ตํ•˜์ž. ๋ชจ๋“  ๋Œ€๋‹ต์€ ํ•œ๊ตญ์–ด(Korean)์œผ๋กœ ๋Œ€๋‹ตํ•ด์ค˜."},
        {"role": "user", "content": prompt},
    ]

    # Build Llama-3 chat-formatted input ids from the messages
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to(model.device)

    # Stop generation on either the EOS token or Llama-3's end-of-turn token
    terminators = [
        tokenizer.eos_token_id,
        tokenizer.convert_tokens_to_ids("<|eot_id|>")
    ]

    outputs = model.generate(
        input_ids,
        max_new_tokens=512,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
    )

    # Decode only the newly generated tokens, excluding the prompt
    response = outputs[0][input_ids.shape[-1]:]
    return tokenizer.decode(response, skip_special_tokens=True)


# Example query (Korean): "If interest rates rise, what happens to prices?"
instruction = "๊ธˆ๋ฆฌ๊ฐ€ ์˜ค๋ฅด๋ฉด ๋ฌผ๊ฐ€๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ผ?"

generate_response(instruction, model)
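
Note that with do_sample=True, temperature=0.1, and top_p=0.9, decoding is close to greedy, so responses should vary only slightly between runs; raise the temperature if you want more diverse outputs.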

This model is published as an adapter of beomi/Llama-3-Open-Ko-8B-Instruct-preview.