---
license: apache-2.0
library_name: peft
base_model: ssong1/kgpt-j-5.8b
datasets:
  - Open-Orca/OpenOrca
language:
  - en
  - ko
---

# This Model

This model is a fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b). It was aligned with 🤗 TRL's SFTTrainer on the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset.
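
For context, here is a minimal sketch of what such an SFT run could look like. The LoRA settings, hyperparameters, and record formatting below are illustrative assumptions, not the configuration actually used to train this adapter; the record rendering mirrors the prompt template shown under "How to use".

```python
# Minimal sketch, NOT the actual training script: the LoRA config,
# hyperparameters, and formatting below are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "EleutherAI/polyglot-ko-5.8b"
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

dataset = load_dataset("Open-Orca/OpenOrca", split="train")

def formatting_func(batch):
    # Render each OpenOrca record (system_prompt / question / response)
    # into one training string, using the same ChatML-style markers as
    # the inference template on this card.
    texts = []
    for sys_p, q, a in zip(batch["system_prompt"], batch["question"], batch["response"]):
        texts.append(
            f"{sys_p}<|im_end|>\n"
            f"<|im_start|>user\n{q}<|im_end|>\n"
            f"<|im_start|>assistant\n{a}<|im_end|>"
        )
    return texts

# Assumed LoRA hyperparameters for illustration only.
peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    formatting_func=formatting_func,
    peft_config=peft_config,
    max_seq_length=1024,
    args=TrainingArguments(output_dir="sft-out", per_device_train_batch_size=1),
)
trainer.train()
```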

## How to use

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "ssong1/gpt-j-5.8b", torch_dtype="auto", device_map="auto"
)

lora_path = "ssong1/gpt-j-5.8b-sum-adapter"
model = PeftModel.from_pretrained(base_model, lora_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(lora_path)

# ChatML-style prompt template used by this adapter.
prompt_template = """\
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

# "Q: Summarize the following document, Context:{context}"
msg = "Q:다음 문서를 요약 하세요, Context:{context}"

# OpenOrca system prompt, kept verbatim from the dataset.
system_prompt = "You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can."

# Paste the document to summarize between the quotes.
context = """\
"""

tokens = tokenizer.encode(
    prompt_template.format(
        system_prompt=system_prompt,
        prompt=msg.format(context=context),
    ),
    return_tensors="pt",
).to(model.device)

gen_tokens = model.generate(
    input_ids=tokens,
    do_sample=False,  # greedy decoding
    max_length=1024,
    pad_token_id=63999,  # id of <|im_end|>
    eos_token_id=63999,
)

# Split the output into the echoed prompt and the newly generated summary.
prompt_len = tokens[0].shape[0]
inputs = tokenizer.batch_decode([gen_tokens[0][:prompt_len]])[0]
generated = tokenizer.batch_decode([gen_tokens[0][prompt_len:]])[0].replace(
    "<|im_end|>", ""
)
print(inputs)
print("\ngenerated:")
print(generated)
```
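
The template uses ChatML's `<|im_start|>`/`<|im_end|>` markers, and token id `63999` is assumed to be `<|im_end|>` in this tokenizer. Because `do_sample=False` selects the highest-probability token at every step, the generated summary is deterministic for a given context.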

## Framework versions

- PEFT 0.7.1