---
license: apache-2.0
library_name: peft
base_model: ssong1/kgpt-j-5.8b
datasets:
- Open-Orca/OpenOrca
language:
- en
- ko
---

#### This Model
This model is a fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b).
It was fine-tuned with [🤗 TRL's](https://github.com/huggingface/trl) `SFTTrainer` on the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset.
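
The training script itself is not published. As a rough sketch only, SFT with `SFTTrainer` plus a LoRA adapter could look like the following: the TRL ~0.7 API is assumed (contemporary with PEFT 0.7.1), all hyperparameters are hypothetical, and the ChatML-style rendering simply mirrors the inference template shown below.

```python
# Hypothetical training sketch -- NOT the published script. Assumes a TRL
# ~0.7.x API (contemporary with PEFT 0.7.1); hyperparameters are illustrative.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "EleutherAI/polyglot-ko-5.8b"
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)
# Assumes the <|im_start|>/<|im_end|> markers are in the tokenizer vocabulary.
tokenizer = AutoTokenizer.from_pretrained(base)

def to_chatml(example):
    # Render an OpenOrca row into the ChatML-style layout used at inference.
    return {
        "text": f"{example['system_prompt']}<|im_end|>\n"
                f"<|im_start|>user\n{example['question']}<|im_end|>\n"
                f"<|im_start|>assistant\n{example['response']}<|im_end|>"
    }

dataset = load_dataset("Open-Orca/OpenOrca", split="train").map(to_chatml)

peft_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # polyglot-ko is GPT-NeoX-based
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(
        output_dir="sft-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
)
trainer.train()
trainer.model.save_pretrained("gpt-j-5.8b-sum-adapter")
```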


#### How to use

```python
import torch

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the summarization adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "ssong1/gpt-j-5.8b", torch_dtype="auto", device_map="auto"
)

lora_path = "ssong1/gpt-j-5.8b-sum-adapter"
model = PeftModel.from_pretrained(base_model, lora_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(lora_path)

# ChatML-style prompt template used by the adapter (kept verbatim).
prompt_template = """\
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

# "Q: Summarize the following document, Context: {context}"
msg = "Q:다음 문서를 요약 하세요, Context:{context}"

# System prompt kept verbatim from the OpenOrca dataset.
system_prompt = "You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can."

# Paste the document you want summarized between the triple quotes.
context = """\
"""

tokens = tokenizer.encode(
    prompt_template.format(
        system_prompt=system_prompt,
        prompt=msg.format(context=context),
    ),
    return_tensors="pt",
).to(model.device)

gen_tokens = model.generate(
    input_ids=tokens,
    do_sample=False,  # greedy decoding, so no sampling temperature is needed
    max_length=1024,
    pad_token_id=63999,  # special token id used by this tokenizer as pad/eos
    eos_token_id=63999,
)

# Split the output back into the echoed prompt and the generated summary.
prompt_len = tokens.shape[1]
inputs = tokenizer.decode(gen_tokens[0][:prompt_len])
generated = tokenizer.decode(gen_tokens[0][prompt_len:]).replace("<|im_end|>", "")
print(inputs)
print("\ngenerated:")
print(generated)
```
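
To serve the model without a runtime PEFT dependency, the adapter can optionally be folded into the base weights using PEFT's standard `merge_and_unload()` (assuming the adapter is LoRA, as the variable name above suggests); the output path is just an example.

```python
# Optional: merge the LoRA weights into the base model so it can be loaded
# with plain transformers (no peft needed at inference time).
merged = model.merge_and_unload()
merged.save_pretrained("gpt-j-5.8b-sum-merged")      # example output path
tokenizer.save_pretrained("gpt-j-5.8b-sum-merged")
```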


### Framework versions

- PEFT 0.7.1