Update README.md
README.md CHANGED
@@ -43,6 +43,7 @@ The current model was developed using the GPT-4 API to generate a dataset for or
 ## How to Get Started with the Model
 
 This is a simple example of usage of the model.
+If you want to load the fine-tuned model in INT4, please specify `load_in_4bit=True` instead of `load_in_8bit=True`.
 
 ``` python
 import torch
@@ -88,9 +89,10 @@ tokenizer = AutoTokenizer.from_pretrained(
 
 sentence = "아이스아메리카노 톨사이즈 한잔 하고요. 딸기스무디 한잔 주세요. 또, 콜드브루라떼 하나요."
 analysis = wrapper_generate(
-
-
-
+    model=trained_model,
+    tokenizer=tokenizer,
+    input_prompt=prompt_template.format(System=default_system_msg, User=sentence),
+    do_stream=False
 )
 print(analysis)
 ```
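The INT4 note in this change can be sketched as a loading call. This is a minimal, hypothetical snippet, not the README's full code: the model id is a placeholder, and `device_map`/`torch_dtype` are common defaults assumed here, not values taken from the diff.

```python
# Sketch: loading a fine-tuned checkpoint in INT4 instead of INT8.
# "your-org/your-model" is a placeholder; substitute the actual model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder, not this README's checkpoint

trained_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,  # instead of load_in_8bit=True
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

Both `load_in_4bit` and `load_in_8bit` rely on the `bitsandbytes` package being installed; only one of the two flags should be set.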
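The `input_prompt` argument above is built with `prompt_template.format(System=..., User=...)`. The following self-contained sketch shows that pattern; the template string and system message are hypothetical stand-ins, since the README's actual values are outside this diff.

```python
# Hypothetical stand-ins for prompt_template and default_system_msg; only the
# .format(System=..., User=...) call pattern is taken from the example above.
default_system_msg = "You are a barista assistant. Analyze the customer's order."
prompt_template = "### System:\n{System}\n\n### User:\n{User}\n\n### Assistant:\n"

# A sample order sentence in Korean: "One tall iced americano, please.
# One strawberry smoothie, please. Also, one cold brew latte."
sentence = "아이스아메리카노 톨사이즈 한잔 하고요. 딸기스무디 한잔 주세요. 또, 콜드브루라떼 하나요."

input_prompt = prompt_template.format(System=default_system_msg, User=sentence)
print(input_prompt)
```

The resulting string is what `wrapper_generate` would receive as its `input_prompt`: the system message followed by the user's order, ending at the assistant turn for the model to complete.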