---
license: apache-2.0
---
Base model: HuatuoGPT2-7B

Quantization method: int8, using the bitsandbytes library
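As a hedged sketch of what an int8 load via bitsandbytes typically looks like in transformers: `BitsAndBytesConfig(load_in_8bit=True)` is the standard transformers entry point for this, but the base-model repository name below is an assumption, and this is illustrative rather than the exact recipe used to produce this checkpoint.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Standard transformers/bitsandbytes int8 loading config (illustrative;
# not necessarily the exact recipe used to produce this checkpoint).
quant_config = BitsAndBytesConfig(load_in_8bit=True)

# Assumed full-precision base repository; running this downloads the
# full-precision weights and requires a GPU with bitsandbytes installed.
model = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/HuatuoGPT2-7B",
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)
```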
Model size: about 8 GB after quantization; the original model is 26.8 GB
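The reported sizes are consistent with back-of-the-envelope arithmetic: at roughly 7e9 parameters (an assumption inferred from the "7B" in the model name), fp32 weights take about 4 bytes each (~28 GB) while int8 weights take 1 byte each (~7 GB), with the remaining gap accounted for by non-quantized layers and overhead.

```python
# Back-of-the-envelope check of the reported model sizes.
# Assumption: ~7e9 parameters, inferred from the "7B" in the model name.
params = 7e9

fp32_gb = params * 4 / 1e9   # 4 bytes per weight in fp32
int8_gb = params * 1 / 1e9   # 1 byte per weight in int8

print(f"fp32: ~{fp32_gb:.0f} GB, int8: ~{int8_gb:.0f} GB")
```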
Source: Baichuan model support

Usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

# Load the int8-quantized checkpoint and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("jiangchengchengNLP/huatuo-7b-sns8bits", use_fast=True, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jiangchengchengNLP/huatuo-7b-sns8bits", device_map="auto", torch_dtype="auto", trust_remote_code=True)

# Reuse the generation config from the official HuatuoGPT2-7B 4-bit release
model.generation_config = GenerationConfig.from_pretrained("FreedomIntelligence/HuatuoGPT2-7B-4bits")

messages = []
messages.append({"role": "user", "content": "肚子疼怎么办?"})  # "What should I do about a stomachache?"
response = model.HuatuoChat(tokenizer, messages)
print(response)
```