tridungduong16 committed
Commit: 35caecb
Parent(s): 4f76bdb
Update README.md

README.md CHANGED
@@ -87,63 +87,39 @@ First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
 Then try the following example code:
 
 ```python
-from transformers import AutoTokenizer, pipeline, logging
 from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
-
-# [removed: the old model download/loading code (old lines 92-116); its content is not recoverable from this rendering]
-        quantize_config=None)
-"""
-
-prompt = "Tell me about AI"
-prompt_template=f'''{prompt}
-'''
-
-print("\n\n*** Generate:")
-
-input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
-output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
-print(tokenizer.decode(output[0]))
-
-# Inference can also be done using transformers' pipeline
-
-# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
-logging.set_verbosity(logging.CRITICAL)
-
-print("*** Pipeline:")
-pipe = pipeline(
-    "text-generation",
-    model=model,
-    tokenizer=tokenizer,
-    max_new_tokens=512,
-    temperature=0.7,
-    top_p=0.95,
-    repetition_penalty=1.15
+from threading import Thread
+import gc
+import traceback
+import asyncio
+import json
+from websockets.server import serve
+from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig, get_gptq_peft_model
+
+
+MODEL_PATH_GPTQ = "Llama-2-13B-GPTQ"
+ADAPTER_DIR = "Llama-2-13B-GPTQ-Orca"
+
+DEV = "cuda:0"
+
+tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH_GPTQ, use_fast=True)
+model = AutoGPTQForCausalLM.from_quantized(
+    MODEL_PATH_GPTQ,
+    use_safetensors=True,
+    trust_remote_code=False,
+    use_triton=True,
+    device="cuda:0",
+    warmup_triton=False,
+    trainable=True,
+    inject_fused_attention=True,
+    inject_fused_mlp=False,
 )
-
-
+model = get_gptq_peft_model(
+    model,
+    model_id=ADAPTER_DIR,
+    train_mode=False
+)
+model.eval()
 ```
 
 ## Compatibility
@@ -152,33 +128,6 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa
 
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
-<!-- footer start -->
-## Discord
-
-For further support, and discussions on these models and AI in general, join us at:
-
-[TheBloke AI's Discord server](https://discord.gg/theblokeai)
-
-## Thanks, and how to contribute.
-
-Thanks to the [chirper.ai](https://chirper.ai) team!
-
-I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
-
-If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
-
-Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
-
-* Patreon: https://patreon.com/TheBlokeAI
-* Ko-Fi: https://ko-fi.com/TheBlokeAI
-
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
-
-**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
-
-Thank you to all my generous patrons and donaters!
-
-<!-- footer end -->
 
 # Original model card: Meta's Llama 2 13B
 
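One caveat with the added example: the commit removes `from transformers import AutoTokenizer, pipeline, logging`, but the new code still calls `AutoTokenizer.from_pretrained`, so a transformers import is still required for the snippet to run. For reference, here is the new loading path as a single runnable sketch with that import restored and a generation call appended. The prompt and sampling parameters are assumptions carried over from the removed example, and `MODEL_PATH_GPTQ` / `ADAPTER_DIR` are local directories that must already exist on disk.

```python
from transformers import AutoTokenizer  # restored: the commit drops this import
from auto_gptq import AutoGPTQForCausalLM, get_gptq_peft_model

MODEL_PATH_GPTQ = "Llama-2-13B-GPTQ"   # local copy of the quantized base model
ADAPTER_DIR = "Llama-2-13B-GPTQ-Orca"  # local LoRA adapter directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH_GPTQ, use_fast=True)

# Load the quantized base model. trainable=True and the Triton kernel
# settings mirror the commit so the PEFT adapter can be attached.
model = AutoGPTQForCausalLM.from_quantized(
    MODEL_PATH_GPTQ,
    use_safetensors=True,
    trust_remote_code=False,
    use_triton=True,
    device="cuda:0",
    warmup_triton=False,
    trainable=True,
    inject_fused_attention=True,
    inject_fused_mlp=False,
)

# Attach the LoRA adapter in inference mode.
model = get_gptq_peft_model(model, model_id=ADAPTER_DIR, train_mode=False)
model.eval()

# Assumed usage, carried over from the removed example:
prompt = "Tell me about AI"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```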
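The added imports `Thread`, `gc`, `traceback`, `asyncio`, `json`, and `websockets.server.serve` are unused in the snippet as committed, which suggests the example was trimmed from a larger script that streams generations over a websocket. Purely as an illustration of what such a serving loop could look like (none of this is in the commit; the message format, port, and use of transformers' `TextIteratorStreamer` are all assumptions):

```python
import asyncio
import json
from threading import Thread

from transformers import TextIteratorStreamer
from websockets.server import serve

# Assumes `model` and `tokenizer` are already loaded as in the sketch above.

async def handle(websocket):
    async for message in websocket:
        prompt = json.loads(message)["prompt"]  # assumed message format
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
        streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)

        # Run generation in a background thread so decoded text can be
        # forwarded to the client as it is produced.
        thread = Thread(
            target=model.generate,
            kwargs=dict(inputs=input_ids, streamer=streamer, max_new_tokens=512),
        )
        thread.start()
        for text in streamer:           # blocking iterator; fine for a
            await websocket.send(text)  # single-client sketch
        thread.join()

async def main():
    async with serve(handle, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())
```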