---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Manticore 13B Chat Pyg Guanaco GPTQ

These files are GPTQ 4bit model files for [Manticore 13B Chat Pyg Guanaco](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

**This is an experimental new GPTQ which offers up to 8K context size.**

The increased context has been tested to work with [ExLlama](https://github.com/turboderp/exllama), via the latest release of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It has also been tested from Python code using AutoGPTQ with `trust_remote_code=True`.

Code credits:
- Original concept and code for increasing context length: [kaiokendev](https://huggingface.co/kaiokendev)
- Updated Llama modelling code that applies this automatically via `trust_remote_code`: [emozilla](https://huggingface.co/emozilla)

Please read below carefully to see how to use this model.

GGML versions are not yet provided, as llama.cpp does not yet support SuperHOT. This is being investigated and support will hopefully come soon.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco)

## How to easily download and use this model in text-generation-webui with ExLlama

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. Untick **Autoload the model**.
6. In the top left, click the refresh icon next to **Model**.
7. In the **Model** dropdown, choose the model you just downloaded: `Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ`.
8. To use the increased context, set the **Loader** to **ExLlama**, set **max_seq_len** to 8192 or 4096, and set **compress_pos_emb** to **4** for 8192 context or to **2** for 4096 context (in general, compress_pos_emb = max_seq_len / 2048).
9. Now click **Save Settings** followed by **Reload**.
10. The model will automatically load, and is now ready for use!
11. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code with AutoGPTQ

First make sure you have AutoGPTQ and Einops installed:

```
pip3 install einops auto-gptq
```

Then run the following code. Note that in order to make this work, `config.json` has been hardcoded to a sequence length of 8192.

If you want to try 4096 instead, to reduce VRAM usage, please manually edit `config.json` to set `max_position_embeddings` to the value you want.
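
For example, here is a minimal sketch of that edit (my illustration, not from the repo; the local path is an assumption and should point at wherever you downloaded the model):

```python
import json

# Hypothetical local path to your downloaded copy of this repo
config_path = "Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ/config.json"

with open(config_path) as f:
    config = json.load(f)

# Halve the context length to reduce VRAM usage
config["max_position_embeddings"] = 4096

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```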

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ"
model_basename = "manticore-13b-chat-pyg-guanaco-superhot-8k-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device_map='auto',
        use_triton=use_triton,
        quantize_config=None)

model.seqlen = 8192

# Note: check that the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Using other UIs: monkey patch

Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.

It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
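
As an untested sketch, usage would look something like the following. The function name is an assumption on my part; check `llama_rope_scaled_monkey_patch.py` for the exact symbol it exports:

```python
# Hypothetical usage sketch: apply the patch BEFORE loading the model, so that
# transformers' LLaMA rotary embeddings are replaced with the scaled version.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope

replace_llama_rope_with_scaled_rope()

# ...then load the model as usual, without needing trust_remote_code=True
```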

## Provided files

**manticore-13b-chat-pyg-guanaco-superhot-8k-GPTQ-4bit-128g.no-act.order.safetensors**

This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act), to increase compatibility and improve inference speed.
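
For reference, those parameters correspond to the following AutoGPTQ quantisation config (my illustration of the settings, not the exact command used to create the files):

```python
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit quantisation
    group_size=128,  # groupsize 128, for higher inference accuracy
    desc_act=False,  # no act-order, for wider compatibility and faster inference
)
```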

* `manticore-13b-chat-pyg-guanaco-superhot-8k-GPTQ-4bit-128g.no-act.order.safetensors`
  * Works with ExLlama with increased context (4096 or 8192).
  * Works with AutoGPTQ in Python code, including with increased context, if `trust_remote_code=True` is set.
  * Should work with GPTQ-for-LLaMa in CUDA mode, but it is unknown whether increased context works - TBC. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = 128. Act Order / desc_act = False.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
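
To illustrate where those numbers come from (my sketch, not code from the SuperHOT repo): the scaling factor of 0.25 compresses 8192 positions into the original 2048-position range before the rotary embeddings are computed.

```python
import torch

# 2048 / 8192 = 0.25 -- the "scaling factor" referred to above
scale = 2048 / 8192

# Extended position indices are interpolated back into the trained range,
# so position 8191 is seen by RoPE as position ~2047.75
positions = torch.arange(8192)
scaled_positions = positions * scale
```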

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 of 0.99, epsilon of 1e-5
- Trained on the 4-bit base model

# Original model card: Manticore 13B Chat Pyg Guanaco

Manticore-13b-Chat-Pyg with the Guanaco 13b qLoRa from TimDettmers applied.