TheBloke committed
Commit 54d1dd7
1 Parent(s): be4364e

Initial GPTQ model commit

Files changed (1): README.md added (+262, -0)
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Minlik's Chinese Alpaca 33B Merged GPTQ

These files are GPTQ 4bit model files for [Minlik's Chinese Alpaca 33B Merged](https://huggingface.co/minlik/chinese-alpaca-33b-merged) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

**This is an experimental new GPTQ which offers up to 8K context size.**

The increased context has been tested to work with [ExLlama](https://github.com/turboderp/exllama), via the latest release of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It has also been tested from Python code using AutoGPTQ, with `trust_remote_code=True`.

Code credits:
- Original concept and code for increasing context length: [kaiokendev](https://huggingface.co/kaiokendev)
- Updated Llama modelling code that includes this automatically via `trust_remote_code`: [emozilla](https://huggingface.co/emozilla)

Please read below carefully to see how to use this model.

**NOTE**: Using the full 8K context on a 30B model will exceed 24GB VRAM.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/minlik/chinese-alpaca-33b-merged)

## How to easily download and use this model in text-generation-webui with ExLlama

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. Untick **Autoload the model**.
6. In the top left, click the refresh icon next to **Model**.
7. In the **Model** dropdown, choose the model you just downloaded: `Chinese-Alpaca-33B-SuperHOT-8K-GPTQ`.
8. To use the increased context, set the **Loader** to **ExLlama**, set **max_seq_len** to 8192 or 4096, and set **compress_pos_emb** to **4** for 8192 context, or to **2** for 4096 context (the same settings are sketched in Python after this list).
9. Now click **Save Settings** followed by **Reload**.
10. The model will automatically load, and is now ready for use!
11. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

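If you would rather drive ExLlama from your own Python code instead of through the webui, the settings from step 8 map directly onto ExLlama's config object. The sketch below is illustrative only: it is modelled on ExLlama's bundled example scripts, and the module paths, class names (`ExLlamaConfig`, `ExLlamaGenerator`, `generate_simple`) and the local `model_dir` path are assumptions that may not match your checkout of ExLlama.

```python
# A minimal sketch, assuming ExLlama is cloned locally and this script is run from
# its repo root. Names follow ExLlama's example scripts and may change between versions.
import glob
import os

from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_dir = "/path/to/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ"  # placeholder: local download of this repo

config = ExLlamaConfig(os.path.join(model_dir, "config.json"))
config.model_path = glob.glob(os.path.join(model_dir, "*.safetensors"))[0]
config.max_seq_len = 8192      # or 4096 to reduce VRAM usage
config.compress_pos_emb = 4.0  # 4 for 8192 context, 2 for 4096 context

model = ExLlama(config)
tokenizer = ExLlamaTokenizer(os.path.join(model_dir, "tokenizer.model"))
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)

print(generator.generate_simple("USER: Tell me about AI\nASSISTANT:", max_new_tokens=256))
```
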
## How to use this GPTQ model from Python code with AutoGPTQ

First make sure you have AutoGPTQ and Einops installed:

```
pip3 install einops auto-gptq
```

Then run the following code. Note that in order for the extended context to work, `config.json` in this repo has its sequence length hardcoded to 8192.

If you want to try 4096 instead, to reduce VRAM usage, manually edit `config.json` and set `max_position_embeddings` to the value you want.

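If you'd prefer not to edit the file by hand, the snippet below shows one way to download the repo and patch `max_position_embeddings` before loading. This is a minimal sketch of my own, not something shipped with this repo; it assumes a recent `huggingface_hub` that supports `local_dir`, and the target directory name is a placeholder.

```python
import json
import os

from huggingface_hub import snapshot_download

# Download the full repo to a local directory (placeholder path).
local_dir = snapshot_download(
    "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ",
    local_dir="Chinese-Alpaca-33B-SuperHOT-8K-GPTQ",
)

# Lower the hardcoded 8192 context to 4096 to reduce VRAM usage.
config_path = os.path.join(local_dir, "config.json")
with open(config_path) as f:
    config = json.load(f)
config["max_position_embeddings"] = 4096
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

print(f"Patched {config_path}")
```

If you do this, point `from_quantized()` in the code below at the local directory instead of the repo name, so the patched `config.json` is the one that gets used.
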
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ"
model_basename = "chinese-alpaca-33b-superhot-8k-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           model_basename=model_basename,
                                           use_safetensors=True,
                                           trust_remote_code=True,
                                           device_map='auto',
                                           use_triton=use_triton,
                                           quantize_config=None)

# Tell AutoGPTQ the extended sequence length (must match max_position_embeddings in config.json).
model.seqlen = 8192

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Using other UIs: monkey patch

Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.

It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.

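For the curious: both the monkey patch and the `trust_remote_code` modelling code work by interpolating the rotary position embeddings. Position indices are multiplied by a scaling factor (0.25 for 8K, i.e. `compress_pos_emb = 4`) so that 8192 positions are squeezed into the range the base model saw during training. The snippet below is my own simplified illustration of that idea, not the contents of `llama_rope_scaled_monkey_patch.py`.

```python
import torch

def scaled_rope_angles(seq_len: int, dim: int, scale: float = 0.25, base: float = 10000.0):
    """Rotary embedding angles with position interpolation.

    scale < 1.0 compresses the position indices, e.g. scale=0.25 maps
    positions 0..8191 onto the 0..2047 range the base model was trained on.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() * scale  # the only change vs. vanilla RoPE
    angles = torch.outer(positions, inv_freq)           # shape: (seq_len, dim/2)
    return angles.cos(), angles.sin()

# Vanilla RoPE (scale=1.0) vs. SuperHOT-style interpolation (scale=0.25):
cos_plain, _ = scaled_rope_angles(seq_len=8192, dim=128, scale=1.0)
cos_scaled, _ = scaled_rope_angles(seq_len=8192, dim=128, scale=0.25)
print(cos_plain.shape, cos_scaled.shape)
```
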
## Provided files

**chinese-alpaca-33b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors**

This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created without `group_size`, to lower VRAM requirements, and with `--act-order` (`desc_act`) to boost inference accuracy as much as possible.

* `chinese-alpaca-33b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors`
  * Works with ExLlama, including with increased context (4096 or 8192)
  * Works with AutoGPTQ in Python code, including with increased context, if `trust_remote_code=True` is set
  * Should work with GPTQ-for-LLaMa in CUDA mode, but it is unconfirmed whether the increased context works - TBC. May have issues with GPTQ-for-LLaMa Triton mode
  * Works with text-generation-webui, including one-click-installers
  * Parameters: Groupsize = -1. Act Order / desc_act = True.

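If you want to fetch this file manually (for example to drop it into text-generation-webui's `models` directory yourself), `huggingface_hub` can download individual files. This is a small sketch of my own; the weights filename comes from this card, but the accompanying file list and destination directory are assumptions — check the repo's file listing for the exact names.

```python
from huggingface_hub import hf_hub_download

repo_id = "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ"
local_dir = "models/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ"  # placeholder destination

# The quantised weights plus the config/tokenizer files typically needed to load them.
for filename in [
    "chinese-alpaca-33b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors",
    "config.json",
    "tokenizer.model",
    "tokenizer_config.json",
]:
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=local_dir)
    print("Downloaded", path)
```
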
<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost, Nathan LeClaire, Iucharbius, Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex, terasurfer, Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model

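The hyperparameters listed above translate fairly directly into a `peft`/`transformers` training setup. The sketch below is my own reconstruction under those stated values, not kaiokendev's actual training script; the base model path, output directory, batch size and the Trainer wiring are placeholders or assumptions.

```python
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM, TrainingArguments

# LoRA hyperparameters as described above: rank 4, alpha 8, no dropout, no bias,
# applied to the four attention projections.
lora_config = LoraConfig(
    r=4,
    lora_alpha=8,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimiser settings as described above: AdamW with lr 3e-4, weight decay 0.1,
# betas (0.9, 0.99), epsilon 1e-5, for 3 epochs.
training_args = TrainingArguments(
    output_dir="superhot-lora",     # placeholder
    num_train_epochs=3,
    learning_rate=3e-4,
    weight_decay=0.1,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-5,
    per_device_train_batch_size=1,  # assumption, not stated in the card
)

base_model = LlamaForCausalLM.from_pretrained("path/to/llama-30b")  # placeholder path
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
# ...then pass `model`, `training_args` and a dataset of ~1200 samples to transformers.Trainer.
```
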
# Original model card: Minlik's Chinese Alpaca 33B Merged


This is the Chinese Alpaca-33B model, obtained by adding a Chinese vocabulary to the tokenizer, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets on top of that.

The base and LoRA models used for the conversion are:
- base-model: elinas/llama-30b-hf-transformers-4.29
- lora-model: ziqingyang/chinese-alpaca-lora-33b

For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v4.0

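For reference, a merge like the one described above can be reproduced with `peft` by loading the LoRA on top of the base model and folding it in. The sketch below is an assumption about how such a merge is typically done, not the exact script used for this repo; in particular, it only gestures at resizing the embeddings to the extended Chinese tokenizer, which the real conversion also has to handle.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_id = "elinas/llama-30b-hf-transformers-4.29"
lora_id = "ziqingyang/chinese-alpaca-lora-33b"

# Assumption: the LoRA repo ships the extended Chinese tokenizer; the base model's
# embeddings must be resized to match it before the adapter weights can be applied.
tokenizer = LlamaTokenizer.from_pretrained(lora_id)
base = LlamaForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
base.resize_token_embeddings(len(tokenizer))

# Load the adapter and fold its weights into the base model.
merged = PeftModel.from_pretrained(base, lora_id).merge_and_unload()

merged.save_pretrained("chinese-alpaca-33b-merged")
tokenizer.save_pretrained("chinese-alpaca-33b-merged")
```
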
### Example usage
1. Install the required packages
```bash
pip install sentencepiece
pip install "transformers>=4.28.0"
```

2. Generate text
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""


tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-33b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-33b-merged').half().to('cuda')
model.eval()

text = '第一个登上月球的人是谁?'  # "Who was the first person to land on the moon?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# Strip the prompt so only the model's response is printed.
print(output.replace(prompt, '').strip())
```