TheBloke committed 6bd46c2 (parent: 9aad79e)

Initial GPTQ model commit

Files changed (1): README.md added (+250 lines)

---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# LmSys' Vicuna 33B 1.3 (final) GPTQ

These files are GPTQ 4bit model files for [LmSys' Vicuna 33B 1.3 (final)](https://huggingface.co/lmsys/vicuna-33b-v1.3) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

**This is an experimental new GPTQ which offers up to 8K context size.**

The increased context is tested to work with [ExLlama](https://github.com/turboderp/exllama), via the latest release of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It has also been tested from Python code using AutoGPTQ with `trust_remote_code=True`.

Code credits:
- Original concept and code for increasing context length: [kaiokendev](https://huggingface.co/kaiokendev)
- Updated Llama modelling code that includes this automatically via trust_remote_code: [emozilla](https://huggingface.co/emozilla)

Please read carefully below to see how to use it.

**NOTE**: Using the full 8K context on a 30B model will exceed 24GB VRAM.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-33b-v1.3)

## How to easily download and use this model in text-generation-webui with ExLlama

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. Untick **Autoload the model**.
6. In the top left, click the refresh icon next to **Model**.
7. In the **Model** dropdown, choose the model you just downloaded: `Vicuna-33B-1-3-SuperHOT-8K-GPTQ`.
8. To use the increased context, set the **Loader** to **ExLlama**, set **max_seq_len** to 8192 or 4096, and set **compress_pos_emb** to **4** for 8192 context, or to **2** for 4096 context (see the note on these values after this list).
9. Now click **Save Settings**, followed by **Reload**.
10. The model will automatically load, and is now ready for use!
11. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
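
A quick note on step 8: **compress_pos_emb** is simply the target context length divided by LLaMA's native 2048-token context, i.e. the reciprocal of the RoPE "scaling factor" mentioned in the SuperHOT card further down. The snippet below is purely illustrative, not something you need to run:

```python
# compress_pos_emb = desired context length / original context length (2048 for LLaMA)
ORIGINAL_CTX = 2048

for target_ctx in (4096, 8192):
    compress_pos_emb = target_ctx // ORIGINAL_CTX   # 2 for 4096, 4 for 8192
    scaling_factor = ORIGINAL_CTX / target_ctx      # 0.5 for 4096, 0.25 for 8192
    print(f"max_seq_len={target_ctx}: compress_pos_emb={compress_pos_emb}, "
          f"RoPE scaling factor={scaling_factor}")
```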

## How to use this GPTQ model from Python code with AutoGPTQ

First make sure you have AutoGPTQ and Einops installed:

```
pip3 install einops auto-gptq
```

Then run the following code. Note that in order to get this to work, `config.json` has been hardcoded to a sequence length of 8192.

If you want to try 4096 instead to reduce VRAM usage, please manually edit `config.json` to set `max_position_embeddings` to the value you want (a sketch of doing this programmatically follows the example below).

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GPTQ"
model_basename = "vicuna-33b-1.3-superhot-8k-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device_map='auto',
        use_triton=use_triton,
        quantize_config=None)

model.seqlen = 8192

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
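
As mentioned above, if you want 4096 context instead, edit `max_position_embeddings` in `config.json` before loading. One way is to download the repo to a local folder, edit the file there, and load from that path. This is only a minimal sketch, assuming `huggingface_hub` is installed; the local directory name is just an example:

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the repo to a local folder (example name - choose whatever you like)
local_dir = "Vicuna-33B-1-3-SuperHOT-8K-GPTQ-local"
snapshot_download(repo_id="TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GPTQ", local_dir=local_dir)

# Lower the hardcoded sequence length from 8192 to 4096 to reduce VRAM usage
config_path = Path(local_dir) / "config.json"
config = json.loads(config_path.read_text())
config["max_position_embeddings"] = 4096
config_path.write_text(json.dumps(config, indent=2))

# Then pass the local folder as model_name_or_path in the AutoGPTQ example above,
# and set model.seqlen = 4096 instead of 8192.
```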

## Using other UIs: monkey patch

Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.

It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
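
As a rough, untested illustration only: applying the patch might look like the sketch below. The function name is an assumption — check `llama_rope_scaled_monkey_patch.py` itself for what it actually exports, and make sure the file sits next to your script.

```python
# Hypothetical usage sketch of the RoPE-scaling monkey patch.
# The function name below is an assumption - open llama_rope_scaled_monkey_patch.py
# (downloaded from this repo) to confirm the real name.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope

# Patch transformers' LLaMA RoPE implementation BEFORE constructing the model
replace_llama_rope_with_scaled_rope()

# ...then load the model as usual (e.g. with AutoGPTQ as shown above),
# without needing trust_remote_code=True.
```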

## Provided files

**vicuna-33b-1.3-superhot-8k-GPTQ-4bit--1g.act.order.safetensors**

This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

* `vicuna-33b-1.3-superhot-8k-GPTQ-4bit--1g.act.order.safetensors`
  * Works with ExLlama with increased context (4096 or 8192)
  * Works with AutoGPTQ in Python code, including with increased context, if `trust_remote_code=True` is set.
  * Should work with GPTQ-for-LLaMa in CUDA mode, but it is unknown whether increased context works - TBC. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = -1. Act Order / desc_act = True.
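
For reference, these parameters correspond to the following AutoGPTQ quantisation config. This is only a sketch of the equivalent settings for comparison purposes; the provided file was actually quantised with GPTQ-for-LLaMa, not with this code:

```python
from auto_gptq import BaseQuantizeConfig

# Equivalent settings for the provided file:
# 4-bit, no grouping (group_size = -1), act-order / desc_act enabled
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=-1,
    desc_act=True,
)
```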

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost, Nathan LeClaire, Iucharbius, Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex, terasurfer, Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
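
For readers who want to set up something similar, the hyperparameters above map roughly onto a `peft` LoRA configuration like the sketch below. This is an illustrative reconstruction under stated assumptions, not kaiokendev's actual training script:

```python
from peft import LoraConfig

# Rough equivalent of the LoRA settings listed above (illustrative only)
lora_config = LoraConfig(
    r=4,                      # Rank = 4
    lora_alpha=8,             # Alpha = 8
    lora_dropout=0.0,         # no dropout
    bias="none",              # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimiser settings as described above would correspond to, e.g.:
# torch.optim.AdamW(params, lr=3e-4, weight_decay=0.1, betas=(0.9, 0.99), eps=1e-5)
```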

# Original model card: LmSys' Vicuna 33B 1.3 (final)

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
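
FastChat's CLI and API server (linked above) are the intended entry points. As a very rough alternative sketch using plain `transformers` (assuming `accelerate` is installed, sufficient GPU memory is available, and that the Vicuna v1.1-style prompt format shown here is still current; check FastChat's conversation templates for the canonical version):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-33b-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Vicuna v1.1+ conversation format
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Tell me about AI ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```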

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)