TheBloke committed on
Commit
5cbd3ff
1 Parent(s): bf38bd2

Initial GPTQ model commit

Files changed (1)
  1. README.md +287 -0
README.md ADDED
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# VMware's Open Llama 7B v2 Open Instruct GPTQ
21
+
22
+ These files are GPTQ 4bit model files for [VMware's Open Llama 7B v2 Open Instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct).
23
+
24
+ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
25
+
26
+ ## Repositories available
27
+
28
+ * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GPTQ)
29
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GGML)
30
+ * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct)
31
+
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:
```

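The `{prompt}` placeholder is where your instruction goes. As a minimal illustration of filling the template in Python (it mirrors the f-string used in the example code further down):

```python
# Minimal sketch: fill the Alpaca-style template with a user instruction.
prompt = "Tell me about AI"

prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:
'''

print(prompt_template)
```
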
## Provided files

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

| Branch | Filename | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With |
| ------ | -------- | ---- | ---------- | -------------------- | --------- | ------------------- | --------- |
| main | open-llama-7b-v2-open-instruct-GPTQ-4bit-128g.no-act.order.safetensors | 4 | 128 | False | 4.00 GB | True | GPTQ-for-LLaMa |
| gptq-4bit-32g-actorder_True | gptq_model-4bit-32g.safetensors | 4 | 32 | True | 4.28 GB | True | GPTQ-for-LLaMa |
| gptq-4bit-64g-actorder_True | gptq_model-4bit-64g.safetensors | 4 | 64 | True | 4.02 GB | True | GPTQ-for-LLaMa |
| gptq-4bit-128g-actorder_True | gptq_model-4bit-128g.safetensors | 4 | 128 | True | 3.90 GB | True | GPTQ-for-LLaMa |
| gptq-8bit--1g-actorder_True | gptq_model-8bit--1g.safetensors | 8 | -1 | True | 7.01 GB | False | GPTQ-for-LLaMa |

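One way to fetch a branch other than `main` from Python is with the `huggingface_hub` package; this is a sketch, assuming you have installed it with `pip install huggingface_hub`:

```python
# Sketch: download one quantisation branch (revision) to a local folder.
# Branch names are the ones listed in the table above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/open-llama-7B-v2-open-instruct-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # or any other branch from the table
)

print(local_dir)  # point text-generation-webui or AutoGPTQ at this folder
```

Alternatively, `git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GPTQ` fetches the same branch with git (git-lfs required).
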
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to do a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/open-llama-7B-v2-open-instruct-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `open-llama-7B-v2-open-instruct-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
   * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`GITHUB_ACTIONS=true pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/open-llama-7B-v2-open-instruct-GPTQ"
model_basename = "open-llama-7b-v2-open-instruct-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:

'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

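The example above loads the `main` branch file. If you fetched one of the other branches (for example with `snapshot_download` as sketched earlier), a minimal variation is to point `from_quantized` at the downloaded folder and the matching basename from the Provided files table. This sketch reuses the imports and the `use_triton` flag from the example above:

```python
# Sketch: load a non-main branch from a local folder instead of the Hub `main` branch.
# `local_dir` is the folder returned by snapshot_download; the basename must match
# the .safetensors filename in that branch (see the Provided files table).
model = AutoGPTQForCausalLM.from_quantized(
    local_dir,
    model_basename="gptq_model-4bit-32g",
    use_safetensors=True,
    trust_remote_code=True,
    device="cuda:0",
    use_triton=use_triton,
    quantize_config=None,
)
```
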
## Compatibility

The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: VMware's Open Llama 7B v2 Open Instruct


# VMware/open-llama-7B-v2-open-instruct
Instruction-tuned version of the fully trained Open LLama 7B v2 model. The model is open for <b>COMMERCIAL USE</b>. <br>

- This model performs better on code compared to v1 due to the improvements made on the base model by the openlm-research team.
- The instruction model is trained on an improved instruction tuning dataset compared to v1.

<b>NOTE</b>: The model was trained using the Alpaca prompt template.
<b>NOTE</b>: The fast tokenizer results in incorrect encoding; set ```use_fast = False``` when instantiating the tokenizer.

## License
- <b>Commercially Viable</b>
- Open-instruct-v1
  - Mosaic/Dolly-HHRLHF + filtered OASST1 - cc by 3.0
- Subset of COT SUBMIX (from FLAN V2), zero-shot examples:
  - ESNLI - MIT
  - ECQA - CDLA 1.0 - Sharing
  - Strategy - MIT
  - CREAK - MIT
  - gsm8k - MIT
  - aqua - MIT
  - qasc - Apache 2.0
- The language model ([openlm-research/open_llama_v2_7b](https://huggingface.co/openlm-research/open_llama_v2_7b)) is under apache-2.0


## Nomenclature

- Model: Open-llama-v2
- Model Size: 7B parameters
- Dataset: Open-instruct (oasst, dolly, hhrlhf)

## Use in Transformers

```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-7b-v2-open-instruct'

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = """What is attention mechanism of a transformer model?
Write a python code to illustrate how attention works within a transformer model using numpy library. Do not use pytorch or tensorflow."""

input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])

print(output)

'''
Sure, I can help you with that!

Attention mechanisms in transformer models are typically implemented using the attention mechanism in the self-attention layer. Self-attention allows the model to focus on different parts of the input sequence when processing it. This is achieved by computing a set of attention weights, which are used to weigh the contribution of each input element to the output.

Here's an example code using NumPy to illustrate how attention works in a transformer model:

```python
import numpy as np

def attention_weights(query, key, value, mask):
    # Query, key, and value are input tensors. Mask is a tensor of zeros and ones that represents the attention mask.
    # It is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
    # The attention weights are the element-wise product of the query, key, and mask tensors.
    # The result is a tensor of the same shape as the query tensor.

    # Compute the dot product between the query tensor and the key tensor
    dot = np.matmul(query, key)

    # Compute the element-wise softmax of the dot product tensor
    exp_dot = np.exp(dot)

    # Multiply the dot product and the softmax of the dot product tensors
    weights = dot * exp_dot

    # Return the attention weights as a NumPy tensor
    return weights

# Define the input sequence
query = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
key = np.array([[0.1, 0.2], [0.3, 0.4]])
value = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
mask = np.array([[False, True, True], [False, True, True]])

# Compute the attention weights
weights = attention_weights(query, key, value, mask)

# Print the attention weights
print(weights)
```

In this example, the `attention_weights` function takes as input the query tensor, key tensor, value tensor, and mask tensor. It computes the dot product between the query and key tensors using the `np.matmul` function, and then applies a softmax function using the `np.exp` function to the element-wise dot product tensor. It then multiplies the dot product and softmax tensors using the `np.matmul` function, and returns the result as a NumPy tensor.

The `query`, `key`, and `value` tensors represent the input sequence to the transformer model. The `mask` tensor represents the attention mask, which is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.

The output of the `attention_weights` function is a NumPy tensor that represents the attention weights for the input sequence. These weights are used by the transformer model to weigh the contribution of each input element to the output.

I hope this helps!</s>
'''
```

## Finetuning details
The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning).

## Evaluation

<B>TODO</B>