Update for Transformers GPTQ support
README.md
CHANGED

````diff
@@ -4,17 +4,20 @@ license: other
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
     <div style="display: flex; flex-direction: column; align-items: flex-start;">
-        <p><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
+        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
     </div>
     <div style="display: flex; flex-direction: column; align-items: flex-end;">
-        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
     </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # VMware's Open Llama 7B v2 Open Instruct GPTQ
@@ -164,6 +167,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -183,12 +187,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**:
-
-**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
+
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: VMware's Open Llama 7B v2 Open Instruct
@@ -207,10 +214,10 @@ Instruction-tuned version of the fully trained Open LLama 7B v2 model. The mode
 - <b>Commercially Viable</b>
 
 - Open-instruct-v1
-- Mosaic/Dolly-HHRLHF + filtered OASST1 - cc by 3.0
+- Mosaic/Dolly-HHRLHF + filtered OASST1 - cc by 3.0
 
 Subset of COT SUBMIX (FROM FLAN V2) Zeroshot examples
-- ESNLI - MIT
+- ESNLI - MIT
 - ECQA - CDLA 1.0 - Sharing
 - Strategy - MIT
 - CREAK - MIT
@@ -220,7 +227,7 @@ Subset of COT SUBMIX (FROM FLAN V2) Zeroshot examples
 - Language Model, ([openlm-research/open_llama_v2_7b](https://huggingface.co/openlm-research/open_llama_v2_7b)) is under apache-2.0
 
 
-## Nomenclature
+## Nomenclature
 
 - Model : Open-llama-v2
 - Model Size: 7B parameters
@@ -242,7 +249,7 @@ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float
 
 prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
 
-prompt = """What is attention mechanism of a transformer model?
+prompt = """What is attention mechanism of a transformer model?
 Write a python code to illustrate how attention works within a transformer model using numpy library. Donot use pytorch or tensorflow."""
 
 
@@ -271,16 +278,16 @@ def attention_weights(query, key, value, mask):
 # It is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
 # The attention weights are the element-wise product of the query, key, and mask tensors.
 # The result is a tensor of the same shape as the query tensor.
-
+
 # Compute the dot product between the query tensor and the key tensor
 dot = np.matmul(query, key)
-
+
 # Compute the element-wise softmax of the dot product tensor
 exp_dot = np.exp(dot)
-
+
 # Multiply the dot product and the softmax of the dot product tensors
 weights = dot * exp_dot
-
+
 # Return the attention weights as a NumPy tensor
 return weights
 
@@ -306,7 +313,7 @@ The output of the `attention_weights` function is a NumPy tensor that represents
 I hope this helps!</s>
 '''
 ```
-
+
 ## Finetuning details
 The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning)
 ## Evaluation
````
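For reference, the `prompt_template` quoted in the model-card diff above is the standard Alpaca instruction format. A minimal sketch of filling it, with an illustrative instruction that is not from the model card:

```python
# Fill the Alpaca-style template quoted in the README diff above.
prompt_template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
    "\n\n### Instruction:\n{instruction}\n\n### Response:"
)

prompt = prompt_template.format(instruction="Tell me about AI")
print(prompt)
```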
config.json
CHANGED

Updated contents (the previous values do not survive in the rendered diff); the change that matters for this commit is the new embedded `quantization_config` block:

```json
{
  "_name_or_path": "/home/gollapudit/peft/open_llama_7b_v2_open_instruct",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 2048,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.30.2",
  "use_cache": true,
  "vocab_size": 32000,
  "quantization_config": {
    "bits": 4,
    "group_size": 128,
    "damp_percent": 0.01,
    "desc_act": false,
    "sym": true,
    "true_sequential": true,
    "model_file_base_name": "model",
    "quant_method": "gptq"
  }
}
```
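The embedded `quantization_config` block is the point of this commit: with it present in `config.json`, Transformers can load the GPTQ weights through its native integration, with no AutoGPTQ-specific loading code. A minimal sketch, assuming `transformers>=4.32` with the `optimum` and `auto-gptq` packages installed; the repo id is an assumption, since the commit does not name it:

```python
# Minimal sketch: loading via Transformers' native GPTQ support, which reads
# the quantization_config embedded in config.json.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/open-llama-7b-v2-open-instruct-GPTQ"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
# from_pretrained picks up the GPTQ settings automatically; device_map="auto"
# places the quantized layers on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nTell me about AI\n\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0]))
```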
open-llama-7b-v2-open-instruct-GPTQ-4bit-128g.no-act.order.safetensors → model.safetensors
RENAMED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5d2f8c94b37e75a76a658272afd7c1eacf6a327d359d53b60babe065040e6486
+size 3996053408
```
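What lives in the repo is the Git LFS pointer shown above, not the raw bytes; the rename matters because `model.safetensors` is the default single-file weights name Transformers looks for. To confirm a downloaded copy matches the pointer, a small sketch (the expected hash and size come from the pointer above; the local path is an assumption):

```python
# Verify a downloaded model.safetensors against the Git LFS pointer above.
import hashlib
from pathlib import Path

EXPECTED_OID = "5d2f8c94b37e75a76a658272afd7c1eacf6a327d359d53b60babe065040e6486"
EXPECTED_SIZE = 3996053408

path = Path("model.safetensors")  # assumed local download path
assert path.stat().st_size == EXPECTED_SIZE, "size mismatch"

sha = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)
assert sha.hexdigest() == EXPECTED_OID, "hash mismatch"
print("model.safetensors matches the LFS pointer")
```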
quantize_config.json
CHANGED

Updated contents (as with `config.json`, the previous values do not survive in the rendered diff):

```json
{
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.01,
  "desc_act": false,
  "sym": true,
  "true_sequential": true,
  "model_file_base_name": "model"
}
```
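`quantize_config.json` is what AutoGPTQ itself reads when the model is loaded outside the Transformers integration, and `"model_file_base_name": "model"` is what points it at the renamed `model.safetensors`. A minimal AutoGPTQ loading sketch, using the same assumed repo id as above:

```python
# Load the same weights directly with AutoGPTQ, which reads quantize_config.json.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/open-llama-7b-v2-open-instruct-GPTQ"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,  # weights are in model.safetensors
    device="cuda:0",
    use_triton=False,
)

inputs = tokenizer("Tell me about AI", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```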