TheBloke committed on
Commit
6139497
1 Parent(s): b0300e0

Update for Transformers GPTQ support

README.md CHANGED
@@ -10,17 +10,20 @@ tags:
 ---
 
 <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
 <!-- header end -->
 
 # NousResearch's Redmond Puffin 13B V1.3 GPTQ
@@ -31,17 +34,31 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 
 Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!
 
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML)
 * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Redmond-Puffin-13B)
 
- ## Prompt template: Human-Gpt
 
 ```
- ### human:
- ### gpt:
 ```
 
 ## Provided files
@@ -52,13 +69,13 @@ Each separate quant is in a different branch. See below for instructions on fet
 
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
- | main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
- | gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
- | gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
 | gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
 
 ## How to download from branches
@@ -102,7 +119,7 @@ from transformers import AutoTokenizer, pipeline, logging
 from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
 model_name_or_path = "TheBloke/Redmond-Puffin-13B-GPTQ"
- model_basename = "gptq_model-4bit-128g"
 
 use_triton = False
 
@@ -129,9 +146,13 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 """
 
 prompt = "Tell me about AI"
- prompt_template=f'''### human:
- ### gpt:
- '''
 
 print("\n\n*** Generate:")
 
@@ -165,6 +186,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -184,13 +206,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
 
- **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
 
 
 Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
 # Original model card: NousResearch's Redmond Puffin 13B V1.3
@@ -212,7 +236,7 @@ Notable mentions for assisting in some of the training issues goes to: Caseus an
 
 ## Model Training
 
- Redmond-Puffin-13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
 
 Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
 
@@ -260,13 +284,13 @@ We plan to have these solved in an updated Puffin model in the very near future,
 
 ## Future Plans
 
- This is a relatively early build amongst the grand plans for the future of Puffin!
 
 Current limitations: Some token mismatch problems have been identified; these may affect the current output quality. We plan to have this solved in Puffin V2 along with other improvements.
 
 ## How you can help!
 
- In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
 
 If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact ldj on Discord!
 
 ---
 
 <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # NousResearch's Redmond Puffin 13B V1.3 GPTQ
 
 Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!
 
+ **Note**: The files in this repo were updated on July 20th to reflect the [V1.3 release of NousResearch's Redmond Puffin 13B](https://huggingface.co/NousResearch/Redmond-Puffin-13B).
+
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML)
 * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Redmond-Puffin-13B)
 
+ ## Prompt template: Human-Response
 
 ```
+ ### human: {prompt}
+
+ ### response:
+ ```
+ Optional recommended pre-prompt / system prompt:
+
+ ```
+ ### human: Interact in conversation to the best of your ability, please be concise, logical, intelligent and coherent.
+
+ ### response: Sure! sounds good.
+
+ ### human: {prompt}
+
+ ### response:
 ```
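For readers following along, here is a small illustrative sketch of assembling that Human-Response prompt in Python. It is not part of the README itself, and the helper name `build_puffin_prompt` is hypothetical; it simply mirrors the template and optional pre-prompt shown above.

```python
# Illustrative sketch only: build the Human-Response prompt described above.
def build_puffin_prompt(user_message: str, use_preprompt: bool = True) -> str:
    # Optional pre-prompt / system exchange, exactly as given in the template.
    preprompt = (
        "### human: Interact in conversation to the best of your ability, "
        "please be concise, logical, intelligent and coherent.\n\n"
        "### response: Sure! sounds good.\n\n"
    )
    body = f"### human: {user_message}\n\n### response:"
    return (preprompt + body) if use_preprompt else body

print(build_puffin_prompt("Tell me about AI"))
```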
 
 ## Provided files
 
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
+ | main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
 | gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
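As an aside, one way to fetch a single quantisation branch from the table above is with `huggingface_hub`; this is an editor's sketch, not the README's own download instructions (those follow in the next section), and the branch name is just one row from the table.

```python
from huggingface_hub import snapshot_download

# Download one specific GPTQ branch; "revision" is a branch name from the table.
local_dir = snapshot_download(
    repo_id="TheBloke/Redmond-Puffin-13B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="Redmond-Puffin-13B-GPTQ",
)
print("Downloaded to", local_dir)
```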
 
 ## How to download from branches
 
 from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
 model_name_or_path = "TheBloke/Redmond-Puffin-13B-GPTQ"
+ model_basename = "model"
 
 use_triton = False
 
 """
 
 prompt = "Tell me about AI"
+ prompt_template=f'''### human: Interact in conversation to the best of your ability, please be concise, logical, intelligent and coherent.
+
+ ### response: Sure! sounds good.
+
+ ### human: {prompt}
+
+ ### response:'''
 
 print("\n\n*** Generate:")
 
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+ <!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
 
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
+ **Special thanks to**: Aemon Algiz.
 
+ **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
 
 Thank you to all my generous patrons and donaters!
 
+ And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: NousResearch's Redmond Puffin 13B V1.3
 
 
 ## Model Training
 
+ Redmond-Puffin-13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
 
 Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
 
 ## Future Plans
 
+ This is a relatively early build amongst the grand plans for the future of Puffin!
 
 Current limitations: Some token mismatch problems have been identified; these may affect the current output quality. We plan to have this solved in Puffin V2 along with other improvements.
 
 ## How you can help!
 
+ In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
 
 If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact ldj on Discord!
 
config.json CHANGED
@@ -1,25 +1,36 @@
 {
- "_name_or_path": "output/puffin-v1.3-4k-sharegpt-llama-2-13b/checkpoint-550",
- "architectures": [
- "LlamaForCausalLM"
- ],
- "bos_token_id": 1,
- "eos_token_id": 2,
- "hidden_act": "silu",
- "hidden_size": 5120,
- "initializer_range": 0.02,
- "intermediate_size": 13824,
- "max_position_embeddings": 4096,
- "model_type": "llama",
- "num_attention_heads": 40,
- "num_hidden_layers": 40,
- "num_key_value_heads": 40,
- "pad_token_id": 0,
- "rms_norm_eps": 1e-05,
- "rope_scaling": null,
- "tie_word_embeddings": false,
- "torch_dtype": "bfloat16",
- "transformers_version": "4.32.0.dev0",
- "use_cache": true,
- "vocab_size": 32032
 }
 
 {
+ "_name_or_path": "output/puffin-v1.3-4k-sharegpt-llama-2-13b/checkpoint-550",
+ "architectures": [
+ "LlamaForCausalLM"
+ ],
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "hidden_act": "silu",
+ "hidden_size": 5120,
+ "initializer_range": 0.02,
+ "intermediate_size": 13824,
+ "max_position_embeddings": 4096,
+ "model_type": "llama",
+ "num_attention_heads": 40,
+ "num_hidden_layers": 40,
+ "num_key_value_heads": 40,
+ "pad_token_id": 0,
+ "rms_norm_eps": 1e-05,
+ "rope_scaling": null,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.32.0.dev0",
+ "use_cache": true,
+ "vocab_size": 32032,
+ "quantization_config": {
+ "bits": 8,
+ "group_size": 64,
+ "damp_percent": 0.01,
+ "desc_act": true,
+ "sym": true,
+ "true_sequential": true,
+ "model_name_or_path": null,
+ "model_file_base_name": "model",
+ "quant_method": "gptq"
+ }
 }
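Embedding `quantization_config` in config.json is what enables the Transformers-native GPTQ loading referenced in the commit title. As a hedged illustration (assuming a recent `transformers` release with `optimum` and `auto-gptq` installed; exact version requirements are not stated in this commit), the repo can then be loaded directly with `from_pretrained`, without a `model_basename` argument:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# With quantization_config present in config.json, Transformers can dispatch
# to its GPTQ integration directly when loading the checkpoint.
model_id = "TheBloke/Redmond-Puffin-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("### human: Tell me about AI\n\n### response:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```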
gptq_model-8bit-64g.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:7fe874b24143acebb60c568c138599cdbdad99a4fd5f54a15667bd131ca5dbe7
- size 13950923112
 
 version https://git-lfs.github.com/spec/v1
+ oid sha256:6590c20d374d946cc1eec8fc653f99cb8c208cfb0aaad9318af9677da87c51cf
+ size 13950923168
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
 "sym": true,
 "true_sequential": true,
 "model_name_or_path": null,
- "model_file_base_name": null
 }
 
 "sym": true,
 "true_sequential": true,
 "model_name_or_path": null,
+ "model_file_base_name": "model"
 }