TheBloke committed
Commit 14a2b99
Parent: 23b2e61

Update for Transformers GPTQ support
README.md CHANGED
@@ -27,17 +27,20 @@ widget:
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Llama2 13B Orca 8K 3319 - GPTQ
@@ -70,13 +73,13 @@ Each separate quant is in a different branch. See below for instructions on fet
 
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
-| [main](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/main) | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
-| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
-| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
-| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
-| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+| [main](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/main) | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
 | [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
 
 ## How to download from branches
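Each quant lives on its own branch of the repo, so downloads must target a specific revision; `main` holds only the 4-bit/128g files. The README's own download instructions are elided from this diff, so as a minimal sketch, one way to fetch a non-main branch with `huggingface_hub` — the branch name here is an arbitrary pick from the table above:

```python
# Minimal sketch: fetch one quant branch with huggingface_hub.
# The branch name is an arbitrary pick from the table above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # branch = one quant variant
)
print(local_dir)
```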
@@ -120,7 +123,7 @@ from transformers import AutoTokenizer, pipeline, logging
 from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
 model_name_or_path = "TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ"
-model_basename = "gptq_model-4bit-128g"
+model_basename = "model"
 
 use_triton = False
 
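This one-line change follows from the file rename recorded later in the commit: AutoGPTQ derives the weights filename from `model_basename`, so it must now point at `model` (resolving to `model.safetensors`). The surrounding README code is elided from the diff, so the following is a hedged sketch of how such a snippet plausibly continues, using only documented AutoGPTQ arguments rather than a verbatim excerpt:

```python
# Hedged sketch of how the elided README snippet plausibly continues;
# not a verbatim excerpt, but the arguments are documented AutoGPTQ API.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ"
model_basename = "model"  # now resolves to model.safetensors

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    device="cuda:0",
    use_triton=use_triton,
)
```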
@@ -182,6 +185,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -201,13 +205,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+**Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: OpenAssistant's Llama2 13B Orca 8K 3319
@@ -306,8 +312,8 @@ Dataset Composition:
 fanfics: 1000
 red_pajama: 1000
 ```
-
-The dataset [shahules786/orca-chat](https://huggingface.co/datasets/shahules786/orca-chat) combines similar examples of the GPT-4 subset of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) to form longer conversations
+
+The dataset [shahules786/orca-chat](https://huggingface.co/datasets/shahules786/orca-chat) combines similar examples of the GPT-4 subset of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) to form longer conversations
 to improve long-context training.
 
 Additionally, RedPajama and FanFics were used for classic language modelling as an auxiliary task to improve the RoPE scaling for the 8k context size.
 
config.json CHANGED
@@ -1,30 +1,41 @@
1
  {
2
- "_name_or_path": "/mnt/data/ikka/Open-Assistant/model/model_training/llama2_13b_orca_8k_2/",
3
- "architectures": [
4
- "LlamaForCausalLM"
5
- ],
6
- "bos_token_id": 1,
7
- "eos_token_id": 2,
8
- "hidden_act": "silu",
9
- "hidden_size": 5120,
10
- "initializer_range": 0.02,
11
- "intermediate_size": 13824,
12
- "max_length": 8192,
13
- "max_position_embeddings": 8192,
14
- "model_type": "llama",
15
- "num_attention_heads": 40,
16
- "num_hidden_layers": 40,
17
- "num_key_value_heads": 40,
18
- "pad_token_id": 0,
19
- "pretraining_tp": 1,
20
- "rms_norm_eps": 1e-05,
21
- "rope_scaling": {
22
- "factor": 2.0,
23
- "type": "linear"
24
- },
25
- "tie_word_embeddings": false,
26
- "torch_dtype": "float16",
27
- "transformers_version": "4.31.0.dev0",
28
- "use_cache": true,
29
- "vocab_size": 32016
 
 
 
 
 
 
 
 
 
 
 
30
  }
 
1
  {
2
+ "_name_or_path": "/mnt/data/ikka/Open-Assistant/model/model_training/llama2_13b_orca_8k_2/",
3
+ "architectures": [
4
+ "LlamaForCausalLM"
5
+ ],
6
+ "bos_token_id": 1,
7
+ "eos_token_id": 2,
8
+ "hidden_act": "silu",
9
+ "hidden_size": 5120,
10
+ "initializer_range": 0.02,
11
+ "intermediate_size": 13824,
12
+ "max_length": 8192,
13
+ "max_position_embeddings": 8192,
14
+ "model_type": "llama",
15
+ "num_attention_heads": 40,
16
+ "num_hidden_layers": 40,
17
+ "num_key_value_heads": 40,
18
+ "pad_token_id": 0,
19
+ "pretraining_tp": 1,
20
+ "rms_norm_eps": 1e-05,
21
+ "rope_scaling": {
22
+ "factor": 2.0,
23
+ "type": "linear"
24
+ },
25
+ "tie_word_embeddings": false,
26
+ "torch_dtype": "float16",
27
+ "transformers_version": "4.31.0.dev0",
28
+ "use_cache": true,
29
+ "vocab_size": 32016,
30
+ "quantization_config": {
31
+ "bits": 4,
32
+ "group_size": 128,
33
+ "damp_percent": 0.01,
34
+ "desc_act": false,
35
+ "sym": true,
36
+ "true_sequential": true,
37
+ "model_name_or_path": null,
38
+ "model_file_base_name": "model",
39
+ "quant_method": "gptq"
40
+ }
41
  }
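The new `quantization_config` block is the substance of this commit: with GPTQ metadata embedded in `config.json`, Transformers can detect the quantization and load the model directly, with no AutoGPTQ-specific loading code. A minimal sketch, assuming a transformers version with GPTQ support (4.32 or later) plus `optimum` and `auto-gptq` installed:

```python
# Sketch: with quantization_config embedded in config.json, plain
# Transformers loading works. Assumes transformers >= 4.32 with
# optimum and auto-gptq installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```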
gptq_model-4bit-128g.safetensors → model.safetensors RENAMED
File without changes
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
   "sym": true,
   "true_sequential": true,
   "model_name_or_path": null,
-  "model_file_base_name": null
+  "model_file_base_name": "model"
 }
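Filling in `model_file_base_name` lets loaders derive the weights filename from metadata instead of guessing, which is why it changes in lockstep with the rename to `model.safetensors`. The helper below is purely illustrative of that resolution step; `resolve_weights_file` is a hypothetical function, not an API of auto-gptq or transformers:

```python
# Illustrative only: resolve the GPTQ weights filename the way a loader
# can once model_file_base_name is set. resolve_weights_file is a
# hypothetical helper, not a library API.
import json
from pathlib import Path

def resolve_weights_file(repo_dir: str) -> Path:
    cfg = json.loads((Path(repo_dir) / "quantize_config.json").read_text())
    base = cfg["model_file_base_name"]  # "model" after this commit
    if base is None:
        raise ValueError("no basename recorded; filename must be given explicitly")
    return Path(repo_dir) / f"{base}.safetensors"

# e.g. resolve_weights_file("./OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ")
# -> .../model.safetensors, matching the rename recorded above
```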