Commit d19a058 by TheBloke
Parent: c64be8d

Update for Transformers GPTQ support
README.md CHANGED
@@ -13,17 +13,20 @@ tags:
  ---
  
  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->
  
  # Llama2 70B Chat Uncensored - GPTQ
@@ -74,11 +77,11 @@ All GPTQ files are made with AutoGPTQ.
  
  | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
  | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
  | [main](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
  | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
  | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
  | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
  | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.78 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
  | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
  
  ## How to download from branches
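With the `quantization_config` this commit adds to config.json (see below), each branch in the table can be loaded straight from Transformers by passing its name as `revision`. A minimal sketch, assuming transformers >= 4.32.0 with the optimum and auto-gptq packages installed; the branch name is one example from the table:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/llama2_70b_chat_uncensored-GPTQ"
branch = "gptq-4bit-64g-actorder_True"  # revision selects the repo branch; "main" is the default

# device_map="auto" shards the ~38 GB of 4-bit weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=branch,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=branch)
```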
@@ -195,6 +198,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
  ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
  
  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord
  
  For further support, and discussions on these models and AI in general, join us at:
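For the AutoGPTQ path this hunk's context mentions, a minimal sketch of loading the repo directly with the auto_gptq package (assumed installed; `from_quantized` accepts a Hub repo id):

```python
from auto_gptq import AutoGPTQForCausalLM

# Load the GPTQ checkpoint straight from the Hub repo.
model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/llama2_70b_chat_uncensored-GPTQ",
    use_safetensors=True,  # the weights ship as model.safetensors (see the rename below)
    device_map="auto",
)
```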
@@ -214,13 +218,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
  
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+ **Special thanks to**: Aemon Algiz.
  
- **Patreon special mentions**: Willem Michiel, Ajan Kanaga, Cory Kujawski, Alps Aficionado, Nikolai Manek, Jonathan Leane, Stanislav Ovsiannikov, Michael Levine, Luke Pendergrass, Sid, K, Gabriel Tamborski, Clay Pascal, Kalila, William Sang, Will Dee, Pieter, Nathan LeClaire, ya boyyy, David Flickinger, vamX, Derek Yates, Fen Risland, Jeffrey Morgan, webtim, Daniel P. Andersen, Chadd, Edmond Seymore, Pyrater, Olusegun Samson, Lone Striker, biorpg, alfie_i, Mano Prime, Chris Smitley, Dave, zynix, Trenton Dambrowitz, Johann-Peter Hartmann, Magnesian, Spencer Kim, John Detwiler, Iucharbius, Gabriel Puliatti, LangChain4j, Luke @flexchar, Vadim, Rishabh Srivastava, Preetika Verma, Ai Maven, Femi Adebogun, WelcomeToTheClub, Leonard Tan, Imad Khwaja, Steven Wood, Stefan Sabev, Sebastain Graf, usrbinkat, Dan Guido, Sam, Eugene Pentland, Mandus, transmissions 11, Slarti, Karl Bernard, Spiking Neurons AB, Artur Olbinski, Joseph William Delisle, ReadyPlayerEmma, Olakabola, Asp the Wyvern, Space Cruiser, Matthew Berman, Randy H, subjectnull, danny, John Villwock, Illia Dulskyi, Rainer Wilmers, theTransient, Pierre Kircher, Alexandros Triantafyllidis, Viktor Bowallius, terasurfer, Deep Realms, SuperWojo, senxiiz, Oscar Rangel, Alex, Stephen Murray, Talal Aujan, Raven Klaugh, Sean Connelly, Raymond Fosdick, Fred von Graf, chris gileta, Junyu Yang, Elle
+ **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
  
  
  Thank you to all my generous patrons and donaters!
  
+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->
  
  # Original model card: Jarrad Hope's Llama2 70B Chat Uncensored
@@ -234,9 +240,9 @@ Special thanks to [George Sung](https://huggingface.co/georgesung) for creating
  
  The version here is the fp16 HuggingFace model.
  
  In 8 bit mode, the model fits into 84% of A100 80GB (67.2GB) 68747MiB
  In 4 bit mode, the model fits into 51% of A100 80GB (40.8GB) 41559MiB
  500gb of RAM/Swap was required to merge the model.
  
  ## GGML & GPTQ versions
  Thanks to [TheBloke](https://huggingface.co/TheBloke), he has created the GGML and GPTQ versions:
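The percentages in this hunk check out: 67.2 GB / 80 GB is 84% and 40.8 GB / 80 GB is 51%. For reference, a sketch of the kind of 8-bit load those figures describe, using bitsandbytes through Transformers; the fp16 repo id is an assumption based on the author credited above:

```python
from transformers import AutoModelForCausalLM

# 8-bit load (~67 GB on an A100 80GB per the card); pass load_in_4bit=True
# instead for the ~41 GB figure. Requires the bitsandbytes package.
model = AutoModelForCausalLM.from_pretrained(
    "georgesung/llama2_70b_chat_uncensored",  # assumed fp16 repo id
    load_in_8bit=True,
    device_map="auto",
)
```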
@@ -266,7 +272,7 @@ This model was created as a response to the overbearing & patronising responses
  
  ## Illustration
  
  This can be illustrated with the simple question, 'What is a poop?':
  
  ### LLama 2 70B Chat
  ```llama2-70b-chat
@@ -301,7 +307,7 @@ A straightforward, unassuming answer. The model has provided accurate and helpfu
  
  ## Morality
  
  The response in this illustration raises an interesting question, where does morality lie? Is it with us or with the model?
  
  If an AI is trained to be safe, why does it not only apply its morality to itself, why does it attempt to overzealously change the human's behaviour in the interaction?
config.json CHANGED
@@ -1,27 +1,38 @@
  {
  "_name_or_path": "TheBloke/Llama-2-70B-fp16",
  "architectures": [
  "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 8192,
  "initializer_range": 0.02,
  "intermediate_size": 28672,
  "max_length": 4096,
  "max_position_embeddings": 2048,
  "model_type": "llama",
  "num_attention_heads": 64,
  "num_hidden_layers": 80,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.31.0",
  "use_cache": true,
- "vocab_size": 32000
+ "vocab_size": 32000,
+ "quantization_config": {
+ "bits": 4,
+ "group_size": 64,
+ "damp_percent": 0.1,
+ "desc_act": true,
+ "sym": true,
+ "true_sequential": true,
+ "model_name_or_path": null,
+ "model_file_base_name": "model",
+ "quant_method": "gptq"
+ }
  }
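The new `quantization_config` block is the point of this commit: Transformers reads it at load time (from roughly version 4.32.0, an assumption on the exact release) and builds the equivalent of this config object, so no separate AutoGPTQ setup step is needed:

```python
from transformers import GPTQConfig

# Rough equivalent of the quantization_config block above; from_pretrained
# constructs this automatically when it finds the block in config.json.
gptq_config = GPTQConfig(
    bits=4,
    group_size=64,
    damp_percent=0.1,
    desc_act=True,  # act-order enabled, matching desc_act above
    sym=True,
    true_sequential=True,
)
```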
gptq_model-4bit-64g.safetensors → model.safetensors RENAMED
File without changes
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
- "model_file_base_name": null
+ "model_file_base_name": "model"
  }
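The `model_file_base_name` change works together with the rename above: loaders resolve the weights file as `<base name>.safetensors`, which is why the file is now `model.safetensors`. A small sanity-check sketch using the `huggingface_hub` download API:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch quantize_config.json and confirm which weights file it points at.
cfg_path = hf_hub_download(
    "TheBloke/llama2_70b_chat_uncensored-GPTQ", "quantize_config.json"
)
with open(cfg_path) as f:
    base_name = json.load(f)["model_file_base_name"]
print(f"expected weights file: {base_name}.safetensors")  # -> model.safetensors
```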