Commit 25fd406 by TheBloke
1 Parent(s): 120c336

Update for Transformers GPTQ support
README.md CHANGED
@@ -5,22 +5,25 @@ model_type: llama
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Meta's LLaMA 65B GPTQ
 
-These files are GPTQ model files for [Meta's LLaMA 65B](https://huggingface.co/https://ai.meta.com/blog/large-language-model-llama-meta-ai).
+These files are GPTQ model files for [Meta's LLaMA 65B](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
 
 Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
 
@@ -96,7 +99,7 @@ from transformers import AutoTokenizer, pipeline, logging
 from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
 model_name_or_path = "TheBloke/LLaMA-65B-GPTQ"
-model_basename = "gptq_model-4bit--1g"
+model_basename = "model"
 
 use_triton = False
 
@@ -158,6 +161,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -177,12 +181,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
-**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: Meta's LLaMA 65B
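
Note on the `model_basename` hunk above: the value now has to match the renamed weights file (`model.safetensors`, see the file rename below). The full README example is not shown in this diff, so the following is only a minimal sketch of how that variable is typically passed to AutoGPTQ; everything other than `model_name_or_path` and `model_basename` is an assumption based on the usual loading pattern.

```python
# Minimal sketch, assuming auto-gptq is installed; only model_name_or_path and
# model_basename come from the diff above, the rest is illustrative.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/LLaMA-65B-GPTQ"
model_basename = "model"  # must match model.safetensors after this commit

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# model_basename tells AutoGPTQ which weights file in the repo to load
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,
)
```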
config.json CHANGED
@@ -1,22 +1,33 @@
 {
-  "architectures": [
-    "LlamaForCausalLM"
-  ],
-  "bos_token_id": 1,
-  "eos_token_id": 2,
-  "hidden_act": "silu",
-  "hidden_size": 8192,
-  "initializer_range": 0.02,
-  "intermediate_size": 22016,
-  "max_sequence_length": 2048,
-  "model_type": "llama",
-  "num_attention_heads": 64,
-  "num_hidden_layers": 80,
-  "pad_token_id": 0,
-  "rms_norm_eps": 1e-05,
-  "tie_word_embeddings": false,
-  "torch_dtype": "float16",
-  "transformers_version": "4.28.0.dev0",
-  "use_cache": true,
-  "vocab_size": 32000
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 8192,
+  "initializer_range": 0.02,
+  "intermediate_size": 22016,
+  "max_sequence_length": 2048,
+  "model_type": "llama",
+  "num_attention_heads": 64,
+  "num_hidden_layers": 80,
+  "pad_token_id": 0,
+  "rms_norm_eps": 1e-05,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.28.0.dev0",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 3,
+    "group_size": 64,
+    "damp_percent": 0.01,
+    "desc_act": true,
+    "sym": true,
+    "true_sequential": true,
+    "model_name_or_path": null,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
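
The new `quantization_config` block is what the commit title refers to: with these settings embedded in `config.json`, recent versions of Transformers can detect the GPTQ quantization and load the repo directly, without going through AutoGPTQ's own loader. A minimal sketch of that load path, assuming `transformers>=4.32.0` with `optimum` and `auto-gptq` installed (the prompt and generation settings are illustrative):

```python
# Minimal sketch, assuming transformers>=4.32.0 plus optimum and auto-gptq installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/LLaMA-65B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# quantization_config in config.json lets from_pretrained pick up the GPTQ settings
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me about llamas.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```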
gptq_model-3bit-64g.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:eff62b601acef634b17edfe574b91805f62df0ec83841a5214608f3eaa39e65b
-size 27776178320
+oid sha256:8ade984efbca3f1c0ff9785f4c4cf0c5ea5807a139ec4ccf1de53fb79453867a
+size 27776178376
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
   "sym": true,
   "true_sequential": true,
   "model_name_or_path": null,
-  "model_file_base_name": null
+  "model_file_base_name": "model"
 }
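
The `model_file_base_name` value mirrors the renamed `model.safetensors` and the settings now duplicated in `config.json`. As an illustration only (all values are copied from the hunks above), this is roughly how the same configuration would look expressed as an AutoGPTQ `BaseQuantizeConfig`:

```python
# Illustrative only: mirrors the quantize_config.json / config.json values above.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=3,
    group_size=64,
    damp_percent=0.01,
    desc_act=True,
    sym=True,
    true_sequential=True,
    model_file_base_name="model",  # the value changed by this commit
)
```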