Text Generation
Transformers
Safetensors
English
llama
text-generation-inference
4-bit precision
gptq
TheBloke committed
Commit 8ec18e5
1 Parent(s): 3ba8d11

Update for Transformers GPTQ support

README.md CHANGED
@@ -11,17 +11,20 @@ datasets:
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Pankaj Mathur's Orca Mini 13B GPTQ
@@ -152,6 +155,7 @@ It was created with group_size 128 to increase inference accuracy, but without -
 * Parameters: Groupsize = 128. Act Order / desc_act = False.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -171,12 +175,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
-
-**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: Pankaj Mathur's Orca Mini 13B
@@ -235,12 +242,12 @@ model = LlamaForCausalLM.from_pretrained(
 
 #generate text function
 def generate_text(system, instruction, input=None):
-
+
     if input:
         prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
     else:
         prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
-
+
     tokens = tokenizer.encode(prompt)
     tokens = torch.LongTensor(tokens).unsqueeze(0)
     tokens = tokens.to('cuda')
@@ -250,14 +257,14 @@ def generate_text(system, instruction, input=None):
     length = len(tokens[0])
     with torch.no_grad():
         rest = model.generate(
-            input_ids=tokens,
-            max_length=length+instance['generate_len'],
-            use_cache=True,
-            do_sample=True,
+            input_ids=tokens,
+            max_length=length+instance['generate_len'],
+            use_cache=True,
+            do_sample=True,
             top_p=instance['top_p'],
             temperature=instance['temperature'],
             top_k=instance['top_k']
-        )
+        )
     output = rest[0][length:]
     string = tokenizer.decode(output, skip_special_tokens=True)
     return f'[!] Response: {string}'
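The `generate_text` helper in the original model card builds an Orca-style prompt before tokenizing. The prompt-construction step can be sketched on its own, with no model or GPU required; `build_prompt` is a hypothetical name used here for illustration:

```python
def build_prompt(system, instruction, input=None):
    # Mirrors the f-strings in generate_text: the "### Input:" section is
    # included only when input text is supplied.
    if input:
        return f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    return f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("You are a helpful assistant.", "Summarise this commit.")
```

Because the prompt ends with `### Response:\n`, the model's continuation after that marker is taken as the answer, which is why `generate_text` slices the output at `rest[0][length:]`.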
config.json CHANGED
@@ -1,23 +1,33 @@
 {
-  "_name_or_path": "openlm-research/open_llama_13b",
-  "architectures": [
-    "LlamaForCausalLM"
-  ],
-  "bos_token_id": 1,
-  "eos_token_id": 2,
-  "hidden_act": "silu",
-  "hidden_size": 5120,
-  "initializer_range": 0.02,
-  "intermediate_size": 13824,
-  "max_position_embeddings": 2048,
-  "model_type": "llama",
-  "num_attention_heads": 40,
-  "num_hidden_layers": 40,
-  "pad_token_id": 0,
-  "rms_norm_eps": 1e-06,
-  "tie_word_embeddings": false,
-  "torch_dtype": "float32",
-  "transformers_version": "4.29.1",
-  "use_cache": true,
-  "vocab_size": 32000
+  "_name_or_path": "openlm-research/open_llama_13b",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 5120,
+  "initializer_range": 0.02,
+  "intermediate_size": 13824,
+  "max_position_embeddings": 2048,
+  "model_type": "llama",
+  "num_attention_heads": 40,
+  "num_hidden_layers": 40,
+  "pad_token_id": 0,
+  "rms_norm_eps": 1e-06,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float32",
+  "transformers_version": "4.29.1",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": 128,
+    "damp_percent": 0.01,
+    "desc_act": false,
+    "sym": true,
+    "true_sequential": true,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
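The point of embedding `quantization_config` in config.json is that a GPTQ-aware loader can detect the quantization scheme from the model config alone. A minimal sketch of that detection, using the fragment added in this commit (the actual Transformers internals differ):

```python
import json

# The quantization_config block this commit adds to config.json,
# alongside the existing model_type field.
config = json.loads("""
{
  "model_type": "llama",
  "quantization_config": {
    "bits": 4,
    "group_size": 128,
    "damp_percent": 0.01,
    "desc_act": false,
    "sym": true,
    "true_sequential": true,
    "model_file_base_name": "model",
    "quant_method": "gptq"
  }
}
""")

# A loader can branch on quant_method to pick a GPTQ backend.
quant = config.get("quantization_config", {})
is_gptq = quant.get("quant_method") == "gptq"
```

With a config like this, a sufficiently recent Transformers (with a GPTQ backend such as AutoGPTQ/optimum installed) can load the repo directly through `AutoModelForCausalLM.from_pretrained(...)` instead of requiring a separate GPTQ loading path.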
orca-mini-13b-GPTQ-4bit-128g.no-act.order.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:57ad502787648560c367ebf4add04bcf54eab4a6bd1b4499e5731b202fafbaf2
-size 8110988216
+oid sha256:8083286190bd0909eaa67183badb563a1e8394539d5f64809b00b7fb28bd31e0
+size 8110988272
quantize_config.json CHANGED
@@ -1,8 +1,9 @@
 {
-  "bits": 4,
-  "group_size": 128,
-  "damp_percent": 0.01,
-  "desc_act": false,
-  "sym": true,
-  "true_sequential": true
+  "bits": 4,
+  "group_size": 128,
+  "damp_percent": 0.01,
+  "desc_act": false,
+  "sym": true,
+  "true_sequential": true,
+  "model_file_base_name": "model"
 }
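The new `model_file_base_name` field ties quantize_config.json to the weights file renamed in this commit: a loader can resolve the weights as `<model_file_base_name>.safetensors`. A sketch of that filename resolution, assuming safetensors serialization (the resolution rule is an assumption for illustration, not the loader's exact code):

```python
# quantize_config.json after this commit, as a Python dict.
quantize_config = {
    "bits": 4,
    "group_size": 128,
    "damp_percent": 0.01,
    "desc_act": False,
    "sym": True,
    "true_sequential": True,
    "model_file_base_name": "model",
}

# Resolve the expected weights filename from the base name.
expected_file = quantize_config["model_file_base_name"] + ".safetensors"
```

This is consistent with the rename above from the long descriptive filename to plain `model.safetensors`: the generic base name lets tooling find the weights without hard-coding per-repo filenames.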