Tags: Text Generation · Transformers · Safetensors · gpt_bigcode · code · text-generation-inference · 4-bit precision
TheBloke committed
Commit 37b13fd
1 Parent(s): d5601e0

Update for Transformers GPTQ support

README.md CHANGED
@@ -40,23 +40,26 @@ extra_gated_prompt: >-
   Please read the BigCode [OpenRAIL-M
   license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
   agreement before accepting it.
-
+
 extra_gated_fields:
   I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # OpenAccess AI Collective's Minotaur 15B GPTQ
@@ -111,7 +114,7 @@ from transformers import AutoTokenizer, pipeline, logging
 from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
 model_name_or_path = "TheBloke/minotaur-15B-GPTQ"
-model_basename = "gptq_model-4bit-128g"
+model_basename = "model"
 
 use_triton = False
 
@@ -171,11 +174,12 @@ It was created with group_size 128 to increase inference accuracy, but without -
 * Parameters: Groupsize = 128. Act Order / desc_act = False.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
+[TheBloke AI's Discord server](https://discord.gg/theblokeai)
 
 ## Thanks, and how to contribute.
 
@@ -190,12 +194,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
-
-**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius, Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer, Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: OpenAccess AI Collective's Minotaur 15B
@@ -282,10 +289,10 @@ Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://hugging
 
 ## Model Summary
 
-StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RedefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
+StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RedefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
 combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
 It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
-[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
+[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
 
 - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
 - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
@@ -334,7 +341,7 @@ The training code dataset of the model was filtered for permissive licenses only
 # Limitations
 
 The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
-Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
+Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
 
 # Training
 StarCoderPlus is a fine-tuned version on 600B English and code tokens of StarCoderBase, which was pre-trained on 1T code tokens. Below are the fine-tuning details:
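The README hunk above switches the AutoGPTQ snippet from the old `gptq_model-4bit-128g` base name to `model`, matching the renamed weights file further down in this commit. A minimal loading sketch that reflects the updated snippet, assuming `auto-gptq` and `transformers` are installed on a CUDA machine; the prompt and generation settings are illustrative, not taken from this commit:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/minotaur-15B-GPTQ"
model_basename = "model"  # new base name; was "gptq_model-4bit-128g" before this commit

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# Load the 4-bit GPTQ checkpoint; model_basename must match model.safetensors
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,
)

prompt = "def fibonacci(n):"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The plain `model` base name matches the default file name the Transformers loader expects (`model.safetensors`), so the same weights file can serve both loaders.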
config.json CHANGED
@@ -1,39 +1,50 @@
 {
-  "_name_or_path": "/fsx/bigcode/experiments/pretraining/conversions/starcoderplus/large-model",
-  "activation_function": "gelu",
-  "architectures": [
-    "GPTBigCodeForCausalLM"
-  ],
-  "attention_softmax_in_fp32": true,
-  "multi_query": true,
-  "attn_pdrop": 0.1,
-  "bos_token_id": 0,
-  "embd_pdrop": 0.1,
-  "eos_token_id": 0,
-  "inference_runner": 0,
-  "initializer_range": 0.02,
-  "layer_norm_epsilon": 1e-05,
-  "max_batch_size": null,
-  "max_sequence_length": null,
-  "model_type": "gpt_bigcode",
-  "n_embd": 6144,
-  "n_head": 48,
-  "n_inner": 24576,
-  "n_layer": 40,
-  "n_positions": 8192,
-  "pad_key_length": true,
-  "pre_allocate_kv_cache": false,
-  "resid_pdrop": 0.1,
-  "scale_attention_softmax_in_fp32": true,
-  "scale_attn_weights": true,
-  "summary_activation": null,
-  "summary_first_dropout": 0.1,
-  "summary_proj_to_labels": true,
-  "summary_type": "cls_index",
-  "summary_use_proj": true,
-  "torch_dtype": "float32",
-  "transformers_version": "4.28.1",
-  "use_cache": true,
-  "validate_runner_input": true,
-  "vocab_size": 49152
+  "_name_or_path": "/fsx/bigcode/experiments/pretraining/conversions/starcoderplus/large-model",
+  "activation_function": "gelu",
+  "architectures": [
+    "GPTBigCodeForCausalLM"
+  ],
+  "attention_softmax_in_fp32": true,
+  "multi_query": true,
+  "attn_pdrop": 0.1,
+  "bos_token_id": 0,
+  "embd_pdrop": 0.1,
+  "eos_token_id": 0,
+  "inference_runner": 0,
+  "initializer_range": 0.02,
+  "layer_norm_epsilon": 1e-05,
+  "max_batch_size": null,
+  "max_sequence_length": null,
+  "model_type": "gpt_bigcode",
+  "n_embd": 6144,
+  "n_head": 48,
+  "n_inner": 24576,
+  "n_layer": 40,
+  "n_positions": 8192,
+  "pad_key_length": true,
+  "pre_allocate_kv_cache": false,
+  "resid_pdrop": 0.1,
+  "scale_attention_softmax_in_fp32": true,
+  "scale_attn_weights": true,
+  "summary_activation": null,
+  "summary_first_dropout": 0.1,
+  "summary_proj_to_labels": true,
+  "summary_type": "cls_index",
+  "summary_use_proj": true,
+  "torch_dtype": "float32",
+  "transformers_version": "4.28.1",
+  "use_cache": true,
+  "validate_runner_input": true,
+  "vocab_size": 49152,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": 128,
+    "damp_percent": 0.01,
+    "desc_act": false,
+    "sym": true,
+    "true_sequential": true,
+    "model_name_or_path": null,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
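The `quantization_config` block added to config.json is what allows this checkpoint to be loaded through Transformers' built-in GPTQ integration rather than only through AutoGPTQ's own loader. A minimal sketch under that assumption, requiring a Transformers release with GPTQ support plus the `optimum` and `auto-gptq` backends (and `accelerate` for `device_map="auto"`); the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/minotaur-15B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# config.json now carries quantization_config (quant_method: "gptq"), so
# from_pretrained picks up the 4-bit GPTQ weights without AutoGPTQ-specific code here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a short docstring for a function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```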
gptq_model-4bit-128g.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5127f9187fcc02538b776362cdb4d606f8a1f5e97d3a1137da60b355fbe55085
-size 9198404320
+oid sha256:5244916171978b2a27c3d20ab73e86bfc2bb0f149877adf20d7f9748463b2ec5
+size 9198404376
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
   "sym": true,
   "true_sequential": true,
   "model_name_or_path": null,
-  "model_file_base_name": null
+  "model_file_base_name": "model"
 }
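Recording `model_file_base_name` as `"model"` in quantize_config.json lets AutoGPTQ resolve the renamed `model.safetensors` on its own, so the explicit `model_basename` argument shown in the README snippet should become optional. A short sketch, assuming that resolution behaviour:

```python
from auto_gptq import AutoGPTQForCausalLM

# quantize_config.json now records model_file_base_name = "model", so AutoGPTQ
# can locate model.safetensors itself; model_basename no longer has to be passed.
model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/minotaur-15B-GPTQ",
    use_safetensors=True,
    device="cuda:0",
)
```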