TheBloke committed
Commit 2a7cb4a
1 Parent(s): 36b6471

Update for Transformers GPTQ support

README.md CHANGED
```diff
@@ -73,24 +73,27 @@ extra_gated_prompt: >-
   Please read the BigCode [OpenRAIL-M
   license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
   agreement before accepting it.
-
+
 extra_gated_fields:
 I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
 
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Bigcode's StarcoderPlus GPTQ
@@ -179,11 +182,12 @@ It was created without group_size to lower VRAM requirements, and with --act-ord
 * Parameters: Groupsize = -1. Act Order / desc_act = True.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
+[TheBloke AI's Discord server](https://discord.gg/theblokeai)
 
 ## Thanks, and how to contribute.
 
@@ -198,12 +202,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+**Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: Bigcode's StarcoderPlus
@@ -223,10 +230,10 @@ Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://hugging
 
 ## Model Summary
 
-StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
+StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
 combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
 It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
-[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
+[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
 
 - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
 - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
@@ -275,7 +282,7 @@ The training code dataset of the model was filtered for permissive licenses only
 # Limitations
 
 The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
-Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
+Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
 
 # Training
 StarCoderPlus is a fine-tuned version on 600B English and code tokens of StarCoderBase, which was pre-trained on 1T code tokens. Below are the fine-tuning details:
@@ -298,4 +305,4 @@ StarCoderPlus is a fine-tuned version on 600B English and code tokens of StarCod
 - **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)
 
 # License
-The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
+The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
```
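The model summary in the README mentions training with the Fill-in-the-Middle objective. For reference, here is a minimal sketch of how a FIM prompt is typically assembled for StarCoder-family models; it assumes the standard `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` special tokens and a repo ID, neither of which is stated in this commit:

```python
# Illustrative sketch only: Fill-in-the-Middle (FIM) prompting for
# StarCoder-family models. Assumptions not stated in this commit: the
# standard StarCoder FIM tokens <fim_prefix>/<fim_suffix>/<fim_middle>,
# and the repo ID below.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/starcoderplus-GPTQ"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prefix = "def fibonacci(n):\n"
suffix = "\n    return result\n"
# The model sees the prefix and suffix, then generates the missing middle.
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```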
config.json CHANGED
```diff
@@ -35,5 +35,16 @@
   "transformers_version": "4.30.0.dev0",
   "use_cache": true,
   "validate_runner_input": true,
-  "vocab_size": 49152
-}
+  "vocab_size": 49152,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": -1,
+    "damp_percent": 0.01,
+    "desc_act": true,
+    "sym": true,
+    "true_sequential": true,
+    "model_name_or_path": null,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
+}
```
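The `quantization_config` block added above is what lets recent Transformers releases load these GPTQ weights directly, without going through AutoGPTQ's own loader. A minimal loading sketch, assuming `transformers>=4.32.0` with `optimum`, `auto-gptq`, and `accelerate` installed, and an assumed repo ID:

```python
# Minimal sketch: loading these GPTQ weights directly with Transformers,
# which reads the quantization_config added to config.json above.
# Assumptions: pip install "transformers>=4.32.0" optimum auto-gptq accelerate,
# and the repo ID below (not stated in this commit).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/starcoderplus-GPTQ"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# bits=4, group_size=-1, desc_act=True are picked up from config.json;
# no separate AutoGPTQ loading path is needed.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("def hello_world():", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```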
gptq_model-4bit--1g.safetensors → model.safetensors RENAMED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5d7020dc8e70f8efe417ffeb2f78e4b1089a234543a9254248e3ff85607a5a4c
-size 8906589520
+oid sha256:2d1febf57c7c06f9865f629bf38371c2b6097db6b5f9740eac93f8055fc13421
+size 8906589576
```
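The entries above are Git LFS pointers, not the weights themselves: `oid sha256:` and `size` describe the actual payload. A small sketch for checking a downloaded file against the new pointer (the local path is hypothetical):

```python
# Sketch: verify a downloaded LFS payload against the pointer's oid/size.
# The local path is hypothetical; the expected values are taken from the
# new pointer in this commit.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "2d1febf57c7c06f9865f629bf38371c2b6097db6b5f9740eac93f8055fc13421"
EXPECTED_SIZE = 8906589576

path = Path("model.safetensors")  # hypothetical local download location
assert path.stat().st_size == EXPECTED_SIZE, "size mismatch"

h = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```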
quantize_config.json CHANGED
```diff
@@ -6,5 +6,5 @@
   "sym": true,
   "true_sequential": true,
   "model_name_or_path": null,
-  "model_file_base_name": null
+  "model_file_base_name": "model"
 }
```
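Setting `model_file_base_name` to `"model"` matches the file rename above: AutoGPTQ composes the weights filename from this base name, so the file is found as `model.safetensors`. A sketch of the equivalent explicit AutoGPTQ call (assuming the `auto-gptq` package and the same assumed repo ID):

```python
# Sketch: loading via AutoGPTQ, where model_basename maps to the "model"
# base name recorded in quantize_config.json and the renamed file.
# Assumes: pip install auto-gptq  (API as of mid-2023); repo ID is assumed.
from auto_gptq import AutoGPTQForCausalLM

repo_id = "TheBloke/starcoderplus-GPTQ"  # assumed repo ID

model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="model",  # resolves to model.safetensors
    use_safetensors=True,
    device="cuda:0",
)
```

With the base name now recorded in quantize_config.json (and mirrored in config.json), passing `model_basename` explicitly becomes unnecessary.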