TheBloke committed
Commit f2856d0
1 parent: db5a9f3

Update for Transformers GPTQ support
README.md CHANGED
@@ -32,17 +32,20 @@ tags:
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Stablecode Completion Alpha 3B 4K - GPTQ
@@ -93,11 +96,11 @@ All GPTQ files are made with AutoGPTQ.
 
 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.82 GB | No | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
-| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.96 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
-| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.86 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.82 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 3.08 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+| [main](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.82 GB | No | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.96 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.86 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.82 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 3.08 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
 | [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 3.14 GB | No | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
 
 ## How to download from branches
@@ -216,6 +219,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -235,13 +239,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+**Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**: Willem Michiel, Ajan Kanaga, Cory Kujawski, Alps Aficionado, Nikolai Manek, Jonathan Leane, Stanislav Ovsiannikov, Michael Levine, Luke Pendergrass, Sid, K, Gabriel Tamborski, Clay Pascal, Kalila, William Sang, Will Dee, Pieter, Nathan LeClaire, ya boyyy, David Flickinger, vamX, Derek Yates, Fen Risland, Jeffrey Morgan, webtim, Daniel P. Andersen, Chadd, Edmond Seymore, Pyrater, Olusegun Samson, Lone Striker, biorpg, alfie_i, Mano Prime, Chris Smitley, Dave, zynix, Trenton Dambrowitz, Johann-Peter Hartmann, Magnesian, Spencer Kim, John Detwiler, Iucharbius, Gabriel Puliatti, LangChain4j, Luke @flexchar, Vadim, Rishabh Srivastava, Preetika Verma, Ai Maven, Femi Adebogun, WelcomeToTheClub, Leonard Tan, Imad Khwaja, Steven Wood, Stefan Sabev, Sebastain Graf, usrbinkat, Dan Guido, Sam, Eugene Pentland, Mandus, transmissions 11, Slarti, Karl Bernard, Spiking Neurons AB, Artur Olbinski, Joseph William Delisle, ReadyPlayerEmma, Olakabola, Asp the Wyvern, Space Cruiser, Matthew Berman, Randy H, subjectnull, danny, John Villwock, Illia Dulskyi, Rainer Wilmers, theTransient, Pierre Kircher, Alexandros Triantafyllidis, Viktor Bowallius, terasurfer, Deep Realms, SuperWojo, senxiiz, Oscar Rangel, Alex, Stephen Murray, Talal Aujan, Raven Klaugh, Sean Connelly, Raymond Fosdick, Fred von Graf, chris gileta, Junyu Yang, Elle
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: StabilityAI's Stablecode Completion Alpha 3B 4K
@@ -250,7 +256,7 @@ Thank you to all my generous patrons and donaters!
 
 ## Model Description
 
-`StableCode-Completion-Alpha-3B-4K` is a 3 billion parameter decoder-only code completion model pre-trained on diverse set of programming languages that topped the stackoverflow developer survey.
+`StableCode-Completion-Alpha-3B-4K` is a 3 billion parameter decoder-only code completion model pre-trained on diverse set of programming languages that topped the stackoverflow developer survey.
 
 ## Usage
 The model is intended to do single/multiline code completion from a long context window upto 4k tokens.
@@ -301,7 +307,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 ### Training Dataset
 
-The first pre-training stage relies on 300B tokens sourced from various top programming languages occuring in the stackoverflow developer survey present in the `starcoder-data` dataset.
+The first pre-training stage relies on 300B tokens sourced from various top programming languages occuring in the stackoverflow developer survey present in the `starcoder-data` dataset.
 
 ### Training Procedure
 
@@ -322,9 +328,9 @@ This model is intended to be used responsibly. It is not intended to be used to
 ## How to cite
 
 ```bibtex
-@misc{StableCodeCompleteAlpha4K,
-url={[https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k](https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k)},
-title={Stable Code Complete Alpha},
+@misc{StableCodeCompleteAlpha4K,
+url={[https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k](https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k)},
+title={Stable Code Complete Alpha},
 author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
 }
 ```
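The branch names in the Provided Files table above follow a regular scheme, which is what the "How to download from branches" section relies on when passing a `revision` to Hugging Face tooling. A small hypothetical helper (not part of any library) sketching that scheme, assuming only the branches listed in the table exist:

```python
def gptq_branch(bits: int, group_size: int, act_order: bool) -> str:
    """Build the repo branch name for a given quant variant.

    Per the table above, 'main' holds the 4-bit / 128g / no-act-order quant;
    every other combination lives on a branch named
    gptq-{bits}bit-{gs}g-actorder_{True|False}. Illustrative helper only --
    it does not check that the branch actually exists in the repo.
    """
    if (bits, group_size, act_order) == (4, 128, False):
        return "main"
    return f"gptq-{bits}bit-{group_size}g-actorder_{act_order}"


# e.g. pass the result as revision= to from_pretrained or snapshot_download
print(gptq_branch(4, 32, True))   # gptq-4bit-32g-actorder_True
print(gptq_branch(4, 128, False))  # main
```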
config.json CHANGED
@@ -1,26 +1,37 @@
 {
-  "architectures": [
-    "GPTNeoXForCausalLM"
-  ],
-  "bos_token_id": 0,
-  "classifier_dropout": 0.1,
-  "eos_token_id": 0,
-  "hidden_act": "gelu",
-  "hidden_size": 2560,
-  "initializer_range": 0.02,
-  "intermediate_size": 10240,
-  "layer_norm_eps": 1e-05,
-  "max_position_embeddings": 4096,
-  "model_type": "gpt_neox",
-  "num_attention_heads": 32,
-  "num_hidden_layers": 32,
-  "rotary_emb_base": 10000,
-  "rotary_pct": 0.25,
-  "tie_word_embeddings": false,
-  "torch_dtype": "float16",
-  "transformers_version": "4.30.2",
-  "use_cache": true,
-  "use_parallel_residual": true,
-  "vocab_size": 49152,
-  "pretraining_tp": 1
+  "architectures": [
+    "GPTNeoXForCausalLM"
+  ],
+  "bos_token_id": 0,
+  "classifier_dropout": 0.1,
+  "eos_token_id": 0,
+  "hidden_act": "gelu",
+  "hidden_size": 2560,
+  "initializer_range": 0.02,
+  "intermediate_size": 10240,
+  "layer_norm_eps": 1e-05,
+  "max_position_embeddings": 4096,
+  "model_type": "gpt_neox",
+  "num_attention_heads": 32,
+  "num_hidden_layers": 32,
+  "rotary_emb_base": 10000,
+  "rotary_pct": 0.25,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.30.2",
+  "use_cache": true,
+  "use_parallel_residual": true,
+  "vocab_size": 49152,
+  "pretraining_tp": 1,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": 128,
+    "damp_percent": 0.1,
+    "desc_act": false,
+    "sym": true,
+    "true_sequential": true,
+    "model_name_or_path": null,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
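The `quantization_config` block added to config.json is the core of this commit: recent Transformers versions inspect it at load time and, when `quant_method` is `"gptq"` (with AutoGPTQ/optimum installed), route the checkpoint through their GPTQ loading path instead of treating it as a plain fp16 model. A minimal stdlib sketch of that detection step, assuming a config dict shaped like the one in the diff (the real dispatch inside Transformers is more involved):

```python
import json

# The relevant slice of config.json after this commit (copied from the diff).
CONFIG = json.loads("""
{
  "model_type": "gpt_neox",
  "quantization_config": {
    "bits": 4,
    "group_size": 128,
    "damp_percent": 0.1,
    "desc_act": false,
    "sym": true,
    "true_sequential": true,
    "model_name_or_path": null,
    "model_file_base_name": "model",
    "quant_method": "gptq"
  }
}
""")


def detect_quantization(config):
    """Return the declared quantization method, or None for a plain checkpoint.

    Simplified stand-in for how a Transformers-style loader decides whether
    to dispatch to a quantized loading path; not the actual library code.
    """
    qc = config.get("quantization_config")
    if qc is None:
        return None  # ordinary fp16/fp32 checkpoint: load weights directly
    return qc.get("quant_method")


print(detect_quantization(CONFIG))  # gptq
```

Before this commit the same metadata lived only in quantize_config.json, which AutoGPTQ reads but stock Transformers does not.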
gptq_model-4bit-128g.safetensors → model.safetensors RENAMED
File without changes
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
   "sym": true,
   "true_sequential": true,
   "model_name_or_path": null,
-  "model_file_base_name": null
+  "model_file_base_name": "model"
 }
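Setting `model_file_base_name` to `"model"` pairs with the rename of `gptq_model-4bit-128g.safetensors` to `model.safetensors`: AutoGPTQ-style loaders derive the weights filename from this base name, and `model.safetensors` is also the default name Transformers looks for. A simplified sketch of that filename resolution, assuming (hypothetically) that a missing base name falls back to the old default:

```python
def weights_filename(quantize_config, use_safetensors=True):
    """Resolve the quantized weights file from a quantize_config dict.

    Simplified illustration, not the actual AutoGPTQ implementation: the
    base name from quantize_config.json plus the serialization extension.
    The fallback below is the pre-commit filename, used here only to show
    why a null base name no longer matched what Transformers expects.
    """
    base = quantize_config.get("model_file_base_name") or "gptq_model-4bit-128g"
    ext = ".safetensors" if use_safetensors else ".bin"
    return base + ext


print(weights_filename({"model_file_base_name": "model"}))  # model.safetensors
```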