TheBloke committed
Commit 91b80b2
1 Parent(s): edba84b

Update for Transformers GPTQ support

README.md CHANGED
@@ -12,17 +12,20 @@ tags:
  ---
  
  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->
  
  # NousResearch's Redmond Hermes Coder GPTQ
@@ -79,7 +82,7 @@ from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
  import argparse
  
  model_name_or_path = "TheBloke/Redmond-Hermes-Coder-GPTQ"
- model_basename = "gptq_model-4bit-128g"
+ model_basename = "model"
  
  use_triton = False
  
@@ -145,6 +148,7 @@ It was created with group_size 128 to increase inference accuracy, but without -
  * Parameters: Groupsize = 128. Act Order / desc_act = False.
  
  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord
  
  For further support, and discussions on these models and AI in general, join us at:
@@ -164,12 +168,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
  
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+ **Special thanks to**: Aemon Algiz.
+
+ **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
  
- **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
  
  Thank you to all my generous patrons and donaters!
  
+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->
  
  # Original model card: NousResearch's Redmond Hermes Coder
@@ -181,7 +188,7 @@ Thank you to all my generous patrons and donaters!
  
  Redmond-Hermes-Coder 15B is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
  
- This model was trained with a WizardCoder base, which itself uses a StarCoder base model.
+ This model was trained with a WizardCoder base, which itself uses a StarCoder base model.
  
  The model is truly great at code, but, it does come with a tradeoff though. While far better at code than the original Nous-Hermes built on Llama, it is worse than WizardCoder at pure code benchmarks, like HumanEval.
  
@@ -191,16 +198,16 @@ However, it does seem better at non-code than WizardCoder on a variety of things
  
  ## Model Training
  
- The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.
+ The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.
  
  Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' (v1) GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.
  
  ## Collaborators
- The model fine-tuning and the datasets were a collaboration of efforts and resources from members of Nous Research, includingTeknium, Karan4D, Huemin Art, and Redmond AI's generous compute grants.
-
- Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
+ The model fine-tuning and the datasets were a collaboration of efforts and resources from members of Nous Research, includingTeknium, Karan4D, Huemin Art, and Redmond AI's generous compute grants.
+
+ Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
  
- Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
+ Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
  The GPT4-LLM and Unnatural Instructions were provided by Microsoft, Airoboros dataset by jondurbin, Camel-AI datasets are from Camel-AI, and CodeAlpaca dataset by Sahil 2801.
  If anyone was left out, please open a thread in the community tab.
  
@@ -213,7 +220,7 @@ The model follows the Alpaca prompt format:
  ### Response:
  ```
  
- or
+ or
  
  ```
  ### Instruction:
@@ -221,11 +228,11 @@ or
  ### Input:
  
  ### Response:
- ```
+ ```
  
  ## Resources for Applied Use Cases:
- For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
- For an example of a roleplaying discord bot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
+ For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
+ For an example of a roleplaying discord bot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
  
  ## Future Plans
  The model is currently being uploaded in FP16 format, and there are plans to convert the model to GGML and GPTQ 4bit quantizations. The team is also working on a full benchmark, similar to what was done for GPT4-x-Vicuna. We will try to get in discussions to get the model included in the GPT4All.
@@ -270,5 +277,5 @@ HumanEval: 39%
  
  ## Model Usage
  The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
-
+
  Compute provided by our project sponsor Redmond AI, thank you!!
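
The two README changes above track the switch to Transformers-native GPTQ loading: the quantized weights now use the base name "model", and, per the commit message, the repo can be loaded directly through Transformers rather than only through AutoGPTQ. A minimal sketch of that path, assuming transformers 4.32 or later with optimum and auto-gptq installed; the prompt and generation settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "TheBloke/Redmond-Hermes-Coder-GPTQ"

# The quantization_config embedded in config.json marks this as a GPTQ checkpoint,
# so no extra quantization arguments are needed here.
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")

# Alpaca-style prompt, per the prompt format section of the README.
prompt = "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```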
config.json CHANGED
@@ -1,39 +1,50 @@
  {
- "_name_or_path": "./hermeswizardcoder-step3800/",
- "activation_function": "gelu",
- "architectures": [
- "GPTBigCodeForCausalLM"
- ],
- "attention_softmax_in_fp32": true,
- "attn_pdrop": 0.1,
- "bos_token_id": 0,
- "embd_pdrop": 0.1,
- "eos_token_id": 0,
- "inference_runner": 0,
- "initializer_range": 0.02,
- "layer_norm_epsilon": 1e-05,
- "max_batch_size": null,
- "max_sequence_length": null,
- "model_type": "gpt_bigcode",
- "multi_query": true,
- "n_embd": 6144,
- "n_head": 48,
- "n_inner": 24576,
- "n_layer": 40,
- "n_positions": 8192,
- "pad_key_length": true,
- "pre_allocate_kv_cache": false,
- "resid_pdrop": 0.1,
- "scale_attention_softmax_in_fp32": true,
- "scale_attn_weights": true,
- "summary_activation": null,
- "summary_first_dropout": 0.1,
- "summary_proj_to_labels": true,
- "summary_type": "cls_index",
- "summary_use_proj": true,
- "torch_dtype": "float16",
- "transformers_version": "4.29.2",
- "use_cache": true,
- "validate_runner_input": true,
- "vocab_size": 49153
+ "_name_or_path": "./hermeswizardcoder-step3800/",
+ "activation_function": "gelu",
+ "architectures": [
+ "GPTBigCodeForCausalLM"
+ ],
+ "attention_softmax_in_fp32": true,
+ "attn_pdrop": 0.1,
+ "bos_token_id": 0,
+ "embd_pdrop": 0.1,
+ "eos_token_id": 0,
+ "inference_runner": 0,
+ "initializer_range": 0.02,
+ "layer_norm_epsilon": 1e-05,
+ "max_batch_size": null,
+ "max_sequence_length": null,
+ "model_type": "gpt_bigcode",
+ "multi_query": true,
+ "n_embd": 6144,
+ "n_head": 48,
+ "n_inner": 24576,
+ "n_layer": 40,
+ "n_positions": 8192,
+ "pad_key_length": true,
+ "pre_allocate_kv_cache": false,
+ "resid_pdrop": 0.1,
+ "scale_attention_softmax_in_fp32": true,
+ "scale_attn_weights": true,
+ "summary_activation": null,
+ "summary_first_dropout": 0.1,
+ "summary_proj_to_labels": true,
+ "summary_type": "cls_index",
+ "summary_use_proj": true,
+ "torch_dtype": "float16",
+ "transformers_version": "4.29.2",
+ "use_cache": true,
+ "validate_runner_input": true,
+ "vocab_size": 49153,
+ "quantization_config": {
+ "bits": 4,
+ "group_size": 128,
+ "damp_percent": 0.01,
+ "desc_act": false,
+ "sym": true,
+ "true_sequential": true,
+ "model_name_or_path": null,
+ "model_file_base_name": "model",
+ "quant_method": "gptq"
+ }
  }
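
The substantive change here is the quantization_config block appended to config.json; it is what lets Transformers and Optimum detect the GPTQ settings (4-bit, group size 128, desc_act false) when loading the checkpoint. A quick sketch for inspecting those settings from the Hub, assuming huggingface_hub is installed:

```python
import json

from huggingface_hub import hf_hub_download

# Download just config.json and inspect the GPTQ settings added by this commit.
config_path = hf_hub_download("TheBloke/Redmond-Hermes-Coder-GPTQ", "config.json")
with open(config_path) as f:
    config = json.load(f)

quant = config["quantization_config"]
print(quant["quant_method"], quant["bits"], quant["group_size"], quant["desc_act"])
# Expected, per the diff above: gptq 4 128 False
```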
gptq_model-4bit-128g.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:35b90ab3ed5904fd0a0e8b1ef9020e4a8c8591d1acfed1cca74fc356f5dc2014
- size 9198428896
+ oid sha256:ed15545d44aba2bbc238f0fae3b3f7ddf7b8b04c57a9793e61b1e02afb49bb87
+ size 9198428952
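
The rename from gptq_model-4bit-128g.safetensors to model.safetensors also swaps in a slightly different blob (new oid and size), so the updated LFS pointer above is the reference for checking a local download. A small verification sketch; the local path is hypothetical:

```python
import hashlib
import os

# Expected values taken from the new LFS pointer above.
EXPECTED_SHA256 = "ed15545d44aba2bbc238f0fae3b3f7ddf7b8b04c57a9793e61b1e02afb49bb87"
EXPECTED_SIZE = 9198428952

# Hypothetical local path to the downloaded weights file.
path = "model.safetensors"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

print("size ok:  ", os.path.getsize(path) == EXPECTED_SIZE)
print("sha256 ok:", sha.hexdigest() == EXPECTED_SHA256)
```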
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
- "model_file_base_name": null
+ "model_file_base_name": "model"
  }
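
Setting model_file_base_name to "model" mirrors the weights rename, so loaders that read quantize_config.json can locate model.safetensors without being told the base name. A sketch of the AutoGPTQ path without an explicit model_basename argument, assuming a recent auto-gptq release; older versions may still need the argument passed as in the README example:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Redmond-Hermes-Coder-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# No model_basename passed: AutoGPTQ falls back to model_file_base_name ("model")
# from quantize_config.json and therefore loads model.safetensors.
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,
)
```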