TheBloke committed
Commit f488bfb
1 Parent(s): 5e9a550

Update for Transformers GPTQ support

README.md CHANGED
@@ -9,17 +9,20 @@ tags:
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Elinas' Chronos 33B merged with Kaio Ken's SuperHOT 8K GPTQ
@@ -154,6 +157,7 @@ It was created without group_size to lower VRAM requirements, and with --act-ord
 * Parameters: Groupsize = -1. Act Order / desc_act = True.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -173,12 +177,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+**Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
+
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: Kaio Ken's SuperHOT 8K
@@ -196,9 +203,9 @@ You will need to **use either the monkeypatch** or, if you are already using the
 
 
 #### Training Details
-I trained the LoRA with the following configuration:
+I trained the LoRA with the following configuration:
 - 1200 samples (~400 samples over 2048 sequence length)
-- learning rate of 3e-4
+- learning rate of 3e-4
 - 3 epochs
 - The exported modules are:
 - q_proj
@@ -235,7 +242,7 @@ Your instruction or question here.
 [4bit GPTQ Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-33b-GPTQ)
 
 <!--**Support My Development of New Models**
-<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
+<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
 src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
 
 --
@@ -318,11 +325,11 @@ Hyperparameters for the model architecture
 </tr>
 <tr>
 <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
-</tr>
+</tr>
 </thead>
-<tbody>
+<tbody>
 <tr>
-<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
+<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
 </tr>
 <tr>
 <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
@@ -332,13 +339,13 @@ Hyperparameters for the model architecture
 </tr>
 <tr>
 <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T
-</tr>
+</tr>
 </tbody>
 </table>
 
 *Table 1 - Summary of LLama Model Hyperparameters*
 
-We present our results on eight standard common sense reasoning benchmarks in the table below.
+We present our results on eight standard common sense reasoning benchmarks in the table below.
 <table>
 <thead>
 <tr>
@@ -346,23 +353,23 @@ We present our results on eight standard common sense reasoning benchmarks in th
 </tr>
 <tr>
 <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
-</tr>
+</tr>
 </thead>
-<tbody>
-<tr>
+<tbody>
+<tr>
 <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
-</th>
+</th>
 <tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
 </th>
 <tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
 </th>
-<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
+<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
 </tbody>
 </table>
 *Table 2 - Summary of LLama Model Performance on Reasoning tasks*
 
 
-We present our results on bias in the table below. Note that lower value is better indicating lower bias.
+We present our results on bias in the table below. Note that lower value is better indicating lower bias.
 
 
 | No | Category | FAIR LLM |
config.json CHANGED
@@ -1,28 +1,38 @@
 {
-  "_name_or_path": "/workspace/process/lora_base/elinas_chronos-33b",
-  "architectures": [
-    "LlamaForCausalLM"
-  ],
-  "auto_map": {
+  "_name_or_path": "/workspace/process/lora_base/elinas_chronos-33b",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "auto_map": {
     "AutoModel": "modelling_llama.LlamaModel",
     "AutoModelForCausalLM": "modelling_llama.LlamaForCausalLM",
     "AutoModelForSequenceClassification": "modelling_llama.LlamaForSequenceClassification"
-  },
-  "bos_token_id": 1,
-  "eos_token_id": 2,
-  "hidden_act": "silu",
-  "hidden_size": 6656,
-  "initializer_range": 0.02,
-  "intermediate_size": 17920,
-  "max_position_embeddings": 8192,
-  "model_type": "llama",
-  "num_attention_heads": 52,
-  "num_hidden_layers": 60,
-  "pad_token_id": 0,
-  "rms_norm_eps": 1e-06,
-  "tie_word_embeddings": false,
-  "torch_dtype": "float16",
-  "transformers_version": "4.30.0.dev0",
-  "use_cache": true,
-  "vocab_size": 32000
-}
+  },
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 6656,
+  "initializer_range": 0.02,
+  "intermediate_size": 17920,
+  "max_position_embeddings": 8192,
+  "model_type": "llama",
+  "num_attention_heads": 52,
+  "num_hidden_layers": 60,
+  "pad_token_id": 0,
+  "rms_norm_eps": 1e-06,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.30.0.dev0",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": -1,
+    "damp_percent": 0.01,
+    "desc_act": true,
+    "sym": true,
+    "true_sequential": true,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
+}
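The `quantization_config` block added above is what lets recent Transformers releases load this GPTQ checkpoint directly. A minimal sketch of what that looks like, assuming transformers 4.32+ with the optimum and auto-gptq packages installed; the repo id below is a placeholder, not something stated in this commit:

```python
# Illustrative sketch only, not part of the commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/chronos-33b-superhot-8k-GPTQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# Because config.json now embeds quantization_config (bits=4, group_size=-1,
# desc_act=True, quant_method="gptq"), Transformers dispatches to its GPTQ
# loading path automatically; no separate quantize_config handling is needed.
# trust_remote_code is a hedge: this config still declares custom classes via auto_map.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "### Instruction:\nWrite a short haiku about autumn.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```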
chronos-33b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7a1fb04e7b180d95c2c768d566ce72196014a50a6ff1f73731eab3e70938f36a
-size 16940128408
+oid sha256:108ad378d2b63f9107021c3a94158085c3b00733bd2c041936c6a0b2c35f5950
+size 16940128464
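For anyone fetching the renamed weights manually, the Git LFS pointer above records the expected SHA-256 and byte size. A small optional check (not part of the commit), assuming the file has been downloaded locally as `model.safetensors`:

```python
# Verify a downloaded model.safetensors against the LFS pointer shown above.
import hashlib
import os

path = "model.safetensors"  # local path to the downloaded file (assumption)
expected_sha256 = "108ad378d2b63f9107021c3a94158085c3b00733bd2c041936c6a0b2c35f5950"
expected_size = 16940128464

digest = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks to avoid loading ~17 GB into memory at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_sha256, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```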
quantize_config.json CHANGED
@@ -1,8 +1,9 @@
 {
-  "bits": 4,
-  "group_size": -1,
-  "damp_percent": 0.01,
-  "desc_act": true,
-  "sym": true,
-  "true_sequential": true
+  "bits": 4,
+  "group_size": -1,
+  "damp_percent": 0.01,
+  "desc_act": true,
+  "sym": true,
+  "true_sequential": true,
+  "model_file_base_name": "model"
 }
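With `model_file_base_name` recorded here and the weights renamed to `model.safetensors`, loaders that read `quantize_config.json` can locate the weights file without an explicit basename argument. A hedged sketch using AutoGPTQ, assuming the auto-gptq package is installed; the repo id is again a placeholder:

```python
# Sketch, not from the commit: loading via AutoGPTQ rather than plain Transformers.
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/chronos-33b-superhot-8k-GPTQ",  # placeholder repo id
    use_safetensors=True,
    device="cuda:0",
    # No model_basename needed: quantize_config.json now supplies
    # model_file_base_name = "model", matching model.safetensors.
)
```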