TheBloke committed
Commit 06ae847
1 Parent(s): dfc085f

Update for Transformers GPTQ support
README.md CHANGED

```diff
@@ -4,17 +4,20 @@ license: other
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Kevin Pro's Vicuna 13B CoT GPTQ
@@ -116,11 +119,12 @@ It was created with group_size 128 to increase inference accuracy, but without -
 * Parameters: Groupsize = 128. Act Order / desc_act = False.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
+[TheBloke AI's Discord server](https://discord.gg/theblokeai)
 
 ## Thanks, and how to contribute.
 
@@ -135,12 +139,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
-**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: Kevin Pro's Vicuna 13B CoT
@@ -148,7 +155,7 @@ Thank you to all my generous patrons and donaters!
 # Model Card for Model ID
 SFT to enhance the CoT capabiliy of Vicuna
 
-If you find the model helpful, please click "like" to support us.
+If you find the model helpful, please click "like" to support us.
 We also welcome feedback on your usage experience and any issues you encounter in the issues section.
 
 Another 7B version: https://huggingface.co/kevinpro/Vicuna-7B-CoT
@@ -225,7 +232,7 @@ Use the code below to get started with the model.
 
 [More Information Needed]
 
-### Training Procedure
+### Training Procedure
 
 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
```
config.json CHANGED

```diff
@@ -1,24 +1,34 @@
 {
-  "_name_or_path": "/mnt/data1/sheshuaijie/Code/Alpaca-CoT/mnt/data1/sheshuaijie/Output/CoT/Trained/vicuna-13b_english-cot+auto-cot_0.0002/merged",
-  "architectures": [
-    "LlamaForCausalLM"
-  ],
-  "bos_token_id": 0,
-  "eos_token_id": 1,
-  "hidden_act": "silu",
-  "hidden_size": 5120,
-  "initializer_range": 0.02,
-  "intermediate_size": 13824,
-  "max_position_embeddings": 2048,
-  "max_sequence_length": 2048,
-  "model_type": "llama",
-  "num_attention_heads": 40,
-  "num_hidden_layers": 40,
-  "pad_token_id": -1,
-  "rms_norm_eps": 1e-06,
-  "tie_word_embeddings": false,
-  "torch_dtype": "float32",
-  "transformers_version": "4.29.2",
-  "use_cache": true,
-  "vocab_size": 32000
+  "_name_or_path": "/mnt/data1/sheshuaijie/Code/Alpaca-CoT/mnt/data1/sheshuaijie/Output/CoT/Trained/vicuna-13b_english-cot+auto-cot_0.0002/merged",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "bos_token_id": 0,
+  "eos_token_id": 1,
+  "hidden_act": "silu",
+  "hidden_size": 5120,
+  "initializer_range": 0.02,
+  "intermediate_size": 13824,
+  "max_position_embeddings": 2048,
+  "max_sequence_length": 2048,
+  "model_type": "llama",
+  "num_attention_heads": 40,
+  "num_hidden_layers": 40,
+  "pad_token_id": -1,
+  "rms_norm_eps": 1e-06,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float32",
+  "transformers_version": "4.29.2",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": 128,
+    "damp_percent": 0.01,
+    "desc_act": false,
+    "sym": true,
+    "true_sequential": true,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
```
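The new `quantization_config` block is the point of this commit: with it embedded in `config.json`, recent Transformers releases (4.32+, with `optimum` and `auto-gptq` installed) can recognize the checkpoint as GPTQ and load it through a plain `AutoModelForCausalLM.from_pretrained(...)` call, with no external GPTQ loader. A stdlib-only sketch of the detection step (the helper name is hypothetical; the config text is trimmed from the diff above):

```python
import json
from typing import Optional

# Trimmed copy of the committed config.json.
CONFIG_JSON = """{
  "model_type": "llama",
  "vocab_size": 32000,
  "quantization_config": {
    "bits": 4, "group_size": 128, "damp_percent": 0.01,
    "desc_act": false, "sym": true, "true_sequential": true,
    "model_file_base_name": "model", "quant_method": "gptq"
  }
}"""

def detect_quant_method(config_text: str) -> Optional[str]:
    """Return the quantization method a loader should dispatch on, if any."""
    qcfg = json.loads(config_text).get("quantization_config")
    return qcfg.get("quant_method") if qcfg else None

print(detect_quant_method(CONFIG_JSON))  # gptq
```

In practice this means a caller no longer passes the 4-bit parameters explicitly; they are picked up from the config when the repo is loaded by name.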
vicuna-13b-cot-GPTQ-4bit-128g.no-act.order.safetensors → model.safetensors RENAMED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8b86a0d17281af68f3e1f75087e798822ed99a61d40692f5320ba2dc867f9d46
-size 8110988216
+oid sha256:da003862eb26e66eb3ff489a22baa2cdf6e68ec167187ed4d2840df76a57ef11
+size 8110988272
```
quantize_config.json CHANGED

```diff
@@ -1,8 +1,9 @@
 {
-  "bits": 4,
-  "group_size": 128,
-  "damp_percent": 0.01,
-  "desc_act": false,
-  "sym": true,
-  "true_sequential": true
+  "bits": 4,
+  "group_size": 128,
+  "damp_percent": 0.01,
+  "desc_act": false,
+  "sym": true,
+  "true_sequential": true,
+  "model_file_base_name": "model"
 }
```
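After this commit the same GPTQ parameters live in two places: the legacy `quantize_config.json` (read by AutoGPTQ-style loaders) and the `quantization_config` block inside `config.json` (read by Transformers). A small stdlib check, with both copies inlined from the diffs above, that the shared keys stay in agreement:

```python
import json

# quantize_config.json as committed.
QUANTIZE_CONFIG = json.loads("""{
  "bits": 4, "group_size": 128, "damp_percent": 0.01,
  "desc_act": false, "sym": true, "true_sequential": true,
  "model_file_base_name": "model"
}""")

# The "quantization_config" block from config.json as committed.
EMBEDDED = json.loads("""{
  "bits": 4, "group_size": 128, "damp_percent": 0.01,
  "desc_act": false, "sym": true, "true_sequential": true,
  "model_file_base_name": "model", "quant_method": "gptq"
}""")

# Every key the legacy file defines must match the embedded copy;
# only "quant_method" should exist solely in config.json.
mismatches = {k for k, v in QUANTIZE_CONFIG.items() if EMBEDDED.get(k) != v}
extra = set(EMBEDDED) - set(QUANTIZE_CONFIG)
print(sorted(mismatches), sorted(extra))  # [] ['quant_method']
```

Keeping the legacy file alongside the embedded block is what lets both older AutoGPTQ tooling and the new Transformers path load the same repo.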