TheBloke committed
Commit: fe78c1b
Parent: 4d56433

Update for Transformers GPTQ support

README.md CHANGED
@@ -11,17 +11,20 @@ tags:
 inference: false
 ---
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->

 This is a 4bit 128g GPTQ of [chansung's gpt4-alpaca-lora-13b](https://huggingface.co/chansung/gpt4-alpaca-lora-13b).

@@ -61,11 +64,12 @@ git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
 There is also a `no-act-order.safetensors` file which will work with oobabooga's fork of GPTQ-for-LLaMa; it does not require the latest GPTQ code.

 <!-- footer start -->
+<!-- 200823 -->
 ## Discord

 For further support, and discussions on these models and AI in general, join us at:

-[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
+[TheBloke AI's Discord server](https://discord.gg/theblokeai)

 ## Thanks, and how to contribute.

@@ -80,9 +84,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI

-**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
+

 Thank you to all my generous patrons and donaters!
+
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 # Original model card is below

config.json CHANGED
@@ -19,5 +19,16 @@
   "torch_dtype": "float16",
   "transformers_version": "4.29.0.dev0",
   "use_cache": true,
-  "vocab_size": 32000
-}
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": 128,
+    "damp_percent": 0.01,
+    "desc_act": false,
+    "sym": true,
+    "true_sequential": true,
+    "model_name_or_path": null,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
+}
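
The new `quantization_config` block copies the GPTQ parameters from `quantize_config.json` into `config.json`, which is what lets recent versions of Transformers (4.32.0 or later, with `optimum` and `auto-gptq` installed) load the quantized weights directly via `from_pretrained`, without a separate GPTQ loader. A minimal sketch of that flow, using a placeholder repo id for this model:

```python
# Minimal sketch: loading a GPTQ checkpoint whose config.json carries a
# quantization_config block. Assumes transformers>=4.32.0, optimum and
# auto-gptq are installed; the repo id below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/gpt4-alpaca-lora-13B-GPTQ-4bit-128g"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
# from_pretrained reads quantization_config from config.json and loads the
# 4-bit GPTQ weights from model.safetensors as-is (no dequantized copy).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Tell me about alpacas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```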
gpt4-alpaca-lora-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:aeaa4cd165e34067c4011e464c11d51ebf0919dd3bbbd96eaf746a96e2f87b6b
-size 7255159218
+oid sha256:fcc29201f4163cdffaa6a10610b956f994e8c91e8e5818a2eccd40034bdb0964
+size 7255159272
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
   "sym": true,
   "true_sequential": true,
   "model_name_or_path": null,
-  "model_file_base_name": null
+  "model_file_base_name": "model"
 }
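
Setting `"model_file_base_name": "model"` goes hand in hand with the rename of the weights file to `model.safetensors`: AutoGPTQ uses this base name (plus the safetensors extension) to locate the weights, so loaders no longer need to pass an explicit `model_basename`. A minimal sketch, assuming the `auto-gptq` package is installed and the repo has been cloned locally (the directory path is a placeholder):

```python
# Minimal sketch of loading with AutoGPTQ directly, relying on the
# model_file_base_name recorded in quantize_config.json. The local
# directory name is a placeholder.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "./gpt4-alpaca-lora-13B-GPTQ"  # placeholder local path

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
# Because quantize_config.json now sets "model_file_base_name": "model",
# from_quantized can find model.safetensors without a model_basename argument.
model = AutoGPTQForCausalLM.from_quantized(model_dir, use_safetensors=True, device="cuda:0")
```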