Update for Transformers GPTQ support

- README.md +21 -15
- config.json +39 -28
- gptq_model-4bit-128g.safetensors → model.safetensors +0 -0
- quantize_config.json +1 -1
README.md CHANGED

@@ -9,17 +9,20 @@ quantized_by: TheBloke
 ---
 
 <!-- header start -->
-
-
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support:
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Vicuna 13B v1.5 16K - GPTQ

@@ -70,13 +73,13 @@ All GPTQ files are made with AutoGPTQ.
 
 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.26 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
-| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
-| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
-| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.95 GB | No | 8-bit, with group size 64g and Act Order for even higher inference quality. Poor AutoGPTQ CUDA speed. |
-| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
+| [main](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.26 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.95 GB | No | 8-bit, with group size 64g and Act Order for even higher inference quality. Poor AutoGPTQ CUDA speed. |
+| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
 | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
 
 ## How to download from branches

@@ -193,6 +196,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:

@@ -212,13 +216,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**:
+**Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**:
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: lmsys's Vicuna 13B v1.5 16K

@@ -232,7 +238,7 @@ Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared convers
 
 - **Developed by:** [LMSYS](https://lmsys.org/)
 - **Model type:** An auto-regressive language model based on the transformer architecture
-- **License:** Llama 2 Community License Agreement
+- **License:** Llama 2 Community License Agreement
 - **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)
 
 ### Model Sources

@@ -250,7 +256,7 @@ The primary intended users of the model are researchers and hobbyists in natural
 ## How to Get Started with the Model
 
 - Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
-- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
+- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
 
 ## Training Details
 
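The Provided Files table above lists one quantisation per branch. Not part of this commit, but as a minimal sketch of how a non-main branch can be fetched (assuming `huggingface_hub` is installed; the branch name is any of those in the table):

```python
# Sketch: download one GPTQ variant by branch name (revision).
# Assumes: pip install huggingface_hub; branch names as listed in the table above.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/vicuna-13B-v1.5-16K-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # any branch from the Provided Files table
    local_dir="vicuna-13B-v1.5-16K-GPTQ-4bit-32g",
)
print(local_path)  # local folder containing config.json, model.safetensors, etc.
```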
config.json CHANGED

@@ -1,30 +1,41 @@
 {
+  "_name_or_path": "vicuna-13b-v1.5-16k/",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 5120,
+  "initializer_range": 0.02,
+  "intermediate_size": 13824,
+  "max_position_embeddings": 4096,
+  "max_sequence_length": 16384,
+  "model_type": "llama",
+  "num_attention_heads": 40,
+  "num_hidden_layers": 40,
+  "num_key_value_heads": 40,
+  "pad_token_id": 0,
+  "pretraining_tp": 1,
+  "rms_norm_eps": 1e-05,
+  "rope_scaling": {
+    "factor": 4.0,
+    "type": "linear"
+  },
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.31.0",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": 128,
+    "damp_percent": 0.1,
+    "desc_act": false,
+    "sym": true,
+    "true_sequential": true,
+    "model_name_or_path": null,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
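The new `quantization_config` block is what the commit title refers to: with the GPTQ parameters embedded in config.json (and the weights renamed to model.safetensors), a recent Transformers release with GPTQ support (via `optimum` and `auto-gptq`) can load the model directly, without AutoGPTQ-specific loading code. Note also the linear `rope_scaling` factor of 4.0 over `max_position_embeddings` of 4096, which gives the 16K context recorded in `max_sequence_length` (4 × 4096 = 16384). A minimal loading sketch, not part of this commit, assuming such a Transformers install and a CUDA GPU:

```python
# Sketch: load the GPTQ model straight through Transformers.
# Assumes: a recent transformers release with GPTQ support, plus optimum and
# auto-gptq installed, and at least one CUDA GPU available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/vicuna-13B-v1.5-16K-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # quantization settings are read from config.json
)

# Shortened form of the Vicuna prompt template used by this model.
prompt = "USER: What is GPTQ quantisation? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```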
gptq_model-4bit-128g.safetensors → model.safetensors RENAMED

File without changes
quantize_config.json CHANGED

@@ -6,5 +6,5 @@
   "sym": true,
   "true_sequential": true,
   "model_name_or_path": null,
-  "model_file_base_name":
+  "model_file_base_name": "model"
 }
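The quantize_config.json change mirrors the rename above: `model_file_base_name` is now "model", so loaders that consult quantize_config.json look for model.safetensors rather than the old gptq_model-4bit-128g.safetensors. A rough sketch of loading through AutoGPTQ directly, not part of this commit, assuming the `auto-gptq` package and a CUDA GPU:

```python
# Sketch: load the quantised weights with AutoGPTQ itself.
# Assumes: pip install auto-gptq transformers; one CUDA GPU.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/vicuna-13B-v1.5-16K-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,  # weights are stored as model.safetensors
    device="cuda:0",
)

inputs = tokenizer("USER: Hello! ASSISTANT:", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```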