Update README.md
README.md CHANGED
```diff
@@ -50,12 +50,12 @@ Each separate quant is in a different branch. See below for instructions on fet
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
 | main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
-| gptq-4bit-32g-actorder_True | 4 | 32 |
-| gptq-4bit-64g-actorder_True | 4 | 64 |
-| gptq-4bit-128g-actorder_True | 4 | 128 |
-| gptq-8bit--1g-actorder_True | 8 | None |
-| gptq-3bit--1g-actorder_True | 3 | None |
-| gptq-3bit-128g-actorder_False | 3 | 128 |
+| gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+| gptq-3bit--1g-actorder_True | 3 | None | True | 12.92 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
+| gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
 
 ## How to download from branches
 
@@ -129,7 +129,6 @@ prompt_template=f'''A chat between a curious user and an artificial intelligence
 
 USER: {prompt}
 ASSISTANT:
-
 '''
 
 print("\n\n*** Generate:")
```
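The second hunk removes the blank line between `ASSISTANT:` and the closing `'''`, so the rendered prompt now ends immediately after `ASSISTANT:`. A self-contained sketch of the resulting template (system prompt abbreviated here; the README's full generation code is not part of this hunk):

```python
prompt = "Tell me about AI"

# Vicuna-style template as it stands after this change: no trailing blank line,
# so the string ends with "ASSISTANT:\n". System prompt abbreviated for illustration.
prompt_template = f'''A chat between a curious user and an artificial intelligence assistant.

USER: {prompt}
ASSISTANT:
'''

print(repr(prompt_template))
```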