TheBloke committed
Commit efaa1d9
1 parent: 403ac5a

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -49,11 +49,11 @@ Each separate quant is in a different branch. See below for instructions on fet
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
 | main | 4 | 128 | False | 8.11 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | gptq-4bit-32g-actorder_True | 4 | 32 | 1 | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-64g-actorder_True | 4 | 64 | 1 | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-128g-actorder_True | 4 | 128 | 1 | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-8bit--1g-actorder_True | 8 | None | 1 | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
- | gptq-8bit-128g-actorder_False | 8 | 128 | 0 | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
 
 ## How to download from branches
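
Since each quant in the table lives on its own Git branch, a download has to name that branch explicitly. Below is a minimal sketch using huggingface_hub's `snapshot_download`; the repo id `TheBloke/Example-Model-GPTQ` is a placeholder (the actual repository name is not part of this diff), and the `revision` value is a branch name taken from the table.

```python
# Minimal sketch: fetch one quant branch from the table above.
# Assumes huggingface_hub is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/Example-Model-GPTQ",   # placeholder id, not from this diff
    revision="gptq-4bit-32g-actorder_True",  # branch name from the table
)
print(f"Downloaded to {local_dir}")
```

The same `revision` argument is accepted by `from_pretrained` in transformers, so a loader can point directly at a specific quant branch without a separate download step.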