Initial GPTQ model commit
README.md
CHANGED
@@ -96,9 +96,9 @@ All GPTQ files are made with AutoGPTQ.
| [main](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.82 GB | No | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.96 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.86 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 |
-| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 |
-| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 1.82 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 3.08 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/stablecode-completion-alpha-3b-4k-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 3.14 GB | No | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |

## How to download from branches
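The branch names in the left-hand column double as git revisions on the Hugging Face repo. As a minimal sketch of fetching one of them with the standard `huggingface_hub` client (the `local_dir` path is an arbitrary example, not something this README specifies):

```python
# Hedged sketch: download one quantisation branch of the GPTQ repo.
# repo_id and revision come from the table above; local_dir is an
# illustrative choice, not a path mandated by the README.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/stablecode-completion-alpha-3b-4k-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # any branch from the table
    local_dir="stablecode-completion-alpha-3b-4k-GPTQ-4bit-32g",
)
```

Swap `revision` for `main` or any other branch in the table to trade VRAM for inference quality as described in the notes column.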