Initial GPTQ model commit
README.md CHANGED
@@ -32,10 +32,11 @@ pipeline_tag: text-generation

## Description

This repo contains GPTQ model files for [Stability AI's FreeWilly 2](https://huggingface.co/stabilityai/FreeWilly2).

Multiple GPTQ parameter permutations are provided; see the Provided Files table below for details of the available options, their parameters, and the software used to create them.
## Repositories available
@@ -62,10 +63,11 @@ Each separate quant is in a different branch. See below for instructions on fetching from different branches.

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | None | True | 36.65 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-128g-actorder_False | 4 | 128 | False | 36.65 GB | True | AutoGPTQ | 4-bit, with group size 128g and without Act Order. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | Processing, coming soon | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | Processing, coming soon | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-3bit--1g-actorder_True | 3 | None | True | 26.78 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| gptq-3bit-128g-actorder_False | 3 | 128 | False | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
| gptq-3bit-128g-actorder_True | 3 | 128 | True | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False, but poor AutoGPTQ CUDA speed. |
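The branch names in the first column are what you pass to the download methods described below. If useful, the branches can also be listed programmatically; this is a small sketch using `huggingface_hub`'s `list_repo_refs`, an addition here rather than part of the original README, and it assumes `pip install huggingface_hub`:

```
# Sketch: list the quant branches of this repo with huggingface_hub
# (assumes `pip install huggingface_hub`; not part of the original README).
from huggingface_hub import list_repo_refs

refs = list_repo_refs("TheBloke/FreeWilly2-GPTQ")
for branch in refs.branches:
    print(branch.name)  # e.g. main, gptq-4bit-128g-actorder_False, ...
```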
@@ -73,10 +75,10 @@

## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, e.g. `TheBloke/FreeWilly2-GPTQ:gptq-4bit-128g-actorder_False`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-128g-actorder_False https://huggingface.co/TheBloke/FreeWilly2-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
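A third route, needing neither the webui nor Git, is `huggingface_hub.snapshot_download` with its `revision` argument. A minimal sketch, assuming `huggingface_hub` is installed (`pip install huggingface_hub`):

```
# Sketch: fetch a single quant branch into the local HF cache
# (assumes `pip install huggingface_hub`; not part of the original README).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/FreeWilly2-GPTQ",
    revision="gptq-4bit-128g-actorder_False",  # any branch from the table above
)
print(local_dir)  # path of the downloaded snapshot
```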
@@ -88,7 +90,7 @@ It is strongly recommended to use the text-generation-webui one-click-installers

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/FreeWilly2-GPTQ`.
   - To download from a specific branch, enter for example `TheBloke/FreeWilly2-GPTQ:gptq-4bit-128g-actorder_False`.
   - See Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
@@ -112,7 +114,7 @@ from transformers import AutoTokenizer, pipeline, logging

from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/FreeWilly2-GPTQ"
model_basename = "gptq_model-4bit--1g"

use_triton = False
@@ -130,7 +132,7 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,

To download from a specific branch, use the `revision` parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-128g-actorder_False",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
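Pieced together, the fragments above make a complete load-and-generate script. Below is a minimal sketch, assuming `auto-gptq` and `transformers` are installed and a CUDA GPU with enough VRAM for the chosen quant is available; the prompt is a plain placeholder, not FreeWilly2's documented prompt template. Note also that `model_basename` must match the weights file of the branch you load: the value `gptq_model-4bit--1g` shown above belongs to `main`, and other branches presumably use their own basenames (inferred from the naming convention, not stated in this commit).

```
# Minimal end-to-end sketch assembling the fragments above
# (assumes `pip install auto-gptq transformers`; placeholder prompt).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/FreeWilly2-GPTQ"
model_basename = "gptq_model-4bit--1g"  # basename of the weights on the main branch

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    trust_remote_code=False,
    device="cuda:0",
    use_triton=False,  # matches `use_triton = False` above
)

prompt = "Tell me about AI"  # placeholder; see the model card for the real template
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(
    input_ids=input_ids,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=128,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```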