Text Generation · Transformers · Safetensors · English · llama · text-generation-inference · 4-bit precision · gptq
TheBloke committed on
Commit 84427a6
1 Parent(s): 72023fd

Initial GPTQ model commit

Files changed (1)
  1. README.md +9 -7
README.md CHANGED
@@ -32,10 +32,11 @@ pipeline_tag: text-generation

## Description

- These repo contains GPTQ model files for [Stability AI's FreeWilly 2](https://huggingface.co/stabilityai/FreeWilly2).
+ This repo contains GPTQ model files for [Stability AI's FreeWilly 2](https://huggingface.co/stabilityai/FreeWilly2).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

+ None

## Repositories available

@@ -62,14 +63,15 @@ Each separate quant is in a different branch. See below for instructions on fet

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
- | main | 4 | 128 | False | 36.65 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | gptq-4bit-32g-actorder_True | 4 | 32 | True | Processing, coming soon | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-128g-actorder_True | 4 | 128 | True | Processing, coming soon | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | main | 4 | None | True | 35.33 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 40.66 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-3bit--1g-actorder_True | 3 | None | True | 26.78 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| gptq-3bit-128g-actorder_False | 3 | 128 | False | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
| gptq-3bit-128g-actorder_True | 3 | 128 | True | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
- | gptq-3bit-64g-actorder_True | 3 | 64 | True | 29.30 GB | False | AutoGPTQ | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. Poor AutoGPTQ CUDA speed. |
+ | gptq-3bit-64g-actorder_True | 3 | 64 | True | 29.30 GB | False | AutoGPTQ | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_False | 4 | 128 | False | 36.65 GB | True | AutoGPTQ | 4-bit, without Act Order and group size 128g. |

## How to download from branches

@@ -112,7 +114,7 @@ from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/FreeWilly2-GPTQ"
- model_basename = "gptq_model-4bit-128g"
+ model_basename = "gptq_model-4bit--1g"

use_triton = False
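The branch table above maps each quantisation variant to its own branch of TheBloke/FreeWilly2-GPTQ. As a minimal sketch of fetching a single branch (one possible approach, not necessarily the README's own instructions), `huggingface_hub.snapshot_download` can pull just the revision you want; the `revision` and `local_dir` values below are illustrative:

```python
# Sketch: download one quantisation branch of the GPTQ repo.
# The revision is any branch name from the table above; local_dir is illustrative.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/FreeWilly2-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # e.g. "main", "gptq-3bit-128g-actorder_True", ...
    local_dir="FreeWilly2-GPTQ-4bit-32g",
)
```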
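The `model_basename` update in the last hunk (now `gptq_model-4bit--1g`, matching the no-group-size `main` quant) slots into the README's AutoGPTQ loading snippet. A minimal sketch of how those fragments fit together, assuming the standard `auto_gptq` API shown in the diff context; the prompt and generation settings are illustrative:

```python
# Sketch: load the 4-bit GPTQ weights from the main branch with AutoGPTQ.
# model_name_or_path and model_basename come from the diff above; other values are illustrative.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/FreeWilly2-GPTQ"
model_basename = "gptq_model-4bit--1g"
use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    device="cuda:0",
    use_triton=use_triton,
    quantize_config=None,
)

# Illustrative prompt in FreeWilly2's "### System / ### User / ### Assistant" style.
prompt = "### System:\nYou are a helpful assistant.\n\n### User:\nTell me about AI.\n\n### Assistant:\n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(inputs=input_ids, max_new_tokens=256, temperature=0.7)
print(tokenizer.decode(output[0]))
```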