TheBloke committed on
Commit
3b7afc7
1 Parent(s): 71e3419

Update README.md

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -121,15 +121,14 @@ print(tokenizer.decode(output[0]))
 
 **gptq_model-4bit--1g.safetensors**
 
-This will work with AutoGPTQ as of commit `3cb1bf5` (`3cb1bf5a6d43a06dc34c6442287965d1838303d3`)
+This will work with AutoGPTQ 0.2.0 and later.
 
 It was created without groupsize to reduce VRAM requirements, and with `desc_act` (act-order) to improve inference quality.
 
 * `gptq_model-4bit--1g.safetensors`
-* Works only with latest AutoGPTQ CUDA, compiled from source as of commit `3cb1bf5`
+* Works with AutoGPTQ 0.2.0 and later.
 * At this time it does not work with AutoGPTQ Triton, but support will hopefully be added in time.
-* Works with text-generation-webui using `--autogptq --trust_remote_code`
-* At this time it does NOT work with one-click-installers
+* Works with text-generation-webui using `--trust-remote-code`
 * Does not work with any version of GPTQ-for-LLaMa
 * Parameters: Groupsize = None. Act order (desc_act)
 
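For context on the updated notes, loading a no-groupsize, act-order GPTQ checkpoint with AutoGPTQ 0.2.0 or later might look like the sketch below. The directory path and device are illustrative placeholders, not values from this commit, and the exact `from_quantized` keyword arguments should be checked against the installed AutoGPTQ version:

```python
def load_quantized(model_dir, device="cuda:0"):
    """Sketch: load a 4-bit GPTQ checkpoint (no groupsize, desc_act=True)
    with AutoGPTQ 0.2.0+. `model_dir` is a placeholder path."""
    # Import inside the function so the sketch can be read (and imported)
    # without auto-gptq installed.
    from auto_gptq import AutoGPTQForCausalLM

    return AutoGPTQForCausalLM.from_quantized(
        model_dir,
        model_basename="gptq_model-4bit--1g",  # matches the .safetensors name above
        use_safetensors=True,
        trust_remote_code=True,   # corresponds to webui's --trust-remote-code flag
        use_triton=False,         # Triton is noted above as unsupported for this file
        device=device,
    )
```

A GPU and the downloaded model files are required to actually run the call; the function itself only wraps the standard `from_quantized` entry point.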