TheBloke committed on
Commit
a84714c
1 Parent(s): 6a881cc

Update README.md

Files changed (1):
  1. README.md +4 -2
README.md CHANGED
@@ -26,14 +26,16 @@ Specifically, the second file uses `--act-order` for maximum quantisation quality
 Unless you are able to use the latest GPTQ-for-LLaMa code, please use `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors`
 
 * `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors`
-  * Created with the latest GPTQ-for-LLaMa code
+  * Works with all versions of GPTQ-for-LLaMa code
+  * Works with text-generation-webui one-click-installers
   * Parameters: Groupsize = 128g. No act-order.
   * Command:
   ```
   CUDA_VISIBLE_DEVICES=0 python3 llama.py medalpaca-13b c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors
   ```
 * `medalpaca-13B-GPTQ-4bit-128g.safetensors`
-  * Created with the latest GPTQ-for-LLaMa code
+  * Only works with the latest GPTQ-for-LLaMa code
+  * **Does not** work with text-generation-webui one-click-installers
   * Parameters: Groupsize = 128g. act-order.
   * Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
   * Command:
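For reference, both variants above are plain `.safetensors` files: an 8-byte little-endian header length followed by a JSON index of the tensors. A minimal stdlib-only sketch of reading that header — the `demo.safetensors` file and its single `qweight` entry are hypothetical stand-ins for the real 13B model files, which are far too large to build here:

```python
# Peek at a .safetensors header using only the standard library.
# Format: 8-byte little-endian u64 header length, then that many bytes of JSON.
import json
import struct

def read_safetensors_header(path):
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))   # header size in bytes
        return json.loads(f.read(n))            # tensor-name -> dtype/shape/offsets

# Build a tiny stand-in file (a real GPTQ file holds qweight/qzeros/scales per layer).
header = {"qweight": {"dtype": "I32", "shape": [4, 4], "data_offsets": [0, 64]}}
blob = json.dumps(header).encode()
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 64)

print(sorted(read_safetensors_header("demo.safetensors")))  # ['qweight']
```

This is only a format sketch; actually loading the quantised weights still requires GPTQ-for-LLaMa (or a compatible loader), as noted in the diff.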