TheBloke committed on
Commit 4323f60
1 Parent(s): 996ab60

Update README.md

Files changed (1)
  1. README.md +16 -4
README.md CHANGED
@@ -13,15 +13,27 @@ inference: false

This is a 4bit 128g GPTQ of [chansung's gpt4-alpaca-lora-13b](https://huggingface.co/chansung/gpt4-alpaca-lora-13b).

- More details will be put in this README tomorrow. Until then, please see one of my other GPTQ repos for more instructions.
+ ## How to easily download and use this model in text-generation-webui
+
+ Open the text-generation-webui UI as normal.
+
+ 1. Click the **Model tab**.
+ 2. Under **Download custom model or LoRA**, enter `TheBloke/gpt4-alpaca-lora-13B-GPTQ-4bit-128g`.
+ 3. Click **Download**.
+ 4. Wait until it says it's finished downloading.
+ 5. Click the **Refresh** icon next to **Model** in the top left.
+ 6. In the **Model** drop-down, choose the model you just downloaded: `gpt4-alpaca-lora-13B-GPTQ-4bit-128g`.
+ 7. If you see an error in the bottom right, ignore it - it's temporary.
+ 8. Check that the `GPTQ parameters` on the right are correct: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
+ 9. Click **Save settings for this model** in the top right.
+ 10. Click **Reload the Model** in the top right.
+ 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
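
The model download in steps 2-4 can also be done from the command line; a minimal sketch using git, assuming `git-lfs` is installed and that text-generation-webui's `models` directory is the destination (the paths are assumptions, adjust to your setup):

```
# Fetch all model files, including the large .safetensors, via git-lfs
git lfs install
git clone https://huggingface.co/TheBloke/gpt4-alpaca-lora-13B-GPTQ-4bit-128g text-generation-webui/models/gpt4-alpaca-lora-13B-GPTQ-4bit-128g
```
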
Command to create was:
```
- cd gptq-safe && CUDA_VISIBLE_DEVICES=0 python3 llama.py /content/gpt4-alpaca-lora-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors /content/gpt4-alpaca-lora-13B-GPTQ-4bit-128g.safetensors
+ CUDA_VISIBLE_DEVICES=0 python3 llama.py /content/gpt4-alpaca-lora-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors /content/gpt4-alpaca-lora-13B-GPTQ-4bit-128g.safetensors
```
 
- Note that as `--act-order` was used, this will not work with ooba's fork of GPTQ. You must use the qwopqwop repo as of April 13th.
-
Command to clone the latest Triton GPTQ-for-LLaMa repo for inference using `llama_inference.py`, or in `text-generation-webui`:
```
# Clone text-generation-webui, if you don't already have it
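# The lines below are a sketch of the remaining setup, assuming the public
# oobabooga/text-generation-webui and qwopqwop200/GPTQ-for-LLaMa repositories:
git clone https://github.com/oobabooga/text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa text-generation-webui/repositories/GPTQ-for-LLaMa

# Example inference run with llama_inference.py; the HF model path and the
# --load file are assumptions carried over from the quantization command above
cd text-generation-webui/repositories/GPTQ-for-LLaMa
CUDA_VISIBLE_DEVICES=0 python llama_inference.py /content/gpt4-alpaca-lora-13B-HF --wbits 4 --groupsize 128 --load /content/gpt4-alpaca-lora-13B-GPTQ-4bit-128g.safetensors --text "Tell me about alpacas"
```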
 