Commit bf16319 by TheBloke
Parent: b2eda10

Update README.md

Files changed (1):
  1. README.md +16 -0

README.md CHANGED
@@ -17,6 +17,22 @@ This is a [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) 4bit q
 
  Please read the Provided Files section below. You should use `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors` unless you are able to use the latest Triton branch of GPTQ-for-LLaMa.
 
+ ## How to easily download and use this model in text-generation-webui
+
+ Open the text-generation-webui UI as normal.
+
+ 1. Click the **Model tab**.
+ 2. Under **Download custom model or LoRA**, enter `TheBloke/medalpaca-13B-GPTQ-4bit`.
+ 3. Click **Download**. (A scripted alternative is sketched after this list.)
+ 4. Wait until it says it's finished downloading.
+ 5. Click the **Refresh** icon next to **Model** in the top left.
+ 6. In the **Model** drop-down, choose the model you just downloaded, `medalpaca-13B-GPTQ-4bit`.
+ 7. If you see an error in the bottom right, ignore it; it's temporary.
+ 8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
+ 9. Click **Save settings for this model** in the top right.
+ 10. Click **Reload the Model** in the top right.
+ 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
+
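+ As an alternative to the UI download in steps 2 to 4, the repository can also be fetched from a short script. The sketch below is not part of the original instructions; it assumes the `huggingface_hub` Python package is installed and simply downloads the repo into the local Hugging Face cache.
+
+ ```python
+ # Hypothetical scripted download (assumes: pip install huggingface_hub).
+ # snapshot_download fetches every file in the repo and returns the local path.
+ from huggingface_hub import snapshot_download
+
+ local_path = snapshot_download(repo_id="TheBloke/medalpaca-13B-GPTQ-4bit")
+ print(f"Model files are in: {local_path}")
+ ```
+
+ You would still need to place (or symlink) the downloaded folder under text-generation-webui's `models` directory and continue from step 5; the exact layout expected there is an assumption of this sketch, so the UI steps above remain the recommended path.
+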
  ## Provided files
 
  Two files are provided. **The second file will not work unless you use a recent version of the Triton branch of GPTQ-for-LLaMa**
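+
+ As a quick sanity check before choosing between the two files, the sketch below (an addition to this note, not something from the original README) tests whether the `triton` package is importable. The Triton branch of GPTQ-for-LLaMa depends on that package, although having it installed does not by itself mean you are running the Triton branch.
+
+ ```python
+ # Rough check: the Triton branch of GPTQ-for-LLaMa needs the `triton` package.
+ # Passing this check is necessary but not sufficient; you still have to be
+ # running the Triton branch of GPTQ-for-LLaMa itself.
+ import importlib.util
+
+ if importlib.util.find_spec("triton") is None:
+     print("triton not found: stick with the no-act-order .safetensors file")
+ else:
+     print("triton is importable: the Triton-branch-only file may work")
+ ```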