Tags: Text Generation · Transformers · Safetensors · English · llama · causal-lm · text-generation-inference · 4-bit precision · gptq
Committed by TheBloke
Commit d1ac45d
1 Parent(s): c3b31c7

Update README.md

Files changed (1)
  1. README.md +1 -5
README.md CHANGED
@@ -56,11 +56,7 @@ Open the text-generation-webui UI as normal.
 4. Wait until it says it's finished downloading.
 5. Click the **Refresh** icon next to **Model** in the top left.
 6. In the **Model drop-down**: choose the model you just downloaded,`stable-vicuna-13B-GPTQ`.
-7. If you see an error in the bottom right, ignore it - it's temporary.
-8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
-9. Click **Save settings for this model** in the top right.
-10. Click **Reload the Model** in the top right.
-11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
+7. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
 
 ## Provided files
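
The steps removed above covered filling in the `GPTQ parameters` (`Bits = 4`, `Groupsize = 128`, `model_type = Llama`) by hand in text-generation-webui. For readers who prefer to load the same quantised checkpoint from a script rather than the UI, the sketch below uses the AutoGPTQ library; the `model_basename` hint and the prompt template are assumptions, not something this commit specifies.

```python
# Minimal sketch (not part of this repo's README): loading
# TheBloke/stable-vicuna-13B-GPTQ with AutoGPTQ instead of the webui.
# Assumes `pip install auto-gptq transformers` and a CUDA GPU.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "TheBloke/stable-vicuna-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)

# The 4-bit / groupsize-128 settings the removed webui steps asked for are
# read from the repo's quantize_config.json, so they are not passed here.
# If AutoGPTQ cannot locate the weights, pass
# model_basename="<safetensors filename without extension>" -- the exact
# name varies per repo, so check the Files tab.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    use_safetensors=True,
    device="cuda:0",
)

# Assumed StableVicuna-style chat prompt; check the model card for the
# exact template.
prompt = "### Human: Tell me about llamas.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The quantisation settings travel with the model in `quantize_config.json`, which is presumably why the manual-entry steps could be dropped from the webui instructions in this commit.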