TheBloke committed
Commit 3b565ee
1 Parent(s): 8a8a74e

Update README.md

Files changed (1)
  1. README.md +8 -4
README.md CHANGED
@@ -22,11 +22,15 @@ cd gptq-safe && CUDA_VISIBLE_DEVICES=0 python3 llama.py /content/gpt4-alpaca-lor
 
 Note that as `--act-order` was used, this will not work with ooba's fork of GPTQ. You must use the qwopqwop repo as of April 13th.
 
-Command to clone the correct GPTQ-for-LLaMa repo for inference using `llama_inference.py`, or in `text-generation-webui`:
+Command to clone the latest Triton GPTQ-for-LLaMa repo for inference using `llama_inference.py`, or in `text-generation-webui`:
 ```
-git clone -n https://github.com/qwopqwop200/GPTQ-for-LLaMa gptq-safe
-cd gptq-safe
-git checkout 58c8ab4c7aaccc50f507fd08cce941976affe5e0
+# Clone text-generation-webui, if you don't already have it
+git clone https://github.com/oobabooga/text-generation-webui
+# Make a repositories directory
+mkdir -p text-generation-webui/repositories
+cd text-generation-webui/repositories
+# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
+git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
 ```
 
 There is also a `no-act-order.safetensors` file which will work with oobabooga's fork of GPTQ-for-LLaMa; it does not require the latest GPTQ code.
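The new instructions in this commit boil down to a specific directory layout: GPTQ-for-LLaMa must sit inside text-generation-webui's `repositories` folder. A minimal sketch of that layout, using a throwaway temp directory and `mkdir -p` in place of the real `git clone` calls so it runs offline:

```shell
# Recreate the layout the README's clone steps produce, without network access.
# (mkdir -p stands in for the two git clone commands above.)
tmp="$(mktemp -d)"
mkdir -p "$tmp/text-generation-webui/repositories/GPTQ-for-LLaMa"
# text-generation-webui picks up GPTQ code from its repositories/ subdirectory
ls "$tmp/text-generation-webui/repositories"   # prints GPTQ-for-LLaMa
rm -rf "$tmp"
```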