TheBloke committed on
Commit
eb73cec
1 Parent(s): d6aa09e

Update README.md

Files changed (1)
  1. README.md +10 -4
README.md CHANGED
@@ -31,15 +31,21 @@ I have the following Koala model repositories available:
  * [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
  * [GPTQ quantized 4bit 7B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g-GGML)

+ ## GETTING GIBBERISH OUTPUT?
+
+ Please read the sections below carefully. Gibberish output is expected if you are using the `safetensors` file without the latest GPTQ-for-LLaMa code.
+
+ Your options are either to update GPTQ-for-LLaMa under `text-generation-webui/repositories` to a more recent version, or to use the other file provided, `koala-7B-4bit-128g.no-act-order.ooba.pt`, which will work immediately.
+
+ Unfortunately, right now it is a bit more complex to update GPTQ-for-LLaMa, because the most recent code has breaking changes which are not supported by `text-generation-webui`.
+
+ Therefore it's currently recommended to use `koala-7B-4bit-128g.no-act-order.ooba.pt`.
+
  ## Provided files

  Three model files are provided. You don't need all three - choose the one that suits your needs best!

  Details of the files provided:
- * `koala-7B-4bit-128g.pt`
-   * `pt` format file, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
-   * Command to create:
-     * `python3 llama.py koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save koala-7B-4bit-128g.pt`
  * `koala-7B-4bit-128g.safetensors`
    * newer `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
    * Command to create:
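
The "improved file security" noted for the `safetensors` file refers to the fact that `.pt` checkpoints are Python pickles: unpickling can execute arbitrary code embedded in the file, whereas `safetensors` is a plain tensor container that runs no code on load. A minimal stdlib-only sketch of the pickle risk (the `Payload` class is hypothetical, and the embedded call here is a harmless `print`):

```python
import pickle

class Payload:
    """Illustrates why loading untrusted .pt files is dangerous:
    __reduce__ lets a pickle name an arbitrary callable to run at load time."""
    def __reduce__(self):
        # Harmless stand-in; a malicious file could embed a dangerous call instead.
        return (print, ("code ran during pickle.loads!",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # merely loading the blob executes the embedded call
# prints: code ran during pickle.loads!
```

A hostile `.pt` file could embed something far worse than `print`, which is why loading pickles from untrusted sources is discouraged; `safetensors` avoids this class of attack by design, since its format contains only tensor data and a JSON header.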