New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48
README.md CHANGED
@@ -31,13 +31,13 @@ I have the following Koala model repositories available:
 * [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
 * [4bit and 5bit models in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GGML)
 
-## REQUIRES LATEST LLAMA.CPP (May
+## THE FILES IN MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
-llama.cpp recently made
+llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
 
-I have
+I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
 
-
+For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
 
 ## How to run in `llama.cpp`
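To check whether a local llama.cpp build is new enough for the GGMLv3 files this commit adds, one option is to test whether commit `2d5db48` is an ancestor of the current checkout. A minimal sketch, assuming a Unix-like shell with `git` and `make` available; these are standard git and llama.cpp build commands, not part of this commit:

```sh
# Clone and build llama.cpp, confirming the checkout includes the GGMLv3 change.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Succeeds (exit 0) only if commit 2d5db48 (May 19th 2023) is in the history:
git merge-base --is-ancestor 2d5db48 HEAD \
  && echo "OK: this checkout supports GGMLv3" \
  || echo "Too old: run 'git pull' and rebuild"

make
```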
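Similarly, the `previous_llama_ggmlv2` branch mentioned in the diff can be fetched like any other branch of a Hugging Face model repo, since model repos are ordinary git repositories with weights stored via Git LFS. A sketch, assuming `git` and `git-lfs` are installed:

```sh
# Fetch the GGMLv2-compatible files for builds of llama.cpp older than 2d5db48.
git lfs install
git clone --branch previous_llama_ggmlv2 https://huggingface.co/TheBloke/koala-7B-GGML
```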