TheBloke committed
Commit 240deea
1 Parent(s): 04acb62

New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -18,27 +18,29 @@ It is the result of quantising to 4bit and 5bit GGML for CPU inference using [ll
 * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML).
 * [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF).
 
- ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+ ## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
- llama.cpp recently made a breaking change to its quantisation methods.
+ llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508
 
- I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
+ I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
 
- The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
+ For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
 
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
- | `wizard-vicuna-13B.ggml.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
- | `wizard-vicuna-13B.ggml.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | `wizard-vicuna-13B.ggml.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
+ | `wizard-vicuna-13B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
+ | `wizard-vicuna-13B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 8.95GB | 11.0GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
+ | `wizard-vicuna-13B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | `wizard-vicuna-13B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
+ | `wizard-vicuna-13B.ggmlv3.q8_0.bin` | q8_0 | 8bit | 16GB | 18GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
 
 ## How to run in `llama.cpp`
 
 I use the following command line; adjust for your tastes and needs:
 
 ```
- ./main -t 18 -m wizard-vicuna-13B.ggml.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
+ ./main -t 18 -m wizard-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
 ```
 
  Change `-t 18` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
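
A quick way to check the physical core count, as an illustrative sketch (assuming a Linux or macOS shell; these tools are not mentioned in the README itself):

```
# Logical CPUs (cores x threads) - usually double the physical core count on SMT systems
nproc

# Physical cores on Linux: count unique Core,Socket pairs reported by lscpu
lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l

# Physical cores on macOS
sysctl -n hw.physicalcpu
```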
@@ -49,9 +51,7 @@ GGML models can be loaded into text-generation-webui by installing the llama.cpp
 
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
- Note: at this time text-generation-webui may not support the new May 12th llama.cpp quantisation methods.
-
- **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) which may be useful to get text-gen-ui working with the new llama.cpp quant methods sooner.
+ Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
 
 # Original WizardVicuna-13B model card
 
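To tie the steps above together, here is a minimal end-to-end sketch: build llama.cpp at the required commit, download one of the GGMLv3 files, and run the command line from the README. It assumes a Linux or macOS shell with `git`, `make` and `wget` available, and that the file is fetched via the usual Hugging Face `resolve/main` URL pattern; treat it as an illustration rather than official instructions.

```
# Build llama.cpp at or after the GGMLv3 breaking change (May 19th 2023)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48    # commit named in the README; any later commit should also work
make                    # plain CPU build; see the llama.cpp README for accelerated builds

# Download one of the GGMLv3 files from the table (q5_0 shown; any row works)
wget https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML/resolve/main/wizard-vicuna-13B.ggmlv3.q5_0.bin

# Run with the README's command line, setting -t to your physical core count
./main -t 8 -m wizard-vicuna-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -p "### Instruction: write a story about llamas ### Response:"
```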