TheBloke committed on
Commit 824f9a8
1 Parent(s): 5ea8040

Update README.md

Files changed (1)
  1. README.md +13 -27
README.md CHANGED
@@ -18,41 +18,27 @@ It is the result of quantising to 4bit and 5bit GGML for CPU inference using [ll
  * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML).
  * [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF).

- ## Provided files
- | Name | Quant method | Bits | Size | RAM required | Use case |
- | ---- | ---- | ---- | ---- | ---- | ----- |
- `wizard-vicuna-13B.ggml.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | Maximum compatibility |
- `wizard-vicuna-13B.ggml.q4_2.bin` | q4_2 | 4bit | 8.14GB | 10.5GB | Best compromise between resources, speed and quality |
- `wizard-vicuna-13B.ggml.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resource usage. |
- `wizard-vicuna-13B.ggml.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |
-
- * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
- * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
- * The q5_0 file uses the brand-new 5bit method released 26th April. It is the 5bit equivalent of q4_0.
- * The q5_1 file uses the brand-new 5bit method released 26th April. It is the 5bit equivalent of q4_1.
-
- ## q4_2 compatibility
-
- q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.

- To use these files you will need recent llama.cpp code. It is also possible that future updates to llama.cpp will require these files to be re-generated.

- If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.

- If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.

- ## q5_0 and q5_1 compatibility
-
- These new methods were added to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
-
- Third-party tools/UIs may or may not support them. Check that you're using the latest version of any such tools, and ask the devs for advice if you find you can't load q5 files.

  ## How to run in `llama.cpp`

 I use the following command line; adjust for your tastes and needs:

 ```
- ./main -t 18 -m wizard-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
 ```

 Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
@@ -63,9 +49,9 @@ GGML models can be loaded into text-generation-webui by installing the llama.cpp

 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

- Note: at this time text-generation-webui may not support the new q5 quantisation methods.

- **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.

 # Original WizardVicuna-13B model card

 
  * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML).
  * [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF).

+ ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

+ llama.cpp recently made a breaking change to its quantisation methods.

+ I have re-quantised the GGML files in this repo. You will therefore need llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.

+ The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
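
A quick way to tell the re-quantised files apart from the old ones is to read the container header. This is a sketch under assumptions not stated in the README: that these files use the `ggjt` container (a 4-byte magic followed by a little-endian uint32 format version) and that files produced after the May 12th change carry version 2 or higher.

```python
import struct

# Sketch: report the GGML container version of a model file, to tell pre- and
# post-May-12th-2023 files apart. Assumptions: 'ggjt' container with a 4-byte
# magic then a little-endian uint32 version; re-quantised files use version >= 2.
def ggjt_version(path):
    with open(path, "rb") as f:
        magic, version = struct.unpack("<4sI", f.read(8))
    if magic != b"tjgg":  # the magic 'ggjt' as stored on disk (little-endian)
        raise ValueError(f"not a ggjt file (magic={magic!r})")
    return version
```

For example, `ggjt_version("wizard-vicuna-13B.ggml.q4_0.bin")` returning 2 or more would suggest a post-change file; version 1 would suggest the `previous_llama` generation.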

+ ## Provided files
+ | Name | Quant method | Bits | Size | RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ `wizard-vicuna-13B.ggml.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
+ `wizard-vicuna-13B.ggml.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ `wizard-vicuna-13B.ggml.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, at the cost of higher resource usage and slower inference. |
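
The file sizes in the table can be roughly sanity-checked from GGML's block layouts. This is only a sketch, and the block formats named in the comments are assumptions (approximately those current in May 2023), with ~13.0e9 quantised weights assumed and metadata and any unquantised tensors ignored:

```python
# Sketch: estimate quantised file sizes from bits-per-weight.
# Assumed block layouts (per 32 weights): q4_0 = 32x4-bit + fp32 scale,
# q5_0 = 32x5-bit + fp16 scale, q5_1 = 32x5-bit + fp16 scale + fp16 min.
N_WEIGHTS = 13.0e9  # assumed weight count for a 13B model

def est_gb(bits_per_block, block_size=32):
    bits_per_weight = bits_per_block / block_size
    return N_WEIGHTS * bits_per_weight / 8 / 1e9

sizes = {
    "q4_0": est_gb(32 * 4 + 32),       # 5.0 bits/weight -> ~8.1 GB
    "q5_0": est_gb(32 * 5 + 16),       # 5.5 bits/weight -> ~8.9 GB
    "q5_1": est_gb(32 * 5 + 16 + 16),  # 6.0 bits/weight -> ~9.8 GB
}
for name, gb in sizes.items():
    print(f"{name}: ~{gb:.2f} GB")
```

These estimates (8.13, 8.94 and 9.75 GB) land close to the 8.14GB, 8.95GB and 9.76GB in the table, which also shows why the RAM-required column runs roughly 2-2.5GB above file size: the weights are memory-mapped and the KV cache and buffers come on top.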


 ## How to run in `llama.cpp`

 I use the following command line; adjust for your tastes and needs:

 ```
+ ./main -t 18 -m wizard-vicuna-13B.ggml.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
 ```

 Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
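
One way to derive a `-t` value automatically is sketched below. It assumes a Linux-like system where `nproc` reports logical CPUs and that SMT doubles the core count (common on x86, but worth verifying with `lscpu`):

```shell
# Sketch: pick -t from the CPU count. nproc reports *logical* CPUs; halving it
# to get physical cores is an assumption that holds on typical SMT-enabled x86.
LOGICAL=$(nproc)
THREADS=$(( LOGICAL / 2 ))
if [ "$THREADS" -lt 1 ]; then THREADS=1; fi
echo "Suggested: ./main -t $THREADS -m wizard-vicuna-13B.ggml.q4_0.bin ..."
```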
 

 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

+ Note: at this time text-generation-webui may not support the new May 12th llama.cpp quantisation methods.

+ **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5), which may help get text-generation-webui working with the new llama.cpp quant methods sooner.

 # Original WizardVicuna-13B model card