TheBloke committed
Commit 9688e0a
1 Parent(s): 9fbec81

Update README.md

Files changed (1)
  1. README.md +11 -1
README.md CHANGED
@@ -23,11 +23,15 @@ This repo contains GGML files for for CPU inference using [llama.cpp](https://gi
  | ---- | ---- | ---- | ---- | ---- | ----- |
  `WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 4.0GB | 6GB | Maximum compatibility |
  `WizardLM-7B.GGML.q4_2.bin` | q4_2 | 4bit | 4.0GB | 6GB | Best compromise between resources, speed and quality |
- `WizardLM-7B.GGML.q4_3.bin` | q4_3 | 4bit | 4.8GB | 7GB | Maximum quality, higher RAM requirements and slower inference |
+ `WizardLM-7B.GGML.q4_3.bin` | q4_3 | 4bit | 4.8GB | 7GB | Maximum quality 4bit, higher RAM requirements and slower inference |
+ `WizardLM-7B.GGML.q5_0.bin` | q5_0 | 5bit | 4.4GB | 7GB | Brand new 5bit method. Potentially higher quality than 4bit, at cost of slightly higher resources. |
+ `WizardLM-7B.GGML.q5_1.bin` | q5_1 | 5bit | 4.8GB | 7GB | Brand new 5bit method. Slightly higher resource usage than q5_0.|
 
  * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp
  * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues, see below.
  * The q4_3 file offers the highest quality, at the cost of increased RAM usage and slower inference speed. This format is still subject to change and there may be compatibility issues, see below.
+ * The q5_0 file is using brand new 5bit method released 26th April. This is the 5bit equivalent of q4_0.
+ * The q5_1 file is using brand new 5bit method released 26th April. This is the 5bit equivalent of q4_1.
 
  ## q4_2 and q4_3 compatibility
 
@@ -39,6 +43,12 @@ If and when the q4_2 and q4_3 files no longer work with recent versions of llama
 
  If you want to ensure guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
 
+ ## q5_0 and q5_1 compatibility
+
+ These new methods were released to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
+
+ Don't expect any third-party UIs/tools to support them yet.
+
  # Original model info
 
  Overview of Evol-Instruct
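The updated table above trades file size and RAM footprint against quality across the five quantisation methods. As an illustrative sketch of that trade-off, the table can be expressed as data and queried for a given RAM budget. Note this is not part of the repo or llama.cpp: the `choose_file` helper and its tie-breaking rule are my own, and only the sizes/RAM figures are taken from the table.

```python
# Data transcribed from the README's quantisation table (sizes in GB).
FILES = {
    "q4_0": {"bits": 4, "size_gb": 4.0, "ram_gb": 6},
    "q4_2": {"bits": 4, "size_gb": 4.0, "ram_gb": 6},
    "q4_3": {"bits": 4, "size_gb": 4.8, "ram_gb": 7},
    "q5_0": {"bits": 5, "size_gb": 4.4, "ram_gb": 7},
    "q5_1": {"bits": 5, "size_gb": 4.8, "ram_gb": 7},
}

def choose_file(max_ram_gb: float) -> str:
    """Hypothetical helper: pick the quantisation that fits in max_ram_gb,
    preferring more bits, then a larger (higher-quality) file."""
    candidates = [(v["bits"], v["size_gb"], name)
                  for name, v in FILES.items()
                  if v["ram_gb"] <= max_ram_gb]
    if not candidates:
        raise ValueError("no quantisation fits in the given RAM budget")
    return max(candidates)[2]

print(choose_file(6))  # 6GB budget: only the q4_0/q4_2 files fit
print(choose_file(8))  # 8GB budget: the 5bit files become available
```

With a 6GB budget only the two 4.0GB files qualify; with 7GB or more the 5bit files become eligible, matching the README's note that they cost slightly more resources for potentially higher quality.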