TheBloke committed
Commit b83783d
1 Parent(s): ecd42d9

Update README.md

Files changed (1): README.md +5 -5
README.md CHANGED
@@ -21,11 +21,11 @@ This repo contains GGML files for CPU inference using [llama.cpp](https://gi
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-`WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 4.0GB | 6GB | Maximum compatibility |
-`WizardLM-7B.GGML.q4_2.bin` | q4_2 | 4bit | 4.0GB | 6GB | Best compromise between resources, speed and quality |
-`WizardLM-7B.GGML.q4_3.bin` | q4_3 | 4bit | 4.8GB | 7GB | Maximum quality 4bit, higher RAM requirements and slower inference |
-`WizardLM-7B.GGML.q5_0.bin` | q5_0 | 5bit | 4.4GB | 7GB | Brand new 5bit method. Potentially higher quality than 4bit, at cost of slightly higher resources. |
-`WizardLM-7B.GGML.q5_1.bin` | q5_1 | 5bit | 4.8GB | 7GB | Brand new 5bit method. Slightly higher resource usage than q5_0. |
+`WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 4.2GB | 6GB | Maximum compatibility |
+`WizardLM-7B.GGML.q4_2.bin` | q4_2 | 4bit | 4.2GB | 6GB | Best compromise between resources, speed and quality |
+`WizardLM-7B.GGML.q4_3.bin` | q4_3 | 4bit | 5.0GB | 7GB | Maximum quality 4bit, higher RAM requirements and slower inference |
+`WizardLM-7B.GGML.q5_0.bin` | q5_0 | 5bit | 4.63GB | 7GB | Brand new 5bit method. Potentially higher quality than 4bit, at cost of slightly higher resources. |
+`WizardLM-7B.GGML.q5_1.bin` | q5_1 | 5bit | 5.0GB | 7GB | Brand new 5bit method. Slightly higher resource usage than q5_0. |
 
 * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp
 * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues, see below.
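
For context, a minimal sketch of CPU inference with one of the files listed above, assuming llama.cpp has been built (e.g. with `make`) so that its `main` example binary is available; the model path, thread count, token count, and prompt are illustrative, not part of this repo's documented instructions:

```bash
# Run the q4_2 file from the table above (~4.2GB on disk, ~6GB RAM suggested).
# Adjust -t (CPU threads) and -n (tokens to generate) for your machine.
./main -m ./models/WizardLM-7B.GGML.q4_2.bin \
       -t 8 -n 256 \
       -p "Write a short story about llamas."
```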