TheBloke committed
Commit 950c144
1 Parent(s): 59371f8

Update README.md

Files changed (1)
  README.md +3 -3
README.md CHANGED
@@ -16,9 +16,9 @@ This repo contains GGML files for WizardLM-7B for CPU inference
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-`WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 39GB | 41GB | Superseded and not recommended |
-`WizardLM-7B.GGML.q4_2.bin` | q4_2 | 4bit | 39GB | 41GB | Best compromise between resources, speed and quality |
-`WizardLM-7B.GGML.q4_3.bin` | q4_3 | 4bit | 47GB | 49GB | Maximum quality, high RAM requirements and slow inference |
+`WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 4.0GB | 6GB | Superseded and not recommended |
+`WizardLM-7B.GGML.q4_2.bin` | q4_2 | 4bit | 4.0GB | 6GB | Best compromise between resources, speed and quality |
+`WizardLM-7B.GGML.q4_3.bin` | q4_3 | 4bit | 4.8GB | 7GB | Maximum quality, high RAM requirements and slow inference |
 
 * The q4_0 file is provided for compatibility with older versions of llama.cpp. It has been superseded and is no longer recommended.
 * The q4_2 file offers the best combination of performance and quality.
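The corrected sizes and RAM figures refer to loading the model for CPU inference with llama.cpp. As a minimal illustration (not part of the README or this commit), the sketch below loads the recommended q4_2 file through the llama-cpp-python binding; it assumes a binding release old enough to still read the legacy GGML q4_2 format (newer releases expect GGUF), and the file path is simply the filename listed in the table.

```python
# Minimal sketch, assuming an older llama-cpp-python build that still
# supports legacy GGML q4_2 files (modern releases load GGUF instead).
from llama_cpp import Llama

# Load the file recommended in the table above; roughly 6GB of RAM expected.
llm = Llama(model_path="./WizardLM-7B.GGML.q4_2.bin", n_ctx=2048)

# Run a short completion on the CPU.
output = llm("Tell me about llamas.", max_tokens=64)
print(output["choices"][0]["text"])
```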