TheBloke committed on
Commit
27941f8
1 Parent(s): 18fab4a

Update README.md

Files changed (1):
  1. README.md +3 -2
README.md CHANGED
@@ -39,8 +39,9 @@ I have quantised the GGML files in this repo with the latest version. Therefore
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 `manticore-13B.ggmlv2.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
-`manticore-13B.ggmlv2.q4_1.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
-`manticore-13B.ggmlv2.q5_0.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
+`manticore-13B.ggmlv2.q4_1.bin` | q4_1 | 4bit | 8.14GB | 10.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
+`manticore-13B.ggmlv2.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+`manticore-13B.ggmlv2.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
 `manticore-13B.ggmlv2.q8_0.bin` | q8_0 | 8bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
 
 ## How to run in `llama.cpp`