Update README.md
README.md CHANGED
@@ -56,16 +56,16 @@ As with llama.cpp, if you can fully offload the model to VRAM you should use `-t
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
-| wizard-falcon40b.ggmlv3.q2_K.bin | q2_K | 2 | 13.74 GB | 16.24 GB |
-| wizard-falcon40b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.98 GB | 20.48 GB |
-| wizard-falcon40b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 17.98 GB | 20.48 GB |
-| wizard-falcon40b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 17.98 GB | 20.48 GB |
-| wizard-falcon40b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 23.54 GB | 26.04 GB |
-| wizard-falcon40b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 23.54 GB | 26.04 GB |
-| wizard-falcon40b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 28.77 GB | 31.27 GB |
-| wizard-falcon40b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 28.77 GB | 31.27 GB |
-| wizard-falcon40b.ggmlv3.q6_K.bin | q6_K | 6 | 34.33 GB | 36.83 GB |
-| wizard-falcon40b.ggmlv3.q8_0.bin | q8_0 | 8 | 44.46 GB | 46.96 GB |
+| wizard-falcon40b.ggmlv3.q2_K.bin | q2_K | 2 | 13.74 GB | 16.24 GB | Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+| wizard-falcon40b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.98 GB | 20.48 GB | Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| wizard-falcon40b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 17.98 GB | 20.48 GB | Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| wizard-falcon40b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 17.98 GB | 20.48 GB | Uses GGML_TYPE_Q3_K for all tensors |
+| wizard-falcon40b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 23.54 GB | 26.04 GB | Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+| wizard-falcon40b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 23.54 GB | 26.04 GB | Uses GGML_TYPE_Q4_K for all tensors |
+| wizard-falcon40b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 28.77 GB | 31.27 GB | Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+| wizard-falcon40b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 28.77 GB | 31.27 GB | Uses GGML_TYPE_Q5_K for all tensors |
+| wizard-falcon40b.ggmlv3.q6_K.bin | q6_K | 6 | 34.33 GB | 36.83 GB | Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
+| wizard-falcon40b.ggmlv3.q8_0.bin | q8_0 | 8 | 44.46 GB | 46.96 GB | 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
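
To put that note into practice, here is a minimal sketch of an invocation, assuming a llama.cpp-style CLI from a Falcon-capable GGML build (for example ggllm.cpp's `falcon_main`); the binary name, model filename, thread count, and layer count below are illustrative assumptions, not values taken from this README:

```bash
# Minimal sketch, not a verified command: adjust the binary and flags to your build.
# -t sets CPU threads; -ngl offloads that many layers to the GPU, which reduces
# system RAM usage and consumes VRAM instead, as described in the note above.
./falcon_main -m wizard-falcon40b.ggmlv3.q4_K_M.bin -t 8 -ngl 40 -p "Hello"
```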