WizardLM 30B v1.0 GGML

From: https://huggingface.co/WizardLM/WizardLM-30B-V1.0


Original llama.cpp quant methods: q4_0, q4_1, q5_0, q5_1, q8_0

These files were quantized with an older version of llama.cpp and are compatible with llama.cpp as of the May 19 commit, 2d5db48.

k-quant methods: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K

These quantization methods require llama.cpp from the June 6 commit, 2d43387, or later.


Files

| Name | Quant method | Bits | Size | Max RAM required (no GPU offloading) | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| wizardlm-30b.ggmlv3.q2_K.bin | q2_K | 2 | 13.60 GB | 16.10 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| wizardlm-30b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.20 GB | 19.70 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, otherwise GGML_TYPE_Q3_K. |
| wizardlm-30b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.64 GB | 18.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, otherwise GGML_TYPE_Q3_K. |
| wizardlm-30b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 13.98 GB | 16.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| wizardlm-30b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| wizardlm-30b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0; faster inference than the q5 models. |
| wizardlm-30b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.57 GB | 22.07 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, otherwise GGML_TYPE_Q4_K. |
| wizardlm-30b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.30 GB | 20.80 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| wizardlm-30b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage, slower inference. |
| wizardlm-30b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| wizardlm-30b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.02 GB | 25.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, otherwise GGML_TYPE_Q5_K. |
| wizardlm-30b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.37 GB | 24.87 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| wizardlm-30b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K (6-bit quantization) for all tensors. |
| wizardlm-30b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow; not recommended for most users. |
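In the table above, the "Max RAM required" column is consistently the file size plus about 2.5 GB of overhead. A quick sketch of that rule of thumb (an observation from these figures, not an official formula; the function name is illustrative):

```python
def estimated_ram_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    """Estimate RAM needed to load a GGML file with no GPU offloading,
    using the ~2.5 GB overhead pattern observed in the table above."""
    return round(file_size_gb + overhead_gb, 2)

# q4_K_M: 19.57 GB file -> 22.07 GB RAM, matching the table row.
print(estimated_ram_gb(19.57))
```

Offloading layers to a GPU (the `-ngl` option in llama.cpp) reduces the host-RAM requirement accordingly.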