
# Llamacpp Quantizations of Qwen1.5-32B-Chat

Using llama.cpp release b2589 for quantization.

Original model: https://huggingface.co/Qwen/Qwen1.5-32B-Chat

Download a single file (not the whole branch) from the table below; a scripted download example follows the table.

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| Qwen1.5-32B-Chat-Q8_0.gguf | Q8_0 | 34.55 GB | Extremely high quality, generally unneeded but max available quant. |
| Qwen1.5-32B-Chat-Q6_K.gguf | Q6_K | 26.67 GB | Very high quality, near perfect, recommended. |
| Qwen1.5-32B-Chat-Q5_K_M.gguf | Q5_K_M | 23.08 GB | High quality, very usable. |
| Qwen1.5-32B-Chat-Q5_K_S.gguf | Q5_K_S | 22.46 GB | High quality, very usable. |
| Qwen1.5-32B-Chat-Q5_0.gguf | Q5_0 | 22.46 GB | High quality, older format, generally not recommended. |
| Qwen1.5-32B-Chat-Q4_K_M.gguf | Q4_K_M | 19.69 GB | Good quality, uses about 4.83 bits per weight. |
| Qwen1.5-32B-Chat-Q4_K_S.gguf | Q4_K_S | 18.64 GB | Slightly lower quality with small space savings. |
| Qwen1.5-32B-Chat-IQ4_NL.gguf | IQ4_NL | 18.68 GB | Decent quality, similar to Q4_K_S; uses a newer quantization method. |
| Qwen1.5-32B-Chat-IQ4_XS.gguf | IQ4_XS | 17.73 GB | Decent quality, newer method with performance similar to Q4. |
| Qwen1.5-32B-Chat-Q4_0.gguf | Q4_0 | 18.49 GB | Decent quality, older format, generally not recommended. |
| Qwen1.5-32B-Chat-Q3_K_L.gguf | Q3_K_L | 17.11 GB | Lower quality but usable, good for low RAM availability. |
| Qwen1.5-32B-Chat-Q3_K_M.gguf | Q3_K_M | 15.81 GB | Even lower quality. |
| Qwen1.5-32B-Chat-IQ3_M.gguf | IQ3_M | 14.70 GB | Medium-low quality, newer method with decent performance. |
| Qwen1.5-32B-Chat-IQ3_S.gguf | IQ3_S | 14.32 GB | Lower quality, newer method with decent performance; recommended over the Q3_K quants. |
| Qwen1.5-32B-Chat-Q3_K_S.gguf | Q3_K_S | 14.28 GB | Low quality, not recommended. |
| Qwen1.5-32B-Chat-Q2_K.gguf | Q2_K | 12.22 GB | Extremely low quality, not recommended. |
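
If you prefer to script the download rather than click through the browser, the `huggingface_hub` Python client can fetch a single quant file. A minimal sketch follows; the repo id `bartowski/Qwen1.5-32B-Chat-GGUF` is an assumption based on this card's title and author, so adjust it if the repo lives elsewhere.

```python
# Sketch: download one quant file (not the whole branch) with huggingface_hub.
# The repo_id below is an assumption inferred from this card; verify before use.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="bartowski/Qwen1.5-32B-Chat-GGUF",  # assumed repo id
    filename="Qwen1.5-32B-Chat-Q4_K_M.gguf",    # any filename from the table above
    local_dir=".",                              # where to place the file
)
print(f"Downloaded to {model_path}")
```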
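Once downloaded, the GGUF file can be loaded by any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; this card does not prescribe a runtime, and the parameter values shown are illustrative defaults, not recommendations from the card.

```python
# Sketch: run a downloaded quant with llama-cpp-python
# (pip install llama-cpp-python). Values below are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen1.5-32B-Chat-Q4_K_M.gguf",
    n_ctx=4096,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers when a GPU build is installed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one sentence about GGUF."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

As a common rule of thumb when choosing a quant, pick the largest file that is a couple of GB smaller than your available VRAM (or total RAM for CPU inference), leaving headroom for the KV cache.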

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
