About

weighted/imatrix quants of https://huggingface.co/Qwen/Qwen1.5-110B

static quants are available at https://huggingface.co/mradermacher/Qwen1.5-110B-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files (a sketch of that step follows).
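
As a platform-neutral alternative, here is a minimal Python sketch of the concatenation step. The filenames are placeholders and the part-naming scheme (`*.gguf.part1of2`) is an assumption, so check the actual file names in the repository:

```python
# Minimal sketch: join split GGUF parts into a single file.
# Assumes the parts sort correctly by name, e.g. *.gguf.part1of2, *.gguf.part2of2
# (hypothetical naming -- verify against the repo's actual file list).
import glob
import shutil

parts = sorted(glob.glob("Qwen1.5-110B.i1-Q4_K_M.gguf.part*"))
with open("Qwen1.5-110B.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream each part so the whole file never has to fit in RAM.
            shutil.copyfileobj(src, out)
```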

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | i1-IQ1_S | 23.7 | for the desperate |
| GGUF | i1-IQ1_M | 26.0 | mostly desperate |
| GGUF | i1-IQ2_XXS | 29.9 | |
| GGUF | i1-IQ2_XS | 33.1 | |
| GGUF | i1-IQ2_S | 34.4 | |
| GGUF | i1-IQ2_M | 37.5 | |
| GGUF | i1-Q2_K | 41.3 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 43.2 | lower quality |
| GGUF | i1-IQ3_XS | 46.0 | |
| GGUF | i1-IQ3_S | 48.6 | beats Q3_K* |
| GGUF | i1-Q3_K_S | 48.6 | IQ3_XS probably better |
| GGUF | i1-IQ3_M | 49.8 | |
| PART 1 PART 2 | i1-Q3_K_M | 53.8 | IQ3_S probably better |
| PART 1 PART 2 | i1-Q3_K_L | 58.2 | IQ3_M probably better |
| PART 1 PART 2 | i1-IQ4_XS | 59.7 | |
| PART 1 PART 2 | i1-Q4_0 | 63.2 | fast, low quality |
| PART 1 PART 2 | i1-Q4_K_S | 63.6 | optimal size/speed/quality |
| PART 1 PART 2 | i1-Q4_K_M | 67.3 | fast, recommended |
| PART 1 PART 2 | i1-Q5_K_S | 76.7 | |
| PART 1 PART 2 | i1-Q5_K_M | 78.9 | |
| PART 1 PART 2 | i1-Q6_K | 91.3 | practically like static Q6_K |
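
To fetch one of the single-file quants above programmatically, the huggingface_hub client can be used. A minimal sketch; the repo id and filename are assumptions based on the naming used in this card, so verify them against the repository's file list:

```python
# Minimal sketch: download one quant file with huggingface_hub.
# repo_id and filename are assumptions -- check the repo's actual file names.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen1.5-110B-i1-GGUF",
    filename="Qwen1.5-110B.i1-IQ3_M.gguf",
)
print(path)  # local path to the downloaded GGUF
```

Multi-part quants (the PART 1 / PART 2 rows) must be downloaded part by part and joined as sketched in the Usage section above.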

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: ikawrakow's graph comparing lower-quality quant types]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
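
Once a quant is downloaded (and joined, if multi-part), one common way to load it is llama-cpp-python; a minimal sketch, assuming that package is installed and using a placeholder model path:

```python
# Minimal sketch: load a GGUF quant and run a short completion
# with llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder -- use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Qwen1.5-110B.i1-Q4_K_M.gguf", n_ctx=2048)
output = llm("The capital of France is", max_tokens=16)
print(output["choices"][0]["text"])
```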

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to common questions, or if you want another model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
