
## About

static quants of https://huggingface.co/wolfram/miquliz-120b-v2.0

While other static and imatrix quants are already available, I wanted a wider selection of quants for this model.

weighted/imatrix quants are available at https://huggingface.co/mradermacher/miquliz-120b-v2.0-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
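For illustration (not part of the original card), here is a minimal Python sketch of that concatenation step, assuming the usual `.partXofY` byte-split convention used for the multi-part files listed below; the filenames are placeholders for whichever quant you downloaded:

```python
import shutil
from pathlib import Path

# Placeholder file names for illustration; substitute the actual part files
# you downloaded from the table below.
parts = [
    Path("miquliz-120b-v2.0.Q6_K.gguf.part1of3"),
    Path("miquliz-120b-v2.0.Q6_K.gguf.part2of3"),
    Path("miquliz-120b-v2.0.Q6_K.gguf.part3of3"),
]
merged = Path("miquliz-120b-v2.0.Q6_K.gguf")

# Assuming the parts are plain byte-splits, concatenating them in order
# restores the single GGUF file.
with merged.open("wb") as out:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, out)
```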

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | Q2_K | 44.6 | |
| GGUF | IQ3_XS | 49.3 | |
| PART 1 PART 2 | Q3_K_XS | 49.3 | |
| PART 1 PART 2 | IQ3_S | 52.1 | beats Q3_K* |
| PART 1 PART 2 | Q3_K_S | 52.2 | |
| PART 1 PART 2 | IQ3_M | 53.8 | |
| PART 1 PART 2 | Q3_K_M | 58.2 | lower quality |
| PART 1 PART 2 | Q3_K_L | 63.4 | |
| PART 1 PART 2 | IQ4_XS | 64.9 | |
| PART 1 PART 2 | Q4_K_S | 68.7 | fast, recommended |
| PART 1 PART 2 | IQ4_NL | 68.8 | prefer IQ4_XS |
| PART 1 PART 2 | Q4_K_M | 72.6 | fast, recommended |
| PART 1 PART 2 | Q5_K_S | 83.2 | |
| PART 1 PART 2 | Q5_K_M | 85.4 | |
| PART 1 PART 2 PART 3 | Q6_K | 99.1 | very good quality |
| PART 1 PART 2 PART 3 | Q8_0 | 128.2 | fast, best quality |
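As a convenience (not part of the original card), downloads can also be scripted with the `huggingface_hub` library; the repository id and part filename below are assumptions based on this card's naming, so check the repository's file listing for the exact names:

```python
from huggingface_hub import hf_hub_download

# Assumed repo id and part filename (verify against the file listing above).
path = hf_hub_download(
    repo_id="mradermacher/miquliz-120b-v2.0-GGUF",
    filename="miquliz-120b-v2.0.Q4_K_S.gguf.part1of2",
)
print(f"Downloaded to: {path}")
```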

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![Quant type comparison graph by ikawrakow](image.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
