
About

This repository contains weighted quants of https://huggingface.co/tiiuae/falcon-180B, using an experimental (read: crappy) method based on 65k semi-random English-only tokens, and requantized from TheBloke's Q8 quant rather than the original weights, because my llama.cpp couldn't read the f16 model.
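
For the curious: "weighted" means the quantizer minimizes an importance-weighted error, with per-weight importance estimated from activations on the calibration tokens. Below is a toy numpy sketch of that general idea only, not llama.cpp's actual i-quant code (which is considerably more elaborate):

```python
import numpy as np

def quantize_block(x: np.ndarray, w: np.ndarray, nbits: int = 2):
    """Toy importance-weighted symmetric quantization of one block.

    Searches for a scale d and integer codes q in [-qmax, qmax] that
    roughly minimize sum(w * (x - d*q)**2), where w is the per-weight
    importance derived from calibration data.
    """
    qmax = 2 ** (nbits - 1) - 1
    base = max(float(np.max(np.abs(x))) / qmax, 1e-8)
    best = (np.inf, 1.0, np.zeros_like(x))
    for f in np.linspace(0.7, 1.3, 25):            # scan candidate scales
        d = base * f
        q = np.clip(np.round(x / d), -qmax, qmax)  # integer codes
        err = float(np.sum(w * (x - d * q) ** 2))  # weighted error
        if err < best[0]:
            best = (err, d, q)
    _, d, q = best
    return d, q

rng = np.random.default_rng(0)
x = rng.normal(size=32)          # one block of weights
w = rng.uniform(0.1, 1.0, 32)    # importance of each weight
d, q = quantize_block(x, w)
print("scale:", d, "reconstruction:", (d * q)[:4])
```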

It would be nice to see a real-world comparison between this i1-Q2_K and, for example, the static Q2_K by TheBloke.

The algorithm used is iterative, so if this works, there will be an i2 variant that might or might not be better.

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files; a minimal sketch follows.
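
Since the parts are plain byte-level splits, joining them is simple concatenation. Here is a minimal Python sketch (the part file names are assumptions based on this repo's naming scheme):

```python
import shutil

# Multi-part quants are plain byte splits; join them in order.
# File names below are assumptions matching this repo's pattern.
parts = [
    "falcon-180B.i1-Q4_K_M.gguf.part1of3",
    "falcon-180B.i1-Q4_K_M.gguf.part2of3",
    "falcon-180B.i1-Q4_K_M.gguf.part3of3",
]
with open("falcon-180B.i1-Q4_K_M.gguf", "wb") as out:
    for name in parts:
        with open(name, "rb") as part:
            shutil.copyfileobj(part, out)  # stream; never loads 100+ GB into RAM
```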

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | i1-IQ1_S | 38.4 | for the desperate |
| GGUF | i1-IQ1_M | 42.0 | mostly desperate |
| GGUF | i1-IQ2_XXS | 47.9 | |
| PART 1 PART 2 | i1-IQ2_XS | 53.1 | |
| PART 1 PART 2 | i1-IQ2_S | 56.6 | |
| PART 1 PART 2 | i1-IQ2_M | 61.3 | |
| PART 1 PART 2 | i1-Q2_K_S | 61.8 | very low quality |
| PART 1 PART 2 | i1-Q2_K | 66.9 | IQ3_XXS probably better |
| PART 1 PART 2 | i1-IQ3_XXS | 69.7 | lower quality |
| PART 1 PART 2 | i1-IQ3_XS | 75.4 | |
| PART 1 PART 2 | i1-Q3_K_XS | 75.5 | |
| PART 1 PART 2 | i1-IQ3_S | 77.9 | beats Q3_K* |
| PART 1 PART 2 | i1-Q3_K_S | 77.9 | IQ3_XS probably better |
| PART 1 PART 2 | i1-IQ3_M | 81.5 | |
| PART 1 PART 2 | i1-Q3_K_M | 85.6 | IQ3_S probably better |
| PART 1 PART 2 | i1-Q3_K_L | 92.1 | IQ3_M probably better |
| PART 1 PART 2 | i1-IQ4_XS | 96.0 | |
| PART 1 PART 2 PART 3 | i1-Q4_K_S | 101.6 | optimal size/speed/quality |
| PART 1 PART 2 PART 3 | i1-Q4_0 | 102.1 | fast, low quality |
| PART 1 PART 2 PART 3 | i1-Q4_K_M | 108.9 | fast, recommended |
| PART 1 PART 2 PART 3 | i1-Q5_K_S | 123.9 | |
| PART 1 PART 2 PART 3 | i1-Q5_K_M | 131.1 | |
| PART 1 PART 2 PART 3 | i1-Q6_K | 147.6 | practically like static Q6_K |
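
If you'd rather script the download, a single-file quant can be fetched and run with huggingface_hub and llama-cpp-python. A sketch (the filename is an assumption based on the table above, and multi-part quants must be concatenated first):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the single-file quants; filename assumed from the table above.
path = hf_hub_download(
    repo_id="mradermacher/falcon-180B-i1-GGUF",
    filename="falcon-180B.i1-IQ2_XXS.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm("The Falcon 180B model is", max_tokens=32)
print(out["choices"][0]["text"])
```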

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: ikawrakow's quant quality comparison graph]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
