Try my quantizations...

#3 opened by ZeroWw

https://huggingface.co/ZeroWw/Mistroll-7B-v2.2-GGUF/tree/main

My own (ZeroWw) quantizations.
Output and embed tensors are kept at f16.
All other tensors are quantized to q5_k or q6_k.

Result:
both the f16.q6 and f16.q5 files are smaller than the standard q8_0 quantization,
and they perform as well as the pure f16 model.
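For reference, a mix like this can be produced with llama.cpp's `llama-quantize` tool by pinning the output and token-embedding tensor types while choosing a base type for everything else. The sketch below is a minimal illustration, not the exact command used here: the paths and filenames are placeholders, and the flags assume a reasonably recent llama.cpp build.

```python
import subprocess

# Placeholder paths -- point these at your local llama.cpp build and GGUF files.
QUANTIZE_BIN = "./llama-quantize"           # llama.cpp quantization tool
SRC = "Mistroll-7B-v2.2.f16.gguf"           # full-precision GGUF source
DST = "Mistroll-7B-v2.2.f16.q6.gguf"        # mixed-precision output

# Keep the output and token-embedding tensors at f16,
# and quantize all remaining tensors to Q6_K (use Q5_K for the f16.q5 variant).
subprocess.run(
    [
        QUANTIZE_BIN,
        "--output-tensor-type", "f16",      # output tensor stays f16
        "--token-embedding-type", "f16",    # embed tensor stays f16
        SRC,
        DST,
        "Q6_K",                             # base type for all other tensors
    ],
    check=True,
)
```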

BarraHome changed discussion status to closed
