My alternative quantizations.

#5
by ZeroWw

These are my own quantizations (updated almost daily).

The difference from normal quantizations is that I quantize the output and embedding tensors to f16,
and the other tensors to q5_k, q6_k or q8_0.
This produces models that are barely degraded, if at all, and smaller in size.
They run at about 3-6 t/s on CPU only using llama.cpp,
and obviously faster on machines with capable GPUs.
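
For reference, here is a minimal sketch of how a quant like this can be produced with llama.cpp's llama-quantize tool, assuming a recent build that has the --output-tensor-type and --token-embedding-type options; the binary path and file names below are just placeholders.

```python
import subprocess

# Placeholder paths -- adjust to your own setup.
LLAMA_QUANTIZE = "./llama-quantize"            # llama.cpp quantize binary
SRC = "model-f16.gguf"                         # full-precision source GGUF
DST = "model-q6_k-f16-out-embed.gguf"          # output file name is arbitrary

# Keep output.weight and token_embd.weight at f16, quantize the rest to q6_k.
# The two --*-type flags exist in recent llama-quantize builds; older builds
# may not have them.
subprocess.run(
    [
        LLAMA_QUANTIZE,
        "--output-tensor-type", "f16",
        "--token-embedding-type", "f16",
        SRC,
        DST,
        "q6_k",
    ],
    check=True,
)
```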

@J22
that's normal, but check how q6_k and q5_k perform compared to q8_p and q8_0 :D

and then check how q6_k and q5_k perform compared to f16.

you'll be surprised.
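
An easy way to compare them is to run the same perplexity test over each variant with llama.cpp's perplexity tool and look at the final PPL numbers. A minimal sketch, assuming placeholder file names and a recent build where the binary is called llama-perplexity (older builds name it perplexity):

```python
import subprocess

# Placeholder paths -- adjust to your own setup.
LLAMA_PERPLEXITY = "./llama-perplexity"   # llama.cpp perplexity binary
TEST_FILE = "wiki.test.raw"               # any held-out text file works

# Run the identical test over each variant and compare the reported perplexity.
for model in ["model-f16.gguf", "model-q8_0.gguf",
              "model-q6_k.gguf", "model-q5_k.gguf"]:
    subprocess.run([LLAMA_PERPLEXITY, "-m", model, "-f", TEST_FILE], check=True)
```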

Would you say that F16 has the best output out of all the quantized formats?

yes, indeed. But there is very little degradation in the others, because they also keep the most important tensors at f16 while the rest are at the specified quantization.
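
If you want to check which tensors stayed at f16 in a given file, here is a small sketch using the gguf Python package (pip install gguf) and its GGUFReader; the file name is a placeholder.

```python
from gguf import GGUFReader

reader = GGUFReader("model-q6_k-f16-out-embed.gguf")  # placeholder file name

# Print each tensor's quantization type; output.weight and token_embd.weight
# should show F16 while the bulk of the blk.* tensors show the chosen quant.
for tensor in reader.tensors:
    print(f"{tensor.name:40s} {tensor.tensor_type.name}")
```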

Got it. I have been trying to use the f16 model with open-webui but have been unable to run inference on it (I was able to do it with q6_k); I'm guessing this is an open-webui issue where it either fails or is extremely slow when dealing with models that are a bit large (more than 8 GB, I think).
Thanks for the quick reply!
