Quantization suggestion

#3
by ZeroWw - opened

Always quantize the output and the embed tensors to f16 and the others to q4-q8.
Quantizing output and embed tensors more will degrade the model a lot.
I tried quantizing output/embed to q8 but the results were way worse than f16.
My best recipe is f16 for output/embed and q5_k or q6_k for the others.

Examples:
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q5.gguf q5_k

quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q6_k

interesting, i've never heard anyone mention this before, i may look into this to see how it works..

> interesting, i've never heard anyone mention this before, i may look into this to see how it works..

You will see the difference. Consider the embed tensor as the model's "understanding" and the output tensor as its "speaking".
I want those to be as untouched as possible, while the reasoning in the middle can be less detailed...
Let's say you need to describe whether something is good or bad: surely 2 bits would be seeing the world in black and white, but 8 bits (256 shades between good and bad) could be overkill. So I tried various quantizations, and lowering the output and embed tensors even to Q8 had a more significant impact than quantizing the "inner" tensors more aggressively.
If instead of testing them synthetically you chat with them for a while, you will see the difference.
To counter-prove my theory I even did the opposite: I quantized the output and embed heavily and the inner ones lightly. The result was horrible; it felt like chatting with a brain-damaged child.

F16 seems a bit aggressive though, it massively increased the size of the quant. I could consider Q8 instead; I imagine there's hardly a difference between f16 and Q8, especially compared to Q5
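
(For reference, keeping the same recipe but with Q8_0 for the two special tensors should just be a matter of swapping the type flags, roughly like the sketch below; the file names are placeholders:)

quantize.exe --allow-requantize --output-tensor-type q8_0 --token-embedding-type q8_0 model.f16.gguf model.q8.q5.gguf q5_k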

I'd also be interested in running some PPL tests.. may be interesting
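
(For anyone wanting to reproduce a PPL run: llama.cpp's perplexity tool can be pointed at a quant and a raw text file, roughly as sketched below; the wikitext path is just a placeholder and the binary name may differ by build:)

perplexity.exe -m model.f16.q5.gguf -f wikitext-2-raw/wiki.test.raw -c 512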

quick test

| Quant | embed/output | PPL |
| --- | --- | --- |
| Q8 | default | 6.8556 +/- 0.04371 |
| Q8 | f16 | 6.8543 +/- 0.04370 |
| Q4_K_M | default | 6.8888 +/- 0.04385 |
| Q4_K_M | f16 | 6.8844 +/- 0.04382 |

considering it's 200-300 MB larger for 0.004 PPL... it's hard to be sure if this is worth it. Got any more reliable tests?

Honestly no, but I tend to chat with some models (Mistral v0.3 Instruct for example) and the difference is huge in both understanding and expressing, considering the slight increase in size. Perhaps it doesn't show up in synthetic tests, no idea. If you try the opposite you will understand how impactful the output/embed tensors are compared to the other ones... I tried q6_k for output/embed and f16 for the inner ones, and the difference was huge (way worse, sort of lobotomized).

q4 is quite a stretch... the best results (with Mistral 7B v0.3 Instruct, WizardLM-2 7B and Llama-3-8B) were using f16 for output/embed and q6_k or q5_k for the others.

I'd really like it if you had some hard examples, even just a prompt that shows a difference in generation, otherwise I'm not sure I can justify it

I can consider Q8, but f16 is a sizeable increase in the final model, to the point where I question the value versus just using a bigger quant level

this is definitely worth continued investigation, but I won't - yet - blindly implement it across the board. I do really appreciate the suggestion and bringing it to my attention

This has caught my attention, so I did some comparisons, and the f16 variant has won pretty much every time. Here is a side-by-side comparison of L3-8B-Stheno-v3.2 summarizing Chapter 2032 from the webnovel Overgeared, which is roughly 2700 tokens long:

Q8_0:

It appears that Grid has arrived at the Collapsed Temple, where the demon cultivators are hiding. Marie Rose has used her blood magic to clear a path through the poisonous lake, revealing a massive temple at the end. Noe has also joined Grid, bearing the armor and gear crafted by Khan and Hexetia. The situation is tense as Grid prepares to confront the cultivators, but Marie Rose's actions have given them an advantage.

f16:

The narrative now shifts to Grid and his companions approaching the temple, with Marie Rose having cleared a path through the poisonous lake using her blood magic. Noe, who had been guided by angels, joins them, though he seems disoriented and exhausted from his journey. Grid, meanwhile, remains focused on their mission, dispatching Noe to fight without hesitation. The story hints at the complex world politics and alliances at play, with Asgard and the demon cultivators now on the same side against Grid and the Overgeared Kingdom. The chapter ends with a sense of foreboding and tension as the group prepares to face whatever awaits them within the ancient temple.


The Q8_0 essentially just picked a few sentences and spliced them together, while the f16 actually gave a deeper analysis of the chapter. In my other comparisons, I generally found that the Q8_0 can make obvious connections between things, while the f16 can understand more without being explicitly told. The f16 variant had a file size increase of about 900 MB, which, for me, is definitely worth it. I can't effectively test 70B models on my machine, but the embeddings might be one of the reasons quantization affects smaller models more than larger ones.

I wonder if it would be worth considering releasing a few side by side, like a pseudo Q8+ for those who want to push quality even further.. especially for these smaller models it would be almost negligible for compute to make 2 extra sizes, one for Q8 and one for Q6

As I said, my tests are just chatting for a long time with them about any subject that comes to mind or just brainstorming.
If I quantize the embed and output to q6_k (for example) and the others to f16, I get a horrible result: lobotomized, like a brain-damaged child.
If I do the opposite I get very close to the normal "pure" F16. Sometimes I can't tell the difference.
And the size of those two experiments is almost the same, because the embed+output tensors are about as big as the rest.
This tells me that a good quantization is the one I proposed (or variations of it), keeping in mind that the output tensor influences the "expression": the more it gets quantized, the more the model speaks like a child.
The embed tensor instead seems to influence its understanding.
Probably I am not discovering anything and I am just reinventing the wheel... but I thought it was worth mentioning.
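
(If you want to check how much of a file the embed and output tensors actually account for, the gguf-py package that ships with llama.cpp includes a dump script that lists every tensor with its name, shape and type; the exact script name varies between versions, so treat this invocation as a sketch:)

python gguf-py/scripts/gguf_dump.py model.f16.q5.gguf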

> I wonder if it would be worth considering releasing a few side by side, like a pseudo Q8+ for those who want to push quality even further.. especially for these smaller models it would be almost negligible for compute to make 2 extra sizes, one for Q8 and one for Q6

I usually first convert the HF model to f16. Then I produce my quantizations using this naming template:

model.f16.q6.gguf, or f16.q5, or q8.q4 (this last one gives very bad results but can be useful for simpler tasks)

where f16 represents the embed and output tensors and the other represents the "inner" tensors.
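
(Put together, the whole pipeline looks roughly like the following; script and binary names depend on your llama.cpp checkout, so treat this as a sketch rather than the exact commands used above:)

python convert-hf-to-gguf.py ./model-hf --outtype f16 --outfile model.f16.gguf
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q6_k
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q5.gguf q5_k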

Also, I found that a "pure" Q8_0 performs way worse than an f16/q5, which has practically the same size.
I am curious about your tests with other models...

14,484,731,552 WizardLM-2-7B.fp16.gguf
 4,263,540,448 WizardLM-2-7B.fp16.q4.gguf
 5,131,409,120 WizardLM-2-7B.fp16.q5.gguf
 5,942,064,864 WizardLM-2-7B.fp16.q6.gguf
 4,368,438,976 zephyr-7b-beta.Q4_K_M.gguf
14,484,732,192 zephyr-orpo-7b-v0.2.f16.gguf
 5,458,065,696 zephyr-orpo-7b-v0.2.f16.q5.gguf
 6,251,313,440 zephyr-orpo-7b-v0.2.f16.q6.gguf

In these, the difference between the f16 (14 GB) models and the f16/q5 and f16/q6 is minimal.
I start to notice degradation at f16/q4.
I am still experimenting, because in the same way some inner tensors could probably be quantized more than others... but that will take a long time to test...

> This has caught my attention, so I did some comparisons, and the f16 variant has won pretty much every time. Here is a side-by-side comparison of L3-8B-Stheno-v3.2 summarizing Chapter 2032 from the webnovel Overgeared, which is roughly 2700 tokens long:

Yep! Summarizing requires understanding (embed) and then expressing (output), so it's probably a good test for this.
The more difficult the subject, the more reasoning is needed and the more evident the results will be.

I also think that there should be leaderboards organized in this way:

  1. size of the model. (in bytes)
  2. reasoning and expressing evaluation.

Number 2 could be evaluated by summarizing varied, complex narratives or scientific papers,
with the strict rule of using original models, not ones fine-tuned on those tasks.
That would probably prompt people to start optimizing for size and efficiency.
That's just my thought... perhaps naive.

I'll release a couple with the f16 embed and output alongside the normal ones later today, and you can make some side-by-side comparisons

Another important thing is this: from my tests, if I quantize a model to a pure Q8, it comes out bigger and dumber than an f16/q6 or f16/q5, which on average are 20-30% smaller.

You can find them here: https://huggingface.co/ZeroWw/Test/tree/main


@ZeroWw seems really interesting, is there any discussion about this on the llama.cpp repo?

@turboderp do you know if a similar approach has already been tried in exllama2? It seems that exl2 does some calibration to find the best quant per layer etc, maybe you've explored this?

This hasn't been an issue with Phi3 or any other model to my knowledge. All the objective tests I can do show that a quantized head layer works fine for this model (difference compared to FP16 model vanishes completely around 6 bpw). So if it's subjectively dumber somehow, I have no idea why that would be. And I wouldn't know where to begin investigating it without something a little more concrete to go on.

Can't say if there's anything particular about GGUF that causes it to clamp the logits differently when the output layer is FP16, and maybe that has an effect at extreme temperatures or something?

i don't think it's specific to phi3

i also don't know that i believe there is that big a difference, i'd want to see more side-by-side comparisons to confirm any changes in behaviour before i commit to doing it too much

same can be done with exl2 I suppose. can the head bits even go above 8?
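
(On the exl2 side, the exllamav2 conversion script exposes a head-bits option; the flag names below are from memory, so double-check convert.py --help before relying on them:)

python convert.py -i ./model-hf -o ./work -cf ./model-5.0bpw-exl2 -b 5.0 -hb 8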
