Q4_K_S scores higher than Q8_0 on benchmarks

#5
by JorritJ

I'm testing an Aider deployment with different models. Aider has its own benchmark, in which it asks the model 134 programming questions and then tests the results. From previous tests on different models I know that, yes, it is normal for the larger model to score better. In the case of this model, though, I'm seeing these scores:

| model | first-shot | second-shot |
| --- | --- | --- |
| deepseek-coder-33b-instruct.Q4_K_S.gguf | 43.3% | 52.2% |
| deepseek-coder-33b-instruct.Q8_0.gguf | 38.8% | 44.8% |

This seems wrong. The Q4_K_S test was run twice on an RTX 4090 and once on an M1 Max, with similar results. The Q8_0 test was run only once, on the M1 Max. I have no idea what is going on here. Could the Q8_0 file be bad?
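One quick way to check for an outright broken quant before rerunning the full benchmark is a coherence smoke test: a corrupted or mis-quantized GGUF usually produces gibberish even on a trivial prompt. A minimal sketch, assuming the llama-cpp-python bindings; the model path is a placeholder and the prompt format is only an approximation of DeepSeek Coder's instruct template:

```python
# Smoke test: a healthy Q8_0 should answer a trivial coding prompt
# at least as coherently as the Q4_K_S; gibberish here points to a
# bad file rather than a benchmark quirk.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-33b-instruct.Q8_0.gguf",  # file under test
    n_ctx=2048,        # a small context is enough for this check
    n_gpu_layers=-1,   # offload all layers; drop this on CPU-only machines
)

out = llm(
    "### Instruction:\nWrite a Python function that reverses a string.\n### Response:\n",
    max_tokens=128,
    temperature=0.0,   # deterministic, so reruns are comparable
)
print(out["choices"][0]["text"])
```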

Both files were downloaded after your (several) fixes.
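A corrupted download can also be ruled out directly: Hugging Face shows the SHA256 of each LFS file on its file page, so comparing a locally computed hash against that settles the question. A minimal standard-library sketch (the path is whichever file you're checking):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a 30+ GB GGUF never sits in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare the output against the SHA256 listed on the model's file page.
print(sha256_of("deepseek-coder-33b-instruct.Q8_0.gguf"))
```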
