Perplexity test goes way lower than the base model at equivalent quant!

#1
by Nexesenex - opened

Impressive results, confirmed on 2 exl2-2 quants:

(Screenshots: perplexity results from the Text generation web UI, 2024-01-07.)

Your model https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-limarp-exl2 matches the perplexity of the original Mixtral Instruct quantized by turboderp, and Norobara lowers it beyond the margin of error.
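
For anyone who wants to reproduce this kind of comparison outside the web UI, here is a minimal sliding-window perplexity sketch. It assumes a full-precision Hugging Face checkpoint (`mistralai/Mixtral-8x7B-Instruct-v0.1` as a placeholder) and wikitext-2 as the evaluation text; the screenshots above were presumably produced with text-generation-webui's built-in perplexity evaluation directly on the exl2 quants, so this is only an illustration of the metric, not the exact setup used.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # placeholder; swap in the model under test
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Evaluation text: wikitext-2 test split, concatenated into one long sequence.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
encodings = tokenizer(text, return_tensors="pt")

max_length, stride = 2048, 512
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end_loc = 0
for begin_loc in range(0, seq_len, stride):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc  # tokens newly scored in this window
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # mask context tokens so only the new ones count

    with torch.no_grad():
        loss = model(input_ids, labels=target_ids).loss  # mean NLL over scored tokens
    nlls.append(loss)

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

print(f"perplexity: {torch.exp(torch.stack(nlls).mean()).item():.3f}")
```

Lower is better, so a fine-tune landing at or below the base model's perplexity at the same bitrate is exactly the result shown above.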

Good job!

