Perplexity issue

#1 by mammour - opened

Hey, the base perplexity in the measurements is over 2000 when it should be < 10, kudos

(all of your measurements seem to have this issue except for Tiefighter)

I use the exact same command for all of them. The only difference is that the first two were done with ExLlamaV2 0.0.1, whereas Tiefighter was done with 0.0.8.

I'm not entirely sure what "perplexity" means here. The quantizations perform completely fine; I shared the MythoMax quant privately before uploading to HF and the responses were overwhelmingly positive.
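
For context, perplexity is just the exponential of the model's average negative log-likelihood on the true next tokens of an evaluation text, so lower is better; single-digit values are typical for 13B-class models on standard eval sets, while values in the thousands point to a broken measurement rather than a broken model. Here is a minimal sketch of the calculation in generic PyTorch (an illustration of the metric, not the exact ExLlamaV2 code):

```python
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Perplexity = exp(mean negative log-likelihood of the true tokens).

    logits: [seq_len, vocab_size] raw model outputs at each position
    labels: [seq_len] token ids the model should have predicted next
    """
    nll = F.cross_entropy(logits, labels)  # mean NLL over the sequence
    return torch.exp(nll).item()

# Toy usage: random logits over a 32k vocab give a perplexity on the order
# of the vocab size; a healthy quant on real text should land in single digits.
logits = torch.randn(128, 32000)
labels = torch.randint(0, 32000, (128,))
print(perplexity(logits, labels))
```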

I ran test_inference.py on my 8-bit MythoMax quant and got an evaluation perplexity of 5.7459, which leads me to believe it's just an error in how the value is represented in the .json from older exllama2 versions. The actual quants should still work as intended.
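
For reference, the evaluation was run roughly like the line below; the model path and dataset file are placeholders, and the flag names are from recent ExLlamaV2 versions, so they may differ slightly in the 0.0.x releases mentioned above:

```
python test_inference.py -m <path to 8bpw exl2 model dir> -ed <wikitext test .parquet>
```

This prints an evaluation perplexity over the dataset, which is where the 5.7459 figure comes from.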

mammour changed discussion status to closed
