Error during EXL2 Quants
#1 — opened by FluffyKaeloky
I was trying to quantize this model to EXL2 format using the Exllamav2 scripts. During quantization, however, it returned a quantization error (2).
Looking it up online, I was able to find this: https://github.com/turboderp/exllamav2/issues/305
I'm not well-versed enough in the science of LLMs to fully understand what it means, but could it mean that the miqu model you used for the finetune had broken weights? Maybe you'll be able to extract more information from it than I can.
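For anyone who wants to check a checkpoint for this directly, here is a minimal sketch (this is not the exllamav2 tooling itself, and `MODEL_DIR` is a placeholder) that loads each safetensors shard and counts inf/NaN values per tensor:

```python
import glob

import torch
from safetensors.torch import load_file

# Placeholder -- point this at the fp16 model directory.
MODEL_DIR = "/path/to/miqu-fp16"

for shard in sorted(glob.glob(f"{MODEL_DIR}/*.safetensors")):
    tensors = load_file(shard)
    for name, tensor in tensors.items():
        # Count non-finite values in each weight tensor.
        n_inf = torch.isinf(tensor).sum().item()
        n_nan = torch.isnan(tensor).sum().item()
        if n_inf or n_nan:
            print(f"{shard} :: {name}: {n_inf} inf, {n_nan} nan")
```

Any tensor flagged here would plausibly explain a failed calibration pass during quantization.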
Hello, I tried to LASER the Miqu fp16 I used yesterday, and yes, there are some layers that have, and I quote, "inf. perplexity". We know the fp16 comes from a Q5 quant, so, yep, some pieces are probably broken.
Sorry to hear that.