No 8-bit weights

#2
by Minus0

It looks like the current weights in the 8-bit branches are the 4-bit versions.
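For anyone wanting to verify this themselves: a quick way to check what a branch actually contains is to read the `quantize_config.json` from that revision and inspect the `bits` field. A minimal sketch, assuming the usual repo/branch naming; the repo id and branch name below are illustrative, so substitute the actual ones:

```python
import json
from huggingface_hub import hf_hub_download

# Assumed repo id and branch name, for illustration only.
path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-v0.1-GPTQ",
    filename="quantize_config.json",
    revision="gptq-8bit-128g-actorder_True",  # the supposedly 8-bit branch
)

with open(path) as f:
    config = json.load(f)

# If the branch was mis-uploaded with 4-bit weights, this prints 4 instead of 8.
print(config["bits"])
```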

Yeah, I believe the only way to do that is with AutoGPTQ, and it doesn't have Mistral support yet.

@Minus0 thanks for the report, I'm not sure what happened there. You're right, I've somehow done 4-bit twice.

@johnwick123forevr it does work with Transformers as well - I have other Mistral 7B repos with 8-bit quants.

This was the first Mistral repo I made and I did them all by hand, as I didn't have Transformers GPTQ code written at that point. I obviously screwed it up :) I will remake them.
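For reference, here is a minimal sketch of what 8-bit GPTQ quantisation via Transformers' built-in GPTQ integration looks like (it uses optimum and AutoGPTQ under the hood); the model id and calibration dataset are assumptions for illustration, not necessarily what was used for this repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "mistralai/Mistral-7B-v0.1"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bits=8 is the key difference from the 4-bit branches.
gptq_config = GPTQConfig(
    bits=8,
    dataset="c4",       # calibration dataset
    tokenizer=tokenizer,
)

# Quantisation runs during loading when a GPTQConfig is passed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)
model.save_pretrained("Mistral-7B-GPTQ-8bit")
```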

Oh, I didn’t know that. Thanks for correcting me!

Thanks! Appreciate your work!
