
The reasoning of this model is worse than the original mixtral-8x7b-instruct model.

#4
by mirek190 - opened

The reasoning of this model is worse than the original mixtral-8x7b model.

I have my own set of questions, and this version failed badly compared to the mixtral-8x7b-instruct model.

I've been trying these Mixtrals, and some of them are really good, while others fail my tests.
And the exl2 version fails a lot of logic tests.

Which quant did you try? I read that Q2 and Q6 are broken. I'm going to try Q8 on my main rig and Q4_K_M on my MBP.

I tested the full Q5 version, not the K-quant version.
I heard the K-quant versions are broken with Mixtral 8x7B.
