Mixtral Instruct quant

#1
by cmh - opened

Do you plan on quantizing the instruct version?

Yes, I'm looking into it. Coming soon.

The instruct version is now here.

I have also added the new 2.10 bits-per-weight quantized model to this repo (mixtral-instruct-8x7b-2.10bpw.gguf).

That's fantastic, thanks a lot. I'll report back in the appropriate repo if there's any issue.
