I have managed to convert mixtral to GGUF

#1
by Kquant03 - opened

I managed to merge 8x of these into a Mixtral-style MoE, but I can't test it until I convert it to GGUF.

(screenshot of the conversion error)

Did you name your weights something strange? Here's the model I'm trying to quantize. https://huggingface.co/Kquant03/MistralTrix8x9B/blob/main/README.md

That tensor name, "model.layers.0.block_sparse_moe.experts.0.w3.weight", comes from Mixtral, not MistralTrix, which uses standard Mistral tensor names apart from the extra layers.

For whatever reason, convert.py isn't set up to handle Mixtral-style input there.
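
If it helps anyone hitting the same thing: you can peek at the tensor names in the merged checkpoint before running the converter, to see whether mergekit-moe wrote Mixtral-style expert names. A minimal sketch using the safetensors library; the shard filename below is a placeholder for whichever shards your merge actually produced:

```python
from safetensors import safe_open

# Placeholder shard name; list the actual *.safetensors files in your merged model folder.
shard = "model-00001-of-00019.safetensors"

with safe_open(shard, framework="pt") as f:
    for name in f.keys():
        # Mixtral-style checkpoints keep expert weights under "block_sparse_moe",
        # while plain Mistral checkpoints use "mlp.gate_proj" / "mlp.up_proj" / "mlp.down_proj".
        if "block_sparse_moe" in name or ".mlp." in name:
            print(name)
```

If you see block_sparse_moe names like the one above, the plain Mistral path in convert.py won't match them, and you need a converter that understands the Mixtral layout.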

Thanks for letting me know. Don't worry...I'll just leave it as the base model, then. I think I'm about to drop two back-to-back open LLM #1 spots. That base float is nasty...it knows more than I ever thought it would.

To anyone worried about quantizing MoEs of this model: it's not just this model, it's mergekit-moe in general. Most models over 8x7B will not convert to GGUF. Just letting everyone know.

This model is great, btw...I'm making a 4x MoE of it right now for roleplay haha

Kquant03 changed discussion status to closed
Kquant03 changed discussion status to open
Kquant03 changed discussion title from I'm trying to convert to GGUF to I have managed to convert mixtral to GGUF

After days of merging, editing code, and trying new things...I found out that convert-hf-to-gguf.py works, but it is very buggy when used on MoEs created by mergekit-moe. This thread will be closed now.
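
For anyone who finds this thread later: the run that "works but is buggy" is just the llama.cpp converter pointed at the merged model directory. A rough sketch, wrapped in Python purely for copy-paste; the paths and output filename are placeholders, and the flags can change between llama.cpp versions, so check the script's --help in your checkout first:

```python
import subprocess

# Placeholder paths; point these at your llama.cpp checkout and merged model folder.
llama_cpp_dir = "llama.cpp"
model_dir = "MistralTrix8x9B"

# Run convert-hf-to-gguf.py from llama.cpp. --outfile / --outtype f16 is the usual
# combination, but verify it against `python convert-hf-to-gguf.py --help`.
subprocess.run(
    [
        "python",
        f"{llama_cpp_dir}/convert-hf-to-gguf.py",
        model_dir,
        "--outfile", "mistraltrix-8x9b-f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)
```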

Kquant03 changed discussion status to closed
