https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
#136 opened by yttria
Mistral fixed the tokenizer
Thanks for notifying me, let's try to quant it at once :)
mradermacher changed discussion status to closed
Unfortunately, not even transformers can load it now without crashing. Maybe there is a special way to load it, but that isn't supported by llama.cpp. I'll look around for a workaround, but it seems completely hosed at the moment. Shouldn't have deleted it :(
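In case it helps, this is roughly the kind of load test I mean, just a minimal sketch with plain transformers (the repo id here is an assumption, not the exact invocation I ran):

```python
# Minimal smoke test: try to instantiate the config and tokenizer with
# plain transformers and see whether it crashes.
from transformers import AutoConfig, AutoTokenizer

repo = "mistralai/Mixtral-8x22B-v0.1"  # assumption: the base repo is the one that fails

config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)
print(type(config).__name__, tokenizer.vocab_size)
```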
Hmm, the configs of the base and the instruct model differ. If I replace one with the other, it seems to load. Hope that's correct...
That was a bit unclear: it was the base model that didn't load, not the instruct model.
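For reference, the config swap looks roughly like this sketch (assumption: copying the instruct model's config.json over a local copy of the base model is what makes it load; the path is a placeholder):

```python
# Sketch of the config swap, not the exact commands used.
import shutil
from huggingface_hub import hf_hub_download

base_dir = "/path/to/local/Mixtral-8x22B-v0.1"  # hypothetical local checkout of the base model

# Fetch config.json from the instruct repo and copy it over the base one.
instruct_cfg = hf_hub_download("mistralai/Mixtral-8x22B-Instruct-v0.1", "config.json")
shutil.copy(instruct_cfg, f"{base_dir}/config.json")
```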