mixtral-instruct-8x7b-q3k-medium.gguf

#2
by cmh - opened

There seems to be an issue with mixtral-instruct-8x7b-q3k-medium.gguf. It outputs the first token and then garbage. The other quants are fine, though.
Example:
PS C:\Users\Windows\AI> C:\Users\Windows\AI\main.exe -m C:\Users\Windows\AI\models\mixtral-instruct-8x7b-q3k-medium.gguf -ngl 15 -t 7 --repeat_penalty 1 --no-penalize-nl --color --temp 0 --top-k 50 --top-p 1 -c 8192 -n -1 --seed 1 -p " [INST] You are an LLM trained to follow instructions. Here's an instruction: explain the incompleteness theorems [/INST] "
[...]
"Theβ–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…β–…
PS C:\Users\Windows\AI>

I observe similar behavior with mixtral-instruct-8x7b-q4k-medium.gguf, except I can't even get it to output a single good token. With --temp 0 it only outputs β–…, and with higher temperatures it only outputs nonsense.

Not sure. With the above command I get:

The Incompleteness Theorems are two of the most famous theorems in mathematical logic, both proven by Kurt GΓΆdel in 1931.
The first theorem, known as GΓΆdel's First Incompleteness Theorem, states that no consistent, recursive axiomatic system can prove
all true statements about the natural numbers. This theorem implies that there are true statements that cannot be proven within any
given axiomatic system, demonstrating that mathematics is inherently incomplete.

The second theorem, GΓΆdel's Second Incompleteness Theorem, states that within any consistent, recursive axiomatic system, the
consistency of the system cannot be proven. This theorem highlights the limitations of formal systems and shows that mathematics
cannot be reduced to a set of axioms that can be proven to be consistent.

These theorems have profound implications for the foundations of mathematics and have been the subject of much debate and
discussion in the philosophical and mathematical communities. They demonstrate that there are limits to what can be proven within
formal systems and that mathematics is not just a matter of deduction from axioms, but also involves creativity and intuition. [end of text]

Yeah, mixtral-instruct-8x7b-q4k-medium.gguf is also borked here.
Llama.cpp was compiled with -DLLAMA_CUBLAS=ON -DLLAMA_CUDA_F16=ON, so I recompiled without F16, but the behavior is the same.
I don't have those issues with the files that TheBloke has made available (but some didn't work in the past and they removed them).
IDK. Just to be thorough, I'm using CUDA 12.1, llama.cpp's master branch (latest commit cd108e6) on Windows 11 with an RTX 3060 12GB.
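
For reference, the build was along these lines (a rough sketch; exact CMake generator and paths may differ):
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_CUBLAS=ON -DLLAMA_CUDA_F16=ON
cmake --build build --config Release
# second attempt: same thing, just without -DLLAMA_CUDA_F16=ON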

edit:
mixtral-instruct-8x7b-2.10bpw.gguf works fine
mixtral-instruct-8x7b-2.34bpw.gguf does not.

The file is 12.7 GB; the model size reported is wrong:
llm_load_print_meta: model ftype = unknown, may not work
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 42.15 GiB (7.75 BPW)
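
Quick sanity check on those numbers (rough arithmetic in PowerShell, assuming the file being loaded is the 2.34bpw one):
# expected size at 2.34 BPW: params * BPW / 8 bits-per-byte, in GiB
46.70e9 * 2.34 / 8 / 1GB   # ~12.7 GiB, matches the actual file on disk
# size implied by the reported 7.75 BPW
46.70e9 * 7.75 / 8 / 1GB   # ~42.1 GiB, matches the reported 42.15 GiB
So the metadata describes far more tensor data than the file actually contains, which would be consistent with the crash below.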

Llama.cpp crashes with
GGML_ASSERT: C:\Users\Windows\AI\llama.cpp\ggml-cuda.cu:7899: false

This is fixed by commit 0c06c11.
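
In case it helps anyone else, updating past that commit and rebuilding with the same flags as before was enough (a sketch; adjust to your setup):
git pull            # or: git checkout 0c06c11
cmake -B build -DLLAMA_CUBLAS=ON -DLLAMA_CUDA_F16=ON
cmake --build build --config Release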

Both mixtral-instruct-8x7b-q3k-medium.gguf and mixtral-instruct-8x7b-q4k-medium.gguf work correctly for me now. β˜Ί
