
Quantization of Mixtral-8x7B (https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)

Quantized with llama.cpp, using PR #4406 "add Mixtral support" (https://github.com/ggerganov/llama.cpp/pull/4406).
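
For reference, a minimal sketch of the usual llama.cpp convert-and-quantize workflow from the time of that PR; the paths and output filenames below are illustrative assumptions, not a record of the exact commands used:

# Convert the original Hugging Face weights to a full-precision GGUF file
# (directory and output names are assumptions)
python convert.py ./Mixtral-8x7B-v0.1 --outtype f16 --outfile ./models/mixtral-8x7b-32k-f16.gguf

# Quantize the f16 GGUF down to 4-bit q4_0
./quantize ./models/mixtral-8x7b-32k-f16.gguf ./models/mixtral-8x7b-32k-q4_0.gguf q4_0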

Instructions to run:

./main -m ./models/mixtral-8x7b-32k-q4_0.gguf \
  -p "I believe the meaning of life is" \
  -ngl 999 -s 1 -n 128 -t 8
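
Flag reference for the command above:

- -m: path to the GGUF model file
- -p: prompt text
- -ngl: number of layers to offload to the GPU (999 offloads all layers)
- -s: RNG seed
- -n: number of tokens to generate
- -t: number of CPU threads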

Model details: GGUF format, 46.7B params, llama architecture, 4-bit quantization (q4_0).