This is a reconversion and quantization of https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO.
There was a breaking change in llama.cpp's GGUF file format in https://github.com/ggerganov/llama.cpp/pull/6387, and the https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF repo hasn't been updated since. This prevents the old files from being memory-mapped, so the model takes much longer to load than necessary even when the file is already in the OS page cache.
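A reconversion like this one can be reproduced roughly as follows. The script and binary names below are assumptions based on a recent llama.cpp checkout and have changed across versions (for example, the `quantize` binary was later renamed `llama-quantize`), and the local paths and the `Q4_K_M` quantization type are placeholders:

```shell
# Sketch only: requires a llama.cpp checkout and a local clone of the
# original (non-GGUF) model; adjust paths and names to your setup.

# 1. Convert the original HF model to GGUF using the current file format:
python convert-hf-to-gguf.py /path/to/Nous-Hermes-2-Mixtral-8x7B-DPO \
    --outfile nous-hermes-2-mixtral-8x7b-dpo.gguf

# 2. Quantize the freshly converted file (Q4_K_M shown as an example type):
./llama-quantize nous-hermes-2-mixtral-8x7b-dpo.gguf \
    nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf Q4_K_M
```

Because the output is written by a current llama.cpp, its tensors are aligned for the new format and the file can be memory-mapped again at load time.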