---
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
license: apache-2.0
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: qwp4w3hyb/Nous-Hermes-2-Mixtral-8x7B-DPO-iMat-GGUF
  results: []
language:
- en
datasets:
- teknium/OpenHermes-2.5
---

# Nous-Hermes-2-Mixtral-8x7B-DPO-iMat-GGUF

Source Model: [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)

Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [46acb3676718b983157058aecf729a2064fc7d34](https://github.com/ggerganov/llama.cpp/commit/46acb3676718b983157058aecf729a2064fc7d34).

The importance matrix was generated from the f16 GGUF with this command:

```
./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```

using the calibration dataset from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
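For reference, a quantized file can then be produced from the f16 GGUF using that imatrix with llama.cpp's `quantize` tool. This is a minimal sketch, not a command taken from this repo: the file names, output path, and the Q4_K_M quant type are illustrative placeholders.

```sh
# Quantize the f16 GGUF, guided by the importance matrix generated above.
# Paths and the Q4_K_M quant type are placeholders, not from this repo.
./quantize --imatrix $out_path/imat-f16-gmerged.dat \
    $out_path/Nous-Hermes-2-Mixtral-8x7B-DPO-f16.gguf \
    $out_path/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_M.gguf \
    Q4_K_M
```

Other quant types work the same way; the very low-bit IQ-series quants in particular generally need an imatrix to give usable quality.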