---
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
---

# Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ

This model is part of a series of HQQ quantization tests. I make no claims about the performance of this model, and it may well change or be deleted. This is an extreme example of quantization.

```python
import torch

from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

# Load the tokenizer and the HQQ-quantized model from the Hub
tokenizer = AutoTokenizer.from_pretrained(
    "macadeliccc/Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ",
    trust_remote_code=True,
)
model = HQQModelForCausalLM.from_pretrained(
    "macadeliccc/Nous-Hermes-2-Mixtral-8x7B-DPO-HQQ",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
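
The snippet above only loads the model. A minimal generation sketch is shown below; it assumes the `model` and `tokenizer` from the loading snippet and uses the ChatML prompt format of the Nous-Hermes-2 series. The prompt text itself is just an illustrative example.

```python
# Minimal generation sketch (assumes `model` and `tokenizer` are loaded as above).
# Nous-Hermes-2 models are trained on the ChatML prompt format.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain HQQ quantization in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt
print(
    tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
)
```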