---
license: apache-2.0
tags:
- moe
train: false
inference: false
pipeline_tag: text-generation
---
## Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-3bit-metaoffload-HQQ
This is a version of the Mixtral-8x7B-Instruct-v0.1 model quantized with a mix of 4-bit and 3-bit via Half-Quadratic Quantization (HQQ). More specifically, the attention layers are quantized to 4-bit and the experts are quantized to 3-bit. Unlike the 2bitgs8 model, which was designed to use less GPU memory, this one uses about 22GB of VRAM: it targets those who want better quality and can dedicate most of a 24GB GPU to the model.

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/636b945ef575d3705149e982/-gwGOZHDb9l5VxLexIhkM.gif)
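For context, HQQ lets you assign a different quantization config to each linear-layer tag, which is how a mixed 4-bit/3-bit setup like this one can be expressed. The sketch below is illustrative only: the layer tags follow the Mixtral module names, `offload_meta` corresponds to the "metaoffload" in the model name, and the group sizes are assumptions rather than the exact recipe used for this checkpoint.
``` Python
from hqq.core.quantize import BaseQuantizeConfig

#Illustrative sketch: 4-bit attention, 3-bit experts, quantization meta-data offloaded
#group_size=64 is an assumption, not necessarily the setting used for this checkpoint
attn_params   = BaseQuantizeConfig(nbits=4, group_size=64, offload_meta=True)
expert_params = BaseQuantizeConfig(nbits=3, group_size=64, offload_meta=True)

quant_config = {}
#Attention projections -> 4-bit
for tag in ('self_attn.q_proj', 'self_attn.k_proj', 'self_attn.v_proj', 'self_attn.o_proj'):
    quant_config[tag] = attn_params
#MoE expert weights -> 3-bit
for tag in ('block_sparse_moe.experts.w1', 'block_sparse_moe.experts.w2', 'block_sparse_moe.experts.w3'):
    quant_config[tag] = expert_params
```
----------------------------------------------------------------------------------------------------------------------------------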
## Performance

| Metric              | Mixtral Original | HQQ quantized |
|---------------------|------------------|---------------|
| Runtime VRAM        | 94 GB            | 22.3 GB       |
| ARC (25-shot)       | 70.22            | 69.62         |
| Hellaswag (10-shot) | 87.63            |               |
| MMLU (5-shot)       | 71.16            |               |
| TruthfulQA-MC2      | 64.58            | 62.63         |
| Winogrande (5-shot) | 81.37            | 81.06         |
| GSM8K (5-shot)      | 60.73            |               |
| Average             | 72.62            |               |

### Basic Usage
To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:
``` Python
import transformers
from threading import Thread

model_id = 'mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-3bit-metaoffload-HQQ'

#Load the model
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = HQQModelForCausalLM.from_quantized(model_id)

#Optional: set backend/compile
#You will need to install the CUDA kernels beforehand:
# git clone https://github.com/mobiusml/hqq/
# cd hqq/kernels && python setup_cuda.py install
from hqq.core.quantize import *
HQQLinear.set_backend(HQQBackend.ATEN_BACKPROP)

def chat_processor(chat, max_new_tokens=100, do_sample=True):
    tokenizer.use_default_system_prompt = False
    streamer = transformers.TextIteratorStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)

    #Mixtral-Instruct prompt format; the generation settings below are typical defaults
    generate_params = dict(
        tokenizer("[INST] " + chat + " [/INST]", return_tensors="pt").to('cuda'),
        streamer=streamer,
        max_new_tokens=max_new_tokens,
        do_sample=do_sample,
        pad_token_id=tokenizer.pad_token_id,
    )

    #Run generation in a background thread and consume tokens as they stream in
    t = Thread(target=model.generate, kwargs=generate_params)
    t.start()

    outputs = []
    for text in streamer:
        outputs.append(text)
        print(text, end="", flush=True)
    t.join()

    return outputs
```
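With the model and `chat_processor` set up as above, a call like the following streams the generated text to stdout and returns the collected chunks (the prompt is just an example):
``` Python
#Example: generate up to 128 new tokens for a sample instruction
outputs = chat_processor("Explain mixture-of-experts in two sentences.", max_new_tokens=128)
```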