---
license: apache-2.0
tags:
- mlx
---

# GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-2.2-mlx

This quantized low-bit model was converted to MLX format from [`GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-2.2`](https://huggingface.co/GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-2.2).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-2.2) for more details on the model.

## Use with mlx

```bash
pip install gbx-lm
```

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-2.2-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
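
Since this is an instruct-tuned Mistral model, prompts generally work better when wrapped in the model's chat template rather than passed as raw text. Below is a minimal sketch, assuming the tokenizer returned by `load` wraps a Hugging Face tokenizer and exposes the standard `apply_chat_template` method (as in the upstream `mlx-lm` project); the message content is illustrative.

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-2.2-mlx")

# Assumption: the tokenizer supports apply_chat_template, which renders
# the messages with Mistral's [INST] ... [/INST] instruction markers.
messages = [{"role": "user", "content": "Explain quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```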