---
license: apache-2.0
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
tags:
- transformers
- BitsAndBytes
base_model_relation: quantized
---

We quantized [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) to a 4-bit model using `BitsAndBytes`.

To use this model, first install `bitsandbytes`:

```bash
pip install -U bitsandbytes
```

Then load the model with `AutoModelForCausalLM`:

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("minicreeper/Mistral-Small-24B-Instruct-2501-bnb-4bit")
```
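
For an end-to-end test, here is a minimal generation sketch (not part of the original card): it assumes a GPU is available and that `accelerate` is installed so `device_map="auto"` can place the weights; the prompt text is only an illustration.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minicreeper/Mistral-Small-24B-Instruct-2501-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint is stored pre-quantized, so no extra BitsAndBytesConfig is needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt using the model's chat template.
messages = [{"role": "user", "content": "Write a haiku about quantization."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```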