This model was converted to MLX format and quantized to Q4 (4-bit) from mistralai/Mistral-Small-24B-Instruct-2501 using mlx-lm version 0.4.0. Refer to the original model card for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("jesusoctavioas/Mistral-Small-24B-Instruct-2501-MLX-Q4")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
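Since this is an instruct-tuned model, chat-style prompts generally work better when rendered through the tokenizer's chat template before generation. The snippet below is a minimal sketch assuming a recent mlx-lm release; the `messages` conversation is illustrative and not part of this card.

```python
from mlx_lm import load, generate

model, tokenizer = load("jesusoctavioas/Mistral-Small-24B-Instruct-2501-MLX-Q4")

# Example conversation (hypothetical content, not from this card).
messages = [{"role": "user", "content": "hello"}]

# Render the conversation with the model's chat template, if one is defined,
# so the instruct model sees the prompt format it was trained on.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    prompt = "hello"

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```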
Original model: https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501
Base model: mistralai/Mistral-Small-24B-Base-2501