This is mistralai/Mistral-Small-Instruct-2409, converted to GGUF and quantized to q8_0. Both the model weights and the embedding/output tensors are quantized to q8_0.
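A conversion like this is typically done with llama.cpp's tooling. A minimal sketch, assuming a local llama.cpp checkout and a downloaded copy of the original model (paths and output filenames here are illustrative, not the exact commands used for this upload):

```shell
# Convert the Hugging Face checkpoint to GGUF, quantizing directly to q8_0.
# --outtype q8_0 quantizes all tensors, including embedding/output, to q8_0.
python convert_hf_to_gguf.py \
  /path/to/Mistral-Small-Instruct-2409 \
  --outfile Mistral-Small-Instruct-2409-q8_0.gguf \
  --outtype q8_0
```

Alternatively, one can convert to f16 first and then run `llama-quantize` with the `Q8_0` preset; the result is equivalent for this quantization level.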
The model was split with the llama.cpp `llama-gguf-split` CLI utility into shards no larger than 2 GB, to make it less painful to resume an interrupted download.
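The splitting step can be sketched as follows (a hedged example, assuming llama.cpp's `llama-gguf-split` tool; the shard-count suffix in the merge command is a placeholder for whatever the split actually produces):

```shell
# Split the quantized GGUF into shards capped at 2 GB each.
# Output files are named <prefix>-00001-of-0000N.gguf, etc.
llama-gguf-split --split --split-max-size 2G \
  Mistral-Small-Instruct-2409-q8_0.gguf \
  Mistral-Small-Instruct-2409-q8_0

# To reassemble locally, point --merge at the first shard:
llama-gguf-split --merge \
  Mistral-Small-Instruct-2409-q8_0-00001-of-0000N.gguf \
  Mistral-Small-Instruct-2409-q8_0.gguf
```

Note that recent llama.cpp builds can also load the first shard directly, without merging, as long as all shards sit in the same directory.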
The purpose of this upload is archival.