# Supa-AI/Ministral-8B-Instruct-2410-gguf
This model was converted to GGUF format from [`mistralai/Ministral-8B-Instruct-2410`](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) using llama.cpp.
Refer to the [original model card](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) for more details on the model.

## Available Versions
- `Ministral-8B-Instruct-2410.q4_0.gguf` (q4_0)
- `Ministral-8B-Instruct-2410.q4_1.gguf` (q4_1)
- `Ministral-8B-Instruct-2410.q5_0.gguf` (q5_0)
- `Ministral-8B-Instruct-2410.q5_1.gguf` (q5_1)
- `Ministral-8B-Instruct-2410.q8_0.gguf` (q8_0)
- `Ministral-8B-Instruct-2410.q3_k_s.gguf` (q3_K_S)
- `Ministral-8B-Instruct-2410.q3_k_m.gguf` (q3_K_M)
- `Ministral-8B-Instruct-2410.q3_k_l.gguf` (q3_K_L)
- `Ministral-8B-Instruct-2410.q4_k_s.gguf` (q4_K_S)
- `Ministral-8B-Instruct-2410.q4_k_m.gguf` (q4_K_M)
- `Ministral-8B-Instruct-2410.q5_k_s.gguf` (q5_K_S)
- `Ministral-8B-Instruct-2410.q5_k_m.gguf` (q5_K_M)
- `Ministral-8B-Instruct-2410.q6_k.gguf` (q6_K)
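All of the filenames above follow the same pattern: the model name, a dot, the quantization tag in lowercase, and the `.gguf` extension. As a convenience, a small hypothetical helper (not part of this repository) can build the filename from a quantization label:

```python
# Sketch: map a quantization label to its GGUF filename in this repo.
# MODEL and QUANTS are taken from the file list above; the helper
# itself is a hypothetical convenience, not an official API.
MODEL = "Ministral-8B-Instruct-2410"
QUANTS = {
    "q4_0", "q4_1", "q5_0", "q5_1", "q8_0",
    "q3_K_S", "q3_K_M", "q3_K_L",
    "q4_K_S", "q4_K_M", "q5_K_S", "q5_K_M", "q6_K",
}

def gguf_filename(quant: str) -> str:
    """Return the repo filename for a quantization tag, e.g. 'q4_K_M'."""
    if quant not in QUANTS:
        raise ValueError(f"unknown quantization: {quant}")
    # Filenames use the lowercased form of the quantization tag.
    return f"{MODEL}.{quant.lower()}.gguf"
```

For example, `gguf_filename("q4_K_M")` yields `Ministral-8B-Instruct-2410.q4_k_m.gguf`, which can be passed as `FILENAME` in the commands below.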

## Use with llama.cpp
Replace `FILENAME` with one of the filenames listed above.

### CLI:
```bash
llama-cli --hf-repo Supa-AI/Ministral-8B-Instruct-2410-gguf --hf-file FILENAME -p "Your prompt here"
```
    
### Server:
```bash
llama-server --hf-repo Supa-AI/Ministral-8B-Instruct-2410-gguf --hf-file FILENAME -c 2048
```
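Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal stdlib-only sketch of querying it, assuming the server's default `localhost:8080` address:

```python
# Sketch: query a running llama-server via its OpenAI-compatible
# /v1/chat/completions endpoint. Assumes the server started with the
# command above is listening on the default localhost:8080.
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # adjust if you passed --host/--port

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str, base_url: str = BASE_URL) -> str:
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The assistant's reply is in the first choice's message content.
    return body["choices"][0]["message"]["content"]
```

Calling `chat("Your prompt here")` sends the prompt to the server and returns the generated reply as a string.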
    
## Model Details
- **Original Model:** [mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410)
- **Format:** GGUF
- **Parameters:** 8.02B
- **Architecture:** llama