This repository hosts GGUF-IQ-Imatrix quantizations for Virt-io/FuseChat-Kunoichi-10.7B.

Uploaded:

    quantization_options = [
        "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", 
        "Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS"
    ]
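As a usage sketch: one of the listed quants can be fetched and run locally with llama.cpp. The exact GGUF file name below is hypothetical (it follows common Imatrix-repo naming) and should be verified on the repository's Files tab; `huggingface-cli` and a llama.cpp build providing `llama-cli` are assumed to be installed.

```shell
# Download a single quant (hypothetical file name; check the repo's Files tab)
huggingface-cli download Lewdiculous/FuseChat-Kunoichi-10.7B-GGUF-IQ-Imatrix \
    FuseChat-Kunoichi-10.7B-Q4_K_M-imat.gguf --local-dir .

# Run it with llama.cpp's CLI (assumes llama-cli is on PATH or in the build dir)
./llama-cli -m FuseChat-Kunoichi-10.7B-Q4_K_M-imat.gguf -p "Hello" -n 128
```

Q4_K_M is a common balance of size and quality; the IQ3 variants trade quality for a smaller footprint on low-VRAM systems.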

GGUF
Model size: 10.7B params
Architecture: llama

