
This repository hosts GGUF-IQ-Imatrix quantizations for Virt-io/FuseChat-Kunoichi-10.7B.

Uploaded:

    quantization_options = [
        "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", 
        "Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS"
    ]
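As a rough sketch of how one of these quants might be run locally with llama-cpp-python (the GGUF filename below is an assumed example; substitute the actual file for whichever quant fits your hardware):

    # Minimal sketch: download one quantized GGUF file from this repository and
    # run it locally with llama-cpp-python. The filename is an assumption; check
    # the repo's file list for the exact name of the quant you want.
    from huggingface_hub import hf_hub_download
    from llama_cpp import Llama

    model_path = hf_hub_download(
        repo_id="Lewdiculous/FuseChat-Kunoichi-10.7B-GGUF-IQ-Imatrix",
        filename="FuseChat-Kunoichi-10.7B-Q4_K_M-imat.gguf",  # assumed filename
    )

    llm = Llama(
        model_path=model_path,
        n_ctx=4096,        # context window size
        n_gpu_layers=-1,   # offload all layers to GPU if available
    )

    output = llm("Write a short greeting.", max_tokens=64)
    print(output["choices"][0]["text"])

Smaller quants (IQ3_* series) trade quality for lower memory use, while Q6_K and Q8_0 stay closest to the original weights.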

