Nidum Gemma-3-4B IT Uncensored GGUF
Welcome to Nidum's Gemma-3-4B IT Uncensored GGUF, designed for creators, researchers, and AI enthusiasts looking for a powerful, versatile, and unrestricted AI experience. Our quantized models give you fast, efficient, and high-quality inference, empowering innovative use cases without limitations.
Choose from multiple GGUF quantized formats tailored for your specific needs:
| Quantization | Description | Bits per Weight | Link |
|---|---|---|---|
| Q8_0 | Best accuracy and excellent performance | 8-bit | model-Q8_0.gguf |
| Q6_K | Strong accuracy, faster inference | 6-bit | model-Q6_K.gguf |
| Q5_K_M | Ideal balance of speed and accuracy | 5-bit | model-Q5_K_M.gguf |
| Q3_K_M | Memory-efficient with good performance | 3-bit | model-Q3_K_M.gguf |
| TQ2_0 | Tiny quantization for maximum speed | 2-bit | model-TQ2_0.gguf |
| TQ1_0 | Ultra-lightweight, minimal memory usage | 1-bit | model-TQ1_0.gguf |
Not sure which file to pick?

- Best accuracy: Q8_0 or Q6_K
- Balanced speed and accuracy: Q5_K_M
- Lowest memory use and fastest inference: Q3_K_M, TQ2_0, or TQ1_0
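As a quick starting point, here is a minimal sketch of running one of these files locally with the llama-cpp-python bindings. The file name `model-Q5_K_M.gguf` comes from the table above; the context size, GPU offload setting, and prompt are placeholder values you should adjust for your own hardware and use case.

```python
# Minimal sketch: run the Q5_K_M quant locally with llama-cpp-python.
# Assumes the GGUF file from the table above is in the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q5_K_M.gguf",  # any quantization from the table works
    n_ctx=4096,                      # context window (placeholder value)
    n_gpu_layers=-1,                 # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize GGUF quantization in two sentences."},
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

The same pattern applies to any of the quantizations listed above; smaller quants trade some accuracy for lower memory use and faster inference.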
Explore limitless AI use cases with Nidum Gemma-3-4B IT Uncensored GGUF.
Dive in today and experience an uncensored, fast, and powerful AI journey tailored for open-minded creators, innovators, and AI enthusiasts!
Enjoy your uncensored AI experience with Nidum!