# Zoo Coder-1 GGUF (Quantized Coding Model)

## Overview
Zoo Coder-1 GGUF provides quantized versions of our enterprise-grade coding AI model, packaged in the GGUF format for llama.cpp-compatible runtimes. Quantization trades a small amount of output quality for a much smaller memory footprint, so the model can run efficiently on hardware ranging from edge devices to production servers.
## Model Details
- Base: Qwen3-Coder, A3B variant
- Format: GGUF quantized
- Context: 32K tokens (extensible to 128K)
- Languages: Python, JavaScript, TypeScript, Go, Rust, Java, C++, and 50+ more
## Available Quantizations
| Variant | Size | RAM Required | Use Case |
|---|---|---|---|
| Q2_K | ~2 GB | 4 GB | Edge devices, prototyping |
| Q3_K_M | ~2.5 GB | 5 GB | Mobile, lightweight servers |
| Q4_K_M | ~3.2 GB | 6 GB | **Recommended** - best balance |
| Q5_K_M | ~4 GB | 7 GB | High-quality production |
| Q6_K | ~5 GB | 8 GB | Maximum quality |
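Choosing a variant comes down to matching the RAM column above against your machine. As a minimal sketch (the RAM figures are the card's own estimates, and `best_variant` is a hypothetical helper for illustration, not part of any Zoo tooling):

```python
# Quantization table above, encoded for programmatic selection.
# (name, approx. file size in GB, estimated RAM in GB), ordered
# from lowest to highest quality.
VARIANTS = [
    ("Q2_K", 2.0, 4),
    ("Q3_K_M", 2.5, 5),
    ("Q4_K_M", 3.2, 6),
    ("Q5_K_M", 4.0, 7),
    ("Q6_K", 5.0, 8),
]

def best_variant(ram_gb: float):
    """Return the highest-quality variant whose RAM estimate fits,
    or None if even Q2_K does not fit."""
    fitting = [name for name, _size, ram in VARIANTS if ram <= ram_gb]
    return fitting[-1] if fitting else None
```

For example, a machine with 6 GB free picks `Q4_K_M`, matching the recommended default in the table.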
## Quick Start

### With llama.cpp
```shell
# On recent llama.cpp builds the binary is named llama-cli;
# older builds ship it as ./main.
./main -m Q4_K_M-GGUF/Q4_K_M-GGUF-00001-of-00032.gguf \
  -p "Write a Python function to calculate fibonacci numbers"
```
### With Zoo Desktop

```shell
zoo model download coder-1-gguf
```
## About Zoo AI
Zoo Labs Foundation Inc is a 501(c)(3) nonprofit organization pioneering accessible AI infrastructure.
- Website: zoo.ngo
- HuggingFace: huggingface.co/zooai
## License
Apache 2.0