---
license: apache-2.0
base_model: OpenBuddy/openbuddy-qwen2.5coder-32b-v24.1q-200k
---
# ⚛️ Q Model: Optimized for Quantized Inference
This model has been specially optimized to improve quantized-inference performance and is recommended for 3- to 8-bit quantization scenarios.
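Below is a minimal sketch of loading the model with 4-bit quantization through Hugging Face `transformers` and `bitsandbytes`, which falls within the 3- to 8-bit range noted above. The repo id, prompt, and generation settings are illustrative assumptions, not recommendations from this model card; substitute this repository's id and your preferred quantization backend as needed.

```python
# Illustrative 4-bit (NF4) loading sketch; repo id and settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder: the base model id is shown here; replace with this repo's id.
model_id = "OpenBuddy/openbuddy-qwen2.5coder-32b-v24.1q-200k"

# 4-bit NF4 quantization config with bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```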