
Model Summary

This repository hosts quantized versions of the QwQ-32B reasoning model.

Format: GGUF
Converter: llama.cpp ba7654380a3c7c1b5ae154bea19134a3a9417a1e
Quantizer: LM-Kit.NET 2025.3.3
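As a quick sanity check on a downloaded file, the fixed GGUF header can be inspected with a few lines of Python. This is a minimal sketch based on the public GGUF specification (magic bytes, then a uint32 version, a uint64 tensor count, and a uint64 metadata key-value count); it does not parse the metadata itself.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header: magic, version, tensor count, KV count."""
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": tensor_count, "metadata_kv": kv_count}

# Example with a synthetic header (version 3, 2 tensors, 5 KV pairs)
header = b"GGUF" + struct.pack("<IQQ", 3, 2, 5)
print(read_gguf_header(header))
```

In practice you would pass the first 24 bytes of the model file, e.g. `read_gguf_header(open("model.gguf", "rb").read(24))`.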

For more detailed information, please refer to the official QwQ-32B base model page.

Model size: 32.8B params
Architecture: qwen2
Quantization: 4-bit
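For readers sizing hardware, a rough back-of-envelope memory estimate for the 4-bit weights follows directly from the parameter count. This ignores per-block quantization overhead (scales and mins) and the KV cache, so real memory use will be somewhat higher:

```python
# Rough memory estimate for 4-bit weights of a 32.8B-parameter model.
# Ignores quantization block overhead and the KV cache, so actual usage is higher.
params = 32.8e9
bits_per_weight = 4
weight_bytes = params * bits_per_weight / 8
print(f"{weight_bytes / 1e9:.1f} GB")  # prints "16.4 GB" for the weights alone
```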
