rdsm/QwenPhi-4-0.5b-Draft-GGUF

Converted to GGUF from rdsm/QwenPhi-4-0.5b-Draft.
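
Below is a minimal usage sketch (not part of the original card) showing one way to download a GGUF file from this repository and run it locally with llama-cpp-python. The filename passed to hf_hub_download is an assumption; check the repository's file listing for the exact name of the quantization you want.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed
# (pip install llama-cpp-python huggingface_hub). The GGUF filename below is an
# assumption -- substitute the actual file listed in the repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="rdsm/QwenPhi-4-0.5b-Draft-GGUF",
    filename="QwenPhi-4-0.5b-Draft-Q4_K_M.gguf",  # hypothetical filename
)

# Load the quantized model; at 0.5B parameters it runs comfortably on CPU.
llm = Llama(model_path=gguf_path, n_ctx=2048)

out = llm("Explain speculative decoding in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```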

Format: GGUF
Model size: 538M params
Architecture: qwen2
Quantization: 4-bit
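
To verify the metadata above (architecture, parameter count, quantization fields) against a downloaded file, the gguf Python package can read the GGUF header directly. A rough sketch, assuming a locally downloaded file such as the one fetched in the snippet above:

```python
# Rough sketch, assuming the `gguf` package is installed (pip install gguf)
# and a GGUF file is available locally; prints the metadata keys and the
# tensor count stored in the file header.
from gguf import GGUFReader

reader = GGUFReader("QwenPhi-4-0.5b-Draft-Q4_K_M.gguf")  # hypothetical filename

# Metadata keys include entries such as general.architecture, written at
# conversion time.
for name in reader.fields:
    print(name)

print(f"{len(reader.tensors)} tensors")
```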

Model tree for rdsm/QwenPhi-4-0.5b-Draft-GGUF
Base model: Qwen/Qwen2.5-0.5B
Quantized versions: 5 (including this model)