Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF
This repo contains GGUF quantizations of Qwen/Qwen2.5-14B, Qwen/Qwen2.5-14B-Instruct, and Qwen/Qwen2.5-Coder-14B-Instruct models at q6_K, using q8_0 for output and embedding tensors.
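If you want to try one of these files from Python, a minimal sketch using `huggingface_hub` and `llama-cpp-python` might look like the following. The GGUF filename below is an assumption for illustration; check the repo's file listing for the actual names.

```python
# Minimal sketch: download one GGUF from this repo and run a chat completion.
# The filename is hypothetical -- substitute the real file name from the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="ddh0/Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF",
    filename="Qwen2.5-14B-Instruct-q8_0-q6_K.gguf",  # assumed name, verify in repo
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; raise if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}]
)
print(out["choices"][0]["message"]["content"])
```

The same files also work directly with the llama.cpp CLI or server by passing the downloaded `.gguf` path as the model argument.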
Base model: Qwen/Qwen2.5-14B