# Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF

This repo contains q6_K GGUF quantizations of the Qwen/Qwen2.5-14B, Qwen/Qwen2.5-14B-Instruct, and Qwen/Qwen2.5-Coder-14B-Instruct models, with the output and embedding tensors kept at q8_0 for higher fidelity.
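A minimal usage sketch with llama.cpp and the Hugging Face CLI. The exact `.gguf` filenames in this repo are assumptions here; check the Files tab for the actual names before downloading.

```shell
# Hypothetical filename -- substitute the actual .gguf file listed in the repo.
huggingface-cli download ddh0/Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF \
  Qwen2.5-14B-Instruct-q8_0-q6_K.gguf --local-dir .

# Run the quantized model with llama.cpp's CLI:
llama-cli -m Qwen2.5-14B-Instruct-q8_0-q6_K.gguf -p "Hello" -n 64
```

At q6_K with q8_0 output/embedding tensors, expect the 14B file to need roughly 12 GB of memory plus context.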

## Model details

- Format: GGUF
- Model size: 14.8B params
- Architecture: qwen2
- Quantization: 6-bit (q6_K), with q8_0 output and embedding tensors


Base model: Qwen/Qwen2.5-14B