---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-14B
---
# Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF
This repo contains GGUF quantizations of the [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B), [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct), and [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) models at q6_K, using q8_0 for the output and embedding tensors.
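A quantization scheme like this (q6_K body with q8_0 output and embedding tensors) can be produced with llama.cpp's `llama-quantize` tool. The sketch below uses placeholder filenames and assumes you already have an f16 GGUF conversion of the model (e.g. from `convert_hf_to_gguf.py`):

```shell
# Placeholder filenames; not the exact command used for this repo.
# --output-tensor-type and --token-embedding-type override the base q6_K
# type for the output and embedding tensors, matching the q8_0 choice above.
./llama-quantize \
  --output-tensor-type q8_0 \
  --token-embedding-type q8_0 \
  Qwen2.5-14B-f16.gguf \
  Qwen2.5-14B-q8_0-q6_K.gguf \
  q6_K
```

Keeping the output and embedding tensors at a higher precision than the rest of the weights is a common way to recover some quality at a small size cost, since those tensors are disproportionately sensitive to quantization error.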