---
{}
---

# GGUF quants for [**Qwen/Qwen2.5-7B**](https://huggingface.co/Qwen/Qwen2.5-7B) using [llama.cpp](https://github.com/ggerganov/llama.cpp)

**Terms of Use**: Please check the [**original model**](https://huggingface.co/Qwen/Qwen2.5-7B)

<picture>
  <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
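To grab one of these files programmatically, `huggingface_hub` works well. This is a minimal sketch: the `repo_id` and `filename` below are placeholders, so substitute the actual values from this repository's file listing.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and file name -- check this repository's file
# listing for the actual quant file you want.
model_path = hf_hub_download(
    repo_id="neopolita/qwen2.5-7b-gguf",  # hypothetical repo id
    filename="qwen2.5-7b_q4_k_m.gguf",    # hypothetical quant file name
)
print(model_path)  # local path to the downloaded GGUF file
```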
## Quants

* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors.
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models.
* `q4_k_s`: Uses Q4_K for all tensors.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
* `q5_0`: Higher accuracy, higher resource usage, and slower inference.
* `q5_1`: Even higher accuracy and resource usage, and slower inference still.
* `q5_k_s`: Uses Q5_K for all tensors.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
* `q6_k`: Uses 6-bit Q6_K quantization for all tensors.
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
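Which quant to pick is a size/fidelity trade-off: the smaller mixes (q2_k, q3_k_*) fit in less memory but lose more accuracy, while q6_k and q8_0 stay close to the original weights. As a rough sketch of running a downloaded file locally, assuming the third-party `llama-cpp-python` bindings for llama.cpp are installed and reusing the same placeholder file name as above:

```python
from llama_cpp import Llama

# Placeholder path: point this at the GGUF file downloaded earlier.
llm = Llama(
    model_path="qwen2.5-7b_q4_k_m.gguf",
    n_ctx=4096,  # context window; lower it to reduce memory use
)

out = llm("Briefly, what is GGUF?", max_tokens=64)
print(out["choices"][0]["text"])
```

Larger quants load the same way; only the file name and the memory footprint change.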