A frankenMoE using only DPO models. To be used with Chat-instruct mode enabled.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/wGRcusncUd-mCdksvYckY.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/rx1GfLMEIP3T-r3bxqW9r.png)
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Q2_K Tiny](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 7.87 GB | 9.87 GB | smallest, significant quality loss - not recommended for most purposes |
| [Q3_K_M](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 10.28 GB | 12.28 GB | very small, high quality loss |
| [Q4_0](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 13.3 GB | 15.3 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Q4_K_M](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 13.32 GB | 15.32 GB | medium, balanced quality - recommended |
| [Q5_0](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 16.24 GB | 18.24 GB | legacy; large, balanced quality |
| [Q5_K_M](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~16.24 GB | ~18.24 GB | large, balanced quality - recommended |
| [Q6 XL](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 19.35 GB | 21.35 GB | very large, extremely low quality loss |
| [Q8 XXL](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 25.1 GB | 27.1 GB | very large, extremely low quality loss - not recommended |
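
To sanity-check a download, here is a minimal sketch that fetches one of the quants above and runs a single chat turn with the `llama-cpp-python` bindings. The choice of runtime, context size, GPU offload setting, and prompt are illustrative assumptions, not part of this repo; any llama.cpp-compatible frontend (including text-generation-webui, which provides the Chat-instruct mode mentioned above) can load the same files.

```python
# Minimal sketch: download one quant and run a chat turn.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the recommended Q4_K_M file (~13.32 GB on disk, ~15.32 GB RAM per the table).
model_path = hf_hub_download(
    repo_id="Kquant03/CognitiveFusion-4x7B-GGUF",
    filename="ggml-model-q4_k_m.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window; raise it if you have the RAM
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU-only
)

# Chat-style call, matching the Chat-instruct usage note at the top of this card.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```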

The models used in this frankenMoE:

- [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) - router
- [udkai/Turdus](https://huggingface.co/udkai/Turdus) - expert #1