malhajar/Mixtral-8x7B-v0.1-turkish is a fine-tuned version of Mixtral-8x7B-v0.1, trained with SFT (supervised fine-tuning).
This model can answer questions in Turkish, as it was fine-tuned on a Turkish dataset, specifically [`alpaca-gpt4-tr`](https://huggingface.co/datasets/malhajar/alpaca-gpt4-tr).

## Quantization types

| quantization method | bits | size    | description                            | recommended |
|---------------------|------|---------|----------------------------------------|-------------|
| Q3_K_S              | 3    | 20.4 GB | very small, high quality loss          | ❌          |
| Q3_K_L              | 3    | 26.4 GB | small, substantial quality loss        | ❌          |
| Q4_0                | 4    | 26.4 GB | legacy; small, very high quality loss  | ❌          |
| Q4_K_M              | 4    | 28.4 GB | medium, balanced quality               | ✅          |
| Q5_0                | 5    | 33.2 GB | legacy; medium, balanced quality       | ❌          |
| Q5_K_S              | 5    | 32.2 GB | large, low quality loss                | ✅          |
| Q5_K_M              | 5    | 33.2 GB | large, very low quality loss           | ✅          |
| Q6_K                | 6    | 38.4 GB | very large, extremely low quality loss | ❌          |
| Q8_0                | 8    | 49.6 GB | very large, extremely low quality loss | ❌          |

## Prompt Template
```
### Instruction: