---
base_model: malhajar/Mixtral-8x7B-v0.1-turkish
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
model_type: mixtral
library_name: transformers
inference: false
---
## Mixtral 8x7B v0.1 Turkish
- **Model creator:** [malhajar](https://huggingface.co/malhajar)
- **Original model:** [Mixtral-8x7B-v0.1-turkish](https://huggingface.co/malhajar/Mixtral-8x7B-v0.1-turkish)

<!-- description start -->
## Description
This repo contains GGUF format model files for [malhajar's Mixtral 8x7B v0.1 Turkish](https://huggingface.co/malhajar/Mixtral-8x7B-v0.1-turkish).
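
As a minimal sketch of fetching one of the quantized files with `huggingface_hub` (the repository id and the `.gguf` filename below are placeholders, not confirmed names; check the *Files* tab of this repo for the actual ones):

```python
# Hypothetical example: repo_id and filename are placeholders, not confirmed names.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="your-namespace/Mixtral-8x7B-v0.1-turkish-GGUF",  # assumed repo id
    filename="mixtral-8x7b-v0.1-turkish.Q4_K_M.gguf",         # assumed filename
)
print(model_path)  # local path to the downloaded GGUF file
```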

## Original model
- **Developed by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/) 
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** [`mistralai/Mixtral-8x7B-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
### Original model description
malhajar/Mixtral-8x7B-v0.1-turkish is a fine-tuned version of Mixtral-8x7B-v0.1 trained with SFT.
The model can answer questions in Turkish, as it was fine-tuned on a Turkish dataset, specifically [`alpaca-gpt4-tr`](https://huggingface.co/datasets/malhajar/alpaca-gpt4-tr).

## Quantization types
| Quantization method | Bits | Size    | Description                              | Recommended |
|---------------------|------|---------|------------------------------------------|-------------|
| Q3_K_S              | 3    | 20.4 GB | very small, high quality loss            | ❌          |
| Q3_K_L              | 3    | 26.4 GB | small, substantial quality loss          | ❌          |
| Q4_0                | 4    | 26.4 GB | legacy; small, very high quality loss    | ❌          |
| Q4_K_M              | 4    | 28.4 GB | medium, balanced quality                 | ✅          |
| Q5_0                | 5    | 33.2 GB | legacy; medium, balanced quality         | ❌          |
| Q5_K_S              | 5    | 32.2 GB | large, low quality loss                  | ✅          |
| Q5_K_M              | 5    | 33.2 GB | large, very low quality loss             | ✅          |
| Q6_K                | 6    | 38.4 GB | very large, extremely low quality loss   | ❌          |
| Q8_0                | 8    | 49.6 GB | very large, extremely low quality loss   | ❌          |
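
Once a quant is chosen, it can be loaded with `llama-cpp-python`. The sketch below assumes the Q4_K_M file downloaded above; the context size and GPU offload settings are illustrative only:

```python
# Sketch only: model_path, n_ctx and n_gpu_layers are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-v0.1-turkish.Q4_K_M.gguf",  # path from the download step
    n_ctx=4096,       # context window; adjust to your needs and available RAM
    n_gpu_layers=-1,  # offload all layers to GPU if available, 0 for CPU only
)
```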

## Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
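
A short usage sketch that fills in the template above and generates a response (the `llm` object is assumed to be the `Llama` instance loaded earlier; the stop string and sampling settings are illustrative):

```python
# Illustrative only: builds the "### Instruction / ### Response" prompt and samples a reply.
prompt = (
    "### Instruction:\n"
    "Türkiye'nin başkenti neresidir?\n"  # "What is the capital of Turkey?"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```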
<!-- description end -->