
# Mistral 7B Instruct v0.2 Turkish

## Description

This repo contains GGUF format model files for malhajar's Mistral 7B Instruct v0.2 Turkish.

## Quantization methods

| Quantization method | Bits | Size | Use case | Recommended |
| --- | --- | --- | --- | --- |
| Q2_K | 2 | 2.72 GB | smallest, significant quality loss | |
| Q3_K_S | 3 | 3.16 GB | very small, high quality loss | |
| Q3_K_M | 3 | 3.52 GB | very small, high quality loss | |
| Q3_K_L | 3 | 3.82 GB | small, substantial quality loss | |
| Q4_0 | 4 | 4.11 GB | legacy; small, very high quality loss | |
| Q4_K_S | 4 | 4.14 GB | small, greater quality loss | |
| Q4_K_M | 4 | 4.37 GB | medium, balanced quality | ✓ |
| Q5_0 | 5 | 5.00 GB | legacy; medium, balanced quality | |
| Q5_K_S | 5 | 5.00 GB | large, low quality loss | ✓ |
| Q5_K_M | 5 | 5.13 GB | large, very low quality loss | ✓ |
| Q6_K | 6 | 5.94 GB | very large, extremely low quality loss | |
| Q8_0 | 8 | 7.70 GB | very large, extremely low quality loss | |
| FP16 | 16 | 14.5 GB | enormous, minuscule quality loss | |
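As a rough sanity check, the file sizes above track the bits per weight. A minimal sketch (assuming Mistral 7B's roughly 7.24 billion parameters; this is a lower bound, since k-quants mix bit widths and files also carry per-block scales and metadata):

```python
def estimate_size_gb(bits_per_weight: float, n_params: float = 7.24e9) -> float:
    """Lower-bound GGUF file size in decimal GB for a given quantization width.

    n_params assumes Mistral 7B (~7.24 billion parameters, an assumption for
    illustration); real files run larger because quantized formats also store
    per-block scale factors and model metadata.
    """
    return n_params * bits_per_weight / 8 / 1e9

# FP16 at 16 bits per weight lands close to the 14.5 GB listed in the table.
print(round(estimate_size_gb(16), 2))
```

The smaller quants deviate more from this bound (e.g. Q4_0 is 4.11 GB, not 3.62 GB) precisely because of the quantization overhead the sketch ignores.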

## Prompt template

```
### Instruction:
<prompt> (without the <>)
### Response:
```
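In code, the template above can be applied with a small helper (a sketch; `build_prompt` is a hypothetical name, not part of this repo):

```python
def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in the card's Alpaca-style template;
    # the model's completion is expected to follow "### Response:".
    return f"### Instruction:\n{instruction}\n### Response:\n"

print(build_prompt("Türkiye'nin başkenti neresidir?"))
```

The resulting string is what you would pass as the raw prompt to a llama.cpp-compatible runtime.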