---
inference: false
language:
- de
library_name: transformers
license: apache-2.0
model_creator: jphme
model_name: EM German
model_type: mistral
pipeline_tag: text-generation
prompt_template: 'Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:'
tags:
- pytorch
- german
- deutsch
- mistral
- leolm
- TensorBlock
- GGUF
base_model: jphme/em_german_leo_mistral
---
## jphme/em_german_leo_mistral - GGUF
This repo contains GGUF format model files for [jphme/em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
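To use these files you need a llama.cpp build at or after that commit. A minimal build sketch, assuming a standard CMake toolchain (the checkout of the pinned commit and the CUDA flag are optional and illustrative):
```shell
# Clone llama.cpp and (optionally) pin the referenced commit
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d

# Build the CLI tools; add e.g. -DGGML_CUDA=ON for GPU acceleration
cmake -B build
cmake --build build --config Release
```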
## Prompt template
```
{system_prompt} USER: {prompt} ASSISTANT:
```
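For example, a fully assembled prompt (using the system prompt and sample question from this model card) looks like this:
```
Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:
```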
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [em_german_leo_mistral-Q2_K.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q2_K.gguf) | Q2_K | 2.532 GB | smallest, significant quality loss - not recommended for most purposes |
| [em_german_leo_mistral-Q3_K_S.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
| [em_german_leo_mistral-Q3_K_M.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
| [em_german_leo_mistral-Q3_K_L.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
| [em_german_leo_mistral-Q4_0.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [em_german_leo_mistral-Q4_K_S.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
| [em_german_leo_mistral-Q4_K_M.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
| [em_german_leo_mistral-Q5_0.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q5_0.gguf) | Q5_0 | 4.654 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [em_german_leo_mistral-Q5_K_S.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q5_K_S.gguf) | Q5_K_S | 4.654 GB | large, low quality loss - recommended |
| [em_german_leo_mistral-Q5_K_M.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
| [em_german_leo_mistral-Q6_K.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
| [em_german_leo_mistral-Q8_0.gguf](https://huggingface.co/tensorblock/em_german_leo_mistral-GGUF/blob/main/em_german_leo_mistral-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/em_german_leo_mistral-GGUF --include "em_german_leo_mistral-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/em_german_leo_mistral-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
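Once downloaded, a file can be run directly with llama.cpp. A minimal sketch using `llama-cli` from a CMake build (the directory, quant choice, and token count are placeholders; the prompt string follows the template above):
```shell
# Generate up to 256 tokens from the downloaded GGUF file
./build/bin/llama-cli \
  -m MY_LOCAL_DIR/em_german_leo_mistral-Q4_K_M.gguf \
  -p "Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:" \
  -n 256
```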