---
datasets:
- flozi00/conversations
language:
- de
---
|
|
|
## This project is sponsored by [![PrimeLine](https://www.primeline-solutions.com/skin/frontend/default/theme566/images/primeline-solutions-logo.png)](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)
|
|
|
# Model Card |
|
|
|
This model is a fine-tuned version for German instructions and conversations in the style of Alpaca, using the prompt format `### User:` / `### Assistant:`, and was trained with a context length of 8k tokens.
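
As a quick illustration of the prompt format described above, here is a minimal inference sketch using the Hugging Face transformers library. The repo id `MODEL_ID` is a placeholder (the actual model id is not stated in this card), and the generation settings are illustrative assumptions, not recommended values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this model's actual Hugging Face repo id.
MODEL_ID = "flozi00/model-name"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Build a prompt in the Alpaca-style conversation format described above.
prompt = "### User: Was ist die Hauptstadt von Deutschland?\n### Assistant:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```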
|
The dataset used is deduplicated and cleaned, contains no code samples, and is uncensored. The focus is on instruction following and conversational tasks.
|
|
|
The model architecture is based on Mistral v0.1 with 7B parameters, and training was performed on hardware powered by 100% renewable energy.
|
|
|
This work is contributed by the private research of [flozi00](https://huggingface.co/flozi00).