---
license: apache-2.0
language:
  - en
  - de
  - fr
  - it
  - es
library_name: transformers
pipeline_tag: text-generation
tags:
  - mistral
  - finetune
  - sft
  - dpo
  - chatml
  - augmentation
  - german
  - mixtral
datasets:
  - Open-Orca/SlimOrca
  - argilla/distilabel-math-preference-dpo
---

SauerkrautLM

VAGO solutions SauerkrautLM-Mixtral-8x7B

Introducing SauerkrautLM-Mixtral-8x7B, our Sauerkraut version of the powerful Mixtral-8x7B! Finetuned and aligned with SFT and DPO.

Table of Contents

  1. Overview of all SauerkrautLM-Mixtral models
  2. Model Details
  3. Evaluation
  4. Disclaimer
  5. Contact
  6. Collaborations
  7. Acknowledgement

All SauerkrautLM-Mixtral Models

| Model | HF | GPTQ | GGUF | AWQ |
|-------|----|------|------|-----|
| SauerkrautLM-Mixtral-8x7B | Link | coming soon | coming soon | coming soon |
| SauerkrautLM-Mixtral-8x7B-Instruct | Link | coming soon | coming soon | coming soon |

Model Details

SauerkrautLM-Mixtral-8x7B

Training Dataset:

SauerkrautLM-Mixtral-8x7B was trained on a mix of German data augmentation and translated data. It was fine-tuned via SFT on the Open-Orca/SlimOrca dataset and aligned through DPO with our new German SauerkrautLM-DPO dataset, which uses parts of the SFT SauerkrautLM dataset as chosen answers and Sauerkraut-7b-HerO outputs as rejected answers. We additionally added translated parts of HuggingFaceH4/ultrafeedback_binarized and argilla/distilabel-math-preference-dpo.
We found that merely translating training data can lead to unnatural German phrasing. We therefore used data augmentation techniques to ensure grammatical and syntactical correctness and more natural German wording in our training data.
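To illustrate the preference format this alignment step implies, here is a minimal sketch of a single DPO training record. The field names (prompt/chosen/rejected) follow the common DPO convention and are an assumption, not the published schema of the SauerkrautLM-DPO dataset:

```python
# Hypothetical DPO preference record (field names assumed, not the
# published SauerkrautLM-DPO schema). Per the description above, the
# chosen answer comes from the German SauerkrautLM SFT data and the
# rejected answer was generated by Sauerkraut-7b-HerO.
dpo_record = {
    "prompt": "Erkläre kurz den Unterschied zwischen SFT und DPO.",
    "chosen": "SFT trainiert das Modell direkt auf Referenzantworten, während DPO ...",
    "rejected": "SFT und DPO bezeichnen dasselbe Verfahren ...",
}
```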

Prompt Template:

```
<|im_start|>system
Du bist ein großes Sprachmodell, das höflich und kompetent antwortet. Schreibe deine Gedanken Schritt für Schritt auf, um Probleme sinnvoll zu lösen.<|im_end|>
<|im_start|>user
Wie geht es dir?<|im_end|>
<|im_start|>assistant
```
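For reference, the German system prompt translates to: "You are a large language model that answers politely and competently. Write down your thoughts step by step to solve problems sensibly." Below is a minimal usage sketch with transformers; the repo id is an assumption based on the card title, and the ChatML prompt is built by hand in case the tokenizer does not ship a chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the card title; adjust if the hub path differs.
model_id = "VAGOsolutions/SauerkrautLM-Mixtral-8x7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assemble the ChatML prompt exactly as shown in the template above.
system = (
    "Du bist ein großes Sprachmodell, das höflich und kompetent antwortet. "
    "Schreibe deine Gedanken Schritt für Schritt auf, um Probleme sinnvoll zu lösen."
)
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\nWie geht es dir?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```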

Evaluation

Disclaimer

We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who use our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files.

Contact

If you are interested in customized LLMs for business applications, please get in touch with us via our website or contact Dr. Daryoush Vaziri directly. We are also grateful for your feedback and suggestions.

Collaborations

We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.

Acknowledgement

Many thanks to Open-Orca, argilla, and Hugging Face for providing such valuable datasets to the open-source community.