Commit e219ca9 by DavidGF (parent: 0d82c2d): Update README.md

Files changed (1): README.md (+81 −0)
---
license: apache-2.0
language:
- en
- de
- fr
- it
- es
library_name: transformers
pipeline_tag: text-generation
tags:
- mistral
- finetune
- dpo
- Instruct
- augmentation
- german
- mixtral
datasets:
- argilla/distilabel-math-preference-dpo
---

![SauerkrautLM](https://vago-solutions.de/wp-content/uploads/2023/11/hero.png "SauerkrautLM-Mixtral-8x7B")
## VAGO solutions SauerkrautLM-Mixtral-8x7B-Instruct
Introducing **SauerkrautLM-Mixtral-8x7B-Instruct** – our German version of the powerful Mixtral-8x7B!
Aligned with **DPO**.

# Table of Contents
1. [Overview of all SauerkrautLM-Mixtral models](#all-sauerkrautlm-mixtral-models)
2. [Model Details](#model-details)
   - [Prompt template](#prompt-template)
   - [Training Dataset](#training-dataset)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)


## All SauerkrautLM-Mixtral Models

| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Mixtral-8x7B-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) | coming soon | coming soon | coming soon |
| SauerkrautLM-Mixtral-8x7B | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B) | coming soon | coming soon | coming soon |

## Model Details
**SauerkrautLM-Mixtral-8x7B-Instruct**
- **Model Type:** SauerkrautLM-Mixtral-8x7B-Instruct-v0.1 is a Mixture of Experts (MoE) model based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Language(s):** English, German, French, Italian, Spanish
- **License:** Apache 2.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt), [David Golchinfar](mailto:golchinfar@vago-solutions.de)

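The card's `library_name: transformers` and `pipeline_tag: text-generation` metadata suggest the model loads through the standard transformers API. The snippet below is a minimal sketch of that path; the dtype and `device_map` choices are our assumptions for fitting the 8x7B MoE weights, not settings taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision to reduce memory
    device_map="auto",          # assumption: shard the MoE weights across available GPUs
)
```
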

### Training Dataset:

SauerkrautLM-Mixtral-8x7B-Instruct was trained on a mix of German data augmentation and translated data.
It was aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, which uses parts of the SFT SauerkrautLM dataset as chosen answers and outputs of [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers, supplemented with additional augmented parts of the Ultrafeedback dataset [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) and [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).
We found that a simple translation of training data can lead to unnatural German phrasings.
Data augmentation techniques were therefore used to ensure grammatical and syntactical correctness and more natural German wording in the training data.
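
The card does not say which trainer or hyperparameters were used for this alignment step. As a hedged illustration only, a DPO run over such chosen/rejected pairs might look like the following with `trl` (~0.7); the toy dataset, `beta`, and every other setting here are assumptions, not the authors' recipe.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mistralai/Mixtral-8x7B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Toy preference pairs in the "prompt"/"chosen"/"rejected" string format
# DPOTrainer expects; the real datasets named above may need preprocessing
# into this layout first.
train_dataset = Dataset.from_dict({
    "prompt": ["[INST] Was ist die Hauptstadt von Deutschland? [/INST]"],
    "chosen": ["Die Hauptstadt von Deutschland ist Berlin."],
    "rejected": ["Berlin."],
})

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl builds a frozen reference copy of `model` internally
    beta=0.1,        # assumed strength of the implicit KL penalty
    args=TrainingArguments(output_dir="sauerkraut-dpo", per_device_train_batch_size=1),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=512,
    max_prompt_length=256,
)
trainer.train()
```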


### Prompt Template:
```
[INST] Instruction [/INST] Model answer [INST] Follow-up instruction [/INST]
```
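
Assuming the model inherits the chat template of its Mixtral-Instruct base (not confirmed by the card; verify against the shipped tokenizer config), the template above can be produced with `apply_chat_template` rather than by hand. Reusing `model` and `tokenizer` from the loading sketch above:

```python
messages = [{"role": "user", "content": "Was ist Sauerkraut?"}]

# Renders the [INST] ... [/INST] format shown above from a message list.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```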

## Evaluation

## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files.

## Contact
If you are interested in customized LLMs for business applications, please get in touch with us via our website or contact [Dr. Daryoush Vaziri](mailto:vaziri@vago-solutions.de). We are also grateful for your feedback and suggestions.

## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.

## Acknowledgement
Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Hugging Face](https://huggingface.co) for providing such valuable datasets to the Open-Source community.