Update README.md
README.md CHANGED
@@ -57,7 +57,7 @@ Finetuned and aligned with **SFT** and **DPO**
 SauerkrautLM-Mixtral-8x7B was trained with a mix of German data augmentation and translated data.
 **SFT** with the dataset [OpenOrca/Slim-Orca](https://huggingface.co/datasets/Open-Orca/SlimOrca) and aligned through **DPO** with our **new German SauerkrautLM-DPO dataset** based on parts of the SFT SauerkrautLM dataset
-as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers. Added with additional
+as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers. Added with additional **translated parts of the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo)** datasets.
 We found that only a simple translation of training data can lead to unnatural German phrasings.
 Data augmentation techniques were used to ensure grammatical and syntactical correctness and a more natural German wording in our training data.
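The DPO setup described above pairs each prompt with a curated German answer as "chosen" and a Sauerkraut-7b-HerO generation as "rejected". A minimal sketch of that prompt/chosen/rejected schema (the format preference-tuning trainers such as TRL's DPOTrainer consume); the example rows and the `make_dpo_pair` helper are hypothetical illustrations, not the actual SauerkrautLM-DPO data:

```python
def make_dpo_pair(prompt, chosen, rejected):
    """Bundle one preference pair in the standard DPO row format."""
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# Hypothetical rows: "chosen" stands in for a curated German SFT answer,
# "rejected" for a weaker model generation with unnatural phrasing.
pairs = [
    make_dpo_pair(
        prompt="Erkläre kurz, was Photosynthese ist.",
        chosen="Photosynthese ist der Prozess, durch den Pflanzen Lichtenergie in chemische Energie umwandeln.",
        rejected="Photosynthese ist wenn Pflanzen machen Energie aus Licht.",
    ),
]

# Each row carries exactly the three fields a DPO trainer expects.
print(sorted(pairs[0].keys()))  # → ['chosen', 'prompt', 'rejected']
```

The point of the schema is that the trainer never needs scalar reward labels, only which of the two answers is preferred for a given prompt.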