johannhartmann committed
Commit e74b4ca
Parent(s): e7e3584
Update README.md

README.md
![image/png](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b/resolve/main/Wiedervereinigung-7b.png)

Some of the best German models with 7b parameters as a lasered, DPO-trained dare_ties merge.
Since the original models are based on Mistral - three of them on the brilliant German LeoLM/leo-mistral-hessianai-7b - they are reunited in this merged model. Hence the name. To improve result quality they are DPO-trained with a German translation of oaast-dpo using our German fork of [LLaMA-Factory](https://github.com/mayflower/LLaMA-Factory).
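A dare_ties merge like this is commonly expressed as a mergekit configuration. Below is a minimal sketch of such a config; the source model names and the density/weight values are placeholders for illustration, since the actual constituent models and merge parameters are not listed in this excerpt:

```yaml
# Hypothetical mergekit config sketch for a dare_ties merge.
# Model names and parameters below are illustrative placeholders,
# NOT the actual recipe used for Wiedervereinigung-7b.
models:
  - model: LeoLM/leo-mistral-hessianai-7b   # one of the German base models mentioned
    parameters:
      density: 0.6    # fraction of delta weights kept (DARE pruning)
      weight: 0.3     # contribution of this model to the merge
  - model: some-org/another-german-mistral-7b   # placeholder
    parameters:
      density: 0.6
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1   # common Mistral base; assumption
dtype: bfloat16
```

In a dare_ties merge, each source model's weight deltas relative to the base are randomly pruned to the given density and rescaled (DARE), then sign-consistent deltas are combined (TIES), which tends to reduce interference between the merged models.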