---
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
---
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release three foundation models trained with 8k context length:
[`LeoLM/leo-mistral-hessianai-7b`](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b), [`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b), all under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post](https://laion.ai/blog/leo-lm/) or our paper (preprint coming soon) for more details!

*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*

## LeoLM Chat
`LeoLM/leo-mistral-hessianai-7b-chat` is a German chat model built on our foundation model `LeoLM/leo-mistral-hessianai-7b` and finetuned on a selection of German instruction datasets.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench-DE scores:
```json
{
    "first_turn": 6.1,
    "second_turn": 4.7,
    "categories": {
        "writing": 6.8,
        "roleplay": 6.35,
        "reasoning": 3.3,
        "math": 2.75,
        "coding": 4.4,
        "extraction": 4.5,
        "stem": 6.85,
        "humanities": 8.25
    },
    "average": 5.4
}
```

## Model Details

- **Finetuned from:** [LeoLM/leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** Web Demo coming soon!
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:bjoern.pl@outlook.de)

## Use in 🤗Transformers
First install direct dependencies:
```bash
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you also need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch

system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>

"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir wie die Fahrradwegesituation in Hamburg ist."

generator = pipeline(model="LeoLM/leo-mistral-hessianai-7b-chat", device="cuda", torch_dtype=torch.float16, use_flash_attention_2=True)  # set use_flash_attention_2=False if flash-attn is not installed
print(generator(prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```

"*Als KI kann ich keine persönlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen großen Städten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung für nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermöglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darüber hinaus gibt es viele Fahrradstraßen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren müssen.*

*In einigen städtischen Gebieten können Fahrradwege jedoch eng oder überfüllt sein, besonders während der Stoßzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grün warten müssen, ähnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie überall gibt es immer Raum für Verbesserungen.*"

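Note that the snippet above defines `system_prompt` but does not pass it to the pipeline; to give the model the system message as well, you can prepend it to the formatted user prompt. A minimal sketch (not part of the original card), continuing from the variables defined above:

```python
# Sketch only: reuses system_prompt, prompt_format, prompt and generator from
# the example above and prepends the system message to the user turn.
full_prompt = system_prompt + prompt_format.format(prompt=prompt)
print(generator(full_prompt, do_sample=True, top_p=0.95, max_length=8192))
```
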
## Prompting / Prompt Template

Prompt dialogue template (ChatML format):

```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```

The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```

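For programmatic use, the same template can be assembled from a list of turns. The helper below is only an illustrative sketch (the `build_chatml` function is not part of this repository); recent `transformers` tokenizers may also offer `tokenizer.apply_chat_template` for this, if the model ships a chat template.

```python
# Illustrative sketch, not part of the original model card: build a ChatML
# prompt string that matches the template shown above.
def build_chatml(turns, system_message=None):
    """`turns` is a list of (role, content) tuples, e.g. [("user", "...")]."""
    parts = []
    if system_message is not None:
        parts.append(f"<|im_start|>system\n{system_message}<|im_end|>\n")
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>\n")
    # Leave the final assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml(
    [("user", "Erkläre mir wie die Fahrradwegesituation in Hamburg ist.")],
    system_message="Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.",
)
```
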
## Ethical Considerations and Limitations

LeoLM has been tested in English and German, but this testing has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-mistral-hessianai-7b-chat` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-mistral-hessianai-7b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).

## Finetuning Details

| Hyperparameter | Value |
|---|---|
| Num epochs | 4 |
| Examples per epoch | 131214 |
| Global batch size | 256 |
| Learning rate | 1e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |

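For reference, these values imply roughly the following number of optimizer updates (a back-of-the-envelope sketch, not from the original card):

```python
# Back-of-the-envelope arithmetic from the table above.
examples_per_epoch = 131214
num_epochs = 4
global_batch_size = 256

steps_per_epoch = examples_per_epoch // global_batch_size  # 512 full batches per epoch
total_steps = steps_per_epoch * num_epochs                  # 2048 optimizer updates overall
print(steps_per_epoch, total_steps)
```
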
## Dataset Details
```
## Stats for 'Subset of OpenAssistant/OASST-DE' (3534 samples (100.0%))
-----------------
Accepted: 3534/3534 (100.0%)
Accepted tokens: 2259302
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 639.3044708545557
-----------------

## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------

## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------

## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------

## Stats for 'Subset of LeoLM/German_Poems' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------

## Stats for 'Subset of LeoLM/German_Songs' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------

## Stats for 'total' (132540 samples (100.0%))
-----------------
Accepted: 132540/132540 (100.0%)
Accepted tokens: 67530728
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 5507
Avg tokens per sample: 509.51205673758864
-----------------
```