bjoernp committed on
Commit
8ed10b3
1 Parent(s): c6dbfc3

Create README.md

---
datasets:
- oscar-corpus/OSCAR-2301
- wikipedia
- bjoernp/tagesschau-2018-2023
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM-Mistral, the first open and commercially available German Foundation Language Model built on Mistral 7b.
Our models extend the capabilities of their base models (Mistral and Llama-2) into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release three foundation models trained with 8k context length:
[`LeoLM/leo-mistral-hessianai-7b`](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) under Apache 2.0, and
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post](https://laion.ai/blog/leo-lm/) or our paper (preprint coming soon) for more details!

*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*

## Model Details
- **Finetuned from:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:bjoern.pl@outlook.de)

## Use in 🤗Transformers
First install the direct dependencies:
```bash
pip install transformers torch accelerate
```
If you want faster inference using flash-attention 2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn
```
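Optionally, you can check that flash-attn built and imports correctly (a quick sanity check, not part of the original card):
```python
# Sanity check: raises ImportError if the flash-attn build failed.
import flash_attn
print(flash_attn.__version__)
```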
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "LeoLM/leo-mistral-hessianai-7b",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,  # optional, requires the flash-attn package
)
tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-mistral-hessianai-7b")
```
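To sanity-check the loaded model, here is a minimal generation sketch; the German prompt and sampling settings below are illustrative and not part of the original card:
```python
# Illustrative prompt and sampling settings; adjust as needed.
prompt = "Die Hauptstadt von Deutschland ist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```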

## Training parameters
Note that for Mistral training, we changed the learning rate to `1e-5`, decaying to `1e-6`. We also used ZeRO stage 3 and bfloat16 dtype.
![training_parameters](imgs/training_params.png "Training Hyperparameters")
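As an illustration only (this is not the project's released training code), a minimal sketch of a DeepSpeed configuration expressing the settings named above; the optimizer type and "auto" batch settings are assumptions, and the remaining hyperparameters are those shown in the figure:
```python
# Sketch only: ZeRO stage 3 + bfloat16 as described above; lr=1e-5 from the text.
# The optimizer type and "auto" batch settings are assumptions for illustration.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 3},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
# Can be passed to transformers.TrainingArguments(deepspeed=ds_config, bf16=True, ...)
# when training with the 🤗 Trainer.
```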


## Benchmarks
![benchmarks](imgs/benchmarks.png "Benchmark Scores")