bjoernp committed on
Commit f54bb78 · 1 Parent(s): 0f9d0d2

Create README.md

Files changed (1)
  1. README.md +59 -0
README.md ADDED
@@ -0,0 +1,59 @@
---
datasets:
- oscar-corpus/OSCAR-2301
- wikipedia
- bjoernp/tagesschau-2018-2023
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
license: llama2
---
# LAION LeoLM 70b: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release a series of foundation models trained with 8k context length
under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt). Now, we're finally releasing the
much-anticipated `leo-hessianai-70b`, the largest model of this series, based on `Llama-2-70b`.
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post](https://laion.ai/blog/leo-lm/) or our paper (preprint coming soon) for more details!

*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*

## Model Details
- **Finetuned from:** [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:bjoern.pl@outlook.de)

## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch
```

Then load the model in transformers. Note that this requires a lot of VRAM and most likely multiple devices. Use `load_in_8bit=True` or `load_in_4bit=True`
to save some memory by loading a quantized version (a sketch of 4-bit loading follows the snippet below). For more quantized versions, check out our models at TheBloke's page: (coming soon!)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeoLM/leo-hessianai-70b"

# load the tokenizer and the bf16 weights, sharding them across all available GPUs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=False,  # set to True to use FA2; requires `pip install flash-attn`
)
```
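
If the full bf16 checkpoint does not fit into your GPU memory, here is a minimal sketch of 4-bit loading with `bitsandbytes` (this assumes `pip install bitsandbytes accelerate` on top of the dependencies above; the specific quantization settings are illustrative, not a recommended configuration):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "LeoLM/leo-hessianai-70b"

# illustrative 4-bit (NF4) quantization config: weights are stored in 4 bit,
# while matmuls are computed in bfloat16
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)
```
The `load_in_8bit=True` / `load_in_4bit=True` shortcuts mentioned above can also be passed to `from_pretrained` directly; `BitsAndBytesConfig` simply exposes a few more knobs.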
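
Once loaded, generation works as with any other causal LM in 🤗Transformers. As a quick sanity check (the German prompt and sampling parameters below are only an illustration):
```python
# reusing `model` and `tokenizer` from the loading snippet above;
# this is a foundation model, not a chat model, so plain text continuation works best
prompt = "Die Hauptstadt von Deutschland ist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
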
## Training parameters
![training_parameters](imgs/training_params.png "Training Hyperparameters")


## Benchmarks
![benchmarks](imgs/benchmarks.png "Benchmark Scores")
![benchmarks](imgs/translation_scores.png "Translation Benchmark Scores")