---
language: de
widget:
- text: "Heute ist sehr schönes Wetter in"
license: mit
---

# German GPT-2 model
In this repository we release (yet another) GPT-2 model that was trained on ~100 GB of text from the ["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html).

The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
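
Since the model is meant as a starting point for fine-tuning, here is a minimal fine-tuning sketch using the Transformers `Trainer` API. Note that this is an illustrative sketch, not the original training setup: the corpus file `train.txt`, the block size, and all hyperparameters are placeholder assumptions that you should adapt to your corpus and hardware.

```python
# Minimal fine-tuning sketch (illustrative, not the original training setup).
# "train.txt" and all hyperparameters below are placeholder assumptions.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TextDataset,
    Trainer,
    TrainingArguments,
)

model_name = "stefan-it/german-gpt2-larger"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Chunk the raw text file into fixed-length blocks for training
train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)

# mlm=False: GPT-2 uses a causal language modeling objective, not masked LM
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="german-gpt2-larger-finetuned",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    save_steps=500,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)

trainer.train()
```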

---

**Disclaimer**: the language models presented and trained in this repository are for **research purposes only**.
The GC4 corpus that was used for training contains crawled texts from the internet. Thus, this GPT-2 model can
be considered highly biased, encoding stereotypical associations along gender, race,
ethnicity, and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:

[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)

by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell.

The aim of releasing this German GPT-2 model is to boost research on (large) pre-trained language models for German, especially
on identifying biases and how to prevent them, as most of this research is currently done for English only.

---

# Changelog

06.09.2021: Initial release. Detailed information about the training parameters will follow soon.

# Text Generation

The following code snippet can be used to generate text with this German GPT-2 model:

```python
from transformers import pipeline

model_name = "stefan-it/german-gpt2-larger"

# Build a text-generation pipeline with the model and its matching tokenizer
pipe = pipeline("text-generation", model=model_name, tokenizer=model_name)

# Generate a continuation of the German prompt ("The meaning of life is to")
text = pipe("Der Sinn des Lebens ist es", max_length=200)[0]["generated_text"]

print(text)
```
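
If you want more varied continuations, sampling parameters can be passed through the pipeline call. The values below are illustrative assumptions, not tuned recommendations:

```python
# Sampling-based decoding often gives more varied German text.
# top_k, top_p, and num_return_sequences here are illustrative assumptions.
results = pipe(
    "Der Sinn des Lebens ist es",
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
)

for result in results:
    print(result["generated_text"])
```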

# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download this model from their S3 storage 🤗

This project profited heavily from the amazing Hugging Face
[Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104).
Many thanks for the great organization and discussions during and after the week!