stefan-it committed
Commit 7d6a34c
1 Parent(s): 7792e33

readme: add initial version

Files changed (1): README.md (+33, -0)

README.md ADDED

---
language: de
widget:
- text: "Heute ist sehr schönes Wetter in"
license: mit
---

# German GPT-2 model

In this repository we release (yet another) GPT-2 model that was trained on ~100 GB of data from the ["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4).
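
As a quick usage sketch, the checkpoint can be loaded with the `transformers` text-generation pipeline. The model ID below is a placeholder for this repository's ID on the Hugging Face Hub, and the prompt is the widget example from the metadata above:

```python
from transformers import pipeline

# Placeholder model ID; replace with this repository's ID on the Hugging Face Hub.
model_id = "stefan-it/german-gpt2"

# Build a text-generation pipeline (downloads model and tokenizer on first use).
generator = pipeline("text-generation", model=model_id)

# Prompt taken from the widget example in the model card metadata above.
outputs = generator(
    "Heute ist sehr schönes Wetter in",
    max_length=40,  # total length: prompt plus generated tokens
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```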

The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
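
Since the model is intended as a fine-tuning entry point, here is a minimal fine-tuning sketch with the `transformers` Trainer. It assumes a hypothetical plain-text corpus `my_corpus.txt` (one example per line) and reuses the placeholder model ID from above:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "stefan-it/german-gpt2"  # placeholder ID, see note above
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical plain-text training file, one example per line.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects causal language modeling: labels are the inputs, shifted internally.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="german-gpt2-finetuned",
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```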

---

**Disclaimer**: the language models presented and trained in this repository are intended for **research purposes only**. The GC4 corpus, which was used for training, contains texts crawled from the internet. This GPT-2 model can therefore be considered highly biased, encoding stereotypical associations along gender, race, ethnicity, and disability status. Before using and working with the released checkpoints, it is highly recommended to read:

[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)

by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell.

The aim of releasing this GPT-2 model for German is to boost research on (large) pre-trained language models for German, especially on identifying biases and how to prevent them, as most such research is currently done for English only.

---

# Changelog

06.09.2021: Initial release. Detailed information about the training parameters will follow soon.