---
language: de
widget:
  - text: Heute ist sehr schönes Wetter in
license: mit
---

# German GPT-2 model

In this repository we release (yet another) GPT-2 model that was trained on ~100 GB of text from the "German colossal, clean Common Crawl corpus" (GC4).

The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
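The checkpoint can be loaded with the Hugging Face Transformers library for text generation or as a starting point for fine-tuning. The snippet below is a minimal sketch; the model id `stefan-it/german-gpt2-larger` is assumed from this repository's name:

```python
from transformers import pipeline

# Load the released checkpoint (model id assumed from this repository's name).
generator = pipeline("text-generation", model="stefan-it/german-gpt2-larger")

# Generate a continuation for the widget example prompt.
output = generator(
    "Heute ist sehr schönes Wetter in",
    max_length=50,
    num_return_sequences=1,
)
print(output[0]["generated_text"])
```

Fine-tuning on your own texts can then follow the usual causal language modeling setup in Transformers (e.g. `AutoModelForCausalLM` together with the `Trainer` API).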


Disclaimer: the language models presented and trained in this repository are for research purposes only. The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, this GPT-2 model can be considered highly biased, resulting in a model that encodes stereotypical associations along gender, race, ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended to read:

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.

The aim of releasing this German GPT-2 model is to boost research on (large) pre-trained language models for German, especially on identifying biases and how to prevent them, as most current research is done for English only.


# Changelog

06.09.2021: Initial release. Detailed information about training parameters will follow soon.