readme: add more references
README.md
CHANGED
@@ -6,7 +6,7 @@ license: mit
 ---
 
 # German GPT-2 model
-In this repository we release (yet another) GPT-2 model, that was trained on ~90 GB from the ["German colossal, clean Common Crawl corpus" ](https://german-nlp-group.github.io/projects/gc4-corpus.html).
+In this repository we release (yet another) GPT-2 model that was trained on ~90 GB from the ["German colossal, clean Common Crawl corpus" (GC4)](https://german-nlp-group.github.io/projects/gc4-corpus.html).
 
 The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
 
@@ -77,6 +77,9 @@ The following archives are used for training the (first version) of this GPT-2 m
 * `de_head_0007_2019-47.tar.gz`
 * `de_head_0007_2020-10.tar.gz`
 
+Details and URLs can be found on the [GC4](https://german-nlp-group.github.io/projects/gc4-corpus.html)
+page.
+
 Archives are then extracted and NLTK (`german` model) is used to sentence split the corpus.
 This results in a total training corpus size of 90GB.
 
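Side note on the sentence-splitting step in the hunk above: a minimal sketch of how the extracted GC4 text could be split with NLTK's pre-trained Punkt model for German. The README does not include the actual preprocessing script, so the sample text below is purely illustrative.

```python
import nltk

# One-time download of the pre-trained Punkt sentence tokenizers
# (newer NLTK releases may ask for "punkt_tab" instead).
nltk.download("punkt")

# Illustrative only: split extracted GC4 text into one sentence per line,
# which is the shape of the training corpus described above.
text = (
    "Die Archive werden zuerst entpackt. "
    "Danach wird der Text mit NLTK in einzelne Sätze zerlegt."
)
for sentence in nltk.sent_tokenize(text, language="german"):
    print(sentence)
```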
@@ -88,15 +91,7 @@ Thus, the tokenizer and vocab is the same as used in the `dbmdz/german-gpt2` mod
 The model was trained on a v3-8 TPU, with the following parameters:
 
 ```bash
-python ./run_clm_flax.py --output_dir=/mnt/datasets/german-gpt2-larger/
---name_or_path dbmdz/german-gpt2 --do_train --do_eval --block_size=512 \
---per_device_train_batch_size=16 --per_device_eval_batch_size=16 \
---learning_rate=5e-3 --warmup_steps=1000 --adam_beta1=0.9 --adam_beta2=0.98 \
---weight_decay=0.01 --overwrite_output_dir --num_train_epochs=20 \
---logging_steps=500 --save_steps=2500 --eval_steps=2500 \
---train_file /mnt/datasets/gc4/train.txt \
---validation_file /mnt/datasets/gc4/validation.txt \
---preprocessing_num_workers 16
+python ./run_clm_flax.py --output_dir=/mnt/datasets/german-gpt2-larger/ --name_or_path dbmdz/german-gpt2 --do_train --do_eval --block_size=512 --per_device_train_batch_size=16 --per_device_eval_batch_size=16 --learning_rate=5e-3 --warmup_steps=1000 --adam_beta1=0.9 --adam_beta2=0.98 --weight_decay=0.01 --overwrite_output_dir --num_train_epochs=20 --logging_steps=500 --save_steps=2500 --eval_steps=2500 --train_file /mnt/datasets/gc4/train.txt --validation_file /mnt/datasets/gc4/validation.txt --preprocessing_num_workers 16
 ```
 
 Training took around 17 days for 20 epochs.
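For completeness, a hedged sketch of how the finished checkpoint might be loaded for generation. The path is simply the `--output_dir` from the command above; `run_clm_flax.py` writes a Flax checkpoint there, and whether the tokenizer files end up in the same directory depends on the script version (otherwise the `dbmdz/german-gpt2` tokenizer mentioned in the hunk header can be used). Treat this as an illustration, not the repository's documented usage.

```python
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

# Assumed location: the --output_dir used in the training command above.
model_dir = "/mnt/datasets/german-gpt2-larger/"

tokenizer = AutoTokenizer.from_pretrained(model_dir)  # or "dbmdz/german-gpt2"
model = FlaxAutoModelForCausalLM.from_pretrained(model_dir)

inputs = tokenizer("Der Sinn des Lebens ist", return_tensors="np")
# Greedy decoding keeps the example minimal; pad_token_id is set explicitly
# because GPT-2 style models ship without a dedicated padding token.
outputs = model.generate(
    inputs["input_ids"],
    max_length=40,
    pad_token_id=model.config.eos_token_id,
)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)[0])
```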