benjamin committed on
Commit
1ec98f3
1 Parent(s): 14c3a2e

initial commit

Files changed (7)
  1. README.md +112 -0
  2. config.json +39 -0
  3. merges.txt +0 -0
  4. pytorch_model.bin +3 -0
  5. special_tokens_map.json +1 -0
  6. tokenizer_config.json +1 -0
  7. vocab.json +0 -0
README.md ADDED
@@ -0,0 +1,112 @@
+ ---
+ language: de
+
+ widget:
+ - text: "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einhörner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."
+
+ license: mit
+ ---
+
+ # GerPT2-large
+
+ A large German GPT2.
+
+ See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.
+
+ ## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)
+
+ I evaluated both GerPT2-large and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2), on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:
+
+ |                   | CC-100 (PPL) | Wikipedia (PPL) |
+ |-------------------|--------------|-----------------|
+ | dbmdz/german-gpt2 | 49.47        | 62.92           |
+ | GerPT2            | 24.78        | 35.33           |
+ | GerPT2-large      | 16.08        | 23.26           |
+
+ See the script `evaluate.py` in the [GerPT2 GitHub repository](https://github.com/bminixhofer/gerpt2) for the code.
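+
+ As a reference for what such an evaluation computes, here is a minimal perplexity sketch (an illustrative stand-in, not necessarily how `evaluate.py` implements it):
+
+ ```python
+ import math
+
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
+ model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large")
+ model.eval()
+
+ texts = ["Ein Beispieltext.", "Noch ein Beispieltext."]  # hypothetical sample texts
+
+ total_nll, total_tokens = 0.0, 0
+ with torch.no_grad():
+     for text in texts:
+         input_ids = tokenizer(text, return_tensors="pt").input_ids
+         # With labels == input_ids, the model returns the mean NLL of the shifted tokens.
+         loss = model(input_ids, labels=input_ids).loss
+         n = input_ids.size(1) - 1  # number of predicted tokens
+         total_nll += loss.item() * n
+         total_tokens += n
+
+ print("PPL:", math.exp(total_nll / total_tokens))
+ ```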
+
+ ## Usage
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+ tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
+ model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large")
+
+ prompt = "<your prompt>"
+
+ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
+ print(pipe(prompt)[0]["generated_text"])
+ ```
+
+ Also, two tricks might improve the generated text:
+
+ ```python
+ import torch
+
+ max_length = 100  # maximum length of the generated text; adjust to taste
+
+ output = model.generate(
+     # During training an EOS token was used to mark the beginning of each text,
+     # so it can help to insert it at the start of the prompt.
+     torch.tensor(
+         [tokenizer.eos_token_id] + tokenizer.encode(prompt)
+     ).unsqueeze(0),
+     do_sample=True,
+     # Setting bad_words_ids=[[0]] disallows generating an EOS token. Without this,
+     # the model is prone to ending generation early, because a significant number
+     # of texts from the training corpus are quite short.
+     bad_words_ids=[[0]],
+     max_length=max_length,
+ )[0]
+ print(tokenizer.decode(output))
+ ```
+
+ ## Training details
+
+ GerPT2-large was trained on the entire German data (67GB) from the [CC-100 Corpus](http://data.statmt.org/cc-100/), with weights initialized from the [English GPT2-large model](https://huggingface.co/gpt2-large).
+ GerPT2-large was trained with:
+
+ - a batch size of 256
+ - a OneCycle learning rate schedule with a maximum learning rate of 5e-3
+ - AdamW with a weight decay of 0.01
+ - 2 epochs of training
+
+ Training took roughly 12 days on 8 TPUv3 cores.
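+
+ As a rough sketch of how the optimizer and schedule above can be set up in PyTorch (illustrative only; `steps_per_epoch` is a hypothetical value and the actual setup lives in `train.py`):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ # Initialize from the English GPT2-large weights, as described above.
+ model = AutoModelForCausalLM.from_pretrained("gpt2-large")
+
+ # Hypothetical step count; in practice it follows from corpus size and batch size.
+ steps_per_epoch = 10_000
+
+ optimizer = torch.optim.AdamW(model.parameters(), weight_decay=0.01)
+ scheduler = torch.optim.lr_scheduler.OneCycleLR(
+     optimizer, max_lr=5e-3, epochs=2, steps_per_epoch=steps_per_epoch
+ )
+ ```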
+
+ To train GerPT2-large, follow these steps. Scripts are located in the [GitHub repository](https://github.com/bminixhofer/gerpt2):
+
+ 0. Download and unzip training data from http://data.statmt.org/cc-100/.
+ 1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer, I used a random subset of 5% of the CC-100 data.
+ 2. (Optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings, e.g.:
+
+ ```
+ ĠMinde -> Ġleast
+ Ġjed -> Ġwhatsoever
+ flughafen -> Air
+ vermittlung -> employment
+ teilung -> ignment
+ ĠInterpretation -> Ġinterpretation
+ Ġimport -> Ġimported
+ hansa -> irl
+ genehmigungen -> exempt
+ ĠAuflist -> Ġlists
+ Ġverschwunden -> Ġdisappeared
+ ĠFlyers -> ĠFlyers
+ Kanal -> Channel
+ Ġlehr -> Ġteachers
+ Ġnahelie -> Ġconvenient
+ gener -> Generally
+ mitarbeiter -> staff
+ ```
+
+ This helped a lot in a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix, pass it via the `wte_path` argument to the training script. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights.
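+
+ As a rough illustration of that trick, here is a minimal sketch with hypothetical variable names (`prepare/generate_aligned_wte.py` is the actual implementation). For each German token, it looks up the nearest English token in an aligned word-embedding space and copies that English token's GPT2 input embedding:
+
+ ```python
+ import torch
+
+ # Assumed, hypothetical inputs:
+ # - aligned_de, aligned_en: dicts mapping token string -> word vector, where the
+ #   German and English vectors live in a common aligned space
+ # - en_wte: English GPT2 input embedding matrix, shape (en_vocab_size, n_embd)
+ # - de_vocab, en_vocab: dicts mapping token string -> id for each tokenizer
+ def build_aligned_wte(de_vocab, en_vocab, aligned_de, aligned_en, en_wte):
+     en_tokens = [t for t in en_vocab if t in aligned_en]
+     en_matrix = torch.stack([aligned_en[t] for t in en_tokens])
+     en_matrix = en_matrix / en_matrix.norm(dim=1, keepdim=True)
+
+     # Tokens without an aligned vector fall back to the mean English embedding.
+     de_wte = en_wte.mean(0).repeat(len(de_vocab), 1)
+
+     for de_token, de_id in de_vocab.items():
+         if de_token not in aligned_de:
+             continue
+         v = aligned_de[de_token]
+         v = v / v.norm()
+         # Nearest English token by cosine similarity in the aligned space.
+         nearest = en_tokens[int(torch.argmax(en_matrix @ v))]
+         de_wte[de_id] = en_wte[en_vocab[nearest]]
+     return de_wte
+ ```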
+
+ 3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format.
+ 4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`.
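+
+ A minimal sketch of step 3, assuming hypothetical file names (the actual logic is in `prepare/tokenize_text.py`): tokenize each line of raw text and write the token ids out as JSON Lines.
+
+ ```python
+ import json
+
+ from transformers import AutoTokenizer
+
+ # Using the released tokenizer here for illustration; during training this
+ # would be the tokenizer produced in step 1.
+ tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
+
+ # Hypothetical input/output paths for illustration.
+ with open("de.txt") as src, open("train.jsonl", "w") as dst:
+     for line in src:
+         dst.write(json.dumps(tokenizer.encode(line.strip())) + "\n")
+ ```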
+
+ ## License
+
+ GerPT2-large is licensed under the MIT License.
+
+ ## Acknowledgements
+
+ Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
+ Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training.
config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "gradient_checkpointing": false,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 1280,
+   "n_head": 20,
+   "n_inner": null,
+   "n_layer": 36,
+   "n_positions": 1024,
+   "resid_pdrop": 0.1,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 500
+     }
+   },
+   "vocab_size": 50257
+ }
merges.txt ADDED
The diff for this file is too large to render.
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28e73b994389ab71c24f5b4de30f5898120340b4184da42b916f16b411c963a4
+ size 3391352276
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": "<|endoftext|>"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"errors": "replace", "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false}
vocab.json ADDED
The diff for this file is too large to render.