noriyukipy committed on
Commit 0e80705
1 Parent(s): 86ee681

Add models and model card

README.md ADDED
@@ -0,0 +1,63 @@
+ ---
+ language: ja
+ datasets: wikipedia
+ widget:
+ - text: "近年の機械学習は"
+ license: cc-by-sa-3.0
+ ---
+
+ # GPT-2 small Japanese model
+
+ This repository contains a pretrained SentencePiece tokenizer and a GPT-2 small model trained on a Japanese Wikipedia dataset.
+
+ ## Training data
+
+ The [Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of March 1st, 2021, which is released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used to train both the tokenizer and the GPT-2 model.
+ The dataset is split into three subsets: train, valid, and test. Both the tokenizer and the model are trained on the train split.
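+
+ As a rough illustration only (the actual extraction and split procedure is in the training repository linked below), a three-way split could look like the following; `articles` and the split ratios are placeholders, not the values used for this model.
+
+ ```python
+ import random
+
+ # Placeholder data standing in for the extracted plain-text Wikipedia articles.
+ articles = [f"記事{i}の本文..." for i in range(1000)]
+
+ random.seed(0)
+ random.shuffle(articles)
+ n_valid = n_test = len(articles) // 100  # illustrative ratios, not the real ones
+ valid = articles[:n_valid]
+ test = articles[n_valid:n_valid + n_test]
+ train = articles[n_valid + n_test:]
+ ```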
+
+ ## Model description
+
+ The model architecture is the same as the GPT-2 small model (n_ctx: 1024, n_embd: 768, n_head: 12, n_layer: 12) except for the vocabulary size.
+ The vocabulary size is set to 32,000 instead of the original 50,257.
+ `transformers.GPT2LMHeadModel` is used for training.
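+
+ As a minimal sketch (not the exact training setup), such a model can be instantiated with `transformers` as follows; the hyperparameters mirror the description above.
+
+ ```python
+ import transformers
+
+ # GPT-2 small architecture with the vocabulary size reduced to 32,000.
+ config = transformers.GPT2Config(
+     vocab_size=32000,
+     n_positions=1024,
+     n_ctx=1024,
+     n_embd=768,
+     n_head=12,
+     n_layer=12,
+ )
+ model = transformers.GPT2LMHeadModel(config)  # randomly initialized weights
+ print(model.num_parameters())
+ ```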
+
+ ## Tokenizer description
+
+ A [SentencePiece](https://github.com/google/sentencepiece) tokenizer is used for this model.
+
+ The tokenizer model is trained on 10,000,000 samples extracted from the train split of the training data.
+ The vocabulary size is set to 32,000. The `add_dummy_prefix` option is set to `True` because words are not separated by whitespace in Japanese.
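+
+ For illustration, a SentencePiece model with these settings can be trained roughly as follows; `train.txt` is a hypothetical file holding the train split, one sentence per line, and this is not the exact command used for this model.
+
+ ```python
+ import sentencepiece as spm
+
+ # Sketch of SentencePiece training with the settings described above.
+ spm.SentencePieceTrainer.train(
+     input="train.txt",             # placeholder path to the train split
+     model_prefix="spiece",         # writes spiece.model and spiece.vocab
+     vocab_size=32000,
+     input_sentence_size=10000000,  # number of sampled training sentences
+     shuffle_input_sentence=True,
+     add_dummy_prefix=True,
+ )
+ ```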
+
+ After training, the model is loaded with `transformers.BertGenerationTokenizer` because it supports SentencePiece models and does not add any special tokens by default, which is especially useful for text generation.
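+
+ For example, the trained `spiece.model` can be wrapped like this (a sketch; the special tokens mirror `special_tokens_map.json` in this repository):
+
+ ```python
+ import transformers
+
+ # Wrap the trained SentencePiece model; no special tokens are added automatically
+ # when encoding text for generation.
+ tokenizer = transformers.BertGenerationTokenizer(
+     vocab_file="spiece.model",
+     bos_token="<s>",
+     eos_token="</s>",
+     unk_token="<unk>",
+     pad_token="<pad>",
+     sep_token="<sep>",
+ )
+ tokenizer.save_pretrained("output/model")  # writes tokenizer files for later loading
+ ```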
+
+ ## Training
+
+ The model is trained on the train split for 10 epochs with a batch size of 2 and 1,024 tokens per sample (i.e. 2,048 tokens are processed in each batch). Each epoch contains around 250,000 steps.
+ The Adam optimizer is used. The learning rate is linearly decreased from `1e-4` to `0`. Gradient clipping with a norm of `1.0` is also applied.
+ After training, the training loss reaches 3.23, while the validation loss reaches 3.50.
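+
+ The snippet below sketches this optimization setup (Adam, linear decay from `1e-4` to `0`, clip norm `1.0`); the toy batch and step count are illustrative, and the real training loop is in the repository linked below.
+
+ ```python
+ import torch
+ import transformers
+
+ tokenizer = transformers.AutoTokenizer.from_pretrained("colorfulscoop/gpt2-small-ja")
+ model = transformers.AutoModelForCausalLM.from_pretrained("colorfulscoop/gpt2-small-ja")
+
+ optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
+ total_steps = 10 * 250000  # 10 epochs x ~250,000 steps per epoch
+ scheduler = transformers.get_linear_schedule_with_warmup(
+     optimizer, num_warmup_steps=0, num_training_steps=total_steps
+ )
+
+ # One illustrative optimization step on a toy batch.
+ batch = tokenizer("近年の機械学習は", return_tensors="pt")
+ loss = model(input_ids=batch["input_ids"], labels=batch["input_ids"]).loss
+ loss.backward()
+ torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clipping
+ optimizer.step()
+ scheduler.step()
+ optimizer.zero_grad()
+ ```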
+
+ All the code to train the tokenizer and the GPT-2 model is available in [a repository on GitHub](https://github.com/colorfulscoop/tfdlg/tree/63d9531870af428b747554684b186c6316e34c54/examples/transformers-gpt2-ja).
+
+ ## Usage
+
+ First, install the dependencies.
+
+ ```sh
+ $ pip install transformers==4.3.3 torch==1.8.0 sentencepiece==0.1.91
+ ```
+
+ Then load the pretrained tokenizer and GPT-2 model, and call the `generate` method.
+
+ ```python
+ >>> import transformers
+ >>> tokenizer = transformers.AutoTokenizer.from_pretrained("colorfulscoop/gpt2-small-ja")
+ >>> model = transformers.AutoModelForCausalLM.from_pretrained("colorfulscoop/gpt2-small-ja")
+ >>> input = tokenizer.encode("近年の機械学習は", return_tensors="pt")
+ >>> output = model.generate(input, do_sample=True, top_p=0.95, top_k=50, num_return_sequences=3)
+ >>> tokenizer.batch_decode(output)
+ ['近年の機械学習は、特に、コンピューター学習において重要な概念である。この概念は、教育心理学', '近年の機械学習は時間間隔の短縮、時間間隔の短縮、学習時間の短縮、学習の', '近年の機械学習は、学生と学生が自分の能力を高め、結果を向上させることを目的としている。それは、']
+ ```
+
+ ## License
+
+ All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+ "_name_or_path": "output/model",
+ "activation_function": "gelu_new",
+ "architectures": [
+ "GPT2LMHeadModel"
+ ],
+ "attn_pdrop": 0.1,
+ "bos_token_id": 2,
+ "cls_token_id": 4,
+ "embd_pdrop": 0.1,
+ "eos_token_id": 3,
+ "gradient_checkpointing": false,
+ "initializer_range": 0.02,
+ "layer_norm_epsilon": 1e-05,
+ "model_type": "gpt2",
+ "n_ctx": 1024,
+ "n_embd": 768,
+ "n_head": 12,
+ "n_inner": null,
+ "n_layer": 12,
+ "n_positions": 1024,
+ "pad_token_id": 0,
+ "resid_pdrop": 0.1,
+ "sep_token_id": 5,
+ "summary_activation": null,
+ "summary_first_dropout": 0.1,
+ "summary_proj_to_labels": true,
+ "summary_type": "cls_index",
+ "summary_use_proj": true,
+ "tokenizer_class": "BertGenerationTokenizer",
+ "transformers_version": "4.3.3",
+ "unk_token_id": 1,
+ "use_cache": true,
+ "vocab_size": 32000
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01931e624962eb49c467771ad8ac53f5085b73c36b00832d3e7a696a51b94ba6
+ size 454320379
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "<sep>", "pad_token": "<pad>", "cls_token": "<cls>"}
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b003f17b42d4a24bc46a9aa216224f2ff4c93f7402df69b8236707e3e91454d5
+ size 802167
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5fbfff336b880f72b22d5b66422ef579f5d62404fad3280fd8c370f5b1ec6e24
+ size 441848144
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "sep_token": "<sep>", "cls_token": "<cls>"}