cl-tohoku committed fb86e47 (1 parent: 855e787)

Create README.md

Files changed (1): README.md added (+46 -0)

---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: "東北大学で[MASK]の研究をしています。"
---

# BERT base Japanese (IPA dictionary, whole word masking enabled)

This is a [BERT](https://github.com/google-research/bert) model pretrained on Japanese text.

This version of the model processes input text with word-level tokenization based on the IPA dictionary, followed by WordPiece subword tokenization.
Additionally, the model is trained with whole word masking enabled for the masked language modeling (MLM) objective.

The code used for pretraining is available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
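
As a quick usage sketch (not part of the original card), assuming the model is published on the Hugging Face Hub under an id such as `cl-tohoku/bert-base-japanese-whole-word-masking` and that the MeCab bindings required by the tokenizer (e.g. `fugashi` and `ipadic`) are installed, the fill-mask pipeline from `transformers` can be used as follows:

```python
# Minimal fill-mask sketch; the Hub id below is an assumption, and the
# tokenizer additionally needs MeCab bindings (e.g. fugashi + ipadic).
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="cl-tohoku/bert-base-japanese-whole-word-masking",
)

# Same example sentence as the widget in the front matter above.
for prediction in fill_mask("東北大学で[MASK]の研究をしています。"):
    print(prediction["token_str"], prediction["score"])
```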

## Model architecture

The model architecture is the same as the original BERT base model: 12 layers, a hidden size of 768, and 12 attention heads.
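
For reference, a configuration sketch (not from the original card) that matches these dimensions, using the `BertConfig` class from `transformers`; the intermediate size shown is the standard BERT-base value and is an assumption here:

```python
# Sketch of a BERT-base configuration matching the stated architecture,
# with the 32,000-token WordPiece vocabulary described below.
from transformers import BertConfig

config = BertConfig(
    vocab_size=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,        # standard BERT-base value (assumption)
    max_position_embeddings=512,
)
print(config)
```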

## Training Data

The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain text from a dump file of Wikipedia articles.
The text files used for training are 2.6 GB in size, consisting of approximately 17M sentences.

## Tokenization

The texts are first tokenized by the [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32,000.
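
To illustrate this two-step tokenization (not part of the original card), here is a small sketch using the `BertJapaneseTokenizer` class from `transformers`, again assuming the Hub id used above; the example sentence is arbitrary:

```python
# Sketch: MeCab (IPA dictionary) word segmentation followed by
# WordPiece subword splitting, as performed by BertJapaneseTokenizer.
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained(
    "cl-tohoku/bert-base-japanese-whole-word-masking"
)

print(tokenizer.tokenize("東北大学で自然言語処理の研究をしています。"))
print(tokenizer.vocab_size)  # expected to be 32000
```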

## Training

The model is trained with the same configuration as the original BERT: 512 tokens per instance, 256 instances per batch, and 1M training steps.

For the MLM (masked language modeling) objective, we introduced **Whole Word Masking**, in which all of the subword tokens corresponding to a single word (as tokenized by MeCab) are masked at once.
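
A toy sketch of the idea behind whole word masking (an illustration only, not the actual pretraining code): WordPiece continuation pieces start with `##`, so a whole word is a run of the form `[piece, ##piece, ...]`, and all pieces in a selected run are masked together:

```python
import random

# Toy illustration of whole word masking: subword pieces starting with
# "##" continue the previous word, so words are runs [piece, ##piece, ...]
# and every piece of a selected word is replaced by [MASK] at once.
def whole_word_mask(pieces, mask_rate=0.15, mask_token="[MASK]"):
    words = []
    for piece in pieces:
        if piece.startswith("##") and words:
            words[-1].append(piece)      # continuation of the previous word
        else:
            words.append([piece])        # start of a new word
    masked = []
    for word in words:
        if random.random() < mask_rate:
            masked.extend([mask_token] * len(word))  # mask the whole word
        else:
            masked.extend(word)
    return masked

# Illustrative (hypothetical) WordPiece output for a short sentence.
print(whole_word_mask(["東北", "##大学", "で", "研究", "を", "し", "て", "い", "ます"]))
```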

## Licenses

The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).

## Acknowledgments

For training the models, we used Cloud TPUs provided by the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.