cl-tohoku committed
Commit 653367c
1 parent: 4238820

Create README.md

Files changed (1): README.md added (+43 −0)
 
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: "東北大学で[MASK]の研究をしています。"
---

# BERT base Japanese (IPA dictionary, whole word masking enabled)

This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.

This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.

The code for pretraining is available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
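
A minimal sketch of querying the model through the Transformers `fill-mask` pipeline, using the widget example above. The model ID below is assumed from this repository's organization and may need to be replaced with the actual model name; the Japanese tokenizer additionally requires a MeCab binding such as `fugashi` together with `ipadic`.

```python
from transformers import pipeline

# Assumed model ID; replace with the actual name of this repository on the Hub.
model_id = "cl-tohoku/bert-base-japanese-whole-word-masking"

fill_mask = pipeline("fill-mask", model=model_id)

# Widget example above: "I am doing research on [MASK] at Tohoku University."
for prediction in fill_mask("東北大学で[MASK]の研究をしています。"):
    print(prediction["token_str"], prediction["score"])
```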

## Model architecture

The model architecture is the same as the original BERT base model: 12 layers, 768 dimensions of hidden states, and 12 attention heads.
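
For reference, a minimal sketch of this architecture expressed as a Transformers `BertConfig`; only the three dimensions above and the vocabulary size from the Tokenization section are taken from this card, and every other setting is a library default.

```python
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    vocab_size=4000,         # from the Tokenization section below
    num_hidden_layers=12,    # 12 layers
    hidden_size=768,         # 768-dimensional hidden states
    num_attention_heads=12,  # 12 attention heads
)

# Randomly initialized model of the same shape, for illustration only.
model = BertForMaskedLM(config)
print(f"{model.num_parameters():,} parameters")
```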

## Training Data

The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain text from a dump file of Wikipedia articles.
The text files used for the training are 2.6 GB in size, consisting of approximately 17M sentences.

## Tokenization

The texts are first tokenized by the [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into characters.
The vocabulary size is 4000.
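
To make the two-step scheme concrete, here is a minimal sketch assuming the `fugashi` MeCab binding and the `ipadic` dictionary package (neither is named in this card): a sentence is first split into words with MeCab and the IPA dictionary, then each word is split into characters. The released tokenizer additionally maps these characters onto its 4000-entry vocabulary.

```python
import fugashi  # pip install fugashi ipadic
import ipadic

# MeCab tagger backed by the IPA dictionary.
tagger = fugashi.GenericTagger(ipadic.MECAB_ARGS)

# "I am doing research on natural language processing at Tohoku University."
text = "東北大学で自然言語処理の研究をしています。"

words = [word.surface for word in tagger(text)]    # word-level tokenization
chars = [char for word in words for char in word]  # character-level split
print(words)
print(chars)
```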

## Training

The model is trained with the same configuration as the original BERT: 512 tokens per instance, 256 instances per batch, and 1M training steps.
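
For convenience, the stated hyperparameters collected in one place, with keys named after the corresponding flags of the original BERT pretraining script; all other settings follow the original BERT defaults and are not reproduced here.

```python
# Only these three values are given in this card.
pretraining_config = {
    "max_seq_length": 512,         # tokens per training instance
    "train_batch_size": 256,       # instances per batch
    "num_train_steps": 1_000_000,  # 1M training steps
}
print(pretraining_config)
```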

## Licenses

The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
40
+
41
+ ## Acknowledgments
42
+
43
+ For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.