---
language: ja
thumbnail: https://github.com/rinnakk/japanese-gpt2/blob/master/rinna.png
tags:
- ja
- japanese
- roberta
- masked-lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
---

# japanese-roberta-base

![rinna-icon](./rinna.png)

This repository provides a base-sized Japanese RoBERTa model. The model is provided by [rinna](https://corp.rinna.co.jp/).

# How to use the model

*NOTE:* Use `T5Tokenizer` to instantiate the tokenizer.

~~~~
from transformers import T5Tokenizer, AutoModelForMaskedLM

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-roberta-base")
tokenizer.do_lower_case = True  # workaround for a bug in loading the tokenizer config

# the checkpoint is a masked language model, so load it with the masked-LM head
model = AutoModelForMaskedLM.from_pretrained("rinna/japanese-roberta-base")
~~~~
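
As a minimal usage sketch (not part of the original card), the loaded model can fill in a masked position. The sample sentence, the masked index, and the assumption that the tokenizer config defines a `[MASK]` token are illustrative only.

~~~~
import torch

# Arbitrary example sentence: "The Olympics are held once every 4 years."
text = "4年に1度オリンピックは開かれる。"

# Tokenize, mask one subword position, and convert to ids.
tokens = tokenizer.tokenize(text)
masked_idx = 3  # arbitrary position chosen for illustration
tokens[masked_idx] = tokenizer.mask_token
input_ids = torch.LongTensor([tokenizer.convert_tokens_to_ids(tokens)])

# Score the vocabulary at the masked position and print the top candidates.
with torch.no_grad():
    logits = model(input_ids=input_ids).logits
top_ids = logits[0, masked_idx].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
~~~~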

# Model architecture
A 12-layer, 768-hidden-size transformer-based masked language model.
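
As a quick sanity check (not part of the original card), these architecture numbers can be read from the loaded configuration; the attribute names below are standard `transformers` RoBERTa config fields.

~~~~
# Assumes `model` was loaded as shown above.
config = model.config
print(config.num_hidden_layers)    # expected: 12
print(config.hidden_size)          # expected: 768
print(config.num_attention_heads)  # attention heads per layer
~~~~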

# Training
The model was trained on [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/) to optimize a masked language modelling objective on 8 V100 GPUs for around 15 days.

# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official sentencepiece training script.
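
For illustration (the sample sentence below is arbitrary and not from the original card), the sentencepiece tokenizer can be inspected as follows.

~~~~
# Assumes `tokenizer` was loaded as shown above (with do_lower_case set).
text = "こんにちは、世界。"  # "Hello, world."
print(tokenizer.tokenize(text))  # sentencepiece subword pieces
print(tokenizer.encode(text))    # corresponding vocabulary ids
print(tokenizer.vocab_size)      # size of the sentencepiece vocabulary
~~~~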

# License
[The MIT license](https://opensource.org/licenses/MIT)