conan1024hao committed
Commit 0d2c96f (parent: 00d40f3)

update readme

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -9,7 +9,7 @@ widget:
 - text: "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
 ---
 
-# nlp-waseda/roberta-base-japanese
+# nlp-waseda/roberta-base-japanese-with-auto-jumanpp
 
 ## Model description
 
@@ -20,8 +20,8 @@ This is a Japanese RoBERTa base model pretrained on Japanese Wikipedia and the J
 You can use this model for masked language modeling as follows:
 ```python
 from transformers import AutoTokenizer, AutoModelForMaskedLM
-tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
-model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")
+tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp")
+model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp")
 
 sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
 encoding = tokenizer(sentence, return_tensors='pt')
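The README snippet in this diff stops at `encoding = tokenizer(...)`; the remaining step, feeding the encoding to the model and reading off the top candidate at the `[MASK]` position, is not shown in the hunk. A minimal sketch of that step follows. To keep it runnable without downloading the model (or installing Juman++), the vocabulary, input ids, and logits below are small hand-made stand-ins for what `tokenizer` and `model(**encoding).logits` would produce; only the indexing logic is the point.

```python
import torch

# Stand-in vocabulary: real ids would come from the model's tokenizer.
vocab = ["研究", "勉強", "開発", "[MASK]"]
mask_token_id = 3

# Dummy input ids with a [MASK] at position 2 (in practice: encoding["input_ids"]).
input_ids = torch.tensor([[0, 1, 3, 2]])

# Dummy logits standing in for model(**encoding).logits,
# shape (batch_size, sequence_length, vocab_size).
logits = torch.tensor([[[0.1, 0.2, 0.1, 0.0],
                        [0.0, 0.9, 0.1, 0.0],
                        [2.0, 0.5, 0.3, 0.0],   # [MASK] position favours "研究"
                        [0.1, 0.1, 0.8, 0.0]]])

# Locate the [MASK] position, then take the argmax over the vocabulary there.
mask_positions = (input_ids == mask_token_id).nonzero(as_tuple=True)
predicted_id = logits[mask_positions].argmax(dim=-1)
print(vocab[predicted_id[0]])  # prints "研究"
```

With the real model one would replace the dummy tensors with `encoding["input_ids"]` and `model(**encoding).logits`, and map the predicted id back to a token with `tokenizer.convert_ids_to_tokens`.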