---
language: ja
tags:
  - exbert
license: cc-by-sa-4.0
datasets:
  - wikipedia
  - cc100
mask_token: '[MASK]'
widget:
  - text: 早稲田 大学 で 自然 言語 処理 を [MASK] する 。
---

# nlp-waseda/roberta-base-japanese

## Model description

This is a Japanese RoBERTa model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.

## How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")
```
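
Continuing from the snippet above, a minimal fill-mask sketch (the input sentence reuses the widget example and is assumed to be pre-segmented into words; `torch` must be installed):

```python
import torch

# Pre-segmented input: words are separated by spaces, as required by this model.
sentence = "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and show the top-5 predicted subwords for it.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```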

## Tokenization

The input text should be segmented into words by Juman++ in advance, as in the sketch below. Each word is then tokenized into subwords by SentencePiece.
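
As an illustration, a segmentation sketch using the pyknp wrapper for Juman++. This assumes Juman++ and pyknp are installed; pyknp is one common way to call Juman++ from Python, not necessarily the setup the authors used:

```python
from pyknp import Juman
from transformers import AutoTokenizer

# Segment a raw sentence into words with Juman++ via the pyknp wrapper.
juman = Juman()  # assumes the jumanpp binary is on PATH
result = juman.analysis("早稲田大学で自然言語処理を研究する。")
segmented = " ".join(mrph.midasi for mrph in result.mrph_list())
print(segmented)  # e.g. "早稲田 大学 で 自然 言語 処理 を 研究 する 。"

# The space-separated words are then split into subwords by the
# SentencePiece-based tokenizer.
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
print(tokenizer.tokenize(segmented))
```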

## Vocabulary

## Training procedure