---
language: ja
thumbnail: https://github.com/rinnakk/japanese-gpt2/blob/master/rinna.png
tags:
- ja
- japanese
- roberta
- masked-lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
---

# japanese-roberta-base
This repository provides a base-sized Japanese RoBERTa model. The model is provided by rinna.
## How to use the model
*NOTE:* Use `T5Tokenizer` to instantiate the tokenizer.
```python
from transformers import T5Tokenizer, AutoModelForMaskedLM

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-roberta-base")
tokenizer.do_lower_case = True  # due to a bug in tokenizer config loading

model = AutoModelForMaskedLM.from_pretrained("rinna/japanese-roberta-base")
```
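Continuing from the snippet above, a minimal sketch of masked-token prediction might look like the following. The example sentence is an illustrative assumption, and the sketch assumes `[CLS]` and `[MASK]` are registered as special tokens of this tokenizer.

```python
import torch

# Illustrative input; the sentence and the masked position are assumptions.
text = "4年に1度[MASK]は開かれる。"

# Prepend [CLS] manually and convert tokens to ids.
tokens = ["[CLS]"] + tokenizer.tokenize(text)
token_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor([token_ids])

# Locate the masked position.
masked_index = tokens.index("[MASK]")

with torch.no_grad():
    outputs = model(input_ids=input_ids)

# Print the top-10 candidate tokens for the masked position.
top10 = outputs.logits[0, masked_index].topk(10)
print(tokenizer.convert_ids_to_tokens(top10.indices.tolist()))
```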
## Model architecture
A 12-layer, 768-hidden-size transformer-based masked language model.
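For reference, these figures can be checked against the published config via the standard transformers API (a small sketch):

```python
from transformers import AutoConfig

# Verify the architecture parameters stated above.
config = AutoConfig.from_pretrained("rinna/japanese-roberta-base")
print(config.num_hidden_layers)  # 12
print(config.hidden_size)        # 768
```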
## Training
The model was trained on Japanese CC-100 and Japanese Wikipedia to optimize a masked language modelling objective on 8 V100 GPUs for around 15 days.
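The training code itself is not part of this repository; as a conceptual sketch of the masked language modelling objective (the token ids, the mask id, and the 15% masking ratio below are illustrative assumptions, not details from this card):

```python
import torch

torch.manual_seed(0)

# Toy input sequence; ids, mask id, and masking ratio are illustrative assumptions.
input_ids = torch.tensor([5, 123, 4567, 89, 1011, 12, 1314])
mask_id = 6

labels = input_ids.clone()
mask = torch.rand(input_ids.shape) < 0.15  # randomly select ~15% of positions
masked_inputs = torch.where(mask, torch.full_like(input_ids, mask_id), input_ids)
labels[~mask] = -100  # non-masked positions are ignored by the loss

# The model is trained to recover the original ids at the masked positions.
print(masked_inputs)
print(labels)
```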
## Tokenization
The model uses a SentencePiece-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official SentencePiece training script.
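As a small illustration of the resulting tokenization (the input sentence is an arbitrary example):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-roberta-base")
tokenizer.do_lower_case = True

# Split an arbitrary Japanese sentence into SentencePiece subwords.
print(tokenizer.tokenize("こんにちは、今日はいい天気ですね。"))
```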