
Model Card for Japanese BART base

Model description

This is a Japanese BART base model pre-trained on Japanese Wikipedia.

How to use

You can use this model as follows:

from transformers import AutoTokenizer, MBartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/bart-base-japanese')
model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-base-japanese')

sentence = '京都 大学 で 自然 言語 処理 を 専攻 する 。'  # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')
...
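
The snippet above is truncated in the card. As an illustrative continuation (not taken from the card), you could run a forward pass or generate text from the encoded input; the generation settings below (max_length, num_beams) are arbitrary example values:

output = model(**encoding)  # forward pass; output.logits has shape (batch, sequence_length, vocab_size)

# or generate text from the encoded input (example settings, not from the card)
generated = model.generate(**encoding, max_length=32, num_beams=4)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))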

You can fine-tune this model on downstream tasks.
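
As an illustration of what a single fine-tuning step could look like, here is a minimal sketch with one hypothetical source/target pair. It reuses the tokenizer and model loaded above and assumes a recent transformers version where the tokenizer accepts text_target; the toy data, learning rate, and training loop are not from this card:

import torch

# toy source/target pair, both pre-segmented by Juman++ (hypothetical data)
sources = ['京都 大学 で 自然 言語 処理 を 専攻 する 。']
targets = ['自然 言語 処理 を 研究 する 。']

# reuse the tokenizer and model loaded in the snippet above
batch = tokenizer(sources, text_target=targets, return_tensors='pt', padding=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # example learning rate
model.train()
loss = model(**batch).loss  # labels come from the tokenized targets
loss.backward()
optimizer.step()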

Tokenization

The input text should be segmented into words by Juman++ in advance. Juman++ 2.0.0-rc3 was used for pre-training. Each word is tokenized into subwords by sentencepiece.
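
If you want to produce the word-segmented input programmatically, one common option (not prescribed by this card) is the pyknp wrapper around Juman++; the sketch below assumes pyknp is installed and the jumanpp binary is on your PATH:

from pyknp import Juman

jumanpp = Juman(command='jumanpp')  # calls the jumanpp binary installed on the system
result = jumanpp.analysis('京都大学で自然言語処理を専攻する。')
segmented = ' '.join(m.midasi for m in result.mrph_list())
print(segmented)  # e.g. '京都 大学 で 自然 言語 処理 を 専攻 する 。'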

Training data

We used the following corpora for pre-training:

  • Japanese Wikipedia (18M sentences)

Training procedure

We first segmented the texts in the corpora into words using Juman++. Then, we built a sentencepiece model with a vocabulary of 32,000 tokens, consisting of words (from JumanDIC) and subwords induced by the unigram language model of sentencepiece.
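
For reference, training a unigram sentencepiece model of that size looks roughly like the following; the file name and the idea of injecting JumanDIC words via user_defined_symbols are assumptions for illustration, not the exact recipe behind this model:

import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input='wiki_segmented.txt',   # word-segmented corpus, one sentence per line (hypothetical file)
    model_prefix='japanese_bart',
    vocab_size=32000,
    model_type='unigram',
    # user_defined_symbols=[...]  # one possible way to force JumanDIC words into the vocabulary (assumption)
)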

We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese BART model with the fairseq library. The training took two weeks on 4 Tesla V100 GPUs.
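
Applying the trained sentencepiece model to the word-segmented text would look like the following sketch (the model file name is the hypothetical one from the training sketch above):

import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file='japanese_bart.model')  # hypothetical path
pieces = sp.encode('京都 大学 で 自然 言語 処理 を 専攻 する 。', out_type=str)
print(pieces)  # subword pieces of the kind fed to fairseq for pre-training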

The following hyperparameters were used during pre-training:

  • distributed_type: multi-GPU
  • num_devices: 4
  • batch_size: 512
  • training_steps: 500,000
  • encoder layers: 6
  • decoder layers: 6
  • hidden size: 768