BertJapaneseΒΆ

OverviewΒΆ

BERT models trained on Japanese text.

There are models with two different tokenization methods:

  • Tokenize with MeCab and WordPiece. This requires an extra dependency, fugashi, which is a wrapper around MeCab.

  • Tokenize into characters.

To use MecabTokenizer, you should run pip install transformers["ja"] (or pip install -e .["ja"] if you install from source) to install the dependencies.
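
A quick way to confirm that the MeCab dependencies are available is to import fugashi directly (a minimal check; if it raises ImportError, install the "ja" extras as above):

>>> import fugashi  # fails with ImportError if the "ja" extras are not installed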

See details on the cl-tohoku repository.

Example of using a model with MeCab and WordPiece tokenization:

>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")

>>> ## Input Japanese Text
>>> line = "εΎθΌ©γ―ηŒ«γ§γ‚γ‚‹γ€‚"

>>> inputs = tokenizer(line, return_tensors="pt")

>>> print(tokenizer.decode(inputs['input_ids'][0]))
[CLS] 吾輩 は 猫 で ある 。 [SEP]

>>> outputs = bertjapanese(**inputs)
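
The forward pass returns the usual BERT outputs. As a minimal sketch (assuming the base model's hidden size of 768), the final hidden states contain one vector per token, including [CLS] and [SEP]:

>>> outputs.last_hidden_state.shape  # batch of 1, 8 tokens as decoded above
torch.Size([1, 8, 768])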

Example of using a model with Character tokenization:

>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")

>>> ## Input Japanese Text
>>> line = "εΎθΌ©γ―ηŒ«γ§γ‚γ‚‹γ€‚"

>>> inputs = tokenizer(line, return_tensors="pt")

>>> print(tokenizer.decode(inputs['input_ids'][0]))
[CLS] 吾 θΌ© は 猫 で あ γ‚‹ 。 [SEP]

>>> outputs = bertjapanese(**inputs)
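
Character tokenization splits the same sentence into more pieces, as the decoded output above shows: 10 tokens instead of the 8 produced by MeCab and WordPiece:

>>> inputs["input_ids"].shape  # 10 tokens, versus 8 in the MeCab/WordPiece example
torch.Size([1, 10])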

Tips:

  • This implementation is the same as BERT, except for the tokenization method. Refer to the documentation of BERT for more usage examples; because only the tokenizer differs, the standard BERT heads work unchanged, as sketched below.
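
As a small illustration, the checkpoint can be used with the fill-mask pipeline. This is a minimal sketch; the predictions depend on the checkpoint and are not shown here:

>>> from transformers import pipeline

>>> fill_mask = pipeline("fill-mask", model="cl-tohoku/bert-base-japanese")
>>> fill_mask("εΎθΌ©γ―[MASK]γ§γ‚γ‚‹γ€‚")  # returns the highest-scoring substitutions for [MASK]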

This model was contributed by cl-tohoku.

BertJapaneseTokenizerΒΆ

class transformers.BertJapaneseTokenizer(vocab_file, do_lower_case=False, do_word_tokenize=True, do_subword_tokenize=True, word_tokenizer_type='basic', subword_tokenizer_type='wordpiece', never_split=None, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', mecab_kwargs=None, **kwargs)

Construct a BERT tokenizer for Japanese text, based on a MecabTokenizer.

Parameters
  • vocab_file (str) – Path to a one-wordpiece-per-line vocabulary file.

  • do_lower_case (bool, optional, defaults to False) – Whether to lowercase the input. Only has an effect when do_word_tokenize=True.

  • do_word_tokenize (bool, optional, defaults to True) – Whether to do word tokenization.

  • do_subword_tokenize (bool, optional, defaults to True) – Whether to do subword tokenization.

  • word_tokenizer_type (str, optional, defaults to "basic") – Type of word tokenizer. Choose from "basic" or "mecab".

  • subword_tokenizer_type (str, optional, defaults to "wordpiece") – Type of subword tokenizer. Choose from "wordpiece" or "character".

  • mecab_kwargs (dict, optional) – Dictionary passed to the MecabTokenizer constructor.
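
For reference, a minimal sketch of constructing the tokenizer directly with MeCab word tokenization. The mecab_kwargs dictionary is passed through to MecabTokenizer; "mecab_dic": "ipadic" is an assumption about which MeCab dictionary is installed:

>>> from transformers import BertJapaneseTokenizer

>>> tokenizer = BertJapaneseTokenizer.from_pretrained(
...     "cl-tohoku/bert-base-japanese",
...     word_tokenizer_type="mecab",
...     subword_tokenizer_type="wordpiece",
...     mecab_kwargs={"mecab_dic": "ipadic"},  # assumed: the ipadic dictionary is installed
... )
>>> tokenizer.tokenize("εΎθΌ©γ―ηŒ«γ§γ‚γ‚‹γ€‚")
['吾輩', 'は', '猫', 'で', 'ある', '。']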