
ku-accms/bert-base-japanese-ssuw

Model description

This is a pre-trained Japanese BERT base model for super short unit words (SSUW).

Pre-processing

The input text should be converted to full-width (zenkaku) characters and segmented into super short unit words in advance (e.g., by KyTea).

How to use

You can use this model directly with a pipeline for masked language modeling:

>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='ku-accms/bert-base-japanese-ssuw')
>>> unmasker("京都 大学 で [MASK] を 専攻 する 。")
[{'sequence': '京都 大学 で 文学 を 専攻 する 。',
  'score': 0.1464807540178299,
  'token': 14603,
  'token_str': '文学'},
 {'sequence': '京都 大学 で 哲学 を 専攻 する 。',
  'score': 0.08064978569746017,
  'token': 15917,
  'token_str': '哲学'},
 {'sequence': '京都 大学 で 演劇 を 専攻 する 。',
  'score': 0.0800977498292923,
  'token': 16772,
  'token_str': '演劇'},
 {'sequence': '京都 大学 で 法学 を 専攻 する 。',
  'score': 0.04579947143793106,
  'token': 16255,
  'token_str': '法学'},
 {'sequence': '京都 大学 で 英語 を 専攻 する 。',
  'score': 0.045536939054727554,
  'token': 14592,
  'token_str': '英語'}]
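
By default the fill-mask pipeline returns the five highest-scoring candidates; a different number can be requested with the pipeline's top_k argument:

>>> unmasker("京都 大学 で [MASK] を 専攻 する 。", top_k=10)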

Here is how to use this model to get the features of a given text in PyTorch:

import zenhan
import Mykytea

# Path to a KyTea model file.
kytea_model_path = "somewhere"
kytea = Mykytea.Mykytea("-model {} -notags".format(kytea_model_path))

def preprocess(text):
    # Convert half-width characters to full-width (zenkaku), then segment
    # the text into words with KyTea and join them with spaces.
    return " ".join(kytea.getWS(zenhan.h2z(text)))

from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('ku-accms/bert-base-japanese-ssuw')
model = BertModel.from_pretrained('ku-accms/bert-base-japanese-ssuw')

# Apply the zenkaku conversion and KyTea segmentation before tokenization.
text = "京都大学で自然言語処理を専攻する。"
encoded_input = tokenizer(preprocess(text), return_tensors='pt')
output = model(**encoded_input)
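
The output contains the final hidden states, from which token-level or sentence-level features can be taken. Below is a minimal sketch; the mean-pooling step is an illustrative choice, not something prescribed by this model:

# output.last_hidden_state has shape (batch_size, sequence_length, hidden_size)
token_embeddings = output.last_hidden_state

# Illustrative mask-aware mean pooling to obtain one vector per sentence.
attention_mask = encoded_input['attention_mask'].unsqueeze(-1)
sentence_embedding = (token_embeddings * attention_mask).sum(dim=1) / attention_mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])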

Training data

We used a Japanese Wikipedia dump (as of 2023-01-01, 3.3 GB).

Training procedure

We first segmented the texts into words with KyTea and then tokenized the words into subwords by WordPiece with a vocabulary size of 32,000. We pre-trained the BERT model using the transformers library. The training took about eight days on four NVIDIA A100-SXM4-80GB GPUs.
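
As an illustration, a WordPiece vocabulary of this kind can be built with the Hugging Face tokenizers library. The following is a minimal sketch under an assumed input file name, not the exact script used for this model:

from tokenizers import BertWordPieceTokenizer

# "segmented_wiki.txt" is a hypothetical file of zenkaku-normalized, KyTea-segmented
# sentences, one per line, with words separated by spaces.
# handle_chinese_chars=False keeps the pre-segmented words intact instead of
# splitting every CJK character into its own token.
tokenizer = BertWordPieceTokenizer(lowercase=False, handle_chinese_chars=False)
tokenizer.train(files=["segmented_wiki.txt"], vocab_size=32000)
tokenizer.save_model(".")  # writes vocab.txt for use with BertTokenizer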

The following hyperparameters were used for the pre-training.

  • learning_rate: 2e-4
  • weight_decay: 1e-2
  • per_device_train_batch_size: 80
  • num_devices: 4
  • gradient_accumulation_steps: 3
  • total_train_batch_size: 960
  • max_seq_length: 512
  • optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-06
  • lr_scheduler_type: linear schedule with warmup
  • training_steps: 500,000
  • warmup_steps: 10,000
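
For reference, these settings map roughly onto the transformers TrainingArguments shown below. This is an illustrative sketch: the output directory is a placeholder, and it is not the authors' actual training script.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-japanese-ssuw",   # hypothetical output directory
    learning_rate=2e-4,
    weight_decay=1e-2,
    per_device_train_batch_size=80,
    gradient_accumulation_steps=3,          # 80 per device x 4 GPUs x 3 = 960 total batch size
    max_steps=500_000,
    warmup_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
)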
