
japanese-gpt2-medium-unidic

This is a medium-sized Japanese GPT-2 model that uses a BERT-like tokenizer.

A reversed version is published here.

How to use

The model depends on PyTorch, fugashi with unidic-lite, and Hugging Face Transformers.

pip install torch torchvision torchaudio
pip install fugashi[unidic-lite]
pip install transformers

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained('okazaki-lab/japanese-gpt2-medium-unidic')
model = AutoModelForCausalLM.from_pretrained('okazaki-lab/japanese-gpt2-medium-unidic')

text = '今日はいい天気なので、'

bos = tokenizer.convert_tokens_to_ids(['[BOS]']) # [32768]
input_ids = bos + tokenizer.encode(text)[1:-1] # strip the [CLS] and [SEP] added by the BERT-style tokenizer
input_ids = torch.tensor(input_ids).unsqueeze(0)
output = model.generate(
    input_ids,
    do_sample=True,
    max_new_tokens=30,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.0,
    num_return_sequences=1,
    pad_token_id=0,
    eos_token_id=32769,
)[0]

print(tokenizer.decode(output))
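
The decoded string includes the special tokens such as [BOS] and [EOS]. If you prefer plain text, tokenizer.decode also accepts skip_special_tokens=True (standard Transformers behavior):

# Optional: drop [BOS], [EOS], and padding from the decoded output
print(tokenizer.decode(output, skip_special_tokens=True))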

Model architecture

Transformer-based Language Model

  • Layers: 24
  • Heads: 16
  • Dimensions of hidden states: 1024
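
These numbers can be checked against the published configuration. The sketch below assumes a standard GPT2Config, so the attribute names n_layer, n_head, and n_embd are assumptions based on that convention rather than taken from this card:

from transformers import AutoConfig

config = AutoConfig.from_pretrained('okazaki-lab/japanese-gpt2-medium-unidic')
print(config.n_layer)  # expected: 24
print(config.n_head)   # expected: 16
print(config.n_embd)   # expected: 1024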

Training

We used a codebase provided by rinna Co., Ltd. for training.

The model was trained on Japanese CC-100 and Japanese Wikipedia (2022/01/31). We employed 8 A100 GPUs for 17 days. The perplexity on the validation set is 9.80.
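
As a rough illustration of how a perplexity figure like this is computed, the sketch below takes exp of the mean token-level cross-entropy on a single placeholder sentence; the sentence is not the actual validation data, so the value will not match 9.80:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('okazaki-lab/japanese-gpt2-medium-unidic')
model = AutoModelForCausalLM.from_pretrained('okazaki-lab/japanese-gpt2-medium-unidic')

text = '今日はいい天気なので、散歩に出かけた。'  # placeholder sentence, not the validation set
input_ids = torch.tensor(tokenizer.encode(text)[1:-1]).unsqueeze(0)  # strip [CLS]/[SEP]
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss  # mean cross-entropy per predicted token
print(torch.exp(loss).item())  # perplexity = exp(cross-entropy)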

Tokenization

Our tokenizer is based on the one provided by Tohoku NLP Group. Texts are first segmented with MeCab and then split into subwords with WordPiece.

The vocabulary size is 32771 (32768 original tokens + 2 special tokens + 1 unused token).
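
The special tokens and vocabulary size can be inspected directly. The IDs below come from the usage example above ([BOS] = 32768, [EOS] = 32769); that len(tokenizer) reports the full vocabulary including added tokens is an assumption:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('okazaki-lab/japanese-gpt2-medium-unidic')
print(tokenizer.tokenize('今日はいい天気なので、'))            # MeCab segmentation followed by WordPiece subwords
print(tokenizer.convert_tokens_to_ids(['[BOS]', '[EOS]']))  # expected: [32768, 32769]
print(len(tokenizer))                                       # expected: 32771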

License

Creative Commons Attribution-ShareAlike 4.0

Copyright (c) 2021, Tohoku University

Copyright (c) 2023, Tokyo Institute of Technology
