japanese-gpt-neox-3.6b

Overview

This repository provides a Japanese GPT-NeoX model with 3.6 billion parameters.

How to use the model

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# use_fast=False is required for this model's sentencepiece tokenizer;
# see the Tokenization section below.
tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b")

if torch.cuda.is_available():
    model = model.to("cuda")

text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

# Sample exactly 100 new tokens (min_new_tokens == max_new_tokens) at temperature 0.8.
with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=100,
        min_new_tokens=100,
        do_sample=True,
        temperature=0.8,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""西田幾多郎は、この「絶対矛盾的自己同一」を「世界の自己同一」と置きかえ、さらに西田哲学を出発点として「絶対無」を「世界の成立」に変え、世界と自己を一つの統一物とみなす哲学として展開する。この世界と自己は絶対矛盾的自己同一として同一の性質を有し、同じ働きをする。西田哲学においては、この世界と自己は矛盾しあうのではなく、同一の性質をもっている。この世界と自己は同一である。絶対"""

Tokenization

The model uses a sentencepiece-based tokenizer.

  • The tokenizer has a vocabulary size of 32,000.
  • It uses sentencepiece's byte fallback feature to decompose unknown text pieces into UTF-8 byte pieces and to avoid producing <UNK> tokens (a short sketch follows this list).
  • sentencepiece's --add_dummy_prefix option was turned off so that a leading whitespace will not be prepended automatically.
      print(tokenizer.tokenize("吾輩は猫である"))
      # ['吾', '輩', 'は', '猫', 'である']
      # instead of ['▁', '吾', '輩', 'は', '猫', 'である'] as in rinna/japanese-gpt-1b
    
  • sentencepiece's --remove_extra_whitespaces option was turned off so that leading, trailing, and duplicate whitespaces are preserved.
      print(tokenizer.tokenize("  吾輩は  猫である   "))
      # ['▁', '▁', '吾', '輩', 'は', '▁', '▁', '猫', 'である', '▁', '▁', '▁']
      # instead of ['▁', '吾', '輩', 'は', '▁猫', 'である'] as in rinna/japanese-gpt-1b
    
  • Don't forget to set use_fast=False to make the above features function correctly.
      good_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False)
      bad_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b")
    
      print(good_tokenizer.decode(good_tokenizer.encode("გამარჯობა  吾輩は  猫である   ")))
      # 'გამარჯობა  吾輩は  猫である   </s>'
      print(bad_tokenizer.decode(bad_tokenizer.encode("გამარჯობა  吾輩は  猫である   ")))
      # 'გამარ[UNK]ობა 吾輩は 猫である </s>'
    
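To illustrate the byte fallback behavior mentioned above, here is a small sketch. The exact output is our assumption and depends on the trained vocabulary, but a character absent from the 32,000-piece vocabulary should decompose into UTF-8 byte pieces rather than an <UNK> token.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False)

# "🦀" (U+1F980) is unlikely to be in the vocabulary, so byte fallback
# should split it into its four UTF-8 bytes, e.g.
# ['<0xF0>', '<0x9F>', '<0xA6>', '<0x80>'] (assumed output).
print(tokenizer.tokenize("🦀"))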

How to cite

@misc{rinna-japanese-gpt-neox-3.6b,
    title = {rinna/japanese-gpt-neox-3.6b},
    author = {Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co/rinna/japanese-gpt-neox-3.6b},
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    url = {https://arxiv.org/abs/2404.01657},
}

License

The MIT license
