
japanese-large-lm-3.6b-instruction-sft

This repository provides a 3.6B-parameter Japanese language model, trained and instruction-tuned by LINE Corporation.

For a detailed description and experiments, please see "Instruction Tuningにより対話性能を向上させた3.6B日本語言語モデルを公開します" (a blog post in Japanese).

How to use

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# use_fast=False: the model ships a sentencepiece tokenizer (see Tokenization below).
tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b-instruction-sft", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-3.6b-instruction-sft")

# device=0 places the model on the first GPU; use device=-1 for CPU.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)

# Prompts follow the "ユーザー: ...\nシステム: " format used for instruction tuning.
input_text = """四国の県名を全て列挙してください。"""
text = generator(
    f"ユーザー: {input_text}\nシステム: ",
    max_length=256,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
    top_k=0,                 # 0 disables top-k filtering
    repetition_penalty=1.1,
    num_beams=1,
    pad_token_id=tokenizer.pad_token_id,
    num_return_sequences=1,
)
print(text)
# [{'generated_text': 'ユーザー: 四国の県名を全て列挙してください。\nシステム:  高知県、徳島県、香川県、愛媛県'}]
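
The published weights are FP16, so loading in half precision roughly halves memory use compared to the default fp32 load. A minimal sketch, assuming a CUDA device is available; torch_dtype is a standard transformers argument, and everything else mirrors the snippet above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained(
    "line-corporation/japanese-large-lm-3.6b-instruction-sft", use_fast=False
)
# torch_dtype=torch.float16 keeps the weights in the precision they were
# published in, instead of upcasting them to fp32 at load time.
model = AutoModelForCausalLM.from_pretrained(
    "line-corporation/japanese-large-lm-3.6b-instruction-sft",
    torch_dtype=torch.float16,
)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)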

Tokenization

We use a sentencepiece tokenizer with a unigram language model and byte fallback. We do not apply pre-tokenization with a Japanese tokenizer, so users can feed raw sentences directly into the tokenizer.
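
Because no pre-tokenization is applied, a raw Japanese sentence can be passed straight to the tokenizer. A minimal sketch (the example sentence and its subword split are illustrative, not from the original card):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "line-corporation/japanese-large-lm-3.6b-instruction-sft", use_fast=False
)
# No MeCab-style word segmentation is needed before calling the tokenizer.
print(tokenizer.tokenize("香川県の名物を教えてください。"))
# Prints a list of sentencepiece pieces; characters outside the vocabulary
# fall back to byte tokens.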

License

Apache License, Version 2.0
