rinna/youri-7b-chat


Overview

The model is the instruction-tuned version of rinna/youri-7b. It adopts a chat-style input format.
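
The prompt format, as used in the example under "How to use the model" below, joins speaker-prefixed utterances with newlines and ends with an empty システム (system) turn for the model to complete:

設定: {instruction}
ユーザー: {user input}
システム: 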


Benchmarking

Please refer to rinna's LM benchmark page.


How to use the model

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat")
model = AutoModelForCausalLM.from_pretrained("rinna/youri-7b-chat")

# Move the model to the GPU when one is available.
if torch.cuda.is_available():
    model = model.to("cuda")

# The instruction goes into the 設定 (setting) turn and the request into the
# ユーザー (user) turn. Here the instruction asks the model to translate the
# Japanese input into English.
instruction = "次の日本語を英語に翻訳してください。"
input = "自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。"

context = [
    {
        "speaker": "設定",
        "text": instruction
    },
    {
        "speaker": "ユーザー",
        "text": input
    }
]
# Join the turns into a single newline-separated prompt and leave an open
# システム (system) turn for the model to complete.
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in context
]
prompt = "\n".join(prompt)
prompt = (
    prompt
    + "\n"
    + "システム: "
)
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

# Generate a reply by sampling (do_sample=True, temperature 0.5).
with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""
設定: 次の日本語を英語に翻訳してください。
ユーザー: 自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。
システム:  Learning to solve tasks based on natural language instructions is called instruction tuning.</s>
"""

# Strip the echoed prompt and the trailing </s> before reusing the response.
output = output[len(prompt):-len("</s>")].strip()

# The next user request for the second turn.
input = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"

# Append the model reply and the new request to the conversation history.
context.extend([
    {
        "speaker": "システム",
        "text": output
    },
    {
        "speaker": "ユーザー",
        "text": input
    }
])
# Rebuild the prompt from the extended conversation history.
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in context
]
prompt = "\n".join(prompt)
prompt = (
    prompt
    + "\n"
    + "システム: "
)
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""
設定: 次の日本語を英語に翻訳してください。
ユーザー: 自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。
システム: Learning to solve tasks based on natural language instructions is called instruction tuning.
ユーザー: 大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。
システム:  Large language models (LLMs) are computer language models consisting of a deep artificial neural network with millions to billions of parameters that are trained by self-supervised learning or semi-supervised learning using vast unlabeled text corpora.</s>
"""

Tokenization

The model uses the original llama-2 tokenizer.
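
Since the vocabulary is the unmodified llama-2 one (32,000 entries), Japanese text often decomposes into short subword or byte-level pieces. A quick way to inspect this; the exact split shown is illustrative and may vary by tokenizers version:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat")
print(tokenizer.vocab_size)              # 32000: the original llama-2 vocabulary size
print(tokenizer.tokenize("自然言語処理"))  # typically several short or byte-level pieces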


How to cite

@misc{rinna-youri-7b-chat,
    title = {rinna/youri-7b-chat},
    author = {Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co/rinna/youri-7b-chat}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}

License

The llama2 license

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                              Value
----------------------------------  -----
Avg.                                48.51
AI2 Reasoning Challenge (25-Shot)   51.19
HellaSwag (10-Shot)                 76.09
MMLU (5-Shot)                       46.06
TruthfulQA (0-shot)                 41.17
Winogrande (5-shot)                 75.06
GSM8k (5-shot)                       1.52