License

This model is released under a non-commercial license.

Chat Vector

Tora-7B-v0.2 = NTQAI/chatntq-ja-7b-v1.0 + (NousResearch/Hermes-2-Pro-Mistral-7B - mistralai/Mistral-7B-v0.1)
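Concretely, for every weight tensor except the embedding, the LM head, and the layer norms (the skip_layers in the code below), the chat vector is the difference between the instruction-tuned and base Mistral weights, added element-wise to the Japanese model's weights:

W_Tora = W_chatntq-ja + (W_Hermes-2-Pro - W_Mistral-7B-v0.1)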

Implementation

The model was created with the following code, based on the implementation by @jovyan.

import torch
from transformers import AutoModelForCausalLM


def build_chat_vector_model(
    base_model_name,
    inst_model_name,
    target_model_name,
    skip_layers,
    ):

    base_model = AutoModelForCausalLM.from_pretrained(
        base_model_name,
        torch_dtype=torch.bfloat16,
        device_map="cpu",
    )
    inst_model = AutoModelForCausalLM.from_pretrained(
        inst_model_name,
        torch_dtype=torch.bfloat16,
        device_map="cpu",
    )

    target_model = AutoModelForCausalLM.from_pretrained(
        target_model_name,
        torch_dtype=torch.bfloat16,
        device_map="cuda",
    )

    # English base model (inspect parameter names and shapes)
    for k, v in base_model.state_dict().items():
        print(k, v.shape)

    # Japanese continually pre-trained model (inspect parameter names and shapes)
    for k, v in target_model.state_dict().items():
        print(k, v.shape)

    # layers excluded from the merge are passed in via the skip_layers argument

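    # add the chat vector (instruction-tuned minus base weights) to each target weight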
    for k, v in target_model.state_dict().items():
        # also exclude the layer norms
        if (k in skip_layers) or ("layernorm" in k):
            continue
        chat_vector = inst_model.state_dict()[k] - base_model.state_dict()[k]
        new_v = v + chat_vector.to(v.device)
        v.copy_(new_v)

    target_model.save_pretrained("./chat_model")

    return


if __name__ == '__main__':

    base_model_name = "mistralai/Mistral-7B-v0.1"
    inst_model_name = "NousResearch/Hermes-2-Pro-Mistral-7B"
    target_model_name = "NTQAI/chatntq-ja-7b-v1.0"

    skip_layers = ["model.embed_tokens.weight", "lm_head.weight"]

    build_chat_vector_model(
        base_model_name=base_model_name,
        inst_model_name=inst_model_name,
        target_model_name=target_model_name,
        skip_layers=skip_layers
    )
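
The merged weights can then be loaded like any other Hugging Face checkpoint. Below is a minimal inference sketch; it assumes the tokenizer is taken from NTQAI/chatntq-ja-7b-v1.0 (save_pretrained above only writes the model weights), and the [INST] ... [/INST] prompt template is an illustrative assumption, not a format prescribed by this card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# assumption: the tokenizer was not saved alongside the merged weights,
# so it is loaded from the original Japanese model
tokenizer = AutoTokenizer.from_pretrained("NTQAI/chatntq-ja-7b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "./chat_model",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# assumption: a Mistral-style instruction template; adjust to the chat
# format the merged model responds to best
prompt = "[INST] 日本で一番高い山は何ですか？ [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))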

Benchmark (Japanese MT bench)

model          category     score  ver
Tora-7B-v0.2   Writing        3.8  single-turn
Tora-7B-v0.2   Roleplay       7.1  single-turn
Tora-7B-v0.2   Reasoning      6.3  single-turn
Tora-7B-v0.2   Math           3.0  single-turn
Tora-7B-v0.2   Coding         2.2  single-turn
Tora-7B-v0.2   Extraction     6.6  single-turn
Tora-7B-v0.2   STEM           7.2  single-turn
Tora-7B-v0.2   Humanities     8.2  single-turn


Acknowledgements

We would like to express our deep gratitude to @jovyan for writing the Chat Vector article.

References

Chat Vectorを使って日本語LLMをチャットモデルに改造する (Turning a Japanese LLM into a chat model with Chat Vector)
