Text Generation
Transformers
Safetensors
qwen
custom_code

This script fine-tunes and tests the Qwen 7B model, giving it role-playing capability.

Project link: https://github.com/LC1332/Chat-Haruhi-Suzumiya

  • The 118K training samples were collected by 李鲁鲁.

  • The model was trained by 豆角.

  • The Qwen inference code was written by 米唯实 and integrated into ChatHaruhi; he currently maintains this model and handles bug fixes.

  • 李鲁鲁 wrote the prompt-assembly functions inside ChatHaruhi.

A Harry Potter test can be found at https://github.com/LC1332/Chat-Haruhi-Suzumiya/blob/main/notebook/Harry_Potter_test_on_Qwen7B.ipynb

Usage

Load the model and tokenizer

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("silk-road/ChatHaruhi_RolePlaying_qwen_7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("silk-road/ChatHaruhi_RolePlaying_qwen_7b", device_map="auto", trust_remote_code=True)
model = model.eval()
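
If GPU memory is limited, the upstream Qwen-7B loading code accepts half-precision flags. A minimal sketch, assuming this fine-tuned checkpoint keeps those upstream flags (not verified for this repo):

from transformers import AutoModelForCausalLM

# sketch: bf16=True is an upstream Qwen-7B loader flag (assumed to work here);
# use fp16=True instead on GPUs without bfloat16 support
model = AutoModelForCausalLM.from_pretrained(
    "silk-road/ChatHaruhi_RolePlaying_qwen_7b",
    device_map="auto",
    trust_remote_code=True,
    bf16=True,
)
model = model.eval()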

For details, see this notebook: https://github.com/LC1332/Chat-Haruhi-Suzumiya/blob/main/notebook/ChatHaruhi_x_Qwen7B.ipynb

from ChatHaruhi import ChatHaruhi

# build a chatbot for the built-in role 'haruhi'; max_len_story caps the retrieved story context
chatbot = ChatHaruhi(role_name='haruhi', max_len_story=1000)

# assemble the role-play prompt for a user turn spoken by 阿虚
prompt = chatbot.generate_prompt(role='阿虚', text='我看新一年的棒球比赛要开始了!我们要去参加吗?')

# history is left empty because ChatHaruhi injects prior turns into the prompt itself
response, history = model.chat(tokenizer, prompt, history=[])
print(response)

# store the reply in ChatHaruhi's dialogue memory so later prompts include it
chatbot.append_response(response)
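
Because ChatHaruhi keeps the dialogue history itself (via append_response), a follow-up turn simply repeats the same three calls. A minimal sketch of a second turn; the user line below is an illustrative placeholder, not from the original card:

# second user turn, again spoken by 阿虚 (illustrative text)
prompt = chatbot.generate_prompt(role='阿虚', text='那放学之后我们在操场集合吧。')
response, history = model.chat(tokenizer, prompt, history=[])  # history stays empty; prior turns live inside the prompt
print(response)
chatbot.append_response(response)  # remember this reply for the next turn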

Currently supported role-loading formats:

  • role_name

  • role_from_hf

  • role_from_jsonl
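
A minimal sketch of the three loading styles; the Hub repo id and the jsonl path below are placeholders for illustration, not artifacts shipped with this model:

from ChatHaruhi import ChatHaruhi

# 1. built-in role name, as used above
chatbot = ChatHaruhi(role_name='haruhi', max_len_story=1000)

# 2. role packaged on the Hugging Face Hub (placeholder repo id)
chatbot = ChatHaruhi(role_from_hf='silk-road/your-role-repo', max_len_story=1000)

# 3. role stored locally as a jsonl file (placeholder path)
chatbot = ChatHaruhi(role_from_jsonl='./your_role.jsonl', max_len_story=1000)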
