
This model is finetuned from Qwen-7B-Chat.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("huskyhong/noname-ai-v1", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("huskyhong/noname-ai-v1", device_map="auto", trust_remote_code=True).eval()  # load the model on GPU
# model = AutoModelForCausalLM.from_pretrained("huskyhong/noname-ai-v1", device_map="cpu", trust_remote_code=True).eval()  # load the model on CPU
model.generation_config = GenerationConfig.from_pretrained("huskyhong/noname-ai-v1", trust_remote_code=True)  # generation length, top_p, and other hyperparameters can be customized

while True:
    print("请输入技能效果:")  # "Please enter the skill effect:"
    prompt = "请帮我编写一个技能,技能效果如下:" + input()  # "Please write a skill for me; the skill effect is as follows:"
    response, history = model.chat(tokenizer, prompt, history=[])
    print(response)
```
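The comment above notes that generation length, top_p, and other hyperparameters can be customized. A minimal sketch of building a `GenerationConfig` with custom values (the specific parameter values here are illustrative assumptions, not recommended settings for this model):

```python
from transformers import GenerationConfig

# Construct a generation config with custom sampling hyperparameters;
# assign it to model.generation_config after loading the model.
custom_config = GenerationConfig(
    max_new_tokens=512,  # cap on generated length
    do_sample=True,      # enable sampling instead of greedy decoding
    top_p=0.8,           # nucleus-sampling threshold
    temperature=0.7,     # sampling temperature
)
```

Assigning `model.generation_config = custom_config` replaces the repo's default generation settings for subsequent `model.chat` calls.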
Model size: 7.72B params · Tensor type: BF16 · Format: Safetensors
Note: the serverless Inference API does not yet support model repos that contain custom code, so this model must be loaded locally as shown above.