---
license: apache-2.0
language:
- zh
---

An instruction-tuned LoRA model of https://huggingface.co/baichuan-inc/baichuan-7B, targeting Chinese-language psychological counseling in the style of cognitive behavioral therapy (CBT).

Training framework: https://github.com/hiyouga/LLaMA-Factory
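
The Usage snippet below loads `Hongbin37/CBT-LLM` directly with `AutoModelForCausalLM`, so no extra adapter handling should be needed. Purely as an illustration of the alternative path for LoRA checkpoints produced by LLaMA-Factory, the sketch below shows how a standalone adapter could be applied to the base baichuan-7B model with PEFT; the adapter path is a hypothetical placeholder, not a file shipped with this repository.

```python
# Illustrative sketch only: apply a standalone LoRA adapter to the base model with PEFT.
# "path/to/lora_adapter" is a hypothetical placeholder for an adapter directory
# exported by LLaMA-Factory; it is not part of this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/baichuan-7B", trust_remote_code=True
).cuda()

model = PeftModel.from_pretrained(base_model, "path/to/lora_adapter")
model = model.merge_and_unload()  # optionally merge the adapter into the base weights
```
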
Please follow the baichuan-7B License to use this model.

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Hongbin37/CBT-LLM", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Hongbin37/CBT-LLM", trust_remote_code=True).cuda()
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Example user query (Chinese): the user is afraid of being criticized over small mistakes,
# only accepts advice after struggling on their own, has slept poorly for a month and feels
# listless every day, and does not know how to change.
query = "为什么生怕一点点事情做不好被人批评?做事情,别人告诉了我方法,但身体不会按方法来,非要折腾几遍,才发现别人告诉的方法和口诀,是最高效的;而且最近一个月,睡眠不好,整个白天都是无精打采的,每天活的很丧,知道自己的问题出在哪里,不晓得怎么去做出改变"

# Prompt template (Chinese): "You are an experienced psychological counselor specializing in
# cognitive behavioral therapy; answer the following question as a counselor."
template = (
    "你是一名经验丰富的心理咨询师,专长于认知行为疗法, 以心理咨询师的身份回答以下问题。\n"
    "Human: {}\nAssistant: "
)

inputs = tokenizer([template.format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
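
The example above streams the reply to stdout through `TextStreamer`. If you also need the answer as a Python string (for example, to return it from a service), a minimal follow-up, continuing from the variables defined above, is to decode only the newly generated tokens:

```python
# Drop the prompt tokens from the output and decode the model's reply.
prompt_length = inputs["input_ids"].shape[-1]
response = tokenizer.batch_decode(
    generate_ids[:, prompt_length:], skip_special_tokens=True
)[0]
print(response)
```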