---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- zh
library_name: transformers
---

This checkpoint was trained with: https://github.com/hiyouga/LLaMA-Efficient-Tuning

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and apply the SFT adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, "hiyouga/baichuan-7b-sft")

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"

# Format the query with the Ziya prompt template (matching --prompt_template ziya below).
inputs = tokenizer(["<human>:{}\n<bot>:".format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
```

Alternatively, you can launch a CLI demo using the script in https://github.com/hiyouga/LLaMA-Efficient-Tuning

```bash
python src/cli_demo.py \
    --model_name_or_path baichuan-inc/baichuan-7B \
    --checkpoint_dir hiyouga/baichuan-7b-sft \
    --prompt_template ziya
```
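
If you want to run inference without a `peft` dependency, you can merge the adapter weights into the base model first. This is a minimal sketch assuming the checkpoint is a LoRA adapter (which `merge_and_unload()` supports); the output directory name is illustrative, not part of this repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = PeftModel.from_pretrained(model, "hiyouga/baichuan-7b-sft")

# Fold the LoRA weights into the base model, returning a plain
# transformers model that no longer needs peft at inference time.
model = model.merge_and_unload()

# "baichuan-7b-sft-merged" is a hypothetical local path.
model.save_pretrained("baichuan-7b-sft-merged")
tokenizer.save_pretrained("baichuan-7b-sft-merged")
```

The merged directory can then be loaded directly with `AutoModelForCausalLM.from_pretrained`, at the cost of storing a full copy of the model weights.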