Commit ebce2f4 (parent 41e3be6) by hiyouga: Update README.md

---
license: apache-2.0
---

This checkpoint is trained with https://github.com/hiyouga/LLaMA-Efficient-Tuning.

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then apply this LoRA checkpoint on top.
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, "hiyouga/baichuan-7b-sft")
model = model.merge_and_unload()  # merge the LoRA weights into the base model

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"

# Wrap the query in the <human>/<bot> prompt template used during SFT.
input_ids = tokenizer(["<human>:{}\n<bot>:".format(query)], return_tensors="pt")["input_ids"]
input_ids = input_ids.to("cuda")
generate_ids = model.generate(input_ids)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
```
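The snippet above hard-codes the `<human>:`/`<bot>:` prompt template inline. If you call the model more than once, it can help to factor the template into small helpers; this is a minimal sketch (the function names `build_prompt` and `extract_response` are illustrative, not part of this repo):

```python
def build_prompt(query: str) -> str:
    # Format a user query with the <human>/<bot> template used at SFT time.
    return "<human>:{}\n<bot>:".format(query)

def extract_response(decoded: str, prompt: str) -> str:
    # batch_decode returns the prompt followed by the generated reply;
    # strip the echoed prompt so only the reply remains.
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    return decoded.strip()
```

With these, the generation step becomes `extract_response(tokenizer.batch_decode(generate_ids)[0], prompt)` for each query, keeping the template in one place.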