hiyouga committed
Commit 8224b6c
Parent: e65fec5

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED

@@ -22,7 +22,6 @@ Usage:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
 
-
 tokenizer = AutoTokenizer.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True).cuda()
 streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
@@ -36,6 +35,7 @@ generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
 ```
 
 You could also alternatively launch a CLI demo by using the script in https://github.com/hiyouga/LLaMA-Efficient-Tuning
+
 ```bash
 python src/cli_demo.py --model_name_or_path hiyouga/baichuan-7b-sft
 ```
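The README snippet in this diff passes a `TextStreamer` to `model.generate()` so decoded text is emitted incrementally instead of all at once, with the prompt and special tokens suppressed. As a rough local illustration of that streaming idea, here is a minimal sketch using hypothetical `ToyTokenizer` and `ToyStreamer` stand-ins (these are not the `transformers` API and require no model download):

```python
# Hypothetical toy classes sketching the skip_prompt / skip_special_tokens
# behavior of TextStreamer; not the real transformers implementation.

class ToyTokenizer:
    """Maps a tiny fixed vocabulary of token ids to text pieces."""
    def __init__(self):
        self.vocab = {0: "<s>", 1: "Hello", 2: " world", 3: "</s>"}

    def decode(self, ids, skip_special_tokens=False):
        pieces = [self.vocab[i] for i in ids]
        if skip_special_tokens:
            # Drop markers like <s> and </s>, as skip_special_tokens=True would.
            pieces = [p for p in pieces if not (p.startswith("<") and p.endswith(">"))]
        return "".join(pieces)

class ToyStreamer:
    """Receives token ids one at a time and decodes them incrementally,
    skipping the first prompt_len tokens like TextStreamer(skip_prompt=True)."""
    def __init__(self, tokenizer, prompt_len, skip_special_tokens=True):
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len
        self.skip_special_tokens = skip_special_tokens
        self.seen = 0
        self.chunks = []

    def put(self, token_ids):
        for tid in token_ids:
            self.seen += 1
            if self.seen <= self.prompt_len:
                continue  # prompt tokens are not streamed back to the user
            self.chunks.append(
                self.tokenizer.decode([tid], skip_special_tokens=self.skip_special_tokens)
            )

    def text(self):
        return "".join(self.chunks)

tok = ToyTokenizer()
prompt = [0, 1]                    # "<s>Hello" as the prompt tokens
streamer = ToyStreamer(tok, prompt_len=len(prompt))
for tid in prompt + [2, 3]:        # the "model" emits " world</s>" after the prompt
    streamer.put([tid])
print(streamer.text())             # prints " world"
```

The real `TextStreamer` works the same way in spirit: `generate()` feeds it token ids as they are sampled, and it prints the decoded continuation as it arrives.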
 