dododododo committed
Commit 15753f7
Parent: a393743

Update README.md

Files changed (1)
  1. README.md +23 -1
README.md CHANGED
@@ -30,4 +30,26 @@ We set the hyperparameters as follows:
 
 ![Alt text](safe.png)
 ## Performance on General Benchmark
- ![Alt text](general.png)
+ ![Alt text](general.png)
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the tokenizer and model from a local or Hub checkpoint path.
+ model_path = '<your-model-path>'
+ tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_path,
+     device_map="auto",
+     torch_dtype='auto'
+ ).eval()
+
+ # Example chat in Chinese: the system prompt is "You are a helpful AI assistant."
+ # and the user message is "Hello".
+ messages = [
+     {"role": "system", "content": "你是一个有用的人工智能助手。"},
+     {"role": "user", "content": "你好"},
+ ]
+
+ # Apply the model's chat template, generate a short reply, and decode only the new tokens.
+ input_ids = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, return_tensors='pt')
+ output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=20)
+ response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
+ print(response)
+ ```
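
For interactive use, the same pipeline can stream the reply to stdout as it is generated rather than decoding it at the end. A minimal sketch, assuming the same `<your-model-path>` placeholder checkpoint and using the stock `TextStreamer` helper from `transformers`; the prompt and `max_new_tokens` value are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_path = '<your-model-path>'  # placeholder: point this at the downloaded checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype='auto').eval()

messages = [
    {"role": "system", "content": "你是一个有用的人工智能助手。"},  # "You are a helpful AI assistant."
    {"role": "user", "content": "你好"},                            # "Hello"
]
input_ids = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, return_tensors='pt')

# TextStreamer prints decoded tokens as soon as they are produced,
# so the reply appears incrementally instead of all at once.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(input_ids.to('cuda'), streamer=streamer, max_new_tokens=128)
```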