lvkaokao macadeliccc committed on
Commit
8394403
1 Parent(s): 3995e9a

Added demo code according to the prompt format (#5)


- Added demo code according to the prompt format (add8a485bc64cab78ea36d93003a4fdc518a5f4a)


Co-authored-by: tim <macadeliccc@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +29 -4
README.md CHANGED
@@ -52,11 +52,36 @@ The following hyperparameters were used during training:
 
 ## Inference with transformers
 
-```shell
+```python
 import transformers
-model = transformers.AutoModelForCausalLM.from_pretrained(
-'Intel/neural-chat-7b-v3-1'
-)
+
+
+model_name = 'Intel/neural-chat-7b-v3-1'
+model = transformers.AutoModelForCausalLM.from_pretrained(model_name)
+tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
+
+
+def generate_response(system_input, user_input):
+
+    # Format the input using the provided template
+    prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n"
+
+    # Tokenize and encode the prompt
+    inputs = tokenizer.encode(prompt, return_tensors="pt")
+
+    # Generate a response
+    outputs = model.generate(inputs, max_length=1000, num_return_sequences=1)
+    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+    # Extract only the assistant's response
+    return response.split("### Assistant:\n")[-1]
+
+
+# Example usage
+system_input = "You are a chatbot developed by Intel. Please answer all questions to the best of your ability."
+user_input = "How does the neural-chat-7b-v3-1 model work?"
+response = generate_response(system_input, user_input)
+print(response)
 ```
 
 ## Ethical Considerations and Limitations
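The commit's prompt format can be verified without downloading the model: the sketch below isolates the f-string template from the diff into a standalone helper (the `build_prompt` name is ours, not part of the README code) so the `### System:` / `### User:` / `### Assistant:` layout can be inspected directly.

```python
# Standalone check of the prompt template used in the demo code above.
# `build_prompt` is a hypothetical helper name; the README inlines this f-string.
def build_prompt(system_input: str, user_input: str) -> str:
    """Format one system/user turn in the neural-chat-7b-v3-1 prompt layout."""
    return f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n"


prompt = build_prompt(
    "You are a chatbot developed by Intel.",
    "How does the neural-chat-7b-v3-1 model work?",
)
print(prompt)
```

Because the template ends with `### Assistant:\n`, the demo's `response.split("### Assistant:\n")[-1]` reliably strips everything up to and including the final assistant marker from the decoded output.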