pvduy committed
Commit eef70e6
Parent: 6409fe5

Update README.md

Files changed (1): README.md +44 -0
README.md CHANGED
@@ -81,6 +81,50 @@ model-index:
  This instruct tune demonstrates state-of-the-art performance (compared to models of similar size) on the MultiPL-E benchmark across multiple programming languages, evaluated with [BigCode's Evaluation Harness](https://github.com/bigcode-project/bigcode-evaluation-harness/tree/main), and on the code portions of [MT Bench](https://klu.ai/glossary/mt-bench-eval).
 
+
+ ## Usage
+ Here's how you can use the model:
+
+ ```python
+ # pip install -U transformers
+ # pip install accelerate
+
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the tokenizer and the model in bfloat16.
+ tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-instruct-3b", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("stabilityai/stable-code-instruct-3b", torch_dtype=torch.bfloat16, trust_remote_code=True)
+ model.eval()
+ model = model.cuda()  # move to GPU; bfloat16 inference here assumes CUDA is available
+
+ messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful and polite assistant",
+     },
+     {
+         "role": "user",
+         "content": "Write a simple website in HTML. When a user clicks the button, it shows a random joke from a list of 4 jokes."
+     },
+ ]
+
+ # Render the conversation with the model's built-in chat template,
+ # keeping it as text so it can be tokenized in the next step.
+ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+
+ inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
+
+ tokens = model.generate(
+     **inputs,
+     max_new_tokens=1024,
+     temperature=0.5,
+     top_p=0.95,
+     top_k=100,
+     do_sample=True,
+     use_cache=True
+ )
+
+ # Decode only the newly generated tokens, slicing off the prompt.
+ output = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=False)[0]
+ print(output)
+ ```
+
+
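+ Because the model is chat-tuned, a follow-up turn reuses the same pattern: decode the reply, append it and the next user message to `messages`, re-apply the chat template, and generate again. A minimal sketch reusing `model`, `tokenizer`, `messages`, `tokens`, and `inputs` from the block above (the follow-up request is only an example):
+
+ ```python
+ # Decode the reply without special tokens before feeding it back in.
+ assistant_reply = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=True)[0]
+ messages.append({"role": "assistant", "content": assistant_reply})
+ messages.append({"role": "user", "content": "Now add a second button that shuffles the order of the jokes."})
+
+ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
+
+ tokens = model.generate(
+     **inputs,
+     max_new_tokens=1024,
+     temperature=0.5,
+     top_p=0.95,
+     top_k=100,
+     do_sample=True,
+     use_cache=True
+ )
+ print(tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=True)[0])
+ ```
+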
  ## How to Cite
 
  ```bibtex