SalmanFaroz committed
Commit
35e0587
1 Parent(s): 48c1539

Update README.md

Files changed (1)
  1. README.md +10 -2
README.md CHANGED
@@ -301,11 +301,19 @@ llm = Llama(
   n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
 )
 
+prompt = "Tell me about AI"
+
 output = llm(
-  "what is life?", # Prompt
-  max_tokens=10, # Generate up to 512 tokens
+  f'''[INST] <<SYS>>
+You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
+<</SYS>>
+{prompt}[/INST]
+
+''', # Prompt
+  max_tokens=100, # Generate up to 512 tokens
   stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
   echo=True, # Whether to echo the prompt
   temperature=0.001
 )
+
 ```
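
For context, the updated README snippet assembles into a complete script roughly as follows. This is a minimal sketch assuming llama-cpp-python is installed; the `model_path` filename below is a placeholder (not from this commit) and should point at the actual GGUF file from this repository, and the system prompt is abbreviated from the full text shown in the diff above.

```python
from llama_cpp import Llama

# Placeholder path: replace with the GGUF file downloaded from this repository.
llm = Llama(
    model_path="./model.Q4_K_M.gguf",
    n_gpu_layers=35,  # Number of layers to offload to GPU, if GPU acceleration is available
)

prompt = "Tell me about AI"

output = llm(
    f'''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.
<</SYS>>
{prompt}[/INST]
''',
    max_tokens=100,   # Generate up to 100 tokens
    stop=["</s>"],    # Example stop token - check the model's prompt format before relying on it
    echo=True,        # Echo the prompt in the returned text
    temperature=0.001,
)

# llama-cpp-python returns a completion dict; the generated text lives under choices[0].
print(output["choices"][0]["text"])
```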