Text Generation · Transformers · PyTorch · English · gpt_neox · causal-lm · Inference Endpoints · text-generation-inference
dmayhem93 committed
Commit 2fc0ec9
1 Parent(s): c96b952

Update README.md

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -45,13 +45,16 @@ system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
 - StableLM will refuse to participate in anything that could harm a human.
 """
 
-prompt = f"{system_prompt}<|USER|>What's your mood today<|ASSISTANT|>?"
+prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"
 
 inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
 tokens = model.generate(
   **inputs,
   max_new_tokens=64,
-  stopping_criteria=StoppingCriteriaList([StopOnTokens()]))
+  temperature=0.7,
+  do_sample=True,
+  stopping_criteria=StoppingCriteriaList([StopOnTokens()])
+)
 print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 ```
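For context, here is a minimal, self-contained sketch of how the snippet reads after this commit. Everything outside the hunk is reconstructed as an assumption for illustration: the `StopOnTokens` helper, its stop-token ids, the model loading code, and the checkpoint name `stabilityai/stablelm-tuned-alpha-7b` do not appear in this diff, and `system_prompt` is truncated to the single rule visible above.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)

class StopOnTokens(StoppingCriteria):
    """Stop generation once the model emits a stop token.

    The ids below are assumed from the surrounding model card,
    not from this hunk.
    """
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [50278, 50279, 50277, 1, 0]
        return input_ids[0][-1] in stop_ids

# Checkpoint name is an assumption; the hunk does not show the loading code.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
model.half().cuda()

# Truncated to the one rule visible in this hunk.
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM will refuse to participate in anything that could harm a human.
"""

# The commit moves the "?" inside the <|USER|> turn, so the question mark
# is part of the user's message instead of leaking into the assistant turn.
prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.7,  # added by this commit: softens the output distribution
    do_sample=True,   # added by this commit: sample instead of greedy decoding
    stopping_criteria=StoppingCriteriaList([StopOnTokens()]),
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

Note that `generate` decodes greedily and ignores `temperature` unless `do_sample=True` is set, so the two new arguments only take effect together; that is presumably why the commit adds them as a pair.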