dmayhem93 committed on
Commit e736571
1 Parent(s): 2fc0ec9

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -53,7 +53,7 @@ tokens = model.generate(
  max_new_tokens=64,
  temperature=0.7,
  do_sample=True,
- StoppingCriteriaList([StopOnTokens()])
+ stopping_criteria=StoppingCriteriaList([StopOnTokens()])
  )
  print(tokenizer.decode(tokens[0], skip_special_tokens=True))
  ```
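For context, the fix matters because `generate` only accepts a `StoppingCriteriaList` through the `stopping_criteria` keyword; passed bare, the original README snippet would not run as written. Below is a minimal sketch of the surrounding usage with the corrected keyword argument. The checkpoint name, prompt, and the `StopOnTokens` implementation (including its stop-token IDs) are illustrative assumptions, not taken from this commit; only the `stopping_criteria=StoppingCriteriaList([StopOnTokens()])` usage reflects the change shown above.

```python
# Sketch of the corrected generation snippet. Placeholder values are marked below.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)


class StopOnTokens(StoppingCriteria):
    """Stop generation once the most recent token is one of the stop tokens."""

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [0, 1]  # placeholder stop-token IDs; use the model's actual stop tokens
        return input_ids[0][-1].item() in stop_ids


model_id = "your-org/your-causal-lm"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.7,
    do_sample=True,
    # The commit's fix: pass the criteria via the stopping_criteria keyword.
    stopping_criteria=StoppingCriteriaList([StopOnTokens()]),
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```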