Text Generation
Transformers
PyTorch
English
gpt_neox
causal-lm
Inference Endpoints
text-generation-inference
vvsotnikov committed on
Commit
1bf767b
1 Parent(s): 35ba507

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -24,9 +24,10 @@ datasets:
 Get started chatting with `StableLM-Tuned-Alpha 16-bit` by using the following code snippet:
 
 ```python
+import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
 tokenizer = AutoTokenizer.from_pretrained("vvsotnikov/stablelm-tuned-alpha-7b-16bit")
-model = AutoModelForCausalLM.from_pretrained("vvsotnikov/stablelm-tuned-alpha-7b-16bit")
+model = AutoModelForCausalLM.from_pretrained("vvsotnikov/stablelm-tuned-alpha-7b-16bit", torch_dtype=torch.float16)
 model.cuda()
 class StopOnTokens(StoppingCriteria):
     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
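The point of passing `torch_dtype=torch.float16` is that `from_pretrained` otherwise loads weights in float32, doubling the memory needed for a model that was published as a 16-bit checkpoint. A back-of-the-envelope sketch of the savings (pure Python; the 7B parameter count is an approximation, not an exact figure from this repo):

```python
# Approximate GPU memory for the weights alone of a ~7B-parameter model.
params = 7_000_000_000           # rough parameter count (assumption)
fp32_gb = params * 4 / 1024**3   # float32: 4 bytes per parameter
fp16_gb = params * 2 / 1024**3   # float16: 2 bytes per parameter
print(f"fp32: {fp32_gb:.1f} GB, fp16: {fp16_gb:.1f} GB")
```

Loading in half precision cuts the weight footprint in half, which is what makes the 7B model fit on common 16 GB GPUs.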