Text Generation · Transformers · PyTorch · English · llama · causal-lm · text-generation-inference · Inference Endpoints
jon-tow committed
Commit 9312006
1 Parent(s): 33956a3

revert: use saved path as tokenizer_path

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -33,7 +33,7 @@ Once the delta weights are applied, get started chatting with the model by using
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-vicuna-13b-delta")
+tokenizer = AutoTokenizer.from_pretrained("path/to/stable-vicuna-13b-applied")
 model = AutoModelForCausalLM.from_pretrained("path/to/stable-vicuna-13b-applied")
 model.half().cuda()
 
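
For reference, a minimal sketch of how the post-revert snippet might be used end to end, assuming the delta weights have already been applied and saved to a local directory; the placeholder path is the one from the README, while the prompt string and generation settings below are illustrative assumptions, not part of this commit:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Both the tokenizer and the model are loaded from the locally saved,
# delta-applied checkpoint (the behavior this commit reverts to).
tokenizer = AutoTokenizer.from_pretrained("path/to/stable-vicuna-13b-applied")
model = AutoModelForCausalLM.from_pretrained("path/to/stable-vicuna-13b-applied")
model.half().cuda()

# Illustrative prompt and sampling settings; check the model card for the
# recommended chat format.
prompt = "### Human: What is your favorite book?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Loading the tokenizer from the same applied-weights directory as the model keeps the snippet self-contained once the delta has been merged, rather than pointing back at the `stabilityai/stable-vicuna-13b-delta` repo.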