shanearora committed c16aa53 (1 parent: 02752a9)

Update README.md

Files changed (1): README.md (+2, -2)
@@ -79,8 +79,8 @@ The base models related to this adapted model are the following:
  You can load and run this model as usual so long as your HuggingFace version is >= 4.40:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
- olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-Instruct-hf")
- tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-Instruct-hf")
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-SFT-hf")
+ tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-SFT-hf")
  messages = [{"role": "user", "content": "What is 2+2?"}]
  inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
  # optional verifying cuda
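The hunk above notes that loading requires transformers >= 4.40. A minimal stdlib-only sketch of a version gate one might run before calling `from_pretrained` (the `meets_requirement` helper is illustrative, not part of transformers):

```python
# Minimal sketch, assuming a plain numeric version comparison is enough;
# meets_requirement is a hypothetical helper, not a transformers API.

def _as_tuple(version: str) -> tuple:
    # Keep only the numeric release segment, e.g. "4.40.2" -> (4, 40, 2).
    parts = [int(p) for p in version.split(".")[:3] if p.isdigit()]
    while len(parts) < 3:  # pad so "4.40" compares as (4, 40, 0)
        parts.append(0)
    return tuple(parts)

def meets_requirement(installed: str, required: str = "4.40.0") -> bool:
    """True if the installed transformers version satisfies the README's floor."""
    return _as_tuple(installed) >= _as_tuple(required)

print(meets_requirement("4.40.2"))  # True: at or above the 4.40 floor
print(meets_requirement("4.39.3"))  # False: too old to load this model
```

In practice you would pass `transformers.__version__` as `installed`; the helper deliberately ignores pre-release suffixes like `.dev0`.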