natolambert taufiqdp committed on
Commit
f64c424
1 Parent(s): f7a1b71

Rename OLMo model from OLMo-7B to OLMo-1B (#2)

- Rename OLMo model from OLMo-7B to OLMo-1B (381fbabdcbc874ac639d24aa18b151f237c63bd1)


Co-authored-by: Taufiq Dwi Purnomo <taufiqdp@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -93,8 +93,8 @@ Now, proceed as usual with HuggingFace:
 import hf_olmo
 
 from transformers import AutoModelForCausalLM, AutoTokenizer
-olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B")
-tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
+olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B")
+tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B")
 message = ["Language modeling is "]
 inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
 # optional verifying cuda
@@ -109,12 +109,12 @@ Alternatively, with the pipeline abstraction:
 import hf_olmo
 
 from transformers import pipeline
-olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B")
+olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B")
 print(olmo_pipe("Language modeling is "))
 >> 'Language modeling is a branch of natural language processing that aims to...'
 ```
 
-Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
+Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
 The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
 
 Note, you may see the following error if `ai2-olmo` is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.
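
For reference, the renamed snippet reads end to end as below. This is a minimal sketch assuming `ai2-olmo` is installed (`pip install ai2-olmo`); the `generate` call and its sampling parameters are not part of the diff context above and are illustrative only.

```python
# Minimal sketch of the README usage after the rename; generate() arguments are illustrative.
import hf_olmo  # noqa: F401 -- registers the OLMo model/tokenizer classes with transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B")

message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors="pt", return_token_type_ids=False)
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```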
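
The quantized path mentioned in the diff changes the same way. A hedged sketch, assuming `bitsandbytes` is installed and a CUDA device is available; per the README's own note, the input ids are moved to `cuda` explicitly:

```python
# Sketch of 8-bit loading after the rename (assumes bitsandbytes + a CUDA GPU).
import hf_olmo  # noqa: F401
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B")

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
# The quantized model is sensitive to dtype/device, so pass the ids on cuda explicitly.
response = olmo.generate(input_ids=inputs.input_ids.to("cuda"), max_new_tokens=100)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```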