Add generate module and update README.

#1
Files changed (1)
  1. README.md +12 -38
README.md CHANGED
@@ -4,13 +4,13 @@ license_name: apple-sample-code-license
  license_link: LICENSE
  ---

- # OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
+ # OpenELM

  *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*

- We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.
+ We introduce **OpenELM**, a family of **Open**-source **E**fficient **L**anguage **M**odels. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.

- Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
+ Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens.

  See the list below for the details of each model:

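The intro above credits a layer-wise scaling strategy for the parameter allocation. A minimal sketch of the idea, assuming simple linear interpolation of widths across depth; the bounds and dimensions below are illustrative placeholders, not OpenELM's published hyperparameters:

```python
# Sketch of layer-wise scaling: instead of giving every transformer layer
# identical widths, interpolate the attention-head count and the FFN width
# from the first layer to the last. Bounds here are illustrative only.
def layerwise_config(num_layers, d_model, head_dim,
                     alpha=(0.5, 1.0), beta=(0.5, 4.0)):
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)            # 0.0 at the first layer, 1.0 at the last
        a = alpha[0] + (alpha[1] - alpha[0]) * t  # attention width multiplier
        b = beta[0] + (beta[1] - beta[0]) * t     # FFN width multiplier
        n_heads = max(1, round(a * d_model / head_dim))
        configs.append({"layer": i, "n_heads": n_heads, "ffn_dim": int(b * d_model)})
    return configs

# Early layers end up narrower and later layers wider, so the parameter
# budget shifts toward depth without changing the number of layers.
for cfg in layerwise_config(num_layers=4, d_model=1280, head_dim=64):
    print(cfg)
```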
@@ -54,7 +54,7 @@ Additional arguments to the hugging face generate function can be passed via `ge
  ```
  python generate_openelm.py --model [MODEL_NAME] --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
  ```
- Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
+ Alternatively, model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) can also be tried by passing a smaller model through the `assistant_model` argument, for example:
  ```
  python generate_openelm.py --model [MODEL_NAME] --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL_NAME]
  ```
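The same assisted-generation path can also be driven through the `transformers` API directly, which is what `generate_openelm.py` wraps. A minimal sketch; the model pairing and generation settings below are illustrative choices, not the script's defaults:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM checkpoints reuse the Llama-2 tokenizer (a gated repo; an HF access
# token is required for it and for the OpenELM weights).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-3B", trust_remote_code=True)
# The smaller family member drafts tokens; the larger model only verifies them.
assistant = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    repetition_penalty=1.2,
    assistant_model=assistant,        # speculative decoding with a draft model
    # prompt_lookup_num_tokens=10,    # alternative: prompt-lookup decoding, no draft model
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```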
@@ -133,10 +133,9 @@ pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
  ```bash

  # OpenELM-270M
- hf_model=apple/OpenELM-270M
+ hf_model=OpenELM-270M

- # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True
- tokenizer=meta-llama/Llama-2-7b-hf
+ # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
  add_bos_token=True
  batch_size=1
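The `add_bos_token` comment is easy to verify directly with the tokenizer. A quick check, assuming access to the gated `meta-llama/Llama-2-7b-hf` repo (whose BOS id is 1):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

with_bos = tokenizer("Once upon a time", add_special_tokens=True).input_ids
without_bos = tokenizer("Once upon a time", add_special_tokens=False).input_ids

print(tokenizer.bos_token_id)   # 1 for Llama-2
print(with_bos[:3])             # begins with the BOS id
print(without_bos[:3])          # no BOS prepended, the case the flag guards against
```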
 
@@ -145,7 +144,7 @@ mkdir lm_eval_output
  shot=0
  task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
  lm_eval --model hf \
- --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
+ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
  --tasks ${task} \
  --device cuda:0 \
  --num_fewshot ${shot} \
@@ -155,7 +154,7 @@ lm_eval --model hf \
  shot=5
  task=mmlu,winogrande
  lm_eval --model hf \
- --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
+ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
  --tasks ${task} \
  --device cuda:0 \
  --num_fewshot ${shot} \
@@ -165,7 +164,7 @@ lm_eval --model hf \
  shot=25
  task=arc_challenge,crows_pairs_english
  lm_eval --model hf \
- --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
+ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
  --tasks ${task} \
  --device cuda:0 \
  --num_fewshot ${shot} \
@@ -175,7 +174,7 @@ lm_eval --model hf \
  shot=10
  task=hellaswag
  lm_eval --model hf \
- --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
+ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
  --tasks ${task} \
  --device cuda:0 \
  --num_fewshot ${shot} \
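The four runs above differ only in shot count and task list, so they can also be driven from a single loop. A sketch that shells out to the same `lm_eval` CLI; the output paths are illustrative, the other flags mirror the commands above:

```python
import subprocess

hf_model = "OpenELM-270M"
settings = [  # (num_fewshot, tasks), matching the four blocks above
    (0, "arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2"),
    (5, "mmlu,winogrande"),
    (25, "arc_challenge,crows_pairs_english"),
    (10, "hellaswag"),
]

for shot, tasks in settings:
    subprocess.run([
        "lm_eval", "--model", "hf",
        "--model_args", f"pretrained={hf_model},trust_remote_code=True,add_bos_token=True",
        "--tasks", tasks,
        "--device", "cuda:0",
        "--num_fewshot", str(shot),
        "--batch_size", "1",
        "--output_path", f"lm_eval_output/{shot}shot",  # illustrative path
    ], check=True)
```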
@@ -187,30 +186,5 @@ lm_eval --model hf \

  ## Bias, Risks, and Limitations

- The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
-
- ## Citation
-
- If you find our work useful, please cite:
-
- ```BibTex
- @article{mehtaOpenELMEfficientLanguage2024,
- title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
- shorttitle = {{OpenELM}},
- url = {https://arxiv.org/abs/2404.14619v1},
- language = {en},
- urldate = {2024-04-24},
- journal = {arXiv.org},
- author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
- month = apr,
- year = {2024},
- }
-
- @inproceedings{mehta2022cvnets,
- author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
- title = {CVNets: High Performance Library for Computer Vision},
- year = {2022},
- booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
- series = {MM '22}
- }
- ```
+ Our OpenELM models are not trained with any safety guarantees, so their outputs can be inaccurate, harmful, or biased, and may produce objectionable responses to user prompts. Therefore, users and developers should conduct extensive safety testing and implement filtering suited to their specific needs.
+
 