Add generate module and update README.

#1
Files changed (3)
  1. LICENSE +47 -88
  2. README.md +13 -38
  3. modeling_openelm.py +3 -3
LICENSE CHANGED
@@ -1,88 +1,47 @@
- Disclaimer: IMPORTANT: This Apple Machine Learning Research Model is
- specifically developed and released by Apple Inc. ("Apple") for the sole purpose
- of scientific research of artificial intelligence and machine-learning
- technology. Apple Machine Learning Research Model” means the model, including
- but not limited to algorithms, formulas, trained model weights, parameters,
- configurations, checkpoints, and any related materials (including
- documentation).
-
- This Apple Machine Learning Research Model is provided to You by
- Apple in consideration of your agreement to the following terms, and your use,
- modification, creation of Model Derivatives, and or redistribution of the Apple
- Machine Learning Research Model constitutes acceptance of this Agreement. If You
- do not agree with these terms, please do not use, modify, create Model
- Derivatives of, or distribute this Apple Machine Learning Research Model or
- Model Derivatives.
-
- * License Scope: In consideration of your agreement to abide by the following
-   terms, and subject to these terms, Apple hereby grants you a personal,
-   non-exclusive, worldwide, non-transferable, royalty-free, revocable, and
-   limited license, to use, copy, modify, distribute, and create Model
-   Derivatives (defined below) of the Apple Machine Learning Research Model
-   exclusively for Research Purposes. You agree that any Model Derivatives You
-   may create or that may be created for You will be limited to Research Purposes
-   as well. “Research Purposes” means non-commercial scientific research and
-   academic development activities, such as experimentation, analysis, testing
-   conducted by You with the sole intent to advance scientific knowledge and
-   research. “Research Purposes” does not include any commercial exploitation,
-   product development or use in any commercial product or service.
-
- * Distribution of Apple Machine Learning Research Model and Model Derivatives:
-   If you choose to redistribute Apple Machine Learning Research Model or its
-   Model Derivatives, you must provide a copy of this Agreement to such third
-   party, and ensure that the following attribution notice be provided: “Apple
-   Machine Learning Research Model is licensed under the Apple Machine Learning
-   Research Model License Agreement.” Additionally, all Model Derivatives must
-   clearly be identified as such, including disclosure of modifications and
-   changes made to the Apple Machine Learning Research Model. The name,
-   trademarks, service marks or logos of Apple may not be used to endorse or
-   promote Model Derivatives or the relationship between You and Apple. “Model
-   Derivatives” means any models or any other artifacts created by modifications,
-   improvements, adaptations, alterations to the architecture, algorithm or
-   training processes of the Apple Machine Learning Research Model, or by any
-   retraining, fine-tuning of the Apple Machine Learning Research Model.
-
- * No Other License: Except as expressly stated in this notice, no other rights
-   or licenses, express or implied, are granted by Apple herein, including but
-   not limited to any patent, trademark, and similar intellectual property rights
-   worldwide that may be infringed by the Apple Machine Learning Research Model,
-   the Model Derivatives or by other works in which the Apple Machine Learning
-   Research Model may be incorporated.
-
- * Compliance with Laws: Your use of Apple Machine Learning Research Model must
-   be in compliance with all applicable laws and regulations.
-
- * Term and Termination: The term of this Agreement will begin upon your
-   acceptance of this Agreement or use of the Apple Machine Learning Research
-   Model and will continue until terminated in accordance with the following
-   terms. Apple may terminate this Agreement at any time if You are in breach of
-   any term or condition of this Agreement. Upon termination of this Agreement,
-   You must cease to use all Apple Machine Learning Research Models and Model
-   Derivatives and permanently delete any copy thereof. Sections 3, 6 and 7 will
-   survive termination.
-
- * Disclaimer and Limitation of Liability: This Apple Machine Learning Research
-   Model and any outputs generated by the Apple Machine Learning Research Model
-   are provided on an “AS IS” basis. APPLE MAKES NO WARRANTIES, EXPRESS OR
-   IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF
-   NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE,
-   REGARDING THE APPLE MACHINE LEARNING RESEARCH MODEL OR OUTPUTS GENERATED BY
-   THE APPLE MACHINE LEARNING RESEARCH MODEL. You are solely responsible for
-   determining the appropriateness of using or redistributing the Apple Machine
-   Learning Research Model and any outputs of the Apple Machine Learning Research
-   Model and assume any risks associated with Your use of the Apple Machine
-   Learning Research Model and any output and results. IN NO EVENT SHALL APPLE BE
-   LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
-   IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF
-   THE APPLE MACHINE LEARNING RESEARCH MODEL AND ANY OUTPUTS OF THE APPLE MACHINE
-   LEARNING RESEARCH MODEL, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT,
-   TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS
-   BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
- * Governing Law: This Agreement will be governed by and construed under the laws
-   of the State of California without regard to its choice of law principles. The
-   Convention on Contracts for the International Sale of Goods shall not apply to
-   the Agreement except that the arbitration clause and any arbitration hereunder
-   shall be governed by the Federal Arbitration Act, Chapters 1 and 2.
-
- Copyright (C) 2025 Apple Inc. All Rights Reserved.
+ Copyright (C) 2024 Apple Inc. All Rights Reserved.
+
+ Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple
+ Inc. ("Apple") in consideration of your agreement to the following
+ terms, and your use, installation, modification or redistribution of
+ this Apple software constitutes acceptance of these terms. If you do
+ not agree with these terms, please do not use, install, modify or
+ redistribute this Apple software.
+
+ In consideration of your agreement to abide by the following terms, and
+ subject to these terms, Apple grants you a personal, non-exclusive
+ license, under Apple's copyrights in this original Apple software (the
+ "Apple Software"), to use, reproduce, modify and redistribute the Apple
+ Software, with or without modifications, in source and/or binary forms;
+ provided that if you redistribute the Apple Software in its entirety and
+ without modifications, you must retain this notice and the following
+ text and disclaimers in all such redistributions of the Apple Software.
+ Neither the name, trademarks, service marks or logos of Apple Inc. may
+ be used to endorse or promote products derived from the Apple Software
+ without specific prior written permission from Apple. Except as
+ expressly stated in this notice, no other rights or licenses, express or
+ implied, are granted by Apple herein, including but not limited to any
+ patent rights that may be infringed by your derivative works or by other
+ works in which the Apple Software may be incorporated.
+
+ The Apple Software is provided by Apple on an "AS IS" basis. APPLE
+ MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION
+ THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND
+ OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.
+
+ IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL
+ OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION,
+ MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED
+ AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE),
+ STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+
+ -------------------------------------------------------------------------------
+ SOFTWARE DISTRIBUTED IN THIS REPOSITORY:
+
+ This software includes a number of subcomponents with separate
+ copyright notices and license terms - please see the file ACKNOWLEDGEMENTS.
+ -------------------------------------------------------------------------------
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
- license: apple-amlr
+ license: other
 license_name: apple-sample-code-license
 license_link: LICENSE
 ---
@@ -8,9 +8,9 @@ license_link: LICENSE
 
 *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
 
- We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
+ We introduce **OpenELM**, a family of **Open**-source **E**fficient **L**anguage **M**odels. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.
 
- Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
+ Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens.
 
 
 
@@ -28,11 +28,12 @@ Additional arguments to the hugging face generate function can be passed via `ge
 ```
 python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
 ```
- Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
+ Alternatively, model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) can also be tried by passing a smaller model through the `assistant_model` argument, for example:
 ```
 python generate_openelm.py --model apple/OpenELM-270M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
 ```
 
+
 ## Main Results
 
 ### Zero-Shot
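As a companion to the CLI examples in this hunk, the same assisted-generation setup can be driven directly through the standard `transformers` `generate` API. The following is a minimal sketch, not code from this repository; the model ids, prompt, and generation settings are illustrative, and the gated `meta-llama/Llama-2-7b-hf` tokenizer that OpenELM reuses may require an access token.

```python
# Minimal sketch (not part of this PR): assisted generation via transformers.
# Model ids, prompt, and max_new_tokens are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-1_1B", trust_remote_code=True)
assistant = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        assistant_model=assistant,  # smaller model drafts tokens, larger model verifies them
        repetition_penalty=1.2,
        max_new_tokens=64,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```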
@@ -106,10 +107,9 @@ pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
 ```bash
 
 # OpenELM-270M
- hf_model=apple/OpenELM-270M
+ hf_model=OpenELM-270M
 
- # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True
- tokenizer=meta-llama/Llama-2-7b-hf
+ # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
 add_bos_token=True
 batch_size=1
 
@@ -118,7 +118,7 @@ mkdir lm_eval_output
 shot=0
 task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
 lm_eval --model hf \
-     --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
+     --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
     --tasks ${task} \
     --device cuda:0 \
     --num_fewshot ${shot} \
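A quick way to see what the `add_bos_token` flag in the hunks above actually changes: on the Llama-2 tokenizer that OpenELM reuses, the attribute controls whether `<s>` is prepended to every encoded sequence. The snippet below is an illustrative check only, not part of the evaluation recipe, and assumes access to the gated `meta-llama/Llama-2-7b-hf` tokenizer.

```python
# Illustrative check of the add_bos_token behaviour discussed above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

tok.add_bos_token = True
print(tok("Once upon a time").input_ids[0] == tok.bos_token_id)   # True: <s> is prepended

tok.add_bos_token = False
print(tok("Once upon a time").input_ids[0] == tok.bos_token_id)   # False: no <s>
```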
@@ -128,7 +128,7 @@ lm_eval --model hf \
 shot=5
 task=mmlu,winogrande
 lm_eval --model hf \
-     --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
+     --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
     --tasks ${task} \
     --device cuda:0 \
     --num_fewshot ${shot} \
@@ -138,7 +138,7 @@ lm_eval --model hf \
 shot=25
 task=arc_challenge,crows_pairs_english
 lm_eval --model hf \
-     --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
+     --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
     --tasks ${task} \
     --device cuda:0 \
     --num_fewshot ${shot} \
@@ -148,7 +148,7 @@ lm_eval --model hf \
 shot=10
 task=hellaswag
 lm_eval --model hf \
-     --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
+     --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
     --tasks ${task} \
     --device cuda:0 \
     --num_fewshot ${shot} \
@@ -160,30 +160,5 @@ lm_eval --model hf \
 
 ## Bias, Risks, and Limitations
 
- The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
-
- ## Citation
-
- If you find our work useful, please cite:
-
- ```BibTex
- @article{mehtaOpenELMEfficientLanguage2024,
-   title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
-   shorttitle = {{OpenELM}},
-   url = {https://arxiv.org/abs/2404.14619v1},
-   language = {en},
-   urldate = {2024-04-24},
-   journal = {arXiv.org},
-   author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
-   month = apr,
-   year = {2024},
- }
-
- @inproceedings{mehta2022cvnets,
-   author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
-   title = {CVNets: High Performance Library for Computer Vision},
-   year = {2022},
-   booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
-   series = {MM '22}
- }
- ```
+ Our OpenELM models are not trained with any safety guarantees; the model outputs can be inaccurate, harmful, biased, or otherwise objectionable in response to user prompts. Therefore, users and developers should conduct extensive safety testing and implement filtering suited to their specific needs.
+
 
modeling_openelm.py CHANGED
@@ -783,7 +783,7 @@ class OpenELMModel(OpenELMPreTrainedModel):
         )
 
         if self.config._attn_implementation == "sdpa" and attention_mask is not None:
-            # For dynamo, rather use a check on fullgraph=True once this is possible (https://github.com/pytorch/pytorch/pull/120400).
+            # TODO: For dynamo, rather use a check on fullgraph=True once this is possible (https://github.com/pytorch/pytorch/pull/120400).
             is_tracing = (
                 torch.jit.is_tracing()
                 or isinstance(input_tensor, torch.fx.Proxy)
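For context on the comment this hunk turns into a TODO: the guard it precedes detects whether the model is currently being JIT-traced, FX-traced, or compiled, so that data-dependent Python branching on the attention mask can be skipped. The snippet below is a paraphrased sketch of that pattern, not the exact code from `modeling_openelm.py`.

```python
# Sketch of the tracing/compilation guard the TODO refers to (illustrative,
# not copied verbatim): data-dependent branches must be avoided under
# torch.jit tracing, FX tracing, or torch.compile (dynamo).
import torch

def _is_tracing(input_tensor: torch.Tensor) -> bool:
    return (
        torch.jit.is_tracing()
        or isinstance(input_tensor, torch.fx.Proxy)
        or (hasattr(torch, "_dynamo") and torch._dynamo.is_compiling())
    )
```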
@@ -967,7 +967,7 @@ class OpenELMForCausalLM(OpenELMPreTrainedModel):
             input_ids = input_ids[:, past_length:]
             position_ids = position_ids[:, past_length:]
 
-        # we should only keep a `cache_position` in generate, and do +=1.
+        # TODO @gante we should only keep a `cache_position` in generate, and do +=1.
         # same goes for position ids. Could also help with continued generation.
         cache_position = torch.arange(
             past_length,
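The `cache_position` mentioned in the TODO is simply the absolute positions of the tokens fed in the current decoding step, offset by how many tokens are already in the KV cache. A self-contained sketch with made-up sizes:

```python
# Illustrative only: how cache_position is derived during incremental decoding.
# past_length tokens are already cached; the new step feeds input_ids of length
# seq_len, so their positions are past_length .. past_length + seq_len - 1.
import torch

past_length = 5                      # tokens already in the KV cache (made-up)
input_ids = torch.tensor([[42, 7]])  # two new tokens this step (made-up)

cache_position = torch.arange(
    past_length, past_length + input_ids.shape[-1], device=input_ids.device
)
print(cache_position)  # tensor([5, 6])
```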
@@ -981,7 +981,7 @@ class OpenELMForCausalLM(OpenELMPreTrainedModel):
         else:
             # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
             # recompiles graphs as the stride of the inputs is a guard. Ref: https://github.com/huggingface/transformers/pull/29114
-            # We could use `next_tokens` directly instead.
+            # TODO: use `next_tokens` directly instead.
             model_inputs = {"input_ids": input_ids.contiguous()}
 
         model_inputs.update(
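To see why the `.contiguous()` call matters here: slicing `input_ids` during decoding returns a view whose strides follow the original buffer, and torchdynamo guards on input strides, so non-contiguous views can force recompiles. A toy demonstration (batch size 2 is used so the effect is visible):

```python
# Toy demonstration of the static-stride issue behind the contiguous() call.
import torch

input_ids = torch.randint(0, 32000, (2, 8))  # made-up batch of token ids
last_token = input_ids[:, -1:]               # view whose stride still spans the full row

print(last_token.is_contiguous())               # False
print(last_token.contiguous().is_contiguous())  # True: fresh buffer with a static stride
```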
 