qicao-apple committed
Commit 7d53f89
1 Parent(s): 1900662

add OpenELM

Files changed (1): README.md (+28 -3)
README.md CHANGED
@@ -4,11 +4,11 @@ license_name: apple-sample-code-license
 license_link: LICENSE
 ---
 
-# OpenELM
+# OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
 
 *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
 
-We introduce **OpenELM**, a family of **Open**-source **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters.
+We introduce **OpenELM**, a family of **Open**-source **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters.
 
 Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check the license agreements and terms of these datasets before using them.
 
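As an editorial aside for readers of this model card: the snippet below is a minimal, illustrative sketch of loading one of the released checkpoints with Hugging Face `transformers`. The checkpoint id `apple/OpenELM-270M` is assumed from the `hf_model=OpenELM-270M` variable in the evaluation setup below, as is access to the gated `meta-llama/Llama-2-7b-hf` tokenizer (the diff notes that OpenELM reuses the LLaMA tokenizer); `trust_remote_code=True` is assumed to be needed because the repository ships custom modeling code. This is a sketch, not the authors' official usage instructions.

```python
# Illustrative sketch, not the official usage instructions.
# Assumptions: checkpoint id "apple/OpenELM-270M", custom modeling code in the
# repo (hence trust_remote_code=True), and the LLaMA-2 tokenizer for encoding.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Once upon a time there was"
inputs = tokenizer(prompt, return_tensors="pt")  # LLaMA tokenizer prepends BOS by default
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```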
@@ -135,7 +135,6 @@ pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
 # OpenELM-270M
 hf_model=OpenELM-270M
 
-
 # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
 tokenizer=meta-llama/Llama-2-7b-hf
 add_bos_token=True
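These shell variables feed the `lm_eval` command whose first line appears in the next hunk header. As a rough equivalent, here is a hedged sketch using the lm-eval-harness Python API (`simple_evaluate`, available in v0.4+) rather than the CLI; the task name `arc_easy` is a placeholder chosen purely for illustration and is not taken from this commit.

```python
# Sketch of the same evaluation setup via the lm-eval-harness Python API.
# Assumptions: lm-eval-harness >= 0.4 and the checkpoint id "apple/OpenELM-270M";
# the task "arc_easy" is an illustrative placeholder.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=apple/OpenELM-270M,"
        "trust_remote_code=True,"
        "tokenizer=meta-llama/Llama-2-7b-hf,"
        "add_bos_token=True"  # LLaMA tokenizer requires BOS (see comment above)
    ),
    tasks=["arc_easy"],
    batch_size=8,
)
print(results["results"])
```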
@@ -189,3 +188,29 @@ lm_eval --model hf \
 ## Bias, Risks, and Limitations
 
 The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, these models may produce outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. It is therefore imperative that users and developers undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
+
+## Citation
+
+If you find our work useful, please cite:
+
+```bibtex
+@article{mehtaOpenELMEfficientLanguage2024,
+  title      = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open}-source {Training} and {Inference} {Framework}},
+  shorttitle = {{OpenELM}},
+  url        = {https://arxiv.org/abs/2404.14619v1},
+  language   = {en},
+  urldate    = {2024-04-24},
+  journal    = {arXiv.org},
+  author     = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
+  month      = apr,
+  year       = {2024},
+}
+
+@inproceedings{mehta2022cvnets,
+  author    = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
+  title     = {CVNets: High Performance Library for Computer Vision},
+  year      = {2022},
+  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
+  series    = {MM '22}
+}
+```