Text Generation · Transformers · Safetensors · openelm · custom_code
qicao-apple committed
Commit 24982bb
1 Parent(s): 654de97

update OpenELM-1_1B

Files changed (1):
README.md (+3, -3)
README.md CHANGED
@@ -8,7 +8,7 @@ license_link: LICENSE
 
 *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
 
-We introduce **OpenELM**, a family of **Open**-source **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.
+We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.
 
 Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
 
@@ -106,7 +106,7 @@ pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
 ```bash
 
 # OpenELM-1_1B-Instruct
-hf_model=OpenELM-1_1B-Instruct
+hf_model=apple/OpenELM-1_1B-Instruct
 
 # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True
 tokenizer=meta-llama/Llama-2-7b-hf
@@ -168,7 +168,7 @@ If you find our work useful, please cite:
 
 ```BibTex
 @article{mehtaOpenELMEfficientLanguage2024,
-  title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open}-source {Training} and {Inference} {Framework}},
+  title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
   shorttitle = {{OpenELM}},
   url = {https://arxiv.org/abs/2404.14619v1},
   language = {en},
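
For context on the `hf_model` change: the variables in that README snippet are normally consumed by an lm-eval-harness invocation. The command below is a minimal sketch, assuming the `lm_eval` CLI from lm-eval-harness v0.4+; the task list, device, and output path are illustrative placeholders, not values taken from this commit.

```bash
# Sketch: evaluate OpenELM-1_1B-Instruct with lm-eval-harness (assumed v0.4+ CLI).
# hf_model and tokenizer mirror the README snippet above; tasks, device, and
# output_path below are placeholders.
hf_model=apple/OpenELM-1_1B-Instruct
tokenizer=meta-llama/Llama-2-7b-hf

lm_eval --model hf \
  --model_args pretrained=${hf_model},trust_remote_code=True,tokenizer=${tokenizer},add_bos_token=True \
  --tasks arc_easy,hellaswag \
  --device cuda:0 \
  --batch_size 1 \
  --output_path ./lm_eval_results
```

Passing `add_bos_token=True` mirrors the comment in the README snippet: lm-eval-harness sets `add_bos_token` to False by default, while the LLaMA tokenizer that OpenELM reuses expects a BOS token. `trust_remote_code=True` is needed because the repository ships custom modeling code (the `custom_code` tag above).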