shanearora committed 7d2c3e1 (1 parent: 6547019)

Update README.md

Files changed (1):
  1. README.md +227 -0
README.md CHANGED
---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
---

<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>

# Model Card for OLMo 1B

<!-- Provide a quick summary of what the model is/does. -->

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
This model has been converted from [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) to the
Hugging Face Transformers format.
## Model Details

The core models released in this batch are the following:

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B-hf) | 3 Trillion | 16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B-hf) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T-hf) | 2 Trillion | 32 | 4096 | 32 | 2048 |

We are releasing many checkpoints for these models, one for every 1000 training steps. These have not
yet been converted into Hugging Face Transformers format, but are available in [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B).
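
As a sketch, one of those original-format checkpoints might be loaded as follows. This assumes the `ai2-olmo` package is installed, and the revision name below is hypothetical; check the branch list of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) for real checkpoint names.

```python
from transformers import AutoModelForCausalLM

# Load an intermediate (non-converted) OLMo checkpoint.
# Assumptions: `pip install ai2-olmo` has been run, and the revision
# name is hypothetical; see the repo's branches for actual names.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B",
    revision="step20000-tokens42B",  # hypothetical checkpoint branch
    trust_remote_code=True,          # required for the original OLMo format
)
```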
### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer-style autoregressive language model
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023, based on the Dolma dataset version

### Model Sources

<!-- Provide the basic links for the model. -->

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
  - Core repo (training, inference, fine-tuning, etc.): https://github.com/allenai/OLMo
  - Evaluation code: https://github.com/allenai/OLMo-Eval
  - Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-1B/reports/OLMo-1B--Vmlldzo2NzY1Njk1
<!-- - **Press release:** TODO -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Inference
Quickly get inference running with the following:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move the inputs and the model to CUDA
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```
Alternatively, with the pipeline abstraction:
```python
from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B-hf")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```

Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to input dtypes and CUDA placement, so we recommend passing the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.

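Putting those recommendations together, a minimal quantized-inference sketch (assumes `bitsandbytes` is installed and a CUDA device is available; the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit loading requires the bitsandbytes package and a CUDA device
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B-hf", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")
inputs = tokenizer(["Language modeling is "], return_tensors='pt', return_token_type_ids=False)
# pass input_ids on CUDA directly, per the note above
response = olmo.generate(inputs.input_ids.to('cuda'), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```
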
### Fine-tuning

This model does not directly support our fine-tuning processes. Model fine-tuning can be done
from the final checkpoint or from any of the many intermediate checkpoints of
[allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B).

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

Core model results for the 7B model are shown below.

| | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) |
|------------------------|----------|------------|-----------|--------|---------|
| arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 |
| arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 |
| boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 |
| copa | 85.0 | 87.0 | 86.0 | 85.0 | 90.0 |
| hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 |
| openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 |
| piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 |
| sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 |
| winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 |
| **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 |
| truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33.0 | 36.0 |
| MMLU (5-shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 |
| GSM8k (mixed eval.) | 10.0 (8-shot CoT) | 12.0 (8-shot CoT) | 4.0 (5-shot) | 4.5 (5-shot) | 8.5 (8-shot CoT) |
| **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 |

And for the 1B model:

| task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) |
|---------------|--------|-------|-------|-------|-------|
| arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 |
| arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 |
| boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 |
| copa | 50 | 84 | 72 | 78 | 79 |
| hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 |
| openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 |
| piqa | 50 | 74 | 69.1 | 71.1 | 73.7 |
| sciq | 25 | 94.7 | 86 | 90.5 | 88.1 |
| winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 |
| Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 |

\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts difficult.

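For intuition about how these numbers are produced, here is a simplified ranked-classification sketch in the style used for tasks like arc_easy: each answer option is scored by the model's log-likelihood given the question, and the highest-scoring option is picked. This is an illustration only, not the actual OLMo-Eval pipeline, and the question and options are made up.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-hf")

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to the continuation tokens."""
    ctx_len = tokenizer(context, return_tensors='pt').input_ids.shape[1]
    full_ids = tokenizer(context + continuation, return_tensors='pt').input_ids
    with torch.no_grad():
        logits = olmo(full_ids).logits
    # logits at position i are the distribution over the token at position i + 1
    logprobs = F.log_softmax(logits[0, :-1], dim=-1)
    cont_ids = full_ids[0, ctx_len:]
    positions = range(ctx_len - 1, full_ids.shape[1] - 1)
    return sum(logprobs[pos, tok].item() for pos, tok in zip(positions, cont_ids))

# hypothetical multiple-choice item
question = "Question: What force pulls objects toward Earth?\nAnswer:"
options = [" gravity", " magnetism", " friction"]
scores = [continuation_logprob(question, o) for o in options]
print(options[scores.index(max(scores))])
```
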
## Model Details

### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.

### Architecture

OLMo 7B architecture, with peer models for comparison.

| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
|------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
| d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
| num heads | 32 | 32 | 32 | 71 | 16 |
| num layers | 32 | 32 | 32 | 32 | 32 |
| MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
| LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
| pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
| attention variant | full | GQA | full | MQA | MQA |
| biases | none | none | in LN only | in LN only | none |
| block type | sequential | sequential | sequential | parallel | parallel |
| activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
| sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
| batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
| batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
| weight tying | no | no | no | no | yes |

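The corresponding values for this converted 1B checkpoint can be read off its config; a small sketch (the field names follow the usual Transformers config conventions and are an assumption here):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("allenai/OLMo-1B-hf")
print(config.hidden_size)              # d_model; expected 2048 for OLMo 1B
print(config.num_hidden_layers)        # expected 16
print(config.num_attention_heads)      # expected 16
print(config.max_position_embeddings)  # context length; expected 2048
```
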
### Hyperparameters

AdamW optimizer parameters are shown below.

| Size | Peak LR | Betas | Epsilon | Weight Decay |
|------|---------|-------------|---------|--------------|
| 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
| 7B | 3.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |

Optimizer settings comparison with peer models.

| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
|-----------------------|------------------|---------------------|--------------------|--------------------|
| warmup steps | 5000 | 2000 | 2000 | 1000 |
| peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
| minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
| weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
| beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
| beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
| epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
| LR schedule | linear | cosine | cosine | cosine |
| gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
| gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
| optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |

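To make these settings concrete, a sketch of constructing the 1B optimizer in PyTorch (warmup and total step counts are placeholders, since this card does not give them for the 1B model; note also that OLMo decays linearly to a nonzero minimum LR, which this standard helper does not capture exactly):

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the actual model parameters

# AdamW settings from the 1B row of the hyperparameters table
optimizer = torch.optim.AdamW(
    model.parameters(), lr=4.0e-4, betas=(0.9, 0.95), eps=1.0e-5, weight_decay=0.1
)
# linear warmup then linear decay; step counts below are placeholders
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=2000, num_training_steps=100_000
)
```
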
## Environmental Impact

OLMo 7B variants were trained either on MI250X GPUs at the LUMI supercomputer or on A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact is given below. Further details are available in the paper.

| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/kWh) | Carbon Emissions (tCO₂eq) |
|--------------|------------|-----------------------------|--------------------------------|---------------------------|
| OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* |
| OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 |

\*LUMI runs on hydroelectric power, so no carbon emissions are attributed to training there.

## Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful or sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend that users consider the risks of applications of this technology.

In addition, statements from OLMo, as from any LLM, are often inaccurate, so facts should be verified.

209
+
210
+ ## Citation
211
+
212
+ **BibTeX:**
213
+
214
+ ```
215
+ @article{Groeneveld2023OLMo,
216
+ title={OLMo: Accelerating the Science of Language Models},
217
+ author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
218
+ journal={Preprint},
219
+ year={2024}
220
+ }
221
+ ```

**APA:**

Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.

## Model Card Contact

For errors in this model card, contact Nathan, Akshita, or Shane: `{nathanl, akshitab, shanea} at allenai dot org`.