---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
---


<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Model Card for OLMo 1.7-7B-hf

OLMo 1.7 7B is the latest version of the original [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) model, featuring a 24-point increase in MMLU, among other evaluation improvements, thanks to an improved version of the Dolma dataset and staged training.
**This version is for direct use with HuggingFace Transformers** from v4.40 on.

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs, and details involved in training these models.

## Model Details

The core models released in this batch are the following: 
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B)   | 3 Trillion |16     | 2048        | 16              | 2048  |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion   | 32     | 4096        | 32              |  2048  |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion   | 32     | 4096        | 32              |  2048  |
| [OLMo 1.7-7B](https://huggingface.co/allenai/OLMo-1.7-7B) | 2.05 Trillion   | 32     | 4096        | 32              |  4096  |

*Note: OLMo 1.7-7B also includes QKV clipping.*


[Coming soon] We are releasing many checkpoints for these models, one for every 1000 training steps.
The naming convention is `step1000-tokens4B`.

To load a specific model revision with HuggingFace, simply add the argument `revision`:
```python
from transformers import AutoModelForCausalLM

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B-hf", revision="step1000-tokens4B")
```

All revisions/branches are listed in the file `revisions.txt`. 
Or, you can access all the revisions for the models via the following code snippet:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("allenai/OLMo-1.7-7B-hf")
branches = [b.name for b in out.branches]
```
A few revisions were lost due to an error, but the vast majority are present.

### Model Description

- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version.


### Model Sources

- **Project Page:** https://allenai.org/olmo
- **Repositories:** 
    - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
    - Evaluation code: https://github.com/allenai/OLMo-Eval
    - Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-1-7-7b-a-24-point-improvement-on-mmlu-92b43f7d269d 
- **W&B Logs:** [pretraining](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B), [annealing](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B-anneal)
<!-- - **Press release:** TODO -->

## Uses

### Inference

Install Transformers [from source](https://huggingface.co/docs/transformers/en/installation#install-from-source), or update to v4.40 or later, which integrates this [PR](https://github.com/huggingface/transformers/pull/29890).

Now, proceed as usual with HuggingFace:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1.7-7B-hf")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move inputs and model to GPU
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```
Alternatively, with the pipeline abstraction:
```python
from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1.7-7B-hf")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```

Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to data types and CUDA placement, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
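
As a concrete illustration of the note above, here is a minimal sketch of quantized inference, assuming a CUDA GPU and `bitsandbytes` installed (the prompt and sampling parameters mirror the earlier example):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit quantized load; requires bitsandbytes and a CUDA-capable GPU.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1.7-7B-hf", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1.7-7B-hf")
inputs = tokenizer("Language modeling is ", return_tensors="pt", return_token_type_ids=False)
# Pass the input IDs tensor explicitly on CUDA, as recommended above.
response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```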

Note, you may see the following error if `ai2-olmo` is not installed correctly, which is caused by an internal Python package-name check. We'll update the code soon to make this error clearer.
```bash
    raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo`
```

### Fine-tuning
Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
1. Fine-tune with the OLMo repository:
```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state
```
For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo?tab=readme-ov-file#fine-tuning). A sketch of preparing the expected `.npy` inputs is shown after this list.

2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are [here](https://github.com/allenai/open-instruct).
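
For recipe 1, the `--data.paths` / `--data.label_mask_paths` arguments point at pre-tokenized numpy arrays. The snippet below is a hedged sketch of how such files could be produced; the shapes, dtypes, and padding choices are illustrative assumptions, so consult the OLMo repo's data-preparation scripts for the authoritative format.
```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1.7-7B-hf")
seq_len = 2048
examples = [("What is 2+2?", " 4")]  # (prompt, completion) pairs; placeholder data

input_ids, label_mask = [], []
for prompt, completion in examples:
    p = tokenizer(prompt, add_special_tokens=False).input_ids
    c = tokenizer(completion, add_special_tokens=False).input_ids + [tokenizer.eos_token_id]
    ids = (p + c)[:seq_len]
    mask = ([False] * len(p) + [True] * len(c))[:seq_len]  # loss only on the completion
    pad = seq_len - len(ids)
    input_ids.append(ids + [tokenizer.pad_token_id or 0] * pad)
    label_mask.append(mask + [False] * pad)

np.save("input_ids.npy", np.array(input_ids, dtype=np.int32))
np.save("label_mask.npy", np.array(label_mask, dtype=bool))
```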

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

Core model results for the new and original 7B models are found below.

| Task              | Llama-7b | Llama2-7b | Falcon-7b | Mpt-7b | OLMo-7B | Llama2-13b | **OLMo 1.7-7B** |
|-------------------|----------|-----------|-----------|--------|---------|------------|-------------|
| arc_c             | 44.5     | 48.5      | 47.5      | 46.5   | 48.5    | 52.8       | 42.5        |
| arc_e             | 67.9     | 69.5      | 70.4      | 70.5   | 65.4    | 73.7       | 67.2        |
| boolq             | 75.4     | 80.2      | 74.6      | 74.2   | 73.4    | 82.2       | 83.7        |
| copa              | 91.0     | 86.0      | 86.0      | 85.0   | 90.0    | 90.0       | 86.0        |
| hellaswag         | 76.2     | 76.8      | 75.9      | 77.6   | 76.4    | 78.6       | 75.5        |
| openbookqa        | 51.2     | 48.4      | 53.0      | 48.6   | 50.4    | 51.8       | 50.0        |
| piqa              | 77.2     | 76.7      | 78.5      | 77.3   | 78.4    | 79.0       | 77.5        |
| sciq              | 93.9     | 94.5      | 93.9      | 93.7   | 93.8    | 95.5       | 96.7        |
| winogrande        | 70.5     | 69.4      | 68.9      | 69.9   | 67.9    | 73.5       | 69.8        |
| truthfulQA (MC2)  | 33.9     | 38.5      | 34.0      | 33.0   | 36.0    | 36.8       | 35.8        |
| MMLU (5 shot MC)  | 31.5     | 45.0      | 24.0      | 30.8   | 28.3    | 55.5       | 52.0        |
| GSM8k             | 10.0     | 12.0      | 4.0       | 4.5    | 8.5     | 25.0       | 29.0        |
| Full average      | 60.3     | 62.1      | 59.2      | 59.3   | 59.8    | 66.2       | 63.8        |

And for the 1B model:

| task       | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- |
| arc_challenge | 25     | 43.81             | 33.11     | 34.78                                  | 34.45   |
| arc_easy      | 25     | 63.68             | 50.18     | 53.16                                  | 58.07   |
| boolq         | 50     | 76.6              | 61.8      | 64.6                                   | 60.7    |
| copa          | 50     | 84                | 72        | 78                                     | 79      |
| hellaswag     | 25     | 68.2              | 44.7      | 58.7                                   | 62.5    |
| openbookqa    | 25     | 45.8              | 37.8      | 43.6                                   | 46.4    |
| piqa          | 50     | 74                | 69.1      | 71.1                                   | 73.7    |
| sciq          | 25     | 94.7              | 86        | 90.5                                   | 88.1    |
| winogrande    | 50     | 64.9              | 53.3      | 58.9                                   | 58.9    |
| Average       | 36.11  | 68.41             | 56.44     | 61.48                                  | 62.42   |

\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.

## Model Details

### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.
**This model uses the new 1.7 version with more data sources, better deduplication, and quality filtering**.
During the annealing phase we use a higher-quality subset of Dolma with a learning rate that decays linearly to 0.

### Staged training / annealing

In contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum: 
* In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high. 
* At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below.

Both stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top.
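
As a rough illustration (not the training code) of this two-stage schedule, the sketch below converts steps to tokens assuming ~4M-token batches; the exact conversion and stage boundaries are simplifications.
```python
import math

PEAK_LR, MIN_LR = 3e-4, 3e-5
WARMUP_STEPS = 2500
STAGE1_TOKENS = 2.0e12   # stage 1 cut off at 2T tokens
COSINE_TOKENS = 3.0e12   # cosine decay scheduled over 3T tokens
ANNEAL_TOKENS = 50e9     # stage 2: 50B tokens, linear decay to 0
TOKENS_PER_STEP = 4e6    # assumes ~4M-token batches (see architecture table)

def _cosine(tokens_seen: float) -> float:
    frac = min(tokens_seen / COSINE_TOKENS, 1.0)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * frac))

def lr_at(tokens_seen: float) -> float:
    step = tokens_seen / TOKENS_PER_STEP
    if step < WARMUP_STEPS:                  # linear warmup
        return PEAK_LR * step / WARMUP_STEPS
    if tokens_seen < STAGE1_TOKENS:          # stage 1: truncated cosine decay
        return _cosine(tokens_seen)
    # stage 2: linear anneal to 0 from wherever stage 1 left off
    remaining = max(0.0, 1.0 - (tokens_seen - STAGE1_TOKENS) / ANNEAL_TOKENS)
    return _cosine(STAGE1_TOKENS) * remaining
```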


### Architecture

OLMo 7B architecture with peer models for comparison.

|                        | **OLMo 7B**   | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
|------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
| d_model     | 4096              | 4096                | 4096               | 4544               | 4096             |
| num heads              | 32                | 32                  | 32                 | 71                 | 16               |
| num layers             | 32                | 32                  | 32                 | 32                 | 32               |
| MLP ratio              | ~8/3         | ~8/3           | ~8/3          | 4                  | 4                |
| LayerNorm type         | non-parametric LN | RMSNorm             | parametric LN      | parametric LN      | parametric LN    |
| pos embeddings         | RoPE              | RoPE                | RoPE               | RoPE               | RoPE             |
| attention variant      | full              | GQA                 | full               | MQA                | MQA              |
| biases                 | none              | none                | in LN only         | in LN only         | none             |
| block type             | sequential        | sequential          | sequential         | parallel           | parallel         |
| activation             | SwiGLU            | SwiGLU              | SwiGLU             | GeLU               | SwiGLU           |
| sequence length        | 2048              | 4096                | 2048               | 2048               | 2048             |
| batch size (instances) | 2160              | 1024                | 2048               | 2304               | 512              |
| batch size (tokens)    | ~4M          | ~4M            | ~4M           | ~4M           | ~1M         |
| weight tying           | no                | no                  | no                 | no                 | yes              |
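
For intuition, here is a minimal, illustrative PyTorch sketch of a single OLMo-style block matching the table above (non-parametric LayerNorm, bias-free linears, sequential attention-then-SwiGLU). It is not the training implementation; RoPE, KV caching, and QKV clipping are omitted for brevity.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OLMoLikeBlock(nn.Module):
    def __init__(self, d_model=4096, n_heads=32, mlp_hidden=11008):  # mlp_hidden ≈ 8/3 * d_model
        super().__init__()
        self.n_heads = n_heads
        # Non-parametric LayerNorm: no learnable scale or bias.
        self.attn_norm = nn.LayerNorm(d_model, elementwise_affine=False)
        self.mlp_norm = nn.LayerNorm(d_model, elementwise_affine=False)
        # Attention projections without biases ("biases: none").
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.attn_out = nn.Linear(d_model, d_model, bias=False)
        # SwiGLU MLP.
        self.gate = nn.Linear(d_model, mlp_hidden, bias=False)
        self.up = nn.Linear(d_model, mlp_hidden, bias=False)
        self.down = nn.Linear(mlp_hidden, d_model, bias=False)

    def forward(self, x):
        b, t, d = x.shape
        h = self.attn_norm(x)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.n_heads, -1).transpose(1, 2) for z in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # full causal attention
        attn = attn.transpose(1, 2).reshape(b, t, d)
        x = x + self.attn_out(attn)                              # sequential block: attention first...
        h = self.mlp_norm(x)
        x = x + self.down(F.silu(self.gate(h)) * self.up(h))     # ...then SwiGLU MLP
        return x
```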


### Hyperparameters 

AdamW optimizer parameters are shown below.

| Size | Peak LR    | Betas           | Epsilon     | Weight Decay |
|------|------------|-----------------|-------------|--------------|
| 1B   | 4.0E-4   | (0.9, 0.95)   | 1.0E-5    | 0.1          |
| 7B   | 3.0E-4   | (0.9, 0.99)   | 1.0E-5    | 0.1          |
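
Expressed as a plain PyTorch optimizer (illustrative only; the actual runs use the OLMo trainer), the 7B row above corresponds roughly to:
```python
import torch

# `model` is assumed to be a 7B OLMo instance.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3.0e-4,          # peak LR
    betas=(0.9, 0.99),
    eps=1.0e-5,
    weight_decay=0.1,
)
```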

Optimizer settings comparison with peer models.

|                       | **OLMo 7B**  | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
|-----------------------|------------------|---------------------|--------------------|--------------------|
| warmup steps          | 5000             | 2000                | 2000               | 1000               |
| peak LR               | 3.0E-04 | 3.0E-04    | 3.0E-04   | 6.0E-04  |
| minimum LR            | 3.0E-05 | 3.0E-05    | 3.0E-05   | 1.2E-05   |
| weight decay          | 0.1              | 0.1                 | 0.1                | 0.1                |
| beta1                 | 0.9              | 0.9                 | 0.9                | 0.99               |
| beta2                 | 0.95             | 0.95                | 0.95               | 0.999              |
| epsilon               | 1.0E-05 | 1.0E-05    | 1.0E-05   | 1.0E-05   |
| LR schedule           | linear           | cosine              | cosine             | cosine             |
| gradient clipping     | global 1.0       | global 1.0          | global 1.0         | global 1.0         |
| gradient reduce dtype | FP32             | FP32                | FP32               | BF16               |
| optimizer state dtype | FP32             | most likely FP32    | FP32               | FP32               |



## Environmental Impact

OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact is provided below. Further details are available in the paper.

|           | GPU Type   | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/kWh) | Carbon Emissions (tCO₂eq) |
|-----------|------------|-----------------------------|--------------------------------|---------------------------|
| OLMo 7B Twin  | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu))   |  135 MWh                     | 0*                             | 0*                        |
| OLMo 7B   | A100-40GB ([MosaicML](https://www.mosaicml.com)) |  104 MWh                     | 0.656                          | 75.05                     |

## Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.

Additionally, outputs from OLMo, as from any LLM, can contain factual errors and should be verified.


## Citation

**BibTeX:**

```
@article{Groeneveld2023OLMo,
  title={OLMo: Accelerating the Science of Language Models},
  author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
  journal={Preprint},
  year={2024}
}
```

**APA:**

Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.

## Model Card Contact


For errors in this model card, contact Nathan, `{nathanl} at allenai dot org`.