---
language:
- en
license: apache-2.0
datasets:
- allenai/dolma
---

# OLMo 7B 0424 - llamafile

- Model creator: [Allen Institute for AI](https://huggingface.co/allenai/)
- Original model: [allenai/OLMo-7B-0424-hf](https://huggingface.co/allenai/OLMo-7B-0424-hf)

The model is packaged into executable weights, which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile). This makes it
easy to use the model on Linux, macOS, Windows, FreeBSD, OpenBSD, and
NetBSD for AMD64 and ARM64.

## Quickstart

Running the following on a desktop OS will launch a tab in your web
browser with a chatbot interface.

```
wget https://huggingface.co/Mozilla/OLMo-7B-0424-llamafile/resolve/main/OLMo-7B-0424.Q6_K.llamafile
chmod +x OLMo-7B-0424.Q6_K.llamafile
./OLMo-7B-0424.Q6_K.llamafile
```

You then need to fill out the prompt / history template (see below).

This model has a max context window size of 8k tokens. By default, a
context window size of 512 tokens is used. You may increase this to the
maximum by passing the `-c 0` flag.

On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use
the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card
driver needs to be installed. If the prebuilt DSOs fail, the CUDA or
ROCm SDKs may need to be installed, in which case llamafile builds a
native module just for your system.
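
For example, `./OLMo-7B-0424.Q6_K.llamafile -c 0 -ngl 999` should start
the chatbot with the full context window and the model offloaded to the
GPU, assuming enough VRAM is available.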

For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).

Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)
of the README.


---

<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>


# Model Card for OLMo 7B

<!-- Provide a quick summary of what the model is/does. -->

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
This model has been converted from [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) to the
Hugging Face Transformers format.

## Model Details

The core models released in this batch are the following:

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B-hf) | 3 Trillion | 16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B-hf) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T-hf) | 2 Trillion | 32 | 4096 | 32 | 2048 |

We are releasing many checkpoints for these models, one for every 1000 training steps. These have not
yet been converted into Hugging Face Transformers format, but are available in [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B).

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023 based on Dolma dataset version.


### Model Sources

<!-- Provide the basic links for the model. -->

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
    - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
    - Evaluation code: https://github.com/allenai/OLMo-Eval
    - Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-7B/reports/OLMo-7B--Vmlldzo2NzQyMzk5
<!-- - **Press release:** TODO -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Inference

Quickly get inference running with the following:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")
message = ["Language modeling is"]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move the inputs and model to the GPU (CUDA)
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```
Alternatively, with the pipeline abstraction:

```python
from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B-hf")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```

Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to data types and CUDA placement, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
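
Putting these notes together, a minimal sketch of the quantized path might look like the following (it assumes a CUDA GPU and the `bitsandbytes` package are available, and is illustrative rather than the only way to do it):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit quantized load as described above (requires bitsandbytes).
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B-hf", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")

inputs = tokenizer(["Language modeling is"], return_tensors="pt", return_token_type_ids=False)
# Pass the input ids to the GPU explicitly, as recommended above.
response = olmo.generate(
    input_ids=inputs.input_ids.to("cuda"),
    max_new_tokens=100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```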

### Fine-tuning

This model does not directly support our fine-tuning processes. Model fine-tuning can be done
from the final checkpoint or many intermediate checkpoints of
[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B).

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

Core model results for the 7B model are found below.

| | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) |
| --------------------------------- | -------- | ---------- | --------- | ------ | ------- |
| arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 |
| arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 |
| boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 |
| copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 |
| hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 |
| openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 |
| piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 |
| sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 |
| winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 |
| **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 |
| truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 |
| MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 |
| GSM8k (mixed eval.) | 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) |
| **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 |
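
The averages are unweighted means of the task scores above; for example, OLMo 7B's core tasks average is (48.5 + 65.4 + 73.4 + 90 + 76.4 + 50.2 + 78.4 + 93.8 + 67.9) / 9 ≈ 71.6.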

And for the 1B model:

| task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) |
| ------------- | ------ | ----------------- | --------- | -------------- | ------- |
| arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 |
| arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 |
| boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 |
| copa | 50 | 84 | 72 | 78 | 79 |
| hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 |
| openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 |
| piqa | 50 | 74 | 69.1 | 71.1 | 73.7 |
| sciq | 25 | 94.7 | 86 | 90.5 | 88.1 |
| winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 |
| Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 |

\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.

## Model Details

### Data

For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.

### Architecture

OLMo 7B architecture with peer models for comparison.

| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
|------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
| d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
| num heads | 32 | 32 | 32 | 71 | 16 |
| num layers | 32 | 32 | 32 | 32 | 32 |
| MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
| LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
| pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
| attention variant | full | GQA | full | MQA | MQA |
| biases | none | none | in LN only | in LN only | none |
| block type | sequential | sequential | sequential | parallel | parallel |
| activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
| sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
| batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
| batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
| weight tying | no | no | no | no | yes |
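
To make two of the less common entries above concrete, here is a small PyTorch sketch (a simplified illustration, not the actual OLMo implementation) of a non-parametric LayerNorm and the SwiGLU activation:

```python
import torch
import torch.nn.functional as F

def non_parametric_layer_norm(x: torch.Tensor) -> torch.Tensor:
    # LayerNorm with no learnable scale or bias parameters.
    return F.layer_norm(x, x.shape[-1:])

def swiglu(x: torch.Tensor, w_gate: torch.Tensor, w_up: torch.Tensor) -> torch.Tensor:
    # SwiGLU: a SiLU-gated linear unit; the gate and up projections are combined elementwise.
    return F.silu(x @ w_gate) * (x @ w_up)

# Example with OLMo-like dimensions: d_model = 4096,
# MLP hidden size of roughly 8/3 x d_model per the ratio above (illustrative value).
x = torch.randn(2, 16, 4096)
w_gate = torch.randn(4096, 11008)
w_up = torch.randn(4096, 11008)
h = swiglu(non_parametric_layer_norm(x), w_gate, w_up)
print(h.shape)  # torch.Size([2, 16, 11008])
```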

### Hyperparameters

AdamW optimizer parameters are shown below.

| Size | Peak LR | Betas | Epsilon | Weight Decay |
|------|------------|-----------------|-------------|--------------|
| 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
| 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 |

Optimizer settings comparison with peer models.

| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
|-----------------------|------------------|---------------------|--------------------|--------------------|
| warmup steps | 5000 | 2000 | 2000 | 1000 |
| peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
| minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
| weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
| beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
| beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
| epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
| LR schedule | linear | cosine | cosine | cosine |
| gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
| gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
| optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |
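
As a rough illustration of the OLMo 7B column in the comparison table above, the optimizer and schedule could be set up in PyTorch along these lines (the model and the total step count here are placeholders, not the actual training setup):

```python
import torch

model = torch.nn.Linear(4096, 4096)  # placeholder module standing in for the real model

# AdamW with the OLMo 7B settings from the table above.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=3.0e-4, betas=(0.9, 0.95), eps=1.0e-5, weight_decay=0.1
)

warmup_steps = 5_000
total_steps = 500_000  # placeholder; not the real number of training steps
peak_lr, min_lr = 3.0e-4, 3.0e-5

def lr_lambda(step: int) -> float:
    # Linear warmup to the peak LR, then linear decay toward the minimum LR.
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max(min_lr / peak_lr, 1.0 - progress * (1.0 - min_lr / peak_lr))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Inside the training loop, gradients are clipped to a global norm of 1.0:
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
```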


## Environmental Impact

OLMo 7B variants were trained either on MI250X GPUs at the LUMI supercomputer or on A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact is given below. Further details are available in the paper.

| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/kWh) | Carbon Emissions (tCO₂eq) |
|-----------|------------|-----------------------------|--------------------------------|---------------------------|
| OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* |
| OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 |

## Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.

In addition, many statements produced by OLMo, like those of any LLM, may not be factually accurate, so outputs should be verified.


## Citation

**BibTeX:**

```
@article{Groeneveld2023OLMo,
  title={OLMo: Accelerating the Science of Language Models},
  author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
  journal={Preprint},
  year={2024}
}
```

**APA:**

Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.

## Model Card Contact

For errors in this model card, contact Nathan, Akshita or Shane, `{nathanl, akshitab, shanea} at allenai dot org`.