---
license: apache-2.0
datasets:
- allenai/dolma
- allenai/tulu-v2-sft-mixture
language:
- en
---


<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:auto; margin-right:auto; display:block"/>


# Model Card for OLMo 7B SFT

<!-- Provide a quick summary of what the model is/does. -->

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo base models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
The adapted versions are trained on the [Tulu SFT mixture](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) and, for the Instruct version, a [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned).
We release all code, checkpoints, logs (coming soon), and details involved in training these models.

OLMo 7B SFT and OLMo 7B Instruct are two adapted versions of these models trained for better question answering.
They demonstrate the performance gains OLMo base models can achieve with existing fine-tuning techniques.

## Model Details

We release two adapted model versions:
| Model | Training Method(s) | Datasets | Context Length |
|------|--------|---------|--|
| [OLMo 7B SFT](https://huggingface.co/allenai/OLMo-7B-SFT) | SFT | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) | 2048 |
| [OLMo 7B Instruct](https://huggingface.co/allenai/OLMo-7B-Instruct) | SFT + DPO | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) + [Ultrafeedback Cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) | 2048 |

The base models related to this adapted model are the following:
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion | 16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 |

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023, based on the Dolma dataset version.

### Model Sources

<!-- Provide the basic links for the model. -->

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
  - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
  - Evaluation code: https://github.com/allenai/OLMo-Eval
  - Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-7B/reports/OLMo-7B--Vmlldzo2NzQyMzk5
<!-- - **Press release:** TODO -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Inference
Quickly get inference running with the following required installation:
```bash
pip install ai2-olmo
```
Now, proceed as usual with HuggingFace:
```python
import hf_olmo

from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-SFT")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-SFT")
chat = [
    {"role": "user", "content": "What is language modeling?"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
# optional: move the model (and, below, the inputs) to GPU
# inputs = inputs.to('cuda')
# olmo = olmo.to('cuda')
response = olmo.generate(input_ids=inputs.to(olmo.device), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> '<|user|>\nWhat is language modeling?\n<|assistant|>\nLanguage modeling is a type of natural language processing (NLP) task or machine learning task that...'
```
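
The decoded output above shows the chat format the template produces. For illustration only, the equivalent hand-built prompt would look like the snippet below; note that `apply_chat_template` may also insert special tokens, so prefer it in real use:
```python
# Hand-built prompt matching the format visible in the decoded output above.
prompt = "<|user|>\nWhat is language modeling?\n<|assistant|>\n"
```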

Alternatively, with the pipeline abstraction:
```python
import hf_olmo

from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B-SFT")
print(olmo_pipe("What is language modeling?"))
>> '[{'generated_text': 'What is language modeling?\nLanguage modeling is a type of natural language processing (NLP) task...'}]'
```
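
The pipeline call above passes a raw string; for chat-style prompting you can apply the chat template through the pipeline's tokenizer first. A minimal sketch, continuing the example above and assuming the same template as before:
```python
# Build a chat-formatted prompt, then hand it to the existing pipeline.
chat = [{"role": "user", "content": "What is language modeling?"}]
prompt = olmo_pipe.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(olmo_pipe(prompt, max_new_tokens=100)[0]["generated_text"])
```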

Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-SFT", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to data types and CUDA placement, so we recommend moving the input IDs to the GPU explicitly, e.g. `inputs.to('cuda')`, to avoid potential issues.
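
Putting that together, here is a minimal sketch of 8-bit inference (an illustration, not a canonical recipe; it assumes `bitsandbytes` is installed and a CUDA device is available):
```python
import hf_olmo

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in 8-bit; bitsandbytes places the quantized weights on the GPU.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B-SFT", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-SFT")

chat = [{"role": "user", "content": "What is language modeling?"}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
# Move the input IDs to the GPU explicitly, as recommended above.
response = olmo.generate(input_ids=inputs.to('cuda'), max_new_tokens=100)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```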

Note, you may see the following error if `ai2-olmo` is not installed correctly; it is caused by an internal Python package-name check. We'll update the code soon to make this error clearer.
```bash
    raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo`
```

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

Core model results for the 7B adapted models are found below.

| Model | MMLU 0-shot ↑ | AlpacaEval %win ↑ | ToxiGen % Toxic ↓ | TruthfulQA %Info+True ↑ |
|-----------------------|---------------|--------------------|--------------------|-------------------------|
| **OLMo (base)** | 28.3 | - | 81.4 | 31.6 |
| MPT Chat | 33.8 | 46.8 | 0.1 | 42.7 |
| Falcon Instruct | 25.2 | 14.0 | 70.7 | 27.2 |
| RPJ-INCITE Chat | 27.0 | 38.0 | 46.4 | 53.0 |
| Llama-2-Chat 7B | 46.8 | 87.3 | 0.0 | 26.3 |
| AI2 Tulu 2 7B | 50.4 | 73.9 | 7.0 | 51.7 |
| AI2 Tulu 2 7B DPO | 50.7 | 85.1 | 0.5 | - * |
| **[OLMo 7B SFT](https://huggingface.co/allenai/OLMo-7B-SFT)** | 47.3 | 57.0 | 14.4 | 41.2 |
| **[OLMo 7B Instruct](https://huggingface.co/allenai/OLMo-7B-Instruct)** | 46.2 | 69.3 | 1.7 | 52.0 |

*Following Ivison et al. 2023, we do not report Tulu 2 TruthfulQA scores due to test set contamination.

## Model Details

### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma), [Tulu 2](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), and [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) documentation.

### Architecture

The adapted models share the architecture of the [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) base model; see the base model card and the [paper](https://arxiv.org/abs/2402.00838) for architectural details.

### Hyperparameters

The hyperparameters for the two phases of training are below.

| | Learning Rate | Beta | Epochs | Warmup | Weight Decay | Gradient Clipping | Maximum Sequence Length |
|-------------------------|---------------|------|--------|------------------------------------------------------------------------|--------------|-------------------|-------------------------|
| **SFT** | 2 × 10^-6 | N/A | 3 | Linear warmup for the first 3% of total training time, then cooldown to 0 | 0 | 0 | 2048 |
| **DPO** | 5 × 10^-7 | 0.1 | 3 | Linear warmup for the first 10% of total training time, then cooldown to 0 | 0 | 0 | 2048 |

Compared to Tulu 2, the DPO hyperparameters are identical; SFT uses a lower learning rate, 3 epochs instead of 2, and a 2048-token maximum sequence length instead of 8192.
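
For concreteness, here is a minimal sketch of the warmup-then-cooldown schedule described in the table (the function name and structure are illustrative, not the actual training code):
```python
def lr_at(step: int, total_steps: int, peak_lr: float, warmup_frac: float) -> float:
    """Linear warmup for the first `warmup_frac` of training, then linear cooldown to 0."""
    warmup_steps = int(warmup_frac * total_steps)
    if step < warmup_steps:
        return peak_lr * step / max(warmup_steps, 1)
    # Linear decay from peak_lr down to 0 over the remaining steps.
    return peak_lr * (total_steps - step) / max(total_steps - warmup_steps, 1)

# Example: the SFT peak LR of 2e-6 with a 3% warmup fraction.
print(lr_at(step=500, total_steps=10_000, peak_lr=2e-6, warmup_frac=0.03))
```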

## Bias, Risks, and Limitations

The adapted OLMo models do not include a specific safety filter or safety training data.
While our model scores well relative to its peers on ToxiGen, like any base or fine-tuned language model without safety filtering, it is relatively easy to prompt these models into generating harmful or otherwise sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend that users consider the risks of applications of this technology.

Additionally, as with any LLM, OLMo may generate statements that are not factually accurate, so its outputs should be verified.

## Citation

**BibTeX:**

```
@article{Groeneveld2023OLMo,
  title={OLMo: Accelerating the Science of Language Models},
  author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
  journal={Preprint},
  year={2024}
}
```

**APA:**

Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.

## Model Card Contact

For errors in this model card, contact Nathan or Jacob, `{nathanl, jacobm} at allenai dot org`.