rskuzma committed
Commit e2b82a1
1 Parent(s): 2f0d2d8
Files changed (1): README.md +217 -0

README.md ADDED
@@ -0,0 +1,217 @@
---
language:
- en
inference: false
thumbnail: https://www.cerebras.net/wp-content/uploads/2022/05/Cerebras-Logo-Black.png
tags:
- pytorch
- causal-lm
- Cerebras
- BTLM
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
pipeline_tag: text-generation
---

# BTLM-3B-8k-base

Bittensor Language Model (BTLM-3B-8k-base) is a 3 billion parameter language model with an 8k context length trained on 627B tokens of [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). BTLM-3B-8k-base sets a new standard for 3B parameter models, outperforming models trained on hundreds of billions more tokens and achieving performance comparable to open 7B parameter models. BTLM-3B-8k-base can also be quantized to 4-bit to fit in devices with as little as 3GB of memory. The model is made available with an Apache 2.0 license for commercial use.

BTLM was trained by [Cerebras](https://www.cerebras.net/) in partnership with [Opentensor](https://opentensor.ai/) on the newly unveiled [Condor Galaxy 1 (CG-1) supercomputer](https://www.cerebras.net/blog/introducing-condor-galaxy-1-a-4-exaflop-supercomputer-for-generative-ai/), the first public deliverable of the G42-Cerebras strategic partnership.

BTLM-3B-8k was trained with a similar architecture to [CerebrasGPT](https://arxiv.org/abs/2304.03208), with the addition of [SwiGLU](https://arxiv.org/abs/2002.05202) nonlinearity, [ALiBi](https://arxiv.org/abs/2108.12409) position embeddings, and [maximal update parameterization (muP)](https://arxiv.org/abs/2203.03466). The model was trained for 1 epoch of SlimPajama-627B. 75% of training was performed with a 2k sequence length; the final 25% of training was performed at an 8k sequence length to enable long sequence applications.

## BTLM-3B-8k Highlights

BTLM-3B-8k-base:
- **Licensed for commercial use** (Apache 2.0).
- **[State of the art 3B parameter model](#performance-vs-3b-models)**.
- **Provides 7B model performance in a 3B model** via performance enhancements from [ALiBi](https://arxiv.org/abs/2108.12409), [SwiGLU](https://arxiv.org/abs/2002.05202), [maximal update parameterization (muP)](https://arxiv.org/abs/2203.03466), and the extensively deduplicated and cleaned [SlimPajama-627B dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
- **[Fits in devices with as little as 3GB of memory](#memory-requirements) when quantized to 4-bit**.
- **One of the few 3B models that supports an 8k sequence length**, thanks to ALiBi.
- **Requires 71% fewer training FLOPs and has a 58% smaller memory footprint** for inference than comparable 7B models.

## Usage
*Note: Transformers does not support muP for all models, so BTLM-3B-8k-base requires a custom model class. As a result, users must either (1) enable `trust_remote_code=True` when loading the model, or (2) acknowledge the warning about code execution upon loading the model.*

#### With generate():
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base")
model = AutoModelForCausalLM.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True)

# Set the prompt for generating text
prompt = "Albert Einstein was known for "

# Tokenize the prompt and convert to PyTorch tensors
inputs = tokenizer(prompt, return_tensors="pt")

# Generate text using the model
outputs = model.generate(
    **inputs,
    num_beams=5,
    max_new_tokens=50,
    early_stopping=True,
    no_repeat_ngram_size=2
)

# Convert the generated token IDs back to text
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)

# Print the generated text
print(generated_text[0])
```

#### With pipeline:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import pipeline

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base")
model = AutoModelForCausalLM.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True)

# Set the prompt for text generation
prompt = """Isaac Newton was a """

# Create a text generation pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate text using the pipeline
generated_text = pipe(
    prompt,
    max_length=50,
    do_sample=False,
    no_repeat_ngram_size=2
)[0]

# Print the generated text
print(generated_text['generated_text'])
```

## Evaluations and Comparisons to Other Models

### Memory Requirements
![figure_1_image](./figure_1_memory_footprint.png)
Figure 1: Memory requirements of different model sizes and quantization schemes.
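
The card does not prescribe a specific 4-bit quantization recipe. As a rough sketch of one way to load the model in 4-bit with Hugging Face Transformers, the example below uses `bitsandbytes` via `BitsAndBytesConfig`; it assumes `bitsandbytes` is installed and a CUDA GPU is available, and the quantization settings shown are illustrative rather than recommended values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit (NF4) quantization settings, not an official recipe
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base")
model = AutoModelForCausalLM.from_pretrained(
    "cerebras/btlm-3b-8k-base",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Quick generation check with the quantized model
inputs = tokenizer("Albert Einstein was known for ", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```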

### Quality, Training Cost, Memory Footprint, Inference Speed
![figure_2_image](./figure_2_half_the_size_twice_the_speed.png)
Figure 2: Comparisons of quality, memory footprint, and inference cost between BTLM-3B-8K and 7B model families.

### Performance vs 3B models
![table_1_image](./table_1_downstream_performance_3b.png)
Table 1: Performance at 3B model size. Detailed downstream task comparisons. MMLU task performance is reported using 5-shot; all other tasks are 0-shot.

![figure_3_image](./figure_3_performance_vs_3b_models.png)
Figure 3: Performance at 3B model size.

### Performance vs 7B models
![table_2_image](./table_2_downstream_performance_7b.png)
Table 2: Performance at 7B model size. Detailed downstream task comparisons. MMLU task performance is reported using 5-shot; all other tasks are 0-shot.

![figure_4_image](./figure_4_performance_vs_7b_models.jpg)
Figure 4: Performance at 7B model size.

## Model Details
- Developed by: [Cerebras Systems](https://www.cerebras.net/) and [Opentensor](https://opentensor.ai/) with generous support from [G42 Cloud](https://www.g42cloud.com/) and [IIAI](https://www.inceptioniai.org/en/)
- License: Apache 2.0
- Model type: Decoder-only Language Model
- Architecture: GPT-2 style architecture with SwiGLU, ALiBi, and muP
- Dataset: SlimPajama-627B
- Tokenizer: Byte Pair Encoding
- Vocabulary Size: 50257
- Sequence Length: 8192
- Optimizer: AdamW
- Positional Encoding: ALiBi
- Language: English
- Learn more: <TODO: link to blog>
- Paper: Coming soon

## To continue training with PyTorch and Maximal Update Parameterization

```python
from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True)

# Get the parameter groups for the muP optimizer
param_groups = model.get_mup_param_groups(lr=1e-3, weight_decay=0.1)

# Set up the optimizer using AdamW with muP parameters
optimizer = torch.optim.AdamW(
    param_groups,
    betas=(0.9, 0.95),
    eps=1e-8
)
```

Ensure the following muP parameters are passed in your config; otherwise your model will default to standard parameterization (a sketch of setting these fields follows the list):
- `mup_width_scale: <float>`
- `mup_embeddings_scale: <float>`
- `mup_output_alpha: <float>`
- `mup_scale_qk_dot_by_d: true`
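
As a minimal sketch of setting these fields, assuming the BTLM config exposes them as attributes (the numeric values below are placeholders to be replaced with the scales appropriate for your setup):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Load the existing config and override the muP fields.
# Placeholder values only; substitute the scales for your training setup.
config = AutoConfig.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True)
config.mup_width_scale = 0.1
config.mup_embeddings_scale = 10.0
config.mup_output_alpha = 1.0
config.mup_scale_qk_dot_by_d = True

# Load the model with the muP-aware config before continuing training
model = AutoModelForCausalLM.from_pretrained(
    "cerebras/btlm-3b-8k-base", config=config, trust_remote_code=True
)
```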

## Uses and Limitations

### Intended Use
The primary intended use is to further research into large language models. BTLM-3B-8k-base can be used as a foundation model for NLP applications, ethics, and alignment research. We release these models with a fully permissive Apache license for the community to use freely.

You may fine-tune and adapt the BTLM-3B-8k-base model via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using BTLM-3B-8k-base in production downstream applications.

## Long Sequence Lengths
To enable long sequence applications, we use ALiBi position embeddings and trained on 470B tokens at a context length of 2,048, followed by 157B tokens at a context length of 8,192. To assess BTLM's long sequence capability, we evaluate it on the SlimPajama test set with a 32,768 context length and plot the loss at each token position. Although ALiBi allows extrapolation in theory, training at a 2,048 context length alone does not extrapolate well in practice. Thankfully, variable sequence length training substantially improves extrapolation. BTLM-3B extrapolates well up to 10k context length, but performance degrades slightly beyond this.

![figure_5_image](./figure_5_xentropy_with_sequence_lengths.png)
Figure 5: BTLM-3B model's cross-entropy evaluation on the SlimPajama test set. Inference performed at the extrapolated sequence length of 32,768 tokens.
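
The evaluation script itself is not included in this card. As a rough illustration of how a per-position loss curve like Figure 5 could be computed with the Hugging Face model (assuming enough memory for a 32k-token forward pass, and substituting your own long document for the placeholder text):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base")
model = AutoModelForCausalLM.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True)
model.eval()

# Placeholder: substitute a long document (e.g. from a held-out test set).
long_text = "..."

# Tokenize and truncate to the extrapolated evaluation length.
input_ids = tokenizer(long_text, return_tensors="pt").input_ids[:, :32768]

with torch.no_grad():
    logits = model(input_ids).logits

# Position i predicts token i + 1, so shift logits and labels by one.
shift_logits = logits[:, :-1, :]
shift_labels = input_ids[:, 1:]
per_position_loss = torch.nn.functional.cross_entropy(
    shift_logits.transpose(1, 2),  # (batch, vocab, seq_len - 1)
    shift_labels,                  # (batch, seq_len - 1)
    reduction="none",
)
print(per_position_loss.shape)  # cross-entropy at each token position
```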

### Out of Scope Use
BTLM-3B-8k-base was trained on SlimPajama, which is primarily English-language data, and is not recommended for machine translation tasks. BTLM-3B-8k-base has not been tuned for instruction-following or chat-based use cases.

### Limitations
Since BTLM-3B-8k-base is not instruction fine-tuned, it performs best with “completion-style” prompts rather than “instruction-style” prompts.

For example, an instruction-style prompt like ``"What does a manometer measure?"`` produces the following text: ``“What does a manometer measure?\nWhat is the difference between a thermometer and hygaugeometer, thermoetermometer\nHow do\nDifference between gauge and barometer”``. This is because the model was trained to complete the text it has seen rather than follow instructions. Instead, if we rephrase the instruction as a text completion problem, we can obtain useful outputs, as in the examples below.
179
+
180
+ ```
181
+ Prompt: “A manometer is used to measure"
182
+ Generation: “the pressure of a fluid in a liquid or gas. A manometric pressure. It is a pressure gauge. The pressure or gauge pressure is the volume of the.”
183
+ ```
184
+
185
+ ```
186
+ Prompt: “Robert Oppenheimer was a”
187
+ Generation: “theoretical physicist and a physicist who was the first to develop the atomic bomb. who was born in the the first the the bomb and the Manhattan Project the nuclear the project the United States.”
188
+ ```
189
+
190
+ ```
191
+ Prompt: “Albert Einstein was known for”
192
+ Generation: “his theory of relativity, which was a theory that the theory which explained the speed of gravity and the universe. of gravity. The theory. He was the gravity of the relativity of space-time and time and space and theory of.”
193
+ ```
194
+
195
+ ```
196
+ Prompt: “Isaac Newton was a”
197
+ Generation: “English physicist, mathematician, astronomer, and natural philosopher, theologian, alchemist, physicist and astronomer and mathematician. He was born in 1642, who was the greatest scientist, he was also, scientist and philosopher.”
198
+ ```
199
+
200
+ ```
201
+ Prompt: "After it leaves the right ventricle, the blood is pumped"
202
+ Generation: “into the left atrium, it goes through the pulmonary artery to the lungs. The right side of the lung.”
203
+ ```
204
+
205
+ ```
206
+ Prompt: “Peter lost his wallet after attending a local concert. To help him, Bob gave David 10 dollars. David then passed it on to Peter. Now the total amount of money Peter has is”
207
+ Generation: “$10 dollars.”
208
+ ```
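
As a quick illustration of completion-style prompting in code, reusing the pipeline setup from the Usage section (the specific prompt rephrasing here is just an example):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base")
model = AutoModelForCausalLM.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Instead of the instruction "What does a manometer measure?",
# phrase the request as a text completion problem.
completion_prompt = "A manometer is used to measure"

print(pipe(completion_prompt, max_new_tokens=30, do_sample=False, no_repeat_ngram_size=2)[0]["generated_text"])
```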

## Risk, Bias, Ethical Considerations
- **Human life:** The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
- **Risks and harms:** There may be distributional bias in the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models, such as amplifying stereotypes, memorizing training data, or revealing private or secure information.

## Acknowledgements
We are thankful to all Cerebras engineers who made this work possible.

We would like to acknowledge the generous support of G42 Cloud and the Inception Institute of Artificial Intelligence for providing compute time on Condor Galaxy 1.