---
extra_gated_heading: You need to share contact information with Databricks to access this model
extra_gated_prompt: >-
  ### DBRX Terms of Use

  Use of DBRX is governed by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and the [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).
extra_gated_fields:
  First Name: text
  Last Name: text
  Organization: text
  By clicking 'Submit' below, I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with Databricks' Privacy Notice and I understand I can update my preferences at any time: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed, and shared in accordance with Databricks' [Privacy Notice](https://www.databricks.com/legal/privacynotice).
extra_gated_button_content: Submit
inference: false
license: other
license_name: databricks-open-model-license
license_link: https://www.databricks.com/legal/open-model-license
---

# DBRX Instruct

* DBRX Instruct is a mixture-of-experts (MoE) large language model trained from scratch by Databricks. It specializes in few-turn interactions.
* We are releasing both DBRX Instruct and DBRX Base, the pretrained base model which underlies it, under [an open license](https://www.databricks.com/legal/open-model-license).
* This is the repository for DBRX Instruct. DBRX Base can be found [here](https://huggingface.co/databricks/dbrx-base).
* For full details on the DBRX models, please read our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).

## Model Overview
DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction.
It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters, of which 36B are active on any input.
It was pre-trained on 12T tokens of text and code data.
Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2.
This provides 65x more possible combinations of experts, and we found that this improves model quality.
DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA).
It uses the GPT-4 tokenizer as provided in the [tiktoken](https://github.com/openai/tiktoken) repository.
We made these choices based on exhaustive evaluation and scaling experiments.
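The 65x figure follows directly from counting the possible expert subsets per token; a quick sanity check:

```python
from math import comb

# DBRX routes each token to 4 of 16 experts; Mixtral-8x7B and Grok-1 route to 2 of 8.
dbrx = comb(16, 4)     # 1820 possible expert combinations
coarse = comb(8, 2)    # 28 possible expert combinations
print(dbrx // coarse)  # 65 -> the "65x" quoted above
```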

DBRX was pretrained on 12T tokens of carefully curated data and a maximum context length of 32K tokens.
We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models.
This new dataset was developed using the full suite of Databricks tools, including Apache Spark™ and Databricks notebooks for data processing, and Unity Catalog for data management and governance.
We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality.

* **Inputs:** DBRX only accepts text-based inputs and supports a context length of up to 32768 tokens (see the token-counting sketch after this list).
* **Outputs:** DBRX only produces text-based outputs.
* **Model Architecture:** More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).
* **License:** [Databricks Open Model License](https://www.databricks.com/legal/open-model-license)
* **Acceptable Use Policy:** [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model)
* **Version:** 1.0
* **Owner:** Databricks, Inc.
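
As a rough way to budget prompt length against that 32768-token limit, note that the GPT-4 tokenizer mentioned above corresponds to tiktoken's `cl100k_base` encoding. This is only an approximation; the Hugging Face tokenizer in the Quickstart below handles DBRX's exact special tokens:

```python
import tiktoken

# The GPT-4 tokenizer from tiktoken; approximates DBRX's tokenizer for length estimates.
enc = tiktoken.get_encoding("cl100k_base")

MAX_CONTEXT = 32768  # DBRX's maximum context length
prompt = "What does it take to build a great LLM?"
n_tokens = len(enc.encode(prompt))
print(f"{n_tokens} tokens; {MAX_CONTEXT - n_tokens} left for the response")
```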

## Usage
There are several general ways to use the DBRX models:
* DBRX Base and DBRX Instruct are available for download on Hugging Face (see our Quickstart guide below). This is the HF repository for DBRX Instruct; DBRX Base can be found [here](https://huggingface.co/databricks/dbrx-base).
* The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx).
* DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments.
* For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md).

## Quickstart Guide
**NOTE: This is DBRX Instruct, which has been instruction finetuned.**
If you are looking for the base model, please use [DBRX Base](https://huggingface.co/databricks/dbrx-base).

Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages:

```bash
pip install transformers tiktoken
```

If you'd like to speed up download time, you can use the `hf_transfer` package as described by Hugging Face [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads).
```bash
pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```

You will need to request access to this repository to download the model. Once this is granted,
[obtain an access token](https://huggingface.co/docs/hub/en/security-tokens) with `read` permission, and supply the token below.
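
As an alternative to passing `token=` in every call below, you can authenticate once per machine; `hf_YOUR_TOKEN` is a placeholder for your own token:

```python
from huggingface_hub import login

login(token="hf_YOUR_TOKEN")  # or run `huggingface-cli login` in a shell
```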

### Run the model on a CPU:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True, token="hf_YOUR_TOKEN")
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True, token="hf_YOUR_TOKEN")

input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```

### Run the model on multiple GPUs:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True, token="hf_YOUR_TOKEN")
model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-instruct", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True, token="hf_YOUR_TOKEN")

input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
```
If your GPU system supports [FlashAttention2](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2), you can add `attn_implementation="flash_attention_2"` as a keyword to `AutoModelForCausalLM.from_pretrained()` to achieve faster inference.
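
For example, a sketch of the multi-GPU load above with FlashAttention2 enabled (assumes `pip install flash-attn` and a supported GPU):

```python
from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    token="hf_YOUR_TOKEN",
    attn_implementation="flash_attention_2",  # remove if flash-attn is not installed
)
```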

## Limitations and Ethical Considerations
### Training Dataset Limitations
The DBRX models were trained on 12T tokens of text, with a knowledge cutoff date of December 2023.

The training mix used for DBRX contains both natural-language and code examples. The vast majority of our training data is in the English language. We did not test DBRX for non-English proficiency. Therefore, DBRX should be considered a generalist model for text-based use in the English language.

DBRX does not have multimodal capabilities.

### Associated Risks and Recommendations
All foundation models are novel technologies that carry various risks, and may output information that is inaccurate, incomplete, biased, or offensive.
Users should exercise judgment and evaluate such output for accuracy and appropriateness for their desired use case before using or sharing it.
Databricks recommends [using retrieval augmented generation (RAG)](https://www.databricks.com/glossary/retrieval-augmented-generation-rag) in scenarios where accuracy and fidelity are important.
We also recommend that anyone using or fine-tuning either DBRX Base or DBRX Instruct perform additional testing around safety in the context of their particular application and domain.

## Intended Uses
### Intended Use Cases
The DBRX models are open, general-purpose LLMs intended and licensed for both commercial and research applications.
They can be further fine-tuned for various domain-specific natural language and coding tasks.
DBRX Instruct can be used as an off-the-shelf model for few-turn question answering related to general English-language and coding tasks.

Please review the Associated Risks section above, as well as the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model), for further information about permissible uses of DBRX Base and its derivatives.

### Out-of-Scope Use Cases
DBRX models are not intended to be used out-of-the-box in non-English languages and do not support native code execution or other forms of function-calling.
DBRX models should not be used in any manner that violates applicable laws or regulations or in any other way that is prohibited by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model).

## Training Stack
MoE models are complicated to train, and the training of DBRX Base and DBRX Instruct was heavily supported by Databricks' infrastructure for data processing and large-scale LLM training (e.g., [Composer](https://github.com/mosaicml/composer), [Streaming](https://github.com/mosaicml/streaming), [Megablocks](https://github.com/stanford-futuredata/megablocks), and [LLM Foundry](https://github.com/mosaicml/llm-foundry)).

Composer is our core library for large-scale training.
It provides an optimized training loop, easy [checkpointing](https://docs.mosaicml.com/projects/composer/en/latest/trainer/checkpointing.html) and [logging](https://docs.mosaicml.com/projects/composer/en/latest/trainer/logging.html#wood-logging),
[FSDP](https://pytorch.org/docs/stable/fsdp.html)-based [model sharding](https://docs.mosaicml.com/projects/composer/en/latest/notes/distributed_training.html#fullyshardeddataparallel-fsdp),
convenient [abstractions](https://docs.mosaicml.com/projects/composer/en/latest/trainer/time.html), extreme customizability via [callbacks](https://docs.mosaicml.com/projects/composer/en/latest/trainer/callbacks.html), and more.
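
As a minimal illustration of Composer's training loop (a toy classifier, not DBRX's actual configuration; the model and data here are invented for the example):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from composer import Trainer
from composer.models import ComposerClassifier

# Toy model and random data, purely for illustration.
model = ComposerClassifier(torch.nn.Linear(8, 2), num_classes=2)
dataset = TensorDataset(torch.randn(64, 8), torch.randint(0, 2, (64,)))

trainer = Trainer(
    model=model,
    train_dataloader=DataLoader(dataset, batch_size=16),
    max_duration="1ep",          # Composer's Time abstraction: epochs, batches, tokens, ...
    save_folder="checkpoints/",  # built-in checkpointing
)
trainer.fit()
```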

Streaming enables fast, low-cost, and scalable training on large datasets from cloud storage. It handles a variety of challenges around deterministic resumption as node counts change, avoiding redundant downloads across devices, high-quality shuffling at scale, sample-level random access, and speed.
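
A minimal sketch of what that looks like in code (the bucket path is hypothetical):

```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Stream shards from cloud storage while caching them locally; Streaming handles
# deterministic resumption and high-quality shuffling at scale.
dataset = StreamingDataset(
    remote="s3://my-bucket/my-tokenized-dataset",  # hypothetical path
    local="/tmp/streaming_cache",
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=16)
```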

Megablocks is a lightweight library for MoE training. Crucially, it supports "dropless MoE," which avoids inefficient padding and is intended to provide deterministic outputs for a given sequence no matter what other sequences are in the batch.

LLM Foundry ties all of these libraries together to create a simple LLM pretraining, fine-tuning, and inference experience.

DBRX was trained using proprietary optimized versions of the above open source libraries, along with our [LLM training platform](https://www.databricks.com/product/machine-learning/mosaic-ai-training).

## Evaluation
We find that DBRX outperforms established open-source and open-weight base models on the [Databricks Model Gauntlet](https://www.databricks.com/blog/llm-evaluation-for-icl), the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and HumanEval.
The Databricks Model Gauntlet measures performance on more than 30 tasks across six categories: world knowledge, common sense reasoning, language understanding, reading comprehension, symbolic problem solving, and programming.
The Hugging Face Open LLM Leaderboard measures the average of ARC-Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8k.
HumanEval measures coding ability.

Full evaluation details can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).

## Acknowledgements
The DBRX models were made possible thanks in large part to the open-source community, especially:
* The [MegaBlocks](https://arxiv.org/abs/2211.15841) library, which established a foundation for our MoE implementation.
* [PyTorch FSDP](https://arxiv.org/abs/2304.11277), which we built on for distributed training.