Updating model files
README.md CHANGED
@@ -13,6 +13,17 @@ datasets:
- allenai/s2orc
inference: false
---
+<div style="width: 100%;">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+</div>
+<div style="display: flex; justify-content: space-between; width: 100%;">
+<div style="display: flex; flex-direction: column; align-items: flex-start;">
+<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+</div>
+<div style="display: flex; flex-direction: column; align-items: flex-end;">
+<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
+</div>
+</div>

# MPT-7B GGML

@@ -64,17 +75,28 @@ bin/mpt -m /path/to/mpt-7b.ggmlv3.q4_0.bin -t 8 -n 512 -p "Write a story about l

Please see the ggml repo for other build options.

+## Want to support my work?
+
+I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
+
+So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.
+
+Donaters will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
+
+* Patreon: coming soon! (just awaiting approval)
+* Ko-Fi: https://ko-fi.com/TheBlokeAI
+* Discord: https://discord.gg/UBgz4VXf
# Original model card: MPT-7B

MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).

MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.

These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).

This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.

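Because ALiBi replaces positional embeddings, the context length is not hard-limited by pretraining and can be raised at load time. The following is a minimal sketch, not an excerpt from this README; it assumes the `mosaicml/mpt-7b` Hub id and that the custom MPT config exposes a `max_seq_len` field as in llm-foundry.

```python
import transformers

name = "mosaicml/mpt-7b"  # assumed Hub id of the original (non-GGML) checkpoint

# Raise the maximum sequence length beyond the 2048 tokens used in pretraining,
# relying on ALiBi to extrapolate to longer contexts.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096  # assumed field name from the MPT config in llm-foundry

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True,
)
```
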
@@ -99,7 +121,7 @@ We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in ou
* License: Apache 2.0

* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)

@@ -135,7 +157,7 @@ model = transformers.AutoModelForCausalLM.from_pretrained(
  trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

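For reference, a minimal end-to-end sketch of loading and generating with the original checkpoint. It is not part of the original card; the `mosaicml/mpt-7b` Hub id is assumed, and the tokenizer name follows the card's later statement that the EleutherAI/gpt-neox-20b tokenizer was used.

```python
import transformers

name = "mosaicml/mpt-7b"  # assumed Hub id of the original checkpoint

# trust_remote_code=True is required because MPT ships its own model code.
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)

inputs = tokenizer("MosaicML is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
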
@@ -203,7 +225,7 @@ The model has been modified from a standard transformer in the following ways:

### Streaming Datasets

Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.

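For illustration, a minimal sketch of reading such a streamed dataset. It assumes the `mosaicml-streaming` package API; the bucket and cache paths are hypothetical.

```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset  # pip install mosaicml-streaming

# Shards live in object storage and are cached locally as they are read,
# so training can start without downloading the full dataset first.
dataset = StreamingDataset(
    remote="s3://my-bucket/mpt-pretraining-shards",  # hypothetical location
    local="/tmp/streaming-cache",
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=8)
for sample in loader:
    break  # each sample is a dict of the fields written into the shards
```
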
@@ -228,24 +250,24 @@ The model was trained for 1T tokens (with batch size 1760 and sequence length 20
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.

The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT-2 tokenizer, which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.

The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)); this increased model flop utilization (MFU) by up to four percentage points.
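
A small sketch of point (3) above: runs of spaces in indented code are covered by dedicated whitespace tokens rather than one token per space. The tokenizer name is as given above; exact token counts may vary by `transformers` version.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

code = "def add(x, y):\n        return x + y"  # 8-space indent, as in nested code
tokens = tok.tokenize(code)
print(len(tokens), tokens)  # the repeated spaces compress into few tokens
```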

### Training Configuration

This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and the [LION](https://arxiv.org/abs/2302.06675) optimizer.

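For illustration only, a minimal sketch of sharded data parallelism with PyTorch FSDP. This is not MosaicML's training code; LION is not part of core PyTorch, so `torch.optim.AdamW` stands in for it here. A multi-GPU launch via `torchrun` is assumed.

```python
import torch
import torch.distributed as dist
import transformers
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Launched with: torchrun --nproc_per_node=<num_gpus> train_sketch.py
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = transformers.AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",  # assumed Hub id of the original checkpoint
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model = FSDP(model, device_id=torch.cuda.current_device())  # shards params, grads, optimizer state

# The card used LION; AdamW is a stand-in since LION ships in third-party packages.
optimizer = torch.optim.AdamW(model.parameters(), lr=1.0e-4)
```
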
## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.

MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.

@@ -268,7 +290,7 @@ Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-7b},