michaelfeil committed
Commit
6060b8c
1 Parent(s): 842a015

Upload mosaicml/mpt-7b ctranslate fp16 weights

Files changed (5)
  1. README.md +72 -50
  2. config.json +55 -4
  3. model.bin +2 -2
  4. requirements.txt +2 -0
  5. vocabulary.json +0 -0
README.md CHANGED
@@ -21,38 +21,40 @@ Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on
21
 
22
  quantized version of [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)
23
  ```bash
24
- pip install hf-hub-ctranslate2>=2.0.8 ctranslate2>=3.14.0
25
- ```
26
- Converted on 2023-05-31 using
27
- ```
28
- ct2-transformers-converter --model mosaicml/mpt-7b --output_dir /home/michael/tmp-ct2fast-mpt-7b --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16 --trust_remote_code
29
  ```
30
 
31
- Checkpoint compatible to [ctranslate2>=3.14.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.8](https://github.com/michaelfeil/hf-hub-ctranslate2)
32
- - `compute_type=int8_float16` for `device="cuda"`
33
- - `compute_type=int8` for `device="cpu"`
34
-
35
  ```python
36
- from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
37
- from transformers import AutoTokenizer
38
-
39
  model_name = "michaelfeil/ct2fast-mpt-7b"
40
- # use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
41
  model = GeneratorCT2fromHfHub(
42
  # load in int8 on CUDA
43
- model_name_or_path=model_name,
44
  device="cuda",
45
  compute_type="int8_float16",
46
- # tokenizer=AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
47
  )
48
  outputs = model.generate(
49
- text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
50
- max_length=64,
51
  include_prompt_in_result=False
52
  )
53
  print(outputs)
54
  ```
55
56
  # Licence and other remarks:
57
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
58
 
@@ -64,12 +66,12 @@ This is just a quantized version. Licence conditions are intended to be idential
64
  MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
65
  This model was trained by [MosaicML](https://www.mosaicml.com).
66
 
67
- MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
68
 
69
- These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
70
- positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
71
- Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
72
- MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
73
 
74
  This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
75
 
@@ -94,7 +96,7 @@ We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in ou
94
  * License: Apache 2.0
95
 
96
  * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
97
- Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
98
  * License: _CC-By-SA-3.0_
99
  * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
100
 
@@ -130,37 +132,41 @@ model = transformers.AutoModelForCausalLM.from_pretrained(
130
  trust_remote_code=True
131
  )
132
  ```
133
- Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
134
  This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
135
  `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
136
 
137
- To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
138
  ```python
139
- config = transformers.AutoConfig.from_pretrained(
140
- 'mosaicml/mpt-7b',
141
- trust_remote_code=True
142
- )
143
  config.attn_config['attn_impl'] = 'triton'
 
144
 
145
  model = transformers.AutoModelForCausalLM.from_pretrained(
146
- 'mosaicml/mpt-7b',
147
  config=config,
148
- torch_dtype=torch.bfloat16,
149
  trust_remote_code=True
150
  )
151
- model.to(device='cuda:0')
152
  ```
153
 
154
  Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
155
 
156
  ```python
157
- config = transformers.AutoConfig.from_pretrained(
158
- 'mosaicml/mpt-7b',
159
- trust_remote_code=True
160
- )
161
- config.update({"max_seq_len": 4096})
162
  model = transformers.AutoModelForCausalLM.from_pretrained(
163
- 'mosaicml/mpt-7b',
164
  config=config,
165
  trust_remote_code=True
166
  )
@@ -170,7 +176,23 @@ This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co
170
 
171
  ```python
172
  from transformers import AutoTokenizer
173
- tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
174
  ```
175
 
176
  ## Model Description
@@ -198,7 +220,7 @@ The model has been modified from a standard transformer in the following ways:
198
 
199
  ### Streaming Datasets
200
 
201
- Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
202
  StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
203
 
204
 
@@ -223,24 +245,24 @@ The model was trained for 1T tokens (with batch size 1760 and sequence length 20
223
  Samples for each batch were selected from one of the datasets with the probability specified above.
224
  The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
225
 
226
- The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
227
- most of which are relevant for tokenizing code:
228
- (1) It was trained on a diverse mix of data that includes code (The Pile)
229
- (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
230
- (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
231
 
232
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
233
 
234
  ### Training Configuration
235
 
236
- This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
237
- The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
238
 
239
  ## Limitations and Biases
240
 
241
  _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
242
 
243
- MPT-7B (Base) is **not** intended for deployment without finetuning.
244
  It should not be used for human-facing interactions without further guardrails and user consent.
245
 
246
  MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
@@ -263,11 +285,11 @@ Please cite this model using the following format:
263
  ```
264
  @online{MosaicML2023Introducing,
265
  author = {MosaicML NLP Team},
266
- title = {Introducing MPT-7B: A New Standard for Open-Source,
267
Commercially Usable LLMs},
268
  year = {2023},
269
  url = {www.mosaicml.com/blog/mpt-7b},
270
  note = {Accessed: 2023-03-28}, % change this date
271
  urldate = {2023-03-28} % change this date
272
  }
273
- ```
 
21
 
22
  quantized version of [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b)
23
  ```bash
24
+ pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
25
  ```
26
 
27
  ```python
28
+ # from transformers import AutoTokenizer
29
  model_name = "michaelfeil/ct2fast-mpt-7b"
30
+
31
+
32
+ from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
33
  model = GeneratorCT2fromHfHub(
34
  # load in int8 on CUDA
35
+ model_name_or_path=model_name,
36
  device="cuda",
37
  compute_type="int8_float16",
38
+ # tokenizer=AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
39
  )
40
  outputs = model.generate(
41
+ text=["def fibonnaci(", "User: How are you doing? Bot:"],
42
+ max_length=64,
43
  include_prompt_in_result=False
44
  )
45
  print(outputs)
46
  ```
47
 
48
+ Checkpoint compatible with [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
49
+ and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
50
+ - `compute_type=int8_float16` for `device="cuda"`
51
+ - `compute_type=int8` for `device="cpu"`
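
For CPU-only inference, the same wrapper can be pointed at the CPU backend. A minimal sketch, reusing the `GeneratorCT2fromHfHub` API shown in the example above:

```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

# int8 weights on CPU: slower than GPU, but no CUDA required
model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-mpt-7b",
    device="cpu",
    compute_type="int8",
)
outputs = model.generate(
    text=["def fibonacci("],
    max_length=64,
    include_prompt_in_result=False,
)
print(outputs)
```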
52
+
53
+ Converted on 2023-06-27 using
54
+ ```
55
+ ct2-transformers-converter --model mosaicml/mpt-7b --output_dir ~/tmp-ct2fast-mpt-7b --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json requirements.txt .gitattributes --quantization int8_float16 --trust_remote_code
56
+ ```
57
+
58
  # Licence and other remarks:
59
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
60
 
 
66
  MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
67
  This model was trained by [MosaicML](https://www.mosaicml.com).
68
 
69
+ MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
70
 
71
+ These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
72
+ positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
73
+ Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
74
+ MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
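
As a rough sketch of the ALiBi mechanism mentioned above (illustrative only, not MosaicML's implementation; the geometric-slope scheme follows the ALiBi paper, and the default of 8 mirrors `alibi_bias_max` in this repo's config): each attention head adds a linearly growing penalty to its attention logits instead of relying on positional embeddings.

```python
import torch

def alibi_bias(n_heads: int, seq_len: int, bias_max: float = 8.0) -> torch.Tensor:
    # One slope per head, decaying geometrically: 2^(-bias_max * h / n_heads)
    slopes = 2.0 ** (-bias_max * torch.arange(1, n_heads + 1) / n_heads)
    # Penalty grows linearly with key position; it is added to the attention logits,
    # so no positional embedding (and no hard context-length limit) is needed
    positions = torch.arange(seq_len)
    return -slopes.view(n_heads, 1, 1) * positions.view(1, 1, seq_len)

print(alibi_bias(n_heads=32, seq_len=8).shape)  # -> torch.Size([32, 1, 8])
```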
75
 
76
  This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
77
 
 
96
  * License: Apache 2.0
97
 
98
  * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
99
+ Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
100
  * License: _CC-By-SA-3.0_
101
  * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
102
 
 
132
  trust_remote_code=True
133
  )
134
  ```
135
+ Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
136
  This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
137
  `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
138
 
139
+ To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
140
  ```python
141
+ import torch
142
+ import transformers
143
+
144
+ name = 'mosaicml/mpt-7b'
145
+
146
+ config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
147
  config.attn_config['attn_impl'] = 'triton'
148
+ config.init_device = 'cuda:0' # For fast initialization directly on GPU!
149
 
150
  model = transformers.AutoModelForCausalLM.from_pretrained(
151
+ name,
152
  config=config,
153
+ torch_dtype=torch.bfloat16, # Load model weights in bfloat16
154
  trust_remote_code=True
155
  )
 
156
  ```
157
 
158
  Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
159
 
160
  ```python
161
+ import transformers
162
+
163
+ name = 'mosaicml/mpt-7b'
164
+
165
+ config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
166
+ config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
167
+
168
  model = transformers.AutoModelForCausalLM.from_pretrained(
169
+ name,
170
  config=config,
171
  trust_remote_code=True
172
  )
 
176
 
177
  ```python
178
  from transformers import AutoTokenizer
179
+ tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
180
+ ```
181
+
182
+ The model can then be used, for example, within a text-generation pipeline.
183
+ Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
184
+
185
+ ```python
186
+ import torch
+ from transformers import pipeline
187
+
188
+ pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
189
+
190
+ with torch.autocast('cuda', dtype=torch.bfloat16):
191
+     print(
192
+         pipe('Here is a recipe for vegan banana bread:\n',
193
+              max_new_tokens=100,
194
+              do_sample=True,
195
+              use_cache=True))
196
  ```
197
 
198
  ## Model Description
 
220
 
221
  ### Streaming Datasets
222
 
223
+ Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
224
  StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
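
A minimal sketch of that pattern (the bucket path and cache directory below are placeholders, and argument names may vary across versions of the `streaming` package):

```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Shards are streamed from object storage and cached locally on demand
dataset = StreamingDataset(remote="s3://my-bucket/mpt-pretrain-data",  # hypothetical path
                           local="/tmp/streaming-cache",
                           shuffle=True)
loader = DataLoader(dataset, batch_size=8)
for batch in loader:
    ...  # training step goes here
```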
225
 
226
 
 
245
  Samples for each batch were selected from one of the datasets with the probability specified above.
246
  The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
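
A simplified sketch of that packing step (illustrative only; the actual implementation in llm-foundry differs):

```python
from typing import Iterable, List

def pack_sequences(token_streams: Iterable[List[int]], max_seq_len: int = 2048) -> List[List[int]]:
    # Concatenate tokenized examples until each training sample holds exactly max_seq_len tokens
    buffer: List[int] = []
    packed: List[List[int]] = []
    for tokens in token_streams:
        buffer.extend(tokens)
        while len(buffer) >= max_seq_len:
            packed.append(buffer[:max_seq_len])
            buffer = buffer[max_seq_len:]
    return packed  # a trailing partial buffer is simply dropped in this sketch
```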
247
 
248
+ The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
249
+ most of which are relevant for tokenizing code:
250
+ (1) It was trained on a diverse mix of data that includes code (The Pile)
251
+ (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
252
+ (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
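
A quick way to see points (2) and (3) in practice (illustrative; exact token counts depend on the tokenizer versions):

```python
from transformers import AutoTokenizer

neox_tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")

snippet = "def f(x):\n        return x  # eight-space indent"

# The GPT-NeoX-20B tokenizer includes tokens for runs of spaces, so heavily
# indented code typically needs fewer tokens than with the GPT-2 tokenizer.
print(len(neox_tok.encode(snippet)), len(gpt2_tok.encode(snippet)))
```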
253
 
254
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
255
 
256
  ### Training Configuration
257
 
258
+ This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
259
+ The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
260
 
261
  ## Limitations and Biases
262
 
263
  _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
264
 
265
+ MPT-7B (Base) is **not** intended for deployment without finetuning.
266
  It should not be used for human-facing interactions without further guardrails and user consent.
267
 
268
  MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
 
285
  ```
286
  @online{MosaicML2023Introducing,
287
  author = {MosaicML NLP Team},
288
+ title = {Introducing MPT-7B: A New Standard for Open-Source,
289
Commercially Usable LLMs},
290
  year = {2023},
291
  url = {www.mosaicml.com/blog/mpt-7b},
292
  note = {Accessed: 2023-03-28}, % change this date
293
  urldate = {2023-03-28} % change this date
294
  }
295
+ ```
config.json CHANGED
@@ -1,5 +1,56 @@
1
  {
2
- "bos_token": "<|endoftext|>",
3
- "eos_token": "<|endoftext|>",
4
- "unk_token": "<|endoftext|>"
5
- }
1
  {
2
+ "architectures": [
3
+ "MPTForCausalLM"
4
+ ],
5
+ "attn_config": {
6
+ "alibi": true,
7
+ "alibi_bias_max": 8,
8
+ "attn_impl": "torch",
9
+ "attn_pdrop": 0,
10
+ "attn_type": "multihead_attention",
11
+ "attn_uses_sequence_id": false,
12
+ "clip_qkv": null,
13
+ "prefix_lm": false,
14
+ "qk_ln": false,
15
+ "softmax_scale": null
16
+ },
17
+ "auto_map": {
18
+ "AutoConfig": "configuration_mpt.MPTConfig",
19
+ "AutoModelForCausalLM": "modeling_mpt.MPTForCausalLM"
20
+ },
21
+ "d_model": 4096,
22
+ "emb_pdrop": 0,
23
+ "embedding_fraction": 1.0,
24
+ "expansion_ratio": 4,
25
+ "init_config": {
26
+ "emb_init_std": null,
27
+ "emb_init_uniform_lim": null,
28
+ "fan_mode": "fan_in",
29
+ "init_div_is_residual": true,
30
+ "init_gain": 0,
31
+ "init_nonlinearity": "relu",
32
+ "init_std": 0.02,
33
+ "name": "kaiming_normal_",
34
+ "verbose": 0
35
+ },
36
+ "init_device": "cpu",
37
+ "learned_pos_emb": true,
38
+ "logit_scale": null,
39
+ "max_seq_len": 2048,
40
+ "model_type": "mpt",
41
+ "n_heads": 32,
42
+ "n_layers": 32,
43
+ "no_bias": true,
44
+ "norm_type": "low_precision_layernorm",
45
+ "resid_pdrop": 0,
46
+ "tokenizer_name": "EleutherAI/gpt-neox-20b",
47
+ "torch_dtype": "bfloat16",
48
+ "transformers_version": "4.28.1",
49
+ "use_cache": false,
50
+ "verbose": 0,
51
+ "vocab_size": 50432,
52
+ "bos_token": "<|endoftext|>",
53
+ "eos_token": "<|endoftext|>",
54
+ "layer_norm_epsilon": null,
55
+ "unk_token": "<|endoftext|>"
56
+ }
model.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6e0b56675dcfb2208f90b599b216e647e808b1835f4e1e176877d5e77546566e
3
- size 13298599938
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d82106e0ac05df8469ebba696197da8a3b1eaec83c858b7af823c61073a03fa
3
+ size 6654505904
requirements.txt ADDED
@@ -0,0 +1,2 @@
1
+ einops==0.5.0
2
+ triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir_sm90#subdirectory=python
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff