---
license: apache-2.0
tags:
  - Composer
  - MosaicML
  - llm-foundry
  - StreamingDatasets
---

# MPT-7B (Base)

MPT-7B (Base) is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by MosaicML and is open-sourced for commercial use (Apache-2.0).

MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.

These architectural changes include performance-optimized layer implementations, changes that provide greater training stability, and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases (ALiBi). Thanks to these modifications, MPT models can be trained with high throughput efficiency and highly stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's FasterTransformer.
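
Since ALiBi is central to these claims, here is a minimal sketch of the bias it adds to the attention logits. This is an illustration, not the exact llm-foundry implementation, and the slope schedule shown assumes a power-of-two head count:

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Build the additive ALiBi attention bias: each head linearly
    penalizes attention to distant positions with a fixed slope."""
    # Geometric slope schedule from the ALiBi paper (power-of-two head counts).
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    # distance[i, j] = j - i, which is <= 0 for past (attended-to) positions.
    pos = torch.arange(seq_len)
    distance = pos.view(1, -1) - pos.view(-1, 1)
    # Shape (n_heads, seq_len, seq_len); added to the attention logits, so
    # far-away tokens receive a larger negative bias and no positional
    # embedding table is needed.
    return slopes.view(-1, 1, 1) * distance.view(1, seq_len, seq_len)
```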

This model uses the MosaicML LLM codebase, which can be found in the llm-foundry repository, and was built by MosaicML's NLP team on the MosaicML platform for pretraining, finetuning, and deploying LLMs for inference.

## How is this model different?

- Licensed for commercial use (unlike LLaMA).
- Trained on a large amount of data (1T tokens, like LLaMA, vs. 300B for Pythia, 300B for OpenLLaMA, and 800B for StableLM).
- Prepared to handle extremely long inputs thanks to ALiBi (we trained on inputs of up to 65k tokens and can handle inputs of up to 84k tokens, vs. 2k-4k for other open-source models).
- Capable of fast training and inference (via FlashAttention and FasterTransformer).
- Equipped with highly efficient open-source training code via the llm-foundry repository.

The following models are finetuned from MPT-7B (Base):

- MPT-7B-StoryWriter-65k+
- MPT-7B-Instruct
- MPT-7B-Chat

## Model Date

May 5, 2023

## Model License

Apache-2.0 (commercial use permitted)

## Documentation

## How to Use

This model is best used with the MosaicML llm-foundry repository for training, finetuning, evaluating, and deploying LLMs for inference.

**Note:** This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom MPT model architecture that is not yet part of the Hugging Face `transformers` package. MPT includes options for many training efficiency features such as FlashAttention, ALiBi, QK LayerNorm, and more.

```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)
```

To use the optimized Triton implementation of FlashAttention (`pip install flash_attn`), you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:

```python
import torch
import transformers

config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True)
model.to(device='cuda:0')
```

Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or deployment. For example:

```python
config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', config=config, trust_remote_code=True)
```

This model was trained with the `EleutherAI/gpt-neox-20b` tokenizer.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
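
Putting these pieces together, a minimal text-generation sketch looks like the following (the prompt and sampling parameters here are illustrative, not recommendations):

```python
import torch
import transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)
model.eval()

inputs = tokenizer("Here is a recipe for vegan banana bread:\n", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```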

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:

- It uses FlashAttention.
- It uses ALiBi (Attention with Linear Biases) and does not use positional embeddings.
- It does not use biases.

| Hyperparameter  | Value |
|-----------------|-------|
| n_parameters    | 6.7B  |
| n_layers        | 32    |
| n_heads         | 32    |
| d_model         | 4096  |
| vocab size      | 50432 |
| sequence length | 2048  |
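
As a sanity check, the headline parameter count can be roughly reproduced from this table. This is a back-of-the-envelope sketch that ignores layer norms and assumes the standard 4x MLP expansion, not an exact accounting of the MPT implementation:

```python
# Rough parameter count from the table above.
n_layers, d_model, vocab = 32, 4096, 50432
attn = 4 * d_model * d_model        # Q, K, V, and output projections
mlp = 2 * d_model * (4 * d_model)   # up- and down-projections with 4x expansion
emb = vocab * d_model               # token embeddings (no positional table: ALiBi)
total = n_layers * (attn + mlp) + emb
print(f"{total / 1e9:.2f}B parameters")  # ~6.65B, consistent with the reported 6.7B
```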

## Training Data

### Streaming Datasets

Data was formatted using the MosaicML StreamingDataset library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
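
As an illustration, a minimal StreamingDataset loader might look like the following sketch (the bucket path and cache directory are placeholders; see the streaming docs for the full set of options):

```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset  # pip install mosaicml-streaming

# Shards are pulled from object storage on demand and cached locally,
# so training can start immediately and resume from any sample index.
dataset = StreamingDataset(
    remote='s3://my-bucket/my-tokenized-data/train',  # hypothetical remote location
    local='/tmp/streaming-cache',                     # local shard cache
    shuffle=True,
    batch_size=8,
)
loader = DataLoader(dataset, batch_size=8)
```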

### Data Mix

The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:

| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|---|---|---|---|---|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.79 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia | 24.84 B | 0.04 | 40 B | 1.61 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 |

Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
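
A simplified sketch of this sampling-and-packing scheme is below. The weights come from the Data Mix table above, but the packing logic here is a toy version of what llm-foundry actually does:

```python
import random

# Mix proportions from the Data Mix table.
mix = {
    'mc4-en': 0.33, 'c4-semdedup': 0.299, 'rp-commoncrawl': 0.10,
    'stack-selected': 0.10, 'rp-wikipedia': 0.04, 'stack-markdown': 0.035,
    's2orc': 0.033, 'rp-books': 0.03, 'rp-arxiv': 0.019, 'rp-stackexchange': 0.014,
}

def build_example(streams, max_seq_len=2048):
    """Pick one source by its mix weight, then concatenate shuffled token
    sequences from that source until the 2048-token context is full."""
    source = random.choices(list(mix), weights=mix.values(), k=1)[0]
    tokens = []
    while len(tokens) < max_seq_len:
        tokens.extend(next(streams[source]))  # next shuffled sequence from that source
    return source, tokens[:max_seq_len]
```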

The data was tokenized using the `EleutherAI/gpt-neox-20b` tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code:

1. It was trained on a diverse mix of data that includes code (The Pile).
2. It applies consistent space delimitation, unlike the GPT-2 tokenizer, which tokenizes inconsistently depending on the presence of prefix spaces.
3. It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
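
The whitespace behavior is easy to see by comparing token counts on indented code. This is illustrative only; exact counts depend on tokenizer versions:

```python
from transformers import AutoTokenizer

gpt2 = AutoTokenizer.from_pretrained("gpt2")
neox = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

snippet = "def f():\n        return 1"  # eight spaces of indentation
print("gpt2:", len(gpt2.tokenize(snippet)))
print("neox:", len(neox.tokenize(snippet)))
# The GPT-NeoX tokenizer's multi-space tokens typically yield fewer tokens here.
```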

The model vocabulary size of 50432 was set to be a multiple of 128 (as in MEGATRON-LM); this increased model flop utilization (MFU) by up to four percentage points.
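
The rounding itself is simple (a one-line sketch; 50432 = 394 × 128):

```python
def pad_vocab(n_vocab: int, multiple: int = 128) -> int:
    """Round a vocabulary size up to the nearest multiple of `multiple`."""
    return ((n_vocab + multiple - 1) // multiple) * multiple

assert pad_vocab(50432) == 394 * 128 == 50432  # already aligned
```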

## Training Configuration

This model was trained on 440 A100-40GB GPUs for about 9.5 days using the MosaicML Platform. It was trained with sharded data parallelism using FSDP and with the LION optimizer.
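
For reference, the LION update rule is compact enough to sketch in a few lines. This is a simplified single-tensor version of the published algorithm, not the implementation llm-foundry uses:

```python
import torch

def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One LION step (Chen et al., 2023): update with the sign of an
    interpolation between the momentum and the current gradient."""
    update = torch.sign(beta1 * momentum + (1 - beta1) * grad)
    param.mul_(1 - lr * wd)                            # decoupled weight decay
    param.sub_(lr * update)                            # sign-based parameter update
    momentum.mul_(beta2).add_(grad, alpha=1 - beta2)   # EMA of gradients
```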

## Limitations and Biases

_The following language is modified from EleutherAI's GPT-NeoX-20B._

MPT-7B (Base) is not intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets detailed above, including C4, the colossal, cleaned version of Common Crawl's web crawl corpus. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

## Acknowledgements

We would like to thank our friends at AI2 for helping us to curate our pretraining dataset, choose a great tokenizer, and for many other helpful conversations along the way. We gratefully acknowledge the work of the researchers who created the LLaMA series of models, which was the impetus for our efforts. We also acknowledge the hard work of the Together team, which put together the RedPajama dataset.

## Citation

Please cite this model using the following format:

```bibtex
@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {https://www.mosaicml.com/blog/mpt-7b},
    note      = {Accessed: 2023-03-28}, % change this date
    urldate   = {2023-03-28} % change this date
}
```