---
license: apache-2.0
tags:
  - Composer
  - MosaicML
  - llm-foundry
---

# MPT-7B-StoryWriter-65k+

MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths. It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the books3 dataset. At inference time, thanks to ALiBi, MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 84k tokens on a single A100-80GB GPU in our blog post.

* License: Apache-2.0 (commercial use permitted)

This model was trained by MosaicML and follows a modified decoder-only transformer architecture.

## Model Date

May 5, 2023

## Model License

Apache-2.0 (commercial use permitted)

## Documentation

## How to Use

Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.

It includes options for many training efficiency features such as FlashAttention (Dao et al. 2022), ALiBi, QK LayerNorm, and more.

```python
import torch
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-storywriter',
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
)
```

To use the optimized triton implementation of FlashAttention, you can load with `attn_impl='triton'` and move the model to bfloat16 like so:

```python
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-storywriter',
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    attn_impl='triton'
)
model.to(device='cuda:0', dtype=torch.bfloat16)
```
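Depending on the revision of the custom model code, the attention implementation may instead need to be selected on the config object rather than passed as a keyword argument. The following is a minimal sketch assuming the custom MPT config exposes an `attn_config` dictionary with an `attn_impl` key (as in later revisions of the MPT model cards):

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-storywriter'

# Assumption: the custom config exposes attn_config['attn_impl'];
# this mirrors later revisions of the MPT model cards.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)
model.to(device='cuda:0')
```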

Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:

```python
config = transformers.AutoConfig.from_pretrained(
    'mosaicml/mpt-7b-storywriter',
    trust_remote_code=True
)
config.update({"max_seq_len": 83968})  # ~84k tokens, as demonstrated in the blog post
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-storywriter',
    config=config,
    trust_remote_code=True
)
```

This model was trained with the EleutherAI/gpt-neox-20b tokenizer.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
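Putting the pieces together, here is a minimal generation sketch (not part of the original card) that combines the model and tokenizer loaded above; the prompt and sampling parameters are purely illustrative:

```python
import torch
import transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-storywriter',
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
)
model.to(device='cuda:0')
model.eval()

# Illustrative prompt; any story opening works.
prompt = "Once upon a time, in a village by the sea,"
inputs = tokenizer(prompt, return_tensors='pt').to('cuda:0')

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=200,                   # illustrative length
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # gpt-neox-20b has no pad token
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```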

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:

* It uses FlashAttention (Dao et al. 2022)
* It uses ALiBi (Attention with Linear Biases) in place of positional embeddings

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 65536 |

## PreTraining Data

For more details on the pretraining process, see MPT-7B.

The data was tokenized using the EleutherAI/gpt-neox-20b tokenizer.

## Limitations and Biases

_The following language is modified from EleutherAI's GPT-NeoX-20B_

MPT-7B-StoryWriter-65k+ can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-StoryWriter-65k+ was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## Acknowledgements

This model was finetuned by Alex Trott and the MosaicML NLP team.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-7b},
    note      = {Accessed: 2023-03-28}, % change this date
    urldate   = {2023-03-28} % change this date
}
```