---
license: cc-by-sa-3.0
datasets:
  - mosaicml/dolly_hhrlhf
tags:
  - Composer
  - MosaicML
  - llm-foundry
inference: false
---

MPT-7B-Instruct GGML

These are GGML format 4-bit, 5-bit and 8-bit quantised model files for MosaicML's MPT-7B-Instruct.

This repo is the result of converting to GGML and quantising.

Please note that these MPT GGMLs are not compatible with llama.cpp. Please see below for a list of tools known to work with these model files.

Repositories available

Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ------------ | ---- | ---- | ------------ | -------- |
| mpt7b-instruct.ggmlv3.q4_0.bin | q4_0 | 4-bit | 4.16 GB | 6.2 GB | 4-bit. |
| mpt7b-instruct.ggmlv3.q4_1.bin | q4_1 | 4-bit | 4.99 GB | 7.2 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| mpt7b-instruct.ggmlv3.q5_0.bin | q5_0 | 5-bit | 4.57 GB | 6.8 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| mpt7b-instruct.ggmlv3.q5_1.bin | q5_1 | 5-bit | 4.99 GB | 7.2 GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
| mpt7b-instruct.ggmlv3.q8_0.bin | q8_0 | 8-bit | 7.48 GB | 9.7 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
| mpt7b-instruct.ggmlv3.fp16.bin | fp16 | 16-bit | 13.30 GB | 16 GB | Full 16-bit. |
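
If you prefer to fetch a single file programmatically rather than through the browser, the huggingface_hub library can download it by filename. A minimal sketch; the repo id shown is an assumption and the filename can be any of the files in the table above:

from huggingface_hub import hf_hub_download

# Download one quantised file from this repo (repo_id assumed for illustration)
model_path = hf_hub_download(
    repo_id="TheBloke/MPT-7B-Instruct-GGML",
    filename="mpt7b-instruct.ggmlv3.q4_0.bin"
)
print(model_path)  # local cache path of the downloaded .bin file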

Compatibility

These files are not compatible with llama.cpp.

Currently they can be used with:

As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
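
For example, one Python option that supports MPT GGML files is the ctransformers library. The sketch below is an assumption for illustration (library choice, model path and generation settings are not prescribed by this repo):

from ctransformers import AutoModelForCausalLM

# Load an MPT-architecture GGML file with ctransformers (path is a placeholder)
llm = AutoModelForCausalLM.from_pretrained(
    "/path/to/mpt7b-instruct.ggmlv3.q4_0.bin",
    model_type="mpt"
)

# Generate text directly from the loaded model
print(llm("Write a story about llamas", max_new_tokens=128))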

How to build, and an example of using the ggml mpt binary (command line only):

# Clone and build the ggml repo, which includes the example mpt binary
git clone https://github.com/ggerganov/ggml
cd ggml
mkdir build
cd build
cmake ..
cmake --build . --config Release

# Run inference: -m model file, -t threads, -n tokens to generate, -p prompt
bin/mpt -m /path/to/mpt7b-instruct.ggmlv3.q4_0.bin -t 8 -n 512 -p "Write a story about llamas"

Please see the ggml repo for other build options.

Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.

Donators will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

Original model card: MPT-7B-Instruct

MPT-7B-Instruct

MPT-7B-Instruct is a model for short-form instruction following. It is built by finetuning MPT-7B on a dataset derived from the Databricks Dolly-15k and the Anthropic Helpful and Harmless (HH-RLHF) datasets.

This model was trained by MosaicML and follows a modified decoder-only transformer architecture.

Model Date

May 5, 2023

Model License

CC-By-SA-3.0

Documentation

Example Question/Instruction

Longboi24:

What is a quoll?

MPT-7B-Instruct:

A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America

How to Use

Note: This model requires that trust_remote_code=True be passed to the from_pretrained method. This is because we use a custom model architecture that is not yet part of the transformers package.

The MPT architecture includes options for many training efficiency features such as FlashAttention (Dao et al. 2022), ALiBi, QK LayerNorm, and more.

import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-instruct',
  trust_remote_code=True
)


To use the optimized triton implementation of FlashAttention, you can load the model with attn_impl='triton' and move the model to bfloat16:

import torch

config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b-instruct',
  trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-instruct',
  config=config,
  torch_dtype=torch.bfloat16,
  trust_remote_code=True
)
model.to(device='cuda:0')

Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:

config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b-instruct',
  trust_remote_code=True
)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-instruct',
  config=config,
  trust_remote_code=True
)

This model was trained with the EleutherAI/gpt-neox-20b tokenizer.

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
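
Putting the pieces together, here is a minimal end-to-end generation sketch. The dolly-style instruction template, the sampling settings, and the use of a GPU in bfloat16 are assumptions for illustration rather than settings prescribed by this card:

import torch
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-instruct',
  torch_dtype=torch.bfloat16,
  trust_remote_code=True
)
model.to('cuda:0')

# Instruction-style prompt template (assumed; adjust to your finetuning format)
prompt = (
  "Below is an instruction that describes a task. "
  "Write a response that appropriately completes the request.\n"
  "### Instruction:\nWhat is a quoll?\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to('cuda:0')
with torch.no_grad():
  output = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))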

Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways: it uses FlashAttention, it uses ALiBi (Attention with Linear Biases) rather than positional embeddings, and it does not use biases.

| Hyperparameter | Value |
| -------------- | ----- |
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
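
If you want to confirm these values against the model at runtime, you can inspect the loaded config. The attribute names below reflect the custom MPT configuration class as loaded via trust_remote_code, but treat them as assumptions and check against the actual config object:

import transformers

config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b-instruct',
  trust_remote_code=True
)
# Attribute names assumed from the custom MPT config
print(config.n_layers, config.n_heads, config.d_model)
print(config.vocab_size, config.max_seq_len)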

PreTraining Data

For more details on the pretraining process, see MPT-7B.

The data was tokenized using the EleutherAI/gpt-neox-20b tokenizer.

Limitations and Biases

The following language is modified from EleutherAI's GPT-NeoX-20B

MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-Instruct was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Acknowledgements

This model was finetuned by Sam Havens and the MosaicML NLP team.

MosaicML Platform

If you're interested in training and deploying your own MPT or LLMs on the MosaicML Platform, sign up here.

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Citation

Please cite this model using the following format:

@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-7b},
    note      = {Accessed: 2023-03-28}, % change this date
    urldate   = {2023-03-28} % change this date
}