---
license: apache-2.0
tags:
  - Composer
  - MosaicML
  - llm-foundry
datasets:
  - the_pile_books3
inference: false
---


I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

mpt-7b-storywriter - GGUF

Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

As noted in the llama.cpp GitHub repository, all llama.cpp releases after October 18, 2023 require re-quantized model files due to the new BPE tokenizer.

Good news: my re-quantization of the Falcon models is nearly complete. Download the latest quantized models to ensure compatibility with recent llama.cpp software.

Key Points:

  • Stay Informed: Keep an eye on the release schedules of software applications that use the llama.cpp library.
  • Monitor Upload Times: Re-quantization is almost done. Watch for updates on my Hugging Face Model pages.

Important Compatibility Note: Old software will work with old Falcon models, but expect updated software to exclusively support the new models.

This change primarily affects Falcon and Starcoder models, with other models remaining unaffected.

About GGUF format

GGUF is the current file format used by the ggml library. A growing list of software supports it and can therefore use this model. The core project built on the ggml library is the llama.cpp project by Georgi Gerganov.
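
As a quick illustration, a GGUF file from this repository can be loaded with the llama-cpp-python bindings. This is a minimal sketch: the file name below is an assumed example, so substitute the quantization variant you actually downloaded, and make sure your llama.cpp / llama-cpp-python version supports MPT-architecture GGUF files.

# pip install llama-cpp-python
from llama_cpp import Llama

# The model_path is an assumed example file name, not a guaranteed file in this repository.
llm = Llama(model_path="mpt-7b-storywriter-Q5_K_M.gguf", n_ctx=4096)

# Generate a short continuation from a prompt.
output = llm("Once upon a time,", max_tokens=128)
print(output["choices"][0]["text"])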

Quantization variants

There are a number of quantized files available. Here is how to choose the one that is best for you:

Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are legacy quantization types. Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants. For example, Falcon 7B models cannot be quantized to K-quants.

K-quants

K-quants are based on the idea that quantizing certain parts of the model affects quality in different ways. If you quantize certain parts more aggressively and others less, you get a more capable model at the same file size, or a smaller file and lower memory load at comparable quality. So, if possible, use K-quants. With a Q6_K quantization you should find it very hard to detect any quality difference from the original model - asking the model the same question twice can produce bigger differences than the quantization itself.
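
As a rough back-of-the-envelope illustration of the size trade-off, a quantized file is roughly the parameter count times the bits per weight, divided by eight. The bits-per-weight figures below are approximate values for common llama.cpp quantization types, not measurements of the files in this repository:

# Rough size estimate: n_parameters * bits_per_weight / 8 bytes.
# The bits-per-weight values are approximations, not exact figures for these files.
n_params = 6.7e9

approx_bits_per_weight = {
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
}

for name, bpw in approx_bits_per_weight.items():
    print(f"{name}: ~{n_params * bpw / 8 / 1e9:.1f} GB")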


Original Model Card:

MPT-7B-StoryWriter-65k+

MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths. It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the books3 dataset. At inference time, thanks to ALiBi, MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our blogpost.

  • License: Apache 2.0

This model was trained by MosaicML and follows a modified decoder-only transformer architecture.

Model Date

May 5, 2023

Model License

Apache 2.0

Documentation

How to Use

Note: This model requires that trust_remote_code=True be passed to the from_pretrained method. This is because we use a custom model architecture that is not yet part of the transformers package.

It includes options for many training efficiency features such as FlashAttention (Dao et al. 2022), ALiBi, QK LayerNorm, and more.

import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-storywriter',
  trust_remote_code=True
)

To use the optimized triton implementation of FlashAttention, you can load the model on GPU (cuda:0) with attn_impl='triton' and with bfloat16 precision:

import torch
import transformers

name = 'mosaicml/mpt-7b-storywriter'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  torch_dtype=torch.bfloat16, # Load model weights in bfloat16
  trust_remote_code=True
)

Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:

import transformers

name = 'mosaicml/mpt-7b'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 83968 # (input + output) tokens can now be up to 83968

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  trust_remote_code=True
)

This model was trained with the EleutherAI/gpt-neox-20b tokenizer.

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the torch.autocast context manager.

from transformers import pipeline

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')

with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
            max_new_tokens=100,
            do_sample=True,
            use_cache=True))

Community-Created Resources

These were not created by MosaicML, but you may find them useful. These links are not an endorsement of the creators or their content.

Example Epilogue

The full text of The Great Gatsby (67873 tokens) was fed to the model, followed by the text "EPILOGUE".

TheRealFitzgerald233:

...

So we beat on, boats against the current, borne back ceaselessly into the past

EPILOGUE

MPT-7B-StoryWriter:

I have lived so long in the shadow of this dream that I may never see the bright sunlight of reality. I may look up some morning and find that I am six feet under, and that Gatsby, who gave me the dream and everything else, will have vanished behind me.

And so it was. I had hardly crossed the threshold of the old world when my eye caught the headlines of a newspaper, and I knew at once that my dream was over. I knew that Gatsby was dead.

And I knew that day that I didn't love her.

That night I took the last train for West Egg, and when I reached home I went directly to my attic and brought down all Gatsby's letters.

I have sorted them by date and placed them in the order in which they were written. They are very few in number.

They are dated between April of nineteen eighteen and January of nineteen nineteen, and the last one I have here is the one that told me he was dead.

There is something in that last letter that frightens me. It is the sense of a finality, of a completeness. I never heard of him after that, and I was glad.

But a few months ago I read somewhere that he had been seen in New York. He had become a great man.

And I knew that he had not changed at all.
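
A minimal sketch of how an experiment like this could be reproduced with the transformers setup from the usage section above. The file gatsby.txt and the generation parameters are illustrative assumptions, not the exact settings used to produce the sample above:

import torch
import transformers
from transformers import AutoTokenizer

name = 'mosaicml/mpt-7b-storywriter'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 83968  # leave room for the ~68k-token input plus the generated epilogue
config.init_device = 'cuda:0'

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# gatsby.txt is a placeholder for the full novel text used in the example above.
with open('gatsby.txt') as f:
    prompt = f.read() + "\n\nEPILOGUE\n\n"

inputs = tokenizer(prompt, return_tensors='pt').to('cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    out = model.generate(**inputs, max_new_tokens=512, do_sample=True, use_cache=True)

# Print only the newly generated epilogue, not the echoed prompt.
print(tokenizer.decode(out[0][inputs['input_ids'].shape[1]:]))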

Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer (for example, it uses ALiBi rather than positional embeddings and supports FlashAttention; see the features noted above). Its key hyperparameters are:

Hyperparameter     Value
n_parameters       6.7B
n_layers           32
n_heads            32
d_model            4096
vocab size         50432
sequence length    65536
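
These values can be read back from the model configuration (a quick sketch; the attribute names are assumed to match the MPT config used elsewhere in this card):

import transformers

config = transformers.AutoConfig.from_pretrained(
    'mosaicml/mpt-7b-storywriter', trust_remote_code=True
)

# Attribute names assumed from the MPT configuration; adjust if they differ.
print(config.n_layers, config.n_heads, config.d_model)
print(config.vocab_size, config.max_seq_len)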

PreTraining Data

For more details on the pretraining process, see MPT-7B.

The data was tokenized using the EleutherAI/gpt-neox-20b tokenizer.

Training Configuration

This model was trained on 8 A100-80GBs for about 2 days using the MosaicML Platform. The model was trained with sharded data parallelism using FSDP and used the LION optimizer.

Limitations and Biases

The following language is modified from EleutherAI's GPT-NeoX-20B

MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-StoryWriter was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Acknowledgements

This model was finetuned by Alex Trott and the MosaicML NLP team.

MosaicML Platform

If you're interested in training and deploying your own MPT or LLMs on the MosaicML Platform, sign up here.

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Citation

Please cite this model using the following format:

@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-7b},
    note      = {Accessed: 2023-03-28}, % change this date
    urldate   = {2023-03-28} % change this date
}

End of original Model File

Please consider supporting my work

Coming Soon: I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for support and contributions to keep these kinds of models available. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
