This is a version of the mpt-7b-storywriter model, sharded into 2 GB chunks for low-RAM loading (e.g., Colab). The weights are stored in bfloat16, so in theory you can run the model on CPU, though generation will be extremely slow.
Please refer to the original model repo for details on usage, implementation, etc. This model was downloaded from the original repo under the Apache-2.0 license and is redistributed under the same license.
Note: this is not an instruction-tuned model, so you need to give it enough input text for it to continue generating something on-topic with your prompt.
```sh
pip install -U torch transformers accelerate einops
```
Load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'ethzanalytics/mpt-7b-storywriter-sharded'

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    revision='197d14245ad874da82194248cab1ce8cf87fa713',  # optional, but a good idea
    device_map='auto',
    load_in_8bit=False,  # install bitsandbytes, then set to True for 8-bit
)
model = torch.compile(model)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Then you can call `model.generate()` as you would normally; see the notebook for details. A minimal sketch follows.
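For reference, here is a minimal generation sketch; the prompt text and sampling parameters are illustrative assumptions, not values from the original repo:

```python
prompt = "Once upon a time, in a kingdom by the sea, there lived"

# Tokenize and move the inputs to the same device as the model
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,                   # illustrative length
    do_sample=True,                       # sample rather than greedy decode
    temperature=0.8,                      # illustrative sampling settings
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # the tokenizer has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```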