mosaicml/mpt-7b-instruct

Text Generation · Transformers · PyTorch · mpt · custom_code · Composer · MosaicML · llm-foundry · text-generation-inference
License: cc-by-sa-3.0
Community (67)

  • How to improve inference runtime performance? (3 replies) · #67 opened 2 months ago by redraptor (see the loading sketch after this list)
  • ONNX export fails when using cuda:0 as init device (1 reply) · #66 opened 2 months ago by Akshay1996
  • Changing "max_seq_len" has no effect · #64 opened 2 months ago by giuliogalvan (see the config sketch after this list)
  • How to run on Colab on CPU? · #63 opened 3 months ago by deepakkaura26
  • What's the maximum input token size for MPT-7B-Instruct? (use case: transcript summarization) (1 reply) · #61 opened 3 months ago by vibhanu
  • Prompt for summarization · #60 opened 3 months ago by Sven00 (see the prompt sketch after this list)
  • Missing the Space so badly 😞 (6 replies) · #59 opened 3 months ago by AayushShah
  • Adding `safetensors` variant of this model · #56 opened 3 months ago by eyang9002
  • Error: no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack (1 reply) · #55 opened 3 months ago by sebrahimi
  • Fact check: is MPT-7B developed with the Alexa Skills Kit, as it just claimed in my conversation? (2 replies) · #53 opened 3 months ago by sh37
  • Fixing the "RuntimeError: expected scalar type Half but found Float" error (2 replies) · #46 opened 3 months ago by marygm
  • Few-shot prompting best practices/examples (1 reply) · #34 opened 4 months ago by bharven
  • Finetuning MPT-7B-Instruct in 4-bit · #33 opened 4 months ago by rmihaylov (see the QLoRA sketch after this list)
  • KeyError in triton implementation (7 replies) · #25 opened 4 months ago by datacow
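
For #67 (inference runtime performance) and #46 (the Half/Float dtype error), a minimal loading sketch along the lines of what the mpt-7b-instruct model card suggests: triton attention, bfloat16 weights, and generation under torch.autocast. Whether the triton kernel is available depends on your environment, so treat this as a starting point rather than a verified fix for either thread.

```python
import torch
import transformers

name = "mosaicml/mpt-7b-instruct"

# Build the config first so MPT's custom options can be overridden
# before the weights are instantiated.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config["attn_impl"] = "triton"  # fused attention kernel (requires triton installed)
config.init_device = "cuda:0"               # materialize weights directly on the GPU

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # half-precision weights: less memory, faster matmuls
    trust_remote_code=True,
)
# MPT-7B uses the EleutherAI/gpt-neox-20b tokenizer.
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

pipe = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer, device="cuda:0")

# Running generation under autocast keeps activations in bf16, which is the
# usual way to avoid "expected scalar type Half but found Float" mismatches.
with torch.autocast("cuda", dtype=torch.bfloat16):
    print(pipe("Here is a recipe for vegan banana bread:\n",
               max_new_tokens=100, do_sample=True, use_cache=True))
```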
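
For #64 and #61 (sequence length): the override has to go on the config object that is passed into from_pretrained, not on the model after loading. Because MPT uses ALiBi, the model card describes extending the context beyond the 2048 tokens used in training; the 4096 below is only an example value.

```python
import transformers

name = "mosaicml/mpt-7b-instruct"

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096  # example: extend past the 2048-token training length (ALiBi)

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,          # the override only takes effect when passed in here
    trust_remote_code=True,
)
```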
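
For #60, MPT-7B-Instruct was tuned on dolly-style instruction data, so the model card wraps inputs in roughly the template below. The summarization instruction text itself is just an illustrative choice, not a prompt endorsed by the thread.

```python
# Dolly-style instruction template as used on the mpt-7b-instruct model card.
PROMPT_FORMAT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n"
)

transcript = "..."  # the text to be summarized
prompt = PROMPT_FORMAT.format(
    instruction="Summarize the following transcript in a few sentences:\n" + transcript
)
# `prompt` is then passed to the text-generation pipeline as usual.
```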
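
For #33 (4-bit finetuning), nothing in this list says how the thread author approached it; the sketch below is a generic QLoRA-style setup with bitsandbytes and peft, and the target_modules names and hyperparameters are assumptions about MPT's layers rather than a MosaicML recipe.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

name = "mosaicml/mpt-7b-instruct"

# Quantize the base weights to 4-bit NF4 and compute in bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; "Wqkv" / "out_proj" are MPT's attention projection
# names (assumption -- check model.named_modules() for your checkpoint).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["Wqkv", "out_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train with the usual transformers Trainer / SFT loop.
```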