Different results here in the chat and locally

#10
by KarBik - opened

Hello,
I was testing some prompts here and they worked very well. But when I tested the same prompts locally, the results were completely different and much worse. Is there something specific you do during inference?
Thanks in advance

Mosaic ML, Inc. (org)

Are you using the chat formatting? What generation settings are you using?
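
For anyone comparing the hosted chat with a local run: below is a minimal sketch of what "chat formatting" and explicit generation settings might look like locally. It assumes the ChatML-style template that mpt-7b-chat was trained with; the model name, system prompt, and sampling values are illustrative guesses, not the demo's actual configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-7b-chat"  # assumed checkpoint for this discussion
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # required: MPT ships custom model code
)
model.eval()

def format_chatml(user_message, system_prompt="You are a helpful assistant."):
    # ChatML-style wrapping. Chat-tuned checkpoints typically produce much
    # worse completions when given the raw prompt without this template.
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml("What is MosaicML?")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,    # sampling settings below are guesses, not the demo's
        temperature=0.7,
        top_p=0.9,
        eos_token_id=tokenizer.eos_token_id,
    )
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

If the local results still differ, comparing the exact prompt string and the sampling parameters (temperature, top_p, do_sample) against whatever the hosted demo uses is the first thing to check, since greedy decoding versus sampling alone can change outputs drastically.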

I have the same issue. I also tried chat formatting.
