Text Generation
Transformers
Safetensors
llama
Generated from Trainer
axolotl
conversational
Inference Endpoints
text-generation-inference

bug: system message exposed

#1
by PriNova - opened

When using the system message as pre-prompt it exposes it in the response even by strictly instructing not to do and the response does not reflect the user instruction at all (or only a small portion of it).
But when I set the system message as assistant role it works perfectly.

I checked against other models and other chat clients to ensure the issue was not there.

I use Ollama as a client on Linux Ubuntu 22.04 with the Q4_0 model.

PriNova changed discussion title from bug: system message exposing to bug: system message exposed

I'm having a similar problem, using lang Chain and Hugging face libraries. Did you find a solution?

Sign up or log in to comment