Concerns regarding Prompt Format

#1
by wolfram - opened

So the prompt format isn't using any special tokens? Won't it get very confused if the RAG or user messages include "User:" or "Assistant:" strings? Or anything that looks similar and could get misinterpreted by the model?

Isn't it using \n\n to differentiate?

That's just two newlines. All of that can be inside the RAG data or a user message.

It's important to use special tokens that cannot ever occur in the normal input. With ChatML, you have e. g. <|im_start|>, which is a unique special token so your application can ensure it's never sent to the model from what a user inputs or the RAG component retrieves.

Oops...My youngest son is actually named <|im_start|>

Hopefully \n\nassistant: doesn't re-occur too much. I guess you will have to see how it goes.

Hi @wolfram ,
Thanks for bringing this up. We think that the user-assistant turns or context usually will not contain strings like "\n\nUser:" or "\n\nAssistant". The model should be robust to this template since it is how we train our model.

Why not use escape sequences? Making some crazy chat template that is likely not to appear in user conversation is flawed anyway, if we added escape sequences to the training data would could remove this issue entirely would just require front ends to support it

Sign up or log in to comment