Prompt template discrepancy

#4
by wolfram

The examples you give have <|end_of_turn|> as the only separator between user and assistant messages, without any newlines:

User: Hello<|end_of_turn|>Assistant: Hi<|end_of_turn|>User: How are you today?<|end_of_turn|>Assistant:

But further down the page, your ooba prompt template has a newline after the separator and also one after the assistant message:

<|user|> <|user-message|><|end_of_turn|>\n<|bot|> <|bot-message|>\n

So which is correct? And where, and how, is the "context" you give for ooba supposed to go in the prompt (I assume it's the system message)?

OpenOrca org

There are still some bugs in the way different inference engines process tokens. The way the model was trained, the newline characters shouldn't be necessary, but they also shouldn't hurt.
We've found that in some cases including them can reduce the chances of unexpected behavior.
So try without newlines if you're token-budget-conscious. If you see unusual behavior (e.g., inference not stopping when the model is done, or the model starting to have a conversation with itself), try inserting them.
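
For illustration, here's a rough Python sketch of how the two variants relate; it isn't from the model card or any particular engine, and build_prompt and its arguments are made up for this example:

```python
SEP = "<|end_of_turn|>"

def build_prompt(turns, use_newlines=False):
    """turns: list of (role, text) pairs, e.g. [("User", "Hello"), ("Assistant", "Hi")].
    use_newlines toggles the optional newline after each separator."""
    joiner = SEP + ("\n" if use_newlines else "")
    # Note the space after "User:" / "Assistant:" -- see the note on spaces below.
    parts = [f"{role}: {text}" for role, text in turns]
    # Finish with the bare "Assistant:" cue so the model writes the next reply.
    return joiner.join(parts) + joiner + "Assistant:"

turns = [("User", "Hello"), ("Assistant", "Hi"), ("User", "How are you today?")]
print(build_prompt(turns))
# User: Hello<|end_of_turn|>Assistant: Hi<|end_of_turn|>User: How are you today?<|end_of_turn|>Assistant:
print(build_prompt(turns, use_newlines=True))
# Same content, but with a newline after each <|end_of_turn|>
```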

Note: The space after "User:" (i.e., "User: ", not "User:") and after "Assistant:" also helps avoid inference bugs and is part of the training regimen, so it should be included in all circumstances.

Thanks for the explanation. Just noticed another discrepancy:

What about the <|end_of_turn|> itself? In the first example there's one after every message, both user and assistant, whereas in the ooba example there's only one after the user message, but none after the bot message, just a line break.

OpenOrca org

The model should generate the end of turn token when it is done responding to the prompt.

OK. So when the inference software catches that as a stop token and strips it before returning the output, it makes sense that we add it back into the prompt for the next round of inference.
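
Something like this rough sketch, where chat_round and generate are made-up names standing in for whatever the inference call actually is, assumed to stop at the token and strip it:

```python
SEP = "<|end_of_turn|>"

def chat_round(history, user_message, generate):
    """One round of inference. generate() is a placeholder for the
    actual inference call, assumed to stop at SEP and strip it from
    the returned text."""
    history += f"User: {user_message}{SEP}Assistant:"
    reply = generate(history)
    # The engine consumed SEP as a stop token, so restore it (plus the
    # space after "Assistant:") before the next round.
    history += f" {reply}{SEP}"
    return history, reply
```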
