Appropriate chat template

#1
by hannahbernstein - opened

Is `llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "` the best template for chatting? Since there's no chat template in the tokenizer_config.json file, loading the tokenizer and applying a chat template falls back to the LlamaTokenizer class defaults. Could you add a chat template to this model?
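For reference, here's roughly how I'm checking (the repo id is a placeholder; depending on the transformers version, a missing template either falls back to the class default or raises an error):

```python
from transformers import AutoTokenizer

# Placeholder repo id -- substitute the actual model path.
model_id = "org/this-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# If tokenizer_config.json carries no "chat_template" entry,
# this attribute is None and apply_chat_template() falls back
# to the class-level default (or errors on newer versions).
print(tokenizer.chat_template)

messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # shows which template actually got applied
```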

I'm also seeing sequences like <|User|>, <|Assistant|>, <|begin▁of▁sentence|>, <|end▁of▁sentence|>, etc. If these are part of the model's vocabulary, what is the recommended prompt format?
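If those tokens are the intended format, I'd guess a prompt would be assembled something like the sketch below, but that's only my assumption from the vocabulary, not anything confirmed by the model card:

```python
# Guessed format based solely on the special tokens in the vocabulary --
# NOT confirmed by the model card; the maintainers would need to verify.
user_input = "Hello!"
prompt = (
    "<|begin▁of▁sentence|>"
    f"<|User|>{user_input}"
    "<|Assistant|>"
)
# Generation would then presumably be stopped at <|end▁of▁sentence|>.
```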
