Prompt format?

#4
by bartowski - opened

Does this just use the default Mistral prompt format? ChatML? Something else?

I do wonder about that. Since it's a merge of models with different EOS tokens (ChatML's <|im_end|> and Mistral's </s>), that could lead to difficulties during inference. I wonder if the team fine-tuned each model specifically to follow one EOS token (in that case, </s>).
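
One quick way to check is to inspect what the merged tokenizer actually declares. Here's a minimal sketch using transformers' AutoTokenizer; the repo id below is just a placeholder for this model, not its real name:

```python
from transformers import AutoTokenizer

# "your-merged-model" is a placeholder for the actual repo id of this merge
tok = AutoTokenizer.from_pretrained("your-merged-model")

# See which EOS the tokenizer declares, and whether ChatML tokens survived the merge
print("eos_token:", tok.eos_token)              # e.g. </s> or <|im_end|>
print("special_tokens_map:", tok.special_tokens_map)
print("added tokens:", tok.get_added_vocab())   # shows <|im_start|>/<|im_end|> if present
```

If </s> is the declared EOS and the ChatML tokens aren't in the added vocab, that would point toward the plain Mistral format.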

I tried the Mistral prompt format... it worked for a while but I do have issues. It would be nice to get a prompt template, as I keep getting: "No chat template is defined for this tokenizer - using the default template for the LlamaTokenizerFast class. If the default is not appropriate for your model, please set tokenizer.chat_template to an appropriate template."
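
That warning just means the tokenizer config ships without a chat_template. If the Mistral [INST] format turns out to be the right one, you can set it yourself and silence the warning. A minimal sketch, assuming the standard Mistral instruct format and a placeholder repo id:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("your-merged-model")  # placeholder repo id

# Jinja chat template for the standard Mistral [INST] format
tok.chat_template = (
    "{{ bos_token }}"
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "{{ '[INST] ' + message['content'] + ' [/INST]' }}"
    "{% elif message['role'] == 'assistant' %}"
    "{{ message['content'] + eos_token }}"
    "{% endif %}"
    "{% endfor %}"
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
prompt = tok.apply_chat_template(messages, tokenize=False)
print(prompt)  # <s>[INST] Hello, who are you? [/INST]
```

This is a simplified template (no system-message handling), so treat it as a starting point until the authors confirm the intended format.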
