Prompt Template

#2
by LouisSF - opened

Hi, thank you for the model!
Was it trained with a specific prompt format that we should replicate for non-chat use cases, such as adding system prompts or plain text completion?
I was thinking of the original Mistral prompt format: <s>[INST] {prompt} [/INST]. Is this correct?

Equall org

Yes. Any issues?

PierreColombo changed discussion status to closed

Is it necessary to use the tokenizer's chat template (as stated in the model card), or can I use the prompt format Louis mentioned: [INST] {prompt} [/INST]?
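For a single user turn, the two should produce the same string if the repo's chat template follows the original Mistral instruction format. Here is a minimal sketch; `build_mistral_prompt` is a hypothetical helper written for illustration, not part of any library, and the equivalence to the tokenizer's template is an assumption you should verify against the model card.

```python
def build_mistral_prompt(user_prompt: str) -> str:
    """Wrap a single user turn in the Mistral <s>[INST] ... [/INST] format."""
    return f"<s>[INST] {user_prompt} [/INST]"

# In practice, prefer the tokenizer's own chat template so the formatting
# cannot drift from what the model was trained on, e.g. with transformers:
#   tokenizer.apply_chat_template(
#       [{"role": "user", "content": user_prompt}], tokenize=False)

print(build_mistral_prompt("What does this clause mean?"))
```

Comparing the two outputs on your own machine is the safest way to check whether the manual format matches the shipped template.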

Okay, I'm going to put it out there: has anyone thought about how we can make these LLMs less susceptible to producing malicious content? At this point, no one can be held liable for what an LLM outputs once it has been fine-tuned, but at a certain point we will be held accountable, so we have to make sure we prevent LLMs from outputting dangerous content.
