Prompt format

#2, opened by bartowski

You have Alpaca in your model card, but tokenizer_config.json defines something different:

{system_prompt}
Human: {prompt}
Assistant: </s>

Which is correct?
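
For anyone comparing the two, one way to see exactly what tokenizer_config.json produces is to render the tokenizer's built-in chat template. This is only a minimal sketch: the repo id is a placeholder for this model, and whether the template accepts a system message depends on how it is defined.

```python
from transformers import AutoTokenizer

# Placeholder repo id; substitute the actual model repository.
tokenizer = AutoTokenizer.from_pretrained("author/model-name")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # assumes the template supports a system role
    {"role": "user", "content": "What is the capital of France?"},
]

# Render the template from tokenizer_config.json without tokenizing,
# so the exact prompt string can be inspected.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```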

It was trained on Alpaca, which is:

### Instruction:
{prompt}
### Response:

I have tested it and it works both ways, even though the base model (Mistral) uses neither format.

It may be better to use the one in tokenizer_config.json, though both work.
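
For reference, a minimal sketch of building the two prompt strings by hand; the exact whitespace and newlines are assumptions based on the formats quoted above.

```python
def alpaca_prompt(instruction: str) -> str:
    # Alpaca-style format from the model card.
    return f"### Instruction:\n{instruction}\n### Response:\n"

def human_assistant_prompt(instruction: str, system_prompt: str = "") -> str:
    # Format from tokenizer_config.json; generation is expected to end at </s>.
    return f"{system_prompt}\nHuman: {instruction}\nAssistant: "

print(alpaca_prompt("What is the capital of France?"))
print(human_assistant_prompt("What is the capital of France?"))
```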
