
Prompt template?

#23
by bdambrosio - opened

There is a closed topic with this question, but the poster hid all the content. Why?
I found this on GitHub:
https://github.com/01-ai/Yi/issues/30
but it doesn't really tell you what to do TODAY, before they release the chat models.
So, I'm guessing, for multi-turn problem-solving:
<|System|>
{sys prompt}
<|Human|>
{human inst/prompt}
<|endoftext|>
<|Assistant|>

(i.e., an empty <|Assistant|> at the end, followed by a newline.)

Comments? Am I on the right track? I occasionally get just '<|Human' as the response, sigh.
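For what it's worth, here is how I'm assembling that guess in code. This is only a sketch of my own speculation above; the role tags (<|System|>, <|Human|>, <|Assistant|>) and the placement of <|endoftext|> are not a confirmed Yi format.

```python
# Sketch of the SPECULATED multi-turn template from this thread.
# The special tokens below mirror my guess above and are NOT
# confirmed for the Yi base models.

def build_prompt(system: str, turns: list[tuple[str, str]], next_human: str) -> str:
    """Assemble a prompt ending with an empty <|Assistant|> tag plus newline."""
    parts = [f"<|System|>\n{system}"]
    # Prior completed (human, assistant) exchanges, each closed with EOS.
    for human, assistant in turns:
        parts.append(f"<|Human|>\n{human}\n<|endoftext|>")
        parts.append(f"<|Assistant|>\n{assistant}\n<|endoftext|>")
    # The new instruction, then an empty assistant turn for the model to fill.
    parts.append(f"<|Human|>\n{next_human}\n<|endoftext|>")
    parts.append("<|Assistant|>")
    return "\n".join(parts) + "\n"

prompt = build_prompt("You are helpful.", [], "What is 2 + 2?")
```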

There is no prompt format for generic base models; at least, there shouldn't be one (looking at you, Llama 2).
If you want the model to respond to a particular prompt format, you should fine-tune it on an instruct dataset or download a model/LoRA from someone who has already fine-tuned it. An unofficial fine-tune of Yi-34B on the airoboros dataset is available now and seems to use the default Llama 2 prompt format.

https://huggingface.co/LoneStriker/Yi-34B-Spicyboros-3.1

As replied in https://github.com/01-ai/Yi/issues/30#issuecomment-1797290170:

Actually, just EOS (<|endoftext|>) in the base models :)
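In practice that means treating the base model as a plain completion model and trimming the raw output yourself. A minimal sketch (the stop strings here are my own assumption for handling leaked role tags like the '<|Human' the original poster saw, not an official API):

```python
# Base models have no chat template; <|endoftext|> (EOS) just separates
# documents in pretraining. So do plain completion and cut the generated
# text manually at EOS or at an unwanted role tag (assumed stop strings).

EOS = "<|endoftext|>"

def truncate_completion(text: str, stops=(EOS, "<|Human|>")) -> str:
    """Cut a raw completion at the first stop string, if any appears."""
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut].rstrip()
```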

FancyZhao changed discussion status to closed
