Question

#1 by dillfrescott - opened

Is there a way to make it stop printing the stop token at the end of every response?

```
> hello there! 
Hi! I'm glad to see you here!<|end_of_text|>

> could you help me with an issue im having? 
Sure, I'll do my best to help you with that. Please feel free to ask me any questions you have.<|end_of_text|>
```

Have you added it as a stop string? Either way, something feels a bit off about this model's tokenization.

Oh, I'm just using ./main from llama.cpp. I know you can add reverse prompts; is that the same as a stop string?
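
Yes, for ./main a reverse prompt effectively acts as a stop string: once that exact string appears in the generated text, generation halts and control returns to you in interactive mode. Something along these lines should work (a rough sketch; the model path is a placeholder, and you should check ./main --help on your build for the exact flags):

```sh
# -i enables interactive mode; each -r registers a reverse prompt ("antiprompt"),
# i.e. a string that halts generation and hands the prompt back to you.
# ./model.gguf is a placeholder path.
./main -m ./model.gguf -i --color \
  -r '<|end_of_text|>' \
  -r '<|eot_id|>'
```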

It seems these tokens are specified in the metadata, and llama.cpp uses them:

```
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
```

However, the model's authors never mention them, so I suppose they just shouldn't be in the metadata.
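
If you want to double-check what the GGUF file actually declares, or experiment without re-converting it, something like this might help (a rough sketch: paths are placeholders, gguf-dump.py lives under gguf-py/scripts/ in the llama.cpp repo, and --override-kv only exists in reasonably recent llama.cpp builds):

```sh
# Dump the file's metadata and look at the declared special-token ids.
# ./model.gguf is a placeholder path; adjust the script path to your checkout.
python3 gguf-py/scripts/gguf-dump.py ./model.gguf | grep tokenizer.ggml

# If your build supports --override-kv, you can try a different EOS id at load
# time without editing the file. 128009 (<|eot_id|> per the printout above) is
# just an example value, not a recommendation for this particular model.
./main -m ./model.gguf -i --color \
  --override-kv tokenizer.ggml.eos_token_id=int:128009
```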
