
Incorrect EOS token

#2
by Henk717 - opened

Using Koboldcpp, the model reports <|endoftext|> as the end-of-text token, however the finetune generates </s> as its end-of-text token.
This breaks our usual techniques for producing nice outputs on this model. It might be an upstream issue, but the GGUF is definitely not doing it right.
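For anyone who wants to reproduce the metadata side of this, a minimal sketch using the gguf Python package from llama.cpp's gguf-py (the file name is hypothetical, and the field layout can differ between gguf-py versions):

```python
# Sketch: print the EOS token id stored in a GGUF file's metadata.
# Assumes the `gguf` package from llama.cpp's gguf-py is installed.
from gguf import GGUFReader

reader = GGUFReader("yi-34b-200k.Q4_K_M.gguf")  # hypothetical file name
field = reader.get_field("tokenizer.ggml.eos_token_id")
# For a scalar key/value field, the last part holds the value itself.
eos_id = int(field.parts[-1][0])
print(f"EOS token id in metadata: {eos_id}")
```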

It's an upstream issue. The generated </s> is not from a single token.
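You can check this with the HF tokenizer. A quick sketch, using the base model repo (you may need trust_remote_code=True depending on the repo revision):

```python
# Sketch: confirm that "</s>" tokenizes into multiple pieces in the
# Yi vocabulary rather than being a single special token.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("01-ai/Yi-34B-200K")
ids = tok.encode("</s>", add_special_tokens=False)
print(ids)                             # expect more than one id
print([tok.decode([i]) for i in ids])  # the pieces that spell </s>
```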

Reported it to them too, but if it's not from a single token, that's probably going to require a full retune.

Yes, I noticed that as well. SillyTavern is filtering it out, but I still see it pop up before it gets deleted.

Yeah I noticed that rogue </s> at the end of llama.cpp generation as well. It stops at the right point, but that extra token is there at the end.

I was going to suggest that we could use Kerfuffle's new script to set the GGUF EOS token to </s> but if you're saying it's not a single token ID, then I guess we can't?
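For reference, if it had been a single token id, the fix would have been roughly this. A sketch of what that script does, assuming gguf-py memory-maps the file; back the file up first, and note the field layout can differ between gguf-py versions:

```python
# Sketch: overwrite a scalar metadata field in place, roughly what
# Kerfuffle's gguf-set-metadata.py script does. The file name and the
# new EOS id here are hypothetical.
from gguf import GGUFReader

reader = GGUFReader("yi-34b-200k.Q4_K_M.gguf", "r+")  # open writable
field = reader.get_field("tokenizer.ggml.eos_token_id")
field.parts[-1][0] = 2  # hypothetical new EOS id, written through the mmap
```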

If you don't actually want the model to generate HTML/code, then you could try setting logit biases that ban tokens starting with <. If it can't produce the weird </s> sequence, it might generate a proper EOS. If it just isn't generating EOS correctly, that won't help (but it will possibly get rid of the stray </s> in the output). The base Yi model's token id for </ is 1359, so with llama.cpp at least you can specify something like -l 1359-inf to ban the token.
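If you're on llama-cpp-python instead of the CLI, the same idea can be expressed as a logits processor. A sketch, assuming the bindings' logits_processor hook and a hypothetical model path:

```python
# Sketch: ban token 1359 ("</" in the base Yi vocab) so the </s>
# sequence can never start, mirroring llama.cpp's `-l 1359-inf`.
import numpy as np
from llama_cpp import Llama, LogitsProcessorList

def ban_angle_close(input_ids, scores):
    scores[1359] = -np.inf  # this token can never be sampled
    return scores

llm = Llama(model_path="yi-34b-200k.Q4_K_M.gguf")  # hypothetical path
out = llm("Write a short greeting.", max_tokens=64,
          logits_processor=LogitsProcessorList([ban_angle_close]))
print(out["choices"][0]["text"])
```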

A workaround for Koboldcpp users is to use </s> as your stop sequence; the problem is that you can't tell the model to keep generating that way, so it's short responses only.
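Over the API the same workaround looks roughly like this, assuming a local Koboldcpp instance on the default port and that your version's KoboldAI-compatible endpoint accepts a stop_sequence list (check your version's API docs):

```python
# Sketch: request a completion from Koboldcpp and stop on the rogue </s>.
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "Write a short greeting.",
        "max_length": 80,
        "stop_sequence": ["</s>"],  # cut generation at the rogue </s>
    },
)
print(resp.json()["results"][0]["text"])
```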

Is there any update on this one? I'm using the GPTQ model and ExLlamaV2 on Kobold United and it has the same issue, but I think I can only use stop sequences when going through the API? Is there any workaround for setting a stop sequence within the UI itself?

Lite has it, and in the normal UI you might have more luck with phrase biasing. At this point it's safe to say the model won't be fixed, and people should move on to their Hermes or a different tune instead.

Ah, that's a shame; this seems like such a smart model for its size otherwise.
