Transformers
GGUF
English
stablelm

Wrong EOS token has been fixed in upstream tokenizer_config.json; consider reconverting

#2
by compilade - opened

llama.cpp's ./main example uses the EOS token stored in the GGUF file to determine when to stop generating.
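As a rough illustration of that stopping behavior, here is a minimal sketch of an EOS-terminated sampling loop. The `next_token_fn` callable and the toy token stream are hypothetical stand-ins for a real model; the EOS id 50279 is the correct value for this model per the discussion below.

```python
EOS_ID = 50279  # <|im_end|> for this model

def generate(next_token_fn, max_tokens=32):
    """Collect tokens until the model emits EOS or max_tokens is reached."""
    out = []
    for _ in range(max_tokens):
        tok = next_token_fn()
        if tok == EOS_ID:  # the stop condition ./main relies on
            break
        out.append(tok)
    return out

# Toy "model": emits a few tokens, then EOS.
stream = iter([1, 2, 3, EOS_ID, 4])
print(generate(lambda: next(stream)))  # → [1, 2, 3]
```

If the GGUF records the wrong EOS id, this condition never fires and generation only stops at the token limit, which matches the runaway output described below.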

For context, convert-hf-to-gguf.py uses transformers.AutoTokenizer, which reads its settings from tokenizer_config.json.
The EOS token in that file was wrong (it was recently fixed upstream), so the output never seemed to end when I first tried this model.

Consider re-converting this model so that the GGUF files contain the correct EOS token, which should be <|im_end|> (token id 50279) for this model.
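Before re-running the conversion, it may be worth confirming that the local checkout actually has the fixed config. A minimal sketch, assuming a tokenizer_config.json with an `eos_token` field (the inline JSON string here is a hypothetical stand-in for reading the real file from the model repo):

```python
import json

# In a real checkout, replace this with:
#   cfg = json.load(open("tokenizer_config.json"))
config_text = '{"eos_token": "<|im_end|>"}'  # hypothetical file contents
cfg = json.loads(config_text)

# The fixed upstream config should declare <|im_end|> as EOS.
assert cfg["eos_token"] == "<|im_end|>", "stale config; pull the upstream fix first"
print("EOS token:", cfg["eos_token"])
```

If the assertion fails, the conversion would just bake the wrong EOS token into the GGUF again.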

compilade changed discussion status to closed
