When decoding the output, a space is added between tokens

#4
by hancheol - opened

Hi,

First, thank you for the useful model :-)

When using the tokenizer, I ran into a weird situation, as shown in the attached image.
The decoded text has a space between two tokens, and the leading spaces of the tokens are not properly removed.
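For reference, here is roughly how I reproduce it. This is only a minimal sketch: the checkpoint name is a placeholder for the actual model, and the decoding calls are the standard `transformers` ones I am using.

```python
from transformers import AutoTokenizer

# Placeholder checkpoint name; substitute the actual model.
tokenizer = AutoTokenizer.from_pretrained("model-name")

text = "Hello world"
ids = tokenizer(text, add_special_tokens=False)["input_ids"]

# Decoding the whole sequence at once looks fine.
print(tokenizer.decode(ids))

# Decoding token by token and joining the pieces inserts an extra space
# between tokens and keeps the leading-space marker on each piece.
print("".join(tokenizer.decode([i]) for i in ids))

# Converting the tokens back with convert_tokens_to_string is the
# workaround I am currently considering.
tokens = tokenizer.convert_ids_to_tokens(ids)
print(tokenizer.convert_tokens_to_string(tokens))
```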

Has anyone experienced the same problem, and is there a possible solution?

(Attached image: image.png)
