Endless generation

#5
by GGreedy - opened

The model seems to generate endlessly and never stops. Is this issue related to the tokenizer? How can I fix it so that inference runs correctly?

Hello, try editing your config.json as follows:
"eos_token_id": 128001 --> "eos_token_id": 128009

edit:
I additionally changed "eos_token": "<|end_of_text|>" --> "eos_token": "<|eot_id|>" in tokenizer_config.json and special_tokens_map.json, but I believe the first change was the one that mattered.
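The same edit can also be scripted instead of done by hand. Below is a minimal standard-library sketch; the temporary directory is only a stand-in, so point the function at your actual model directory's config.json. The token IDs follow the Llama-3 convention from this thread: 128009 is `<|eot_id|>`, which the chat template emits at the end of each assistant turn, while 128001 (`<|end_of_text|>`) may never be produced, so generation runs until the token limit.

```python
import json
import tempfile
from pathlib import Path

def patch_eos_token(config_path, new_eos_id=128009):
    """Overwrite eos_token_id in a model's config.json and return the new config."""
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    cfg["eos_token_id"] = new_eos_id  # 128009 == <|eot_id|> for Llama-3-style models
    path.write_text(json.dumps(cfg, indent=2))
    return cfg

# Demo on a throwaway copy (replace with the path to your downloaded model)
with tempfile.TemporaryDirectory() as d:
    demo = Path(d) / "config.json"
    demo.write_text(json.dumps({"eos_token_id": 128001}))
    patched = patch_eos_token(demo)
    print(patched["eos_token_id"])  # 128009
```

Remember to re-load the model after patching so the new stop token takes effect.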
