The tokenizer config files represent the special_tokens correctly, so I'm not sure why they aren't coming through. This makes the model repeat itself until max_token_length is reached.
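For anyone hitting the same symptom, here is a minimal sketch of the kind of check that can confirm the config side is fine: it verifies that the configured `eos_token` actually maps to an entry marked `special` in `added_tokens_decoder` (field names as in a typical `tokenizer_config.json`; the inline dict below stands in for the real file). If this check passes but the model still runs to max length, the stop token is likely being dropped downstream, e.g. in the generation config.

```python
import json

# Illustrative stand-in for a real tokenizer_config.json;
# "eos_token" and "added_tokens_decoder" are standard fields there.
sample_config = {
    "eos_token": "</s>",
    "added_tokens_decoder": {
        "2": {"content": "</s>", "special": True},
    },
}

def eos_token_id(config):
    """Return the id of the configured eos_token, or None if it is
    missing from the special-token table. A missing/unmarked EOS is a
    common cause of generation never stopping until max length."""
    eos = config.get("eos_token")
    for token_id, entry in config.get("added_tokens_decoder", {}).items():
        if entry.get("content") == eos and entry.get("special"):
            return int(token_id)
    return None

print(eos_token_id(sample_config))  # → 2
```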