The tokenizer has two ids for the same token

#3
by dimidd - opened

I'm loading a HF tokenizer and wanted to stop on the sequence "</|im_end|>", but it looks like the tokenizer has two different ids for the same token. Is this a bug, or is it supposed to be this way?

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LoneStriker/AlphaMonarch-7B-AWQ")
model = AutoModelForCausalLM.from_pretrained("LoneStriker/AlphaMonarch-7B-AWQ", device_map='cuda')

# Two distinct ids decode to the same string:
tokenizer.decode(700)   # '</'
tokenizer.decode(1867)  # '</'
tokenizer.decode(700) == tokenizer.decode(1867)  # True

Hmm, I've never encountered that before; it's quite strange. The tokenizer should be Mistral/Llama. Don't you have the same issue with base models?
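For example, you could compare against the base tokenizer directly (using mistralai/Mistral-7B-v0.1 here is just my guess at which base this fine-tune inherits its vocabulary from):

from transformers import AutoTokenizer

# Assumption: AlphaMonarch keeps the base Mistral vocabulary, so the same
# pair of ids should decode identically there too.
base_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
print(base_tok.decode(700), base_tok.decode(1867))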

So it turns out that the encoding depends on the token's position in the sentence:
https://stackoverflow.com/questions/78039649/huggingface-tokenizer-has-two-ids-for-the-same-token/78039999#78039999
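In short (my paraphrase of the answer there, based on how SentencePiece vocabularies generally work): one id is the word-initial variant of the piece, stored with a leading '▁' space marker, and the other is the mid-word variant; decode() drops the marker, so both render as '</'. Looking at the raw pieces instead of the decoded strings makes the difference visible:

# Expected (assumption): one piece carries the SentencePiece word-boundary
# marker '▁', the other doesn't; decode() hides that distinction.
tokenizer.convert_ids_to_tokens([700, 1867])
# Which id you get depends on the surrounding text:
tokenizer.encode("</", add_special_tokens=False)   # at the start of a word
tokenizer.encode("a</", add_special_tokens=False)  # mid-word ('a</' is just a probe string)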

I was also surprised to see generation end with </|im_end|>, as ChatML mentions only <|im_end|> (without a slash).
https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/includes/chat-markup-language.md

BTW, I see that the model sometimes generates <|im_end|>, and also Russian versions: <|им_начало|>user and <|им_конец|> (literally im_start/im_end).
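A plausible explanation (my assumption, I haven't verified it against this checkpoint): <|im_end|> is not a dedicated special token in this Mistral-derived vocabulary, so the model has to spell it out from ordinary pieces and sometimes drifts into near-misses like </|im_end|> or the Russian lookalikes. A quick way to check:

# If <|im_end|> were a special token it would be a single vocabulary piece;
# if it tokenizes into several pieces, the model reconstructs it token by token.
tokenizer.tokenize("<|im_end|>")
# Depending on the tokenizer class this returns None or the unk id when the
# string is not a single piece:
tokenizer.convert_tokens_to_ids("<|im_end|>")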

I ended up writing custom code to check for the stop sequences.

import torch
from transformers import StoppingCriteriaList

# Both ids that decode to "</" must be treated as stop tokens
self.chevron_slash_stop_token_ids = [torch.tensor(700).cuda(), torch.tensor(1867).cuda()]
# Could be used for <|im_end|> and also other languages. E.g., the Russian version <|им_конец|>
self.chevron_pipe_token_sequence = tokenizer.encode("<|", add_special_tokens=False, return_tensors='pt').cuda()[0]
self.stopping_criteria = StoppingCriteriaList([self.custom_stopping_criteria])


def custom_stopping_criteria(self, input_ids: torch.LongTensor, _, **__) -> bool:
    # Stop if the last generated token is either id that decodes to "</"
    last_token = input_ids[0][-1]
    if any(last_token.equal(stop_id) for stop_id in self.chevron_slash_stop_token_ids):
        return True
    # Stop if the most recent tokens match the encoding of "<|"
    n = len(self.chevron_pipe_token_sequence)
    return input_ids[0][-n:].equal(self.chevron_pipe_token_sequence)
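Roughly how this plugs into generation (a minimal sketch; the wrapper object name `gen` and the prompt text are illustrations, not my actual code):

# stopping_criteria is a standard parameter of model.generate()
inputs = tokenizer("<|im_start|>user\nHello<|im_end|>\n<|im_start|>assistant\n",
                   return_tensors='pt').to('cuda')
outputs = model.generate(**inputs, max_new_tokens=128,
                         stopping_criteria=gen.stopping_criteria)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))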
dimidd changed discussion status to closed
