What are the changes from `normalized=true` to `false` in `special_tokens_map.json`?
Hi, I noticed your configs changed. Could I ask why this change was made, and what effect `normalized=false` has? Thanks!
Running the given example produces:
```
# normalized=true
User: What is in this image?
Assistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.
User: And who is that?
Assistant: That is a cartoon character from the Asterix comics, which is a popular French comic series created by René Goscinny and Albert Uderzo.

# normalized=false
User: What is in this image?
Assistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.
User: And who is that?
Assistant: The person in the image is Julius Caesar, a prominent Roman politician and military general in ancient Rome.
```
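For reference, here is how I am checking the flag carried by each special token on my side (a small sketch; `added_tokens_decoder` is the standard `transformers` tokenizer attribute exposing the `AddedToken` entries):

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics-9b")
tokenizer = processor.tokenizer

# Each added special token stores its own `normalized` flag,
# mirroring the entries in special_tokens_map.json.
for token_id, added_token in tokenizer.added_tokens_decoder.items():
    print(token_id, repr(added_token.content), "normalized =", added_token.normalized)
```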
@HugoLaurencon is the best person to comment!
Hi @luodian
Can you tell me which version of `transformers` you are using? If you are on the main branch and installed the repo from source, there has been a recent big change in `tokenizers`. Essentially, with `normalized=true`, the special tokens can now be split into several sub-tokens, which is not something we want.
For example, we trained the model using `<fake_token_around_image><image><fake_token_around_image>`, but here the token `<image>` could be split into `<` followed by `image>`. In that case, we would not have the token `<image>`, and we would have no image attention mask or pixel values.
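For background, the `normalized` flag is a property of each `AddedToken`. Here is a minimal sketch of how such a token can be declared with the flag (hedged: this uses the public `AddedToken` API from `transformers`, not our actual training code):

```python
from transformers import AddedToken, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceM4/idefics-9b")

# normalized=False: the token is matched on the raw input text, before any
# normalization/pre-tokenization, so "<image>" always stays a single token.
# normalized=True: matching happens after normalization, so the pre-tokenizer
# may split it into pieces such as "<" followed by "image>".
tokenizer.add_special_tokens(
    {"additional_special_tokens": [AddedToken("<image>", normalized=False)]}
)
```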
Note that if you are using the example code, we handle these tokens for you in the processor script.
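If you do go through the processor, it assembles the full prompt around your images. A rough sketch of that path, following the public IDEFICS example (the image URL is just an illustration):

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics-9b")

# The processor interleaves text and images and inserts
# <fake_token_around_image><image><fake_token_around_image> for you.
prompts = [
    [
        "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
        "In this picture from Asterix and Obelix, we can see",
    ],
]
inputs = processor(prompts, return_tensors="pt")
print(inputs.keys())  # includes input_ids, pixel_values, image_attention_mask
```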
Could you try to see, with your version of `transformers`, how the prompt is tokenized with both `normalized=true` and `normalized=false`?
You need to write:

```python
from transformers import AutoProcessor

checkpoint = "HuggingFaceM4/idefics-9b"
processor = AutoProcessor.from_pretrained(checkpoint)
tokenizer = processor.tokenizer

# The image token is wrapped by the fake tokens, exactly as during training
prompt = "<fake_token_around_image><image><fake_token_around_image>In this picture from Asterix and Obelix, we can see"  # Or a longer prompt
tokens = tokenizer.encode(prompt)
print(tokens)
```
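To make any split visible, you can also map the ids back to token strings (`convert_ids_to_tokens` is the standard tokenizer method):

```python
# With normalized=false, "<image>" should show up as a single token here;
# with normalized=true it may appear as "<" followed by "image>".
print(tokenizer.convert_ids_to_tokens(tokens))
```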
Also, are you using the base model or the instruct one?