Why is the use of the <fake_token_around_image> token different from Flamingo's <EOC> token?

#12
by gigant

From my understanding, the <fake_token_around_image> token replaces the <EOC> (end of chunk) token from the original Flamingo paper. According to that paper, the <EOC> token is placed at the end of a text chunk: "prior to any image and at the end of the document".
However, if I am not mistaken, <fake_token_around_image> is used before and after the image token, and is not used at the end of the document.
Why this difference? Did you run, or refer to, any experiments or ablation studies on this added token?

Thank you

HuggingFaceM4 org

Hey,
We use:

  • <fake_token_around_image> to wrap the <image> tokens. If we have consecutive images, the token sequence is <fake_token_around_image><image><fake_token_around_image><image><fake_token_around_image>; if there is only one image, it is <fake_token_around_image><image><fake_token_around_image> (see the sketch after this list).
  • <eos> tokens to mark the end of a document.
  • <bos> tokens to mark the beginning of a document.
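
Concretely, here is a minimal Python sketch of the interleaving rule described above. This is not the actual IDEFICS processor code; the `wrap_images` helper and the `"IMG"` marker are made up for illustration.

```python
# Sketch: wrap image slots so that consecutive images share exactly one
# <fake_token_around_image> between them, as described in the list above.

FAKE = "<fake_token_around_image>"
IMAGE = "<image>"

def wrap_images(segments):
    """segments: list of strings, where the literal "IMG" marks an image slot.

    Returns a single string in which every run of images is wrapped as
    <fake_token_around_image><image>...<image><fake_token_around_image>,
    with exactly one fake token between consecutive images.
    """
    out = []
    prev_was_image = False
    for seg in segments:
        if seg == "IMG":
            # Open with a fake token only if the previous element was not an
            # image; otherwise the fake token emitted after the previous image
            # already separates the two.
            if not prev_was_image:
                out.append(FAKE)
            out.append(IMAGE)
            out.append(FAKE)
            prev_was_image = True
        else:
            out.append(seg)
            prev_was_image = False
    return "".join(out)

# Two consecutive images:
# <fake_token_around_image><image><fake_token_around_image><image><fake_token_around_image>
print(wrap_images(["IMG", "IMG"]))
# One image surrounded by text:
# A cat. <fake_token_around_image><image><fake_token_around_image> A dog.
print(wrap_images(["A cat. ", "IMG", " A dog."]))
```

A full document would then additionally start with <bos> and end with <eos>, per the two bullets above.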

At the end of the day, the reason we wrap image tokens with these extra tokens is to ensure that every image has tokens associated with it, even when images are consecutive; that way the model can reason across all of the images. Beyond that, the exact implementation (EOC or not) depends on other factors.
For instance, we started out using \n\n instead of a newly learned <fake_token_around_image> token, but the model would confuse \n\n meaning a double line break with \n\n meaning an image comes next.