Does not contain `<image>` token.

#1 opened by Xenova

From: https://huggingface.co/docs/transformers/model_doc/llava#transformers.LlavaForConditionalGeneration.forward.example

Running

from PIL import Image
import requests
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "IlyasMoutawwakil/tiny-random-LlavaForConditionalGeneration"
model = LlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

prompt = "<image>\nUSER: What's the content of the image?\nASSISTANT:"
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt")

# Generate
generate_ids = model.generate(**inputs, max_length=30)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

results in

ValueError: The input provided to the model are wrong. The number of image tokens is 0 while the number of image given to the model is 1. This prevents correct indexing and breaks batch generation.

This is because you're missing the `<image>` token in the tokenizer (and the model's embedding layer is only 32000 Γ— 16).
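
For reference, here's a quick way to see the mismatch (a minimal sketch reusing the `model` and `processor` objects from the snippet above; the printed values are what I'd expect for this checkpoint, not guaranteed):

# The prompt's <image> placeholder isn't a real vocabulary entry,
# so the processor can't produce any image-token ids.
print("<image>" in processor.tokenizer.get_vocab())   # False on the original checkpoint
# LlavaConfig expects the image token at this index...
print(model.config.image_token_index)                 # 32000 by default
# ...but the text embedding matrix only covers ids 0-31999.
print(model.get_input_embeddings().weight.shape)      # torch.Size([32000, 16])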

I've forked your model and added it with:

processor.tokenizer.add_tokens(["<image>", "<pad>"], special_tokens=True)
model.resize_token_embeddings(len(processor.tokenizer))
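
And a quick sanity check afterwards (a sketch; it assumes the two added tokens land right after the original 32000-entry vocab, so `<image>` gets id 32000, matching LlavaConfig's default `image_token_index`):

image_token_id = processor.tokenizer.convert_tokens_to_ids("<image>")
print(image_token_id)                                 # expected: 32000
print(model.config.image_token_index)                 # 32000 by default
print(model.get_input_embeddings().weight.shape[0])   # now 32002 == len(processor.tokenizer)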

The final model can be found at https://huggingface.co/Xenova/tiny-random-LlavaForConditionalGeneration

Thanks a lot! I was wondering why the model only worked for image-to-text (image captioning) with an AutoImageProcessor in my tests πŸ₯²
I'll use yours and delete this one πŸ™
