BLIP2 Always Gives `\n` as Output

#15
by james-passio - opened

I've literally copied and pasted the demo code:

import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

All I get for an output is \n. I'm running: torch==2.0.1+cu117 transformers==4.33.1.

The original BLIP works well, as do other VQA model architectures. Any ideas what I'm doing wrong?

I think there's an issue with the example. Looking further into the documentation, I found that there's a specific prompt format:

qtext = f"Question: {question} Answer:"

If I pass qtext instead of the bare question, I get good results.
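A minimal sketch of that template as a helper (the function name is my own, not part of any API):

```python
def format_vqa_prompt(question: str) -> str:
    # BLIP-2 OPT checkpoints expect the "Question: ... Answer:" template;
    # passing the bare question is what produces the lone "\n" output.
    return f"Question: {question} Answer:"

prompt = format_vqa_prompt("how many dogs are in the picture?")
print(prompt)  # Question: how many dogs are in the picture? Answer:
```

The resulting prompt string is what should be handed to the processor in place of the raw question.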

Hi,

Yes, the authors are aware of this: they explicitly strip() the output, as seen here: https://github.com/salesforce/LAVIS/blob/e4040b13d6120062829ee9625f016f3cd3dd16e6/lavis/models/blip2_models/blip2_opt.py#L278.

Will update the model card, thanks for reporting.
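To illustrate that post-processing step: plain string stripping turns the stray newline into an empty string and cleans whitespace around a real answer. The decoded values below are illustrative, not taken from an actual model run:

```python
# What the bare (un-prompted) question decodes to, per this thread:
decoded = "\n"
print(repr(decoded.strip()))  # ''  -- an empty string after stripping

# A hypothetical prompted answer with surrounding whitespace:
decoded = " 1\n"
print(decoded.strip())  # 1
```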

Not updated yet ;) But fortunately I found this discussion :)
