The model does not currently support batch sizes greater than 1

#6
by Napron - opened

I am trying to run inference with more than one image, but it throws the error below. Could you please fix this?

Thanks in advance.

```python
tokenizer.padding_side = "left"

# Adding one more image to the list
inputs = image_processor([img, img], return_tensors="pt", image_aspect_ratio='anyres')

prompt = apply_prompt_template(sample['question'])
language_inputs = tokenizer([prompt], return_tensors="pt")
inputs.update(language_inputs)
```
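As a side note on the `tokenizer.padding_side = "left"` line above: for batched generation, prompts of different lengths are usually padded on the left so that the newly generated tokens line up at the right edge of every row. A minimal sketch of that behavior (not the actual tokenizer internals; `left_pad` is a hypothetical helper):

```python
def left_pad(sequences, pad_id=0):
    """Left-pad lists of token ids to a common length, as
    tokenizer.padding_side = 'left' does when batching prompts.
    Returns padded ids and the matching attention mask."""
    longest = max(len(s) for s in sequences)
    input_ids = [[pad_id] * (longest - len(s)) + s for s in sequences]
    # Mask out the pad positions (0) and keep the real tokens (1).
    attention_mask = [[0] * (longest - len(s)) + [1] * len(s) for s in sequences]
    return input_ids, attention_mask

ids, mask = left_pad([[11, 12, 13], [21, 22]], pad_id=0)
print(ids)   # [[11, 12, 13], [0, 21, 22]]
print(mask)  # [[1, 1, 1], [0, 1, 1]]
```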

With the batch size now 2, the image tensor has shape:

```
torch.Size([1, 2, 5, 3, 378, 378])
```

and the forward pass fails with:

```
RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [2, 5, 3, 378, 378]
```
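For context on the error above: with `image_aspect_ratio='anyres'` each image is split into several patches, so the processor emits a tensor with an extra patch dimension (`[batch, patches, channels, H, W]`), while the vision encoder's `conv2d` only accepts 4D `[N, C, H, W]` input. The usual remedy is to fold the patch dimension into the batch dimension before the encoder. A sketch of the shape arithmetic only (`fold_patches` is a hypothetical helper, not the model's code):

```python
def fold_patches(shape):
    """Fold the per-image patch dimension into the batch dimension,
    turning the 5D 'anyres' shape into the 4D shape conv2d expects."""
    batch, patches, channels, height, width = shape
    return (batch * patches, channels, height, width)

# The failing input from the traceback: 2 images x 5 patches each.
print(fold_patches((2, 5, 3, 378, 378)))  # (10, 3, 378, 378)
```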
Salesforce org

@Napron Thanks for your interest in our work. We will work on it and update you when the batch inference feature is ready.


Hi @Napron , thank you for being patient. Check out our latest notebook for batch inference. Let us know if you have any questions.

Thank you, appreciate it!

Manli changed discussion status to closed
