sample_images = os.listdir(SAMPLE_IMG_DIR)
sample_images
['veggie-fridge.jpeg',
'veg-groceries-table.jpg',
'fridge-splendid.jpg',
'neat-veg-groceries.jpg',
'veg-groceries-table.jpeg',
'Fruits-and-vegetables-one-a-table.jpg']
Inspiration drawn from TaskMatrix, a.k.a. Visual ChatGPT.
format_image (image:str)
| | Type | Details |
|---|---|---|
| image | str | Image file path |
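For orientation, here is a minimal, illustrative call. It assumes `format_image` returns a `PIL.Image` prepared for the BLIP models used below; the return type is not stated in the table above.

```python
# Illustrative only: format one of the sample images listed above.
# format_image is assumed to return a PIL.Image ready for the BLIP models.
img = format_image(str(SAMPLE_IMG_DIR / "veggie-fridge.jpeg"))
img.size  # (width, height) of the formatted image
```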
BlipVQA (device:str)
BLIP Visual Question Answering. Useful when you need an answer to a question about an image. Examples: what is the background color of this image, how many cats are in this figure, what is in this figure?
BlipVQA.inference (image:PIL.Image, question:str)
| | Type | Details |
|---|---|---|
| image | PIL.Image | |
| question | str | |
| Returns | str | Answer to the query on the image |
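A hedged usage sketch of `BlipVQA`, following the constructor and `inference` signatures above; the device string and the question are only examples.

```python
# Illustrative only: ask a question about a formatted sample image.
vqa = BlipVQA(device="cpu")  # or "cuda" if a GPU is available
img = format_image(str(SAMPLE_IMG_DIR / "veggie-fridge.jpeg"))
vqa.inference(img, "How many tomatoes are in this image?")  # returns a str answer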
The process: run the image captioner over each sample image, then display the caption alongside a scaled-down copy of the image.

for img in sample_images:
    img = format_image(SAMPLE_IMG_DIR / img)
    desc = img_cap.inference(img)
    display(desc, img.resize((int(img.size[0] * 0.5), int(img.size[1] * 0.5))))
CPU times: user 11.4 s, sys: 7.42 ms, total: 11.4 s
Wall time: 1.19 s
CPU times: user 13.5 s, sys: 7.5 ms, total: 13.5 s
Wall time: 1.36 s
CPU times: user 12 s, sys: 0 ns, total: 12 s
Wall time: 1.21 s
CPU times: user 12.5 s, sys: 0 ns, total: 12.5 s
Wall time: 1.27 s
CPU times: user 9.25 s, sys: 7.71 ms, total: 9.25 s
Wall time: 936 ms
CPU times: user 15.7 s, sys: 7.66 ms, total: 15.7 s
Wall time: 1.58 s
'a refrigerator with food inside'
'a table with a variety of fruits and vegetables'
'a refrigerator filled with food and drinks'
'a counter with various foods on it'
'a wooden table'
'a table with a variety of fruits and vegetables'
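The `img_cap` helper producing the captions above is the image-captioning counterpart to `BlipVQA`. Below is a minimal sketch of how such a helper could be built on Hugging Face's BLIP captioning checkpoint; the class name, checkpoint, and internals are assumptions, not this repository's code.

```python
from transformers import BlipProcessor, BlipForConditionalGeneration


class ImageCaptioning:  # hypothetical stand-in for img_cap
    def __init__(self, device: str):
        self.device = device
        self.processor = BlipProcessor.from_pretrained(
            "Salesforce/blip-image-captioning-base"
        )
        self.model = BlipForConditionalGeneration.from_pretrained(
            "Salesforce/blip-image-captioning-base"
        ).to(device)

    def inference(self, image) -> str:
        # Encode the PIL image, generate caption tokens, and decode to text.
        inputs = self.processor(image, return_tensors="pt").to(self.device)
        out = self.model.generate(**inputs, max_new_tokens=30)
        return self.processor.decode(out[0], skip_special_tokens=True)
```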
for img in sample_images:
    img = format_image(SAMPLE_IMG_DIR / img)
    desc = img_cap.inference(img)
    # First question wording is assumed; the outputs below open with a vegetables answer.
    answer = vqa.inference(img, "What vegetables are in the image?")
    answer += "\n" + vqa.inference(
        img, "What are three of the fruits seen in the image if any?"
    )
    answer += "\n" + vqa.inference(
        img, "What grains and starches are in the image if any?"
    )
    answer += "\n" + vqa.inference(img, "Is there plant-based milk in the image?")
    print(
        f"""{desc}
{answer}"""
    )
    display(img.resize((int(img.size[0] * 0.75), int(img.size[1] * 0.75))))
CPU times: user 7.67 s, sys: 12.1 ms, total: 7.68 s
Wall time: 779 ms
a refrigerator with food inside
cabbage lettuce onion
apples
rice
yes
CPU times: user 10.5 s, sys: 8.13 ms, total: 10.5 s
Wall time: 1.06 s
a table with a variety of fruits and vegetables
broccoli and tomatoes
bananas apples oranges
potatoes
yes
CPU times: user 11.7 s, sys: 0 ns, total: 11.7 s
Wall time: 1.18 s
a refrigerator filled with food and drinks
broccoli and zucchini
bananas
rice
yes
CPU times: user 11.5 s, sys: 12.2 ms, total: 11.5 s
Wall time: 1.16 s
a counter with various foods on it
carrots and broccoli
apples bananas and tomatoes
rice
yes
CPU times: user 9.62 s, sys: 4.22 ms, total: 9.63 s
Wall time: 973 ms
a wooden table
potatoes and carrots
apples
potatoes
yes
CPU times: user 11.1 s, sys: 8.23 ms, total: 11.1 s
Wall time: 1.12 s
a table with a variety of fruits and vegetables
peppers broccoli and squash
watermelon limes and pineapple
rice
no
VeganIngredientFinder ()
Initialize self. See help(type(self)) for accurate signature.
VeganIngredientFinder.list_ingredients (img:str)
| | Type | Details |
|---|---|---|
| img | str | Image file path |
| Returns | str | |
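Putting it together, a hedged usage sketch. Only the `VeganIngredientFinder()` constructor and the `list_ingredients(img: str) -> str` signatures come from the docs above; the class presumably wraps the caption-plus-VQA flow from the loop earlier.

```python
# Illustrative only: list vegan ingredients spotted in a sample image.
finder = VeganIngredientFinder()
print(finder.list_ingredients(str(SAMPLE_IMG_DIR / "veggie-fridge.jpeg")))
# Expected shape of the result (assumption): a newline-separated string, e.g. a caption
# followed by answers about vegetables, fruits, grains/starches, and plant-based milk.
```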