In this section, we will briefly look at the different multimodal tasks involving Image and Text modalities and their corresponding models. Before diving in, let’s quickly recap what is meant by “multimodal”, which was covered in previous sections. The human world is a symphony of diverse sensory inputs: we perceive and understand through sight, sound, touch, and more. This multimodality is what separates our rich understanding from the limitations of traditional, unimodal AI models. Drawing inspiration from human cognition, multimodal models aim to bridge this gap by integrating information from multiple sources, like text, images, audio, and even sensor data. This fusion of modalities leads to a more comprehensive and nuanced understanding of the world, unlocking a vast range of tasks and applications.
Before looking into specific models, it’s crucial to understand the diverse range of tasks involving image and text. These tasks include but are not limited to:
Visual Question Answering (VQA) and Visual Reasoning: Imagine a machine that looks at a picture and understands your questions about it. Visual Question Answering (VQA) is just that! It trains computers to extract meaning from images and answer questions like “Who’s driving the car?”, while Visual Reasoning is the secret sauce, enabling the machine to go beyond simple recognition and infer relationships, compare objects, and understand scene context to give accurate answers. It’s like asking a detective to read the clues in a picture, only much faster and better!
Document Visual Question Answering (DocVQA): Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image. That’s Document Visual Question Answering (DocVQA) in a nutshell. It combines computer vision for processing image elements and natural language processing to interpret text, allowing machines to “read” and answer questions about documents just like humans do. Think of it as supercharging document search with AI to unlock all the information trapped within those images.
Image captioning: Image captioning bridges the gap between vision and language. It analyzes an image like a detective, extracting details, understanding the scene, and then crafting a sentence or two that tells the story – a sunset over a calm sea, a child laughing on a swing, or even a bustling city street. It’s a fascinating blend of computer vision and language, letting computers describe the world around them, one picture at a time.
Image-Text Retrieval: Image-text retrieval is like a matchmaker for images and their descriptions. Think of it like searching for a specific book in a library, but instead of browsing titles, you can use either the picture on the cover or a brief summary to find it. It’s like a super-powered search engine that understands both pictures and words, opening doors for exciting applications like image search, automatic captioning, and even helping visually impaired people “see” through text descriptions.
Visual grounding: Visual grounding is like connecting the dots between what we see and say. It’s about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions. Imagine asking “Where’s the red apple in the fruit bowl?” and the AI instantly highlights it in the picture - that’s visual grounding at work!
Text-to-Image generation: Imagine a magical paintbrush that interprets your words and brings them to life! Text-to-image generation is like that; it transforms your written descriptions into unique images. It’s a blend of language understanding and image creation, where your text unlocks a visual world from photorealistic landscapes to dreamlike abstractions, all born from the power of your words.
Example of Input (Image + Text) and Output (Text) for Visual Question Answering (VQA) and Visual Reasoning models. [1]
In general, both VQA and Visual Reasoning are treated as a single Visual Question Answering (VQA) task. Some of the popular models for VQA are BLIP, DePlot, and ViLT; the snippets below show how to use each of them.
from PIL import Image
from transformers import pipeline
vqa_pipeline = pipeline(
    "visual-question-answering", model="Salesforce/blip-vqa-capfilt-large"
)
image = Image.open("elephant.jpeg")
question = "Is there an elephant?"
vqa_pipeline(image, question, top_k=1)
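If you prefer to work with the model classes directly instead of the pipeline, a roughly equivalent sketch looks like this (it reuses the same local image and question as above; the decoded answer may differ slightly from the pipeline output):

from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-capfilt-large")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-capfilt-large")

image = Image.open("elephant.jpeg").convert("RGB")
question = "Is there an elephant?"

# encode the image-question pair and let the model generate a short free-form answer
inputs = processor(image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))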
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
processor = Pix2StructProcessor.from_pretrained("google/deplot")
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    images=image,
    text="Generate underlying data table of the figure below:",
    return_tensors="pt",
)
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
Learn more about how to train and use VQA models in the HuggingFace transformers library here.
Example of Input (Image + Text) and Output (Text) for the Doc VQA Model. [2]
Input: A document image (for example a scanned page, form, chart, or contract) together with a natural-language question about its contents.
Task: Jointly interpreting the text, layout, and visual elements of the document to locate the information the question asks for.
Output: Answer to the question: a text response that directly addresses the query and accurately reflects the information found in the document.
Now, let’s look at some of the popular DocVQA models in HuggingFace: LayoutLM, Donut, and Nougat.
from transformers import pipeline
from PIL import Image

# note: LayoutLM-based document QA pipelines typically rely on an OCR engine
# (e.g. pytesseract / Tesseract) being installed to extract the words and boxes from the image
pipe = pipeline("document-question-answering", model="impira/layoutlm-document-qa")

question = "What is the purchase amount?"
image = Image.open("your-document.png")

pipe(image=image, question=question)
## [{'answer': '20,000$'}]
from transformers import pipeline
from PIL import Image

# Donut is an OCR-free document understanding model: it reads the page directly from the pixels
pipe = pipeline(
    "document-question-answering", model="naver-clova-ix/donut-base-finetuned-docvqa"
)

question = "What is the purchase amount?"
image = Image.open("your-document.png")

pipe(image=image, question=question)
## [{'answer': '20,000$'}]
from huggingface_hub import hf_hub_download
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel
import torch

processor = NougatProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# prepare PDF image for the model
filepath = hf_hub_download(
    repo_id="hf-internal-testing/fixtures_docvqa",
    filename="nougat_paper.png",
    repo_type="dataset",
)
image = Image.open(filepath)
pixel_values = processor(image, return_tensors="pt").pixel_values

# generate the transcription (here we only generate 30 tokens)
outputs = model.generate(
    pixel_values.to(device),
    min_length=1,
    max_new_tokens=30,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)
sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
sequence = processor.post_process_generation(sequence, fix_markdown=False)

# note: repr is used so the \n characters are visible; feel free to just print the sequence
print(repr(sequence))
Learn more about how to train and use DocVQA models in the HuggingFace transformers library here.
Example of Input (Image) and Output (Text) for the Image Captioning Model. [1]
Now, let’s look at some of the popular Image Captioning models in HuggingFace:
from transformers import pipeline
image_to_text = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png")
# [{'generated_text': 'a soccer game with a player jumping to catch the ball '}]
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-large"
)
img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
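Generation can be tuned with the usual generate() arguments, for instance beam search and a cap on the caption length (the values below are arbitrary, chosen only for illustration):

# beam search with a capped caption length, reusing the unconditional inputs from above
out = model.generate(**inputs, num_beams=3, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))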
microsoft/git-base is a base-sized version of the GIT (GenerativeImage2Text) model, a Transformer decoder trained to generate text descriptions of images. It takes both image tokens and text tokens as input, predicting the next text token based on the image and the previous text. This makes it suitable for tasks like image and video captioning. Fine-tuned versions like microsoft/git-base-coco and microsoft/git-base-textcaps exist for specific datasets, while the base model offers a starting point for further customization. You can use the git-base model in HuggingFace as follows:

from transformers import AutoProcessor, AutoModelForCausalLM
import requests
from PIL import Image
processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
Learn more about how to train and use Image Captioning models in the HuggingFace transformers library here.
Example of Input (Text Query) and Output (Image) for the Text-to-Image Retrieval. [1]
One of the most popular models for Image-Text Retrieval is CLIP.
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
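The snippet above scores one image against two candidate captions; for retrieval proper, you typically embed images and texts separately and rank them by cosine similarity. Here is a minimal sketch that reuses the model and processor from above (the candidate captions are made up for illustration):

import torch

# embed candidate captions and the query image separately
text_inputs = processor(
    text=["two cats sleeping on a couch", "a dog playing fetch", "a plate of food"],
    return_tensors="pt",
    padding=True,
)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)
    image_embeds = model.get_image_features(**image_inputs)

# normalize and rank the captions by cosine similarity to the image
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
similarity = (image_embeds @ text_embeds.T).squeeze(0)
best = similarity.argmax().item()
print("Best matching caption index:", best, "score:", round(similarity[best].item(), 3))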
Learn more about how to use CLIP for Image-Text retrieval in HuggingFace here.
Example of Input (Image + Text) and Output (Bounding Boxes). (a) Phrase Grounding, (b) Referring Expression Comprehension. [1]
Inputs: An image and a natural-language query (a phrase or referring expression) describing an object or region in that image.
Output: Bounding box or segmentation mask: a spatial region within the image that corresponds to the object or area described in the query. This is typically represented as coordinates or a highlighted region.
Task: Locating the relevant object or region: the model must correctly identify the part of the image that matches the query. This involves understanding both the visual content of the image and the linguistic meaning of the query.
Now, let’s see one of the popular Visual Grounding (Object Detection) models in HuggingFace, OWL-ViT:
import requests
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process_object_detection(
    outputs=outputs, threshold=0.1, target_sizes=target_sizes
)
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
# Print detected objects and rescaled box coordinates
for box, score, label in zip(boxes, scores, labels):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}"
    )
Illustration of Auto-regressive and Diffusion Models for Text-to-Image Generation. [1]
Auto-regressive Models: These models treat the task like translating text descriptions into sequences of image tokens, similar to how language models generate sentences. Like puzzle pieces, these tokens, created by image tokenizers such as VQ-VAE, represent basic image features. The model uses an encoder-decoder architecture: the encoder extracts information from the text prompt, and the decoder, guided by this information, predicts one image token at a time, gradually building up the final image, which is then decoded back into pixels. This approach allows for high control and detail, but it struggles with long, complex prompts and can be slower than alternatives such as diffusion models. The generation process is shown in figure (a) above.
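To make the idea concrete, here is a toy, hedged sketch of the auto-regressive loop in plain PyTorch. It is not a real text-to-image model: all module choices, names, and sizes are made up for illustration. A stand-in text encoder produces the conditioning features, and a tiny transformer decoder predicts one “image token” at a time, which a VQ-VAE decoder would then turn into pixels.

import torch
import torch.nn as nn

vocab_size = 512   # size of the hypothetical image-token codebook
seq_len = 16       # number of image tokens to generate (e.g. a 4x4 latent grid)
d_model = 64

text_encoder = nn.Embedding(1000, d_model)                 # stand-in for a real text encoder
decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
token_embedding = nn.Embedding(vocab_size + 1, d_model)    # +1 for a <start> token
to_logits = nn.Linear(d_model, vocab_size)

prompt_ids = torch.tensor([[1, 42, 7]])       # pretend these are tokenized prompt ids
memory = text_encoder(prompt_ids)             # text features the decoder cross-attends to

generated = torch.tensor([[vocab_size]])      # start with the <start> token
for _ in range(seq_len):
    tgt = token_embedding(generated)
    hidden = decoder(tgt, memory)                              # attend to text + previous tokens
    next_token = to_logits(hidden[:, -1]).argmax(-1, keepdim=True)
    generated = torch.cat([generated, next_token], dim=1)      # append the newly predicted token

image_tokens = generated[:, 1:]               # a VQ-VAE decoder would map these tokens to pixels
print(image_tokens.shape)                     # torch.Size([1, 16])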
Stable Diffusion Models: Stable Diffusion models use the “Latent Diffusion” technique: an image is built from noise by progressively denoising it in a compressed latent space, guided by a text prompt through a frozen CLIP text encoder. The relatively light architecture, with a UNet backbone and a CLIP text encoder, allows GPU-powered image generation, while working in latent space reduces memory consumption. This setup enables diverse artistic expression, translating textual inputs into photorealistic and imaginative visuals. The generation process is shown in figure (b) above.
Now, let’s see how we can use text-to-image generation models in HuggingFace.
Install the diffusers library:

pip install diffusers --upgrade
In addition, make sure to install transformers, safetensors, and accelerate, as well as invisible_watermark:
pip install invisible_watermark transformers accelerate safetensors
To just use the base model, you can run:
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipe.to("cuda")
prompt = "An astronaut riding a unicorn"
image = pipe(prompt=prompt).images[0]  # .images is a list of PIL images; take the first one
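The pipeline returns standard PIL images, so the result can be saved or displayed directly (the filename here is arbitrary):

image.save("astronaut_unicorn.png")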
To learn more about text-to-image generation models, you can refer to the HuggingFace Diffusers Course.
Now you know some of the popular tasks and models involving Image and Text modalities. But you might be wondering how to train or fine-tune models for the tasks mentioned above. So, let’s have a glimpse at how Vision-Language models are trained.
General framework for Transformer based vision-language models. [1]
Given an image-text pair, a VL model first extracts text and visual features via a text encoder and a vision encoder, respectively. The text and visual features are then fed into a multimodal fusion module to produce cross-modal representations, which are then optionally fed into a decoder before generating the final outputs. An illustration of this general framework is shown in the figure above. In many cases, there are no clear boundaries among the image/text backbones, the multimodal fusion module, and the decoder.
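As a rough, hedged sketch of this framework (toy PyTorch modules with made-up sizes, not a real architecture): separate encoders produce image and text features, a cross-attention layer plays the role of the multimodal fusion module, and a small head stands in for the optional decoder.

import torch
import torch.nn as nn

d = 64  # shared embedding dimension (arbitrary)

# stand-in encoders: a real model would use e.g. a ViT for vision and a language model for text
vision_encoder = nn.Linear(768, d)      # maps pre-extracted patch features to the shared space
text_encoder = nn.Embedding(1000, d)    # maps token ids to the shared space

# multimodal fusion: text tokens cross-attend to image patches
fusion = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

# optional decoder / task head (here just a classifier over a toy answer vocabulary)
head = nn.Linear(d, 10)

patch_features = torch.randn(1, 196, 768)     # fake ViT patch features (14x14 patches)
token_ids = torch.randint(0, 1000, (1, 12))   # fake tokenized text

img = vision_encoder(patch_features)
txt = text_encoder(token_ids)
fused, _ = fusion(query=txt, key=img, value=img)   # cross-modal representations
logits = head(fused.mean(dim=1))                   # pool and predict
print(logits.shape)                                # torch.Size([1, 10])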
Congratulations! You made it to the end. Now, on to the next section for more on Vision-Language Pretrained Models.