Transformers documentation


ColPali

Overview

The ColPali model was proposed in ColPali: Efficient Document Retrieval with Vision Language Models by Manuel Faysse*, Hugues Sibille*, Tony Wu*, Bilel Omrani, Gautier Viaud, Céline Hudelot, Pierre Colombo (* denotes equal contribution).

With our new model ColPali, we propose to leverage VLMs to construct efficient multi-vector embeddings in the visual space for document retrieval. By feeding the ViT output patches from PaliGemma-3B to a linear projection, we create a multi-vector representation of documents. We train the model to maximize the similarity between these document embeddings and the query embeddings, following the ColBERT method.

Using ColPali removes the need for potentially complex and brittle layout recognition and OCR pipelines: a single model can take into account both the textual and visual content (layout, charts, …) of a document. ColPali is also highly interpretable: similarity maps can be obtained between image patches and query tokens. These maps highlight ColPali’s strong OCR capabilities and chart understanding.

Paper abstract:

Documents are visually rich structures that convey information through text, but also figures, page layouts, tables, or even fonts. Since modern retrieval systems mainly rely on the textual information they extract from document pages to index documents -often through lengthy and brittle processes-, they struggle to exploit key visual cues efficiently. This limits their capabilities in many practical document retrieval applications such as Retrieval Augmented Generation (RAG). To benchmark current systems on visually rich document retrieval, we introduce the Visual Document Retrieval Benchmark ViDoRe, composed of various page-level retrieval tasks spanning multiple domains, languages, and practical settings. The inherent complexity and performance shortcomings of modern systems motivate a new concept; doing document retrieval by directly embedding the images of the document pages. We release ColPali, a Vision Language Model trained to produce high-quality multi-vector embeddings from images of document pages. Combined with a late interaction matching mechanism, ColPali largely outperforms modern document retrieval pipelines while being drastically simpler, faster and end-to-end trainable.

We release models, data, code and benchmarks under open licenses at https://huggingface.co/vidore.

Resources

  • The official blog post detailing ColPali can be found here. 📝
  • The original model implementation code for the ColPali model and for the colpali-engine package can be found here. 🌎
  • Cookbooks for learning to use the transformers-native version of ColPali, fine-tuning, and similarity maps generation can be found here. 📚

This model was contributed by @tonywu71 and @yonigozlan.

Usage

This example demonstrates how to use ColPali to embed both queries and images, calculate their similarity scores, and identify the most relevant matches. For a specific query, you can retrieve the top-k most similar images by selecting the ones with the highest similarity scores.

import torch
from PIL import Image

from transformers import ColPaliForRetrieval, ColPaliProcessor

model_name = "vidore/colpali-v1.2-hf"

model = ColPaliForRetrieval.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
).eval()

processor = ColPaliProcessor.from_pretrained(model_name)

# Your inputs (replace dummy images with screenshots of your documents)
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "What is the organizational structure for our R&D department?",
    "Can you provide a breakdown of last year’s financial performance?",
]

# Process the inputs
batch_images = processor(images=images).to(model.device)
batch_queries = processor(text=queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images).embeddings
    query_embeddings = model(**batch_queries).embeddings

# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)
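
Once the score matrix is available, the most relevant images for each query can be selected with torch.topk. The snippet below is a minimal sketch that assumes scores has shape (n_queries, n_images), as returned by score_retrieval, and that top_k does not exceed the number of images.

# Minimal sketch: retrieve the top-k images for each query from the score matrix.
# Assumes `scores` has shape (n_queries, n_images).
top_k = 1
top_scores, top_indices = scores.topk(top_k, dim=1)

for query, indices, values in zip(queries, top_indices, top_scores):
    print(f"Query: {query}")
    for rank, (idx, score) in enumerate(zip(indices.tolist(), values.tolist()), start=1):
        print(f"  rank {rank}: image index {idx} (score: {score:.2f})")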

ColPaliConfig

class transformers.ColPaliConfig

( vlm_config = None, text_config = None, embedding_dim: int = 128, **kwargs )

Parameters

  • vlm_config (PretrainedConfig, optional) — Configuration of the VLM backbone model.
  • text_config (PretrainedConfig, optional) — Configuration of the text backbone model. Overrides the text_config attribute of the vlm_config if provided.
  • embedding_dim (int, optional, defaults to 128) — Dimension of the multi-vector embeddings produced by the model.

Configuration class to store the configuration of a ColPaliForRetrieval. It is used to instantiate an instance of ColPaliForRetrieval according to the specified arguments, defining the model architecture following the methodology from the “ColPali: Efficient Document Retrieval with Vision Language Models” paper.

Creating a configuration with the default settings will result in a configuration where the VLM backbone is set to the default PaliGemma configuration, i.e. the one from vidore/colpali-v1.2.

The ColPali config is very similar to PaliGemmaConfig, but with an extra attribute defining the embedding dimension.

Note that, contrary to what the class name suggests (the name actually refers to the ColPali methodology), you can use a VLM backbone other than PaliGemma by passing the corresponding VLM configuration to the class constructor.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

from transformers.models.colpali import ColPaliConfig, ColPaliForRetrieval

config = ColPaliConfig()
model = ColPaliForRetrieval(config)
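
As noted above, the VLM backbone and the embedding dimension can be customized. The following is an illustrative sketch only: the default PaliGemmaConfig and the embedding_dim value of 64 are arbitrary choices, not recommended settings, and the resulting model is randomly initialized.

from transformers import PaliGemmaConfig
from transformers.models.colpali import ColPaliConfig, ColPaliForRetrieval

# Illustrative example: pass an explicit VLM backbone config and a custom
# multi-vector embedding dimension (both values are arbitrary here).
vlm_config = PaliGemmaConfig()
config = ColPaliConfig(vlm_config=vlm_config, embedding_dim=64)

# Instantiates a randomly initialized model with this configuration.
model = ColPaliForRetrieval(config)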

ColPaliProcessor

class transformers.ColPaliProcessor

( image_processor = None, tokenizer = None, chat_template = None, **kwargs )

Parameters

  • image_processor (SiglipImageProcessor, optional) — The image processor is a required input.
  • tokenizer (LlamaTokenizerFast, optional) — The tokenizer is a required input.
  • chat_template (str, optional) — A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.

Constructs a ColPali processor, which wraps a PaliGemmaProcessor and adds special methods to process images and queries, as well as to compute the late-interaction retrieval score.

ColPaliProcessor offers all the functionalities of PaliGemmaProcessor. See the __call__() for more information.

batch_decode

( *args **kwargs )

This method forwards all its arguments to GemmaTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.

decode

( *args **kwargs )

This method forwards all its arguments to GemmaTokenizerFast’s decode(). Please refer to the docstring of this method for more information.

process_images

( images: Union[PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]] = None, **kwargs: Unpack[ColPaliProcessorKwargs] ) → BatchFeature

Parameters

  • images (PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]) — The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch tensor. In the case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is the number of channels and H and W are the image height and width.
  • return_tensors (str or TensorType, optional) — If set, will return tensors of a particular framework. Acceptable values are:

    • 'tf': Return TensorFlow tf.constant objects.
    • 'pt': Return PyTorch torch.Tensor objects.
    • 'np': Return NumPy np.ndarray objects.
    • 'jax': Return JAX jnp.ndarray objects.

Returns

BatchFeature

A BatchFeature with the following fields:

  • input_ids — List of token ids to be fed to a model.
  • attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names and if text is not None).
  • pixel_values — Pixel values to be fed to a model. Returned when images is not None.

Prepare one or several image(s) for the model. This method is a wrapper around ColPaliProcessor.__call__().

This method forwards the images and kwargs arguments to SiglipImageProcessor’s __call__().
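
A minimal usage sketch; the blank image below is a stand-in for a real document page screenshot.

from PIL import Image
from transformers import ColPaliProcessor

processor = ColPaliProcessor.from_pretrained("vidore/colpali-v1.2-hf")

# Stand-in for a screenshot of a document page.
image = Image.new("RGB", (448, 448), color="white")

batch_images = processor.process_images(images=[image])
print(list(batch_images.keys()))  # includes the pixel values ready to be fed to the model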

process_queries

( text: Union[str, List[str]], **kwargs: Unpack[ColPaliProcessorKwargs] ) → BatchFeature

Parameters

  • text (str, List[str], List[List[str]]) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
  • return_tensors (str or TensorType, optional) — If set, will return tensors of a particular framework. Acceptable values are:

    • 'tf': Return TensorFlow tf.constant objects.
    • 'pt': Return PyTorch torch.Tensor objects.
    • 'np': Return NumPy np.ndarray objects.
    • 'jax': Return JAX jnp.ndarray objects.

Returns

BatchFeature

A BatchFeature with the following fields:

  • input_ids — List of token ids to be fed to a model.
  • attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names and if text is not None).

Prepare one or several text(s) for the model. This method is a wrapper around ColPaliProcessor.__call__().

This method forwards the text and kwargs arguments to LlamaTokenizerFast’s __call__().
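
A minimal usage sketch; the query string is illustrative.

from transformers import ColPaliProcessor

processor = ColPaliProcessor.from_pretrained("vidore/colpali-v1.2-hf")

batch_queries = processor.process_queries(text=["What is shown in the revenue chart?"])
print(list(batch_queries.keys()))  # typically input_ids and attention_mask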

score_retrieval

( query_embeddings: Union[torch.Tensor, List[torch.Tensor]], passage_embeddings: Union[torch.Tensor, List[torch.Tensor]], batch_size: int = 128, output_dtype: Optional[torch.dtype] = None, output_device: Union[torch.device, str] = 'cpu' ) → torch.Tensor

Parameters

  • query_embeddings (Union[torch.Tensor, List[torch.Tensor]]) — Query embeddings.
  • passage_embeddings (Union[torch.Tensor, List[torch.Tensor]]) — Passage embeddings.
  • batch_size (int, optional, defaults to 128) — Batch size for computing scores.
  • output_dtype (torch.dtype, optional, defaults to torch.float32) — The dtype of the output tensor. If None, the dtype of the input embeddings is used.
  • output_device (torch.device or str, optional, defaults to “cpu”) — The device of the output tensor.

Returns

torch.Tensor

A tensor of shape (n_queries, n_passages) containing the scores. The score tensor is returned on the device specified by output_device (defaults to “cpu”).

Compute the late-interaction/MaxSim score (ColBERT-like) for the given multi-vector query embeddings (qs) and passage embeddings (ps). For ColPali, a passage is the image of a document page.

Because the embedding tensors are multi-vector and can thus have different shapes, they should be fed as either:

  • a list of tensors, where the i-th tensor is of shape (sequence_length_i, embedding_dim), or
  • a single tensor of shape (n_passages, max_sequence_length, embedding_dim), usually obtained by padding the list of tensors.
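
To make the scoring explicit, here is a minimal sketch of the late-interaction (MaxSim) computation for a single query/passage pair, using random embeddings with illustrative shapes. score_retrieval batches this computation and handles padding, so the snippet is only for intuition.

import torch

embedding_dim = 128
query_embeddings = torch.randn(16, embedding_dim)      # (query_length, embedding_dim)
passage_embeddings = torch.randn(1030, embedding_dim)  # (passage_length, embedding_dim)

# For each query token, keep the maximum similarity over all passage vectors,
# then sum these maxima over the query tokens.
similarities = query_embeddings @ passage_embeddings.T  # (query_length, passage_length)
score = similarities.max(dim=1).values.sum()
print(score)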

ColPaliForRetrieval

class transformers.ColPaliForRetrieval

( config: ColPaliConfig )

ColPali leverages Vision Language Models (VLMs) to construct efficient multi-vector embeddings in the visual space for document retrieval. By feeding the ViT output patches from PaliGemma-3B to a linear projection, we create a multi-vector representation of documents. The model is trained to maximize the similarity between these document embeddings and the query embeddings, following the ColBERT method.

Using ColPali removes the need for potentially complex and brittle layout recognition and OCR pipelines: a single model can take into account both the textual and visual content (layout, charts, …) of a document.

ColPali was introduced in the following paper: ColPali: Efficient Document Retrieval with Vision Language Models.

Resources:

  • A blog post detailing ColPali, a vision retrieval model, can be found here. 📝
  • The code for using and training the original ColPali model and for the colpali-engine package can be found here. 🌎
  • Cookbooks for learning to use the Transformers-native version of ColPali, fine-tuning, and similarity maps generation can be found here. 📚

forward

( input_ids: LongTensor = None, pixel_values: FloatTensor = None, attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs ) → transformers.models.colpali.modeling_colpali.ColPaliForRetrievalOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input IDs?
  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images. Pixel values can be obtained using AutoImageProcessor. See SiglipImageProcessor.__call__() for details (PaliGemmaProcessor uses SiglipImageProcessor for processing images). If None, ColPali will only process text (query embeddings).
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
    • 1 for tokens that are not masked,
    • 0 for tokens that are masked. What are attention masks? Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the VLM backbone model.

Returns

transformers.models.colpali.modeling_colpali.ColPaliForRetrievalOutput or tuple(torch.FloatTensor)

A transformers.models.colpali.modeling_colpali.ColPaliForRetrievalOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ColPaliConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).

  • embeddings (torch.FloatTensor of shape (batch_size, sequence_length, embedding_dim)) — The multi-vector embeddings produced by the model.

  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head))

    Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

  • image_hidden_states (torch.FloatTensor, optional) — A torch.FloatTensor of size (batch_size, num_images, sequence_length, hidden_size). image_hidden_states of the model produced by the vision encoder after projecting last hidden state.

The ColPaliForRetrieval forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
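
A minimal sketch of calling the model and reading the embeddings field of the returned ColPaliForRetrievalOutput; the query text is illustrative.

import torch
from transformers import ColPaliForRetrieval, ColPaliProcessor

model_name = "vidore/colpali-v1.2-hf"
model = ColPaliForRetrieval.from_pretrained(model_name).eval()
processor = ColPaliProcessor.from_pretrained(model_name)

batch_queries = processor.process_queries(text=["Which page describes the training setup?"]).to(model.device)

with torch.no_grad():
    outputs = model(**batch_queries)  # ColPaliForRetrievalOutput

# Multi-vector query embeddings, one vector per token.
print(outputs.embeddings.shape)  # (batch_size, sequence_length, embedding_dim)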
