CLIP

Overview

The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3.

The abstract from the paper is the following:

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.

Usage

CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. CLIP uses a ViT-like Transformer to get visual features and a causal language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.

To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as the representation of the entire image. For example, with the default 224x224 images and 32x32 patches this yields 7x7 = 49 patch embeddings plus the [CLS] token. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The CLIPFeatureExtractor can be used to resize (or rescale) and normalize images for the model.

The CLIPTokenizer is used to encode the text. The CLIPProcessor wraps CLIPFeatureExtractor and CLIPTokenizer into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using CLIPProcessor and CLIPModel.

>>> from PIL import Image
>>> import requests

>>> from transformers import CLIPProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
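
Continuing the session above, the following is a minimal sketch (not the canonical implementation) of how those logits are produced: the projected embeddings returned by get_text_features and get_image_features (documented below) are L2-normalized and combined with a scaled dot product. It assumes the loaded model exposes the learned temperature as a logit_scale parameter.

>>> import torch

>>> with torch.no_grad():
...     image_embeds = model.get_image_features(pixel_values=inputs["pixel_values"])
...     text_embeds = model.get_text_features(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

>>> # L2-normalize the projected features, then take the scaled dot product
>>> image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
>>> text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
>>> manual_logits_per_image = model.logit_scale.exp() * image_embeds @ text_embeds.t()  # assumes model.logit_scale holds the learned temperature
>>> manual_probs = manual_logits_per_image.softmax(dim=1)  # should match probs computed above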

This model was contributed by valhalla. The original code can be found here.

CLIPConfig

class transformers.CLIPConfig < >

( text_config_dict = None vision_config_dict = None projection_dim = 512 logit_scale_init_value = 2.6592 **kwargs )

Parameters

  • text_config_dict (dict, optional) — Dictionary of configuration options used to initialize CLIPTextConfig.
  • vision_config_dict (dict, optional) — Dictionary of configuration options used to initialize CLIPVisionConfig.
  • projection_dim (int, optional, defaults to 512) — Dimensionality of text and vision projection layers.
  • logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. Default is used as per the original CLIP implementation.
  • kwargs (optional) — Dictionary of keyword arguments.

CLIPConfig is the configuration class to store the configuration of a CLIPModel. It is used to instantiate a CLIP model according to the specified arguments, defining the text model and vision model configs.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

from_text_vision_configs < >

( text_config: CLIPTextConfig vision_config: CLIPVisionConfig **kwargs ) → CLIPConfig

Returns

CLIPConfig

An instance of a configuration object

Instantiate a CLIPConfig (or a derived class) from CLIP text model configuration and CLIP vision model configuration.
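
For example, a minimal sketch of composing a CLIPConfig from the two sub-configurations:

>>> from transformers import CLIPConfig, CLIPTextConfig, CLIPVisionConfig

>>> # default text and vision configurations (openai/clip-vit-base-patch32 style)
>>> text_config = CLIPTextConfig()
>>> vision_config = CLIPVisionConfig()

>>> config = CLIPConfig.from_text_vision_configs(text_config, vision_config)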

CLIPTextConfig

class transformers.CLIPTextConfig < >

( vocab_size = 49408 hidden_size = 512 intermediate_size = 2048 num_hidden_layers = 12 num_attention_heads = 8 max_position_embeddings = 77 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 dropout = 0.0 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 49408) — Vocabulary size of the CLIP text model. Defines the number of different tokens that can be represented by the input_ids passed when calling CLIPModel.
  • hidden_size (int, optional, defaults to 512) — Dimensionality of the encoder layers and the pooler layer.
  • intermediate_size (int, optional, defaults to 2048) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
  • num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder.
  • max_position_embeddings (int, optional, defaults to 77) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
  • hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported.
  • layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.
  • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • initializer_factor (float, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).

This is the configuration class to store the configuration of a CLIPTextModel. It is used to instantiate a CLIP text model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIP openai/clip-vit-base-patch32 architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import CLIPTextModel, CLIPTextConfig

>>> # Initializing a CLIPTextConfig with openai/clip-vit-base-patch32 style configuration
>>> configuration = CLIPTextConfig()

>>> # Initializing a CLIPTextModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
>>> model = CLIPTextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

CLIPVisionConfig

class transformers.CLIPVisionConfig < >

( hidden_size = 768 intermediate_size = 3072 num_hidden_layers = 12 num_attention_heads = 12 image_size = 224 patch_size = 32 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 dropout = 0.0 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 **kwargs )

Parameters

  • hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
  • intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
  • num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
  • image_size (int, optional, defaults to 224) — The size (resolution) of each image.
  • patch_size (int, optional, defaults to 32) — The size (resolution) of each patch.
  • hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported.
  • layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.
  • dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
  • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • initializer_factor (float, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).

This is the configuration class to store the configuration of a CLIPVisionModel. It is used to instantiate a CLIP vision model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIP openai/clip-vit-base-patch32 architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import CLIPVisionModel, CLIPVisionConfig

>>> # Initializing a CLIPVisionConfig with openai/clip-vit-base-patch32 style configuration
>>> configuration = CLIPVisionConfig()

>>> # Initializing a CLIPVisionModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
>>> model = CLIPVisionModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

CLIPTokenizer

class transformers.CLIPTokenizer < >

( vocab_file merges_file errors = 'replace' unk_token = '<|endoftext|>' bos_token = '<|startoftext|>' eos_token = '<|endoftext|>' pad_token = '<|endoftext|>' add_prefix_space = False do_lower_case = True **kwargs )

Parameters

  • vocab_file (str) — Path to the vocabulary file.
  • merges_file (str) — Path to the merges file.
  • errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.
  • unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
  • bos_token (str, optional, defaults to <|startoftext|>) — The beginning of sequence token.
  • eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token.
  • add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows treating the leading word just like any other word. (The CLIP tokenizer detects the beginning of words by the preceding space.)

Construct a CLIP tokenizer. Based on byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without space) or not (see the example under CLIPTokenizerFast below).

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).

This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
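
A minimal usage sketch (outputs omitted, since the exact token ids depend on the pretrained vocabulary):

>>> from transformers import CLIPTokenizer

>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> # encode a batch of captions; each sequence is wrapped in <|startoftext|> ... <|endoftext|>
>>> enc = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> enc["input_ids"].shape  # (batch_size, sequence_length)
>>> tokenizer.batch_decode(enc["input_ids"])  # round-trips back to text, special tokens included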

build_inputs_with_special_tokens < >

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]

Parameters

  • token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
  • token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.

Returns

List[int]

List of input IDs with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A CLIP sequence has the following format:

  • single sequence: <|startoftext|> X <|endoftext|>

Pairs of sequences are not the expected use case, but they will be handled without a separator.
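
As an illustration, a small sketch of wrapping already-tokenized ids with this method (the concrete id values depend on the vocabulary):

>>> from transformers import CLIPTokenizer

>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> # ids without special tokens ...
>>> ids = tokenizer.encode("a photo of a cat", add_special_tokens=False)
>>> # ... wrapped as <|startoftext|> X <|endoftext|>
>>> ids_with_special = tokenizer.build_inputs_with_special_tokens(ids)
>>> assert ids_with_special[0] == tokenizer.bos_token_id
>>> assert ids_with_special[-1] == tokenizer.eos_token_id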

get_special_tokens_mask < >

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]

Parameters

  • token_ids_0 (List[int]) — List of IDs.
  • token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
  • already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.

Returns

List[int]

A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.

create_token_type_ids_from_sequences < >

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]

Parameters

  • token_ids_0 (List[int]) — The first tokenized sequence.
  • token_ids_1 (List[int], optional) — The second tokenized sequence.

Returns

List[int]

The token type ids.

Create the token type IDs corresponding to the sequences passed. What are token type IDs?

Should be overridden in a subclass if the model has a special way of building those.

CLIPTokenizerFast

class transformers.CLIPTokenizerFast < >

( vocab_file = None merges_file = None tokenizer_file = None unk_token = '<|endoftext|>' bos_token = '<|startoftext|>' eos_token = '<|endoftext|>' pad_token = '<|endoftext|>' add_prefix_space = False **kwargs )

Parameters

  • vocab_file (str) — Path to the vocabulary file.
  • merges_file (str) — Path to the merges file.
  • errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.
  • unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
  • bos_token (str, optional, defaults to <|startoftext|>) — The beginning of sequence token.
  • eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token.
  • add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows treating the leading word just like any other word. (The CLIP tokenizer detects the beginning of words by the preceding space.)
  • trim_offsets (bool, optional, defaults to True) — Whether or not the post-processing step should trim offsets to avoid including whitespaces.

Construct a “fast” CLIP tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not:

>>> from transformers import CLIPTokenizerFast
>>> tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer("Hello world")['input_ids']
[15496, 995]
>>> tokenizer(" Hello world")['input_ids']
[18435, 995]

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.

This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

CLIPFeatureExtractor

class transformers.CLIPFeatureExtractor < >

( do_resize = True size = 224 resample = 3 do_center_crop = True crop_size = 224 do_normalize = True image_mean = None image_std = None **kwargs )

Parameters

  • do_resize (bool, optional, defaults to True) — Whether to resize the input to a certain size.
  • size (int, optional, defaults to 224) — Resize the input to the given size. Only has an effect if do_resize is set to True.
  • resample (int, optional, defaults to PIL.Image.BICUBIC) — An optional resampling filter. This can be one of PIL.Image.NEAREST, PIL.Image.BOX, PIL.Image.BILINEAR, PIL.Image.HAMMING, PIL.Image.BICUBIC or PIL.Image.LANCZOS. Only has an effect if do_resize is set to True.
  • do_center_crop (bool, optional, defaults to True) — Whether to crop the input at the center. If the input size is smaller than crop_size along any edge, the image is padded with 0’s and then center cropped.
  • crop_size (int, optional, defaults to 224) — Desired output size when applying center-cropping. Only has an effect if do_center_crop is set to True.
  • do_normalize (bool, optional, defaults to True) — Whether or not to normalize the input with image_mean and image_std.
  • image_mean (List[float], optional, defaults to [0.485, 0.456, 0.406]) — The sequence of means for each channel, to be used when normalizing images.
  • image_std (List[float], optional, defaults to [0.229, 0.224, 0.225]) — The sequence of standard deviations for each channel, to be used when normalizing images.

Constructs a CLIP feature extractor.

This feature extractor inherits from FeatureExtractionMixin which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
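
A short preprocessing sketch (the shape in the comment assumes the default size and crop_size of 224):

>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPFeatureExtractor

>>> feature_extractor = CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # resize, center-crop and normalize the image into a pixel_values tensor of shape (batch_size, 3, 224, 224)
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> inputs["pixel_values"].shape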

center_crop < >

( image size )

Parameters

  • image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image to resize.
  • size (int or Tuple[int, int]) — The size to which the image is cropped.

Crops image to the given size using a center crop. Note that if the image is too small to be cropped to the size given, it will be padded (so the returned result has the size asked for).

resize < >

( image size resample = 3 )

Parameters

  • image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image to resize.
  • size (int or Tuple[int, int]) — The size to use for resizing the image. If size is an int, the image will be resized to match the shorter side.
  • resample (int, optional, defaults to PIL.Image.BICUBIC) — The filter to use for resampling.

Resizes image. Note that this will trigger a conversion of image to a PIL Image.

CLIPProcessor

class transformers.CLIPProcessor < >

( feature_extractor tokenizer )

Parameters

  • feature_extractor (CLIPFeatureExtractor) — The feature extractor is a required input.
  • tokenizer (CLIPTokenizer) — The tokenizer is a required input.

Constructs a CLIP processor which wraps a CLIP feature extractor and a CLIP tokenizer into a single processor.

CLIPProcessor offers all the functionalities of CLIPFeatureExtractor and CLIPTokenizer. See the __call__() and decode() for more information.

batch_decode < >

( *args **kwargs )

This method forwards all its arguments to CLIPTokenizer’s batch_decode(). Please refer to the docstring of this method for more information.

decode < >

( *args **kwargs )

This method forwards all its arguments to CLIPTokenizer’s decode(). Please refer to the docstring of this method for more information.

from_pretrained < >

( pretrained_model_name_or_path **kwargs )

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — This can be either:

    • a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like clip-vit-base-patch32, or namespaced under a user or organization name, like openai/clip-vit-base-patch32.
    • a path to a directory containing a feature extractor file saved using the save_pretrained method, e.g., ./my_model_directory/.
    • a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json.

Instantiate a CLIPProcessor from a pretrained CLIP processor.

This class method is simply calling CLIPFeatureExtractor’s from_pretrained and CLIPTokenizer’s from_pretrained. Please refer to the docstrings of the methods above for more information.

save_pretrained < >

( save_directory )

Parameters

  • save_directory (str or os.PathLike) — Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist).

Save a CLIP feature extractor object and CLIP tokenizer object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.

This class method is simply calling CLIPFeatureExtractor’s save_pretrained and CLIPTokenizer’s save_pretrained. Please refer to the docstrings of the methods above for more information.
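
For instance, a minimal save/reload round trip (the directory name ./my_clip_processor is only an illustration):

>>> from transformers import CLIPProcessor

>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> # save the wrapped feature extractor and tokenizer files to a local directory ...
>>> processor.save_pretrained("./my_clip_processor")
>>> # ... and load them back later with from_pretrained
>>> processor = CLIPProcessor.from_pretrained("./my_clip_processor")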

CLIPModel

class transformers.CLIPModel < >

( config: CLIPConfig )

Parameters

  • config (CLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward < >

( input_ids = None pixel_values = None attention_mask = None position_ids = None return_loss = None output_attentions = None output_hidden_states = None return_dict = None ) → transformers.models.clip.modeling_clip.CLIPOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using CLIPFeatureExtractor. See CLIPFeatureExtractor.__call__() for details.
  • return_loss (bool, optional) — Whether or not to return the contrastive loss.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.models.clip.modeling_clip.CLIPOutput or tuple(torch.FloatTensor)

A transformers.models.clip.modeling_clip.CLIPOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPConfig'>) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
  • logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
  • logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
  • text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of CLIPTextModel.
  • image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of CLIPVisionModel.
  • text_model_output (BaseModelOutputWithPooling) — The output of the CLIPTextModel.
  • vision_model_output (BaseModelOutputWithPooling) — The output of the CLIPVisionModel.

The CLIPModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities

get_text_features < >

( input_ids = None attention_mask = None position_ids = None output_attentions = None output_hidden_states = None return_dict = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim))

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

text_features (torch.FloatTensor of shape (batch_size, output_dim))

The text embeddings obtained by applying the projection layer to the pooled output of CLIPTextModel.

The CLIPModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from transformers import CLIPTokenizer, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"],  padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)

get_image_features < >

( pixel_values = None output_attentions = None output_hidden_states = None return_dict = None ) → image_features (torch.FloatTensor of shape (batch_size, output_dim))

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using CLIPFeatureExtractor. See CLIPFeatureExtractor.__call__() for details.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

image_features (torch.FloatTensor of shape (batch_size, output_dim))

The image embeddings obtained by applying the projection layer to the pooled output of CLIPVisionModel.

The CLIPModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel

>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")

>>> image_features = model.get_image_features(**inputs)

CLIPTextModel

forward < >

( input_ids = None attention_mask = None position_ids = None output_attentions = None output_hidden_states = None return_dict = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The CLIPTextModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from transformers import CLIPTokenizer, CLIPTextModel

>>> model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"],  padding=True, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output # pooled (EOS token) states

CLIPVisionModel

forward < >

( pixel_values = None output_attentions = None output_hidden_states = None return_dict = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using CLIPFeatureExtractor. See CLIPFeatureExtractor.__call__() for details.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The CLIPVisionModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, CLIPVisionModel

>>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output # pooled CLS states

FlaxCLIPModel

class transformers.FlaxCLIPModel < >

( config: CLIPConfig input_shape: typing.Optional[typing.Tuple] = None seed: int = 0 dtype: dtype = <class 'jax._src.numpy.lax_numpy.float32'> **kwargs )

Parameters

  • config (CLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).

    This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.

    Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.

    If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().

This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior.

Finally, this model supports inherent JAX features such as just-in-time (JIT) compilation, automatic differentiation, vectorization (vmap) and parallelization (pmap).
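
As an illustration of the dtype argument described above, a minimal sketch of running the computation in half precision (the parameters themselves stay in float32 unless converted with to_fp16()):

>>> import jax.numpy as jnp
>>> from transformers import FlaxCLIPModel

>>> # perform the forward pass in float16, e.g. on a GPU
>>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32", dtype=jnp.float16)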

__call__ < >

( input_ids pixel_values attention_mask = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or tuple(jnp.ndarray)

Parameters

  • input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using CLIPFeatureExtractor. See CLIPFeatureExtractor.__call__() for details.
  • return_loss (bool, optional) — Whether or not to return the contrastive loss.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or tuple(jnp.ndarray)

A transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPConfig'>) and inputs.

  • logits_per_image (jnp.ndarray of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
  • logits_per_text (jnp.ndarray of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
  • text_embeds (jnp.ndarray of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPTextModel.
  • image_embeds (jnp.ndarray of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPVisionModel.
  • text_model_output (FlaxBaseModelOutputWithPooling) — The output of the FlaxCLIPTextModel.
  • vision_model_output (FlaxBaseModelOutputWithPooling) — The output of the FlaxCLIPVisionModel.

The FlaxCLIPPreTrainedModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> import jax
>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, FlaxCLIPModel

>>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="np", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = jax.nn.softmax(logits_per_image, axis=1) # we can take the softmax to get the label probabilities

get_text_features < >

( input_ids attention_mask = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train = False ) → text_features (jnp.ndarray of shape (batch_size, output_dim))

Parameters

Returns

text_features (jnp.ndarray of shape (batch_size, output_dim))

The text embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPTextModel.

Examples:

>>> from transformers import CLIPTokenizer, FlaxCLIPModel

>>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"],  padding=True, return_tensors="np")
>>> text_features = model.get_text_features(**inputs)

get_image_features < >

( pixel_values params: dict = None dropout_rng: PRNGKey = None train = False ) → image_features (jnp.ndarray of shape (batch_size, output_dim))

Parameters

  • pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using CLIPFeatureExtractor. See CLIPFeatureExtractor.__call__() for details.

Returns

image_features (jnp.ndarray of shape (batch_size, output_dim))

The image embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPVisionModel.

Examples:

>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, FlaxCLIPModel

>>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="np")

>>> image_features = model.get_image_features(**inputs)

FlaxCLIPTextModel

__call__ < >

( input_ids attention_mask = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)

Parameters

  • input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

    Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs.

  • last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The FlaxCLIPTextPreTrainedModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from transformers import CLIPTokenizer, FlaxCLIPTextModel

>>> model = FlaxCLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"],  padding=True, return_tensors="np")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooler_output = outputs.pooler_output # pooled (EOS token) states

FlaxCLIPVisionModel

__call__ < >

( pixel_values params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)

Parameters

  • pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using CLIPFeatureExtractor. See CLIPFeatureExtractor.__call__() for details.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs.

  • last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the initial embedding outputs.

  • attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The FlaxCLIPVisionPreTrainedModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, FlaxCLIPVisionModel

>>> model = FlaxCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="np")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooler_output = outputs.pooler_output # pooled CLS states