GroupViT
Overview
The GroupViT model was proposed in GroupViT: Semantic Segmentation Emerges from Text Supervision by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz and Xiaolong Wang. Inspired by CLIP, GroupViT is a vision-language model that can perform zero-shot semantic segmentation over any given vocabulary of categories.
The abstract from the paper is the following:
Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.
Tips:
- You may pass output_segmentation=True to the forward of GroupViTModel to get the segmentation logits of the input texts.
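For instance, the following is a minimal sketch of zero-shot segmentation that treats each input text as a candidate label. The upsampling step is an assumption about the desired post-processing: the returned segmentation_logits are smaller than the input image (see the GroupViTModel.forward documentation below), so they are interpolated back to the original resolution before taking the per-pixel argmax.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, GroupViTModel
>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["a photo of a cat", "a photo of a dog"]
>>> inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     outputs = model(**inputs, output_segmentation=True)
>>> # segmentation_logits has shape (batch_size, num_texts, logits_height, logits_width)
>>> logits = outputs.segmentation_logits
>>> # upsample to the original image size, then take the per-pixel argmax over the texts
>>> logits = torch.nn.functional.interpolate(logits, size=image.size[::-1], mode="bilinear", align_corners=False)
>>> segmentation = logits.argmax(dim=1)  # (batch_size, height, width), indices into `texts`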
This model was contributed by xvjiarui. The TensorFlow version was contributed by ariG23498 with the help of Yih-Dar SHIEH, Amy Roberts, and Joao Gante. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GroupViT.
- The quickest way to get started with GroupViT is by checking the example notebooks (which showcase zero-shot segmentation inference).
- One can also check out the HuggingFace Spaces demo to play with GroupViT.
GroupViTConfig
class transformers.GroupViTConfig
< source >( text_config = None vision_config = None projection_dim = 256 projection_intermediate_dim = 4096 logit_scale_init_value = 2.6592 **kwargs )
Parameters
- text_config (dict, optional) — Dictionary of configuration options used to initialize GroupViTTextConfig.
- vision_config (dict, optional) — Dictionary of configuration options used to initialize GroupViTVisionConfig.
- projection_dim (int, optional, defaults to 256) — Dimensionality of the text and vision projection layers.
- projection_intermediate_dim (int, optional, defaults to 4096) — Dimensionality of the intermediate layer of the text and vision projection layers.
- logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. The default is used as per the original GroupViT implementation.
- kwargs (optional) — Dictionary of keyword arguments.
GroupViTConfig is the configuration class to store the configuration of a GroupViTModel. It is used to instantiate a GroupViT model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the GroupViT nvidia/groupvit-gcc-yfcc architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
from_text_vision_configs
< source >( text_config: GroupViTTextConfig vision_config: GroupViTVisionConfig **kwargs ) → GroupViTConfig
Instantiate a GroupViTConfig (or a derived class) from groupvit text model configuration and groupvit vision model configuration.
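As a minimal sketch, a GroupViTConfig can be assembled from the two sub-configurations and used to instantiate a randomly initialized model (default values shown; pass your own configuration arguments as needed):
>>> from transformers import GroupViTConfig, GroupViTTextConfig, GroupViTVisionConfig, GroupViTModel
>>> # default text and vision configurations (nvidia/groupvit-gcc-yfcc style)
>>> text_config = GroupViTTextConfig()
>>> vision_config = GroupViTVisionConfig()
>>> # combine them and instantiate a model (weights are randomly initialized)
>>> config = GroupViTConfig.from_text_vision_configs(text_config, vision_config)
>>> model = GroupViTModel(config)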
GroupViTTextConfig
class transformers.GroupViTTextConfig
< source >( vocab_size = 49408 hidden_size = 256 intermediate_size = 1024 num_hidden_layers = 12 num_attention_heads = 4 max_position_embeddings = 77 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 dropout = 0.0 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 **kwargs )
Parameters
- vocab_size (int, optional, defaults to 49408) — Vocabulary size of the GroupViT text model. Defines the number of different tokens that can be represented by the input_ids passed when calling GroupViTModel.
- hidden_size (int, optional, defaults to 256) — Dimensionality of the encoder layers and the pooler layer.
- intermediate_size (int, optional, defaults to 1024) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder.
- max_position_embeddings (int, optional, defaults to 77) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512, 1024 or 2048).
- hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported.
- layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.
- attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
- dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- initializer_factor (float, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).
This is the configuration class to store the configuration of a GroupViTTextModel. It is used to instantiate a GroupViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GroupViT nvidia/groupvit-gcc-yfcc architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import GroupViTTextConfig, GroupViTTextModel
>>> # Initializing a GroupViTTextModel with nvidia/groupvit-gcc-yfcc style configuration
>>> configuration = GroupViTTextConfig()
>>> model = GroupViTTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
GroupViTVisionConfig
class transformers.GroupViTVisionConfig
< source >( hidden_size = 384 intermediate_size = 1536 depths = [6, 3, 3] num_hidden_layers = 12 num_group_tokens = [64, 8, 0] num_output_groups = [64, 8, 8] num_attention_heads = 6 image_size = 224 patch_size = 16 num_channels = 3 hidden_act = 'gelu' layer_norm_eps = 1e-05 dropout = 0.0 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 assign_eps = 1.0 assign_mlp_ratio = [0.5, 4] **kwargs )
Parameters
- hidden_size (int, optional, defaults to 384) — Dimensionality of the encoder layers and the pooler layer.
- intermediate_size (int, optional, defaults to 1536) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- depths (List[int], optional, defaults to [6, 3, 3]) — The number of layers in each encoder block.
- num_group_tokens (List[int], optional, defaults to [64, 8, 0]) — The number of group tokens for each stage.
- num_output_groups (List[int], optional, defaults to [64, 8, 8]) — The number of output groups for each stage, 0 means no group.
- num_attention_heads (int, optional, defaults to 6) — Number of attention heads for each attention layer in the Transformer encoder.
- image_size (int, optional, defaults to 224) — The size (resolution) of each image.
- patch_size (int, optional, defaults to 16) — The size (resolution) of each patch.
- hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported.
- layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.
- dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- initializer_factor (float, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).
This is the configuration class to store the configuration of a GroupViTVisionModel. It is used to instantiate a GroupViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GroupViT nvidia/groupvit-gcc-yfcc architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import GroupViTVisionConfig, GroupViTVisionModel
>>> # Initializing a GroupViTVisionModel with nvidia/groupvit-gcc-yfcc style configuration
>>> configuration = GroupViTVisionConfig()
>>> model = GroupViTVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
GroupViTModel
class transformers.GroupViTModel
< source >( config: GroupViTConfig )
Parameters
- config (GroupViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_segmentation: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
- return_loss (bool, optional) — Whether or not to return the contrastive loss.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- output_segmentation (bool, optional) — Whether or not to return the segmentation logits. See segmentation_logits under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or tuple(torch.FloatTensor)
A transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GroupViTConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
- logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
- logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
- segmentation_logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The returned logits do not necessarily have the same size as the pixel_values passed as inputs. This avoids doing two interpolations and losing quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
- text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of GroupViTTextModel.
- image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of GroupViTVisionModel.
- text_model_output (BaseModelOutputWithPooling) — The output of the GroupViTTextModel.
- vision_model_output (BaseModelOutputWithPooling) — The output of the GroupViTVisionModel.
The GroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, GroupViTModel
>>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
get_text_features
< source >(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → text_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim))
The text embeddings obtained by applying the projection layer to the pooled output of GroupViTTextModel.
The GroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from transformers import CLIPTokenizer, GroupViTModel
>>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
get_image_features
< source >(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → image_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
- pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim))
The image embeddings obtained by applying the projection layer to the pooled output of GroupViTVisionModel.
The GroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, GroupViTModel
>>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> image_features = model.get_image_features(**inputs)
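The text and image features can also be compared directly. The following is a hedged sketch that reproduces image-text similarity scores by L2-normalizing both embeddings; the use of model.logit_scale as the learned temperature is an assumption carried over from the CLIP-style implementation, and the canonical path remains the GroupViTModel forward shown earlier.
>>> import torch
>>> # reuse `model`, `processor`, and `image` from the examples above
>>> text_inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     text_features = model.get_text_features(**text_inputs)
...     image_features = model.get_image_features(**processor(images=image, return_tensors="pt"))
>>> # L2-normalize, then take scaled dot products (assumes a CLIP-style `logit_scale` parameter)
>>> text_features = text_features / text_features.norm(dim=-1, keepdim=True)
>>> image_features = image_features / image_features.norm(dim=-1, keepdim=True)
>>> logits_per_image = model.logit_scale.exp() * image_features @ text_features.t()
>>> probs = logits_per_image.softmax(dim=1)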
GroupViTTextModel
forward
< source >(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GroupViTTextConfig) and inputs.
- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g., for BERT-family models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The GroupViTTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from transformers import CLIPTokenizer, GroupViTTextModel
>>> tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = GroupViTTextModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output # pooled (EOS token) states
GroupViTVisionModel
forward
< source >(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
- pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GroupViTVisionConfig) and inputs.
- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g., for BERT-family models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The GroupViTVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, GroupViTVisionModel
>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = GroupViTVisionModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output # pooled CLS states
TFGroupViTModel
class transformers.TFGroupViTModel
< source >( *args **kwargs )
Parameters
- config (GroupViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TF 2.0 models accept two formats as inputs:
- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).
If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument (a sketch of the dictionary form follows this list):
- a single Tensor with input_ids only and nothing else: model(input_ids)
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
- a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
call
< source >(
input_ids: TFModelInputType | None = None
pixel_values: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
return_loss: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
output_segmentation: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
) → transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or tuple(tf.Tensor)
Parameters
- input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
- pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
- attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- return_loss (bool, optional) — Whether or not to return the contrastive loss.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- output_segmentation (bool, optional) — Whether or not to return the segmentation logits. See segmentation_logits under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
- training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
Returns
transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or tuple(tf.Tensor)
A transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GroupViTConfig) and inputs.
- loss (tf.Tensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
- logits_per_image (tf.Tensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
- logits_per_text (tf.Tensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
- segmentation_logits (tf.Tensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The returned logits do not necessarily have the same size as the pixel_values passed as inputs. This avoids doing two interpolations and losing quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
- text_embeds (tf.Tensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of TFGroupViTTextModel.
- image_embeds (tf.Tensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of TFGroupViTVisionModel.
- text_model_output (TFBaseModelOutputWithPooling) — The output of the TFGroupViTTextModel.
- vision_model_output (TFBaseModelOutputWithPooling) — The output of the TFGroupViTVisionModel.
The TFGroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, TFGroupViTModel
>>> import tensorflow as tf
>>> model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True
... )
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = tf.math.softmax(logits_per_image, axis=1) # we can take the softmax to get the label probabilities
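Continuing the example, a hedged sketch of requesting segmentation logits from the TensorFlow model; as with the PyTorch model, you may want to resize the logits to the original image size before taking the per-pixel argmax.
>>> seg_outputs = model(**inputs, output_segmentation=True)
>>> # segmentation_logits: (batch_size, num_texts, logits_height, logits_width)
>>> segmentation = tf.argmax(seg_outputs.segmentation_logits, axis=1)  # per-pixel index into the input texts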
get_text_features
< source >(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
) → text_features (tf.Tensor of shape (batch_size, output_dim))
Parameters
- input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
- attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
- training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
Returns
text_features (tf.Tensor of shape (batch_size, output_dim))
The text embeddings obtained by applying the projection layer to the pooled output of TFGroupViTTextModel.
The TFGroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from transformers import CLIPTokenizer, TFGroupViTModel
>>> model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
>>> text_features = model.get_text_features(**inputs)
get_image_features
< source >(
pixel_values: TFModelInputType | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
) → image_features (tf.Tensor of shape (batch_size, output_dim))
Parameters
- pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
- training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
Returns
image_features (tf.Tensor of shape (batch_size, output_dim))
The image embeddings obtained by applying the projection layer to the pooled output of TFGroupViTVisionModel.
The TFGroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, TFGroupViTModel
>>> model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="tf")
>>> image_features = model.get_image_features(**inputs)
TFGroupViTTextModel
call
< source >(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
- input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
- attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
- training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GroupViTTextConfig) and inputs.
- last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input; you are often better off averaging or pooling the sequence of hidden-states for the whole input sequence.
- hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGroupViTTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from transformers import CLIPTokenizer, TFGroupViTTextModel
>>> tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = TFGroupViTTextModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output # pooled (EOS token) states
TFGroupViTVisionModel
call
< source >(
pixel_values: TFModelInputType | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
- pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
- training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GroupViTVisionConfig) and inputs.
- last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input; you are often better off averaging or pooling the sequence of hidden-states for the whole input sequence.
- hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGroupViTVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, TFGroupViTVisionModel
>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = TFGroupViTVisionModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="tf")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output # pooled CLS states