BEiT-3

Overview

The BEiT-3 model was proposed in Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks by Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei.

The abstract from the paper is the following:

A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce a general-purpose multimodal foundation model BEiT-3, which achieves state-of-the-art transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pretraining task, and model scaling up. We introduce Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked “language” modeling on images (Imglish), texts (English), and image-text pairs (“parallel sentences”) in a unified manner. Experimental results show that BEiT-3 obtains state-of-the-art performance on object detection (COCO), semantic segmentation (ADE20K), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).

This model was contributed by Raghavan. The original code can be found here.

BEiT3 specific outputs

class transformers.models.beit3.modeling_beit3.Biet3ImageTextMatchingModelOutput

< >

( loss: typing.Optional[torch.Tensor] = None text_hidden: typing.Optional[torch.FloatTensor] = None image_hidden: typing.Optional[torch.FloatTensor] = None )

Parameters

  • loss (torch.Tensor of shape (1,), optional, returned when labels is provided) — Image-text matching loss.
  • text_hidden (torch.FloatTensor of shape (batch_size, output_dim), optional) — The text embeddings obtained by applying the projection layer to the pooled text representation.
  • image_hidden (torch.FloatTensor of shape (batch_size, output_dim), optional) — The image embeddings obtained by applying the projection layer to the pooled image representation.

Adapted from the base class for vision model outputs that contains the image embeddings obtained by pooling the last hidden states. This class additionally holds the text embeddings and the image-text matching loss.

Beit3Config

class transformers.Beit3Config

< >

( embed_dim = 768 num_attention_heads = 12 hidden_size = 3072 layers = 12 encoder_normalize_before = False normalize_before = False activation_fn = 'gelu' dropout = 0.0 attention_dropout = 0.0 activation_dropout = 0.0 subln = True max_source_positions = 1024 layernorm_eps = 1e-05 vocab_size = 64010 img_size = 224 patch_size = 16 in_chans = 3 num_labels = 2 initializer_range = 0.02 label_smoothing = 0.1 **kwargs )

Parameters

  • embed_dim (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
  • num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
  • hidden_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
  • layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
  • encoder_normalize_before (bool, optional, defaults to False) — Whether to apply layer normalization before the encoder blocks (pre-norm) instead of after.
  • normalize_before (bool, optional, defaults to False) — Whether to apply layer normalization before each attention and feed-forward sub-layer instead of after.
  • activation_fn (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
  • dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
  • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • activation_dropout (float, optional, defaults to 0.0) — The dropout ratio applied after the activation in the feed-forward layers.
  • subln (bool, optional, defaults to True) — Whether to use sub-LayerNorm in the encoder blocks.
  • max_source_positions (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with.
  • layernorm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
  • vocab_size (int, optional, defaults to 64010) — Vocabulary size of the BEiT3 model. Defines the number of different tokens that can be represented by the input_ids passed when calling Beit3Model.
  • img_size (int, optional, defaults to 224) — The size (resolution) of each image.
  • patch_size (int, optional, defaults to 16) — The size (resolution) of each patch.
  • in_chans (int, optional, defaults to 3) — The number of input channels.
  • num_labels (int, optional, defaults to 2) — The number of labels used by the classification heads.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • label_smoothing (float, optional, defaults to 0.1) — The amount of label smoothing applied to the captioning loss.

This is the configuration class to store the configuration of a Beit3Model. It is used to instantiate a BEiT-3 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the BEiT-3 microsoft/beit3-base-patch16-224-pt22k architecture.

Example:

>>> from transformers import Beit3Config, Beit3Model

>>> # Initializing a BEiT3 beit3-base-patch16-224-pt22k style configuration
>>> configuration = Beit3Config()

>>> # Initializing a model (with random weights) from the beit3-base-patch16-224-pt22k style configuration
>>> model = Beit3Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

Beit3Processor

class transformers.Beit3Processor

< >

( image_processor = None tokenizer = None **kwargs )

Parameters

  • image_processor (Beit3ImageProcessor) — The image processor is a required input.
  • tokenizer ([XLMRobertaTokenizer, XLMRobertaTokenizerFast]) — The tokenizer is a required input.

Constructs a BEiT3 processor which wraps Beit3ImageProcessor and XLMRobertaTokenizer/XLMRobertaTokenizerFast into a single processor that inherits both the image processor and tokenizer functionalities. See the __call__() and decode() methods for more information.
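
The example below is a minimal sketch of preparing a paired image-text input with the processor. The checkpoint name is the one referenced in the configuration section above and is used only for illustration; the text=/images= keyword interface is the usual Transformers processor convention and is assumed here rather than confirmed by this page.

>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3Processor

>>> # checkpoint name assumed for illustration; see the configuration section above
>>> processor = Beit3Processor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # assumes the usual text=/images= processor interface
>>> inputs = processor(text="a photo of two cats", images=image, return_tensors="pt")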

batch_decode

< >

( *args **kwargs )

This method forwards all its arguments to XLMRobertaTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.

decode

< >

( *args **kwargs )

This method forwards all its arguments to XLMRobertaTokenizerFast’s decode(). Please refer to the docstring of this method for more information.

Beit3ImageProcessor

class transformers.Beit3ImageProcessor

< >

( do_resize = True size = None resample = <Resampling.BICUBIC: 3> do_center_crop = False crop_size = None do_rescale = True rescale_factor = 0.00392156862745098 do_normalize = True image_mean = None image_std = None **kwargs )

Parameters

  • do_resize (bool, optional, defaults to True) — Whether to resize the shorter edge of the input to a certain size.
  • size (Dict[str, int], optional, defaults to {"height": 768, "width": 768}) — The size to use for resizing the image. Only has an effect if do_resize is set to True. If size is a sequence like (h, w), the output size will be matched to this. If size is an int, the image will be resized to (size, size).
  • resample (int, optional, defaults to PIL.Image.Resampling.BICUBIC) — An optional resampling filter. This can be one of PIL.Image.Resampling.NEAREST, PIL.Image.Resampling.BOX, PIL.Image.Resampling.BILINEAR, PIL.Image.Resampling.HAMMING, PIL.Image.Resampling.BICUBIC or PIL.Image.Resampling.LANCZOS. Only has an effect if do_resize is set to True.
  • do_center_crop (bool, optional, defaults to False) — Whether to crop the input at the center. If the input size is smaller than crop_size along any edge, the image is padded with 0’s and then center cropped.
  • crop_size (Dict[str, int], optional, defaults to {"height": 768, "width": 768}) — The size to use for center cropping the image. Only has an effect if do_center_crop is set to True.
  • do_rescale (bool, optional, defaults to True) — Whether to rescale the input by a certain factor.
  • rescale_factor (float, optional, defaults to 1/255) — The factor to use for rescaling the image. Only has an effect if do_rescale is set to True.
  • do_normalize (bool, optional, defaults to True) — Whether or not to normalize the input with image_mean and image_std. Only has an effect if do_normalize is set to True.
  • image_mean (List[float], optional, defaults to [0.48145466, 0.4578275, 0.40821073]) — The sequence of means for each channel, to be used when normalizing images.
  • image_std (List[float], optional, defaults to [0.26862954, 0.26130258, 0.27577711]) — The sequence of standard deviations for each channel, to be used when normalizing images.

Constructs a BEiT3 image processor.

This image processor inherits from ImageProcessingMixin which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

preprocess

< >

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None resample: Resampling = None do_center_crop: typing.Optional[bool] = None crop_size: typing.Union[typing.Dict[str, int], NoneType] = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> **kwargs )

Parameters

  • images (ImageInput) — The image or batch of images to be prepared.
  • do_resize (bool, optional, defaults to self.do_resize) — Whether or not to resize the input. If True, will resize the input to the size specified by size.
  • size (Dict[str, int], optional, defaults to self.size) — The size to resize the input to. Only has an effect if do_resize is set to True.
  • resample (PILImageResampling, optional, defaults to self.resample) — The resampling filter to use when resizing the input. Only has an effect if do_resize is set to True.
  • do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether or not to center crop the input. If True, will center crop the input to the size specified by crop_size.
  • crop_size (Dict[str, int], optional, defaults to self.crop_size) — The size to center crop the input to. Only has an effect if do_center_crop is set to True.
  • do_rescale (bool, optional, defaults to self.do_rescale) — Whether or not to rescale the input. If True, will rescale the input by dividing it by rescale_factor.
  • rescale_factor (float, optional, defaults to self.rescale_factor) — The factor to rescale the input by. Only has an effect if do_rescale is set to True.
  • do_normalize (bool, optional, defaults to self.do_normalize) — Whether or not to normalize the input. If True, will normalize the input by subtracting image_mean and dividing by image_std.
  • image_mean (Union[float, List[float]], optional, defaults to self.image_mean) — The mean to subtract from the input when normalizing. Only has an effect if do_normalize is set to True.
  • image_std (Union[float, List[float]], optional, defaults to self.image_std) — The standard deviation to divide the input by when normalizing. Only has an effect if do_normalize is set to True.
  • return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:
    • Unset: Return a list of np.ndarray.
    • TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
    • TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
    • TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
    • TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
  • data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
    • ChannelDimension.FIRST: image in (num_channels, height, width) format.
    • ChannelDimension.LAST: image in (height, width, num_channels) format.
    • Unset: defaults to the channel dimension format of the input image.

Prepares an image or batch of images for the model.
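
A minimal sketch of calling preprocess() directly, using the signature documented above. The size passed here (matching the 224 img_size default of the configuration) and the pixel_values key on the returned batch follow the usual Transformers image-processor conventions and are assumptions for illustration.

>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3ImageProcessor

>>> image_processor = Beit3ImageProcessor(size={"height": 224, "width": 224})

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> batch = image_processor.preprocess(images=image, return_tensors="pt")
>>> batch["pixel_values"].shape  # expected (1, 3, 224, 224) with the size above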

Beit3Model

class transformers.Beit3Model

< >

( config )

Parameters

  • config (Beit3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

BEiT-3 is a general-purpose multimodal foundation model. It advances the big convergence of backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as a foreign language, using a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

< >

( input_ids = None pixel_values = None text_padding_position = None attn_mask = None vision_masked_position = None incremental_state = None positions = None return_dict = None output_hidden_states = True )
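
A minimal sketch of a forward pass with randomly initialized weights and toy inputs, based on the signature above. In a real pipeline the inputs would come from Beit3Processor; the tensor shapes below are assumptions derived from the configuration defaults.

>>> import torch
>>> from transformers import Beit3Config, Beit3Model

>>> config = Beit3Config()
>>> model = Beit3Model(config)

>>> input_ids = torch.randint(0, config.vocab_size, (1, 16))
>>> pixel_values = torch.randn(1, 3, config.img_size, config.img_size)

>>> with torch.no_grad():
...     outputs = model(input_ids=input_ids, pixel_values=pixel_values)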

Beit3ForCaptioning

class transformers.Beit3ForCaptioning

< >

( config )

Parameters

  • config (Beit3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Beit3ForCaptioning has a linear head on top of Beit3Model for image captioning. BEiT-3 is a general-purpose multimodal foundation model. It advances the big convergence of backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as a foreign language, using a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

< >

( input_ids pixel_values padding_mask language_masked_pos text_len = None incremental_state = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None )

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
  • language_masked_pos (torch.LongTensor of shape (batch_size, sequence_length)) — Mask denoting which text tokens are used for captioning:

    • 1 indicates the token is present,
    • 0 indicates the token is absent.
  • text_len (torch.LongTensor, optional) — Length of the text used for captioning.
  • incremental_state (Dict) — A dictionary containing the layer-wise incremental (cached) states.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the captioning (masked language modeling) loss. Indices should be in [0, ..., config.vocab_size - 1]. A cross-entropy loss is computed against these labels.

The Beit3ForCaptioning forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
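
A minimal sketch of a forward pass with toy inputs, following the parameter list above. The way padding_mask and language_masked_pos are filled here (no padding, the last positions marked as caption tokens) is an assumption for illustration only.

>>> import torch
>>> from transformers import Beit3Config, Beit3ForCaptioning

>>> config = Beit3Config()
>>> model = Beit3ForCaptioning(config)

>>> input_ids = torch.randint(0, config.vocab_size, (1, 16))
>>> pixel_values = torch.randn(1, 3, config.img_size, config.img_size)
>>> padding_mask = torch.zeros(1, 16, dtype=torch.long)  # assumed: no padding in this toy example
>>> language_masked_pos = torch.zeros(1, 16, dtype=torch.long)
>>> language_masked_pos[:, 8:] = 1  # assumed: mark the caption positions

>>> with torch.no_grad():
...     outputs = model(
...         input_ids=input_ids,
...         pixel_values=pixel_values,
...         padding_mask=padding_mask,
...         language_masked_pos=language_masked_pos,
...     )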

Beit3ForImageClassification

class transformers.Beit3ForImageClassification

< >

( config )

Parameters

  • config (Beit3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Beit3ForImageClassification has a linear head on top of Beit3Model for image classification. BEiT-3 is a general-purpose multimodal foundation model. It advances the big convergence of backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as a foreign language, using a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

< >

( pixel_values: typing.Optional[torch.FloatTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None )

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the classification loss. Indices should be in [0, ..., config.num_labels - 1]. A classification loss is computed (Cross-Entropy) against these labels.

The Beit3ForImageClassification forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
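
A minimal sketch of classifying a toy image tensor with randomly initialized weights. The logits attribute on the output follows the usual Transformers convention for classification heads and is assumed here.

>>> import torch
>>> from transformers import Beit3Config, Beit3ForImageClassification

>>> config = Beit3Config(num_labels=2)
>>> model = Beit3ForImageClassification(config)

>>> pixel_values = torch.randn(1, 3, config.img_size, config.img_size)

>>> with torch.no_grad():
...     outputs = model(pixel_values=pixel_values)
>>> predicted_label = outputs.logits.argmax(-1)  # assumes a standard `logits` field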

Beit3ForImageTextRetrieval

class transformers.Beit3ForImageTextRetrieval

< >

( config )

Parameters

  • config (Beit3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Beit3ForImageTextRetrieval has image and text projection heads on top of Beit3Model for image-text retrieval: the image and the text are encoded separately and projected into a shared embedding space so that matching pairs can be ranked by their similarity. BEiT-3 is a general-purpose multimodal foundation model. It advances the big convergence of backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as a foreign language, using a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

< >

( input_ids: LongTensor pixel_values: FloatTensor padding_mask = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
  • padding_mask (torch.LongTensor of shape (batch_size, sequence_length)) — Padding mask for input tokens, of the same shape as input_ids:

    • 1 indicates the token is not masked,
    • 0 indicates the token is masked.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

The Beit3ForImageTextRetrieval forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
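
A minimal sketch of encoding a small batch of image-text pairs with toy inputs. The text_hidden and image_hidden fields come from the output class documented at the top of this page; comparing them with a cosine similarity to rank pairs is the usual dual-encoder recipe and is an assumption about how this head is meant to be used.

>>> import torch
>>> from transformers import Beit3Config, Beit3ForImageTextRetrieval

>>> config = Beit3Config()
>>> model = Beit3ForImageTextRetrieval(config)

>>> input_ids = torch.randint(0, config.vocab_size, (2, 16))
>>> pixel_values = torch.randn(2, 3, config.img_size, config.img_size)

>>> with torch.no_grad():
...     outputs = model(input_ids=input_ids, pixel_values=pixel_values)

>>> # assumed usage: rank pairs by the similarity of the two embeddings
>>> text_embeds = torch.nn.functional.normalize(outputs.text_hidden, dim=-1)
>>> image_embeds = torch.nn.functional.normalize(outputs.image_hidden, dim=-1)
>>> similarity = text_embeds @ image_embeds.T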

Beit3ForVisualQuestionAnswering

class transformers.Beit3ForVisualQuestionAnswering

< >

( config )

Parameters

  • config (Beit3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Beit3ForVisualQuestionAnswering has a linear head on top of Beit3Model for visual question answering. BEiT-3 is a general-purpose multimodal foundation model. It advances the big convergence of backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as a foreign language, using a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

< >

( input_ids pixel_values padding_mask output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None )

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input IDs?

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
  • padding_mask (torch.LongTensor of shape (batch_size, sequence_length)) — Padding mask for input tokens, of the same shape as input_ids:

    • 1 indicates the token is not masked,
    • 0 indicates the token is masked.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the classification loss. Indices should be in [0, ..., config.num_labels - 1]. A classification loss is computed (Cross-Entropy) against these labels.

The Beit3ForVisualQuestionAnswering forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
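
A minimal sketch with toy inputs, following the parameter list above. The number of answer classes (3129, the common VQAv2 setup) is an assumption for the example; the real head size is determined by the checkpoint configuration.

>>> import torch
>>> from transformers import Beit3Config, Beit3ForVisualQuestionAnswering

>>> config = Beit3Config(num_labels=3129)  # assumed VQAv2-style answer vocabulary
>>> model = Beit3ForVisualQuestionAnswering(config)

>>> input_ids = torch.randint(0, config.vocab_size, (1, 16))  # tokenized question
>>> pixel_values = torch.randn(1, 3, config.img_size, config.img_size)
>>> padding_mask = torch.zeros(1, 16, dtype=torch.long)

>>> with torch.no_grad():
...     outputs = model(input_ids=input_ids, pixel_values=pixel_values, padding_mask=padding_mask)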

Beit3ForVisualReasoning

class transformers.Beit3ForVisualReasoning

< >

( config )

Parameters

  • config (Beit3Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Beit3ForVisualReasoning has an MLP head on top of Beit3Model for visual reasoning over image pairs. BEiT-3 is a general-purpose multimodal foundation model. It advances the big convergence of backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as a foreign language, using a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

forward

< >

( input_ids pixel_values1 pixel_values2 padding_mask output_hidden_states = None return_dict = None labels = None )

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • pixel_values1 (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values of the first image. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
  • pixel_values2 (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values of the second image. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
  • padding_mask (torch.LongTensor of shape (batch_size, sequence_length)) — Padding mask for input tokens, of the same shape as input_ids:

    • 1 indicates the token is not masked,
    • 0 indicates the token is masked.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. A classification loss is computed (Cross-Entropy) against these labels.

The Beit3ForVisualReasoning forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
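
A minimal sketch with toy inputs for the two-image NLVR2-style setup, following the parameter list above. The default num_labels of 2 matches the true/false nature of the task; the tensor shapes are assumptions derived from the configuration defaults.

>>> import torch
>>> from transformers import Beit3Config, Beit3ForVisualReasoning

>>> config = Beit3Config()  # num_labels defaults to 2 (true/false)
>>> model = Beit3ForVisualReasoning(config)

>>> input_ids = torch.randint(0, config.vocab_size, (1, 16))  # tokenized statement
>>> pixel_values1 = torch.randn(1, 3, config.img_size, config.img_size)
>>> pixel_values2 = torch.randn(1, 3, config.img_size, config.img_size)
>>> padding_mask = torch.zeros(1, 16, dtype=torch.long)

>>> with torch.no_grad():
...     outputs = model(
...         input_ids=input_ids,
...         pixel_values1=pixel_values1,
...         pixel_values2=pixel_values2,
...         padding_mask=padding_mask,
...     )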