The BEiT-3 model was proposed in Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks by Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei.
The abstract from the paper is the following:
A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce a general-purpose multimodal foundation model BEiT-3, which achieves state-of-the-art transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pretraining task, and model scaling up. We introduce Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked “language” modeling on images (Imglish), texts (English), and image-text pairs (“parallel sentences”) in a unified manner. Experimental results show that BEiT-3 obtains state-of-the-art performance on object detection (COCO), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).
This model was contributed by Raghavan. The original code can be found here.
( loss: typing.Optional[torch.Tensor] = None text_hidden: typing.Optional[torch.FloatTensor] = None image_hidden: typing.Optional[torch.FloatTensor] = None )
Parameters
loss (torch.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the text decoder.
text_hidden (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when the model is initialized with with_projection=True) — The text embeddings obtained by applying the projection layer to the pooler_output.
image_hidden (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when the model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output.
Adapted from the base class for vision model outputs that also contains image embeddings obtained by pooling the last hidden states. This class additionally includes the loss term from the text decoder as well as the image-text similarity scores.
( embed_dim = 768 num_attention_heads = 12 hidden_size = 3072 layers = 12 encoder_normalize_before = False normalize_before = False activation_fn = 'gelu' dropout = 0.0 attention_dropout = 0.0 activation_dropout = 0.0 subln = True max_source_positions = 1024 layernorm_eps = 1e-05 vocab_size = 64010 img_size = 224 patch_size = 16 in_chans = 3 num_labels = 2 initializer_range = 0.02 label_smoothing = 0.1 **kwargs )
Parameters
vocab_size (int, optional, defaults to 64010) — Vocabulary size of the BEiT3 model. Defines the number of different tokens that can be used during pre-training.
embed_dim (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
hidden_size (int, optional, defaults to 3072) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
activation_fn (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layernorm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
img_size (int, optional, defaults to 224) — The size (resolution) of each image.
patch_size (int, optional, defaults to 16) — The size (resolution) of each patch.
in_chans (int, optional, defaults to 3) — The number of input channels.
use_mask_token (bool, optional, defaults to False) — Whether to use a mask token for masked image modeling.
use_absolute_position_embeddings (bool, optional, defaults to False) — Whether to use BERT-style absolute position embeddings.
use_relative_position_bias (bool, optional, defaults to False) — Whether to use T5-style relative position embeddings in the self-attention layers.
use_shared_relative_position_bias (bool, optional, defaults to False) — Whether to use the same relative position embeddings across all self-attention layers of the Transformer.
layer_scale_init_value (float, optional, defaults to 0.1) — Scale to use in the self-attention layers. 0.1 for base, 1e-5 for large. Set 0 to disable layer scale.
drop_path_rate (float, optional, defaults to 0.1) — Stochastic depth rate per sample (when applied in the main path of residual layers).
use_mean_pooling (bool, optional, defaults to True) — Whether to mean pool the final hidden states of the patches instead of using the final hidden state of the CLS token, before applying the classification head.
out_indices (List[int], optional, defaults to [3, 5, 7, 11]) — Indices of the feature maps to use for semantic segmentation.
pool_scales (Tuple[int], optional, defaults to [1, 2, 3, 6]) — Pooling scales used in Pooling Pyramid Module applied on the last feature map.
use_auxiliary_head (bool, optional, defaults to True) — Whether to use an auxiliary head during training.
auxiliary_loss_weight (float, optional, defaults to 0.4) — Weight of the cross-entropy loss of the auxiliary head.
auxiliary_channels (int, optional, defaults to 256) — Number of channels to use in the auxiliary head.
auxiliary_num_convs (int, optional, defaults to 1) — Number of convolutional layers to use in the auxiliary head.
auxiliary_concat_input (bool, optional, defaults to False) — Whether to concatenate the output of the auxiliary head with the input before the classification layer.
semantic_loss_ignore_index (int, optional, defaults to 255) — The index that is ignored by the loss function of the semantic segmentation model.
This is the configuration class to store the configuration of a Beit3Model. It is used to instantiate a BEiT3 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the BEiT3 microsoft/beit3-base-patch16-224-pt22k architecture.
Example:
>>> from transformers import Beit3Config, Beit3Model
>>> # Initializing a BEiT3 beit3-base-patch16-224-pt22k style configuration
>>> configuration = Beit3Config()
>>> # Initializing a model (with random weights) from the beit3-base-patch16-224-pt22k style configuration
>>> model = Beit3Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
( image_processor = None tokenizer = None **kwargs )
Parameters
image_processor (Beit3ImageProcessor) — The image processor is a required input.
tokenizer ([XLMRobertaTokenizer, XLMRobertaTokenizerFast]) — The tokenizer is a required input.
Constructs a Beit3 processor which wraps Beit3ImageProcessor and XLMRobertaTokenizer/XLMRobertaTokenizerFast into a single processor that inherits both the image processor and tokenizer functionalities. See the __call__() and decode() methods for more information.
This method forwards all its arguments to XLMRobertaTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to XLMRobertaTokenizerFast’s decode(). Please refer to the docstring of this method for more information.
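A minimal usage sketch for the processor. It assumes Beit3Processor follows the standard processor API (from_pretrained and a combined text/image __call__) and that the microsoft/beit3-base-patch16-224-pt22k checkpoint referenced in the configuration section hosts the processor files; treat both as assumptions.
>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3Processor
>>> # Assumption: the checkpoint ships both the image processor and tokenizer files
>>> processor = Beit3Processor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # Text goes through the XLM-R tokenizer, the image through Beit3ImageProcessor
>>> inputs = processor(text="a photo of two cats", images=image, return_tensors="pt")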
( do_resize = True size = None resample = <Resampling.BICUBIC: 3> do_center_crop = False crop_size = None do_rescale = True rescale_factor = 0.00392156862745098 do_normalize = True image_mean = None image_std = None **kwargs )
Parameters
do_resize (bool, optional, defaults to True) — Whether to resize the shorter edge of the input to a certain size.
size (Dict[str, int], optional, defaults to {"height": 768, "width": 768}) — The size to use for resizing the image. Only has an effect if do_resize is set to True. If size is a sequence like (h, w), output size will be matched to this. If size is an int, then the image will be resized to (size, size).
resample (int, optional, defaults to PIL.Image.Resampling.BICUBIC) — An optional resampling filter. This can be one of PIL.Image.Resampling.NEAREST, PIL.Image.Resampling.BOX, PIL.Image.Resampling.BILINEAR, PIL.Image.Resampling.HAMMING, PIL.Image.Resampling.BICUBIC or PIL.Image.Resampling.LANCZOS. Only has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to False) — Whether to crop the input at the center. If the input size is smaller than crop_size along any edge, the image is padded with 0's and then center cropped.
crop_size (Dict[str, int], optional, defaults to {"height": 768, "width": 768}) — The size to use for center cropping the image. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to True) — Whether to rescale the input by a certain factor.
rescale_factor (float, optional, defaults to 1/255) — The factor to use for rescaling the image. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to True) — Whether or not to normalize the input with image_mean and image_std.
image_mean (List[float], optional, defaults to [0.48145466, 0.4578275, 0.40821073]) — The sequence of means for each channel, to be used when normalizing images.
image_std (List[float], optional, defaults to [0.26862954, 0.26130258, 0.27577711]) — The sequence of standard deviations for each channel, to be used when normalizing images.
Constructs a Beit3 image processor.
This image processor inherits from ImageProcessingMixin which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None resample: Resampling = None do_center_crop: typing.Optional[bool] = None crop_size: typing.Union[typing.Dict[str, int], NoneType] = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> **kwargs )
Parameters
images (ImageInput) — The image or batch of images to be prepared.
do_resize (bool, optional, defaults to self.do_resize) — Whether or not to resize the input. If True, will resize the input to the size specified by size.
size (Dict[str, int], optional, defaults to self.size) — The size to resize the input to. Only has an effect if do_resize is set to True.
resample (PILImageResampling, optional, defaults to self.resample) — The resampling filter to use when resizing the input. Only has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether or not to center crop the input. If True, will center crop the input to the size specified by crop_size.
crop_size (Dict[str, int], optional, defaults to self.crop_size) — The size to center crop the input to. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) — Whether or not to rescale the input. If True, will rescale the input by dividing it by rescale_factor.
rescale_factor (float, optional, defaults to self.rescale_factor) — The factor to rescale the input by. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) — Whether or not to normalize the input. If True, will normalize the input by subtracting image_mean and dividing by image_std.
image_mean (Union[float, List[float]], optional, defaults to self.image_mean) — The mean to subtract from the input when normalizing. Only has an effect if do_normalize is set to True.
image_std (Union[float, List[float]], optional, defaults to self.image_std) — The standard deviation to divide the input by when normalizing. Only has an effect if do_normalize is set to True.
return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Prepares an image or batch of images for the model.
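A short sketch of using the image processor on its own, assuming it follows the ImageProcessingMixin API described above; the checkpoint name is taken from the configuration section and is an assumption.
>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3ImageProcessor
>>> image_processor = Beit3ImageProcessor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # Resizes, rescales and normalizes the image with the defaults documented above
>>> pixel_values = image_processor(images=image, return_tensors="pt")["pixel_values"]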
( config )
Parameters
Beit3 is a multimodal foundation model. It achieves big convergence across backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as another language. Beit3 uses a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids = None pixel_values = None text_padding_position = None attn_mask = None vision_masked_position = None incremental_state = None positions = None return_dict = None output_hidden_states = True )
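A hedged forward-pass sketch for the base model. The argument names follow the signature above; the checkpoint name and the availability of pretrained Beit3Model weights under it are assumptions.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3Processor, Beit3Model
>>> processor = Beit3Processor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> model = Beit3Model.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text="a photo of two cats", images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"])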
( config )
Parameters
Beit3ForCaptioning has a linear head on top of Beit3Model for image captioning. Beit3 is a multimodal foundation model. It achieves big convergence across backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as another language. Beit3 uses a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids pixel_values padding_mask language_masked_pos text_len = None incremental_state = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
language_masked_pos (torch.LongTensor of shape (batch_size, sequence_length)) — Mask denoting the positions of the text tokens to be predicted for captioning.
text_len (torch.LongTensor) — Length of the text used for captioning.
incremental_state (Dict) — A dictionary containing the incremental states, layer-wise.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the classification loss. Indices should be in [0, ..., config.num_labels - 1]. A classification loss is computed (Cross-Entropy) against these labels.
The Beit3ForCaptioning forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
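A hedged sketch of a captioning forward pass. BEiT-3 produces captions by predicting masked text tokens conditioned on the image, so the call needs a language_masked_pos mask; the single illustrative masked position below and the checkpoint name are assumptions, not the reference generation loop.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3Processor, Beit3ForCaptioning
>>> processor = Beit3Processor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> model = Beit3ForCaptioning.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = processor(text="a photo of", images=image, return_tensors="pt")
>>> input_ids = inputs["input_ids"]
>>> padding_mask = torch.zeros_like(input_ids)  # no padding in this single-example batch
>>> language_masked_pos = torch.zeros_like(input_ids)
>>> language_masked_pos[:, -1] = 1  # illustrative: ask the model to fill in the last position
>>> with torch.no_grad():
...     outputs = model(
...         input_ids=input_ids,
...         pixel_values=inputs["pixel_values"],
...         padding_mask=padding_mask,
...         language_masked_pos=language_masked_pos,
...     )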
( config )
Parameters
Beit3ForImageClassification has a linear head on top of Beit3Model for image classification. Beit3 is a multimodal foundation model. It achieves big convergence across backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as another language. Beit3 uses a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( pixel_values: typing.Optional[torch.FloatTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None )
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the classification loss. Indices should be in [0, ..., config.num_labels - 1]. A classification loss is computed (Cross-Entropy) against these labels.
The Beit3ForImageClassification forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
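A sketch of image classification. Note that the pre-trained checkpoint referenced above does not ship a trained classification head, so the predicted class is only meaningful for a fine-tuned checkpoint; the checkpoint name and the .logits attribute on the output are assumptions.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3ImageProcessor, Beit3ForImageClassification
>>> image_processor = Beit3ImageProcessor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> model = Beit3ForImageClassification.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(pixel_values=inputs["pixel_values"]).logits
>>> predicted_class = logits.argmax(-1).item()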
( config )
Parameters
Beit3ForImageTextRetrieval has projection heads on top of Beit3Model for image-text retrieval. Beit3 is a multimodal foundation model. It achieves big convergence across backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as another language. Beit3 uses a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids: LongTensor pixel_values: FloatTensor padding_mask = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
padding_mask (torch.LongTensor of shape (batch_size, sequence_length)) — Padding mask for input tokens, of the same shape as input_ids.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The Beit3ForImageTextRetrieval forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
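A hedged sketch of image-text retrieval scoring. The text_hidden and image_hidden fields are the projected embeddings documented in the output class at the top of this page; the checkpoint name and the cosine-similarity scoring are illustrative assumptions.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3Processor, Beit3ForImageTextRetrieval
>>> processor = Beit3Processor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> model = Beit3ForImageTextRetrieval.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = processor(text="two cats lying on a couch", images=image, return_tensors="pt")
>>> padding_mask = torch.zeros_like(inputs["input_ids"])  # no padding in this single-example batch
>>> with torch.no_grad():
...     outputs = model(
...         input_ids=inputs["input_ids"],
...         pixel_values=inputs["pixel_values"],
...         padding_mask=padding_mask,
...     )
>>> # Compare the projected embeddings of each modality
>>> score = torch.nn.functional.cosine_similarity(outputs.text_hidden, outputs.image_hidden)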
( config )
Parameters
Beit3ForVisualQuestionAnswering has a linear head on top of Beit3Model for visual question answering. Beit3 is a multimodal foundation model. It achieves big convergence across backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as another language. Beit3 uses a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids pixel_values padding_mask output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
padding_mask (torch.LongTensor of shape (batch_size, sequence_length)) — Padding mask for input tokens, of the same shape as input_ids.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the classification loss. Indices should be in [0, ..., config.num_labels - 1]. A classification loss is computed (Cross-Entropy) against these labels.
The Beit3ForVisualQuestionAnswering forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
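A sketch of visual question answering. VQA checkpoints are fine-tuned with a classification head over an answer vocabulary (VQAv2), so the argmax below only maps to an answer string for a fine-tuned model whose config carries the label mapping; the checkpoint name and the .logits attribute are assumptions.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3Processor, Beit3ForVisualQuestionAnswering
>>> processor = Beit3Processor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> model = Beit3ForVisualQuestionAnswering.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = processor(text="How many cats are there?", images=image, return_tensors="pt")
>>> padding_mask = torch.zeros_like(inputs["input_ids"])  # no padding in this single-example batch
>>> with torch.no_grad():
...     logits = model(
...         input_ids=inputs["input_ids"],
...         pixel_values=inputs["pixel_values"],
...         padding_mask=padding_mask,
...     ).logits
>>> answer_id = logits.argmax(-1).item()  # index into the answer vocabulary of a fine-tuned head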
( config )
Parameters
Beit3ForVisualReasoning has an MLP head on top of Beit3Model. Beit3 is a multimodal foundation model. It achieves big convergence across backbone architecture, pretraining task, and model scaling. The key idea in BEiT-3 is to model images as another language. Beit3 uses a Multiway Transformer architecture with a shared self-attention module. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids pixel_values1 pixel_values2 padding_mask output_hidden_states = None return_dict = None labels = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
pixel_values1 (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values of the first image. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
pixel_values2 (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values of the second image. Pixel values can be obtained using AutoImageProcessor. See Beit3ImageProcessor.__call__() for details.
padding_mask (torch.LongTensor of shape (batch_size, sequence_length)) — Padding mask for input tokens, of the same shape as input_ids.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. A classification loss is computed (Cross-Entropy) against these labels.
The Beit3ForVisualReasoning forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
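A sketch of visual reasoning in the NLVR2 setup, where one sentence is paired with two images and the model predicts whether the sentence is true of the pair. The same image is used twice here purely for illustration; the checkpoint name and the .logits attribute are assumptions.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import Beit3Processor, Beit3ForVisualReasoning
>>> processor = Beit3Processor.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> model = Beit3ForVisualReasoning.from_pretrained("microsoft/beit3-base-patch16-224-pt22k")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image1 = Image.open(requests.get(url, stream=True).raw)
>>> image2 = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text="The left image contains two cats.", images=image1, return_tensors="pt")
>>> pixel_values2 = processor(images=image2, return_tensors="pt")["pixel_values"]
>>> padding_mask = torch.zeros_like(inputs["input_ids"])  # no padding in this single-example batch
>>> with torch.no_grad():
...     logits = model(
...         input_ids=inputs["input_ids"],
...         pixel_values1=inputs["pixel_values"],
...         pixel_values2=pixel_values2,
...         padding_mask=padding_mask,
...     ).logits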