The ICT model was proposed in High-Fidelity Pluralistic Image Completion with Transformers by Ziyu Wan, Jingbo Zhang, Dongdong Chen, and Jing Liao. ICT (Image Completion with Transformers) combines the strengths of transformers and CNNs by decoupling image completion into two steps: pluralistic appearance prior reconstruction with a transformer, which recovers coherent image structures at low resolution, and guided upsampling of those low-resolution priors with CNNs, which replenishes fine textures.
The abstract from the paper is the following:
Image completion has made tremendous progress with convolutional neural networks (CNNs), because of their powerful texture modeling capacity. However, due to some inherent properties (e.g., local inductive prior, spatial-invariant kernels), CNNs do not perform well in understanding global structures or naturally support pluralistic completion. Recently, transformers demonstrate their power in modeling the long-term relationship and generating diverse results, but their computation complexity is quadratic to input length, thus hampering the application in processing high-resolution images. This paper brings the best of both worlds to pluralistic image completion: appearance prior reconstruction with transformer and texture replenishment with CNN. The former transformer recovers pluralistic coherent structures together with some coarse textures, while the latter CNN enhances the local texture details of coarse priors guided by the high-resolution masked images. The proposed method vastly outperforms state-of-the-art methods in terms of three aspects: 1) large performance boost on image fidelity even compared to deterministic completion methods; 2) better diversity and higher fidelity for pluralistic completion; 3) exceptional generalization ability on large masks and generic dataset, like ImageNet.
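To make the two-stage decoupling concrete, here is a purely conceptual sketch. The sample_prior and guided_upsample functions below are hypothetical placeholders, not part of the library API; they only mimic the shapes of the two stages:

>>> import torch

>>> # Stage 1 placeholder: the real transformer autoregressively fills masked token
>>> # positions; sampling repeatedly yields different (pluralistic) priors.
>>> def sample_prior(masked_tokens):
...     return torch.randint(0, 512, masked_tokens.shape)

>>> # Stage 2 placeholder: the real CNN adds fine textures to each low-resolution
>>> # prior, guided by the high-resolution masked input.
>>> def guided_upsample(prior, masked_image):
...     return torch.rand(masked_image.shape)

>>> masked_tokens = torch.zeros(1, 32 * 32, dtype=torch.long)  # low-resolution token grid
>>> masked_image = torch.rand(1, 3, 256, 256)  # high-resolution masked input
>>> priors = [sample_prior(masked_tokens) for _ in range(3)]
>>> completions = [guided_upsample(p, masked_image) for p in priors]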
This model was contributed by Sheon Han. The original code can be found here.
class transformers.IctConfig
( vocab_size = 512 hidden_size = 1024 num_hidden_layers = 35 num_attention_heads = 8 num_residual_blocks = 8 intermediate_size = 4096 activation_function = 'gelu' embedding_dropout_prob = 0.0 residual_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 layer_norm_eps = 1e-12 image_size = 1024 num_channels = 3 qkv_bias = True output_height = 256 output_width = 256 clusters = None **kwargs )
Parameters

- vocab_size (int, optional, defaults to 512) — Vocabulary size of the ICT model. Defines the number of different tokens that can be represented by the pixel_values passed when calling IctTransformer.
- hidden_size (int, optional, defaults to 1024) — Dimensionality of the embeddings and hidden states.
- num_hidden_layers (int, optional, defaults to 35) — Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder.
- num_residual_blocks (int, optional, defaults to 8) — The number of residual blocks in IctGuidedUpsampler.
- intermediate_size (int, optional, defaults to 4096) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- activation_function (str, optional, defaults to "gelu") — Activation function (can be one of the activation functions defined in src/transformers/activations.py).
- embedding_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the embeddings.
- residual_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- image_size (int, optional, defaults to 1024) — The size (resolution) of each image.
- num_channels (int, optional, defaults to 3) — The number of input channels.
- qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.
- output_height (int, optional, defaults to 256) — The height of the final output image.
- output_width (int, optional, defaults to 256) — The width of the final output image.
- clusters (np.ndarray, optional) — Clusters used to quantize the image, of shape (n_clusters, 3). Provide the same clusters used for IctImageProcessor.
This is the configuration class to store the configuration of an IctModel. It is used to instantiate an ICT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the ICT model trained on the ImageNet dataset (sheonhan/ict-imagenet-256).
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import IctConfig, IctModel
>>> # Initializing an ICT ict-imagenet-256 style configuration
>>> configuration = IctConfig()
>>> # Initializing a model (with random weights) from the ict-imagenet-256 style configuration
>>> model = IctModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
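Any of the documented defaults can be overridden when constructing the configuration. For instance (a minimal sketch; the values are illustrative and do not correspond to a released checkpoint):

>>> custom_configuration = IctConfig(num_hidden_layers=12, num_attention_heads=4)
>>> custom_model = IctModel(custom_configuration)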
class transformers.IctImageProcessor
( do_resize: bool = True size: typing.Union[typing.Dict[str, int], NoneType] = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = False rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = False image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_color_quantize: bool = True clusters: typing.Optional[numpy.ndarray] = None **kwargs )
Parameters

- do_resize (bool, optional, defaults to True) — Whether to resize the image's (height, width) dimensions to the specified (size["height"], size["width"]). Can be overridden by the do_resize parameter in the preprocess method.
- size (Dict[str, int], optional, defaults to {"height": 32, "width": 32}) — Size of the output image after resizing. Can be overridden by the size parameter in the preprocess method.
- resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the preprocess method.
- do_rescale (bool, optional, defaults to False) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method.
- rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method.
- do_normalize (bool, optional, defaults to False) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method.
- image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
- image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
- do_color_quantize (bool, optional, defaults to True) — Whether to color quantize the image. Can be overridden by the do_color_quantize parameter in the preprocess method.
- clusters (np.ndarray, optional) — Clusters used to quantize the image, of shape (n_clusters, 3). Only has an effect if do_color_quantize is set to True.
Constructs an ICT image processor.
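In practice the processor is usually loaded with from_pretrained so that the checkpoint's color palette is used; for manual construction, pass an (n_clusters, 3) array yourself. A minimal sketch (the random clusters below are only a stand-in for a real palette):

>>> import numpy as np
>>> from transformers import IctImageProcessor

>>> placeholder_clusters = np.random.rand(512, 3)  # stand-in for a real color palette of shape (n_clusters, 3)
>>> image_processor = IctImageProcessor(size={"height": 32, "width": 32}, clusters=placeholder_clusters)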
preprocess
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: typing.Optional[bool] = None size: typing.Dict[str, int] = None resample: Resampling = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_color_quantize: bool = True clusters: typing.Optional[numpy.ndarray] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> **kwargs )
Parameters

- images (ImageInput) — Image to preprocess.
- do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.
- size (Dict[str, int], optional, defaults to self.size) — Dictionary in the format {"height": h, "width": w} specifying the size of the output image after resizing.
- resample (PILImageResampling filter, optional, defaults to self.resample) — PILImageResampling filter to use if resizing the image, e.g. PILImageResampling.BILINEAR. Only has an effect if do_resize is set to True.
- do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values to the range [0, 1].
- rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True.
- do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image.
- image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to use if do_normalize is set to True.
- image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to use if do_normalize is set to True.
- do_color_quantize (bool, optional, defaults to self.do_color_quantize) — Whether to color quantize the image.
- clusters (np.ndarray, optional, defaults to self.clusters) — Clusters used to quantize the image, of shape (n_clusters, 3). Only has an effect if do_color_quantize is set to True.
- return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:
  - Unset: Return a list of np.ndarray.
  - TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
  - TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
  - TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
  - TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
- data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
  - "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  - "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.

Preprocess an image or batch of images.
class transformers.IctModel
( config: IctConfig use_mask_token: bool = True )
Parameters

- config (IctConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
- use_mask_token (bool, optional, defaults to True) — Whether to use a mask token for masked image modeling.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
( pixel_values: typing.Optional[torch.Tensor] bool_masked_pos: typing.Optional[torch.BoolTensor] = None clusters: typing.Optional[numpy.ndarray] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedImageModelingOutput or tuple(torch.FloatTensor)
Parameters

- pixel_values (torch.FloatTensor of shape (batch_size, height * width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See IctImageProcessor.__call__() for details.
- bool_masked_pos (torch.BoolTensor of shape (batch_size, height * width), optional) — Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). Random masks are generated if not provided.
- clusters (np.ndarray of shape (n_clusters, 3), optional) — Clusters used to quantize the image before it is fed to the guided upsampler.
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns

transformers.modeling_outputs.MaskedImageModelingOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.MaskedImageModelingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IctConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Reconstruction loss.
- reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed / completed images.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The IctModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> import torch
>>> import numpy as np
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoImageProcessor, IctModel
>>> image_processor = AutoImageProcessor.from_pretrained("sheonhan/ict-imagenet-256")
>>> model = IctModel.from_pretrained("sheonhan/ict-imagenet-256")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
>>> # create random boolean mask of shape (batch_size, num_patches)
>>> bool_masked_pos = torch.randint(low=0, high=2, size=(pixel_values.shape[0], pixel_values.shape[1])).bool()
>>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
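The completed images can then be read from the output's reconstruction field documented above:

>>> reconstruction = outputs.reconstruction
>>> print(reconstruction.shape)  # (batch_size, num_channels, height, width)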