The Seaformer model was proposed in SeaFormer: Squeeze-enhanced Axial Transformer for Mobile Semantic Segmentation by Qiang Wan, Zilong Huang, Jiachen Lu, Gang Yu, Li Zhang. SeaFormer is a mobile-friendly semantic segmentation model built around a squeeze-enhanced axial attention block with detail enhancement, delivering strong segmentation accuracy at a low computational cost.
The abstract from the paper is the following:
Since the introduction of Vision Transformers, the landscape of many computer vision tasks (e.g., semantic segmentation), which has been overwhelmingly dominated by CNNs, recently has significantly revolutionized. However, the computational cost and memory requirement render these methods unsuitable on the mobile device, especially for the high-resolution per-pixel semantic segmentation task. In this paper, we introduce a new method squeeze-enhanced Axial TransFormer (SeaFormer) for mobile semantic segmentation. Specifically, we design a generic attention block characterized by the formulation of squeeze Axial and detail enhancement. It can be further used to create a family of backbone architectures with superior cost-effectiveness. Coupled with a light segmentation head, we achieve the best trade-off between segmentation accuracy and latency on the ARM-based mobile devices on the ADE20K and Cityscapes datasets. Critically, we beat both the mobile-friendly rivals and Transformer-based counterparts with better performance and lower latency without bells and whistles. Beyond semantic segmentation, we further apply the proposed SeaFormer architecture to image classification problem, demonstrating the potentials of serving as a versatile mobile-friendly backbone.
This model was contributed by Inderpreet01. The original code can be found at fudan-zvg/SeaFormer.
( depths = [3, 3, 3] channels = [32, 64, 128, 192, 256, 320] mv2_blocks_cfgs = [[[3, 3, 32, 1], [3, 4, 64, 2], [3, 4, 64, 1]], [[5, 4, 128, 2], [5, 4, 128, 1]], [[3, 4, 192, 2], [3, 4, 192, 1]], [[5, 4, 256, 2]], [[3, 6, 320, 2]]] drop_path_rate = 0.1 emb_dims = [192, 256, 320] key_dims = [16, 20, 24] num_attention_heads = 8 mlp_ratios = [2, 4, 6] attn_ratios = 2 in_channels = [128, 192, 256, 320] in_index = [0, 1, 2, 3] decoder_channels = 192 embed_dims = [128, 160, 192] is_depthwise = True semantic_loss_ignore_index = 255 hidden_act = 'relu' **kwargs )
Parameters
(int, optional, defaults to 3) — The number of input channels.
(int, optional, defaults to 3) — The number of encoder blocks (i.e. stages in the Seaformer encoder).
depths (List[int], optional, defaults to [3, 3, 3]) — The number of layers in each encoder block.
(int, optional, defaults to 150) — The number of classes in the output.
channels (List[int], optional, defaults to [32, 64, 128, 192, 256, 320]) — The number of input channels in each StackedMV2Block.
mv2_blocks_cfgs (List[List[List[int]]], optional, defaults to [[[3, 3, 32, 1], [3, 4, 64, 2], [3, 4, 64, 1]], [[5, 4, 128, 2], [5, 4, 128, 1]], [[3, 4, 192, 2], [3, 4, 192, 1]], [[5, 4, 256, 2]], [[3, 6, 320, 2]]]) — Input parameters [kernel_size, expand_ratio, out_channels, stride] for all Inverted Residual blocks within each StackedMV2Block.
emb_dims (List[int], optional, defaults to [192, 256, 320]) — Dimension of each Seaformer attention block.
key_dims (List[int], optional, defaults to [16, 20, 24]) — Dimension into which the key and query are projected.
attn_ratios (int, optional, defaults to 2) — Ratio of the value dimension to the query dimension.
in_channels (List[int], optional, defaults to [128, 192, 256, 320]) — Input channels of the fusion blocks.
in_index (List[int], optional, defaults to [0, 1, 2, 3]) — Indices of the hidden_states required by the decoder head.
decoder_channels (int, optional, defaults to 192) — Dimension of the last fusion block output, which is fed to the decoder head.
embed_dims (List[int], optional, defaults to [128, 160, 192]) — Embedding dimension of each fusion block.
is_depthwise (bool, optional, defaults to True) — If set to True, depthwise convolutions are used.
(List[int], optional, defaults to [128]) — Dimension of each of the encoder blocks.
num_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in each block of the Transformer encoder.
mlp_ratios (List[int], optional, defaults to [2, 4, 6]) — Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the encoder blocks.
(float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.1) — The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
semantic_loss_ignore_index (int, optional, defaults to 255) — The index that is ignored by the loss function of the semantic segmentation model.
hidden_act (str or function, optional, defaults to 'relu') — The non-linear activation function in the encoder.
This is the configuration class to store the configuration of a SeaformerModel. It is used to instantiate a Seaformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Seaformer nvidia/seaformer-b0-finetuned-ade-512-512 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import SeaformerModel, SeaformerConfig
>>> # Initializing a Seaformer nvidia/seaformer-b0-finetuned-ade-512-512 style configuration
>>> configuration = SeaformerConfig()
>>> # Initializing a model from the nvidia/seaformer-b0-finetuned-ade-512-512 style configuration
>>> model = SeaformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
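Individual settings from the parameter list above can be overridden when the configuration is created. A minimal sketch, reusing the parameter names from the signature (the values are purely illustrative, not recommended settings):
>>> # Shallower encoder stages and less stochastic depth than the defaults
>>> custom_configuration = SeaformerConfig(depths=[2, 2, 2], drop_path_rate=0.05)
>>> custom_model = SeaformerModel(custom_configuration)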
( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_reduce_labels: bool = False **kwargs )
Parameters
do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified (size["height"], size["width"]). Can be overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"height": 512, "width": 512}) — Size of the output image after resizing. Can be overridden by the size parameter in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method.
do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_reduce_labels (bool, optional, defaults to False) — Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255. Can be overridden by the do_reduce_labels parameter in the preprocess method.
Constructs a Seaformer image processor.
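The processor can also be constructed with non-default settings. A minimal sketch, assuming SeaformerImageProcessor is importable from transformers as referenced elsewhere on this page (the values below are examples, not recommendations):
>>> from transformers import SeaformerImageProcessor
>>> # Resize to 1024x1024 and shift ADE20k-style labels so that the background class (0) becomes 255
>>> image_processor = SeaformerImageProcessor(size={"height": 1024, "width": 1024}, do_reduce_labels=True)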
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')], NoneType] = None do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None resample: Resampling = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_reduce_labels: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> **kwargs )
Parameters
images (ImageInput) — Image to preprocess.
segmentation_maps (ImageInput, optional) — Segmentation map to preprocess.
do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) — Size of the image after resize is applied.
resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values between [0 - 1].
rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean.
image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation.
do_reduce_labels (bool, optional, defaults to self.do_reduce_labels) — Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255.
return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
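A short sketch of preprocess with dummy data (random arrays stand in for a real image and segmentation map; the output key names in the final comment follow the usual image-processor convention and are an assumption):
>>> import numpy as np
>>> from transformers import SeaformerImageProcessor
>>> image_processor = SeaformerImageProcessor()
>>> # Dummy 512x512 RGB image and matching single-channel segmentation map
>>> image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
>>> segmentation_map = np.random.randint(0, 150, (512, 512), dtype=np.uint8)
>>> inputs = image_processor.preprocess(images=image, segmentation_maps=segmentation_map, return_tensors="pt")
>>> # inputs is expected to contain "pixel_values" of shape (1, 3, 512, 512) and "labels" of shape (1, 512, 512)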
( outputs target_sizes: typing.List[typing.Tuple] = None ) → semantic_segmentation
Parameters
outputs (SeaformerForSemanticSegmentation) — Raw outputs of the model.
target_sizes (List[Tuple] of length batch_size, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
semantic_segmentation
List[torch.Tensor] of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of SeaformerForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
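A hedged end-to-end sketch of this post-processing step, reusing the checkpoint name and image URL from the examples on this page (the checkpoint name may differ from the actually released Seaformer weights):
>>> from transformers import AutoImageProcessor, SeaformerForSemanticSegmentation
>>> from PIL import Image
>>> import requests
>>> import torch
>>> image_processor = AutoImageProcessor.from_pretrained("nvidia/seaformer-b0-finetuned-ade-512-512")
>>> model = SeaformerForSemanticSegmentation.from_pretrained("nvidia/seaformer-b0-finetuned-ade-512-512")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # PIL reports size as (width, height); target_sizes expects (height, width)
>>> segmentation = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])
>>> # segmentation[0] is a (height, width) torch.Tensor holding one semantic class id per pixel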
( config )
Parameters
config (SeaformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Seaformer encoder outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
pixel_values: FloatTensor
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See SeaformerImageProcessor.__call__() for details.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SeaformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The SeaformerModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, SeaformerModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("seaformer-large")
>>> model = SeaformerModel.from_pretrained("seaformer-large")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 128, 64, 64]
( config )
Parameters
config (SeaformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Seaformer Model transformer with a lightweight segmentation head on top, e.g. for ADE20k, Cityscapes. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
pixel_values: FloatTensor
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See SeaformerImageProcessor.__call__() for details.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) — Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SeaformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, patch_size, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The SeaformerForSemanticSegmentation forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> from transformers import AutoImageProcessor, SeaformerForSemanticSegmentation
>>> from PIL import Image
>>> import requests
>>> image_processor = AutoImageProcessor.from_pretrained("nvidia/seaformer-b0-finetuned-ade-512-512")
>>> model = SeaformerForSemanticSegmentation.from_pretrained("nvidia/seaformer-b0-finetuned-ade-512-512")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
>>> list(logits.shape)
[1, 150, 128, 128]
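Because the logits come back at a reduced resolution (height/4, width/4 here), they are usually upsampled before taking the per-pixel argmax. A minimal sketch with torch.nn.functional.interpolate, continuing the example above (post_process_semantic_segmentation performs an equivalent step internally):
>>> import torch
>>> # Upsample to the original image size; PIL size is (width, height), interpolate expects (height, width)
>>> upsampled_logits = torch.nn.functional.interpolate(logits, size=image.size[::-1], mode="bilinear", align_corners=False)
>>> predicted_segmentation = upsampled_logits.argmax(dim=1)  # (batch_size, height, width) of class ids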