The DETR model was proposed in End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for object detection. It greatly simplifies much of the complexity of models like Faster R-CNN and Mask R-CNN, which rely on components such as region proposals, a non-maximum suppression procedure and anchor generation. Moreover, DETR can be naturally extended to perform panoptic segmentation, by simply adding a mask head on top of the decoder outputs.
The abstract from the paper is the following:
We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines.
This model was contributed by nielsr. The original code can be found here.
The quickest way to get started with DETR is by checking the example notebooks (which showcase both inference and fine-tuning on custom data).
Here’s a TLDR explaining how DetrForObjectDetection works:
First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use ResNet-50/ResNet-101). Let's assume we also add a batch dimension. This means that the input to the backbone is a tensor of shape (batch_size, 3, height, width), assuming the image has 3 color channels (RGB). The CNN backbone outputs a new lower-resolution feature map, typically of shape (batch_size, 2048, height/32, width/32). This is then projected to match the hidden dimension of the Transformer of DETR, which is 256 by default, using a nn.Conv2d layer. So now, we have a tensor of shape (batch_size, 256, height/32, width/32). Next, the feature map is flattened and transposed to obtain a tensor of shape (batch_size, seq_len, d_model) = (batch_size, width/32*height/32, 256). So a difference with NLP models is that the sequence length is actually longer than usual, but with a smaller d_model (which in NLP is typically 768 or higher).
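As an illustration of the shape bookkeeping above, here is a minimal sketch (not the actual DETR code) that fakes the backbone output with a random tensor and assumes an input image of 800 x 1333 pixels:

import torch
import torch.nn as nn

# fake backbone output: in DETR this comes from a pretrained ResNet
batch_size, height, width = 1, 800, 1333
backbone_features = torch.randn(batch_size, 2048, height // 32, width // 32)

# 1x1 convolution projecting the 2048 backbone channels to d_model = 256
input_projection = nn.Conv2d(2048, 256, kernel_size=1)
projected = input_projection(backbone_features)  # (1, 256, 25, 41)

# flatten the spatial dimensions and move them to the sequence axis
sequence = projected.flatten(2).permute(0, 2, 1)  # (batch_size, seq_len, d_model)
print(sequence.shape)  # torch.Size([1, 1025, 256])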
Next, this is sent through the encoder, outputting encoder_hidden_states of the same shape (you can consider these as image features). Next, so-called object queries are sent through the decoder. This is a tensor of shape (batch_size, num_queries, d_model), with num_queries typically set to 100 and initialized with zeros. These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to the encoder, they are added to the input of each attention layer. Each object query will look for a particular object in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers to output decoder_hidden_states of the same shape: (batch_size, num_queries, d_model). Next, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or "no object", and an MLP to predict bounding boxes for each query.
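The two heads can be pictured with the following sketch (illustrative only; sizes follow the paper, 91 COCO classes are assumed, and the real implementation lives in modeling_detr.py):

import torch
import torch.nn as nn

d_model, num_classes, num_queries = 256, 91, 100

class_head = nn.Linear(d_model, num_classes + 1)  # +1 for the "no object" class
bbox_head = nn.Sequential(  # 3-layer MLP predicting (center_x, center_y, width, height)
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, 4),
)

decoder_hidden_states = torch.randn(1, num_queries, d_model)
logits = class_head(decoder_hidden_states)  # (1, 100, 92)
pred_boxes = bbox_head(decoder_hidden_states).sigmoid()  # (1, 100, 4), normalized to [0, 1]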
The model is trained using a bipartite matching loss: so what we actually do is compare the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a “no object” as class and “no bounding box” as bounding box). The Hungarian matching algorithm is used to find an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
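The following toy snippet illustrates just the matching step, using only an L1 cost between boxes (the actual matcher also includes classification and generalized IoU cost terms, weighted by the class_cost, bbox_cost and giou_cost parameters of DetrConfig):

import torch
from scipy.optimize import linear_sum_assignment

pred_boxes = torch.rand(100, 4)  # predictions of the 100 object queries
target_boxes = torch.rand(4, 4)  # an image with 4 ground-truth objects

cost = torch.cdist(pred_boxes, target_boxes, p=1)  # (100, 4) pairwise L1 cost
query_idx, target_idx = linear_sum_assignment(cost.numpy())  # Hungarian algorithm

# each ground-truth box is matched to exactly one query; the remaining
# queries are supervised to predict the "no object" class
print(list(zip(query_idx, target_idx)))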
DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance segmentation). DetrForSegmentation adds a segmentation mask head on top of DetrForObjectDetection. The mask head can be trained either jointly, or in a two-step process, where one first trains a DetrForObjectDetection model to detect bounding boxes around both "things" (instances) and "stuff" (background things like trees, roads, sky), then freezes all the weights and trains only the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes.
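For the two-step recipe, the freezing step could look roughly like the sketch below. Note that mask_head and bbox_attention are attribute names of the current DetrForSegmentation implementation and may change between versions.

from transformers import DetrConfig, DetrForSegmentation

# in practice, one would start from a DetrForSegmentation whose detection
# weights were already trained (here a randomly initialized model is used)
model = DetrForSegmentation(DetrConfig())

for name, param in model.named_parameters():
    # keep only the mask head (and its attention-map module) trainable
    param.requires_grad_(name.startswith(("mask_head", "bbox_attention")))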
Tips:
- DETR uses so-called object queries to detect objects in an image. The number of queries determines the maximum number of objects that can be detected in a single image, and is set to 100 by default (see the parameter num_queries of DetrConfig). Note that it's good to have some slack (in COCO, the authors used 100, while the maximum number of objects in a COCO image is ~70).
- DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer. For the position embeddings of the image, one can choose between fixed sinusoidal and learned absolute position embeddings; by default, the parameter position_embedding_type of DetrConfig is set to "sine".
- During training, it helps to use auxiliary losses in the decoder, especially to make the model output the correct number of objects of each class. If you set the parameter auxiliary_loss of DetrConfig to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters).
- DetrForObjectDetection and DetrForSegmentation can be initialized with any convolutional backbone available in the timm library, for example by setting the backbone attribute of DetrConfig to "tf_mobilenetv3_small_075", and then initializing the model with that config.
- DetrFeatureExtractor resizes input images (by default, the shortest side to 800 pixels and the longest side to at most 1333 pixels), so images in a batch can have different sizes. DETR solves this by padding images up to the largest size in a batch, and by creating a pixel mask that indicates which pixels are real and which are padding. Alternatively, one can define a custom collate_fn in order to batch images together, using pad_and_create_pixel_mask() (a minimal sketch is shown right after this list).
- The size of the images determines the amount of memory being used, and thus determines the batch_size. It is advised to use a batch size of 2 per GPU. See this Github thread for more info.
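A minimal sketch of such a custom collate_fn, assuming each dataset item is a dict that already contains processed "pixel_values" and "labels" (for example produced by DetrFeatureExtractor):

from transformers import DetrFeatureExtractor

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")

def collate_fn(batch):
    # pad the images of the batch to the same size and create the pixel mask
    pixel_values = [item["pixel_values"] for item in batch]
    encoding = feature_extractor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt")
    return {
        "pixel_values": encoding["pixel_values"],
        "pixel_mask": encoding["pixel_mask"],
        "labels": [item["labels"] for item in batch],
    }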
There are three ways to instantiate a DETR model (depending on what you prefer):
Option 1: Instantiate DETR with pre-trained weights for entire model
>>> from transformers import DetrForObjectDetection
>>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
Option 2: Instantiate DETR with randomly initialized weights for Transformer, but pre-trained weights for backbone
>>> from transformers import DetrConfig, DetrForObjectDetection
>>> config = DetrConfig()
>>> model = DetrForObjectDetection(config)
Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformer
>>> config = DetrConfig(use_pretrained_backbone=False)
>>> model = DetrForObjectDetection(config)
As a summary, consider the following table:
Task | Object detection | Instance segmentation | Panoptic segmentation
---|---|---|---
Description | Predicting bounding boxes and class labels around objects in an image | Predicting masks around objects (i.e. instances) in an image | Predicting masks around both objects (i.e. instances) as well as "stuff" (i.e. background things like trees and roads) in an image
Model | DetrForObjectDetection | DetrForSegmentation | DetrForSegmentation
Example dataset | COCO detection | COCO detection, COCO panoptic | COCO panoptic
Format of annotations to provide to DetrFeatureExtractor | {'image_id': int, 'annotations': List[Dict]}, each Dict being a COCO object annotation | {'image_id': int, 'annotations': List[Dict]} (in case of COCO detection) or {'file_name': str, 'image_id': int, 'segments_info': List[Dict]} (in case of COCO panoptic) | {'file_name': str, 'image_id': int, 'segments_info': List[Dict]} and masks_path (path to directory containing PNG files of the masks)
Postprocessing (i.e. converting the output of the model to COCO API) | post_process() | post_process_segmentation() | post_process_segmentation(), post_process_panoptic()
Evaluators | CocoEvaluator with iou_types="bbox" | CocoEvaluator with iou_types="bbox" or "segm" | CocoEvaluator with iou_types="bbox" or "segm", PanopticEvaluator
In short, one should prepare the data either in COCO detection or COCO panoptic format, then use
DetrFeatureExtractor to create pixel_values
, pixel_mask
and optional
labels
, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the
outputs of the model using one of the postprocessing methods of DetrFeatureExtractor. These can
be provided to either CocoEvaluator
or PanopticEvaluator
, which allow you to calculate metrics like
mean Average Precision (mAP) and Panoptic Quality (PQ). The latter objects are implemented in the original repository. See the example notebooks for more info regarding evaluation.
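For instance, preparing a single training example in COCO detection format could look like the sketch below; the annotation is made up for illustration, and a real dataset would supply annotations from its COCO annotation file.

>>> from PIL import Image
>>> import requests
>>> from transformers import DetrFeatureExtractor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # a made-up COCO object annotation (bbox in [x, y, width, height] format)
>>> annotations = {
...     "image_id": 39769,
...     "annotations": [
...         {"image_id": 39769, "category_id": 17, "bbox": [10.0, 20.0, 200.0, 150.0], "area": 30000.0, "iscrowd": 0}
...     ],
... }

>>> feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
>>> encoding = feature_extractor(images=image, annotations=annotations, return_tensors="pt")
>>> # encoding contains "pixel_values", "pixel_mask" and "labels"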
( last_hidden_state: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None intermediate_hidden_states: typing.Optional[torch.FloatTensor] = None )
Parameters
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
) —
Sequence of hidden-states at the output of the last layer of the decoder of the model.
tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) —
Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) —
Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
torch.FloatTensor
of shape (config.decoder_layers, batch_size, sequence_length, hidden_size)
, optional, returned when config.auxiliary_loss=True
) —
Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a
layernorm.
Base class for outputs of the DETR encoder-decoder model. This class adds one attribute to Seq2SeqModelOutput, namely an optional stack of intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a layernorm. This is useful when training the model with auxiliary decoding losses.
( loss: typing.Optional[torch.FloatTensor] = None loss_dict: typing.Optional[typing.Dict] = None logits: FloatTensor = None pred_boxes: FloatTensor = None auxiliary_outputs: typing.Optional[typing.List[typing.Dict]] = None last_hidden_state: typing.Optional[torch.FloatTensor] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
torch.FloatTensor
of shape (1,)
, optional, returned when labels
are provided)) —
Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
Dict
, optional) —
A dictionary containing the individual losses. Useful for logging.
torch.FloatTensor
of shape (batch_size, num_queries, num_classes + 1)
) —
Classification logits (including no-object) for all queries.
torch.FloatTensor
of shape (batch_size, num_queries, 4)
) —
Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.
list[Dict]
, optional) —
Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss
is set to True
)
and labels are provided. It is a list of dictionaries containing the two above keys (logits
and
pred_boxes
) for each decoder layer.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Sequence of hidden-states at the output of the last layer of the decoder of the model.
tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) —
Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) —
Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
Output type of DetrForObjectDetection.
( loss: typing.Optional[torch.FloatTensor] = None loss_dict: typing.Optional[typing.Dict] = None logits: FloatTensor = None pred_boxes: FloatTensor = None pred_masks: FloatTensor = None auxiliary_outputs: typing.Optional[typing.List[typing.Dict]] = None last_hidden_state: typing.Optional[torch.FloatTensor] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
torch.FloatTensor
of shape (1,)
, optional, returned when labels
are provided)) —
Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
Dict
, optional) —
A dictionary containing the individual losses. Useful for logging.
torch.FloatTensor
of shape (batch_size, num_queries, num_classes + 1)
) —
Classification logits (including no-object) for all queries.
torch.FloatTensor
of shape (batch_size, num_queries, 4)
) —
Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.
torch.FloatTensor
of shape (batch_size, num_queries, height/4, width/4)
) —
Segmentation masks logits for all queries. See also
post_process_semantic_segmentation() or
post_process_instance_segmentation()
post_process_panoptic_segmentation() to evaluate semantic, instance and panoptic
segmentation masks respectively.
list[Dict]
, optional) —
Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss
is set to True
)
and labels are provided. It is a list of dictionaries containing the two above keys (logits
and
pred_boxes
) for each decoder layer.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Sequence of hidden-states at the output of the last layer of the decoder of the model.
tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) —
Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) —
Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) —
Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
Output type of DetrForSegmentation.
( num_channels = 3 num_queries = 100 max_position_embeddings = 1024 encoder_layers = 6 encoder_ffn_dim = 2048 encoder_attention_heads = 8 decoder_layers = 6 decoder_ffn_dim = 2048 decoder_attention_heads = 8 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 is_encoder_decoder = True activation_function = 'relu' d_model = 256 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 init_xavier_std = 1.0 classifier_dropout = 0.0 scale_embedding = False auxiliary_loss = False position_embedding_type = 'sine' backbone = 'resnet50' use_pretrained_backbone = True dilation = False class_cost = 1 bbox_cost = 5 giou_cost = 2 mask_loss_coefficient = 1 dice_loss_coefficient = 1 bbox_loss_coefficient = 5 giou_loss_coefficient = 2 eos_coefficient = 0.1 **kwargs )
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_queries (int, optional, defaults to 100) —
Number of object queries, i.e. detection slots. This is the maximal number of objects DetrModel can detect in a single image. For COCO, we recommend 100 queries.
d_model (int, optional, defaults to 256) —
Dimension of the layers.
encoder_layers (int, optional, defaults to 6) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the "intermediate" (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the "intermediate" (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1.0) —
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
auxiliary_loss (bool, optional, defaults to False) —
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (str, optional, defaults to "sine") —
Type of position embeddings to be used on top of the image features. One of "sine" or "learned".
backbone (str, optional, defaults to "resnet50") —
Name of convolutional backbone to use. Supports any convolutional backbone from the timm package. For a list of all available models, see this page.
use_pretrained_backbone (bool, optional, defaults to True) —
Whether to use pretrained weights for the backbone.
dilation (bool, optional, defaults to False) —
Whether to replace stride with dilation in the last convolutional block (DC5).
class_cost (float, optional, defaults to 1) —
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (float, optional, defaults to 5) —
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (float, optional, defaults to 5) —
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (float, optional, defaults to 0.1) —
Relative classification weight of the 'no-object' class in the object detection loss.
This is the configuration class to store the configuration of a DetrModel. It is used to instantiate a DETR model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DETR facebook/detr-resnet-50 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import DetrConfig, DetrModel
>>> # Initializing a DETR facebook/detr-resnet-50 style configuration
>>> configuration = DetrConfig()
>>> # Initializing a model (with random weights) from the facebook/detr-resnet-50 style configuration
>>> model = DetrModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
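As mentioned in the tips above, the configuration can also select a different timm backbone; a minimal sketch (the Transformer weights are randomly initialized in this case):

>>> from transformers import DetrConfig, DetrForObjectDetection

>>> # DETR with a small MobileNetV3 backbone from timm
>>> config = DetrConfig(backbone="tf_mobilenetv3_small_075", use_pretrained_backbone=True)
>>> model = DetrForObjectDetection(config)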
( format = 'coco_detection' do_resize = True size = 800 max_size = 1333 do_normalize = True image_mean = None image_std = None **kwargs )
Parameters
format (str, optional, defaults to "coco_detection") —
Data format of the annotations. One of "coco_detection" or "coco_panoptic".
do_resize (bool, optional, defaults to True) —
Whether to resize the input to a certain size.
size (int, optional, defaults to 800) —
Resize the input to the given size. Only has an effect if do_resize is set to True. If size is a sequence like (width, height), output size will be matched to this. If size is an int, the smaller edge of the image will be matched to this number, i.e. if height > width, then the image will be rescaled to (size * height / width, size).
max_size (int, optional, defaults to 1333) —
The largest size an image dimension can have (otherwise it's capped). Only has an effect if do_resize is set to True.
do_normalize (bool, optional, defaults to True) —
Whether or not to normalize the input with mean and standard deviation.
image_mean (List[float], optional, defaults to [0.485, 0.456, 0.406]) —
The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean.
image_std (List[float], optional, defaults to [0.229, 0.224, 0.225]) —
The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the ImageNet std.
Constructs a DETR feature extractor.
This feature extractor inherits from FeatureExtractionMixin which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
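For example (a minimal sketch), a feature extractor can be loaded from a checkpoint on the Hub or instantiated directly, e.g. for COCO panoptic annotations:

>>> from transformers import DetrFeatureExtractor

>>> # load the feature extractor that matches a checkpoint on the Hub
>>> feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")

>>> # or instantiate one directly, e.g. for COCO panoptic annotations
>>> panoptic_feature_extractor = DetrFeatureExtractor(format="coco_panoptic", size=800, max_size=1333)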
( images: typing.Union[PIL.Image.Image, numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] annotations: typing.Union[typing.List[typing.Dict], typing.List[typing.List[typing.Dict]]] = None return_segmentation_masks: typing.Optional[bool] = False masks_path: typing.Optional[pathlib.Path] = None pad_and_return_pixel_mask: typing.Optional[bool] = True return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs ) → BatchFeature
Parameters
images (PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]) —
The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a number of channels, H and W are image height and width.
annotations (Dict, List[Dict], optional) —
The corresponding annotations in COCO format.
In case DetrFeatureExtractor was initialized with format = "coco_detection", the annotations for each image should have the following format: {'image_id': int, 'annotations': [annotation]}, with the annotations being a list of COCO object annotations.
In case DetrFeatureExtractor was initialized with format = "coco_panoptic", the annotations for each image should have the following format: {'image_id': int, 'file_name': str, 'segments_info': [segment_info]}, with segments_info being a list of COCO panoptic annotations.
return_segmentation_masks (bool, optional, defaults to False) —
Whether to also include instance segmentation masks as part of the labels in case format = "coco_detection".
masks_path (pathlib.Path, optional) —
Path to the directory containing the PNG files that store the class-agnostic image segmentations. Only relevant in case DetrFeatureExtractor was initialized with format = "coco_panoptic".
pad_and_return_pixel_mask (bool, optional, defaults to True) —
Whether or not to pad images up to the largest image in a batch and create a pixel mask.
If left to the default, will return a pixel mask that is:
- 1 for pixels that are real (i.e. not masked),
- 0 for pixels that are padding (i.e. masked).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of NumPy arrays. If set to 'pt', return PyTorch torch.Tensor objects.
Returns
A BatchFeature with the following fields:
- pixel_values — Pixel values to be fed to a model.
- pixel_mask — Pixel mask to be fed to a model (when pad_and_return_pixel_mask=True or if "pixel_mask" is in self.model_input_names).
- labels — Optional labels to be fed to a model (when annotations are provided).
Main method to prepare for the model one or several image(s) and optional annotations. Images are by default padded up to the largest image in a batch, and a pixel mask is created that indicates which pixels are real/which are padding.
NumPy arrays and PyTorch tensors are converted to PIL images when resizing, so it is most efficient to pass PIL images.
( pixel_values_list: typing.List[ForwardRef('torch.Tensor')] return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None ) → BatchFeature
Parameters
pixel_values_list (List[torch.Tensor]) —
List of images (pixel values) to be padded. Each image should be a tensor of shape (C, H, W).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of NumPy arrays. If set to 'pt', return PyTorch torch.Tensor objects.
Returns
A BatchFeature with the following fields:
- pixel_values — Pixel values to be fed to a model.
- pixel_mask — Pixel mask to be fed to a model (when pad_and_return_pixel_mask=True or if "pixel_mask" is in self.model_input_names).
Pad images up to the largest image in a batch and create a corresponding pixel_mask.
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
)
→
List[Dict]
Parameters
DetrObjectDetectionOutput
) —
Raw outputs of the model.
float
, optional) —
Score threshold to keep object detection predictions.
torch.Tensor
or List[Tuple[int, int]]
, optional, defaults to None
) —
Tensor of shape (batch_size, 2)
or list of tuples (Tuple[int, int]
) containing the target size
(height, width) of each image in the batch. If left to None, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.
Converts the output of DetrForObjectDetection into the format expected by the COCO api. Only supports PyTorch.
(
outputs
target_sizes: typing.List[typing.Tuple[int, int]] = None
)
→
List[torch.Tensor]
Parameters
List[Tuple[int, int]]
, optional, defaults to None
):
A list of tuples (Tuple[int, int]
) containing the target size (height, width) of each image in the
batch. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size
, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes
is specified). Each entry of each
torch.Tensor
corresponds to a semantic class id.
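A minimal usage sketch of post_process_semantic_segmentation() (using the panoptic checkpoint from the examples further below):

>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import DetrFeatureExtractor, DetrForSegmentation

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50-panoptic")
>>> model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # one (height, width) map per image, where each value is a semantic class id
>>> semantic_map = feature_extractor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )[0]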
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
float
, optional, defaults to 0.5):
The probability score threshold to keep predicted instance masks.
mask_threshold (float
, optional, defaults to 0.5):
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float
, optional, defaults to 0.8):
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple]
, optional):
List of length (batch_size), where each list item (Tuple[int, int]]
) corresponds to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
return_coco_annotation (bool
, optional):
Defaults to False
. If set to True
, segmentation maps are returned in COCO run-length encoding (RLE)
format.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
- segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or a List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to True. Set to None if no mask is found above threshold.
- segments_info — A dictionary that contains additional information on each segment.
  - id — An integer representing the segment_id.
  - label_id — An integer representing the label / semantic class id corresponding to segment_id.
  - score — Prediction score of the segment with segment_id.
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
float
, optional, defaults to 0.5):
The probability score threshold to keep predicted instance masks.
mask_threshold (float
, optional, defaults to 0.5):
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float
, optional, defaults to 0.8):
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int]
, optional):
The labels in this state will have all their instances be fused together. For instance we could say
there can only be one sky in an image, but several persons, so the label ID for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple]
, optional):
List of length (batch_size), where each list item (Tuple[int, int]]
) corresponds to the requested
final size (height, width) of each prediction in batch. If left to None, predictions will not be
resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
- segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or None if no mask is found above threshold. If target_sizes is specified, segmentation is resized to the corresponding target_sizes entry.
- segments_info — A dictionary that contains additional information on each segment.
  - id — An integer representing the segment_id.
  - label_id — An integer representing the label / semantic class id corresponding to segment_id.
  - was_fused — A boolean, True if label_id was in label_ids_to_fuse, False otherwise. Multiple instances of the same class / label were fused and assigned a single segment_id.
  - score — Prediction score of the segment with segment_id.
( config: DetrConfig )
Parameters
The bare DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.detr.modeling_detr.DetrModelOutput or tuple(torch.FloatTensor)
Parameters
torch.FloatTensor
of shape (batch_size, num_channels, height, width)
) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using DetrFeatureExtractor. See DetrFeatureExtractor.call() for details.
torch.LongTensor
of shape (batch_size, height, width)
, optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]
:
torch.LongTensor
of shape (batch_size, num_queries)
, optional) —
Not used by default. Can be used to mask object queries.
tuple(tuple(torch.FloatTensor)
, optional) —
Tuple consists of (last_hidden_state
, optional: hidden_states
, optional: attentions
)
last_hidden_state
of shape (batch_size, sequence_length, hidden_size)
, optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
torch.FloatTensor
of shape (batch_size, num_queries, hidden_size)
, optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
bool
, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions
under returned
tensors for more detail.
bool
, optional) —
Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for
more detail.
bool
, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.detr.modeling_detr.DetrModelOutput or tuple(torch.FloatTensor)
A transformers.models.detr.modeling_detr.DetrModelOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (DetrConfig) and inputs.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the decoder of the model.tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.torch.FloatTensor
of shape (config.decoder_layers, batch_size, sequence_length, hidden_size)
, optional, returned when config.auxiliary_loss=True
) — Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a
layernorm.The DetrModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> from transformers import DetrFeatureExtractor, DetrModel
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
>>> model = DetrModel.from_pretrained("facebook/detr-resnet-50")
>>> # prepare image for the model
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> # forward pass
>>> outputs = model(**inputs)
>>> # the last hidden states are the final query embeddings of the Transformer decoder
>>> # these are of shape (batch_size, num_queries, hidden_size)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 100, 256]
( config: DetrConfig )
Parameters
DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
torch.FloatTensor
of shape (batch_size, num_channels, height, width)
) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using DetrFeatureExtractor. See DetrFeatureExtractor.call() for details.
torch.LongTensor
of shape (batch_size, height, width)
, optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]
:
torch.LongTensor
of shape (batch_size, num_queries)
, optional) —
Not used by default. Can be used to mask object queries.
tuple(tuple(torch.FloatTensor)
, optional) —
Tuple consists of (last_hidden_state
, optional: hidden_states
, optional: attentions
)
last_hidden_state
of shape (batch_size, sequence_length, hidden_size)
, optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
torch.FloatTensor
of shape (batch_size, num_queries, hidden_size)
, optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
bool
, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions
under returned
tensors for more detail.
bool
, optional) —
Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for
more detail.
bool
, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
List[Dict]
of len (batch_size,)
, optional) —
Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the
following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch
respectively). The class labels themselves should be a torch.LongTensor
of len (number of bounding boxes in the image,)
and the boxes a torch.FloatTensor
of shape (number of bounding boxes in the image, 4)
.
Returns
transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (DetrConfig) and inputs.
torch.FloatTensor
of shape (1,)
, optional, returned when labels
are provided)) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.Dict
, optional) — A dictionary containing the individual losses. Useful for logging.torch.FloatTensor
of shape (batch_size, num_queries, num_classes + 1)
) — Classification logits (including no-object) for all queries.torch.FloatTensor
of shape (batch_size, num_queries, 4)
) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.list[Dict]
, optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss
is set to True
)
and labels are provided. It is a list of dictionaries containing the two above keys (logits
and
pred_boxes
) for each decoder layer.torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.The DetrForObjectDetection forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> from transformers import DetrFeatureExtractor, DetrForObjectDetection
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
>>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> # convert outputs (bounding boxes and class logits) to COCO API
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = feature_extractor.post_process_object_detection(
... outputs, threshold=0.9, target_sizes=target_sizes
... )[0]
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
... box = [round(i, 2) for i in box.tolist()]
... print(
... f"Detected {model.config.id2label[label.item()]} with confidence "
... f"{round(score.item(), 3)} at location {box}"
... )
Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98]
Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66]
Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76]
Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93]
Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72]
( config: DetrConfig )
Parameters
DETR Model (consisting of a backbone and encoder-decoder Transformer) with a segmentation head on top, for tasks such as COCO panoptic.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.detr.modeling_detr.DetrSegmentationOutput or tuple(torch.FloatTensor)
Parameters
torch.FloatTensor
of shape (batch_size, num_channels, height, width)
) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using DetrFeatureExtractor. See DetrFeatureExtractor.call() for details.
torch.LongTensor
of shape (batch_size, height, width)
, optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]
:
torch.LongTensor
of shape (batch_size, num_queries)
, optional) —
Not used by default. Can be used to mask object queries.
tuple(tuple(torch.FloatTensor)
, optional) —
Tuple consists of (last_hidden_state
, optional: hidden_states
, optional: attentions
)
last_hidden_state
of shape (batch_size, sequence_length, hidden_size)
, optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
torch.FloatTensor
of shape (batch_size, num_queries, hidden_size)
, optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
bool
, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions
under returned
tensors for more detail.
bool
, optional) —
Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for
more detail.
bool
, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
List[Dict]
of len (batch_size,)
, optional) —
Labels for computing the bipartite matching loss, DICE/F-1 loss and Focal loss. List of dicts, each
dictionary containing at least the following 3 keys: ‘class_labels’, ‘boxes’ and ‘masks’ (the class labels,
bounding boxes and segmentation masks of an image in the batch respectively). The class labels themselves
should be a torch.LongTensor
of len (number of bounding boxes in the image,)
, the boxes a
torch.FloatTensor
of shape (number of bounding boxes in the image, 4)
and the masks a
torch.FloatTensor
of shape (number of bounding boxes in the image, height, width)
.
Returns
transformers.models.detr.modeling_detr.DetrSegmentationOutput or tuple(torch.FloatTensor)
A transformers.models.detr.modeling_detr.DetrSegmentationOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (DetrConfig) and inputs.
torch.FloatTensor
of shape (1,)
, optional, returned when labels
are provided)) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.Dict
, optional) — A dictionary containing the individual losses. Useful for logging.torch.FloatTensor
of shape (batch_size, num_queries, num_classes + 1)
) — Classification logits (including no-object) for all queries.torch.FloatTensor
of shape (batch_size, num_queries, 4)
) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.torch.FloatTensor
of shape (batch_size, num_queries, height/4, width/4)
) — Segmentation masks logits for all queries. See also
post_process_semantic_segmentation() or
post_process_instance_segmentation()
post_process_panoptic_segmentation() to evaluate semantic, instance and panoptic
segmentation masks respectively.list[Dict]
, optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss
is set to True
)
and labels are provided. It is a list of dictionaries containing the two above keys (logits
and
pred_boxes
) for each decoder layer.torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size)
. Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
. Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.The DetrForSegmentation forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> import io
>>> import requests
>>> from PIL import Image
>>> import torch
>>> import numpy
>>> from transformers import DetrFeatureExtractor, DetrForSegmentation
>>> from transformers.models.detr.feature_extraction_detr import rgb_to_id
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50-panoptic")
>>> model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
>>> # prepare image for the model
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> # forward pass
>>> outputs = model(**inputs)
>>> # Use the `post_process_panoptic_segmentation` method of `DetrFeatureExtractor` to retrieve post-processed panoptic segmentation maps
>>> # Segmentation results are returned as a list of dictionaries
>>> result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[(300, 500)])
>>> # A tensor of shape (height, width) where each value denotes a segment id, filled with -1 if no segment is found
>>> panoptic_seg = result[0]["segmentation"]
>>> # Get prediction score and segment_id to class_id mapping of each segment
>>> panoptic_segments_info = result[0]["segments_info"]
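Continuing the example above (a sketch; the exact number of segments depends on the checkpoint and thresholds), the predicted segments can be inspected as follows:

>>> # the segmentation map is a torch tensor; unique values are the segment ids (-1 where no segment was found)
>>> segment_ids = numpy.unique(panoptic_seg.numpy())
>>> # each entry of `panoptic_segments_info` is a dict with keys such as id, label_id, was_fused and score
>>> print(len(panoptic_segments_info), "segments detected")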