Transformers documentation

RT-DETR

Overview

The RT-DETR model was proposed in DETRs Beat YOLOs on Real-time Object Detection by Wenyu Lv, Yian Zhao, Shangliang Xu, Jinman Wei, Guanzhong Wang, Cheng Cui, Yuning Du, Qingqing Dang, Yi Liu.

RT-DETR is an object detection model that stands for “Real-Time DEtection Transformer.” This model is designed to perform object detection tasks with a focus on achieving real-time performance while maintaining high accuracy. Leveraging the transformer architecture, which has gained significant popularity in various fields of deep learning, RT-DETR processes images to identify and locate multiple objects within them.

The abstract from the paper is the following:

Recently, end-to-end transformer-based detectors (DETRs) have achieved remarkable performance. However, the issue of the high computational cost of DETRs has not been effectively addressed, limiting their practical application and preventing them from fully exploiting the benefits of no post-processing, such as non-maximum suppression (NMS). In this paper, we first analyze the influence of NMS in modern real-time object detectors on inference speed, and establish an end-to-end speed benchmark. To avoid the inference delay caused by NMS, we propose a Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge. Specifically, we design an efficient hybrid encoder to efficiently process multi-scale features by decoupling the intra-scale interaction and cross-scale fusion, and propose IoU-aware query selection to improve the initialization of object queries. In addition, our proposed detector supports flexibly adjustment of the inference speed by using different decoder layers without the need for retraining, which facilitates the practical application of real-time object detectors. Our RT-DETR-L achieves 53.0% AP on COCO val2017 and 114 FPS on T4 GPU, while RT-DETR-X achieves 54.8% AP and 74 FPS, outperforming all YOLO detectors of the same scale in both speed and accuracy. Furthermore, our RT-DETR-R50 achieves 53.1% AP and 108 FPS, outperforming DINO-Deformable-DETR-R50 by 2.2% AP in accuracy and by about 21 times in FPS.

RT-DETR performance relative to YOLO models. Taken from the original paper.

This model was contributed by rafaelpadilla and sangbumchoi. The original code can be found here.

Usage tips

Initially, an image is processed using a pretrained convolutional neural network, specifically a ResNet-D variant as referenced in the original code. This network extracts features from the final three layers of the architecture. Following this, a hybrid encoder is employed to convert the multi-scale features into a sequential array of image features. Then, a decoder, equipped with auxiliary prediction heads, is used to refine the object queries. This process facilitates the direct generation of bounding boxes, eliminating the need for any additional post-processing to acquire the logits and coordinates for the bounding boxes.

>>> import torch
>>> import requests

>>> from PIL import Image
>>> from transformers import RTDetrForObjectDetection, RTDetrImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
>>> model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")

>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)

>>> for result in results:
...     for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
...         score, label = score.item(), label_id.item()
...         box = [round(i, 2) for i in box.tolist()]
...         print(f"{model.config.id2label[label]}: {score:.2f} {box}")
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]

Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RT-DETR.

Object Detection

RTDetrConfig

class transformers.RTDetrConfig

( initializer_range = 0.01 initializer_bias_prior_prob = None layer_norm_eps = 1e-05 batch_norm_eps = 1e-05 backbone_config = None backbone = None use_pretrained_backbone = False use_timm_backbone = False backbone_kwargs = None encoder_hidden_dim = 256 encoder_in_channels = [512, 1024, 2048] feat_strides = [8, 16, 32] encoder_layers = 1 encoder_ffn_dim = 1024 encoder_attention_heads = 8 dropout = 0.0 activation_dropout = 0.0 encode_proj_layers = [2] positional_encoding_temperature = 10000 encoder_activation_function = 'gelu' activation_function = 'silu' eval_size = None normalize_before = False hidden_expansion = 1.0 d_model = 256 num_queries = 300 decoder_in_channels = [256, 256, 256] decoder_ffn_dim = 1024 num_feature_levels = 3 decoder_n_points = 4 decoder_layers = 6 decoder_attention_heads = 8 decoder_activation_function = 'relu' attention_dropout = 0.0 num_denoising = 100 label_noise_ratio = 0.5 box_noise_scale = 1.0 learn_initial_query = False anchor_image_size = None disable_custom_kernels = True with_box_refine = True is_encoder_decoder = True matcher_alpha = 0.25 matcher_gamma = 2.0 matcher_class_cost = 2.0 matcher_bbox_cost = 5.0 matcher_giou_cost = 2.0 use_focal_loss = True auxiliary_loss = True focal_loss_alpha = 0.75 focal_loss_gamma = 2.0 weight_loss_vfl = 1.0 weight_loss_bbox = 5.0 weight_loss_giou = 2.0 eos_coefficient = 0.0001 **kwargs )

Parameters

  • initializer_range (float, optional, defaults to 0.01) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • initializer_bias_prior_prob (float, optional) — The prior probability used by the bias initializer to initialize biases for enc_score_head and class_embed. If None, prior_prob is computed as 1 / (num_labels + 1) when initializing the model weights.
  • layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
  • batch_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the batch normalization layers.
  • backbone_config (Dict, optional, defaults to RTDetrResNetConfig()) — The configuration of the backbone model.
  • backbone (str, optional) — Name of backbone to use when backbone_config is None. If use_pretrained_backbone is True, this will load the corresponding pretrained weights from the timm or transformers library. If use_pretrained_backbone is False, this loads the backbone’s config and uses that to initialize the backbone with random weights.
  • use_pretrained_backbone (bool, optional, defaults to False) — Whether to use pretrained weights for the backbone.
  • use_timm_backbone (bool, optional, defaults to False) — Whether to load backbone from the timm library. If False, the backbone is loaded from the transformers library.
  • backbone_kwargs (dict, optional) — Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. {'out_indices': (0, 1, 2, 3)}. Cannot be specified if backbone_config is set.
  • encoder_hidden_dim (int, optional, defaults to 256) — Dimension of the layers in hybrid encoder.
  • encoder_in_channels (list, optional, defaults to [512, 1024, 2048]) — Multi-level feature input channels for the encoder.
  • feat_strides (List[int], optional, defaults to [8, 16, 32]) — Strides used in each feature map.
  • encoder_layers (int, optional, defaults to 1) — Total number of layers to be used by the encoder.
  • encoder_ffn_dim (int, optional, defaults to 1024) — Dimension of the “intermediate” (often named feed-forward) layer in the encoder.
  • encoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder.
  • dropout (float, optional, defaults to 0.0) — The ratio for all dropout layers.
  • activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
  • encode_proj_layers (List[int], optional, defaults to [2]) — Indexes of the projected layers to be used in the encoder.
  • positional_encoding_temperature (int, optional, defaults to 10000) — The temperature parameter used to create the positional encodings.
  • encoder_activation_function (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
  • activation_function (str, optional, defaults to "silu") — The non-linear activation function (function or string) in the general layer. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
  • eval_size (Tuple[int, int], optional) — Height and width used to compute the effective height and width of the position embeddings after taking the stride into account.
  • normalize_before (bool, optional, defaults to False) — Determine whether to apply layer normalization in the transformer encoder layer before self-attention and feed-forward modules.
  • hidden_expansion (float, optional, defaults to 1.0) — Expansion ratio to enlarge the dimension size of RepVGGBlock and CSPRepLayer.
  • d_model (int, optional, defaults to 256) — Dimension of the layers, excluding the hybrid encoder.
  • num_queries (int, optional, defaults to 300) — Number of object queries.
  • decoder_in_channels (list, optional, defaults to [256, 256, 256]) — Multi-level feature dimensions for the decoder.
  • decoder_ffn_dim (int, optional, defaults to 1024) — Dimension of the “intermediate” (often named feed-forward) layer in decoder.
  • num_feature_levels (int, optional, defaults to 3) — The number of input feature levels.
  • decoder_n_points (int, optional, defaults to 4) — The number of sampled keys in each feature level for each attention head in the decoder.
  • decoder_layers (int, optional, defaults to 6) — Number of decoder layers.
  • decoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer decoder.
  • decoder_activation_function (str, optional, defaults to "relu") — The non-linear activation function (function or string) in the decoder. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
  • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • num_denoising (int, optional, defaults to 100) — The total number of denoising tasks or queries to be used for contrastive denoising.
  • label_noise_ratio (float, optional, defaults to 0.5) — The fraction of denoising labels to which random noise should be added.
  • box_noise_scale (float, optional, defaults to 1.0) — Scale or magnitude of noise to be added to the bounding boxes.
  • learn_initial_query (bool, optional, defaults to False) — Indicates whether the initial query embeddings for the decoder should be learned during training.
  • anchor_image_size (Tuple[int, int], optional) — Height and width of the input image used during evaluation to generate the bounding box anchors. If None, anchors are generated automatically.
  • disable_custom_kernels (bool, optional, defaults to True) — Whether to disable custom kernels.
  • with_box_refine (bool, optional, defaults to True) — Whether to apply iterative bounding box refinement, where each decoder layer refines the bounding boxes based on the predictions from the previous layer.
  • is_encoder_decoder (bool, optional, defaults to True) — Whether the architecture has an encoder decoder structure.
  • matcher_alpha (float, optional, defaults to 0.25) — Parameter alpha used by the Hungarian Matcher.
  • matcher_gamma (float, optional, defaults to 2.0) — Parameter gamma used by the Hungarian Matcher.
  • matcher_class_cost (float, optional, defaults to 2.0) — The relative weight of the class loss used by the Hungarian Matcher.
  • matcher_bbox_cost (float, optional, defaults to 5.0) — The relative weight of the bounding box loss used by the Hungarian Matcher.
  • matcher_giou_cost (float, optional, defaults to 2.0) — The relative weight of the giou loss used by the Hungarian Matcher.
  • use_focal_loss (bool, optional, defaults to True) — Whether the focal loss should be used.
  • auxiliary_loss (bool, optional, defaults to True) — Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
  • focal_loss_alpha (float, optional, defaults to 0.75) — Parameter alpha used to compute the focal loss.
  • focal_loss_gamma (float, optional, defaults to 2.0) — Parameter gamma used to compute the focal loss.
  • weight_loss_vfl (float, optional, defaults to 1.0) — Relative weight of the varifocal loss in the object detection loss.
  • weight_loss_bbox (float, optional, defaults to 5.0) — Relative weight of the L1 bounding box loss in the object detection loss.
  • weight_loss_giou (float, optional, defaults to 2.0) — Relative weight of the generalized IoU loss in the object detection loss.
  • eos_coefficient (float, optional, defaults to 0.0001) — Relative classification weight of the ‘no-object’ class in the object detection loss.

This is the configuration class to store the configuration of a RTDetrModel. It is used to instantiate a RT-DETR model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RT-DETR PekingU/rtdetr_r50vd architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Examples:

>>> from transformers import RTDetrConfig, RTDetrModel

>>> # Initializing a RT-DETR configuration
>>> configuration = RTDetrConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = RTDetrModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
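
The defaults listed above can be overridden when building a configuration from scratch. The following is a minimal sketch with illustrative (not recommended) values, using parameters documented in this section; the resulting model is randomly initialized:

>>> from transformers import RTDetrConfig, RTDetrForObjectDetection

>>> # override a few of the documented defaults (illustrative values only)
>>> configuration = RTDetrConfig(num_queries=100, decoder_layers=4, anchor_image_size=(640, 640))

>>> # instantiate a randomly initialized model from the custom configuration
>>> model = RTDetrForObjectDetection(configuration)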

from_backbone_configs

( backbone_config: PretrainedConfig **kwargs ) β†’ RTDetrConfig

Parameters

  • backbone_config (PretrainedConfig) — The backbone configuration.

Returns

RTDetrConfig

An instance of a configuration object

Instantiate a RTDetrConfig (or a derived class) from a pre-trained backbone model configuration and DETR model configuration.
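
For example, a configuration can be built around a custom backbone configuration. This is a minimal sketch assuming the default ResNet-style backbone; the out_features values are illustrative:

>>> from transformers import RTDetrConfig, RTDetrResNetConfig

>>> # a custom backbone configuration (out_features values are illustrative)
>>> backbone_config = RTDetrResNetConfig(out_features=["stage2", "stage3", "stage4"])

>>> # build an RT-DETR configuration around this backbone
>>> config = RTDetrConfig.from_backbone_configs(backbone_config=backbone_config)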

RTDetrResNetConfig

class transformers.RTDetrResNetConfig

( num_channels = 3 embedding_size = 64 hidden_sizes = [256, 512, 1024, 2048] depths = [3, 4, 6, 3] layer_type = 'bottleneck' hidden_act = 'relu' downsample_in_first_stage = False downsample_in_bottleneck = False out_features = None out_indices = None **kwargs )

Parameters

  • num_channels (int, optional, defaults to 3) — The number of input channels.
  • embedding_size (int, optional, defaults to 64) — Dimensionality (hidden size) for the embedding layer.
  • hidden_sizes (List[int], optional, defaults to [256, 512, 1024, 2048]) — Dimensionality (hidden size) at each stage.
  • depths (List[int], optional, defaults to [3, 4, 6, 3]) — Depth (number of layers) for each stage.
  • layer_type (str, optional, defaults to "bottleneck") — The layer to use, it can be either "basic" (used for smaller models, like resnet-18 or resnet-34) or "bottleneck" (used for larger models like resnet-50 and above).
  • hidden_act (str, optional, defaults to "relu") — The non-linear activation function in each block. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
  • downsample_in_first_stage (bool, optional, defaults to False) — If True, the first stage will downsample the inputs using a stride of 2.
  • downsample_in_bottleneck (bool, optional, defaults to False) — If True, the first conv 1x1 in ResNetBottleNeckLayer will downsample the inputs using a stride of 2.
  • out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage. Must be in the same order as defined in the stage_names attribute.
  • out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage. Must be in the same order as defined in the stage_names attribute.

This is the configuration class to store the configuration of a RTDetrResNetBackbone. It is used to instantiate a ResNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ResNet microsoft/resnet-50 architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import RTDetrResNetConfig, RTDetrResNetBackbone

>>> # Initializing a ResNet resnet-50 style configuration
>>> configuration = RTDetrResNetConfig()

>>> # Initializing a model (with random weights) from the resnet-50 style configuration
>>> model = RTDetrResNetBackbone(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

RTDetrImageProcessor

class transformers.RTDetrImageProcessor

( format: Union = <AnnotationFormat.COCO_DETECTION: 'coco_detection'> do_resize: bool = True size: Dict = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: Union = 0.00392156862745098 do_normalize: bool = False image_mean: Union = None image_std: Union = None do_convert_annotations: bool = True do_pad: bool = False pad_size: Optional = None **kwargs )

Parameters

  • format (str, optional, defaults to AnnotationFormat.COCO_DETECTION) — Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
  • do_resize (bool, optional, defaults to True) — Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method.
  • size (Dict[str, int], optional, defaults to {"height": 640, "width": 640}) — Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in the preprocess method. Available options are:
    • {"height": int, "width": int}: The image will be resized to the exact size (height, width). Do NOT keep the aspect ratio.
    • {"shortest_edge": int, "longest_edge": int}: The image will be resized to a maximum size respecting the aspect ratio and keeping the shortest edge less or equal to shortest_edge and the longest edge less or equal to longest_edge.
    • {"max_height": int, "max_width": int}: The image will be resized to the maximum size respecting the aspect ratio and keeping the height less or equal to max_height and the width less or equal to max_width.
  • resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image.
  • do_rescale (bool, optional, defaults to True) — Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method.
  • rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method.
  • do_normalize (bool, optional, defaults to False) — Whether to normalize the image.
  • image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) — Mean values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the image_mean parameter in the preprocess method.
  • image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) — Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the image_std parameter in the preprocess method.
  • do_convert_annotations (bool, optional, defaults to True) — Controls whether to convert the annotations to the format expected by the DETR model. Converts the bounding boxes to the format (center_x, center_y, width, height) and in the range [0, 1]. Can be overridden by the do_convert_annotations parameter in the preprocess method.
  • do_pad (bool, optional, defaults to False) — Controls whether to pad the image. Can be overridden by the do_pad parameter in the preprocess method. If True, padding will be applied to the bottom and right of the image with zeros. If pad_size is provided, the image will be padded to the specified dimensions. Otherwise, the image will be padded to the maximum height and width of the batch.
  • pad_size (Dict[str, int], optional) — The size {"height": int, "width": int} to pad the images to. Must be larger than any image size provided for preprocessing. If pad_size is not provided, images will be padded to the largest height and width in the batch.

Constructs a RT-DETR image processor.
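
As a sketch of how the parameters above fit together, the processor can be instantiated with a fixed resize target and optional padding. The values below are illustrative; the pretrained checkpoints ship with their own defaults:

>>> from transformers import RTDetrImageProcessor

>>> # resize every image to 640x640 and pad to a fixed size (illustrative values)
>>> image_processor = RTDetrImageProcessor(
...     size={"height": 640, "width": 640},
...     do_pad=True,
...     pad_size={"height": 640, "width": 640},
... )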

preprocess

( images: Union annotations: Union = None return_segmentation_masks: bool = None masks_path: Union = None do_resize: Optional = None size: Optional = None resample = None do_rescale: Optional = None rescale_factor: Union = None do_normalize: Optional = None do_convert_annotations: Optional = None image_mean: Union = None image_std: Union = None do_pad: Optional = None format: Union = None return_tensors: Union = None data_format: Union = <ChannelDimension.FIRST: 'channels_first'> input_data_format: Union = None pad_size: Optional = None )

Parameters

  • images (ImageInput) — Image or batch of images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
  • annotations (AnnotationType or List[AnnotationType], optional) — List of annotations associated with the image or batch of images. If annotation is for object detection, the annotations should be a dictionary with the following keys:
    • “image_id” (int): The image id.
    • “annotations” (List[Dict]): List of annotations for an image. Each annotation should be a dictionary. An image can have no annotations, in which case the list should be empty. If annotation is for segmentation, the annotations should be a dictionary with the following keys:
    • “image_id” (int): The image id.
    • “segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary. An image can have no segments, in which case the list should be empty.
    • “file_name” (str): The file name of the image.
  • return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) — Whether to return segmentation masks.
  • masks_path (str or pathlib.Path, optional) — Path to the directory containing the segmentation masks.
  • do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.
  • size (Dict[str, int], optional, defaults to self.size) — Size of the image’s (height, width) dimensions after resizing. Available options are:
    • {"height": int, "width": int}: The image will be resized to the exact size (height, width). Do NOT keep the aspect ratio.
    • {"shortest_edge": int, "longest_edge": int}: The image will be resized to a maximum size respecting the aspect ratio and keeping the shortest edge less or equal to shortest_edge and the longest edge less or equal to longest_edge.
    • {"max_height": int, "max_width": int}: The image will be resized to the maximum size respecting the aspect ratio and keeping the height less or equal to max_height and the width less or equal to max_width.
  • resample (PILImageResampling, optional, defaults to self.resample) — Resampling filter to use when resizing the image.
  • do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image.
  • rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to use when rescaling the image.
  • do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image.
  • do_convert_annotations (bool, optional, defaults to self.do_convert_annotations) — Whether to convert the annotations to the format expected by the model. Converts the bounding boxes from the format (top_left_x, top_left_y, width, height) to (center_x, center_y, width, height) and in relative coordinates.
  • image_mean (float or List[float], optional, defaults to self.image_mean) — Mean to use when normalizing the image.
  • image_std (float or List[float], optional, defaults to self.image_std) — Standard deviation to use when normalizing the image.
  • do_pad (bool, optional, defaults to self.do_pad) — Whether to pad the image. If True, padding will be applied to the bottom and right of the image with zeros. If pad_size is provided, the image will be padded to the specified dimensions. Otherwise, the image will be padded to the maximum height and width of the batch.
  • format (str or AnnotationFormat, optional, defaults to self.format) — Format of the annotations.
  • return_tensors (str or TensorType, optional, defaults to self.return_tensors) — Type of tensors to return. If None, will return the list of images.
  • data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
    • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
    • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
    • Unset: Use the channel dimension format of the input image.
  • input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
    • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
    • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
    • "none" or ChannelDimension.NONE: image in (height, width) format.
  • pad_size (Dict[str, int], optional) — The size {"height": int, "width": int} to pad the images to. Must be larger than any image size provided for preprocessing. If pad_size is not provided, images will be padded to the largest height and width in the batch.

Preprocess an image or a batch of images so that it can be used by the model.
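
For training, images can be preprocessed together with COCO-style detection annotations in the format described above. The sketch below assumes a single image with one annotation; the annotation values are purely illustrative:

>>> import requests
>>> from PIL import Image
>>> from transformers import RTDetrImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")

>>> # one COCO-style annotation; bbox is (x, y, width, height) in absolute pixels (illustrative values)
>>> annotations = {
...     "image_id": 39769,
...     "annotations": [{"category_id": 65, "bbox": [40.0, 73.0, 136.0, 45.0], "area": 6120.0, "iscrowd": 0}],
... }

>>> encoding = image_processor(images=image, annotations=annotations, return_tensors="pt")
>>> pixel_values = encoding["pixel_values"]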

post_process_object_detection

( outputs threshold: float = 0.5 target_sizes: Union = None use_focal_loss: bool = True ) β†’ List[Dict]

Parameters

  • outputs (RTDetrObjectDetectionOutput) — Raw outputs of the model.
  • threshold (float, optional, defaults to 0.5) — Score threshold to keep object detection predictions.
  • target_sizes (torch.Tensor or List[Tuple[int, int]], optional) — Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If unset, predictions will not be resized.
  • use_focal_loss (bool, defaults to True) — Whether focal loss was used to predict the outputs. If True, a sigmoid is applied to compute the scores of each detection; otherwise, a softmax function is used.

Returns

List[Dict]

A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.

Converts the raw output of RTDetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.

RTDetrModel

class transformers.RTDetrModel

( config: RTDetrConfig )

Parameters

  • config (RTDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

RT-DETR Model (consisting of a backbone and encoder-decoder) outputting raw hidden states without any head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( pixel_values: FloatTensor pixel_mask: Optional = None encoder_outputs: Optional = None inputs_embeds: Optional = None decoder_inputs_embeds: Optional = None labels: Optional = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None ) β†’ transformers.models.rt_detr.modeling_rt_detr.RTDetrModelOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See RTDetrImageProcessor.__call__() for details.
  • pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:

    • 1 for pixels that are real (i.e. not masked),
    • 0 for pixels that are padding (i.e. masked).

    What are attention masks?

  • encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image.
  • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation.
  • labels (List[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.models.rt_detr.modeling_rt_detr.RTDetrModelOutput or tuple(torch.FloatTensor)

A transformers.models.rt_detr.modeling_rt_detr.RTDetrModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RTDetrConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
  • intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder).
  • intermediate_logits (torch.FloatTensor of shape (batch_size, config.decoder_layers, sequence_length, config.num_labels)) — Stacked intermediate logits (logits of each layer of the decoder).
  • intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder).
  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
  • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
  • init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder.
  • enc_topk_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Predicted bounding box scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the encoder stage. Output of bounding box binary classification (i.e. foreground and background).
  • enc_topk_bboxes (torch.FloatTensor of shape (batch_size, sequence_length, 4)) — Logits of predicted bounding box coordinates in the encoder stage.
  • enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding box scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the first stage. Output of bounding box binary classification (i.e. foreground and background).
  • enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding box coordinates in the first stage.
  • denoising_meta_values (dict) — Extra dictionary for the denoising-related values.

The RTDetrModel forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from transformers import AutoImageProcessor, RTDetrModel
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
>>> model = RTDetrModel.from_pretrained("PekingU/rtdetr_r50vd")

>>> inputs = image_processor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 300, 256]
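
The additional decoder outputs documented in the returns section above can be accessed in the same way; the shapes in the comments follow that documentation:

>>> intermediate = outputs.intermediate_hidden_states  # (batch_size, config.decoder_layers, num_queries, hidden_size)
>>> init_points = outputs.init_reference_points  # (batch_size, num_queries, 4)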

RTDetrForObjectDetection

class transformers.RTDetrForObjectDetection

( config: RTDetrConfig )

Parameters

  • config (RTDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

RT-DETR Model (consisting of a backbone and encoder-decoder) outputting bounding boxes and logits to be further decoded into scores and classes.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( pixel_values: FloatTensor pixel_mask: Optional = None encoder_outputs: Optional = None inputs_embeds: Optional = None decoder_inputs_embeds: Optional = None labels: Optional = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None ) β†’ transformers.models.rt_detr.modeling_rt_detr.RTDetrObjectDetectionOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See RTDetrImageProcessor.__call__() for details.
  • pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:

    • 1 for pixels that are real (i.e. not masked),
    • 0 for pixels that are padding (i.e. masked).

    What are attention masks?

  • encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image.
  • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation.
  • labels (List[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.models.rt_detr.modeling_rt_detr.RTDetrObjectDetectionOutput or tuple(torch.FloatTensor)

A transformers.models.rt_detr.modeling_rt_detr.RTDetrObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RTDetrConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss.
  • loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
  • logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
  • pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use post_process_object_detection() to retrieve the unnormalized (absolute) bounding boxes.
  • auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer.
  • last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
  • intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder).
  • intermediate_logits (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, config.num_labels)) — Stacked intermediate logits (logits of each layer of the decoder).
  • intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder).
  • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
  • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
  • cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
  • encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
  • encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
  • encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
  • init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder.
  • enc_topk_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Class logits of the top scoring bounding boxes picked as region proposals in the encoder stage.
  • enc_topk_bboxes (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding box coordinates in the encoder stage.
  • enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding box scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the first stage. Output of bounding box binary classification (i.e. foreground and background).
  • enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding box coordinates in the first stage.
  • denoising_meta_values (dict) — Extra dictionary for the denoising-related values.

The RTDetrForObjectDetection forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from transformers import RTDetrImageProcessor, RTDetrForObjectDetection
>>> from PIL import Image
>>> import requests
>>> import torch

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
>>> model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")

>>> # prepare image for the model
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> # forward pass
>>> outputs = model(**inputs)

>>> logits = outputs.logits
>>> list(logits.shape)
[1, 300, 80]

>>> boxes = outputs.pred_boxes
>>> list(boxes.shape)
[1, 300, 4]

>>> # convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax)
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[
...     0
... ]

>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     box = [round(i, 2) for i in box.tolist()]
...     print(
...         f"Detected {model.config.id2label[label.item()]} with confidence "
...         f"{round(score.item(), 3)} at location {box}"
...     )
Detected sofa with confidence 0.97 at location [0.14, 0.38, 640.13, 476.21]
Detected cat with confidence 0.96 at location [343.38, 24.28, 640.14, 371.5]
Detected cat with confidence 0.958 at location [13.23, 54.18, 318.98, 472.22]
Detected remote with confidence 0.951 at location [40.11, 73.44, 175.96, 118.48]
Detected remote with confidence 0.924 at location [333.73, 76.58, 369.97, 186.99]
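
For training, labels can be passed in the format described for the labels argument above. The following sketch reuses the model and inputs from the example and builds one illustrative target per image; the class indices and boxes are made up, and boxes are normalized (center_x, center_y, width, height):

>>> # one target dict per image in the batch (illustrative values)
>>> labels = [
...     {
...         "class_labels": torch.tensor([15, 65]),
...         "boxes": torch.tensor([[0.25, 0.55, 0.50, 0.90], [0.17, 0.20, 0.21, 0.09]]),
...     }
... ]

>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss  # combined detection loss; see loss_dict for the individual terms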

RTDetrResNetBackbone

class transformers.RTDetrResNetBackbone

( config )

Parameters

  • config (RTDetrResNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

ResNet backbone, to be used with frameworks like RT-DETR.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( pixel_values: Tensor output_hidden_states: Optional = None return_dict: Optional = None ) β†’ transformers.modeling_outputs.BackboneOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See RTDetrImageProcessor.__call__() for details.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.BackboneOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BackboneOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (RTDetrResNetConfig) and inputs.

  • feature_maps (tuple(torch.FloatTensor) of shape (batch_size, num_channels, height, width)) — Feature maps of the stages.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size) or (batch_size, num_channels, height, width), depending on the backbone.

    Hidden-states of the model at the output of each stage plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Only applicable if the backbone uses attention.

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The RTDetrResNetBackbone forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from transformers import RTDetrResNetConfig, RTDetrResNetBackbone
>>> import torch

>>> config = RTDetrResNetConfig()
>>> model = RTDetrResNetBackbone(config)

>>> pixel_values = torch.randn(1, 3, 224, 224)

>>> with torch.no_grad():
...     outputs = model(pixel_values)

>>> feature_maps = outputs.feature_maps
>>> list(feature_maps[-1].shape)
[1, 2048, 7, 7]