| id | content |
|---|---|
codereview_new_python_data_4202
|
class AscendAssignResult(util_mixins.NiceRepr):
batch_neg_mask (IntTensor): Negative samples mask in all images.
batch_max_overlaps (FloatTensor): The max overlaps of all bboxes
and ground truth boxes.
- batch_anchor_gt_indes(None | LongTensor): The the assigned truth
box index of all anchors.
batch_anchor_gt_labels(None | LongTensor): The gt labels
of all anchors
```suggestion
batch_anchor_gt_indes(None | LongTensor): The assigned truth
```
class AscendAssignResult(util_mixins.NiceRepr):
batch_neg_mask (IntTensor): Negative samples mask in all images.
batch_max_overlaps (FloatTensor): The max overlaps of all bboxes
and ground truth boxes.
+ batch_anchor_gt_indes(None | LongTensor): The assigned truth
box index of all anchors.
batch_anchor_gt_labels(None | LongTensor): The gt labels
of all anchors
|
codereview_new_python_data_4203
|
]
test_pipeline = [
- dict(
- type='LoadImageFromFile',
- file_client_args={{_base_.file_client_args}}),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
Delete. Already supported
]
test_pipeline = [
+ dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
|
codereview_new_python_data_4204
|
]
test_pipeline = [
- dict(
- type='LoadImageFromFile',
- file_client_args={{_base_.file_client_args}}),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
```suggestion
file_client_args=_base_.file_client_args),
```
]
test_pipeline = [
+ dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
|
codereview_new_python_data_4205
|
class OccludedSeparatedCocoDataset(CocoDataset):
COCO val dataset, collecting separated objects and partially occluded
objects for a large variety of categories. In this way, we define
occlusion into two major categories: separated and partially occluded.
- Separation: target object segmentation mask is separated into distinct
regions by the occluder.
- Partial Occlusion: target object is partially occluded but the
segmentation mask is connected.
These two new scalable real-image datasets are to benchmark a model's
capability to detect occluded objects of 80 common categories.
```suggestion
- Separation: target object segmentation mask is separated into distinct
```
class OccludedSeparatedCocoDataset(CocoDataset):
COCO val dataset, collecting separated objects and partially occluded
objects for a large variety of categories. In this way, we define
occlusion into two major categories: separated and partially occluded.
+
- Separation: target object segmentation mask is separated into distinct
regions by the occluder.
- Partial Occlusion: target object is partially occluded but the
segmentation mask is connected.
+
These two new scalable real-image datasets are to benchmark a model's
capability to detect occluded objects of 80 common categories.
|
codereview_new_python_data_4206
|
class DetInferencer(BaseInferencer):
- """MMDet inferencer.
Args:
model (str, optional): Path to the config file or the model name
Object Detection Inferencer.
class DetInferencer(BaseInferencer):
+ """Object Detection Inferencer..
Args:
model (str, optional): Path to the config file or the model name
|
codereview_new_python_data_4207
|
def postprocess(
result_dict['visualization'] = visualization
return result_dict
def pred2dict(self,
data_sample: DetDataSample,
pred_out_file: str = '') -> Dict:
What is the convention for the JSON format? Should it be kept the same as the data sample?
def postprocess(
result_dict['visualization'] = visualization
return result_dict
+ # TODO: The data format and fields saved in json need further discussion.
+ # Maybe should include model name, timestamp, filename, image info etc.
def pred2dict(self,
data_sample: DetDataSample,
pred_out_file: str = '') -> Dict:
|
codereview_new_python_data_4208
|
def __call__(self, results):
# `numpy.transpose()` followed by `numpy.ascontiguousarray()`
# If image is already contiguous, use
# `torch.permute()` followed by `torch.contiguous()`
if not img.flags.c_contiguous:
img = np.ascontiguousarray(img.transpose(2, 0, 1))
img = to_tensor(img)
We can also add a link to this PR in the comments so that readers can find more details, like
```
Refer to https://github.com/open-mmlab/mmdetection/pull/9533 for more details
```
def __call__(self, results):
# `numpy.transpose()` followed by `numpy.ascontiguousarray()`
# If image is already contiguous, use
# `torch.permute()` followed by `torch.contiguous()`
+ # Refer to https://github.com/open-mmlab/mmdetection/pull/9533
+ # for more details
if not img.flags.c_contiguous:
img = np.ascontiguousarray(img.transpose(2, 0, 1))
img = to_tensor(img)
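To make the two conversion paths in the comments concrete, here is a minimal runnable sketch (the array and its size are illustrative, not from the PR):
```python
import numpy as np
import torch

img = np.random.rand(4, 4, 3).astype(np.float32)  # HWC image
if not img.flags.c_contiguous:
    # Non-contiguous: transpose in numpy, then copy into contiguous memory once
    tensor = torch.from_numpy(np.ascontiguousarray(img.transpose(2, 0, 1)))
else:
    # Already contiguous: permute lazily in torch, materialize with .contiguous()
    tensor = torch.from_numpy(img).permute(2, 0, 1).contiguous()
print(tensor.shape)  # torch.Size([3, 4, 4])
```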
|
codereview_new_python_data_4209
|
def aligned_bilinear(tensor: Tensor, factor: int) -> Tensor:
return tensor[:, :, :oh - 1, :ow - 1]
-def unfold_wo_center(x, kernel_size, dilation):
"""unfold_wo_center, used in original implement in BoxInst:
https://github.com/aim-uofa/AdelaiDet/blob/\
unfold_wo_center -> Unfold without xx center?
def aligned_bilinear(tensor: Tensor, factor: int) -> Tensor:
return tensor[:, :, :oh - 1, :ow - 1]
+def unfold_wo_center(x, kernel_size: int, dilation: int) -> Tensor:
"""unfold_wo_center, used in original implement in BoxInst:
https://github.com/aim-uofa/AdelaiDet/blob/\
|
codereview_new_python_data_4210
|
def transform(self, results: dict) -> dict:
retrieve_gt_bboxes = retrieve_results['gt_bboxes']
retrieve_gt_bboxes.rescale_([scale_ratio, scale_ratio])
if with_mask:
- retrieve_gt_masks: BitmapMasks = retrieve_results[
- 'gt_masks'].rescale(scale_ratio)
if self.bbox_clip_border:
retrieve_gt_bboxes.clip_([origin_h, origin_w])
It is not BitmapMasks.
def transform(self, results: dict) -> dict:
retrieve_gt_bboxes = retrieve_results['gt_bboxes']
retrieve_gt_bboxes.rescale_([scale_ratio, scale_ratio])
if with_mask:
+ retrieve_gt_masks = retrieve_results['gt_masks'].rescale(
+ scale_ratio)
if self.bbox_clip_border:
retrieve_gt_bboxes.clip_([origin_h, origin_w])
|
codereview_new_python_data_4211
|
EPS = 1.0e-7
-def center_of_mass(masks: Tensor, eps=1e-6):
n, h, w = masks.shape
grid_h = torch.arange(h, device=masks.device)[:, None]
grid_w = torch.arange(w, device=masks.device)
```suggestion
def center_of_mass(masks: Tensor, eps: float = 1e-6):
```
EPS = 1.0e-7
+def center_of_mass(masks: Tensor, eps: float = 1e-7) -> Tensor:
+ """Compute the masks center of mass.
+
+ Args:
+ masks: Mask tensor, has shape (num_masks, H, W).
+ eps: a small number to avoid normalizer to be zero.
+ Defaults to 1e-7.
+ Returns:
+ Tensor: The masks center of mass. Has shape (num_masks, 2).
+ """
n, h, w = masks.shape
grid_h = torch.arange(h, device=masks.device)[:, None]
grid_w = torch.arange(w, device=masks.device)
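A self-contained sketch of how the documented function could be completed; the normalization and the (x, y) output order are assumptions for illustration, not taken from the PR:
```python
import torch
from torch import Tensor

def center_of_mass_sketch(masks: Tensor, eps: float = 1e-7) -> Tensor:
    n, h, w = masks.shape
    grid_h = torch.arange(h, device=masks.device)[:, None]
    grid_w = torch.arange(w, device=masks.device)
    # eps keeps the normalizer positive for empty masks
    normalizer = masks.sum(dim=(1, 2)).clamp(min=eps)
    center_y = (masks * grid_h).sum(dim=(1, 2)) / normalizer
    center_x = (masks * grid_w).sum(dim=(1, 2)) / normalizer
    return torch.stack([center_x, center_y], dim=1)  # (num_masks, 2)

masks = torch.zeros(1, 5, 5)
masks[0, 2, 3] = 1.0
print(center_of_mass_sketch(masks))  # tensor([[3., 2.]])
```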
|
codereview_new_python_data_4212
|
tta_model = dict(
type='DetTTAModel',
tta_cfg=dict(nms=dict(type='nms', iou_threshold=0.5), max_per_img=100))
Check whether this follows the official implementation of CenterNet. (I think it is different from the original one, which simply fuses the scores of the model predictions.) If not, we should add a comment to indicate that this is different from the TTA of the official CenterNet.
+# This is different from the TTA of official CenterNet.
+
tta_model = dict(
type='DetTTAModel',
tta_cfg=dict(nms=dict(type='nms', iou_threshold=0.5), max_per_img=100))
|
codereview_new_python_data_4213
|
def get_ann_ids(self, img_ids=[], cat_ids=[], area_rng=[], iscrowd=None):
return self.getAnnIds(img_ids, cat_ids, area_rng, iscrowd)
def get_cat_ids(self, cat_names=[], sup_names=[], cat_ids=[]):
- cat_ids_coco = self.getCatIds(cat_names, sup_names, cat_ids)
- index = [i for i, v in enumerate(cat_names) if v is not None]
- cat_ids = list(range(len(cat_names)))
- for i in range(len(index)):
- cat_ids[index[i]] = cat_ids_coco[i]
- return cat_ids
def get_img_ids(self, img_ids=[], cat_ids=[]):
return self.getImgIds(img_ids, cat_ids)
remove this, we will support this in #9362
def get_ann_ids(self, img_ids=[], cat_ids=[], area_rng=[], iscrowd=None):
return self.getAnnIds(img_ids, cat_ids, area_rng, iscrowd)
def get_cat_ids(self, cat_names=[], sup_names=[], cat_ids=[]):
+ return self.getCatIds(cat_names, sup_names, cat_ids)
def get_img_ids(self, img_ids=[], cat_ids=[]):
return self.getImgIds(img_ids, cat_ids)
|
codereview_new_python_data_4214
|
def init_weights(self):
super().init_weights()
# The initialization below for transformer head is very
# important as we use Focal_loss for loss_cls
- if self.loss_cls.use_sigmoid:
bias_init = bias_init_with_prob(0.01)
nn.init.constant_(self.fc_cls.bias, bias_init)
change to `self.loss_cls.get("use_sigmoid", True)` to avoid errors when there is no such attribute.
def init_weights(self):
super().init_weights()
# The initialization below for transformer head is very
# important as we use Focal_loss for loss_cls
+ if self.loss_cls.get('use_sigmoid', True):
bias_init = bias_init_with_prob(0.01)
nn.init.constant_(self.fc_cls.bias, bias_init)
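A small sketch of why `.get` with a default is safer here, assuming `loss_cls` behaves like a config dict at this point (the config below is hypothetical):
```python
loss_cfg = dict(type='FocalLoss', loss_weight=1.0)  # no 'use_sigmoid' key
# loss_cfg['use_sigmoid'] would raise KeyError; .get falls back to the default
print(loss_cfg.get('use_sigmoid', True))  # True
```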
|
codereview_new_python_data_4215
|
def forward_decoder(self, query: Tensor, query_pos: Tensor, memory: Tensor,
query_pos=query_pos,
key_pos=memory_pos,
key_padding_mask=memory_mask)
- references = references.transpose(0, 1)
head_inputs_dict = dict(
hidden_states=hidden_states, references=references)
return head_inputs_dict
Can we remove this transpose operation? I believe it is unnecessary
def forward_decoder(self, query: Tensor, query_pos: Tensor, memory: Tensor,
query_pos=query_pos,
key_pos=memory_pos,
key_padding_mask=memory_mask)
head_inputs_dict = dict(
hidden_states=hidden_states, references=references)
return head_inputs_dict
|
codereview_new_python_data_4216
|
def forward(self, query: Tensor, key: Tensor, value: Tensor,
(num_decoder_layers, bs, num_queries, 2).
"""
reference_unsigmoid = self.ref_point_head(
- query_pos) # [num_queries, batch_size, 2]
- reference = reference_unsigmoid.sigmoid().transpose(0, 1)
- reference_xy = reference[..., :2].transpose(0, 1)
intermediate = []
for layer_id, layer in enumerate(self.layers):
if layer_id == 0:
check the shape comments
def forward(self, query: Tensor, key: Tensor, value: Tensor,
(num_decoder_layers, bs, num_queries, 2).
"""
reference_unsigmoid = self.ref_point_head(
+ query_pos) # [bs, num_queries, 2]
+ reference = reference_unsigmoid.sigmoid()
+ reference_xy = reference[..., :2]
intermediate = []
for layer_id, layer in enumerate(self.layers):
if layer_id == 0:
|
codereview_new_python_data_4217
|
def forward(self, query: Tensor, key: Tensor, value: Tensor,
(num_decoder_layers, bs, num_queries, 2).
"""
reference_unsigmoid = self.ref_point_head(
- query_pos) # [num_queries, batch_size, 2]
- reference = reference_unsigmoid.sigmoid().transpose(0, 1)
- reference_xy = reference[..., :2].transpose(0, 1)
intermediate = []
for layer_id, layer in enumerate(self.layers):
if layer_id == 0:
too many unnecessary transpose operations for `reference`; try to make it clearer
def forward(self, query: Tensor, key: Tensor, value: Tensor,
(num_decoder_layers, bs, num_queries, 2).
"""
reference_unsigmoid = self.ref_point_head(
+ query_pos) # [bs, num_queries, 2]
+ reference = reference_unsigmoid.sigmoid()
+ reference_xy = reference[..., :2]
intermediate = []
for layer_id, layer in enumerate(self.layers):
if layer_id == 0:
|
codereview_new_python_data_4218
|
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=50, val_interval=1)
param_scheduler = [dict(type='MultiStepLR', end=50, milestones=[40])]
-randomness = dict(seed=42, deterministic=True)
is it necessary?
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=50, val_interval=1)
param_scheduler = [dict(type='MultiStepLR', end=50, milestones=[40])]
|
codereview_new_python_data_4219
|
def transform(self, results: dict) -> dict:
# For image(type=float32), after convert bgr to hsv by opencv,
# valid saturation value range is [0, 1]
if saturation_value > 1:
- img[..., 1][img[..., 1] > 1] = 1
# random hue
if hue_flag:
```suggestion
img[..., 1] = img[..., 1].clip(0, 1)
```
def transform(self, results: dict) -> dict:
# For image(type=float32), after convert bgr to hsv by opencv,
# valid saturation value range is [0, 1]
if saturation_value > 1:
+ img[..., 1] = img[..., 1].clip(0, 1)
# random hue
if hue_flag:
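A minimal demonstration of the suggested `clip`, which bounds the whole saturation channel (including values below 0) instead of only capping values above 1; the array is hypothetical:
```python
import numpy as np

hsv = (np.random.rand(2, 2, 3).astype(np.float32) - 0.2) * 1.5
hsv[..., 1] = hsv[..., 1].clip(0, 1)  # clamp saturation into valid [0, 1]
assert 0.0 <= hsv[..., 1].min() and hsv[..., 1].max() <= 1.0
```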
|
codereview_new_python_data_4220
|
def __init__(self,
dcn: OptConfigType = None,
plugins: OptConfigType = None,
init_fg: OptMultiConfig = None) -> None:
- super(SimplifiedBasicBlock, self).__init__(init_fg)
assert dcn is None, 'Not implemented yet.'
assert plugins is None, 'Not implemented yet.'
assert not with_cp, 'Not implemented yet.'
```suggestion
super().__init__(init_fg=init_fg)
```
def __init__(self,
dcn: OptConfigType = None,
plugins: OptConfigType = None,
init_fg: OptMultiConfig = None) -> None:
+ super().__init__(init_fg=init_fg)
assert dcn is None, 'Not implemented yet.'
assert plugins is None, 'Not implemented yet.'
assert not with_cp, 'Not implemented yet.'
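A minimal sketch of the suggested zero-argument `super()` call; the classes here are illustrative stand-ins, not the real BaseModule:
```python
class Base:
    def __init__(self, init_cfg=None):
        self.init_cfg = init_cfg

class Child(Base):
    def __init__(self, init_cfg=None):
        # Python 3 style: no class/self arguments, explicit keyword for clarity
        super().__init__(init_cfg=init_cfg)

print(Child(init_cfg=dict(type='Kaiming')).init_cfg)  # {'type': 'Kaiming'}
```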
|
codereview_new_python_data_4221
|
def main():
'"auto_scale_lr.enable" or '
'"auto_scale_lr.base_batch_size" in your'
' configuration file.')
-
- cfg.resume = args.resume
# build the runner from config
if 'runner_type' not in cfg:
Is it necessary to add `load-from` too?
def main():
'"auto_scale_lr.enable" or '
'"auto_scale_lr.base_batch_size" in your'
' configuration file.')
+ if args.resume:
+ cfg.resume = args.resume
# build the runner from config
if 'runner_type' not in cfg:
|
codereview_new_python_data_4222
|
def main():
cfg.resume = args.resume
# resume is determined in this priority: resume from > auto_resume
- cfg.resume = args.resume
if args.resume_from is not None:
cfg.resume = True
cfg.load_from = args.resume_from
This line will cause a resume conflict. Please refer to the logic in https://github.com/open-mmlab/mmdetection/pull/9287
def main():
cfg.resume = args.resume
# resume is determined in this priority: resume from > auto_resume
if args.resume_from is not None:
cfg.resume = True
cfg.load_from = args.resume_from
|
codereview_new_python_data_4223
|
def parse_args():
parser = argparse.ArgumentParser(description='Print the whole config')
parser.add_argument('config', help='config file path')
parser.add_argument(
- '--save_path', default=None, help='save path of whole config, it can be suffixed with .py, .json, .yml')
parser.add_argument(
'--cfg-options',
nargs='+',
```suggestion
'--save-path', default=None, help='save path of whole config, it can be suffixed with .py, .json, .yml')
```
def parse_args():
parser = argparse.ArgumentParser(description='Print the whole config')
parser.add_argument('config', help='config file path')
parser.add_argument(
+ '--save-path', default=None, help='save path of whole config, it can be suffixed with .py, .json, .yml')
parser.add_argument(
'--cfg-options',
nargs='+',
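For reference, dashes in argparse flag names become underscores on the parsed namespace, so the rename keeps `args.save_path` working; a quick runnable check:
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--save-path', default=None)
args = parser.parse_args(['--save-path', 'out.py'])
print(args.save_path)  # 'out.py'
```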
|
codereview_new_python_data_4224
|
def main():
if args.save_path is not None:
save_path = args.save_path
if not os.path.exists(os.path.split(save_path)[0]):
os.makedirs(os.path.split(save_path)[0])
cfg.dump(save_path)
if __name__ == '__main__':
Maybe we need to check the file suffix?
def main():
if args.save_path is not None:
save_path = args.save_path
+
+ suffix = os.path.splitext(save_path)[-1]
+        assert suffix in ['.py', '.json', '.yml']
+
if not os.path.exists(os.path.split(save_path)[0]):
os.makedirs(os.path.split(save_path)[0])
cfg.dump(save_path)
+        print(f'Config saved to {save_path}')
if __name__ == '__main__':
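Note that `os.path.splitext` keeps the leading dot, which the suffix assertion above has to account for; a quick check:
```python
import os

print(os.path.splitext('work_dirs/whole_config.py')[-1])  # '.py', not 'py'
```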
|
codereview_new_python_data_4225
|
def train_detector(model,
# fp16 setting
fp16_cfg = cfg.get('fp16', None)
if fp16_cfg is not None:
optimizer_config = Fp16OptimizerHook(
**cfg.optimizer_config, **fp16_cfg, distributed=distributed)
Does it mean when using NPU we use FP16 by default?
def train_detector(model,
# fp16 setting
fp16_cfg = cfg.get('fp16', None)
+ if fp16_cfg is None and cfg.get('device', None) == 'npu':
+ fp16_cfg = dict(loss_scale='dynamic')
if fp16_cfg is not None:
optimizer_config = Fp16OptimizerHook(
**cfg.optimizer_config, **fp16_cfg, distributed=distributed)
|
codereview_new_python_data_4226
|
def train_detector(model,
# fp16 setting
fp16_cfg = cfg.get('fp16', None)
if fp16_cfg is not None:
optimizer_config = Fp16OptimizerHook(
**cfg.optimizer_config, **fp16_cfg, distributed=distributed)
`dynamic` is better?
def train_detector(model,
# fp16 setting
fp16_cfg = cfg.get('fp16', None)
+ if fp16_cfg is None and cfg.get('device', None) == 'npu':
+ fp16_cfg = dict(loss_scale='dynamic')
if fp16_cfg is not None:
optimizer_config = Fp16OptimizerHook(
**cfg.optimizer_config, **fp16_cfg, distributed=distributed)
|
codereview_new_python_data_4227
|
model = dict(
type='DABDETR',
num_query=300,
- iter_update=True,
random_refpoints_xy=False,
num_patterns=0,
data_preprocessor=dict(
There is a similar arg `with_box_refine` in Deformable DETR, let's unify the arg name!
model = dict(
type='DABDETR',
num_query=300,
random_refpoints_xy=False,
num_patterns=0,
data_preprocessor=dict(
|
codereview_new_python_data_4228
|
def __init__(self,
init_cfg=None):
super().__init__(init_cfg=init_cfg)
- assert batch_first is True, 'First \
- dimension of all DETRs in mmdet is \
- `batch`, please set `batch_first` flag.'
self.cross_attn = cross_attn
self.keep_query_pos = keep_query_pos
we do not support `batch_first=False` for ConditionalAttention
@LYMDLUT @KeiChiTse please help to fix this
def __init__(self,
init_cfg=None):
super().__init__(init_cfg=init_cfg)
+        assert batch_first is True, 'Setting `batch_first`\
+ to False is NOT supported in ConditionalAttention. \
+ First dimension of all DETRs in mmdet is `batch`, \
+ please set `batch_first` to True.'
self.cross_attn = cross_attn
self.keep_query_pos = keep_query_pos
|
codereview_new_python_data_4229
|
def nchw_to_nlc(x):
return x.flatten(2).transpose(1, 2).contiguous()
-def convert_coordinate_to_encoding(coord_tensor: Tensor,
- num_feats: int = 128,
- temperature: int = 10000,
- scale: float = 2 * math.pi):
"""Convert coordinate tensor to positional encoding.
Args:
Why do these two functions still exist in this PR? They should have been merged in Conditional DETR
def nchw_to_nlc(x):
return x.flatten(2).transpose(1, 2).contiguous()
+def coordinate_to_encoding(coord_tensor: Tensor,
+ num_feats: int = 128,
+ temperature: int = 10000,
+ scale: float = 2 * math.pi):
"""Convert coordinate tensor to positional encoding.
Args:
|
codereview_new_python_data_4230
|
def nchw_to_nlc(x):
return x.flatten(2).transpose(1, 2).contiguous()
-def convert_coordinate_to_encoding(coord_tensor: Tensor,
- num_feats: int = 128,
- temperature: int = 10000,
- scale: float = 2 * math.pi):
"""Convert coordinate tensor to positional encoding.
Args:
convert_coordinate_to_encoding -> coordinate_to_encoding
def nchw_to_nlc(x):
return x.flatten(2).transpose(1, 2).contiguous()
+def coordinate_to_encoding(coord_tensor: Tensor,
+ num_feats: int = 128,
+ temperature: int = 10000,
+ scale: float = 2 * math.pi):
"""Convert coordinate tensor to positional encoding.
Args:
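To make the renamed helper concrete, a minimal sinusoidal-encoding sketch consistent with the signature above; the interleaving details are assumptions, and the actual mmdet implementation may differ:
```python
import math

import torch
from torch import Tensor

def coordinate_to_encoding_sketch(coord_tensor: Tensor,
                                  num_feats: int = 128,
                                  temperature: int = 10000,
                                  scale: float = 2 * math.pi) -> Tensor:
    """Map coords in [0, 1] to a (..., 2 * num_feats) sinusoidal encoding."""
    dim_t = torch.arange(num_feats, dtype=torch.float32,
                         device=coord_tensor.device)
    dim_t = temperature**(2 * torch.div(dim_t, 2, rounding_mode='floor') /
                          num_feats)
    pos = coord_tensor[..., None] * scale / dim_t  # (..., 2, num_feats)
    # sin on even channels, cos on odd channels, then interleave
    pos = torch.stack((pos[..., 0::2].sin(), pos[..., 1::2].cos()),
                      dim=-1).flatten(-2)
    return pos.flatten(-2)  # concatenate the x and y encodings

print(coordinate_to_encoding_sketch(torch.rand(4, 2)).shape)  # (4, 256)
```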
|
codereview_new_python_data_4231
|
def _predict_by_feat_single(self,
mode='bilinear',
align_corners=False).squeeze(0) > cfg.mask_thr
else:
- masks = mask_preds > cfg.mask_thr
return masks
You also need to add `squeeze(0)` here. Or you can move the `unsqueeze(0)` into the rescale part.
def _predict_by_feat_single(self,
mode='bilinear',
align_corners=False).squeeze(0) > cfg.mask_thr
else:
+ masks = mask_preds.squeeze(0) > cfg.mask_thr
return masks
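A tiny shape check of why the extra `squeeze(0)` is needed on this branch (sizes are hypothetical):
```python
import torch

mask_preds = torch.rand(1, 10, 32, 32)  # dummy batch dim from unsqueeze(0)
masks = mask_preds.squeeze(0) > 0.5     # drop it before thresholding
print(masks.shape)  # torch.Size([10, 32, 32]), matching the rescaled branch
```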
|
codereview_new_python_data_4232
|
norm_cfg = dict(type='SyncBN', requires_grad=True)
# Use MMSyncBN that handles empty tensor in head. It can be changed to
# SyncBN after https://github.com/pytorch/pytorch/issues/36530 is fixed
-# Requires MMCV after https://github.com/open-mmlab/mmcv/pull/1205.
head_norm_cfg = dict(type='MMSyncBN', requires_grad=True)
model = dict(
# the model is trained from scratch, so init_cfg is None
we can delete this line of comments because mmdet 3.x relies on mmcv 2.x, which already supports it.
norm_cfg = dict(type='SyncBN', requires_grad=True)
# Use MMSyncBN that handles empty tensor in head. It can be changed to
# SyncBN after https://github.com/pytorch/pytorch/issues/36530 is fixed
head_norm_cfg = dict(type='MMSyncBN', requires_grad=True)
model = dict(
# the model is trained from scratch, so init_cfg is None
|
codereview_new_python_data_4233
|
def loss(self,
if num_pos > 0:
loss_mask = torch.cat(loss_mask).sum() / num_pos
else:
- loss_mask = mask_feats.new_zeros(1).mean()
# cate
flatten_labels = [
usually we use the results.sum() * 0 to include all parameters in the graph
def loss(self,
if num_pos > 0:
loss_mask = torch.cat(loss_mask).sum() / num_pos
else:
+ loss_mask = mask_feats.sum() * 0
# cate
flatten_labels = [
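A small sketch of the `sum() * 0` trick: the loss is zero, but the features stay connected to the autograd graph, so the parameters still receive (zero) gradients:
```python
import torch

feats = torch.randn(2, 8, requires_grad=True)
loss = feats.sum() * 0  # zero-valued, but feats remains in the graph
loss.backward()
print(feats.grad is not None, feats.grad.abs().sum())  # True tensor(0.)
```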
|
codereview_new_python_data_4234
|
def __init__(self,
self.pixel_decoder = MODELS.build(pixel_decoder)
self.transformer_decoder = MODELS.build(transformer_decoder)
self.decoder_embed_dims = self.transformer_decoder.embed_dims
- if isinstance(
- self.pixel_decoder,
- PixelDecoder) and (self.decoder_embed_dims != in_channels[-1]
- or enforce_decoder_input_project):
self.decoder_input_proj = Conv2d(
in_channels[-1], self.decoder_embed_dims, kernel_size=1)
else:
It causes a unit test error.
I traced it and found the conflict:
when self.pixel_decoder is a `TransformerEncoderPixelDecoder`, which inherits from `PixelDecoder`,
`isinstance(self.pixel_decoder, PixelDecoder)` returns `True`, while the original `pixel_decoder_type == 'PixelDecoder'` returns `False`.
I tried replacing it with `self.pixel_decoder._get_name() == 'PixelDecoder'`,
i.e.
```Python
pixel_decoder_type = self.pixel_decoder._get_name()
```
The unit test passes.
def __init__(self,
self.pixel_decoder = MODELS.build(pixel_decoder)
self.transformer_decoder = MODELS.build(transformer_decoder)
self.decoder_embed_dims = self.transformer_decoder.embed_dims
+ if type(self.pixel_decoder) == PixelDecoder and (
+ self.decoder_embed_dims != in_channels[-1]
+ or enforce_decoder_input_project):
self.decoder_input_proj = Conv2d(
in_channels[-1], self.decoder_embed_dims, kernel_size=1)
else:
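A minimal reproduction of the conflict described above, with stand-in classes rather than the real mmdet ones:
```python
class PixelDecoder:
    pass

class TransformerEncoderPixelDecoder(PixelDecoder):
    pass

decoder = TransformerEncoderPixelDecoder()
print(isinstance(decoder, PixelDecoder))  # True: matches subclasses too
print(type(decoder) == PixelDecoder)      # False: matches the exact class only
```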
|
codereview_new_python_data_4235
|
def __init__(self,
self.pixel_decoder = MODELS.build(pixel_decoder)
self.transformer_decoder = MODELS.build(transformer_decoder)
self.decoder_embed_dims = self.transformer_decoder.embed_dims
- if isinstance(
- self.pixel_decoder,
- PixelDecoder) and (self.decoder_embed_dims != in_channels[-1]
- or enforce_decoder_input_project):
self.decoder_input_proj = Conv2d(
in_channels[-1], self.decoder_embed_dims, kernel_size=1)
else:
```suggestion
if type(self.pixel_decoder) == PixelDecoder and (
self.decoder_embed_dims != in_channels[-1]
or enforce_decoder_input_project):
```
def __init__(self,
self.pixel_decoder = MODELS.build(pixel_decoder)
self.transformer_decoder = MODELS.build(transformer_decoder)
self.decoder_embed_dims = self.transformer_decoder.embed_dims
+ if type(self.pixel_decoder) == PixelDecoder and (
+ self.decoder_embed_dims != in_channels[-1]
+ or enforce_decoder_input_project):
self.decoder_input_proj = Conv2d(
in_channels[-1], self.decoder_embed_dims, kernel_size=1)
else:
|
codereview_new_python_data_4236
|
def loss_by_feat(
loss_dict = super(DeformableDETRHead, self).loss_by_feat(
all_layers_matching_cls_scores, all_layers_matching_bbox_preds,
batch_gt_instances, batch_img_metas, batch_gt_instances_ignore)
# loss of proposal generated from encode feature map.
if enc_cls_scores is not None:
Seems the loss_by_feat of DETR is used here.
What is the difference between the DETR and DeformableDETR versions?
def loss_by_feat(
loss_dict = super(DeformableDETRHead, self).loss_by_feat(
all_layers_matching_cls_scores, all_layers_matching_bbox_preds,
batch_gt_instances, batch_img_metas, batch_gt_instances_ignore)
+ # NOTE DETRHead.loss_by_feat but not DeformableDETRHead.loss_by_feat
+ # is called, because the encoder loss calculations are different
+ # between DINO and DeformableDETR.
# loss of proposal generated from encode feature map.
if enc_cls_scores is not None:
|
codereview_new_python_data_4237
|
def generate_dn_bbox_query(self, gt_bboxes: Tensor,
have the points both between the inner and outer squares.
Besides, the length of outer square is twice as long as that of
- the inner square, i.e., self.box_noise_scale * 2 * w_or_h.
NOTE The noise is added to all the bboxes. Moreover, there is still
unconsidered case when one point is within the positive square and
the others is between the inner and outer squares.
self.box_noise_scale * w_or_h / 2
def generate_dn_bbox_query(self, gt_bboxes: Tensor,
have the points both between the inner and outer squares.
Besides, the length of outer square is twice as long as that of
+ the inner square, i.e., self.box_noise_scale * w_or_h / 2.
NOTE The noise is added to all the bboxes. Moreover, there is still
unconsidered case when one point is within the positive square and
the others is between the inner and outer squares.
|
codereview_new_python_data_4238
|
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
backbone=dict(
_delete_=True,
Does it mean this PR can only be merged after MMCls supports auto import?
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
+# TODO: delete custom_imports after mmcls supports auto import
+# please install mmcls>=1.0
+# import mmcls.models to trigger register_module in mmcls
+custom_imports = dict(imports=['mmcls.models'], allow_failed_imports=False)
+
model = dict(
backbone=dict(
_delete_=True,
|
codereview_new_python_data_4239
|
def main():
if args.cfg_options is not None:
cfg.merge_from_dict(args.cfg_options)
- init_default_scope(cfg.get())
distributed = False
if args.launcher != 'none':
It should be uniformly written as init_default_scope(cfg.get('default_scope', 'mmdet'))
def main():
if args.cfg_options is not None:
cfg.merge_from_dict(args.cfg_options)
+ init_default_scope(cfg.get('default_scope', 'mmdet'))
distributed = False
if args.launcher != 'none':
|
codereview_new_python_data_4240
|
LOG_PROCESSORS = Registry(
'log_processor',
parent=MMENGINE_LOG_PROCESSORS,
- locations=['mmdet.visualization'])
For those that have never been used in mmdet, can the locations just be written casually?
LOG_PROCESSORS = Registry(
'log_processor',
parent=MMENGINE_LOG_PROCESSORS,
+ # TODO: update the location when mmdet has its own log processor
+ locations=['mmdet.engine'])
|
codereview_new_python_data_4241
|
def parse_data_info(self, raw_data_info: dict) -> Union[dict, List[dict]]:
if self.data_prefix.get('seg', None):
seg_map_path = osp.join(
self.data_prefix['seg'],
- img_info['filename'].rsplit('.', 1)[0] + self.seg_suffix)
else:
seg_map_path = None
data_info['img_path'] = img_path
we can use seg_map_suffix now to be consistent with mmseg.
def parse_data_info(self, raw_data_info: dict) -> Union[dict, List[dict]]:
if self.data_prefix.get('seg', None):
seg_map_path = osp.join(
self.data_prefix['seg'],
+ img_info['filename'].rsplit('.', 1)[0] + self.seg_map_suffix)
else:
seg_map_path = None
data_info['img_path'] = img_path
|
codereview_new_python_data_4242
|
def parse_data_info(self, raw_data_info: dict) -> Union[dict, List[dict]]:
if self.data_prefix.get('seg', None):
seg_map_path = osp.join(
self.data_prefix['seg'],
- img_info['filename'].rsplit('.', 1)[0] + self.seg_map_suffix)
else:
seg_map_path = None
data_info['img_path'] = img_path
is the change from file_name to filename correct? Did you verify the modification?
def parse_data_info(self, raw_data_info: dict) -> Union[dict, List[dict]]:
if self.data_prefix.get('seg', None):
seg_map_path = osp.join(
self.data_prefix['seg'],
+ img_info['file_name'].rsplit('.', 1)[0] + self.seg_map_suffix)
else:
seg_map_path = None
data_info['img_path'] = img_path
|
codereview_new_python_data_4243
|
class EIoULoss(nn.Module):
Code is modified from https://github.com//ShiqiYu/libfacedetection.train.
Args:
- pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2),
- shape (n, 4).
- target (Tensor): Corresponding gt bboxes, shape (n, 4).
- smooth_point (float): hyperparameter, default is 0.1.
eps (float): Eps to avoid log(0).
- Return:
- Tensor: Loss tensor.
"""
def __init__(self,
The order of the args should be consistent with the __init__ method.
Also delete the "Return" section.
class EIoULoss(nn.Module):
Code is modified from https://github.com//ShiqiYu/libfacedetection.train.
Args:
eps (float): Eps to avoid log(0).
+ reduction (str): Options are "none", "mean" and "sum".
+ loss_weight (float): Weight of loss.
+ smooth_point (float): hyperparameter, default is 0.1.
"""
def __init__(self,
|
codereview_new_python_data_4244
|
def _parse_ann_info(self, img_info, ann_info):
else:
gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32)
- seg_map = img_info['filename'].rsplit('.', 1)[0] + self.seg_map_suffix
ann = dict(
bboxes=gt_bboxes,
seems we do not need img_suffix from this line.
def _parse_ann_info(self, img_info, ann_info):
else:
gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32)
+ seg_map = img_info['filename'].rsplit('.', 1)[0] + self.seg_suffix
ann = dict(
bboxes=gt_bboxes,
|
codereview_new_python_data_4245
|
def main():
dataset = DATASETS.build(cfg.test_dataloader.dataset)
predictions = mmengine.load(args.pkl_results)
- assert len(dataset) == len(predictions)
evaluator = Evaluator(cfg.test_evaluator)
evaluator.dataset_meta = dataset.metainfo
this line seems useless because we no longer need to pass the dataset to the evaluator
def main():
dataset = DATASETS.build(cfg.test_dataloader.dataset)
predictions = mmengine.load(args.pkl_results)
evaluator = Evaluator(cfg.test_evaluator)
evaluator.dataset_meta = dataset.metainfo
|
codereview_new_python_data_4246
|
times=3,
dataset=dict(
type='ConcatDataset',
ignore_keys=['DATASET_TYPE'],
datasets=[
dict(
add comments to tell users why ignore_keys are needed here
times=3,
dataset=dict(
type='ConcatDataset',
+        # VOCDataset will add `DATASET_TYPE` to dataset.metainfo,
+        # which will cause an error when using ConcatDataset. Adding
+ # `ignore_keys` can avoid this error.
ignore_keys=['DATASET_TYPE'],
datasets=[
dict(
|
codereview_new_python_data_4247
|
def empty_instances(batch_img_metas: List[dict],
Defaults to False.
num_classes (int): num_classes of bbox_head. Defaults to 80.
score_per_cls (bool): Whether to generate class-aware score for
- the empty instance. Defaults to False.
Returns:
list[:obj:`InstanceData`]: Detection results of each image
Need to explain in detail when it is True.
def empty_instances(batch_img_metas: List[dict],
Defaults to False.
num_classes (int): num_classes of bbox_head. Defaults to 80.
score_per_cls (bool): Whether to generate class-aware score for
+ the empty instance. ``score_per_cls`` will be True when the model
+            needs to return the raw results without NMS. Defaults to False.
Returns:
list[:obj:`InstanceData`]: Detection results of each image
|
codereview_new_python_data_4248
|
class SSHModule(BaseModule):
in_channels (int): Number of input channels used at each scale.
out_channels (int): Number of output channels used at each scale.
conv_cfg (dict, optional): Config dict for convolution layer.
- Default: None, which means using conv2d.
norm_cfg (dict): Config dict for normalization layer.
Defaults to dict(type='BN').
init_cfg (dict or list[dict], optional): Initialization config dict.
```suggestion
Defaults to None.
```
class SSHModule(BaseModule):
in_channels (int): Number of input channels used at each scale.
out_channels (int): Number of output channels used at each scale.
conv_cfg (dict, optional): Config dict for convolution layer.
+ Defaults to None.
norm_cfg (dict): Config dict for normalization layer.
Defaults to dict(type='BN').
init_cfg (dict or list[dict], optional): Initialization config dict.
|
codereview_new_python_data_4249
|
def gen_masks_from_bboxes(self, bboxes, img_shape):
return BitmapMasks(gt_masks, img_h, img_w)
def get_gt_masks(self, results):
- """Check gt_masks in results.
If gt_masks is not contained in results,
it will be generated based on gt_bboxes.
Get gt_masks originally or generated based on bboxes.
def gen_masks_from_bboxes(self, bboxes, img_shape):
return BitmapMasks(gt_masks, img_h, img_w)
def get_gt_masks(self, results):
+ """get gt_masks originally or generated based on bboxes.
If gt_masks is not contained in results,
it will be generated based on gt_bboxes.
|
codereview_new_python_data_4250
|
backbone=dict(
depth=18,
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18')),
- bbox_head=dict(in_channels=512))
in_channels is the dim of the transformer input, not the bbox_head
backbone=dict(
depth=18,
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18')),
+ neck=dict(in_channels=[512]))
|
codereview_new_python_data_4251
|
def _init_layers(self) -> None:
def init_weights(self) -> None:
super().init_weights()
self._init_transformer_weights()
- self._is_init = True
def _init_transformer_weights(self) -> None:
# follow the DetrTransformer to init parameters
Where is `_is_init` used?
def _init_layers(self) -> None:
def init_weights(self) -> None:
super().init_weights()
self._init_transformer_weights()
def _init_transformer_weights(self) -> None:
# follow the DetrTransformer to init parameters
|
codereview_new_python_data_4252
|
_base_ = 'deformable-detr_refine_r50_16xb2-50e_coco.py'
-model = dict(as_two_stage=True, bbox_head=dict(num_pred=7, as_two_stage=True))
Could we only set this `as_two_stage` in the detector?
_base_ = 'deformable-detr_refine_r50_16xb2-50e_coco.py'
+model = dict(as_two_stage=True)
|
codereview_new_python_data_4253
|
_base_ = 'deformable-detr_refine_r50_16xb2-50e_coco.py'
-model = dict(as_two_stage=True, bbox_head=dict(num_pred=7, as_two_stage=True))
`num_pred` may be confusing
_base_ = 'deformable-detr_refine_r50_16xb2-50e_coco.py'
+model = dict(as_two_stage=True)
|
codereview_new_python_data_4254
|
act_cfg=None,
norm_cfg=dict(type='GN', num_groups=32),
num_outs=4),
- encoder_cfg=dict( # DeformableDetrTransformerEncoder
num_layers=6,
layer_cfg=dict( # DeformableDetrTransformerEncoderLayer
self_attn_cfg=dict( # MultiScaleDeformableAttention
embed_dims=256),
ffn_cfg=dict(
embed_dims=256, feedforward_channels=1024, ffn_drop=0.1))),
- decoder_cfg=dict( # DeformableDetrTransformerDecoder
num_layers=6,
return_intermediate=True,
layer_cfg=dict( # DeformableDetrTransformerDecoderLayer
encoder_cfg -> encoder
act_cfg=None,
norm_cfg=dict(type='GN', num_groups=32),
num_outs=4),
+ encoder=dict( # DeformableDetrTransformerEncoder
num_layers=6,
layer_cfg=dict( # DeformableDetrTransformerEncoderLayer
self_attn_cfg=dict( # MultiScaleDeformableAttention
embed_dims=256),
ffn_cfg=dict(
embed_dims=256, feedforward_channels=1024, ffn_drop=0.1))),
+ decoder=dict( # DeformableDetrTransformerDecoder
num_layers=6,
return_intermediate=True,
layer_cfg=dict( # DeformableDetrTransformerDecoderLayer
|
codereview_new_python_data_4255
|
act_cfg=None,
norm_cfg=dict(type='GN', num_groups=32),
num_outs=4),
- encoder_cfg=dict( # DeformableDetrTransformerEncoder
num_layers=6,
layer_cfg=dict( # DeformableDetrTransformerEncoderLayer
self_attn_cfg=dict( # MultiScaleDeformableAttention
embed_dims=256),
ffn_cfg=dict(
embed_dims=256, feedforward_channels=1024, ffn_drop=0.1))),
- decoder_cfg=dict( # DeformableDetrTransformerDecoder
num_layers=6,
return_intermediate=True,
layer_cfg=dict( # DeformableDetrTransformerDecoderLayer
decoder_cfg -> decoder
act_cfg=None,
norm_cfg=dict(type='GN', num_groups=32),
num_outs=4),
+ encoder=dict( # DeformableDetrTransformerEncoder
num_layers=6,
layer_cfg=dict( # DeformableDetrTransformerEncoderLayer
self_attn_cfg=dict( # MultiScaleDeformableAttention
embed_dims=256),
ffn_cfg=dict(
embed_dims=256, feedforward_channels=1024, ffn_drop=0.1))),
+ decoder=dict( # DeformableDetrTransformerDecoder
num_layers=6,
return_intermediate=True,
layer_cfg=dict( # DeformableDetrTransformerDecoderLayer
|
codereview_new_python_data_4256
|
def __init__(self,
self.neg_gt_bboxes = gt_and_ignore_bboxes[
self.neg_assigned_gt_inds.long(), :]
assign_result.gt_inds += 1
- super().__init__(pos_inds, neg_inds, priors, gt_and_ignore_bboxes,
- assign_result, gt_flags, avg_factor_with_neg)
add comments explaining why 1 is added here
def __init__(self,
self.neg_gt_bboxes = gt_and_ignore_bboxes[
self.neg_assigned_gt_inds.long(), :]
+ # To resist the minus 1 operation in `SamplingResult.init()`.
assign_result.gt_inds += 1
+ super().__init__(
+ pos_inds=pos_inds,
+ neg_inds=neg_inds,
+ priors=priors,
+ gt_bboxes=gt_and_ignore_bboxes,
+ assign_result=assign_result,
+ gt_flags=gt_flags,
+ avg_factor_with_neg=avg_factor_with_neg)
|
codereview_new_python_data_4257
|
-_base_ = ['./crowddet-rcnn_r50_fpn_8xb2-30e_crowdhuman.py']
model = dict(roi_head=dict(bbox_head=dict(with_refine=True)))
incorrect file name
+_base_ = './crowddet-rcnn_r50_fpn_8xb2-30e_crowdhuman.py'
model = dict(roi_head=dict(bbox_head=dict(with_refine=True)))
|
codereview_new_python_data_4258
|
# model settings
model = dict(
type='Detectron2Wrapper',
- data_preprocessor=None, # detectron2 process data inside the model
bgr_to_rgb=False,
- d2_detector=dict(
# The settings in `d2_detector` will be merged into default settings
# in detectron2. More details please refer to
# https://github.com/facebookresearch/detectron2/blob/main/detectron2/config/defaults.py # noqa
can we simply use detector rather than d2_detector?
# model settings
model = dict(
type='Detectron2Wrapper',
bgr_to_rgb=False,
+ detector=dict(
# The settings in `d2_detector` will be merged into default settings
# in detectron2. More details please refer to
# https://github.com/facebookresearch/detectron2/blob/main/detectron2/config/defaults.py # noqa
|
codereview_new_python_data_4259
|
class FixShapeResize(Resize):
width (int): width for resizing.
height (int): height for resizing.
Defaults to None.
- pad_val (Number | dict[str, Number], optional) - Padding value for if
the pad_mode is "constant". If it is a single number, the value
to pad the image is the number and to pad the semantic
segmentation map is 255. If it is a dict, it should have the
add `:`
`pad_val (Number | dict[str, Number], optional) :`
class FixShapeResize(Resize):
width (int): width for resizing.
height (int): height for resizing.
Defaults to None.
+ pad_val (Number | dict[str, Number], optional): Padding value for if
the pad_mode is "constant". If it is a single number, the value
to pad the image is the number and to pad the semantic
segmentation map is 255. If it is a dict, it should have the
|
codereview_new_python_data_4260
|
def __repr__(self) -> str:
@TRANSFORMS.register_module()
class FixShapeResize(Resize):
- """Resize images & bbox & seg.
This transform resizes the input image according to ``width`` and
``height``. Bboxes, masks, and seg map are then resized
The summary is too brief; it should briefly explain what FixShape does.
def __repr__(self) -> str:
@TRANSFORMS.register_module()
class FixShapeResize(Resize):
+ """Resize images & bbox & seg to the specified size.
This transform resizes the input image according to ``width`` and
``height``. Bboxes, masks, and seg map are then resized
|
codereview_new_python_data_4261
|
from torch import Tensor
from mmdet.structures import SampleList
from mmdet.utils import InstanceList, OptMultiConfig
from ..test_time_augs import merge_aug_results
-from ..utils import (cat_boxes, filter_scores_and_topk, get_box_tensor,
- get_box_wh, scale_boxes, select_single_mlvl,
unpack_gt_instances)
cat -> cat_boxes
from torch import Tensor
from mmdet.structures import SampleList
+from mmdet.structures.bbox import (cat_boxes, get_box_tensor, get_box_wh,
+ scale_boxes)
from mmdet.utils import InstanceList, OptMultiConfig
from ..test_time_augs import merge_aug_results
+from ..utils import (filter_scores_and_topk, select_single_mlvl,
unpack_gt_instances)
|
codereview_new_python_data_4262
|
rpn_head=dict(
_delete_=True, # ignore the unused old settings
type='FCOSHead',
- num_classes=1, # num_classes = 1 for rpn
in_channels=256,
stacked_convs=4,
feat_channels=256,
We should tell users that if `num_classes` > 1, it will be forcibly set to 1 in the rpn
rpn_head=dict(
_delete_=True, # ignore the unused old settings
type='FCOSHead',
+ # num_classes = 1 for rpn,
+ # if num_classes > 1, it will be set to 1 in rpn head
+ num_classes=1,
in_channels=256,
stacked_convs=4,
feat_channels=256,
|
codereview_new_python_data_4263
|
_delete_=True, # ignore the unused old settings
type='FCOSHead',
# num_classes = 1 for rpn,
- # if num_classes > 1, it will be set to 1 in rpn head
num_classes=1,
in_channels=256,
stacked_convs=4,
if num_classes > 1, it will be set to 1 in xxx automatically.
I do not think the modification is done in rpn head, check the place.
_delete_=True, # ignore the unused old settings
type='FCOSHead',
# num_classes = 1 for rpn,
+ # if num_classes > 1, it will be set to 1 in
+ # TwoStageDetector automatically
num_classes=1,
in_channels=256,
stacked_convs=4,
|
codereview_new_python_data_4264
|
class MultiBranchDataPreprocessor(BaseDataPreprocessor):
In order to reuse `DetDataPreprocessor` for the data
from different branches, the format of multi-branch data
- grouped by branch as below :
.. code-block:: none
{
as below -> is as below
class MultiBranchDataPreprocessor(BaseDataPreprocessor):
In order to reuse `DetDataPreprocessor` for the data
from different branches, the format of multi-branch data
+    grouped by branch is as below:
.. code-block:: none
{
|
codereview_new_python_data_4265
|
def add_pred_to_datasample(self, data_samples: SampleList,
"""
for data_sample, pred_instances in zip(data_samples, results_list):
data_sample.pred_instances = pred_instances
return data_samples
Same question, we should check where converting boxlist to tensor is more reasonable
def add_pred_to_datasample(self, data_samples: SampleList,
"""
for data_sample, pred_instances in zip(data_samples, results_list):
data_sample.pred_instances = pred_instances
+ samplelist_boxlist2tensor(data_samples)
return data_samples
|
codereview_new_python_data_4266
|
class BaseBBoxCoder(metaclass=ABCMeta):
- """Base bounding box coder."""
- # The length of the `encode` function output.
encode_size = 4
def __init__(self, with_boxlist: bool = False, **kwargs):
how about using box_dim directly?
class BaseBBoxCoder(metaclass=ABCMeta):
+ """Base bounding box coder.
+ Args:
+ with_boxlist (bool): Whether to warp decoded boxes with the
+ boxlist data structure. Defaults to False.
+ """
+
+    # The length of the last dimension of the encoded tensor.
encode_size = 4
def __init__(self, with_boxlist: bool = False, **kwargs):
|
codereview_new_python_data_4267
|
class BaseBBoxCoder(metaclass=ABCMeta):
"""Base bounding box coder.
Args:
- with_boxlist (bool): Whether to warp decoded boxes with the
boxlist data structure. Defaults to False.
"""
# The size of the last dimension of the encoded tensor.
encode_size = 4
- def __init__(self, with_boxlist: bool = False, **kwargs):
- self.with_boxlist = with_boxlist
@abstractmethod
def encode(self, bboxes, gt_bboxes):
the same problem with "boxlist"
class BaseBBoxCoder(metaclass=ABCMeta):
"""Base bounding box coder.
Args:
+ use_box_type (bool): Whether to warp decoded boxes with the
boxlist data structure. Defaults to False.
"""
# The size of the last dimension of the encoded tensor.
encode_size = 4
+ def __init__(self, use_box_type: bool = False, **kwargs):
+ self.use_box_type = use_box_type
@abstractmethod
def encode(self, bboxes, gt_bboxes):
|
codereview_new_python_data_4268
|
def inference_detector(
test_pipeline = Compose(new_test_pipeline)
- for m in model.modules():
- assert not isinstance(
- m,
- RoIPool), 'CPU inference with RoIPool is not supported currently.'
result_list = []
for img in imgs:
should also check the model device
def inference_detector(
test_pipeline = Compose(new_test_pipeline)
+ if model.data_preprocessor.device.type == 'cpu':
+ for m in model.modules():
+ assert not isinstance(
+ m, RoIPool
+ ), 'CPU inference with RoIPool is not supported currently.'
result_list = []
for img in imgs:
|
codereview_new_python_data_4269
|
def predict(self,
results_list = self.mask_head.predict(
x, batch_data_samples, rescale=rescale, results_list=results_list)
- # connvert to DetDataSample
- predictions = self.convert_to_datasample(batch_data_samples,
- results_list)
- return predictions
`predictions` -> `results_list`
def predict(self,
results_list = self.mask_head.predict(
x, batch_data_samples, rescale=rescale, results_list=results_list)
+ batch_data_samples = self.add_pred_to_datasample(
+ batch_data_samples, results_list)
+ return batch_data_samples
|
codereview_new_python_data_4270
|
def before_train(self, runner: Runner) -> None:
def after_train_iter(self,
runner: Runner,
batch_idx: int,
- data_batch: dict = None,
outputs: Optional[dict] = None) -> None:
"""Update teacher's parameter every self.interval iterations."""
if (runner.iter + 1) % self.interval != 0:
dict -> Optional[dict]
def before_train(self, runner: Runner) -> None:
def after_train_iter(self,
runner: Runner,
batch_idx: int,
+ data_batch: Optional[dict] = None,
outputs: Optional[dict] = None) -> None:
"""Update teacher's parameter every self.interval iterations."""
if (runner.iter + 1) % self.interval != 0:
|
codereview_new_python_data_4271
|
def parse_data_info(self, raw_data_info: dict) -> Union[dict, List[dict]]:
else:
seg_map_path = None
data_info['img_path'] = img_path
- data_info['file_name'] = img_info['file_name']
data_info['img_id'] = img_info['img_id']
data_info['seg_map_path'] = seg_map_path
data_info['height'] = img_info['height']
file_name should be unnecessary since img_path already indicates that.
def parse_data_info(self, raw_data_info: dict) -> Union[dict, List[dict]]:
else:
seg_map_path = None
data_info['img_path'] = img_path
data_info['img_id'] = img_info['img_id']
data_info['seg_map_path'] = seg_map_path
data_info['height'] = img_info['height']
|
codereview_new_python_data_4272
|
import os
import unittest
-from mmengine import dump
from mmdet.datasets import CityscapesDataset
```suggestion
from mmengine.fileio import dump
```
import os
import unittest
+from mmengine.fileio import dump
from mmdet.datasets import CityscapesDataset
|
codereview_new_python_data_4273
|
import tempfile
import unittest
-from mmengine import dump
from mmdet.datasets.api_wrappers import COCOPanoptic
```suggestion
from mmengine.fileio import dump
```
import tempfile
import unittest
+from mmengine.fileio import dump
from mmdet.datasets.api_wrappers import COCOPanoptic
|
codereview_new_python_data_4274
|
def rotate(self, out_shape, angle, center=None, scale=1.0, border_value=0):
"""
def get_bboxes(self, dst_type='hbb'):
- """Get certain type boxes from masks.
Args:
dst_type: Destination box type.
Add a link to the box type in the docstring.
def rotate(self, out_shape, angle, center=None, scale=1.0, border_value=0):
"""
def get_bboxes(self, dst_type='hbb'):
+ """Get the certain type boxes from masks.
+
+ Please refer to ``mmdet.structures.bbox.box_type`` for more details of
+ the box type.
Args:
dst_type: Destination box type.
|
codereview_new_python_data_4275
|
def test_transform(self):
self.assertTrue((results['gt_bboxes'] == np.array([[20, 20, 40, 40],
[40, 40, 80,
80]])).all())
- self.assertTrue(len(results['gt_masks']) == 2)
- self.assertTrue(len(results['gt_ignore_flags'] == 2))
def test_repr(self):
transform = FilterAnnotations(
maybe we can use `assertEqual`
def test_transform(self):
self.assertTrue((results['gt_bboxes'] == np.array([[20, 20, 40, 40],
[40, 40, 80,
80]])).all())
+ self.assertEqual(len(results['gt_masks']), 2)
+ self.assertEqual(len(results['gt_ignore_flags']), 2)
def test_repr(self):
transform = FilterAnnotations(
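A short sketch of why `assertEqual` is preferable here: on failure it reports both values, while `assertTrue` only reports that the expression was falsy (note that the original `len(results['gt_ignore_flags'] == 2)` put the comparison inside `len`, so it could pass vacuously):
```python
import unittest

class DemoTest(unittest.TestCase):
    def test_lengths(self):
        items = [1, 2, 3]
        self.assertEqual(len(items), 3)   # failure message shows "3 != 2"
        self.assertTrue(len(items) == 3)  # failure only says "False is not true"

if __name__ == '__main__':
    unittest.main()
```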
|
codereview_new_python_data_4276
|
@LOOPS.register_module()
-class MultiValLoop(ValLoop):
- """Multi-loop for validation.
-
- Args:
- runner (Runner): A reference of runner.
- dataloader (Dataloader or dict): A dataloader object or a dict to
- build a dataloader.
- evaluator (Evaluator or dict or list): Used for computing metrics.
- fp16 (bool): Whether to enable fp16 validation. Defaults to
- False.
- """
def run(self):
"""Launch validation for model teacher and student."""
This loop requires the model to have a teacher and a student, so the name is not general enough. Maybe rename to `TeacherStudentValLoop` or `SemiSupValLoop`.
@LOOPS.register_module()
+class TeacherStudentValLoop(ValLoop):
+ """Loop for validation of model teacher and student."""
def run(self):
"""Launch validation for model teacher and student."""
|
codereview_new_python_data_4277
|
-_base_ = ['semi_base_faster-rcnn_r50_caffe_fpn_180k_partial_coco.py']
-
-model = dict(
- type='SoftTeacher',
- semi_train_cfg=dict(
- pseudo_label_initial_score_thr=0.5,
- cls_pseudo_thr=0.9,
- rpn_pseudo_thr=0.9,
- reg_pseudo_thr=0.02,
- jitter_times=10,
- jitter_scale=0.06))
soft_teacher_faster-rcnn_r50_caffe_fpn_180k_partial_coco.py -> faster-rcnn_r50-caffe-fpn_softteacher-180k_coco.py
|
codereview_new_python_data_4278
|
def fast_test_model(config_name, checkpoint, args, logger=None):
runner.test()
-# Sample test whether the train code is correct
def main(args):
# register all modules in mmdet into the registries
register_all_modules(init_default_scope=False)
train -> inference
def fast_test_model(config_name, checkpoint, args, logger=None):
runner.test()
+# Sample test whether the inference code is correct
def main(args):
# register all modules in mmdet into the registries
register_all_modules(init_default_scope=False)
|
codereview_new_python_data_4279
|
def parse_args():
parser.add_argument(
'--auto-resume',
action='store_true',
- help='resume from the latest checkpoint automatically')
parser.add_argument(
'--cfg-options',
nargs='+',
```suggestion
help='resume from the latest checkpoint in the work_dir automatically')
```
def parse_args():
parser.add_argument(
'--auto-resume',
action='store_true',
+ help='resume from the latest checkpoint in the work_dir automatically')
parser.add_argument(
'--cfg-options',
nargs='+',
|
codereview_new_python_data_4280
|
def main():
assert args.out or args.show or args.show_dir, \
('Please specify at least one operation (save or show the results) '
- 'with the argument "--dump", "--show" or "show-dir"')
# load config
cfg = Config.fromfile(args.config)
```suggestion
'with the argument "--out", "--show" or "show-dir"')
```
def main():
assert args.out or args.show or args.show_dir, \
('Please specify at least one operation (save or show the results) '
+ 'with the argument "--out", "--show" or "show-dir"')
# load config
cfg = Config.fromfile(args.config)
|
codereview_new_python_data_4281
|
log_level = 'INFO'
load_from = None
resume = False
-
-# Default setting for scaling LR automatically
-# - `enable` means enable scaling LR automatically
-# or not by default.
-# - `base_batch_size` = (8 GPUs) x (2 samples per GPU).
-auto_scale_lr = dict(enable=False, base_batch_size=16)
This should not be put in default_runtime. Maybe put it in the schedule folder.
log_level = 'INFO'
load_from = None
resume = False
|
codereview_new_python_data_4282
|
class CrowdHumanDataset(BaseDataset):
"""Dataset for CrowdHuman."""
- METAINFO = {'CLASSES': ['person']}
def __init__(self, file_client_args: dict = dict(backend='disk'),
**kwargs):
Suggest adding PALETTE:
```
# PALETTE is a list of color tuples, which is used for visualization.
'PALETTE': [(xxx)]
```
class CrowdHumanDataset(BaseDataset):
"""Dataset for CrowdHuman."""
+ METAINFO = {
+ 'CLASSES': ('person', ),
+ # PALETTE is a list of color tuples, which is used for visualization.
+ 'PALETTE': [(220, 20, 60)]
+ }
def __init__(self, file_client_args: dict = dict(backend='disk'),
**kwargs):
|
codereview_new_python_data_4283
|
class CrowdHumanDataset(BaseDataset):
"""Dataset for CrowdHuman."""
- METAINFO = {'CLASSES': ['person']}
def __init__(self, file_client_args: dict = dict(backend='disk'),
**kwargs):
CLASSES should be a tuple
class CrowdHumanDataset(BaseDataset):
"""Dataset for CrowdHuman."""
+ METAINFO = {
+ 'CLASSES': ('person', ),
+ # PALETTE is a list of color tuples, which is used for visualization.
+ 'PALETTE': [(220, 20, 60)]
+ }
def __init__(self, file_client_args: dict = dict(backend='disk'),
**kwargs):
|
codereview_new_python_data_4284
|
def load_data_list(self) -> List[dict]:
data_list.append(parsed_data_info)
prog_bar.update()
if not self.id_hw_exist:
- # TODO: MMDetection's dataset support multiple file client. If the
- # dataset is not stored on disks, such as AWS or Aliyun OSS, this
- # may cause errors.
with open(self.id_hw_path, 'w', encoding='utf-8') as file:
json.dump(self.id_hw, file, indent=4)
print_log(f'\nsave id_hw in {self.data_root}', level=logging.INFO)
```suggestion
# TODO: support file client
```
def load_data_list(self) -> List[dict]:
data_list.append(parsed_data_info)
prog_bar.update()
if not self.id_hw_exist:
+ # TODO: support file client
with open(self.id_hw_path, 'w', encoding='utf-8') as file:
json.dump(self.id_hw, file, indent=4)
print_log(f'\nsave id_hw in {self.data_root}', level=logging.INFO)
|
codereview_new_python_data_4285
|
class CrowdHumanDataset(BaseDetDataset):
data_root (str): The root directory for
``data_prefix`` and ``ann_file``.
ann_file (str): Annotation file path.
- id_hw_path (str | None):The path of extra image metas for CrowdHuman.
It can be created by CrowdHumanDataset automatically or
by tools/misc/get_crowdhuman_id_hw.py manually.
"""
METAINFO = {
```suggestion
id_hw_path (str, optional):The path of extra image metas for CrowdHuman.
```
class CrowdHumanDataset(BaseDetDataset):
data_root (str): The root directory for
``data_prefix`` and ``ann_file``.
ann_file (str): Annotation file path.
+ id_hw_path (str, None):The path of extra image metas for CrowdHuman.
It can be created by CrowdHumanDataset automatically or
by tools/misc/get_crowdhuman_id_hw.py manually.
+ Defaults to None.
"""
METAINFO = {
|
codereview_new_python_data_4286
|
class CrowdHumanDataset(BaseDetDataset):
data_root (str): The root directory for
``data_prefix`` and ``ann_file``.
ann_file (str): Annotation file path.
- id_hw_path (str | None):The path of extra image metas for CrowdHuman.
It can be created by CrowdHumanDataset automatically or
by tools/misc/get_crowdhuman_id_hw.py manually.
"""
METAINFO = {
```suggestion
by tools/misc/get_crowdhuman_id_hw.py manually.
Defaults to None.
```
class CrowdHumanDataset(BaseDetDataset):
data_root (str): The root directory for
``data_prefix`` and ``ann_file``.
ann_file (str): Annotation file path.
+ id_hw_path (str, None):The path of extra image metas for CrowdHuman.
It can be created by CrowdHumanDataset automatically or
by tools/misc/get_crowdhuman_id_hw.py manually.
+ Defaults to None.
"""
METAINFO = {
|
codereview_new_python_data_4287
|
class CrowdHumanDataset(BaseDetDataset):
data_root (str): The root directory for
``data_prefix`` and ``ann_file``.
ann_file (str): Annotation file path.
- id_hw_path (str, None):The path of extra image metas for CrowdHuman.
- It can be created by CrowdHumanDataset automatically or
- by tools/misc/get_crowdhuman_id_hw.py manually.
- Defaults to None.
"""
METAINFO = {
```suggestion
id_hw_path (str | optional): The path of extra image metas for CrowdHuman.
```
class CrowdHumanDataset(BaseDetDataset):
data_root (str): The root directory for
``data_prefix`` and ``ann_file``.
ann_file (str): Annotation file path.
+ id_hw_path (str | optional):The path of extra image metas
+ for CrowdHuman. It can be created by CrowdHumanDataset
+ automatically or by tools/misc/get_crowdhuman_id_hw.py
+ manually. Defaults to None.
"""
METAINFO = {
|
codereview_new_python_data_4288
|
def calculate_confusion_matrix(dataset,
assert len(dataset) == len(results)
prog_bar = mmcv.ProgressBar(len(results))
for idx, per_img_res in enumerate(results):
- if isinstance(per_img_res, tuple):
- res_bboxes, _ = per_img_res
- else:
- res_bboxes = per_img_res['pred_instances']
gts = dataset.get_data_info(idx)['instances']
analyze_per_img_dets(confusion_matrix, gts, res_bboxes, score_thr,
tp_iou_thr, nms_iou_thr)
there will be no case where the prediction results are a tuple in 3.x; we should clean up the logic.
def calculate_confusion_matrix(dataset,
assert len(dataset) == len(results)
prog_bar = mmcv.ProgressBar(len(results))
for idx, per_img_res in enumerate(results):
+ res_bboxes = per_img_res['pred_instances']
gts = dataset.get_data_info(idx)['instances']
analyze_per_img_dets(confusion_matrix, gts, res_bboxes, score_thr,
tp_iou_thr, nms_iou_thr)
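A minimal sketch of the dict-only result format this cleanup assumes: in MMDet 3.x every saved per-image result is a dict with a `pred_instances` entry, so the tuple branch is dead code. The `results.pkl` path is hypothetical.

```python
from mmengine.fileio import load

results = load('results.pkl')  # list of dicts, one per image
for per_img_res in results:
    # In 3.x the tuple format no longer occurs; each entry is a dict.
    res_bboxes = per_img_res['pred_instances']
```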
|
codereview_new_python_data_4289
|
def compute_metrics(self, results: list) -> Dict[str, float]:
pred_json = load(json_filename)
pred_json = dict(
(el['image_id'], el) for el in pred_json['annotations'])
# match the gt_anns and pred_anns in the same image
matched_annotations_list = []
for gt_ann in gt_json:
It is unnecessary to delete this blank line.
def compute_metrics(self, results: list) -> Dict[str, float]:
pred_json = load(json_filename)
pred_json = dict(
(el['image_id'], el) for el in pred_json['annotations'])
+
# match the gt_anns and pred_anns in the same image
matched_annotations_list = []
for gt_ann in gt_json:
|
codereview_new_python_data_4290
|
def main():
'"auto_scale_lr.enable" or '
'"auto_scale_lr.base_batch_size" in your'
' configuration file. Please update all the '
- 'configuration files to mmdet >= 2.25.1.')
# set multi-process settings
setup_multi_processes(cfg)
We can keep this as 2.24.1, since people can find the auto_scale_lr config in 2.24.1.
def main():
'"auto_scale_lr.enable" or '
'"auto_scale_lr.base_batch_size" in your'
' configuration file. Please update all the '
+ 'configuration files to mmdet >= 2.24.1.')
# set multi-process settings
setup_multi_processes(cfg)
|
codereview_new_python_data_4291
|
def main():
# init visualizer
visualizer = VISUALIZERS.build(model.cfg.visualizer)
visualizer.dataset_meta = model.dataset_meta
video_reader = mmcv.VideoReader(args.video)
Add a comment that the dataset_meta is loaded from the checkpoint and then passed to the model in `init_detector`.
def main():
# init visualizer
visualizer = VISUALIZERS.build(model.cfg.visualizer)
+ # the dataset_meta is loaded from the checkpoint and
+ # then passed to the model in init_detector
visualizer.dataset_meta = model.dataset_meta
video_reader = mmcv.VideoReader(args.video)
|
codereview_new_python_data_4292
|
def _draw_instances(self, image: np.ndarray, instances: ['InstanceData'],
self.draw_binary_masks(masks, colors=colors, alphas=self.alpha)
if 'bboxes' not in instances or instances.bboxes.sum() == 0:
- # e.g. SOLO
areas = []
positions = []
for mask in masks:
Does `instances.bboxes.sum() == 0` represent dummy bboxes? Maybe add a comment.
def _draw_instances(self, image: np.ndarray, instances: ['InstanceData'],
self.draw_binary_masks(masks, colors=colors, alphas=self.alpha)
if 'bboxes' not in instances or instances.bboxes.sum() == 0:
+ # instances.bboxes.sum()==0 represents dummy bboxes.
+ # A typical example is SOLO, which does not have a bbox branch.
areas = []
positions = []
for mask in masks:
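A minimal sketch of the dummy-bbox case this comment describes, assuming predictions are packed in mmengine's `InstanceData`; the tensor values are made up.

```python
import torch
from mmengine.structures import InstanceData

instances = InstanceData()
instances.masks = torch.zeros((2, 4, 4), dtype=torch.bool)
# SOLO-style heads have no bbox branch, so bboxes may be all-zero placeholders.
instances.bboxes = torch.zeros((2, 4))

# The condition the visualizer uses to fall back to mask-based label positions.
print('bboxes' not in instances or instances.bboxes.sum() == 0)  # tensor(True)
```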
|
codereview_new_python_data_4293
|
def process(self, data_batch: Sequence[dict],
predictions (Sequence[dict]): A batch of outputs from
the model.
"""
- # If ``self.tmp_dir`` is none, it will compute pq_stats here,
- # otherwise, it will save gt and predictions to self.results.
if self.tmp_dir is None:
-
self._process_gt_and_predictions(data_batch, predictions)
else:
self._compute_batch_pq_stats(data_batch, predictions)
The comment seems to be different from the code.
def process(self, data_batch: Sequence[dict],
predictions (Sequence[dict]): A batch of outputs from
the model.
"""
+ # If ``self.tmp_dir`` is none, it will save gt and predictions to
+ # self.results, otherwise, it will compute pq_stats here.
if self.tmp_dir is None:
self._process_gt_and_predictions(data_batch, predictions)
else:
self._compute_batch_pq_stats(data_batch, predictions)
|
codereview_new_python_data_4294
|
-_base_ = '../panoptic_fpn/panoptic_fpn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='Res2Net',
This seems to be a new config for panoptic_fpn. Please add its performance to the panoptic_fpn README.md.
+_base_ = './panoptic_fpn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='Res2Net',
|
codereview_new_python_data_4295
|
-_base_ = '../panoptic_fpn/panoptic_fpn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='Res2Net',
You can directly use `_base_ = './panoptic_fpn_r50_fpn_1x_coco.py'`.
+_base_ = './panoptic_fpn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='Res2Net',
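A minimal sketch of how the relative `_base_` path is resolved, assuming mmengine's `Config`; the Res2Net config file name below is hypothetical.

```python
from mmengine.config import Config

# `_base_` paths are resolved relative to the file that declares them, so a
# config living next to panoptic_fpn_r50_fpn_1x_coco.py can simply use './'.
cfg = Config.fromfile(
    'configs/panoptic_fpn/panoptic_fpn_r2_101_fpn_1x_coco.py')
print(cfg.model.backbone.type)  # 'Res2Net' after merging with the base config
```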
|
codereview_new_python_data_4296
|
def forward(self, x):
mode='bicubic',
align_corners=False).flatten(2).transpose(1, 2)
else:
- absolute_pos_embed = self.absolute_pos_embed\
- .flatten(2).transpose(1, 2)
x = x + absolute_pos_embed
x = self.drop_after_pos(x)
'\' is unnecessary here.
def forward(self, x):
mode='bicubic',
align_corners=False).flatten(2).transpose(1, 2)
else:
+ absolute_pos_embed = self.absolute_pos_embed.flatten(
+ 2).transpose(1, 2)
x = x + absolute_pos_embed
x = self.drop_after_pos(x)
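A minimal sketch of what `flatten(2).transpose(1, 2)` does here: it turns a `(B, C, H, W)` positional-embedding map into `(B, H*W, C)` tokens. The shapes below are made up.

```python
import torch

absolute_pos_embed = torch.randn(2, 96, 7, 7)           # (B, C, H, W)
tokens = absolute_pos_embed.flatten(2).transpose(1, 2)  # (B, H*W, C)
print(tokens.shape)  # torch.Size([2, 49, 96])
```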
|
codereview_new_python_data_4382
|
def visit_less_than_or_equal(self, term: BoundTerm[L], literal: Literal[L]) -> L
return [(term.ref().field.name, "<=", self._cast_if_necessary(term.ref().field.field_type, literal.value))]
def visit_starts_with(self, term: BoundTerm[L], literal: Literal[L]) -> List[Tuple[str, str, Any]]:
- return [(term.ref().field.name, "starts_with", self._cast_if_necessary(term.ref().field.field_type, literal.value))]
def visit_not_starts_with(self, term: BoundTerm[L], literal: Literal[L]) -> List[Tuple[str, str, Any]]:
return [(term.ref().field.name, "not starts_with", self._cast_if_necessary(term.ref().field.field_type, literal.value))]
I don't think that this is a supported predicate: https://github.com/apache/arrow/blob/45918a90a6ca1cf3fd67c256a7d6a240249e555a/python/pyarrow/parquet/core.py#L126-L131
We can just return an empty predicate for now:
```suggestion
return []
```
This visitor isn't used right now but might be in the future for Dask to implement page skipping. Keeping it empty will mean that we don't use it for skipping pages, but we have to filter on a row level afterward anyway.
def visit_less_than_or_equal(self, term: BoundTerm[L], literal: Literal[L]) -> L
return [(term.ref().field.name, "<=", self._cast_if_necessary(term.ref().field.field_type, literal.value))]
def visit_starts_with(self, term: BoundTerm[L], literal: Literal[L]) -> List[Tuple[str, str, Any]]:
+ return []
def visit_not_starts_with(self, term: BoundTerm[L], literal: Literal[L]) -> List[Tuple[str, str, Any]]:
return [(term.ref().field.name, "not starts_with", self._cast_if_necessary(term.ref().field.field_type, literal.value))]
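A minimal sketch of the predicate format these visitors emit, assuming pyarrow's parquet reader; the file name and column names are made up.

```python
import pyarrow.parquet as pq

# Supported ops include =, ==, !=, <, >, <=, >=, in, not in -- there is no
# starts_with, which is why that visitor now returns an empty predicate.
table = pq.read_table(
    'data.parquet',
    filters=[('x', '<=', 100), ('y', 'in', {1, 2, 3})],
)
```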
|
codereview_new_python_data_4383
|
def test_and_or_with_parens() -> None:
def test_starts_with() -> None:
- assert StartsWith("x", "data") == parser.parse("x starts_with 'data'")
assert StartsWith("x", "data") == parser.parse("x STARTS_WITH 'data'")
I don't think `starts_with` is common in SQL. How about:
```suggestion
assert StartsWith("x", "data") == parser.parse("x LIKE 'data*'")
```
def test_and_or_with_parens() -> None:
def test_starts_with() -> None:
+ assert StartsWith("x", "data") == parser.parse("x LIKE 'data*'")
assert StartsWith("x", "data") == parser.parse("x STARTS_WITH 'data'")
|
codereview_new_python_data_4385
|
def _(self, type_var: DecimalType) -> Literal[Decimal]:
@to.register(BooleanType)
def _(self, type_var: BooleanType) -> Literal[bool]:
if self.value.upper() in ['TRUE', 'FALSE']:
- return BooleanLiteral(True if self.value.upper() == 'TRUE' else False)
else:
raise ValueError(f"Could not convert {self.value} into a {type_var}")
```suggestion
return BooleanLiteral(self.value.upper() == 'TRUE')
```
def _(self, type_var: DecimalType) -> Literal[Decimal]:
@to.register(BooleanType)
def _(self, type_var: BooleanType) -> Literal[bool]:
if self.value.upper() in ['TRUE', 'FALSE']:
+ return BooleanLiteral(self.value.upper() == 'TRUE')
else:
raise ValueError(f"Could not convert {self.value} into a {type_var}")
|
codereview_new_python_data_4386
|
def _(self, type_var: DecimalType) -> Literal[Decimal]:
@to.register(BooleanType)
def _(self, type_var: BooleanType) -> Literal[bool]:
if self.value.upper() in ['TRUE', 'FALSE']:
- return BooleanLiteral(True if self.value.upper() == 'TRUE' else False)
else:
raise ValueError(f"Could not convert {self.value} into a {type_var}")
Can we introduce a variable `value_upper` so we call `.upper()` just once?
def _(self, type_var: DecimalType) -> Literal[Decimal]:
@to.register(BooleanType)
def _(self, type_var: BooleanType) -> Literal[bool]:
if self.value.upper() in ['TRUE', 'FALSE']:
+ return BooleanLiteral(self.value.upper() == 'TRUE')
else:
raise ValueError(f"Could not convert {self.value} into a {type_var}")
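A minimal sketch of the refactor the reviewer suggests, with `.upper()` hoisted into a local so it is computed only once; `BooleanLiteral` below is a stand-in for the real class.

```python
class BooleanLiteral:  # stand-in for pyiceberg's BooleanLiteral
    def __init__(self, value: bool) -> None:
        self.value = value


def to_boolean_literal(value: str) -> BooleanLiteral:
    value_upper = value.upper()
    if value_upper in ('TRUE', 'FALSE'):
        return BooleanLiteral(value_upper == 'TRUE')
    raise ValueError(f"Could not convert {value} into a boolean")
```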
|
codereview_new_python_data_4398
|
def update_dictionary_end_frame(array_simulation_particle_coordinates, dictionar
cube_counter = 0
for key, cube in dictionary_cube_data_this_core.items():
# if there were no particles in the cube in the first frame, then set dx,dy,dz each to 0
- if cube['centroid_of_particles_first_frame'] == None:
cube['dx'] = 0
cube['dy'] = 0
cube['dz'] = 0
That took me far too long to narrow down. It looks like the issue here was that we are at times doing an array vs. str comparison, which until 1.24 led to a scalar return but now returns an array of bools instead, which in turn messes up this if/else call.
Doing a comparison against `None` is probably just safer and more Pythonic here?
P.S. Weirdly enough, I think this should have thrown an error rather than silently failing, i.e. it _should_ have complained about the truth of an array being ambiguous... but it didn't 😕
def update_dictionary_end_frame(array_simulation_particle_coordinates, dictionar
cube_counter = 0
for key, cube in dictionary_cube_data_this_core.items():
# if there were no particles in the cube in the first frame, then set dx,dy,dz each to 0
+ if cube['centroid_of_particles_first_frame'] is None:
cube['dx'] = 0
cube['dy'] = 0
cube['dz'] = 0
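A minimal sketch of why the identity check is safer; the array contents are made up.

```python
import numpy as np

centroid = np.array([0.5, 0.5, 0.5])

# Equality against None is element-wise, so using it in an `if` raises
# "The truth value of an array with more than one element is ambiguous".
print(centroid == None)  # [False False False]
print(centroid is None)  # False -- a single bool, always safe in an `if`
```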
|