Project Design Workflow: Multi-Object Detection and Tracking System
Objective
Achieve accurate detection and continuous tracking of drone targets in video and images, focusing on two hard problems: small-target detection and confusion between multiple targets. Detection of birds, helicopters, and fixed-wing aircraft is also included. Target scenarios: security surveillance, ecological monitoring, and low-altitude airspace management.
Technical Pipeline
Data layer: collect multi-scene data covering the 4 target classes (airports, nature reserves, urban low-altitude airspace, etc.; this part is flexible: combine Hu Wenjun's dataset with the previously introduced datasets on GitHub).
Detection layer: build a multi-class detection model based on an image recognition algorithm, with small-target detection specifically optimized (detailed below).
Tracking layer: integrate the BoT-SORT or ByteTrack algorithm to associate detections into target trajectories.
Config 1 (BoT-SORT):
tracker_type: botsort
track_high_thresh: 0.25  # threshold for the first association
track_low_thresh: 0.1  # threshold for the second association
new_track_thresh: 0.25  # threshold to initialize a new track
track_buffer: 30  # frames to keep lost tracks before removal
match_thresh: 0.8  # threshold for matching tracks
fuse_score: True
# min_box_area: 10
# BoT-SORT settings
gmc_method: sparseOptFlow  # global motion compensation method
# ReID-related thresholds (ReID model not supported yet)
proximity_thresh: 0.5
appearance_thresh: 0.25
with_reid: False
 
Config 2 (ByteTrack):
tracker_type: bytetrack # tracker type, ['botsort', 'bytetrack']
track_high_thresh: 0.25 # threshold for the first association
track_low_thresh: 0.1 # threshold for the second association
new_track_thresh: 0.25 # threshold for init new track if the detection does not match any tracks
track_buffer: 30 # buffer to calculate the time when to remove tracks
match_thresh: 0.8 # threshold for matching tracks
fuse_score: True
Conclusion: ByteTrack was selected. It offers good real-time performance (live video feedback) while detection accuracy remains high.
Application layer: accept a real-time video stream as input and output visualized results with trajectories overlaid.
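To make the threshold settings above concrete, the sketch below illustrates ByteTrack's core idea: a two-stage association in which high-score detections are matched first, and low-score detections (between track_low_thresh and track_high_thresh) are then used to rescue tracks that would otherwise be lost. This is a simplified pure-Python illustration with greedy IoU matching; the real tracker additionally uses Kalman-filter prediction and Hungarian assignment.

```python
# Simplified sketch of ByteTrack's two-stage association (illustration only).

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, high_thresh=0.25, low_thresh=0.1, match_thresh=0.8):
    """tracks: {id: box}; detections: [(box, score)]. Returns updated {id: box}."""
    high = [d for d in detections if d[1] >= high_thresh]
    low = [d for d in detections if low_thresh <= d[1] < high_thresh]
    updated, unmatched = {}, dict(tracks)
    for dets in (high, low):  # stage 1: high-score dets; stage 2: low-score dets
        for box, _ in dets:
            best_id, best_iou = None, 0.0
            for tid, tbox in unmatched.items():
                v = iou(box, tbox)
                if v > best_iou:
                    best_id, best_iou = tid, v
            # match_thresh in the YAML is a distance threshold (1 - IoU)
            if best_id is not None and (1 - best_iou) <= match_thresh:
                updated[best_id] = box
                del unmatched[best_id]
    return updated
```

Detections scoring below track_low_thresh are discarded outright, which is why raising that value suppresses noisy low-confidence boxes at the cost of dropping tracks earlier.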

Algorithm Comparison
Custom YOLOv8 Model Configuration
1. Increased depth and width
2. Extra ultra-shallow detection layer
3. Backbone: layer-wise attention modules

Configuration:
depth_multiple: 0.6  # moderately increase depth
width_multiple: 0.8  # moderately increase width

# Backbone: enhanced small-target feature extraction (attention modules + adjusted levels)
backbone:
  - [-1, 1, Conv, [64, 3, 2]]   # 0-P2: 80x80
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P3: 40x40
  - [-1, 3, C2f, [128, True]]   # 2
  - [-1, 1, CBAM, [128]]        # 3: CBAM at P3 (channel + spatial attention for small targets)
  - [-1, 1, Conv, [256, 3, 2]]  # 4-P4: 20x20
  - [-1, 6, C2f, [256, True]]   # 5
  - [-1, 1, ECA, [256]]         # 6: ECA at P4 (lightweight channel attention)
  - [-1, 1, Conv, [512, 3, 2]]  # 7-P5: 10x10
  - [-1, 6, C2f, [512, True]]   # 8
  - [-1, 1, SE, [512]]          # 9: SE at P5 (channel attention)
  - [-1, 1, Conv, [1024, 3, 2]] # 10-P6: 5x5
  - [-1, 3, C2f, [1024, True]]  # 11
  - [-1, 1, SPPF, [1024, 5]]    # 12

# Neck: BiFPN weighted feature fusion (strengthens small-target feature propagation)
neck:
  - [[-4, -3, -2, -1], 1, BiFPN, [256]]  # 13: fuse P3-P6 features
  - [-1, 1, nn.Upsample, [None, 2, 'bilinear']]  # 14: upsample to 80x80 (P2)

# Head: added ultra-shallow P2 detection layer + custom anchors (covering small targets)
head:
  - [[14, 3, 6, 9], 1, Detect, [nc, [  # detection levels: P2 (80x80), P3 (40x40), P4 (20x20), P5 (10x10)
      [6,8, 10,12, 15,18],    # P2 anchors (small targets, ~8-24 px in the original image)
      [20,25, 30,40, 45,60],   # P3 anchors (medium targets, ~24-48 px)
      [70,90, 110,150, 180,240], # P4 anchors (large targets, ~48-96 px)
      [250,300, 350,450, 500,600]  # P5 anchors (extra-large targets, >96 px)
    ]]
  ]
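For reference, the ECA block named in the backbone above can be implemented in a few lines. This is a minimal sketch with the 1D-conv kernel size fixed at 3, whereas the ECA paper derives it adaptively from the channel count:

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: a 1D conv over pooled channel descriptors,
    so the cost is O(C*k) rather than the O(C^2/r) of an SE bottleneck."""

    def __init__(self, channels, k=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        y = self.pool(x)                       # (B, C, H, W) -> (B, C, 1, 1)
        y = y.squeeze(-1).transpose(-1, -2)    # (B, 1, C)
        y = self.conv(y)                       # local cross-channel interaction
        y = y.transpose(-1, -2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * torch.sigmoid(y)            # channel-wise reweighting
```

The attention map multiplies the input element-wise, so the module is shape-preserving and can be dropped between any two backbone stages.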

Results (see Figure 1)

YOLOv11
Its architectural improvements (C3k2, C2PSA) and performance gains (small-target mAP +3.5%) make it well suited to the drone detection requirements of this project.

Configuration:
scales:
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs
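To read this table: each nominal channel count in the YAML is capped at max_channels, multiplied by the width factor, and rounded up to a multiple of 8, following Ultralytics' make_divisible convention. A quick sketch:

```python
import math

def make_divisible(x, divisor=8):
    # round up to the nearest multiple of `divisor` (Ultralytics convention)
    return math.ceil(x / divisor) * divisor

def scaled_channels(c, width, max_channels):
    """Map a nominal YAML channel count to the channels actually built."""
    return make_divisible(min(c, max_channels) * width)

# e.g. the 1024-channel SPPF layer under the "n" scale (width 0.25, max 1024):
print(scaled_channels(1024, 0.25, 1024))  # 256
# and under the "m" scale (width 1.00, max_channels 512):
print(scaled_channels(1024, 1.00, 512))   # 512
```

This is why the m/l/x scales list max_channels 512: the widest layers are clamped even though the width factor is 1.0 or more.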


backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)

Results (see Figure 2)

YOLOv8 with Deformable Convolution (DCNv2)
By dynamically adjusting the sampling positions of the convolution kernel, DCNv2 breaks the fixed receptive field of standard convolution and adapts to each target's shape and spatial distribution. It is particularly effective for small targets and targets with large pose variation: the dynamic sampling can "steer around" background clutter and extract target features more precisely, improving detection robustness.

DCNv2 module:
# DCNv2 module definitions (requires torch, torchvision, and the Ultralytics Conv block)
import math

import torch
import torch.nn as nn

from ultralytics.nn.modules import Conv
class DCNv2(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=1, dilation=1, groups=1, deformable_groups=1):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = (kernel_size, kernel_size)
        self.stride = (stride, stride)
        self.padding = (padding, padding)
        self.dilation = (dilation, dilation)
        self.groups = groups
        self.deformable_groups = deformable_groups

        self.weight = nn.Parameter(torch.empty(out_channels, in_channels, *self.kernel_size))
        self.bias = nn.Parameter(torch.empty(out_channels))
        
        out_channels_offset_mask = self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1]
        self.conv_offset_mask = nn.Conv2d(
            self.in_channels, out_channels_offset_mask,
            kernel_size=self.kernel_size, stride=self.stride,
            padding=self.padding, bias=True)
        
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = Conv.default_act
        self.reset_parameters()

    def forward(self, x):
        offset_mask = self.conv_offset_mask(x)
        o1, o2, mask = torch.chunk(offset_mask, 3, dim=1)
        offset = torch.cat((o1, o2), dim=1)
        mask = torch.sigmoid(mask)
        x = torch.ops.torchvision.deform_conv2d(
            x, self.weight, offset, mask, self.bias,
            self.stride[0], self.stride[1],
            self.padding[0], self.padding[1],
            self.dilation[0], self.dilation[1],
            self.groups, self.deformable_groups, True)
        return self.act(self.bn(x))

    def reset_parameters(self):
        n = self.in_channels
        for k in self.kernel_size: n *= k
        std = 1. / math.sqrt(n)
        self.weight.data.uniform_(-std, std)
        self.bias.data.zero_()
        self.conv_offset_mask.weight.data.zero_()
        self.conv_offset_mask.bias.data.zero_()

class Bottleneck_DCN(nn.Module):
    def __init__(self, c1, c2, shortcut=True, g=1, k=(3, 3), e=0.5):
        super().__init__()
        c_ = int(c2 * e)
        if k[0] == 3:
            self.cv1 = DCNv2(c1, c_, k[0], 1)
        else:
            self.cv1 = Conv(c1, c_, k[0], 1)
        if k[1] == 3:
            self.cv2 = DCNv2(c_, c2, k[1], 1, groups=g)
        else:
            self.cv2 = Conv(c_, c2, k[1], 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))

class C2f_DCN(nn.Module):
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):
        super().__init__()
        # NOTE: a width multiplier is hardcoded here, shrinking c2 on top of any
        # YAML-level width scaling; adjust if channels come out smaller than expected
        width_multiple = 0.25
        c2 = max(1, int(c2 * width_multiple))
        self.c = max(1, int(c2 * e))
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)
        self.m = nn.ModuleList(Bottleneck_DCN(self.c, self.c, shortcut, g, k=(1, 3), e=1.0) for _ in range(n))


    @classmethod
    def from_yaml(cls, model, ch, args):
        if isinstance(args, (list, tuple)):
            args = list(args) + [1, False, 1, 0.5][len(args):]
        else:
            args = [args, 1, False, 1, 0.5]
            
        return cls(
            ch,  # c1
            args[0],  # c2
            n=args[1] if len(args) > 1 else 1,
            shortcut=args[2] if len(args) > 2 else False,
            g=args[3] if len(args) > 3 else 1,
            e=args[4] if len(args) > 4 else 0.5
        )

    def forward(self, x):
        y = list(torch.chunk(self.cv1(x), 2, dim=1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, dim=1))
Results (see Figure 3)
