{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# YOLO v3\n",
    "## 1. backbone\n",
    "在基本的图像特征提取方面，YOLO3采用了称之为Darknet-53的网络结构（含有53个卷积层），它借鉴了残差网络residual network的做法，在一些层之间设置了快捷链路（shortcut connections）。\n",
    "## 2. 利用多尺度特征进行对象检测\n",
    "YOLO2曾采用passthrough结构来检测细粒度特征，在YOLO3更进一步采用了3个不同尺度的特征图来进行对象检测。\n",
    "## 3. 9种尺度的先验框\n",
    "随着输出的特征图的数量和尺度的变化，先验框的尺寸也需要相应的调整。YOLO2已经开始采用K-means聚类得到先验框的尺寸，YOLO3延续了这种方法，为每种下采样尺度设定3种先验框，总共聚类出9种尺寸的先验框。分配上，在最小的13*13特征图上（有最大的感受野）应用较大的先验框(116x90)，(156x198)，(373x326)，适合检测较大的对象。中等的26*26特征图上（中等感受野）应用中等的先验框(30x61)，(62x45)，(59x119)，适合检测中等大小的对象。较大的52*52特征图上（较小的感受野）应用较小的先验框(10x13)，(16x30)，(33x23)，适合检测较小的对象。\n",
    "## 3. 对象分类softmax改成logistic\n",
    "预测对象类别时不使用softmax，改成使用logistic的输出进行预测。这样能够支持多标签对象（比如一个人有Woman 和 Person两个标签）。在处理概率类别时，不再使用softmax而改用sigmoid（COCO数据集上单标签即可）。训练时使用sigmoid+binary cross-entropy\n",
    "\n",
    "参考：https://blog.paperspace.com/how-to-implement-a-yolo-v3-object-detector-from-scratch-in-pytorch-part-3/"
   ],
   "metadata": {}
  },
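  {
   "cell_type": "markdown",
   "source": [
    "A quick, self-contained illustration of point 4 (the logits below are made up, and plain `math` is used so the cell runs on its own): a softmax forces class scores to compete for a total of 1, so an object that is both a Woman and a Person cannot score high on both labels, while independent sigmoids can."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import math\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + math.exp(-z))\n",
    "\n",
    "def softmax(zs):\n",
    "    exps = [math.exp(z) for z in zs]\n",
    "    total = sum(exps)\n",
    "    return [e / total for e in exps]\n",
    "\n",
    "# hypothetical logits for classes [person, woman, car];\n",
    "# the object is both a person and a woman\n",
    "logits = [4.0, 3.0, -4.0]\n",
    "\n",
    "# softmax: scores must sum to 1, so the two true labels compete\n",
    "print([round(p, 3) for p in softmax(logits)])   # [0.731, 0.269, 0.0]\n",
    "# sigmoid: each label is scored independently, both can be near 1\n",
    "print([round(sigmoid(z), 3) for z in logits])   # [0.982, 0.953, 0.018]"
   ],
   "outputs": [],
   "metadata": {}
  },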
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import torch\n",
    "import numpy as np\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import cv2\n",
    "import torchvision.transforms as transforms\n",
    "from torch.autograd import Variable\n",
    "from collections import OrderedDict\n",
    "import math"
   ],
   "outputs": [],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "source": [
    "class Config:\n",
    "    def __init__(self) -> None:\n",
    "        self.anchors = [[[116, 90], [156, 198], [373, 326]],\n",
    "                        [[30, 61], [62, 45], [59, 119]],\n",
    "                        [[10, 13], [16, 30], [33, 23]]]\n",
    "        self.classes = 80\n",
    "config = Config()"
   ],
   "outputs": [],
   "metadata": {}
  },
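  {
   "cell_type": "markdown",
   "source": [
    "These 9 anchors come from k-means clustering of training-box (w, h) sizes with an IoU-based distance (1 - IoU), as described above. Below is a minimal, self-contained sketch of that clustering on made-up box sizes; the helper names and the toy data are illustrative, not the actual COCO procedure."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import numpy as np\n",
    "\n",
    "def iou_wh(boxes, anchors):\n",
    "    # IoU between (w, h) pairs, treating all boxes as sharing one center\n",
    "    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])\n",
    "             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))\n",
    "    box_area = (boxes[:, 0] * boxes[:, 1])[:, None]\n",
    "    anchor_area = (anchors[:, 0] * anchors[:, 1])[None, :]\n",
    "    return inter / (box_area + anchor_area - inter)\n",
    "\n",
    "def kmeans_anchors(boxes, k, iters=50, seed=0):\n",
    "    rng = np.random.default_rng(seed)\n",
    "    anchors = boxes[rng.choice(len(boxes), k, replace=False)]\n",
    "    for _ in range(iters):\n",
    "        # assign each box to the anchor with the highest IoU (distance = 1 - IoU)\n",
    "        assign = np.argmax(iou_wh(boxes, anchors), axis=1)\n",
    "        for j in range(k):\n",
    "            if np.any(assign == j):\n",
    "                anchors[j] = boxes[assign == j].mean(axis=0)\n",
    "    return anchors\n",
    "\n",
    "# made-up (w, h) training-box sizes clustered around two scales\n",
    "rng = np.random.default_rng(1)\n",
    "boxes = np.vstack([20.0 + rng.normal(size=(30, 2)),\n",
    "                   200.0 + rng.normal(size=(30, 2))])\n",
    "print(np.sort(kmeans_anchors(boxes, k=2), axis=0))"
   ],
   "outputs": [],
   "metadata": {}
  },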
  {
   "cell_type": "markdown",
   "source": [
    "# 网络结构"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "## 基础结构"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "class BasicBlock(nn.Module):\n",
    "    def __init__(self, inplanes, planes):\n",
    "        super(BasicBlock, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(inplanes, planes[0], kernel_size=1,\n",
    "                               stride=1, padding=0, bias=False)\n",
    "        self.bn1 = nn.BatchNorm2d(planes[0])\n",
    "        self.relu1 = nn.LeakyReLU(0.1)\n",
    "        \n",
    "        self.conv2 = nn.Conv2d(planes[0], planes[1], kernel_size=3,\n",
    "                               stride=1, padding=1, bias=False)\n",
    "        self.bn2 = nn.BatchNorm2d(planes[1])\n",
    "        self.relu2 = nn.LeakyReLU(0.1)\n",
    "\n",
    "    def forward(self, x):\n",
    "        residual = x\n",
    "\n",
    "        out = self.conv1(x)\n",
    "        out = self.bn1(out)\n",
    "        out = self.relu1(out)\n",
    "\n",
    "        out = self.conv2(out)\n",
    "        out = self.bn2(out)\n",
    "        out = self.relu2(out)\n",
    "\n",
    "        out += residual\n",
    "        return out"
   ],
   "outputs": [],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Darknet"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "class DarkNet(nn.Module):\n",
    "    def __init__(self, layers):\n",
    "        super(DarkNet, self).__init__()\n",
    "        self.inplanes = 32\n",
    "        self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)\n",
    "        self.bn1 = nn.BatchNorm2d(self.inplanes)\n",
    "        self.relu1 = nn.LeakyReLU(0.1)\n",
    "\n",
    "        self.layer1 = self._make_layer([32, 64], layers[0])     #32--64--32--64\n",
    "        self.layer2 = self._make_layer([64, 128], layers[1])    #64--128--64--128\n",
    "        self.layer3 = self._make_layer([128, 256], layers[2])\n",
    "        self.layer4 = self._make_layer([256, 512], layers[3])\n",
    "        self.layer5 = self._make_layer([512, 1024], layers[4])\n",
    "\n",
    "        self.layers_out_filters = [64, 128, 256, 512, 1024]\n",
    "\n",
    "        for m in self.modules():\n",
    "            if isinstance(m, nn.Conv2d):\n",
    "                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n",
    "                m.weight.data.normal_(0, math.sqrt(2. / n))\n",
    "            elif isinstance(m, nn.BatchNorm2d):\n",
    "                m.weight.data.fill_(1)\n",
    "                m.bias.data.zero_()\n",
    "\n",
    "    def _make_layer(self, planes, blocks):\n",
    "        layers = []\n",
    "        # downsamples\n",
    "        layers.append((\"ds_conv\", nn.Conv2d(self.inplanes, planes[1], kernel_size=3, stride=2, padding=1, bias=False)))\n",
    "        layers.append((\"ds_bn\", nn.BatchNorm2d(planes[1])))\n",
    "        layers.append((\"ds_relu\", nn.LeakyReLU(0.1)))\n",
    "        # blocks\n",
    "        self.inplanes = planes[1]\n",
    "        for i in range(0, blocks):\n",
    "            layers.append((\"residual_{}\".format(i), BasicBlock(self.inplanes, planes)))\n",
    "        return nn.Sequential(OrderedDict(layers))   #按照放入顺序排序，且构建为字典\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.conv1(x)\n",
    "        x = self.bn1(x)\n",
    "        x = self.relu1(x)\n",
    "\n",
    "        x = self.layer1(x)\n",
    "        x = self.layer2(x)\n",
    "        out3 = self.layer3(x)\n",
    "        out4 = self.layer4(out3)\n",
    "        out5 = self.layer5(out4)\n",
    "\n",
    "        return out3, out4, out5"
   ],
   "outputs": [],
   "metadata": {}
  },
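  {
   "cell_type": "markdown",
   "source": [
    "The backbone halves the spatial resolution once in each of `layer1`..`layer5` (the stride-2 downsample conv), so `out3`, `out4`, `out5` have strides 8, 16, 32 relative to the input. A quick arithmetic check for common input sizes (pure Python, no network instantiation needed):"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "def darknet_output_sizes(img_size=416):\n",
    "    # conv1 keeps the resolution; each of the 5 stages halves it with a stride-2 conv\n",
    "    sizes = []\n",
    "    s = img_size\n",
    "    for _ in range(5):\n",
    "        s //= 2\n",
    "        sizes.append(s)\n",
    "    # the layer3 / layer4 / layer5 outputs feed the three detection scales\n",
    "    return sizes[2], sizes[3], sizes[4]\n",
    "\n",
    "print(darknet_output_sizes(416))   # (52, 26, 13)\n",
    "print(darknet_output_sizes(608))   # (76, 38, 19)"
   ],
   "outputs": [],
   "metadata": {}
  },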
  {
   "cell_type": "markdown",
   "source": [
    "## YOLO-v3"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "class YOLOv3(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(YOLOv3, self).__init__()\n",
    "        self.backbone = DarkNet([1, 2, 8, 8, 4])\n",
    "        _out_filters = self.backbone.layers_out_filters\n",
    "        #  embedding0\n",
    "        final_out_filter0 = len(config.anchors[0]) * (5 + config.classes)\n",
    "        self.embedding0 = self._make_embedding([512, 1024], _out_filters[-1], final_out_filter0)\n",
    "\n",
    "    def _make_embedding(self, filters_list, in_filters, out_filter):\n",
    "        m = nn.ModuleList([\n",
    "            self._make_cbl(in_filters, filters_list[0], 1),\n",
    "            self._make_cbl(filters_list[0], filters_list[1], 3),\n",
    "            self._make_cbl(filters_list[1], filters_list[0], 1),\n",
    "            self._make_cbl(filters_list[0], filters_list[1], 3),\n",
    "            self._make_cbl(filters_list[1], filters_list[0], 1),\n",
    "            self._make_cbl(filters_list[0], filters_list[1], 3)])\n",
    "        m.add_module(\"conv_out\", nn.Conv2d(filters_list[1], out_filter, kernel_size=1,\n",
    "                                           stride=1, padding=0, bias=True))\n",
    "        return m\n",
    "\n",
    "    def _make_cbl(self, _in, _out, ks):\n",
    "        '''cbl = conv + batch_norm + leaky_relu\n",
    "        '''\n",
    "        pad = (ks - 1) // 2 if ks else 0\n",
    "        return nn.Sequential(OrderedDict([\n",
    "            (\"conv\", nn.Conv2d(_in, _out, kernel_size=ks, stride=1, padding=pad, bias=False)),\n",
    "            (\"bn\", nn.BatchNorm2d(_out)),\n",
    "            (\"relu\", nn.LeakyReLU(0.1)),\n",
    "        ]))\n",
    "    \n",
    "    def forward(self, x):\n",
    "        def _branch(_embedding, _in):\n",
    "            for i, e in enumerate(_embedding):\n",
    "                _in = e(_in)\n",
    "                if i == 4:\n",
    "                    out_branch = _in\n",
    "            return _in, out_branch\n",
    "        #  backbone\n",
    "        x2, x1, x0 = self.backbone(x)\n",
    "        #  yolo branch 0\n",
    "        out0, out0_branch = _branch(self.embedding0, x0)\n",
    "        #  yolo branch 1\n",
    "        x1_in = self.embedding1_cbl(out0_branch)\n",
    "        x1_in = self.embedding1_upsample(x1_in)\n",
    "        x1_in = torch.cat([x1_in, x1], 1)\n",
    "        out1, out1_branch = _branch(self.embedding1, x1_in)\n",
    "        #  yolo branch 2\n",
    "        x2_in = self.embedding2_cbl(out1_branch)\n",
    "        x2_in = self.embedding2_upsample(x2_in)\n",
    "        x2_in = torch.cat([x2_in, x2], 1)\n",
    "        out2, out2_branch = _branch(self.embedding2, x2_in)\n",
    "        return out0, out1, out2"
   ],
   "outputs": [],
   "metadata": {}
  },
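  {
   "cell_type": "markdown",
   "source": [
    "With COCO's 80 classes and 3 anchors per scale, each head's `conv_out` emits 3 * (5 + 80) = 255 channels, so for a 416x416 input the three outputs have shapes (bs, 255, 13, 13), (bs, 255, 26, 26) and (bs, 255, 52, 52). Checking the arithmetic:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "num_anchors_per_scale = 3\n",
    "num_classes = 80   # COCO\n",
    "# each anchor predicts tx, ty, tw, th, objectness (5 values) plus the class scores\n",
    "channels = num_anchors_per_scale * (5 + num_classes)\n",
    "print(channels)   # 255\n",
    "\n",
    "# output shapes (channels, h, w) for a 416x416 input at strides 32 / 16 / 8\n",
    "print([(channels, 416 // s, 416 // s) for s in (32, 16, 8)])\n",
    "# [(255, 13, 13), (255, 26, 26), (255, 52, 52)]"
   ],
   "outputs": [],
   "metadata": {}
  },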
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "class YOLOHead(nn.Module):\n",
    "    \"\"\"\n",
    "    三个检测头(13x13),(26x26),(52x52)，对每个检测头分别处理\n",
    "    输入：\n",
    "        anchors:一个检测头的3个anchor\n",
    "        img_size:(416, 416)\n",
    "    \"\"\"\n",
    "    def __init__(self, anchors, num_classes, img_size):\n",
    "        # img_size:(w, h)\n",
    "        super(YOLOHead, self).__init__()\n",
    "        self.anchors = anchors\n",
    "        self.num_anchors = len(anchors)\n",
    "        self.num_classes = num_classes\n",
    "        self.bbox_attrs = 5 + num_classes\n",
    "        self.img_size = img_size\n",
    "    def forward(self, input):\n",
    "        bs = input.size(0)\n",
    "        in_h = input.size(2)\n",
    "        in_w = input.size(3)\n",
    "        stride_h = self.img_size[1] / in_h\n",
    "        stride_w = self.img_size[0] / in_w\n",
    "        # anchors / stride\n",
    "        scaled_anchors = [(a_w / stride_w, a_h / stride_h) for a_w, a_h in self.anchors]\n",
    "        # [bs, num_anchors, h, w, (5 + num_classes)]\n",
    "        prediction = input.view(bs, self.num_anchors, self.bbox_attrs, in_h, in_w).permute(0, 1, 3, 4, 2).contiguous()\n",
    "        \n",
    "        x = torch.sigmoid(prediction[..., 0])\n",
    "        y = torch.sigmoid(prediction[..., 1])\n",
    "        w =     torch.exp(prediction[..., 2])\n",
    "        h =     torch.exp(prediction[..., 3])\n",
    "        conf = torch.sigmoid(prediction[..., 4])\n",
    "        # 类别概率不再用softmax而是用sigmoid\n",
    "        pred_cls = torch.sigmoid(prediction[..., 5:])"
   ],
   "outputs": [],
   "metadata": {}
  }
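  ,
  {
   "cell_type": "markdown",
   "source": [
    "To make the decode step concrete, here is a self-contained numpy re-creation for a single anchor on a tiny 2x2 cell grid (stride 32; the raw predictions are set to zero purely for illustration): the box center is the cell offset plus sigmoid(tx, ty), and the box size is the anchor scaled by exp(tw, th)."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "stride = 32.0\n",
    "# the largest anchor (116x90), converted to feature-map units as in YOLOHead\n",
    "anchor_w, anchor_h = 116.0 / stride, 90.0 / stride\n",
    "\n",
    "# made-up raw predictions for one anchor on a 2x2 grid (all zeros)\n",
    "tx = ty = tw = th = np.zeros((2, 2))\n",
    "\n",
    "grid_x, grid_y = np.meshgrid(np.arange(2), np.arange(2))\n",
    "# decode as in YOLOHead.forward, then rescale to input pixels\n",
    "cx = (sigmoid(tx) + grid_x) * stride\n",
    "cy = (sigmoid(ty) + grid_y) * stride\n",
    "bw = np.exp(tw) * anchor_w * stride\n",
    "bh = np.exp(th) * anchor_h * stride\n",
    "print(cx)   # cell centers: 16 and 48 along x\n",
    "print(bw)   # all 116: a zero tw reproduces the anchor width exactly"
   ],
   "outputs": [],
   "metadata": {}
  }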
 ],
 "metadata": {
  "orig_nbformat": 4,
  "language_info": {
   "name": "python",
   "version": "3.6.9",
   "mimetype": "text/x-python",
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "pygments_lexer": "ipython3",
   "nbconvert_exporter": "python",
   "file_extension": ".py"
  },
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3.6.9 64-bit"
  },
  "interpreter": {
   "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}