{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# DeepLab\n",
    "## v2\n",
    "Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs\n",
    "- 论文地址：DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs\n",
    "- 源码地址：DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs\n",
    "- 复现代码：https://github.com/kazuto1011/deeplab-pytorch\n",
    "这是一篇于 2016 年发布在 CVPR 上的文章。接着上一篇 DeepLab V1 网络简介，本文对 DeepLab V2 网络进行简单介绍。DeepLab V1 相比 DeepLab V2 就是换了个 Backbone（VGG → ResNet，简单换个 Backbone 就能涨大概 3 个点）然后引入了一个新的模块 ASPP（Atros Spatial Pyramid Pooling，空洞空间金字塔池化），其他的没太大区别。<br/>\n",
    "原文链接：https://blog.csdn.net/weixin_44878336/article/details/131973362\n",
    "![deeplabv2](../images/segment/deeplabv2-net.png)\n",
    "\n",
    "### 概况\n",
    "DeepLab v2 是一个语义分割模型，旨在对图像进行像素级别的分类，将每个像素分配到其对应的语义类别。它是 DeepLab 系列的第二个版本，在 2016 年由 Chen 等人提出，并取得了令人印象深刻的结果。<br/>\n",
    "DeepLab v2 的核心思想是使用深度卷积神经网络（DCNN）来解决语义分割问题，并在此基础上引入了一些关键技术来提高性能。其主要的核心思想包括：<br/>\n",
    "- 膨胀卷积（Atrous Convolution）：传统的卷积操作具有固定的感受野大小，但在语义分割任务中，需要考虑不同尺度的上下文信息。为了实现这一点，DeepLab v2 采用了膨胀卷积。通过在卷积操作中引入可调节的膨胀系数（率） r，可以在不增加计算量的情况下扩大感受野，从而捕捉更广泛的上下文信息。<br/>\n",
    "![deeplabv2-dilated](../images/segment/deeplabv2-conv.gif)\n",
    "![deeplabv2-dilated](../images/segment/deeplabv2-dilated-conv.gif)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    " \n",
    "# 定义一个膨胀率为2的膨胀卷积层\n",
    "class DilatedConv2d(nn.Module):\n",
    "    def __init__(self, in_channels, out_channels, kernel_size, dilation):\n",
    "        super(DilatedConv2d, self).__init__()\n",
    "        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=dilation, dilation=dilation)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        return self.conv(x)\n",
    " \n",
    "# 示例用法\n",
    "in_channels, out_channels, kernel_size, dilation = 3, 64, 3, 2\n",
    "model = DilatedConv2d(in_channels, out_channels, kernel_size, dilation)\n",
    " \n",
    "# 假设输入特征图的大小为[1, in_channels, H, W]\n",
    "input_feature_map = torch.randn(1, in_channels, 10, 10)\n",
    "output_feature_map = model(input_feature_map)\n",
    " \n",
    "print(output_feature_map.shape)"
   ]
  },
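  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a minimal sketch, not from the paper): a k x k kernel with dilation rate r has an effective kernel size of k + (k - 1)(r - 1), so a 3 x 3 convolution with r = 2 covers a 5 x 5 window while still using only 9 weights, and with padding = dilation the spatial size is preserved."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    " \n",
    "# Effective kernel size of a dilated convolution: k_eff = k + (k - 1) * (r - 1)\n",
    "def effective_kernel(k, r):\n",
    "    return k + (k - 1) * (r - 1)\n",
    " \n",
    "for r in [1, 2, 4]:\n",
    "    conv = nn.Conv2d(3, 3, kernel_size=3, padding=r, dilation=r)\n",
    "    out = conv(torch.randn(1, 3, 10, 10))\n",
    "    # padding=dilation keeps the 10x10 spatial size for a 3x3 kernel\n",
    "    print(r, effective_kernel(3, r), tuple(out.shape))"
   ]
  },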
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 多尺度信息融合：为了进一步提高语义分割的性能，DeepLab v2 采用了多尺度信息融合（Multi-Scale，MSc）的策略。它通过在不同膨胀系数 r rr 下进行多尺度的膨胀卷积操作，并将这些不同尺度的特征图融合在一起，从而使模型能够同时利用局部和全局的上下文信息。\n",
    "- 高分辨率分类器：在语义分割任务中，分辨率较低的特征图可能会导致细小目标的信息丢失。为了解决这个问题，DeepLab v2 在膨胀卷积后使用了一个全局平均池化层（AvgPooling），将特征图的尺寸降低到 1 × 1，然后通过一个额外的高分辨率分类器来对原始尺寸的特征图进行上采样，从而恢复细节信息。\n",
    "- 条件随机场（CRF）后处理：在语义分割中，由于卷积神经网络的输出是逐像素的分类结果，可能存在一些不连续性和噪声。为了优化分割结果并增强空间连续性，DeepLab v2 使用了条件随机场（CRF）后处理步骤，以进一步优化像素标签。"
   ]
  },
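  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The multi-scale fusion idea above can be sketched as follows (a minimal illustration with assumed channel sizes, not the exact MSc branches of the paper): the same input is passed through parallel dilated convolutions with different rates, and the branch outputs are concatenated along the channel dimension."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    " \n",
    "# Minimal multi-scale fusion sketch: parallel 3x3 convs with different\n",
    "# dilation rates over the same input, concatenated along channels.\n",
    "class MultiScaleFusion(nn.Module):\n",
    "    def __init__(self, in_channels, out_channels, rates=(1, 2, 4)):\n",
    "        super().__init__()\n",
    "        self.branches = nn.ModuleList([\n",
    "            nn.Conv2d(in_channels, out_channels, 3, padding=r, dilation=r)\n",
    "            for r in rates\n",
    "        ])\n",
    " \n",
    "    def forward(self, x):\n",
    "        # each branch sees a different receptive field; fuse by concatenation\n",
    "        return torch.cat([b(x) for b in self.branches], dim=1)\n",
    " \n",
    "fusion = MultiScaleFusion(3, 16)\n",
    "y = fusion(torch.randn(1, 3, 10, 10))\n",
    "print(y.shape)  # torch.Size([1, 48, 10, 10])"
   ]
  },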
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "from torch.autograd import Variable\n",
    " \n",
    "class CRF(nn.Module):\n",
    "    def __init__(self, num_classes, pairwise_potential=None):\n",
    "        super(CRF, self).__init__()\n",
    "        self.num_classes = num_classes\n",
    "        self.pairwise_potential = pairwise_potential\n",
    "        if pairwise_potential is None:\n",
    "            self.pairwise_potential = nn.Parameter(\n",
    "                torch.zeros(num_classes, num_classes))\n",
    " \n",
    "    def _normalize(self, input, mask):\n",
    "        # 计算归一化常数\n",
    "        partition = input.clone()\n",
    "        for i in range(self.num_classes):\n",
    "            if i == 0:\n",
    "                partition[:, :, 0:1] = 0  # 忽略背景\n",
    "            else:\n",
    "                partition[:, :, i:i+1] = input[:, :, i:i+1] - \\\n",
    "                    input[:, :, 0:1].mul(self.pairwise_potential[0, i])\n",
    "        # 应用mask\n",
    "        mask = mask.view(mask.size()[0], mask.size()[1], 1).expand(\n",
    "            mask.size()[0], mask.size()[1], self.num_classes)\n",
    "        mask = mask.contiguous()\n",
    "        # 计算归一化的分数\n",
    "        partition = partition.masked_fill_(1 - mask, 1e-10)\n",
    "        return partition\n",
    " \n",
    "    def _compute_log_norm_const(self, input, mask):\n",
    "        # 计算归一化常数的对数\n",
    "        partition = self._normalize(input, mask)\n",
    "        return partition.log_softmax(dim=2)\n",
    " \n",
    "    def forward(self, input, mask):\n",
    "        log_norm_const = self._compute_log_norm_const(input, mask)\n",
    "        return log_norm_const\n",
    " \n",
    "# 示例用法\n",
    "num_classes = 21  # 假设有21个类别\n",
    "crf = CRF(num_classes)\n",
    " \n",
    "# 随机初始化特征映射和掩码\n",
    "features = Variable(torch.randn(1, 512, 7, 7))  # 假设特征映射大小为7x7\n",
    "mask = Variable(torch.ones(1, 7, 7).byte())  # 假定掩码为全1\n",
    " \n",
    "# 前向传播\n",
    "output = crf(features, mask)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## v3网络结构\n",
    "DeepLabV3+的网络结构如下图所示，主要为Encoder-Decoder结构。其中，Encoder为改进的DeepLabV3，Decoder为3+版本新提出的。<br/>\n",
    "![deeplabv3](../images/segment/deeplabv3-network.png)<br/>\n",
    "\n",
    "### class DeepLab\n",
    "该class为总的网络结构，我们从forward函数可以看出其整体的流程：输入x经过backbone得到16倍下采样的feature map1和低级feature map2；feature map1送入ASPP模块，得到结果，然后和feature map2一起送入Decoder模块；最后经过插值得到与原图大小相等的预测图。代码如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "'''\n",
    "-> Deeplab V3 +\n",
    "'''\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "from itertools import chain\n",
    "# BaseModel, ResNet, Xception, ASSP and Decoder come from the reimplementation repo\n",
    " \n",
    "class DeepLab(BaseModel):\n",
    "    def __init__(self, num_classes, in_channels=3, backbone='xception', pretrained=True, \n",
    "                output_stride=16, freeze_bn=False, **_):\n",
    "                \n",
    "        super(DeepLab, self).__init__()\n",
    "        assert 'xception' in backbone or 'resnet' in backbone\n",
    "        if 'resnet' in backbone:\n",
    "            self.backbone = ResNet(in_channels=in_channels, output_stride=output_stride, pretrained=pretrained)\n",
    "            low_level_channels = 256\n",
    "        else:\n",
    "            self.backbone = Xception(output_stride=output_stride, pretrained=pretrained)\n",
    "            low_level_channels = 128\n",
    " \n",
    "        self.ASSP = ASSP(in_channels=2048, output_stride=output_stride)\n",
    "        self.decoder = Decoder(low_level_channels, num_classes)\n",
    " \n",
    "        if freeze_bn: self.freeze_bn()\n",
    " \n",
    "    def forward(self, x):\n",
    "        H, W = x.size(2), x.size(3)\n",
    "        x, low_level_features = self.backbone(x)\n",
    "        x = self.ASSP(x)\n",
    "        x = self.decoder(x, low_level_features)\n",
    "        x = F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True)\n",
    "        return x\n",
    " \n",
    "    # Two functions to yield the parameters of the backbone\n",
    "    # & Decoder / ASSP to use differentiable learning rates\n",
    "    # FIXME: in xception, we use the parameters from xception and not aligned xception\n",
    "    # better to have higher lr for this backbone\n",
    " \n",
    "    def get_backbone_params(self):\n",
    "        return self.backbone.parameters()\n",
    " \n",
    "    def get_decoder_params(self):\n",
    "        return chain(self.ASSP.parameters(), self.decoder.parameters())\n",
    " \n",
    "    def freeze_bn(self):\n",
    "        for module in self.modules():\n",
    "            if isinstance(module, nn.BatchNorm2d): module.eval()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "需要注意的是：如果使用ResNet系列作为backbone，中间的低级feature map输出维度为256，如果使用Xception作为backbone，中间的低级feature map维度为128。不过，不管是256还是128，最终都要在送入Decoder后降采样到48通道。"
   ]
  },
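  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick shape check of this projection (a sketch with assumed spatial sizes): a 1 x 1 convolution maps the low-level features to 48 channels, and concatenating them with the 256-channel ASPP output gives the 48 + 256 = 304 input channels used by the Decoder's first 3 x 3 convolution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    " \n",
    "# Low-level features (256 ch for ResNet, 128 for Xception) are projected to\n",
    "# 48 channels by a 1x1 conv, then concatenated with the 256-channel ASPP\n",
    "# output, giving 48 + 256 = 304 channels for the decoder convs.\n",
    "for low_level_channels in (256, 128):\n",
    "    proj = nn.Conv2d(low_level_channels, 48, 1, bias=False)\n",
    "    low = proj(torch.randn(1, low_level_channels, 64, 64))\n",
    "    aspp_out = torch.randn(1, 256, 64, 64)  # assume matching spatial size\n",
    "    fused = torch.cat((low, aspp_out), dim=1)\n",
    "    print(fused.shape)  # torch.Size([1, 304, 64, 64])"
   ]
  },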
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### class ASPP"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ASSP(nn.Module):\n",
    "    def __init__(self, in_channels, output_stride):\n",
    "        super(ASSP, self).__init__()\n",
    " \n",
    "        assert output_stride in [8, 16], 'Only output strides of 8 or 16 are suported'\n",
    "        if output_stride == 16: dilations = [1, 6, 12, 18]\n",
    "        elif output_stride == 8: dilations = [1, 12, 24, 36]\n",
    "        \n",
    "        self.aspp1 = assp_branch(in_channels, 256, 1, dilation=dilations[0])\n",
    "        self.aspp2 = assp_branch(in_channels, 256, 3, dilation=dilations[1])\n",
    "        self.aspp3 = assp_branch(in_channels, 256, 3, dilation=dilations[2])\n",
    "        self.aspp4 = assp_branch(in_channels, 256, 3, dilation=dilations[3])\n",
    " \n",
    "        self.avg_pool = nn.Sequential(\n",
    "            nn.AdaptiveAvgPool2d((1, 1)),\n",
    "            nn.Conv2d(in_channels, 256, 1, bias=False),\n",
    "            nn.BatchNorm2d(256),\n",
    "            nn.ReLU(inplace=True))\n",
    "        \n",
    "        self.conv1 = nn.Conv2d(256*5, 256, 1, bias=False)\n",
    "        self.bn1 = nn.BatchNorm2d(256)\n",
    "        self.relu = nn.ReLU(inplace=True)\n",
    "        self.dropout = nn.Dropout(0.5)\n",
    " \n",
    "        initialize_weights(self)\n",
    " \n",
    "    def forward(self, x):\n",
    "        x1 = self.aspp1(x)\n",
    "        x2 = self.aspp2(x)\n",
    "        x3 = self.aspp3(x)\n",
    "        x4 = self.aspp4(x)\n",
    "        x5 = F.interpolate(self.avg_pool(x), size=(x.size(2), x.size(3)), mode='bilinear', align_corners=True)\n",
    " \n",
    "        x = self.conv1(torch.cat((x1, x2, x3, x4, x5), dim=1))\n",
    "        x = self.bn1(x)\n",
    "        x = self.dropout(self.relu(x))\n",
    " \n",
    "        return x"
   ]
  },
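  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ASSP module above relies on an `assp_branch` helper from the reimplementation repo. A plausible minimal version (an assumption; the repo's helper may differ) is a convolution with the given dilation followed by BatchNorm and ReLU:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    " \n",
    "# Hypothetical assp_branch (assumption; the repo's helper may differ):\n",
    "# a conv with matching dilation/padding, followed by BN and ReLU.\n",
    "def assp_branch(in_channels, out_channels, kernel_size, dilation):\n",
    "    padding = 0 if kernel_size == 1 else dilation\n",
    "    return nn.Sequential(\n",
    "        nn.Conv2d(in_channels, out_channels, kernel_size,\n",
    "                  padding=padding, dilation=dilation, bias=False),\n",
    "        nn.BatchNorm2d(out_channels),\n",
    "        nn.ReLU(inplace=True))\n",
    " \n",
    "branch = assp_branch(2048, 256, 3, dilation=6)\n",
    "print(branch(torch.randn(1, 2048, 16, 16)).shape)  # torch.Size([1, 256, 16, 16])"
   ]
  },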
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### class Decoder"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Decoder(nn.Module):\n",
    "    def __init__(self, low_level_channels, num_classes):\n",
    "        super(Decoder, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(low_level_channels, 48, 1, bias=False)\n",
    "        self.bn1 = nn.BatchNorm2d(48)\n",
    "        self.relu = nn.ReLU(inplace=True)\n",
    " \n",
    "        # Table 2, best performance with two 3x3 convs\n",
    "        self.output = nn.Sequential(\n",
    "            nn.Conv2d(48+256, 256, 3, stride=1, padding=1, bias=False),\n",
    "            nn.BatchNorm2d(256),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Conv2d(256, 256, 3, stride=1, padding=1, bias=False),\n",
    "            nn.BatchNorm2d(256),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Dropout(0.1),\n",
    "            nn.Conv2d(256, num_classes, 1, stride=1),\n",
    "        )\n",
    "        initialize_weights(self)\n",
    " \n",
    "    def forward(self, x, low_level_features):\n",
    "        low_level_features = self.conv1(low_level_features)\n",
    "        low_level_features = self.relu(self.bn1(low_level_features))\n",
    "        H, W = low_level_features.size(2), low_level_features.size(3)\n",
    " \n",
    "        x = F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True)\n",
    "        x = self.output(torch.cat((low_level_features, x), dim=1))\n",
    "        return x"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "notes",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
