{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# The FCN Network\n",
    "## Network Structure\n",
    "The FCN architecture has two parts: a fully convolutional part and a deconvolution part. The fully convolutional part is a classic CNN (such as VGG or ResNet) used to extract features, while the deconvolution part upsamples those features back to the input resolution to produce the semantic segmentation map. FCN accepts color images of arbitrary size, and its output has the same spatial size as the input, with n (the number of target classes) + 1 (background) channels. The network structure is shown below:<br/>\n",
    "![fcn](../images/segment/fcn-network.png)<br/>\n",
    "## Upsampling\n",
    "The convolution and pooling operations shrink the feature maps, so obtaining a dense pixel-wise prediction at the original image size requires upsampling them. Upsampling can be implemented with bilinear interpolation, which in turn is easily expressed as a transposed convolution with a fixed kernel; transposed convolution is also called deconvolution. In the paper, the authors do not fix the kernel but instead make it a learnable parameter. The transposed convolution operation is illustrated below:<br/>\n",
    "![fcn](../images/segment/fcn-deconvolution.png)<br/>\n",
    "## Skip Architecture\n",
    "Upsampling only the last feature map to the original size usually produces a poor segmentation: that feature map is too small, so too much detail has been lost. The skip architecture therefore combines the prediction from the last layer (rich in global information) with predictions from shallower layers (rich in local information), making local predictions while respecting the global ones.<br/>\n",
    "The prediction from the deepest layer (stride 32, the FCN-32s prediction) is upsampled 2x and fused (by element-wise addition) with the prediction made from the pool4 layer (stride 16); this network is called FCN-16s. That result is upsampled 2x again and fused with the prediction made from the pool3 layer, giving FCN-8s. Illustrated below:\n",
    "![fcn](../images/segment/fcn-skip.png)<br/>\n",
    "## Training FCN\n",
    "- Stage 1: initialize from a classic classification network; the last two stages are fully connected (red) and their parameters are discarded.<br/>\n",
    "![fcn](../images/segment/fcn-32s.png)<br/>\n",
    "- Stage 2: the FCN-32s network predicts a small segmentation map from the small feature map, then upsamples it directly to full size.<br/>\n",
    "![fcn](../images/segment/fcn-train-32s.png)<br/>\n",
    "- Stage 3: the FCN-16s network splits the upsampling into two steps. Before the second step, the prediction from the 4th pooling layer is fused in; this skip connection improves accuracy.<br/>\n",
    "![fcn](../images/segment/fcn-train-16s.png)<br/>\n",
    "- Stage 4: the FCN-8s network splits the upsampling into three steps, further fusing in the prediction from the 3rd pooling layer.<br/>\n",
    "![fcn](../images/segment/fcn-train-8s.png)<br/>"
   ]
  },
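  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (assuming PyTorch) of the 2x upsampling step described above: a stride-2 transposed convolution doubles the spatial size of a feature map, which is how FCN recovers the resolution lost to pooling. The layer hyperparameters here are illustrative, not taken from the paper."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# a stride-2 transposed convolution: (H - 1)*2 - 2*1 + 4 = 2H, i.e. 2x upsampling\n",
    "up = nn.ConvTranspose2d(1, 1, kernel_size=4, stride=2, padding=1, bias=False)\n",
    "x = torch.randn(1, 1, 8, 8)\n",
    "y = up(x)\n",
    "print(x.shape, y.shape)  # 8x8 -> 16x16"
   ]
  },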
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building the Model in PyTorch\n",
    "Download the pretrained weights and extract the layer instances. Running the code below downloads the weights automatically into the C:\\Users\\**\\.cache\\torch\\hub\\checkpoints directory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "from torchvision import models\n",
    "\n",
    "vgg = models.vgg16(pretrained=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Extract the layers: vgg.features is the feature-extraction part of VGG16 (the convolutional network), and vgg.classifier is the classifier part (the fully connected network)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "print(\"----show VGG16's features.children()----\")\n",
    "\n",
    "# feats = vgg.features.children()  # a generator: <generator object Module.children at 0x0000021CCC997580>\n",
    "feats = list(vgg.features.children())\n",
    "# print(*feats)  # unpack the list and print all its elements (*list is only valid as a function argument)\n",
    "\n",
    "for i, layer in enumerate(feats):\n",
    "    print(\"====={0}======\".format(i))\n",
    "    print(layer)  # each individual network layer\n",
    "# print(feats[0:9])  # the first 9 layers (indices 0-8) as a list\n",
    "# print(*feats[0:9])  # unpacked: 9 separate arguments rather than a list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A convolution and a matching transposed convolution invert each other's effect on the spatial size:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "con = nn.Conv2d(1, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n",
    "dec = nn.ConvTranspose2d(16, 1, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
    "feat = torch.randn((1, 1, 5, 5))  # (batch, channels, H, W)\n",
    "feat_c = con(feat)    # 5x5 -> 3x3\n",
    "feat_d = dec(feat_c)  # 3x3 -> 5x5, back to the original size\n",
    "print(feat.shape)\n",
    "print(feat_c.shape)\n",
    "print(feat_d.shape)"
   ]
  },
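  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The printed shapes follow from the standard size formulas for convolution and transposed convolution; a quick sketch of the arithmetic (the helper names are mine, not library functions):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "# standard output-size formulas (one spatial dimension):\n",
    "# convolution:            H_out = (H_in + 2*p - k) // s + 1\n",
    "# transposed convolution: H_out = (H_in - 1) * s - 2*p + k\n",
    "def conv_out(h, k, s, p):\n",
    "    return (h + 2 * p - k) // s + 1\n",
    "\n",
    "def deconv_out(h, k, s, p):\n",
    "    return (h - 1) * s - 2 * p + k\n",
    "\n",
    "h = conv_out(5, 3, 2, 1)           # 5 -> 3\n",
    "print(h, deconv_out(h, 3, 2, 1))   # 3 -> 5: back to the original size"
   ]
  },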
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The full model-building code; only the model portion is excerpted here for reference:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "from torchvision import models\n",
    "import logging\n",
    "from itertools import chain\n",
    "\n",
    "# a base class providing model-description utilities (logger, parameter counts, printing)\n",
    "class BaseModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(BaseModel, self).__init__()\n",
    "        self.logger = logging.getLogger(self.__class__.__name__)\n",
    "    # the method every subclass must override\n",
    "    def forward(self):\n",
    "        raise NotImplementedError\n",
    "    \n",
    "    # print a parameter summary and write it to the log\n",
    "    def summary(self):\n",
    "        # count all parameters\n",
    "        total_params = sum(p.numel() for p in self.parameters())\n",
    "        print(f\"{total_params:,} total parameters.\")\n",
    "        total_trainable_params = sum(\n",
    "            p.numel() for p in self.parameters() if p.requires_grad)\n",
    "        print(f\"{total_trainable_params:,} training parameters.\")\n",
    "        self.logger.info(f'Nbr of trainable parameters: {total_trainable_params}')\n",
    "    \n",
    "    # return a description string\n",
    "    def __str__(self):\n",
    "        total_params = sum(p.numel() for p in self.parameters())\n",
    "        print(f\"{total_params:,} total parameters.\")\n",
    "        total_trainable_params = sum(\n",
    "            p.numel() for p in self.parameters() if p.requires_grad)\n",
    "        print(f\"{total_trainable_params:,} training parameters.\")\n",
    "        return super(BaseModel, self).__str__() + f'\\nNbr of trainable parameters: {total_trainable_params}'"
   ]
  },
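  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick usage sketch for BaseModel; the TinyModel subclass below is a made-up example, not part of the original code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "# hypothetical minimal subclass, just to exercise BaseModel.summary()\n",
    "class TinyModel(BaseModel):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.fc = nn.Linear(4, 2)  # 4*2 weights + 2 biases = 10 parameters\n",
    "    def forward(self, x):\n",
    "        return self.fc(x)\n",
    "\n",
    "m = TinyModel()\n",
    "m.summary()  # prints the total and trainable parameter counts (10 and 10 here)"
   ]
  },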
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The upsampling weights:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "# the upsampling kernel weights defined here are fixed values\n",
    "# returns an (in_channels, out_channels, k, k) tensor in which each (i, i) channel slice is a\n",
    "# bilinear kernel and every off-diagonal slice is zero, e.g. for 3 channels: [[g, 0, 0], [0, g, 0], [0, 0, g]]\n",
    "def get_upsampling_weight(in_channels, out_channels, kernel_size):\n",
    "    factor = (kernel_size + 1) // 2\n",
    "    if kernel_size % 2 == 1:\n",
    "        center = factor - 1\n",
    "    else:\n",
    "        center = factor - 0.5\n",
    "    # two vectors of length kernel_size; their broadcast product yields a matrix (like meshgrid)\n",
    "    og = np.ogrid[:kernel_size, :kernel_size]\n",
    "    filt = (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)\n",
    "    weight = np.zeros((in_channels, out_channels, kernel_size, kernel_size), dtype=np.float64)\n",
    "    weight[list(range(in_channels)), list(range(out_channels)), :, :] = filt\n",
    "    return torch.from_numpy(weight).float()"
   ]
  },
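  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of get_upsampling_weight (the expected values below are worked out from the formula by hand): for kernel_size=4 the 1-D profile is [0.25, 0.75, 0.75, 0.25], the 2-D filter is its outer product, and only the (i, i) channel slices are non-zero."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "w = get_upsampling_weight(2, 2, 4)\n",
    "print(w.shape)              # torch.Size([2, 2, 4, 4])\n",
    "print(w[0, 0])              # the bilinear kernel; peak value 0.75 * 0.75 = 0.5625\n",
    "print(w[0, 1].abs().sum())  # off-diagonal channel pairs are all zero"
   ]
  },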
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The FCN8 model. Its backbone (feature extractor) is VGG16, using PyTorch's pretrained weights."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "class FCN8(BaseModel):\n",
    "    def __init__(self, num_classes, pretrained=True, freeze_bn=False, **_):\n",
    "        super(FCN8, self).__init__()\n",
    "        vgg = models.vgg16(pretrained)\n",
    "        features = list(vgg.features.children())\n",
    "        classifier = list(vgg.classifier.children())\n",
    "        features[0].padding = (100, 100)\n",
    "        for layer in features:\n",
    "            if 'MaxPool' in layer.__class__.__name__:\n",
    "                # __class__ looks like torch.nn.modules.conv.Conv2d,\n",
    "                # so __name__ is just Conv2d\n",
    "                # enable ceil_mode in max pooling, to avoid size mismatches when upsampling\n",
    "                layer.ceil_mode = True\n",
    "        # extract pool3, pool4 and pool5 from the VGG net\n",
    "        # layers 0-16 form the first feature block (through pool3)\n",
    "        self.pool3 = nn.Sequential(*features[:17])\n",
    "        # layers 17-23 form the second feature block (through pool4)\n",
    "        self.pool4 = nn.Sequential(*features[17:24])\n",
    "        # layers 24 onward form the third feature block (through pool5)\n",
    "        self.pool5 = nn.Sequential(*features[24:])\n",
    "        \n",
    "        # adjust the depth of pool3 and pool4 to num_classes\n",
    "        self.adj_pool3 = nn.Conv2d(256, num_classes, kernel_size=1)\n",
    "        self.adj_pool4 = nn.Conv2d(512, num_classes, kernel_size=1)\n",
    "        \n",
    "        # replace the FC layer of VGG with conv layers\n",
    "        conv6 = nn.Conv2d(512, 4096, kernel_size=7)\n",
    "        conv7 = nn.Conv2d(4096, 4096, kernel_size=1)\n",
    "        output = nn.Conv2d(4096, num_classes, kernel_size=1)\n",
    "        \n",
    "        # copy the weights from VGG's FC pretrained layers\n",
    "        conv6.weight.data.copy_(classifier[0].weight.data.view(\n",
    "            conv6.weight.data.size()))\n",
    "        conv6.bias.data.copy_(classifier[0].bias.data)\n",
    "        \n",
    "        conv7.weight.data.copy_(classifier[3].weight.data.view(\n",
    "            conv7.weight.data.size()))\n",
    "        conv7.bias.data.copy_(classifier[3].bias.data)\n",
    "        \n",
    "        # get the outputs\n",
    "        self.output = nn.Sequential(conv6, nn.ReLU(inplace=True), nn.Dropout(),\n",
    "                                    conv7, nn.ReLU(inplace=True), nn.Dropout(), \n",
    "                                    output)\n",
    "        \n",
    "        # three upsampling layers are needed: one to 2x-upsample the output,\n",
    "        # one to 2x-upsample the sum of (cropped) pool4 and the upsampled output,\n",
    "        # and one to 8x-upsample the final value (pool3 + the previous sum)\n",
    "        self.up_output = nn.ConvTranspose2d(num_classes, num_classes,\n",
    "                                            kernel_size=4, stride=2, bias=False)\n",
    "        self.up_pool4_out = nn.ConvTranspose2d(num_classes, num_classes, \n",
    "                                            kernel_size=4, stride=2, bias=False)\n",
    "        self.up_final = nn.ConvTranspose2d(num_classes, num_classes, \n",
    "                                            kernel_size=16, stride=8, bias=False)\n",
    "        \n",
    "        # use fixed bilinear kernels for the upsampling weights\n",
    "        self.up_output.weight.data.copy_(\n",
    "            get_upsampling_weight(num_classes, num_classes, 4))\n",
    "        self.up_pool4_out.weight.data.copy_(\n",
    "            get_upsampling_weight(num_classes, num_classes, 4))\n",
    "        self.up_final.weight.data.copy_(\n",
    "            get_upsampling_weight(num_classes, num_classes, 16))\n",
    "        \n",
    "        # freeze the weights: this is fixed upsampling, not a learned deconvolution\n",
    "        for m in self.modules():\n",
    "            if isinstance(m, nn.ConvTranspose2d):\n",
    "                m.weight.requires_grad = False\n",
    "        if freeze_bn: self.freeze_bn()\n",
    "    \n",
    "    # VGG feature extraction --> conv relu dropout conv relu dropout conv --> upsample, fuse, upsample, fuse, upsample\n",
    "    def forward(self, x):\n",
    "        img_H, img_W = x.size()[2], x.size()[3]\n",
    "        \n",
    "        # forward the image through VGG16's feature blocks\n",
    "        pool3 = self.pool3(x)\n",
    "        pool4 = self.pool4(pool3)\n",
    "        pool5 = self.pool5(pool4)\n",
    "\n",
    "        # get the outputs and upsample them\n",
    "        output = self.output(pool5)  # conv relu dropout conv relu dropout conv\n",
    "        up_output = self.up_output(output)  # convT\n",
    "\n",
    "        # adjust pool4, crop it to match, and add the upsampled outputs\n",
    "        adjstd_pool4 = self.adj_pool4(0.01 * pool4)  # conv\n",
    "        add_out_pool4 = self.up_pool4_out(adjstd_pool4[:, :, 5: (5 + up_output.size()[2]), \n",
    "                                            5: (5 + up_output.size()[3])]\n",
    "                                           + up_output)  # convT\n",
    "        \n",
    "        # adjust pool3, crop it, and add it to the last upsampled sum\n",
    "        adjstd_pool3 = self.adj_pool3(0.0001 * pool3)  # conv\n",
    "        final_value = self.up_final(adjstd_pool3[:, :, 9: (9 + add_out_pool4.size()[2]), 9: (9 + add_out_pool4.size()[3])]\n",
    "                                 + add_out_pool4)  # convT\n",
    "\n",
    "        # crop away the regions introduced by the initial padding of 100, back to the input size\n",
    "        final_value = final_value[:, :, 31: (31 + img_H), 31: (31 + img_W)].contiguous()\n",
    "        return final_value\n",
    "    \n",
    "    def get_backbone_params(self):\n",
    "        return chain(self.pool3.parameters(), self.pool4.parameters(), self.pool5.parameters(), self.output.parameters())\n",
    "\n",
    "    def get_decoder_params(self):\n",
    "        return chain(self.up_output.parameters(), self.adj_pool4.parameters(), self.up_pool4_out.parameters(),\n",
    "            self.adj_pool3.parameters(), self.up_final.parameters())\n",
    "\n",
    "    def freeze_bn(self):\n",
    "        for module in self.modules():\n",
    "            if isinstance(module, nn.BatchNorm2d): module.eval()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "# quick smoke test: 9 classes, a batch of four 3x28x28 images\n",
    "fcn8 = FCN8(9)\n",
    "x = torch.randn((4, 3, 28, 28))\n",
    "print(fcn8(x).shape)  # the output spatial size matches the input: (4, 9, 28, 28)"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
