{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Implementing the FCN-8X Network Architecture\n",
    "\n",
    "Build an FCN-8X network model and provide documentation explaining the model's structure.\n",
    "\n",
    "- Complete the dataset preparation (20 points).\n",
    "- Complete model training with no obvious errors in the log output (20 points).\n",
    "- Provide images of the final segmentation results (10 points).\n",
    "- Complete the FCN-8X code (20 points).\n",
    "- Write documentation describing the whole FCN training process (20 points).\n",
    "- Explain why the 8X architecture performs better than the 16X one (10 points).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "This notebook is the homework for the week 9 part of the course series.\n",
    "http://edu.csdn.net/lecturer/1427\n",
    "\n",
    "## GPU time on TinyMind is relatively expensive ($0.09 per CPU per hour, $0.99 per GPU per hour). Run all assignments locally first until they produce reasonable results and you are sure they run correctly, then upload them to TinyMind. Start with CPU resources, and switch to GPU only after all the code is verified.\n",
    "\n",
    "TinyMind already provides TensorFlow 1.4, which is a bit faster than 1.3 and is recommended.\n",
    "\n",
    "## Assignment\n",
    "\n",
    "Based on the FCN presented in the week 9 video, build an FCN training model. You are required to implement the missing parts of the code and to obtain reasonably good results with your own implementation.\n",
    "\n",
    "### Dataset\n",
    "This assignment uses the semantic segmentation portion of the Pascal VOC2012 data as its dataset.\n",
    "\n",
    "VOC website: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/\n",
    "\n",
    "The dataset is not provided with this assignment; please find and download the data from the URL above, and read the VOC site's description of the dataset carefully.\n",
    "\n",
    "The VOC dataset directory structure is as follows:\n",
    "```\n",
    "├── local\n",
    "│   ├── VOC2006\n",
    "│   └── VOC2007\n",
    "├── results\n",
    "│   ├── VOC2006\n",
    "│   │   └── Main\n",
    "│   └── VOC2007\n",
    "│       ├── Layout\n",
    "│       ├── Main\n",
    "│       └── Segmentation\n",
    "├── VOC2007\n",
    "│   ├── Annotations\n",
    "│   ├── ImageSets\n",
    "│   │   ├── Layout\n",
    "│   │   ├── Main\n",
    "│   │   └── Segmentation\n",
    "│   ├── JPEGImages\n",
    "│   ├── SegmentationClass\n",
    "│   └── SegmentationObject\n",
    "├── VOC2012\n",
    "│   ├── Annotations\n",
    "│   ├── ImageSets\n",
    "│   │   ├── Action\n",
    "│   │   ├── Layout\n",
    "│   │   ├── Main\n",
    "│   │   └── Segmentation\n",
    "│   ├── JPEGImages\n",
    "│   ├── SegmentationClass\n",
    "│   └── SegmentationObject\n",
    "└── VOCcode\n",
    "```\n",
    "\n",
    "This assignment uses the contents of the VOC2012 directory. The dataset split is defined in **VOC2012/ImageSets/Segmentation**: train.txt lists 1464 images and val.txt lists 1449 images.\n",
    "\n",
    "The semantic segmentation labels are in **VOC2012/SegmentationClass**; note that not every image in the dataset has a segmentation label.\n",
    "The labels mark different objects with colors. The dataset contains 20 object classes, numbered 1-20, plus the background class numbered 0, for 21 classes in total. The mapping between class numbers and colors is:\n",
    "```py\n",
    "# class\n",
    "classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat',\n",
    "           'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable',\n",
    "           'dog', 'horse', 'motorbike', 'person', 'potted plant',\n",
    "           'sheep', 'sofa', 'train', 'tv/monitor']\n",
    "\n",
    "# RGB color for each class\n",
    "colormap = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128],\n",
    "            [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], [192, 0, 0],\n",
    "            [64, 128, 0], [192, 128, 0], [64, 0, 128], [192, 0, 128],\n",
    "            [64, 128, 128], [192, 128, 128], [0, 64, 0], [128, 64, 0],\n",
    "            [0, 192, 0], [128, 192, 0], [0, 64, 128]]\n",
    "```\n",
    "\n",
    "The mapping can be computed with **VOCcode/VOClabelcolormap.m**; the assignment code also contains code that computes it, so it is not detailed here - please study the code yourself.\n",
    "\n",
    ">Note that there is actually one more class, numbered 255, whose color is [224, 224, 192]; it is used to color object boundaries and is not handled here.\n",
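    "\n",
    "For illustration, the color table above can be inverted into a lookup table that converts a label image into a per-pixel array of class indices. Below is a minimal numpy sketch of that conversion (the helper name image2label is mine; the assignment code contains an equivalent):\n",
    "\n",
    "```py\n",
    "import numpy as np\n",
    "\n",
    "colormap = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128],\n",
    "            [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], [192, 0, 0],\n",
    "            [64, 128, 0], [192, 128, 0], [64, 0, 128], [192, 0, 128],\n",
    "            [64, 128, 128], [192, 128, 128], [0, 64, 0], [128, 64, 0],\n",
    "            [0, 192, 0], [128, 192, 0], [0, 64, 128]]\n",
    "\n",
    "# Encode every RGB triple as one integer and build a 256^3-entry lookup table.\n",
    "cm2lbl = np.zeros(256 ** 3, dtype=np.int64)\n",
    "for i, cm in enumerate(colormap):\n",
    "    cm2lbl[(cm[0] * 256 + cm[1]) * 256 + cm[2]] = i\n",
    "\n",
    "def image2label(im):\n",
    "    # im: H x W x 3 uint8 label image; returns an H x W array of class indices.\n",
    "    data = im.astype('int32')\n",
    "    idx = (data[:, :, 0] * 256 + data[:, :, 1]) * 256 + data[:, :, 2]\n",
    "    return cm2lbl[idx]\n",
    "```\n",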
    "\n",
    "### Preparing the training data\n",
    "The training data must be packed into tfrecord format beforehand; this step is done locally.\n",
    "\n",
    "Packing is done with the **convert_fcn_dataset.py** script from the assignment code. Part of the script has been removed and must be completed by the student.\n",
    "\n",
    "```\n",
    "python3 convert_fcn_dataset.py --data_dir=/path/to/VOCdevkit/VOC2012/ --output_dir=./\n",
    "```\n",
    "\n",
    "\n",
    "This step produces two files, **fcn_train.record** and **fcn_val.record**, each around 400MB, about 800MB in total. If the resulting files are much larger or smaller, something probably went wrong while generating the data; please check.\n",
    "\n",
    ">Hint: the dataset-generation code from week 8 can serve as a reference for completing this code.\n",
    "\n",
    "### Uploading the dataset\n",
    "See the material from weeks 7 and 8; not repeated here.\n",
    "\n",
    "### Pretrained model\n",
    "The pretrained model is the VGG16 model from the TensorFlow model zoo. Please find it in the model zoo yourself and upload it to TinyMind.\n",
    "\n",
    "Students with network problems can use the copy already uploaded to TinyMind as the dataset **ai100/vgg16**.\n",
    "\n",
    "### Model\n",
    "The model code is a modified version of the week 9 FCN code from the course video; the changes are mainly reorganization plus added data input and result output.\n",
    "\n",
    "Reference code: https://gitee.com/ai100/quiz-w9-code.git\n",
    "\n",
    "Create a new model on TinyMind, using the following model as a reference for the settings:\n",
    "\n",
    "https://www.tinymind.com/ai100/quiz-w9-fcn\n",
    "\n",
    "After copying the model you can see all of its parameters.\n",
    "\n",
    "Note that the code uses extra libraries, so the following packages must be entered as dependencies when creating the model:\n",
    "```\n",
    "pydensecrf\n",
    "opencv-python\n",
    "```\n",
    ">cv2 is opencv-python; for local runs just install it with pip. It is not an official build and lacks some rarely used features, but it is more than enough for this assignment. The official build has to be compiled, which is a fairly complicated process; do not compile and install it unless you really have to.\n",
    "\n",
    "Model parameters:\n",
    "\n",
    "- checkpoint_path: directory of the VGG16 pretrained model; set it according to how you organized your datasets.\n",
    "- output_dir: output directory; on TinyMind just use /output.\n",
    "- dataset_train: directory of the train dataset; set it according to your own dataset.\n",
    "- dataset_val: directory of the val dataset; set it according to your own dataset.\n",
    "- batch_size: BATCH_SIZE, 16 here. Building the 8X FCN may run out of memory; lowering batch_size solves this.\n",
    "- max_steps: MAX_STEPS, 1500 steps here. If batch_size is changed, consider adjusting this as well.\n",
    "- learning_rate: fixed at 1e-4; adjusting it is not recommended.\n",
    "\n",
    "During the run, the model writes a checkpoint to /output/train every 100 steps and generates four validation images in /output/eval every 200 steps.\n",
    "\n",
    ">FCN paper: https://arxiv.org/abs/1411.4038\n",
    "### Assignment tasks\n",
    "- Complete the code in convert_fcn_dataset.py, generate the corresponding dataset files, and upload them to TinyMind.\n",
    "- Add the 8X FCN implementation on top of the provided code and train it.\n",
    "\n",
    "\n",
    "> A ready-made [dataset](https://www.tinymind.com/ai100/datasets/quiz-w9) is available on TinyMind for testing and reference only. Please process and upload your own dataset for the assignment; submissions that use this dataset receive no points for the dataset part.\n",
    "\n",
    "### Result evaluation\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### - Dataset preparation (20 points)\n",
    "\n",
    "- The dataset should contain two tfrecord files, train and val, each around 400MB"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "convert_fcn_dataset.py: **implementation of the create_tf_record function** \n",
    "\n",
    "```py\n",
    "def create_tf_record(output_filename, file_pars):\n",
    "    writer = tf.python_io.TFRecordWriter(output_filename)\n",
    "    for (data, label) in file_pars:\n",
    "        if not os.path.exists(data) or not os.path.exists(label):\n",
    "            logging.warning(\"Could not find {0}, ignoring example.\".format((data, label)))\n",
    "            continue\n",
    "        try:\n",
    "            tf_example = dict_to_tf_example(data, label)\n",
    "            if not tf_example:\n",
    "                continue\n",
    "            writer.write(tf_example.SerializeToString())\n",
    "        except ValueError:\n",
    "            logging.warning(\"Invalid example {0}, ignoring example.\".format((data, label)))\n",
    "\n",
    "    writer.close()\n",
    "```\n",
    "\n",
    "In addition, the entries of **feature_dict were originally all None**; each was filled in according to the type of its data:  \n",
    "\n",
    "```\n",
    "    feature_dict = {\n",
    "        'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),\n",
    "        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),\n",
    "        'image/filename': tf.train.Feature(bytes_list=tf.train.BytesList(value=[data.encode(\"utf8\")])),\n",
    "        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_data])),\n",
    "        'image/label': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_label])),\n",
    "        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=['jpeg'.encode('utf8')])),\n",
    "    }\n",
    "    \n",
    "```\n",
    "\n",
    "Running the following command produced the two tfrecord files: **fcn_train.record (413,119 KB)** and **fcn_val.record (409,613 KB)**  \n",
    "\n",
    "```\n",
    "\n",
    "python3 convert_fcn_dataset.py --data_dir=./VOC2012/ --output_dir=./output\n",
    "\n",
    "```\n",
    "\n",
    "![tfrecords](HW12-images/tfrecords.PNG)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### - Model training with no obvious errors in the log output (20 points)\n",
    "\n",
    "While installing the required pydensecrf package, the error \"**error: Microsoft Visual C++ 14.0 is required**\" kept appearing (the environment uses Python 3.7.6). Following various fixes suggested online, and even installing Visual Studio 2015 Community/Professional, still produced the same error; the problem was finally solved by following https://blog.csdn.net/weixin_38899860/article/details/95320949 .\n",
    "\n",
    "A total of 2200 steps were run (first 200, then another 2000), and the loss clearly stabilized:  \n",
    "\n",
    "![2200](HW12-images/2200.PNG)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### - Final segmentation result images (10 points)\n",
    "\n",
    "After training, validation images are generated under **/output/eval**. The **val_xx_prediction.jpg** images are the model's predicted results, and their content should correspond to the matching annotation and img files. Results differ depending on the validation images, but the output should be clearly meaningful.\n",
    "\n",
    "Original image (val_1200_img.jpg):  \n",
    "\n",
    "![val_1200_img](HW12-images/val_1200_img.jpg)\n",
    "\n",
    "Annotation (val_1200_annotation.jpg):  \n",
    "\n",
    "![val_1200_annotation](HW12-images/val_1200_annotation.jpg)\n",
    "\n",
    "Prediction (val_1200_prediction.jpg):  \n",
    "\n",
    "![val_1200_prediction](HW12-images/val_1200_prediction.jpg)\n",
    "\n",
    "Prediction + CRF (val_1200_prediction_crfed.jpg):  \n",
    "\n",
    "![val_1200_prediction_crfed](HW12-images/val_1200_prediction_crfed.jpg)\n",
    "\n",
    "Overlay (val_1200_overlay.jpg):  \n",
    "\n",
    "![val_1200_overlay](HW12-images/val_1200_overlay.jpg)\n",
    "\n",
    "---\n",
    "---\n",
    "\n",
    "Original image (val_2000_img.jpg):  \n",
    "\n",
    "![val_2000_img](HW12-images/val_2000_img.jpg)\n",
    "\n",
    "Annotation (val_2000_annotation.jpg):  \n",
    "\n",
    "![val_2000_annotation](HW12-images/val_2000_annotation.jpg)\n",
    "\n",
    "Prediction (val_2000_prediction.jpg):  \n",
    "\n",
    "![val_2000_prediction](HW12-images/val_2000_prediction.jpg)\n",
    "\n",
    "Prediction + CRF (val_2000_prediction_crfed.jpg):  \n",
    "\n",
    "![val_2000_prediction_crfed](HW12-images/val_2000_prediction_crfed.jpg)\n",
    "\n",
    "Overlay (val_2000_overlay.jpg):  \n",
    "\n",
    "![val_2000_overlay](HW12-images/val_2000_overlay.jpg)\n",
    "\n",
    "---\n",
    "---\n",
    "\n",
    "Original image (val_2200_img.jpg):  \n",
    "\n",
    "![val_2200_img](HW12-images/val_2200_img.jpg)\n",
    "\n",
    "Annotation (val_2200_annotation.jpg):  \n",
    "\n",
    "![val_2200_annotation](HW12-images/val_2200_annotation.jpg)\n",
    "\n",
    "Prediction (val_2200_prediction.jpg):  \n",
    "\n",
    "![val_2200_prediction](HW12-images/val_2200_prediction.jpg)\n",
    "\n",
    "Prediction + CRF (val_2200_prediction_crfed.jpg):  \n",
    "\n",
    "![val_2200_prediction_crfed](HW12-images/val_2200_prediction_crfed.jpg)\n",
    "\n",
    "Overlay (val_2200_overlay.jpg):  \n",
    "\n",
    "![val_2200_overlay](HW12-images/val_2200_overlay.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### - Completing the FCN 8X code (20 points)\n",
    "\n",
    "The 8X implementation can be seen in train.py. The exact form may vary, but there are three clearly visible upsampling stages, two 2X and one 8X, plus the fusion of their results.  \n",
    "\n",
    "```\n",
    "\n",
    "# Calculate the output size of the upsampled tensor\n",
    "# The shape should be batch_size X width X height X num_classes\n",
    "upsampled_logits_shape = tf.stack([\n",
    "                                  downsampled_logits_shape[0],\n",
    "                                  img_shape[1],\n",
    "                                  img_shape[2],\n",
    "                                  downsampled_logits_shape[3]\n",
    "                                  ])\n",
    "\n",
    "\n",
    "pool4_feature = end_points['vgg_16/pool4']\n",
    "pool3_feature = end_points['vgg_16/pool3']\n",
    "\n",
    "\n",
    "# Perform the upsampling\n",
    "# 2x upsample of logits, added element-wise to the pool4 projection to get upsampled_logits1\n",
    "with tf.variable_scope('vgg_16/fc8'):\n",
    "    aux_logits_1 = slim.conv2d(pool4_feature, number_of_classes, [1, 1],\n",
    "                                 activation_fn=None,\n",
    "                                 weights_initializer=tf.zeros_initializer,\n",
    "                                 scope='conv_pool4')\n",
    "\n",
    "upsample_filter_np1_x2 = bilinear_upsample_weights(2, number_of_classes)  # upsample factor 2\n",
    "\n",
    "upsample_filter_tensor1_x2 = tf.Variable(upsample_filter_np1_x2, name='vgg_16/fc8/t_conv1_x2')\n",
    "\n",
    "upsampled_logits1 = tf.nn.conv2d_transpose(logits, upsample_filter_tensor1_x2,\n",
    "                                          output_shape=tf.shape(aux_logits_1),\n",
    "                                          strides=[1, 2, 2, 1],\n",
    "                                          padding='SAME')\n",
    "\n",
    "upsampled_logits1 = upsampled_logits1 + aux_logits_1\n",
    "\n",
    "# 2x upsample of upsampled_logits1, added element-wise to the pool3 projection to get upsampled_logits2\n",
    "with tf.variable_scope('vgg_16/fc8'):\n",
    "    aux_logits_2 = slim.conv2d(pool3_feature, number_of_classes, [1, 1],\n",
    "                                 activation_fn=None,\n",
    "                                 weights_initializer=tf.zeros_initializer,\n",
    "                                 scope='conv_pool3')\n",
    "\n",
    "upsample_filter_np2_x2 = bilinear_upsample_weights(2, number_of_classes)\n",
    "\n",
    "upsample_filter_tensor2_x2 = tf.Variable(upsample_filter_np2_x2, name='vgg_16/fc8/t_conv2_x2')\n",
    "\n",
    "upsampled_logits2 = tf.nn.conv2d_transpose(upsampled_logits1, upsample_filter_tensor2_x2,\n",
    "                                          output_shape=tf.shape(aux_logits_2),\n",
    "                                          strides=[1, 2, 2, 1],\n",
    "                                          padding='SAME')\n",
    "\n",
    "upsampled_logits2 = upsampled_logits2 + aux_logits_2\n",
    "\n",
    "# 8x upsample of upsampled_logits2 to get upsampled_logits3\n",
    "upsample_filter_np_x8 = bilinear_upsample_weights(upsample_factor, number_of_classes)\n",
    "\n",
    "upsample_filter_tensor_x8 = tf.Variable(upsample_filter_np_x8, name='vgg_16/fc8/t_conv_x8')\n",
    "\n",
    "upsampled_logits3 = tf.nn.conv2d_transpose(upsampled_logits2, upsample_filter_tensor_x8,\n",
    "                                          output_shape=upsampled_logits_shape,\n",
    "                                          strides=[1, upsample_factor, upsample_factor, 1],\n",
    "                                          padding='SAME')\n",
    "\n",
    "\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### - Documentation describing the whole FCN training process (20 points)\n",
    "\n",
    "### Core concepts\n",
    "\n",
    "Judging from the results, **the network segments images containing a single object very well**, and **CRF post-processing adds many details to the contours, tracing finer edges**. For images with more objects of more classes, segmentation within the same number of training steps is somewhat less satisfactory, and CRF sometimes even drops regions that were originally detected, though the segmentation is still clearly meaningful. Overall, the implemented FCN-8X achieves reasonable segmentation, and **its quality is directly related to the number of training steps and the batch_size**. Increasing these settings, as long as memory allows, improves segmentation of complex images, but simple images may then overfit. One idea is to **split the training and validation images into a simple group and a complex group, each trained with its own steps and batch_size**; a more convenient alternative is to **stop training at, say, 2000 steps, validate the simple images, continue training to 4000, 6000 or 8000 steps, and then validate the complex images**, which should give better results.  \n",
    "\n",
    "- First 2x upsampling:\n",
    "\n",
    "```\n",
    "\n",
    "pool4_feature = end_points['vgg_16/pool4']\n",
    "pool3_feature = end_points['vgg_16/pool3']\n",
    "\n",
    "\n",
    "# Perform the upsampling\n",
    "# 2x upsample of logits, added element-wise to the pool4 projection to get upsampled_logits1\n",
    "with tf.variable_scope('vgg_16/fc8'):\n",
    "    aux_logits_1 = slim.conv2d(pool4_feature, number_of_classes, [1, 1],\n",
    "                                 activation_fn=None,\n",
    "                                 weights_initializer=tf.zeros_initializer,\n",
    "                                 scope='conv_pool4')\n",
    "\n",
    "upsample_filter_np1_x2 = bilinear_upsample_weights(2, number_of_classes)  # upsample factor 2\n",
    "\n",
    "upsample_filter_tensor1_x2 = tf.Variable(upsample_filter_np1_x2, name='vgg_16/fc8/t_conv1_x2')\n",
    "\n",
    "upsampled_logits1 = tf.nn.conv2d_transpose(logits, upsample_filter_tensor1_x2,\n",
    "                                          output_shape=tf.shape(aux_logits_1),\n",
    "                                          strides=[1, 2, 2, 1],\n",
    "                                          padding='SAME')\n",
    "\n",
    "upsampled_logits1 = upsampled_logits1 + aux_logits_1\n",
    "\n",
    "```\n",
    "\n",
    "The output of pool4 in the VGG network passes through a 1x1 classifier, which keeps the feature-map shape but changes the number of channels to the number of classes.  \n",
    "The final logits of the VGG network are upsampled 2x, doubling the feature maps' height and width so they can be added to the previous output.  \n",
    "These two outputs are added element-wise, giving upsampled_logits1.  \n",
    "\n",
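    "The bilinear_upsample_weights helper used here is not shown in the snippet. A common way to build such filters is to initialize a transposed-convolution kernel to bilinear interpolation; the following is a numpy sketch under that assumption, not necessarily identical to the assignment's implementation:\n",
    "\n",
    "```py\n",
    "import numpy as np\n",
    "\n",
    "def bilinear_upsample_weights(factor, number_of_classes):\n",
    "    # Filter size for a stride-`factor` transposed conv with SAME padding.\n",
    "    filter_size = 2 * factor - factor % 2\n",
    "    center = factor - 1 if filter_size % 2 == 1 else factor - 0.5\n",
    "    og = np.ogrid[:filter_size, :filter_size]\n",
    "    # 2-D bilinear kernel: product of two triangular windows.\n",
    "    kernel = ((1 - abs(og[0] - center) / factor)\n",
    "              * (1 - abs(og[1] - center) / factor))\n",
    "    weights = np.zeros((filter_size, filter_size,\n",
    "                        number_of_classes, number_of_classes), dtype=np.float32)\n",
    "    # Each class channel is upsampled independently (no cross-class mixing).\n",
    "    for i in range(number_of_classes):\n",
    "        weights[:, :, i, i] = kernel\n",
    "    return weights\n",
    "```\n",
    "\n",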
    "- Second 2x upsampling: \n",
    "\n",
    "```\n",
    "\n",
    "# 2x upsample of upsampled_logits1, added element-wise to the pool3 projection to get upsampled_logits2\n",
    "with tf.variable_scope('vgg_16/fc8'):\n",
    "    aux_logits_2 = slim.conv2d(pool3_feature, number_of_classes, [1, 1],\n",
    "                                 activation_fn=None,\n",
    "                                 weights_initializer=tf.zeros_initializer,\n",
    "                                 scope='conv_pool3')\n",
    "\n",
    "upsample_filter_np2_x2 = bilinear_upsample_weights(2, number_of_classes)\n",
    "\n",
    "upsample_filter_tensor2_x2 = tf.Variable(upsample_filter_np2_x2, name='vgg_16/fc8/t_conv2_x2')\n",
    "\n",
    "upsampled_logits2 = tf.nn.conv2d_transpose(upsampled_logits1, upsample_filter_tensor2_x2,\n",
    "                                          output_shape=tf.shape(aux_logits_2),\n",
    "                                          strides=[1, 2, 2, 1],\n",
    "                                          padding='SAME')\n",
    "\n",
    "upsampled_logits2 = upsampled_logits2 + aux_logits_2\n",
    "\n",
    "```\n",
    "\n",
    "The output of pool3 in the VGG network passes through a 1x1 classifier, which keeps the feature-map shape but changes the number of channels to the number of classes.  \n",
    "upsampled_logits1, the output of the first 2x upsampling stage, is upsampled 2x again, doubling the feature maps' height and width so they can be added to the previous output; at this point the feature maps are 4x larger than the original VGG output.\n",
    "These two outputs are added element-wise, giving upsampled_logits2.  \n",
    "\n",
    "- 8x upsampling:\n",
    "\n",
    "```\n",
    "# 8x upsample of upsampled_logits2 to get upsampled_logits3\n",
    "upsample_filter_np_x8 = bilinear_upsample_weights(upsample_factor, number_of_classes)\n",
    "\n",
    "upsample_filter_tensor_x8 = tf.Variable(upsample_filter_np_x8, name='vgg_16/fc8/t_conv_x8')\n",
    "\n",
    "upsampled_logits3 = tf.nn.conv2d_transpose(upsampled_logits2, upsample_filter_tensor_x8,\n",
    "                                          output_shape=upsampled_logits_shape,\n",
    "                                          strides=[1, upsample_factor, upsample_factor, 1],\n",
    "                                          padding='SAME')\n",
    "\n",
    "\n",
    "```\n",
    "\n",
    "upsampled_logits2, the output of the second 2x upsampling stage, is upsampled 8x, enlarging the feature maps' height and width by another factor of 8 and giving upsampled_logits3. The feature maps are now 32x larger than the original VGG output, i.e. the same height and width as the input image. This completes the network. Finally, upsampled_logits3 and the image labels are fed into a cross-entropy computation to obtain the loss, from which gradients and weight updates propagate backwards to train the whole network.  \n",
    "\n",
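    "The per-pixel cross-entropy described above can be sketched in plain numpy. Here logits of shape H x W x C are compared against an H x W integer label map; the names and shapes are illustrative, and the actual code uses the corresponding TensorFlow ops:\n",
    "\n",
    "```py\n",
    "import numpy as np\n",
    "\n",
    "def pixelwise_cross_entropy(logits, labels):\n",
    "    # logits: H x W x C class scores; labels: H x W integer class indices.\n",
    "    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability\n",
    "    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))\n",
    "    h, w = labels.shape\n",
    "    # Negative log-likelihood of the true class at every pixel, averaged.\n",
    "    nll = -log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]\n",
    "    return nll.mean()\n",
    "```\n",
    "\n",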
    "**The FCN approach changes the network output from a 1\\*1 shape to n\\*n** (n = input height and width / 32). In the original output, each channel had only **one pixel, whose receptive field on the input was the entire image**; after the change, each channel has n\\*n pixels, **each corresponding to a smaller region of the input**, so the whole network produces a much 'denser' output over the input image. In the earlier classification and detection tasks, the objects to recognize or localize were based on information from a large region of the image, so the 'precision' offered by a final 1\\*1 feature map was sufficient; in segmentation, every single pixel must be classified, and the work happens at this much smaller and finer level. By enlarging the output shape, FCN shrinks the receptive field of each pixel in the feature map, letting it 'sense' finer details of the input. Furthermore, in segmentation the label changes from a class name (a string, or rather a class index) to an image (a tensor); a 1\\*1 output feature map clearly cannot be compared against such a label in a loss computation, and only by making the output the same size as the input does the comparison become possible."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model training\n",
    "\n",
    "• Initialize from a trained AlexNet, VGG16, or GoogleNet model and fine-tune on top of it; only **upsampling is added at the end, and the parameters are still learned through the CNN's normal backpropagation**.  \n",
    "• Train on whole images, without local sampling. Experiments show that training directly on whole images is already very efficient.   \n",
    "FCN example: the input can be a color image of any size; the output has the same size as the input, **with depth 20 object classes + background = 21**; **the model is based on VGG16 (the original paper used the AlexNet classification network)**.  \n",
    "• Blue: convolution layers.  \n",
    "• Green: max-pooling layers.  \n",
    "• Yellow: summation, element-wise addition that fuses three predictions from different depths: the shallower result is finer, the deeper result is more robust.  \n",
    "• Gray: cropping; before fusion, a crop layer makes the two sizes match, and the final output is cropped to the input size.  \n",
    "• For inputs of different sizes, the height and width of each layer's data change accordingly while the depth (channels) stays the same.  \n",
    "\n",
    "![train_whole_picture](HW12-images/train_whole_picture.PNG)\n",
    "\n",
    "• The fully convolutional part performs feature extraction; the outputs of the convolution layers (the 3 blue layers) serve as the features for predicting the 21 classes.  \n",
    "• The dashed box contains the deconvolution operations; the deconvolution layers (the 3 orange layers) enlarge the input. As with convolution layers, the concrete upsampling parameters are learned during training.  \n",
    "\n",
    "1. Initialize from the classic VGG network (the original paper used the AlexNet classification network); the last two fully connected stages (red) are discarded.  \n",
    "\n",
    "![train_step_1](HW12-images/train_step_1.PNG)\n",
    "\n",
    "2. The deconvolution (orange) stride is 32; this network is called FCN-32X.   \n",
    "The \"small feature map\" (16\\*16\\*4096) predicts a \"small segmentation map\" (16\\*16\\*21), which is then upsampled directly to full size.\n",
    "\n",
    "![train_step_2](HW12-images/train_step_2.PNG)\n",
    "\n",
    "3. The second deconvolution stride is 16; this network is called FCN-16X.  \n",
    "Upsampling is done in two stages (orange \\*2); before the second stage, the prediction (blue) from the 4th pooling layer (green) is fused in. This skip structure improves accuracy.\n",
    "\n",
    "![train_step_3](HW12-images/train_step_3.PNG)\n",
    "\n",
    "4. The third deconvolution stride is 8, giving FCN-8X.   \n",
    "Upsampling is done in three stages (orange \\*3), further fusing the prediction from the 3rd pooling layer.\n",
    "\n",
    "![train_step_4](HW12-images/train_step_4.PNG)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### - Why the 8X architecture performs better than the 16X one (10 points)\n",
    "\n",
    "**Shallower convolution layers (pool3 - 8X) have smaller receptive fields** and learn features of local regions; **deeper layers (pool4 - 16X) have larger receptive fields** and learn more abstract features. These abstract features are less sensitive to object size, position, and orientation, which helps classification performance but loses the individual classification information of each pixel.  \n",
    "The paper's authors propose **adding a skip structure that combines the last layer's prediction (richer global information) with shallower predictions (more local detail)**, so the network can make local predictions while respecting the global prediction. \n",
    "\n",
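    "To make the stride difference concrete, the arithmetic below (using an arbitrary 512-pixel input side as the example) shows how much coarser each prediction grid is: every cell of the stride-16 grid covers four times the input area of a stride-8 cell, so fusing pool3 recovers spatial detail that FCN-16X has already discarded.\n",
    "\n",
    "```py\n",
    "# Prediction-grid resolution for an example input that is 512 pixels on a side.\n",
    "input_side = 512\n",
    "for name, stride in [('pool3 (FCN-8X skip)', 8),\n",
    "                     ('pool4 (FCN-16X skip)', 16),\n",
    "                     ('pool5/fc (FCN-32X)', 32)]:\n",
    "    cells = input_side // stride\n",
    "    print('{0}: {1}x{1} grid, each cell covers {2}x{2} input pixels'\n",
    "          .format(name, cells, stride))\n",
    "```\n",
    "\n",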
    "Overall, the logic is: \n",
    "- To predict each pixel's segmentation precisely, the signal must go through a large-to-small and then a small-to-large process   \n",
    "- During upsampling, enlarging in stages works better than doing it in one step   \n",
    "- At each upsampling stage, features from the corresponding downsampling layer are used as assistance  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "---\n",
    "\n",
    "#### Reference\n",
    "\n",
    "Command line used for local training (max_steps is **the number of steps for one training run**, not the cumulative total over multiple runs):\n",
    "\n",
    "```Python\n",
    "\n",
    "python train.py --checkpoint_path=./vgg_16.ckpt --output_dir=./output --dataset_train=./output/fcn_train.record --dataset_val=./output/fcn_val.record --batch_size 16 --max_steps 2000\n",
    "\n",
    "\n",
    "```\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
