{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "# Residual Networks (ResNet)\n",
    "\n",
     "Let us first consider a question: if we add new layers to a neural network model, will the fully trained model necessarily reduce the training error more effectively? In theory, the solution space of the original model is a subspace of that of the new model. That is, if the newly added layers can be trained into an identity mapping $f(x) = x$, the new model will be at least as effective as the original one. Since the new model may find a better solution to fit the training dataset, adding layers seems to make it easier to reduce the training error. In practice, however, the training error often rises rather than falls after too many layers are added. Even with the numerical stability brought by batch normalization, which makes deep models easier to train, the problem persists. To address it, Kaiming He et al. proposed the residual network (ResNet) [1]. It won the ImageNet image recognition challenge in 2015 and has profoundly influenced the design of subsequent deep neural networks.\n",
    "\n",
    "\n",
     "## Residual Blocks\n",
    "\n",
     "Let us focus on a local part of a neural network. As shown in Figure 5.9, denote the input by $\\boldsymbol{x}$. Suppose the ideal mapping we wish to learn is $f(\\boldsymbol{x})$, which serves as the input to the activation function at the top of Figure 5.9. The part inside the dashed box on the left must fit the mapping $f(\\boldsymbol{x})$ directly, while the part inside the dashed box on the right must fit the residual mapping $f(\\boldsymbol{x})-\\boldsymbol{x}$ with respect to the identity mapping. The residual mapping is often easier to optimize in practice. Take the identity mapping mentioned at the beginning of this section as the ideal mapping $f(\\boldsymbol{x})$: we only need to train the weight and bias parameters of the upper weighted operation (e.g., an affine transformation) inside the dashed box on the right of Figure 5.9 to 0, and $f(\\boldsymbol{x})$ becomes the identity mapping. In practice, when the ideal mapping $f(\\boldsymbol{x})$ is very close to the identity mapping, the residual mapping also easily captures its subtle fluctuations. The right part of Figure 5.9 is also the basic building block of ResNet, the residual block. In a residual block, the input can propagate forward faster through the cross-layer data path.\n",
    "\n",
     "![Denote the input by $\\boldsymbol{x}$. Suppose the ideal mapping fed into the topmost activation function in the figure is $f(\\boldsymbol{x})$. The part inside the dashed box on the left must fit the mapping $f(\\boldsymbol{x})$ directly, while the part inside the dashed box on the right must fit the residual mapping $f(\\boldsymbol{x})-\\boldsymbol{x}$ with respect to the identity mapping](../img/residual-block.svg)\n",
    "\n",
     "ResNet follows VGG's design of using only $3\\times 3$ convolutional layers. A residual block first has two $3\\times 3$ convolutional layers with the same number of output channels, each followed by a batch normalization layer and a ReLU activation function. The input then skips these two convolution operations and is added directly before the final ReLU activation. This design requires the output of the two convolutional layers to have the same shape as the input so that they can be added. To change the number of channels, an extra $1\\times 1$ convolutional layer must be introduced to transform the input into the required shape before the addition.\n",
    "\n",
     "The residual block is implemented below. It lets us set the number of output channels, whether to use an extra $1\\times 1$ convolutional layer to change the number of channels, and the stride of the convolution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
     "# # Install the PaddlePaddle 2.0 beta CPU environment; comment out after the first run\r\n",
    "# !pip install paddlepaddle==2.0.0b0 -i https://mirror.baidu.com/pypi/simple\r\n",
    "\r\n",
    "import paddle\r\n",
    "import paddle.nn as nn\r\n",
    "import numpy as np\r\n",
    "\r\n",
    "paddle.disable_static()\r\n",
    "\r\n",
    "class Residual(nn.Layer):\r\n",
    "    def __init__(self, num_channels, num_filters, use_1x1conv=False, stride=1):\r\n",
    "        super(Residual, self).__init__()\r\n",
    "        self.use_1x1conv = use_1x1conv\r\n",
    "        model = [\r\n",
    "            nn.Conv2d(num_channels, num_filters, 3, stride=stride, padding=1),\r\n",
    "            nn.BatchNorm2d(num_filters),\r\n",
    "            nn.ReLU(),\r\n",
    "            nn.Conv2d(num_filters, num_filters, 3, stride=1, padding=1),\r\n",
    "            nn.BatchNorm2d(num_filters),\r\n",
    "        ]\r\n",
    "        self.model = nn.Sequential(*model)\r\n",
    "        if use_1x1conv:\r\n",
    "            model_1x1 = [nn.Conv2d(num_channels, num_filters, 1, stride=stride)]\r\n",
    "            self.model_1x1 = nn.Sequential(*model_1x1)\r\n",
    "    def forward(self, X):\r\n",
    "        Y = self.model(X)\r\n",
    "        if self.use_1x1conv:\r\n",
    "            X = self.model_1x1(X)\r\n",
    "        return paddle.nn.functional.relu(X + Y)\r\n"
   ]
  },
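  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "In symbols, the computation performed by the residual block above can be sketched as follows (writing $g_1$ and $g_2$ for the two convolution-plus-batch-normalization layers, and assuming no $1\\times 1$ convolution on the shortcut):\n",
    "\n",
    "$$f(\\boldsymbol{x}) = \\mathrm{ReLU}\\big(g_2(\\mathrm{ReLU}(g_1(\\boldsymbol{x}))) + \\boldsymbol{x}\\big).$$\n",
    "\n",
    "The stacked layers only need to fit the residual $f(\\boldsymbol{x}) - \\boldsymbol{x}$: if the parameters of $g_2$ are trained to 0, the block simply passes $\\boldsymbol{x}$ through the shortcut.\n"
   ]
  },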
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "Next, let us look at the case where the input and output shapes are the same."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[4, 3, 6, 6]\n"
     ]
    }
   ],
   "source": [
     "blk = Residual(3, 3)\n",
     "X = paddle.to_tensor(np.random.uniform(-1., 1., [4, 3, 6, 6]).astype('float32'))\n",
     "Y = blk(X)\n",
     "print(Y.shape)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "We can also increase the number of output channels while halving the output height and width."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[4, 6, 3, 3]\n"
     ]
    }
   ],
   "source": [
     "blk = Residual(3, 6, use_1x1conv=True, stride=2)\n",
     "Y = blk(Y)\n",
     "print(Y.shape)\n"
   ]
  },
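  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The halved height and width follow from the usual convolution output-size formula. For input size $n$, kernel size $k$, padding $p$, and stride $s$, the output size is\n",
    "\n",
    "$$\\left\\lfloor \\frac{n + 2p - k}{s} \\right\\rfloor + 1.$$\n",
    "\n",
    "With $n=6$, $k=3$, $p=1$, and $s=2$ in the first convolution above, this gives $\\lfloor 5/2 \\rfloor + 1 = 3$, matching the output shape `[4, 6, 3, 3]`.\n"
   ]
  },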
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "GoogLeNet is followed by four modules made up of Inception blocks. ResNet instead uses four modules made up of residual blocks, each of which uses several residual blocks with the same number of output channels. The number of channels in the first module is the same as the number of input channels. Since a max pooling layer with a stride of 2 has already been used, there is no need to reduce the height and width. Each subsequent module doubles the number of channels of the previous module in its first residual block and halves the height and width.\n",
     "\n",
     "Now we implement this module. Note that special processing has been applied to the first module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class ResnetBlock(nn.Layer):\r\n",
    "    def __init__(self, num_channels, num_filters, num_residuals, first_block=False):\r\n",
    "        super(ResnetBlock, self).__init__()\r\n",
    "        model = []\r\n",
    "        for i in range(num_residuals):\r\n",
    "            if i == 0:\r\n",
    "                if not first_block:\r\n",
    "                    model += [Residual(num_channels, num_filters, use_1x1conv=True, stride=2)]\r\n",
    "                else:\r\n",
    "                    model += [Residual(num_channels, num_filters)]\r\n",
    "            else:\r\n",
    "                model += [Residual(num_filters, num_filters)]\r\n",
    "        self.model = nn.Sequential(*model)\r\n",
    "    def forward(self, X):\r\n",
    "        return self.model(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "## The ResNet Model\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1, 10]\n"
     ]
    }
   ],
   "source": [
    "class ResNet(nn.Layer):\r\n",
    "    def __init__(self, num_classes=10):\r\n",
    "        super(ResNet, self).__init__()\r\n",
     "        # The first two layers of ResNet are the same as those in GoogLeNet described earlier:\r\n",
     "        # a 7×7 convolutional layer with 64 output channels and a stride of 2, followed by a 3×3 max pooling layer with a stride of 2.\r\n",
     "        # The difference is the batch normalization layer added after each convolutional layer in ResNet.\r\n",
    "        model = [\r\n",
    "            nn.Conv2d(1, 64, 7, stride=2, padding=3),\r\n",
    "            nn.BatchNorm2d(64),\r\n",
    "            nn.ReLU(),\r\n",
    "            nn.Pool2D(pool_size=3, pool_stride=2, pool_padding=1, pool_type='max')\r\n",
    "        ]\r\n",
    "\r\n",
     "        # Next we add all the residual blocks for ResNet. Here each module uses two residual blocks.\r\n",
    "        model += [\r\n",
    "            ResnetBlock(64, 64, 2, first_block=True),\r\n",
    "            ResnetBlock(64, 128, 2)\r\n",
    "        ]\r\n",
    "        self.num_channels = 128\r\n",
    "        # model += [\r\n",
    "        #     ResnetBlock(64, 64, 2, first_block=True),\r\n",
    "        #     ResnetBlock(64, 128, 2),\r\n",
    "        #     ResnetBlock(128, 256, 2),\r\n",
    "        #     ResnetBlock(256, 512, 2)\r\n",
    "        # ]\r\n",
    "        # self.num_channels = 512\r\n",
    "\r\n",
     "        # Finally, as in GoogLeNet, we add a global average pooling layer followed by a fully connected output layer.\r\n",
    "        model += [nn.Pool2D(pool_type='avg', global_pooling=True)]\r\n",
    "        linear_softmax = [\r\n",
    "            nn.Linear(self.num_channels, num_classes),\r\n",
    "            nn.Softmax()\r\n",
    "        ]\r\n",
    "        self.model = nn.Sequential(*model)\r\n",
    "        self.linear_softmax = nn.Sequential(*linear_softmax)\r\n",
    "    def forward(self, X):\r\n",
    "        Y = self.model(X)\r\n",
    "        Y = self.linear_softmax(paddle.reshape(Y, [-1, self.num_channels]))\r\n",
    "        return Y\r\n",
    "\r\n",
    "train_dataset_unit_test = paddle.vision.datasets.MNIST(mode='train')\r\n",
    "train_loader_unit_test = paddle.io.DataLoader(train_dataset_unit_test, places=paddle.CPUPlace(), batch_size=1, shuffle=False)\r\n",
     "data = next(train_loader_unit_test())\r\n",
     "rn = ResNet()\r\n",
     "logit = rn(data[0])\r\n",
     "print(logit.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "Since the MNIST dataset used here has 28×28 images, only two of the residual modules are used, each containing two residual blocks with two convolutional layers each (not counting the $1\\times 1$ convolutional layers). Different ResNet models, such as ResNet-18 or the much deeper 152-layer ResNet-152, can be obtained by configuring different numbers of channels and of residual blocks per module. Although the main architecture of ResNet is similar to that of GoogLeNet, ResNet's structure is simpler and easier to modify. All these factors led to the rapid and widespread adoption of ResNet.\n",
     "\n",
     "Before training ResNet, let us observe how the input shape changes between the different modules of ResNet."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--------------------------------------------------------------------------------\n",
      "   Layer (type)          Input Shape         Output Shape         Param #\n",
      "================================================================================\n",
      "       Conv2d-1       [1, 1, 28, 28]      [1, 64, 14, 14]           3,200\n",
      "  BatchNorm2d-1      [1, 64, 14, 14]      [1, 64, 14, 14]             256\n",
      "         ReLU-1      [1, 64, 14, 14]      [1, 64, 14, 14]               0\n",
      "       Pool2D-1      [1, 64, 14, 14]        [1, 64, 7, 7]               0\n",
      "       Conv2d-2        [1, 64, 7, 7]        [1, 64, 7, 7]          36,928\n",
      "  BatchNorm2d-2        [1, 64, 7, 7]        [1, 64, 7, 7]             256\n",
      "         ReLU-2        [1, 64, 7, 7]        [1, 64, 7, 7]               0\n",
      "       Conv2d-3        [1, 64, 7, 7]        [1, 64, 7, 7]          36,928\n",
      "  BatchNorm2d-3        [1, 64, 7, 7]        [1, 64, 7, 7]             256\n",
      "     Residual-1        [1, 64, 7, 7]        [1, 64, 7, 7]               0\n",
      "       Conv2d-4        [1, 64, 7, 7]        [1, 64, 7, 7]          36,928\n",
      "  BatchNorm2d-4        [1, 64, 7, 7]        [1, 64, 7, 7]             256\n",
      "         ReLU-3        [1, 64, 7, 7]        [1, 64, 7, 7]               0\n",
      "       Conv2d-5        [1, 64, 7, 7]        [1, 64, 7, 7]          36,928\n",
      "  BatchNorm2d-5        [1, 64, 7, 7]        [1, 64, 7, 7]             256\n",
      "     Residual-2        [1, 64, 7, 7]        [1, 64, 7, 7]               0\n",
      "  ResnetBlock-1        [1, 64, 7, 7]        [1, 64, 7, 7]               0\n",
      "       Conv2d-6        [1, 64, 7, 7]       [1, 128, 4, 4]          73,856\n",
      "  BatchNorm2d-6       [1, 128, 4, 4]       [1, 128, 4, 4]             512\n",
      "         ReLU-4       [1, 128, 4, 4]       [1, 128, 4, 4]               0\n",
      "       Conv2d-7       [1, 128, 4, 4]       [1, 128, 4, 4]         147,584\n",
      "  BatchNorm2d-7       [1, 128, 4, 4]       [1, 128, 4, 4]             512\n",
      "       Conv2d-8        [1, 64, 7, 7]       [1, 128, 4, 4]           8,320\n",
      "     Residual-3        [1, 64, 7, 7]       [1, 128, 4, 4]               0\n",
      "       Conv2d-9       [1, 128, 4, 4]       [1, 128, 4, 4]         147,584\n",
      "  BatchNorm2d-8       [1, 128, 4, 4]       [1, 128, 4, 4]             512\n",
      "         ReLU-5       [1, 128, 4, 4]       [1, 128, 4, 4]               0\n",
      "      Conv2d-10       [1, 128, 4, 4]       [1, 128, 4, 4]         147,584\n",
      "  BatchNorm2d-9       [1, 128, 4, 4]       [1, 128, 4, 4]             512\n",
      "     Residual-4       [1, 128, 4, 4]       [1, 128, 4, 4]               0\n",
      "  ResnetBlock-2        [1, 64, 7, 7]       [1, 128, 4, 4]               0\n",
      "       Pool2D-2       [1, 128, 4, 4]       [1, 128, 1, 1]               0\n",
      "       Linear-1             [1, 128]              [1, 10]           1,290\n",
      "      Softmax-1              [1, 10]              [1, 10]               0\n",
      "================================================================================\n",
      "Total params: 680,458\n",
      "Trainable params: 677,130\n",
      "Non-trainable params: 3,328\n",
      "--------------------------------------------------------------------------------\n",
      "Input size (MB): 0.00\n",
      "Forward/backward pass size (MB): 0.84\n",
      "Params size (MB): 2.60\n",
      "Estimated Total Size (MB): 3.44\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "{'total_params': 680458, 'trainable_params': 677130}\n"
     ]
    }
   ],
   "source": [
    "rnpt = ResNet()\n",
    "param_info = paddle.summary(rnpt, (1, 28, 28), batch_size=1)\n",
    "print(param_info)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "## Training the Model\n",
     "\n",
     "Now we train ResNet on the MNIST dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/2\n",
      "step 200/938 - loss: 1.5768 - acc_top1: 0.8920 - acc_top2: 0.9313 - 200ms/step\n",
      "step 400/938 - loss: 1.4658 - acc_top1: 0.9282 - acc_top2: 0.9597 - 201ms/step\n",
      "step 600/938 - loss: 1.4648 - acc_top1: 0.9425 - acc_top2: 0.9705 - 201ms/step\n",
      "step 800/938 - loss: 1.4839 - acc_top1: 0.9509 - acc_top2: 0.9761 - 201ms/step\n",
      "step 938/938 - loss: 1.4726 - acc_top1: 0.9550 - acc_top2: 0.9787 - 201ms/step\n",
      "Epoch 2/2\n",
      "step 200/938 - loss: 1.4807 - acc_top1: 0.9798 - acc_top2: 0.9943 - 201ms/step\n",
      "step 400/938 - loss: 1.4621 - acc_top1: 0.9794 - acc_top2: 0.9947 - 201ms/step\n",
      "step 600/938 - loss: 1.4646 - acc_top1: 0.9813 - acc_top2: 0.9954 - 201ms/step\n",
      "step 800/938 - loss: 1.4802 - acc_top1: 0.9821 - acc_top2: 0.9956 - 201ms/step\n",
      "step 938/938 - loss: 1.4623 - acc_top1: 0.9820 - acc_top2: 0.9956 - 200ms/step\n",
      "Eval begin...\n",
      "step  20/157 - loss: 1.4989 - acc_top1: 0.9805 - acc_top2: 0.9977 - 67ms/step\n",
      "step  40/157 - loss: 1.4618 - acc_top1: 0.9809 - acc_top2: 0.9969 - 67ms/step\n",
      "step  60/157 - loss: 1.4929 - acc_top1: 0.9807 - acc_top2: 0.9977 - 67ms/step\n",
      "step  80/157 - loss: 1.4613 - acc_top1: 0.9799 - acc_top2: 0.9980 - 67ms/step\n",
      "step 100/157 - loss: 1.4613 - acc_top1: 0.9819 - acc_top2: 0.9984 - 67ms/step\n",
      "step 120/157 - loss: 1.4693 - acc_top1: 0.9836 - acc_top2: 0.9983 - 67ms/step\n",
      "step 140/157 - loss: 1.4612 - acc_top1: 0.9847 - acc_top2: 0.9983 - 67ms/step\n",
      "step 157/157 - loss: 1.4612 - acc_top1: 0.9852 - acc_top2: 0.9982 - 67ms/step\n",
      "Eval samples: 10000\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'loss': [1.461175], 'acc_top1': 0.9852, 'acc_top2': 0.9982}"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "resnet = ResNet()\n",
    "model = paddle.Model(resnet)\n",
     "# Set up the optimizer, loss, and metric needed for training\n",
    "model.prepare(\n",
    "    paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters()),\n",
    "    paddle.nn.CrossEntropyLoss(),\n",
     "    paddle.metric.Accuracy(topk=(1, 2))\n",
    ")\n",
     "# Start training\n",
     "model.fit(train_dataset, epochs=2, batch_size=64, log_freq=200)\n",
     "# Start evaluation\n",
     "model.evaluate(test_dataset, log_freq=20, batch_size=64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "## Summary\n",
     "\n",
     "* Residual blocks enable the training of effective deep neural networks through cross-layer data paths.\n",
     "* ResNet has profoundly influenced the design of subsequent deep neural networks.\n",
     "\n",
     "\n",
     "## Exercises\n",
     "\n",
     "* Refer to Table 1 of the ResNet paper to implement different versions of ResNet [1].\n",
     "* For deeper networks, the ResNet paper introduces a \"bottleneck\" architecture to reduce model complexity. Try to implement it [1].\n",
     "* In a follow-up version of ResNet, the authors changed the \"convolution, batch normalization, and activation\" structure in the residual block to \"batch normalization, activation, and convolution\". Implement this improvement ([2], Figure 1).\n",
     "\n",
     "\n",
     "\n",
     "## References\n",
     "\n",
     "[1] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).\n",
     "\n",
     "[2] He, K., Zhang, X., Ren, S., & Sun, J. (2016, October). Identity mappings in deep residual networks. In European Conference on Computer Vision (pp. 630-645). Springer, Cham.\n",
     "\n",
     "## Scan the QR code to visit the [discussion forum](https://discuss.gluon.ai/t/topic/1663)\n",
     "\n",
     "![](../img/qr_resnet.svg)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 1.8.4 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
