{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# Densely Connected Networks (DenseNet)\n",
    "\n",
    "The cross-layer connection design in ResNet inspired several follow-up works. In this section we introduce one of them: the densely connected network (DenseNet) [1]. Its main difference from ResNet is illustrated in Figure 5.10.\n",
    "\n",
    "![The main difference between ResNet (left) and DenseNet (right) in cross-layer connections: addition versus concatenation](../img/densenet.svg)\n",
    "\n",
    "In Figure 5.10, some adjacent operations are abstracted into module $A$ and module $B$. The main difference from ResNet is that in DenseNet the output of module $B$ is not added to the output of module $A$, but concatenated with it along the channel dimension. As a result, the output of module $A$ is passed directly to the layers after module $B$: in this design, module $A$ is connected to every subsequent layer, which is why the architecture is called \"densely connected\".\n",
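    "As a quick check of the difference (a minimal NumPy sketch; the arrays stand in for feature maps in NCHW layout), concatenation along the channel axis accumulates channels, while addition requires identical shapes:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "A = np.zeros((4, 3, 8, 8))   # output of module A: 3 channels\n",
    "B = np.zeros((4, 10, 8, 8))  # output of module B: 10 channels\n",
    "print(np.concatenate([A, B], axis=1).shape)  # (4, 13, 8, 8): channels accumulate\n",
    "```\n",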
    "\n",
    "The main building blocks of DenseNet are dense blocks and transition layers. The former defines how inputs and outputs are concatenated, while the latter keeps the number of channels from growing too large.\n",
    "\n",
    "\n",
    "## Dense Blocks\n",
    "\n",
    "DenseNet uses the modified \"batch normalization, activation, and convolution\" structure of ResNet (see the exercise in the previous section). We first implement this structure in the `BNConv` layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# # Install the PaddlePaddle 2.0 beta CPU build; comment this out after the first run\n",
    "# !pip install paddlepaddle==2.0.0b0 -i https://mirror.baidu.com/pypi/simple\n",
    "\n",
    "import paddle\n",
    "import paddle.nn as nn\n",
    "import numpy as np\n",
    "\n",
    "paddle.disable_static()\n",
    "\n",
    "class BNConv(nn.Layer):\n",
    "    def __init__(self, num_channels, num_filters):\n",
    "        super(BNConv, self).__init__()\n",
    "        model = [\n",
    "            nn.BatchNorm2d(num_channels),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2d(num_channels, num_filters, 3, stride=1, padding=1)\n",
    "        ]\n",
    "        self.model = nn.Sequential(*model)\n",
    "    def forward(self, X):\n",
    "        return self.model(X)"
   ]
  },
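  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "Since the convolution in `BNConv` uses a $3\\times3$ kernel with stride 1 and padding 1, it preserves the spatial size. A small helper (illustrative only, not part of the model) makes the shape arithmetic explicit:\n",
    "\n",
    "```python\n",
    "def conv_out(size, kernel, stride, padding):\n",
    "    # standard convolution output-size formula\n",
    "    return (size + 2 * padding - kernel) // stride + 1\n",
    "\n",
    "print(conv_out(8, 3, 1, 1))   # 8: a 3x3 kernel with padding 1 keeps H and W\n",
    "print(conv_out(28, 7, 2, 3))  # 14: a 7x7 stride-2 stem halves H and W\n",
    "```"
   ]
  },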
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "A dense block consists of multiple `BNConv` blocks, each with the same number of output channels. In the forward pass, however, we concatenate the input and output of each block along the channel dimension."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class DenseBlock(nn.Layer):\n",
    "    def __init__(self, num_channels, num_layers, growth_rate):\n",
    "        super(DenseBlock, self).__init__()\n",
    "        self.dense_blocks = []\n",
    "        for i in range(num_layers):\n",
    "            block = self.add_sublayer(str(i), BNConv(num_channels + i * growth_rate, growth_rate))\n",
    "            self.dense_blocks.append(block)\n",
    "    def forward(self, X):\n",
    "        for block in self.dense_blocks:\n",
    "            X = paddle.concat([X, block(X)], axis=1)\n",
    "        return X\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "In the following example, we define a dense block with 2 convolution blocks of 10 output channels each. Applied to an input with 3 channels, it produces an output with $3+2\\times 10=23$ channels. The number of output channels of each convolution block controls how much the output channel count grows relative to the input; it is therefore also called the growth rate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[4, 23, 8, 8]\n"
     ]
    }
   ],
   "source": [
    "with paddle.fluid.dygraph.guard(paddle.fluid.cpu_places()[0]):\n",
    "    blk = DenseBlock(3, 2, 10)\n",
    "    X = paddle.to_tensor(np.random.uniform(-1., 1., [4, 3, 8, 8]).astype('float32'))\n",
    "    Y = blk(X)\n",
    "    print(Y.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Transition Layers\n",
    "\n",
    "Since every dense block increases the number of channels, stacking too many of them would make the model overly complex. A transition layer is used to control model complexity: it reduces the number of channels with a $1\\times1$ convolution layer and halves the height and width with a stride-2 average pooling layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "class TransitionLayer(nn.Layer):\n",
    "    def __init__(self, num_channels, num_filters):\n",
    "        super(TransitionLayer, self).__init__()\n",
    "        model = [\n",
    "            nn.BatchNorm2d(num_channels),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2d(num_channels, num_filters, 1, stride=1),\n",
    "            nn.Pool2D(pool_size=2, pool_stride=2, pool_type='avg')\n",
    "        ]\n",
    "        self.model = nn.Sequential(*model)\n",
    "    def forward(self, X):\n",
    "        return self.model(X)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "We apply a transition layer with 10 output channels to the output of the dense block in the previous example. The number of channels is reduced to 10, and the height and width are both halved."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[4, 10, 4, 4]\n"
     ]
    }
   ],
   "source": [
    "with paddle.fluid.dygraph.guard(paddle.fluid.cpu_places()[0]):\n",
    "    blk = TransitionLayer(23, 10)\n",
    "    Y = blk(Y)\n",
    "    print(Y.shape)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## The DenseNet Model\n",
    "\n",
    "We now construct the DenseNet model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1, 10]\n"
     ]
    }
   ],
   "source": [
    "class DenseNet(nn.Layer):\n",
    "    def __init__(self, num_classes=10):\n",
    "        super(DenseNet, self).__init__()\n",
    "\n",
    "        # DenseNet first uses the same single convolution layer and max pooling layer as ResNet.\n",
    "        model = [\n",
    "            nn.Conv2d(1, 64, 7, stride=2, padding=3),\n",
    "            nn.BatchNorm2d(64),\n",
    "            nn.ReLU(),\n",
    "            nn.Pool2D(pool_size=3, pool_stride=2, pool_padding=1, pool_type='max')\n",
    "        ]\n",
    "\n",
    "        # Where ResNet follows with 4 residual blocks, DenseNet uses dense blocks.\n",
    "        # As with ResNet, we can choose the number of convolution layers per dense block;\n",
    "        # we use 4, consistent with the ResNet-18 of the previous section.\n",
    "        # The channel count of the dense-block convolutions (the growth rate) is 32,\n",
    "        # so each dense block adds 4 * 32 = 128 channels.\n",
    "        # ResNet halves height and width between modules with stride-2 residual blocks;\n",
    "        # here transition layers halve the height, width, and channel count instead.\n",
    "        num_channels, growth_rate = 64, 32  # num_channels is the current channel count\n",
    "        # The full configuration has 4 dense blocks ([4, 4, 4, 4]);\n",
    "        # we use 2 here to keep the CPU runtime manageable.\n",
    "        num_convs_in_dense_blocks = [4, 4]\n",
    "        for i, num_convs in enumerate(num_convs_in_dense_blocks):\n",
    "            model += [DenseBlock(num_channels, num_convs, growth_rate)]\n",
    "            # The number of output channels of the previous dense block\n",
    "            num_channels += num_convs * growth_rate\n",
    "            # Between dense blocks, insert a transition layer that halves the channel count\n",
    "            if i != len(num_convs_in_dense_blocks) - 1:\n",
    "                model += [TransitionLayer(num_channels, num_channels // 2)]\n",
    "                num_channels //= 2\n",
    "\n",
    "        # As in ResNet, finish with a global pooling layer and a fully connected layer.\n",
    "        model += [\n",
    "            nn.BatchNorm2d(num_channels),\n",
    "            nn.ReLU(),\n",
    "            nn.Pool2D(pool_type='avg', global_pooling=True)\n",
    "        ]\n",
    "        linear_softmax = [\n",
    "            nn.Linear(num_channels, num_classes),\n",
    "            nn.Softmax()\n",
    "        ]\n",
    "        self.num_channels = num_channels\n",
    "        self.model = nn.Sequential(*model)\n",
    "        self.linear_softmax = nn.Sequential(*linear_softmax)\n",
    "    def forward(self, X):\n",
    "        Y = self.model(X)\n",
    "        Y = self.linear_softmax(paddle.reshape(Y, [-1, self.num_channels]))\n",
    "        return Y\n",
    "\n",
    "# A quick shape check on a single MNIST sample\n",
    "train_dataset_unit_test = paddle.vision.datasets.MNIST(mode='train')\n",
    "train_loader_unit_test = paddle.io.DataLoader(train_dataset_unit_test, places=paddle.CPUPlace(), batch_size=1, shuffle=False)\n",
    "with paddle.fluid.dygraph.guard(paddle.fluid.cpu_places()[0]):\n",
    "    data = next(train_loader_unit_test())\n",
    "    dn = DenseNet()\n",
    "    logit = dn(data[0])\n",
    "    print(logit.shape)\n"
   ]
  },
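  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The channel bookkeeping in `DenseNet.__init__` can be traced by hand (a plain-Python sketch of the loop above, using the same `[4, 4]` configuration):\n",
    "\n",
    "```python\n",
    "num_channels, growth_rate = 64, 32\n",
    "for i, num_convs in enumerate([4, 4]):\n",
    "    num_channels += num_convs * growth_rate  # dense block: +4*32 = +128 channels\n",
    "    if i != 1:\n",
    "        num_channels //= 2                   # transition layer halves the channels\n",
    "print(num_channels)  # 224: input width of the final Linear layer\n",
    "```"
   ]
  },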
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Training the Model\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-images-idx3-ubyte.gz \n",
      "Begin to download\n",
      "\n",
      "Download finished\n",
      "Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-labels-idx1-ubyte.gz \n",
      "Begin to download\n",
      "..\n",
      "Download finished\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/2\n",
      "step 200/938 - loss: 1.5978 - acc_top1: 0.8866 - acc_top2: 0.9377 - 279ms/step\n",
      "step 400/938 - loss: 1.5237 - acc_top1: 0.9180 - acc_top2: 0.9595 - 279ms/step\n",
      "step 600/938 - loss: 1.4817 - acc_top1: 0.9316 - acc_top2: 0.9685 - 278ms/step\n",
      "step 800/938 - loss: 1.4778 - acc_top1: 0.9398 - acc_top2: 0.9738 - 278ms/step\n",
      "step 938/938 - loss: 1.4989 - acc_top1: 0.9445 - acc_top2: 0.9763 - 277ms/step\n",
      "Epoch 2/2\n",
      "step 200/938 - loss: 1.4953 - acc_top1: 0.9712 - acc_top2: 0.9916 - 277ms/step\n",
      "step 400/938 - loss: 1.4781 - acc_top1: 0.9705 - acc_top2: 0.9913 - 276ms/step\n",
      "step 600/938 - loss: 1.4927 - acc_top1: 0.9722 - acc_top2: 0.9920 - 276ms/step\n",
      "step 800/938 - loss: 1.4749 - acc_top1: 0.9726 - acc_top2: 0.9921 - 275ms/step\n",
      "step 938/938 - loss: 1.5825 - acc_top1: 0.9726 - acc_top2: 0.9919 - 275ms/step\n",
      "Eval begin...\n",
      "step  20/157 - loss: 1.5559 - acc_top1: 0.9719 - acc_top2: 0.9898 - 88ms/step\n",
      "step  40/157 - loss: 1.4763 - acc_top1: 0.9723 - acc_top2: 0.9926 - 89ms/step\n",
      "step  60/157 - loss: 1.5419 - acc_top1: 0.9714 - acc_top2: 0.9922 - 89ms/step\n",
      "step  80/157 - loss: 1.4765 - acc_top1: 0.9717 - acc_top2: 0.9934 - 89ms/step\n",
      "step 100/157 - loss: 1.4612 - acc_top1: 0.9753 - acc_top2: 0.9941 - 89ms/step\n",
      "step 120/157 - loss: 1.4612 - acc_top1: 0.9768 - acc_top2: 0.9938 - 89ms/step\n",
      "step 140/157 - loss: 1.4612 - acc_top1: 0.9785 - acc_top2: 0.9944 - 89ms/step\n",
      "step 157/157 - loss: 1.4612 - acc_top1: 0.9788 - acc_top2: 0.9945 - 89ms/step\n",
      "Eval samples: 10000\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'loss': [1.4611509], 'acc_top1': 0.9788, 'acc_top2': 0.9945}"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_dataset = paddle.vision.datasets.MNIST(mode='train')\n",
    "test_dataset = paddle.vision.datasets.MNIST(mode='test')\n",
    "densenet = DenseNet()\n",
    "model = paddle.Model(densenet)\n",
    "# Set up the optimizer, loss, and metrics needed for training.\n",
    "# Note: paddle.nn.CrossEntropyLoss applies softmax internally and expects raw logits,\n",
    "# so the Softmax at the end of DenseNet shifts the reported loss values.\n",
    "model.prepare(\n",
    "    paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters()),\n",
    "    paddle.nn.CrossEntropyLoss(),\n",
    "    paddle.metric.Accuracy(topk=(1, 2))\n",
    ")\n",
    "# Start training\n",
    "model.fit(train_dataset, epochs=2, batch_size=64, log_freq=200)\n",
    "# Start evaluation\n",
    "model.evaluate(test_dataset, log_freq=20, batch_size=64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Summary\n",
    "\n",
    "* For cross-layer connections, DenseNet concatenates input and output along the channel dimension, unlike ResNet, which adds them.\n",
    "* The main building blocks of DenseNet are dense blocks and transition layers.\n",
    "\n",
    "## Exercises\n",
    "\n",
    "* One advantage claimed in the DenseNet paper is that the model has fewer parameters than ResNet. Why is that?\n",
    "* One criticism of DenseNet is its high memory (or GPU memory) consumption. Is this really the case? Try changing the input shape to $224\\times 224$ and observe the actual consumption.\n",
    "* Implement the DenseNet variants presented in Table 1 of the DenseNet paper [1].\n",
    "\n",
    "\n",
    "\n",
    "## References\n",
    "\n",
    "[1] Huang, G., Liu, Z., Weinberger, K. Q., & van der Maaten, L. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (Vol. 1, No. 2).\n",
    "\n",
    "## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1664)\n",
    "\n",
    "![](../img/qr_densenet.svg)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 1.8.4 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
