{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 一、卷积神经网络"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "全连接神经网络参数数量大，卷积神经网络采用参数共享的方法，权重参数数量得到控制。\n",
    "[卷积神经网络图解](https://poloclub.github.io/cnn-explainer/)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 输入层：\n",
    "\n",
    "输入层（最左边的层）代表输入到CNN中的图像。一般三个通道（RGB）。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 二维卷积层\n",
    "\n",
    "- 卷积\n",
    "\n",
    "二维卷积层的参数由一组可学习的卷积核（filter）组成。每个卷积核尺寸很小（宽度和高度方向），但是深度会延伸到输入的感受野（接收域）的所有通道，也就是说，卷积核和感受野的尺寸大小相同。\n",
    "\n",
    "总结：\n",
    "\n",
    "* 多输入通道需要一组卷积核进行卷积操作，得到一个通道输出\n",
    "* 多输入通道需要多组卷积核进行卷积操作，得到多个通道输出"
   ]
  },
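  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the channel rules above, using PyTorch's `nn.Conv2d` (random weights, shapes only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# One group of kernels: 3 input channels -> 1 output channel.\n",
    "conv_single = nn.Conv2d(in_channels=3, out_channels=1, kernel_size=3)\n",
    "# Eight groups of kernels: 3 input channels -> 8 output channels.\n",
    "conv_multi = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)\n",
    "\n",
    "x = torch.randn(1, 3, 32, 32)   # (batch, channels, height, width)\n",
    "print(conv_single(x).shape)     # torch.Size([1, 1, 30, 30])\n",
    "print(conv_multi(x).shape)      # torch.Size([1, 8, 30, 30])\n",
    "print(conv_multi.weight.shape)  # torch.Size([8, 3, 3, 3]): each kernel spans all 3 input channels"
   ]
  },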
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 填充 （padding）\n",
    "\n",
    "填充用来改变卷积输出的大小。最常用的是零填充。也有均值填充等等。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 步长（stride）\n",
    "\n",
    "步长表示卷积核一次移动多少个像素点。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 卷积输入和输出大小的关系\n",
    "\n",
    "$ \\mbox{output_size} = \\frac{\\mbox{input_size} + 2 * \\mbox{padding} - \\mbox{filter_size}}{\\mbox{stride}} + 1 $"
   ]
  },
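  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check, the formula above can be compared against the output of an actual `nn.Conv2d` layer (a small sketch; the sizes are arbitrary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "def conv_output_size(input_size, filter_size, padding=0, stride=1):\n",
    "    # output_size = (input_size + 2*padding - filter_size) / stride + 1\n",
    "    return (input_size + 2 * padding - filter_size) // stride + 1\n",
    "\n",
    "conv = nn.Conv2d(1, 1, kernel_size=5, stride=2, padding=2)\n",
    "x = torch.randn(1, 1, 28, 28)\n",
    "print(conv(x).shape[-1])                             # 14\n",
    "print(conv_output_size(28, 5, padding=2, stride=2))  # 14"
   ]
  },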
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 激活函数\n",
    "\n",
    "- ReLU\n",
    "    \n",
    "    $ ReLU(x) = \\max (0, x)$  \n",
    "    \n",
    "    \n",
    "- Softmax\n",
    "\n",
    "    计算分类的概率： $ Softmax(x_i) = \\frac{\\exp (x_i)}{\\sum_{i} \\exp (x_i)}$"
   ]
  },
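  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both activations are available directly in PyTorch; a quick numerical check:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "x = torch.tensor([-2.0, 0.5, 3.0])\n",
    "print(torch.relu(x))  # negative entries are clamped to 0: tensor([0.0000, 0.5000, 3.0000])\n",
    "\n",
    "probs = torch.softmax(x, dim=0)\n",
    "print(probs)        # non-negative values,\n",
    "print(probs.sum())  # ...that sum to 1"
   ]
  },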
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 池化层\n",
    "\n",
    "池化是一种降采样（subsampling）的方法。池化层的设计是为了逐渐减小网络的空间范围，从而减少网络的参数。使用最多的是最大池化（Max-Pooling）。需要过滤核和步长。每个过滤核划过的区域像素点均取最大值。"
   ]
  },
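  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small max-pooling example with a 2x2 filter and stride 2; each window the filter sweeps over keeps only its maximum:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "pool = nn.MaxPool2d(kernel_size=2, stride=2)  # non-overlapping 2x2 windows\n",
    "x = torch.tensor([[[[1., 2., 5., 6.],\n",
    "                    [3., 4., 7., 8.],\n",
    "                    [9., 1., 2., 3.],\n",
    "                    [0., 5., 4., 1.]]]])\n",
    "print(pool(x))  # [[4., 8.], [9., 4.]]: the maximum of each 2x2 window\n",
    "print(x.shape, '->', pool(x).shape)  # halves the spatial size: (1, 1, 4, 4) -> (1, 1, 2, 2)"
   ]
  },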
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 二、常见的卷积神经网络"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. LeNet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](http://cuijiahua.com/wp-content/uploads/2018/01/dl_3_4.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![C3卷积](http://cuijiahua.com/wp-content/uploads/2018/01/dl_3_5.png)\n",
    "\n",
    "第一次池化之后是第二次卷积，第二次卷积的输出是C3，16个10x10的特征图，卷积核大小是5x5. 我们知道S2有6个14x14的特征图，怎么从6个特征图得到16个特征图了。这里是通过对S2的特征图特殊组合计算得到的16个特征图。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- INPUT: 输入是32x32的二维图像  \n",
    "- C1: 经过6个卷积核(filter)的计算，得到6个28x28的特征图。卷积核filter大小是5x5，训练参数是$(5*5+1)*6=156$。  \n",
    "- S2: Pooling。filter的大小是2x2，得到6个14x14的特征图。  \n",
    "- C3: 16个卷积核（大小5x5），stride=1，得到$\\frac{(14+0*2-5)}{stride}+1 = 10$的特征图。  \n",
    "- S4: Pooling。filter的大小是2x2，得到16个5x5的特征图。\n",
    "- C5: 卷积层。输入是16x5x5，输出是120x1x1。卷积核大小是5x5，卷积核种类120。\n",
    "- F6: 全连接层。输出(84,)。$sigmoid(x*w+b)$  之所以选这个数字的原因是来自于输出层的设计，对应于一个7×12的比特图，如下图所示，-1表示白色，1表示黑色，这样每个符号的比特图的黑白色就对应于一个编码。其实就是说有一个很小的编码库，然后通过把图像处理的足够小，小到和编码库一个大小时，通过计算两个图像的相似度来判断是否是同一个数字。 \n",
    " ![](https://static.oschina.net/uploads/space/2018/0311/013047_ApKN_876354.png)\n",
    "- OUTPUT：输出是(10,)，全连接层，共10个节点，分别代表0到9。计算过程是，将这层每个输出所对应的上一层84个输出结果与各个输出的特征库（大小84）做相似性判断，最相似的便是最接近的结果。该层采用径向基函数（RBF）的网络连接方式，假设x是上一层的输入，y是RBF的输出，则RBF输出的计算方式是：$y_i = \\sum_i (x_i - w_{ij})^2$。这里的$w_{ij}$由i对应的具体比特图编码确定，i从0到9，j取值从0到83。RBF输出的值越接近于0，表示当前网络输入的识别结果与比特图i越接近。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "#LeNet network\n",
    "from torch.nn import Module\n",
    "from torch import nn\n",
    "\n",
    "#本文实现lenet与原paper还是有些不一样，主要体现在评论区说的s2到c3的过程上。对于该问题，本文实现算是一个简化版本。\n",
    "#原paper之所以那样实现，也是受限于当时的计算资源。现简化版本也符合pytorch的实现框架。\n",
    "class Model(Module):\n",
    "    def __init__(self):\n",
    "        super(Model, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(1, 6, 5) #in_channels, out_channels, conv_kernel_size\n",
    "        self.relu1 = nn.ReLU()\n",
    "        self.pool1 = nn.MaxPool2d(2) #kernel_size=2, stride=2, padding=0\n",
    "        self.conv2 = nn.Conv2d(6, 16, 5) #这里和论文里的实现方法不同，这里16个卷积核依次对6个channel进行pointwise convolution\n",
    "        self.relu2 = nn.ReLU()\n",
    "        self.pool2 = nn.MaxPool2d(2)\n",
    "        self.fc1 = nn.Linear(16*5*5, 120) #卷积层：120个5x5的卷积核\n",
    "        self.relu3 = nn.ReLU()\n",
    "        self.fc2 = nn.Linear(120, 84)\n",
    "        self.relu4 = nn.ReLU()\n",
    "        self.fc3 = nn.Linear(84, 10)\n",
    "        self.relu5 = nn.ReLU()\n",
    "\n",
    "    def forward(self, x):\n",
    "        y = self.conv1(x)\n",
    "        y = self.relu1(y)\n",
    "        y = self.pool1(y)\n",
    "        y = self.conv2(y)\n",
    "        y = self.relu2(y)\n",
    "        y = self.pool2(y)\n",
    "        y = y.view(y.shape[0], -1) # 相当于numpy.reshape\n",
    "        y = self.fc1(y)\n",
    "        y = self.relu3(y)\n",
    "        y = self.fc2(y)\n",
    "        y = self.relu4(y)\n",
    "        y = self.fc3(y)\n",
    "        y = self.relu5(y)\n",
    "        return y"
   ]
  },
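  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, here is a compact `nn.Sequential` equivalent of the LeNet above (with `nn.Flatten` standing in for the `view` call), run on a batch of 32x32 inputs to confirm the output shape:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "lenet = nn.Sequential(\n",
    "    nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 28 -> 14\n",
    "    nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 10 -> 5\n",
    "    nn.Flatten(),\n",
    "    nn.Linear(16 * 5 * 5, 120), nn.ReLU(),\n",
    "    nn.Linear(120, 84), nn.ReLU(),\n",
    "    nn.Linear(84, 10),\n",
    ")\n",
    "x = torch.randn(4, 1, 32, 32)  # batch of 4 single-channel 32x32 images\n",
    "print(lenet(x).shape)          # torch.Size([4, 10]): one score per digit class"
   ]
  },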
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. AlexNet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![图](https://pic3.zhimg.com/v2-29c8b75b2cf5248f025fdf12a246801e_r.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "说明：上下两层对应两块GPU，只有到特定的网络层才需要两块GPU进行交互。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 卷积层：  \n",
    "顺序：conv -> relu -> maxpooling -> LRN  \n",
    "输入图像是224x224x3(这里应该是227的大小，最开始是224x224x3，后续为处理方便进行调整)，卷积核大小是11x11x3，96个卷积核，stride=4，通过公式$output = \\frac{input-feature+pooling*2}{stride}$，得到output=55，最终得到96个feature map，大小是55x55。经过relu函数处理，保持feature map大小不变。Maxpooling：kernel=3x3，stride=2，padding=0,output=27x27x96。最后，LRN。  \n",
    "- 卷积层：  \n",
    "顺序：conv -> relu -> maxpooling -> LRN  \n",
    "输入图像27x27x96，卷积核大小5x5x96，256个卷积核，stride=1, padding=2, output=27x27x256。经过relu。Maxpooling：kernel=3x3, stride=2, padding=0, output=13x13x256。最后，LRN。  \n",
    "- 卷积层：  \n",
    "顺序：conv -> relu  \n",
    "输入图像是13x13x256，卷积核大小3x3x253，384个卷积核，stride是1，padding是1，得到输出是13x13x384。经过relu层，大小不变，保持13x13x384。  \n",
    "- 卷积层：  \n",
    "顺序：conv -> relu  \n",
    "输入图像是13x13x384，卷积核大小3x3x384，384个卷积核，stride是1，padding是1，得到输出是13x13x384。经过relu层，大小不变。  \n",
    "- 卷积层：  \n",
    "顺序：conv -> relu -> maxpooling  \n",
    "输入图像是13x13x384，卷积核大小是3x3x384，256个卷积核，stride是1，padding是1，得到输出是13x13x256。经过relu层，大小保持不变。在经过maxpooling，卷积核大小是3x3，stride大小是2，padding是0，得到输出是6x6x256。  \n",
    "- 全连接层（卷积层向全连接层过渡）：  \n",
    "顺序：conv -> relu -> dropout  \n",
    "输入图像是6x6x256，卷积核6x6x256，4096个卷积核，得到输出是4096x1。经过relu层，大小不变。采用dropout，大小不变。 \n",
    "- 全连接层：  \n",
    "顺序：-> relu -> dropout  \n",
    "输入数据是4096x1，经过relu和dropout。  \n",
    "- 全连接层（输出层）：  \n",
    "顺序： fully connected -> softmax  \n",
    "输入数据是4096x1，1000个神经元与其进行全连接，得到输出1000x1的结果，然后经过softmax，得到1000种分类概率。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.nn import Module\n",
    "from torch import nn\n",
    "\n",
    "class AlexNet(nn.Module):\n",
    "    def __init__(self, num_classes=1000):\n",
    "        super(AlexNet, self).__init__()\n",
    "        self.features = nn.Sequential(\n",
    "            #conv1\n",
    "            nn.Conv2d(3, 96, kernel_size=11, stride=4), #[in_channels, output, kernel_size, stride, padding]\n",
    "            nn.Relu(inplace=True), \n",
    "            nn.MaxPool2d(kernel_size=3, stride=2),\n",
    "            #conv2\n",
    "            nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.MaxPool2d(kernel_size=3, stride=2),\n",
    "            #conv3\n",
    "            nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1),\n",
    "            nn.Relu(inplace=True),\n",
    "            #conv4\n",
    "            nn.Conv2d(384, 384, kernel_size=3, stride=1, padding=1),\n",
    "            nn.Relu(inplace=True),\n",
    "            #conv5\n",
    "            nn.Conv2d(384, 256, kernel_size=3, stride=1, padding=1),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.MaxPool2d(kernel_size=3, stride=2),\n",
    "            #conv6\n",
    "            nn.Conv2d(256, 4096, kernel_size=6),\n",
    "            )\n",
    "        self.classifier = nn.Sequential(\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.Dropout(),\n",
    "            nn.Linear(4096, 4096),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.Linear(4096, num_classes),\n",
    "            nn.Relu(inplace=True)\n",
    "            )\n",
    "    def forward(self, x):\n",
    "        x = self.feature(x)\n",
    "        x = self.classifier(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. VGG"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "VGG是Oxford的Visual Geometry Group的组提出的。该网络是在ILSVRC 2014上的相关工作，主要工作是证明了增加网络的深度能够在一定程度上影响网络最终的性能。VGG有两种结构，分别是VGG16和VGG19，两者并没有本质上的区别，只是网络深度不一样。  \n",
    "VGG16相比AlexNet的一个改进是采用连续的几个3x3的卷积核代替AlexNet中较大的卷积核（11x11,5x5），对于给定的感受野（与输出有关的输入图片的局部大小），采用堆积的小卷积核是优于采用大的卷积核，因为多层非线性层可以增加网络深度来保证学习更复杂的模式。  \n",
    "简单来说，在VGG中，使用了3个3x3卷积核来代替7x7卷积核，使用了2个3x3卷积核来代替5*5卷积核，这样做的主要目的是在保证具有相同感知野的条件下，提升了网络的深度，在一定程度上提升了神经网络的效果。  \n",
    "比如：假定padding=1，stride=1，  \n",
    "case1: 给定一个5x5的feature map，卷积核为3x3，需要$\\frac{5-3}{1}+1$  \n",
    "case2: 给定一个7x7的feature，卷积核为3x3，需要$\\frac{7-3}{1}+1$，然后重复case1。  \n",
    "比如，3个步长为1的3x3卷积核的一层层叠加作用可看成一个大小为7的感受野（其实就表示3个3x3连续卷积相当于一个7x7卷积），其参数总量为 3x(9xC^2) ，如果直接使用7x7卷积核，其参数总量为 49xC^2 ，这里 C 指的是输入和输出的通道数。很明显，27xC^2小于49xC^2，即减少了参数；而且3x3卷积核有利于更好地保持图像性质。  "
   ]
  },
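  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The parameter comparison can be verified directly; a sketch with an arbitrary channel count C=64 (bias terms disabled so the counts match the formulas exactly):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch import nn\n",
    "\n",
    "C = 64  # an arbitrary channel count for both input and output\n",
    "\n",
    "# Three stacked 3x3 convolutions cover the same 7x7 receptive field as one 7x7 convolution.\n",
    "stacked = nn.Sequential(*[nn.Conv2d(C, C, 3, padding=1, bias=False) for _ in range(3)])\n",
    "single = nn.Conv2d(C, C, 7, padding=3, bias=False)\n",
    "\n",
    "def count_params(m):\n",
    "    return sum(p.numel() for p in m.parameters())\n",
    "\n",
    "print(count_params(stacked))  # 3 * 9 * C^2 = 110592\n",
    "print(count_params(single))   # 49 * C^2  = 200704"
   ]
  },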
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://ss1.bdstatic.com/70cFuXSh_Q1YnxGkpoWK1HF6hhy/it/u=61453128,2121375767&fm=26&gp=0.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- VGG16包含了16个隐藏层（13个卷积层和3个全连接层），如上图D列；  \n",
    "- VGG19包含了19个隐藏层（16个卷积层和3个全连接层），如上图E列。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# VGG16\n",
    "import torch as torch  \n",
    "import torch.nn as nn \n",
    "class VGG16(nn.Module): \n",
    "    def __init__(self,num_classes): \n",
    "        super(VGG16,self).__init__() \n",
    "        self.features = nn.Sequential( \n",
    "            nn.Conv2d(3,64,kernel_size=3,padding=1), \n",
    "            nn.ReLU(inplace=True), \n",
    "            nn.Conv2d(64,64,kernel_size=3,padding=1), \n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Conv2d(64,128,kernel_size=3,padding=1),\n",
    "            nn.ReLU(inplace=True), \n",
    "            nn.Conv2d(128, 128, kernel_size=3, padding=1), \n",
    "            nn.ReLU(inplace=True), \n",
    "            nn.Conv2d(128, 256, kernel_size=3, padding=1),\n",
    "            nn.ReLU(inplace=True), \n",
    "            nn.Conv2d(256, 256, kernel_size=3, padding=1), \n",
    "            nn.ReLU(inplace=True), \n",
    "            nn.Conv2d(256, 256, kernel_size=3, padding=1),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Conv2d(256, 512, kernel_size=3, padding=1),\n",
    "            nn.ReLU(inplace=True), \n",
    "            nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "            nn.ReLU(inplace=True), \n",
    "            nn.Conv2d(512, 512, kernel_size=3, padding=1), \n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "            nn.ReLU(inplace=True) \n",
    "        )\n",
    "        self.classifier = nn.Sequential(\n",
    "            nn.Linear(512*7*7,4096),\n",
    "            nn.ReLU(inplace=True),\n",
    "            nn.Dropout(),\n",
    "            nn.Linear(4096,4096),\n",
    "            nn.ReLU(True), \n",
    "            nn.Dropout(),\n",
    "            nn.Linear(4096,num_classes)\n",
    "        ) \n",
    "    def forward(self, x):\n",
    "        x = self.features(x),\n",
    "        x = x.view(x.size(0),-1)\n",
    "        x = self.classifier(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "总结：  \n",
    "- 使用了3个3x3卷积核来代替7x7卷积核，使用了2个3x3卷积核来代替5*5卷积核。这样做的主要目的是在保证具有相同感知野的条件下，提升了网络的深度（因为多层非线性层可以增加网络深度来保证学习更复杂的模式），在一定程度上提升了神经网络的效果。  \n",
    ">输入通道数和输出通道数都为C， 3个步长为1的3x3卷积核的一层层叠加作用可看成一个大小为7的感受野（其实就表示3个3x3连续卷积相当于一个7x7卷积），其参数总量为3×(9×C2) 3\\times (9\\times C^2)3×(9×C2) ，如果直接使用7x7卷积核，其参数总量为 49×C249\\times C^249×C2 。很明显，27×C227\\times C^2 27×C2 小于49×C249\\times C^249×C2，即减少了参数；而且3x3卷积核有利于更好地保持图像性质。  \n",
    "- VGGNet的结构非常简洁，整个网络都使用了同样大小的卷积核尺寸（3x3）和最大池化尺寸（2x2）。  \n",
    "- 几个小滤波器（3x3）卷积层的组合比一个大滤波器（5x5或7x7）卷积层好。\n",
    "- 验证了通过不断加深网络结构可以提升性能。\n",
    "- 缺点是VGG耗费更多计算资源，并且使用了更多的参数，这里不是3x3卷积的原因，其中绝大多数的参数都是来自于第一个全连接层。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "LeNet、AlexNet和VGG在设计上的共同之处是：先以由卷积层构成的模块充分抽取空间特征，再以由全连接层构成的模块输出分类结果。其中，AlexNet和VGG的改进主要是在于如何对这两个模块加宽（增加通道数）和加深。  \n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Network In Network"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "相比上述三种框架，用卷积+全连接的方式，NIN提出了新的思路，即串联多个由卷积层和全连接层构成的小网络（MLP卷积）来构建深层网络。先进行一次普通的卷积（比如3x3），紧接着在进行一次1x1的卷积，对于某个像素点来说，1x1卷积等效于该像素点在所有特征上进行一次全连接的计算。需要注意的是，NIN结构卷积后面都需要跟着激活函数（例如，Relu）。注意到这种1x1卷积方式是非常有效的，对后来的网络设计有非常大的启发，虽然NiN在后来应用不多，但1x1卷积的思想得到了广泛使用。"
   ]
  },
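  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-pixel fully-connected reading of a 1x1 convolution can be checked numerically; a sketch with arbitrary channel counts (16 in, 8 out), sharing one weight matrix between `nn.Conv2d` and `nn.Linear`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "conv1x1 = nn.Conv2d(16, 8, kernel_size=1, bias=False)\n",
    "fc = nn.Linear(16, 8, bias=False)\n",
    "with torch.no_grad():\n",
    "    fc.weight.copy_(conv1x1.weight.view(8, 16))  # same weights in both layers\n",
    "\n",
    "x = torch.randn(1, 16, 5, 5)\n",
    "out_conv = conv1x1(x)\n",
    "# Same computation per pixel: move channels last, apply the linear layer, move channels back.\n",
    "out_fc = fc(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)\n",
    "print(torch.allclose(out_conv, out_fc, atol=1e-5))  # True"
   ]
  },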
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![图片.png](https://ss0.bdstatic.com/70cFuHSh_Q1YnxGkpoWK1HF6hhy/it/u=3400518535,1363485481&fm=26&gp=0.jpg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Network In Network\n",
    "import torch as torch  \n",
    "import torch.nn as nn \n",
    "import torch.nn.functional as F\n",
    "\n",
    "class Net(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        self.classifier = nn.Sequential(\n",
    "            # MLP卷积层1\n",
    "            nn.Conv2d(3, 192, kernel_size=5, stride=1, padding=2),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.Conv2d(192, 160, kernel_size=1, stride=1, padding=0), #(1x1)，类似全连接\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.Conv2d(160, 96, kernel_size=1, stride=1, padding=0), #(1x1)，类似全连接\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),\n",
    "            nn.Dropout(0.5),\n",
    "            \n",
    "            # MLP卷积层2\n",
    "            nn.Conv2d(96, 192, kernel_size=5, stride=1, padding=2),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.Con2d(192, 192, kernel_size=1, stride=1, padding=0),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.AvgPool2d(kernel_size=3, stride=2, padding=1),\n",
    "            nn.Dropout(0.5),\n",
    "        \n",
    "            # Mlp卷积层3\n",
    "            nn.Conv2d(192, 192, kernel_size=3, stride=1, padding=1),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.Conv2d(192, 192, kernel_size=1, stride=1, padding=0),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.Conv2d(192, 10, kernel_size=1, stride=1, padding=0),\n",
    "            nn.Relu(inplace=True),\n",
    "            nn.AvgPool2d(kernel_size=8, stride=1, padding=0),\n",
    "        )\n",
    "        \n",
    "    def forward(self, x):\n",
    "        x = self.classifier(x)\n",
    "        x = F.avg_pool2d(x, kernel_size=x.size()[2:])\n",
    "        return x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. GoogLeNet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "GoogLeNet，向LeNet致敬，所以“L”大写。GoogLeNet吸收了NiN网络串联的思想，并在此基础上做了很大的改进。在后续的几年里，GoogLeNet做了数次改进，这里以第一版本为例。注：GoogLeNet一共22层。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "GoogLeNet中的基础卷积块叫做Inception块，得名于同名电影《盗梦空间(Inception)》。与NiN块相比，Inception块在结构上更加复杂。基本思想是Inception网络不需要人为决定使用哪个filter或者是否需要pooling，而是由网络自行决定这些参数，你可以给网络添加这些参数得所有可能值，然后把这些输出连接起来，让网络自己学习它需要什么样得参数，采用哪些filter组合。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://ss2.bdstatic.com/70cFvnSh_Q1YnxGkpoWK1HF6hhy/it/u=3657495659,3392210773&fm=26&gp=0.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上图显示了Inception得初始版本，有4条并行线路。前3条线路使用窗口大小为1x1、3x3、5x5的卷积层，第四条使用3x3最大pooling来抽取不同空间尺寸下的信息，再用1x1卷积改变通道数。  \n",
    "问题：  \n",
    "5x5卷积网络的计算代价很大，在混合池化层之后会更加突出，很快会出现计算量暴涨。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "为了克服初始版本Inception模块的困难，GoogLeNet做了改进，即采用1x1的卷积层来减低输入层的维度，使网络参数减少，减少了网络的复杂性。见下图（Inception V1）。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://ss1.bdstatic.com/70cFvXSh_Q1YnxGkpoWK1HF6hhy/it/u=1963993584,1861393003&fm=26&gp=0.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "从GoogLeNet中明显看出，共包含9个**Inception V1**模块，如下图所示。所有层均采用了ReLU激活函数。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://ss0.bdstatic.com/70cFvHSh_Q1YnxGkpoWK1HF6hhy/it/u=1678270015,3262638813&fm=26&gp=0.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://images3.freesion.com/682/5d/5d87c1846704ff0ab25a9c85fc0835f2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "自从2014年过后，Inception模块不断的改进，现在已发展到V4。GoogLeNet V2中的Inception参考VGGNet用两个3*3核的卷积层代替了具有5*5核的卷积层，与此同时减少了一个辅助分类器，并引入了Batch Normalization（BN），它是一个非常有用的正则化方法。V3相对于V2的学习效率提升了很多倍，并且训练时间大大缩短了。在ImageNet上的top-5错误率为4.8%。Inception V3通过改进V2得到，其核心思想是将一个较大的n*n的二维卷积拆成两个较小的一维卷积n*1和1*n。Inception V3有三种不同的结构（Base的大小分别为35*35、17*17、8*8），如图1.8所示，其中分支可能嵌套。GoogLeNet也只用了一个辅助分类器，在ImageNet上top-5的错误率为3.5%。Inception V4是一种与Inception V3类似或更复杂的网络模块。V4在ImageNet上top-5的错误率为3.08%。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# https://github.com/facebookresearch/mixup-cifar10/blob/master/models/googlenet.py\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "from torch.autograd import Variable\n",
    "\n",
    "class Inception(nn.Module):\n",
    "    def __init__(self, in_planes, n1x1, n3x3red, n3x3, n5x5red, n5x5, pool_planes):\n",
    "        super(Inception, self).__init__()\n",
    "        # 1x1 conv branch\n",
    "        self.b1 = nn.Sequential(\n",
    "            nn.Conv2d(in_planes, n1x1, kernel_size=1),  # stride=1\n",
    "            nn.BatchNorm2d(n1x1),\n",
    "            nn.ReLU(True),\n",
    "        )\n",
    "        # 1x1 conv -> 3x3 conv branch\n",
    "        self.b2 = nn.Sequential(\n",
    "            nn.Conv2d(in_planes, n3x3red, kernel_size=1),\n",
    "            nn.BatchNorm2d(n3x3red),\n",
    "            nn.ReLU(True),\n",
    "            nn.Conv2d(n3x3red, n3x3, kernel_size=3, padding=1),\n",
    "            nn.BatchNorm2d(n3x3),\n",
    "            nn.ReLU(True),\n",
    "        )\n",
    "        # 1x1 conv -> 5x5 conv branch\n",
    "        self.b3 = nn.Sequential(\n",
    "            nn.Conv2d(in_planes, n5x5red, kernel_size=1),\n",
    "            nn.BatchNorm2d(n5x5red),\n",
    "            nn.ReLU(True),\n",
    "            # 2个3x3卷积代替1个5x5卷积\n",
    "            nn.Conv2d(in_planes, n5x5, kernel_size=3, padding=1),\n",
    "            nn.BatchNorm2d(n5x5),\n",
    "            nn.ReLU(True),\n",
    "            nn.Conv2d(n5x5, n5x5, kernel_size=3, padding=1),\n",
    "            nn.BatchNorm2d(n5x5),\n",
    "            nn.ReLU(True),\n",
    "        )\n",
    "        # 3x3 pool -> 1x1 conv branch\n",
    "        self.b4 = nn.Sequential(\n",
    "            nn.MaxPool2d(3, stride=1, padding=1),\n",
    "            nn.Conv2d(in_planes, pool_planes, kernel_size=1),\n",
    "            nn.BatchNorm2d(pool_planes),\n",
    "            nn.ReLU(True),\n",
    "        )\n",
    "    \n",
    "    def forward(self, x):\n",
    "        y1 = self.b1(x)\n",
    "        y2 = self.b2(x)\n",
    "        y3 = self.b3(x)\n",
    "        y4 = self.b4(x)\n",
    "        return torch.cat([y1, y2, y3, y4], 1)  #按列拼接\n",
    "    \n",
    "class GoogLeNet(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(GoogLeNet, self).__init__()\n",
    "        self.pre_layers = nn.Sequential(\n",
    "            nn.Conv2d(3, 192, kernel_size=3, padding=1),\n",
    "            nn.BatchNorm2d(192),\n",
    "            nn.ReLU(True),\n",
    "        )\n",
    "        \n",
    "        self.a3 = Inception(192, 64, 96, 128, 16, 32, 32)\n",
    "        self.b3 = Inception(256, 128, 128, 192, 32, 96, 64)\n",
    "        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n",
    "        self.a4 = Inception(480, 192, 96, 208, 16, 48, 64)\n",
    "        self.b4 = Inception(512, 160, 112, 224, 24, 64, 64)\n",
    "        self.c4 = Inception(512, 128, 128, 256, 24, 64, 64)\n",
    "        self.d4 = Inception(512, 112, 144, 288, 32, 64, 64)\n",
    "        self.e4 = Inception(528, 256, 160, 320, 32, 128, 128)\n",
    "        self.a5 = Inception(832, 256, 160, 320, 32, 128, 128)\n",
    "        self.b5 = Inception(832, 384, 192, 384, 48, 128, 128)\n",
    "        \n",
    "        self.avgpool = nn.AvgPool2d(8, stride=1)\n",
    "        self.linear = nn.Linear(1024, 10)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        out = self.pre_layers(x)\n",
    "        out = self.a3(out)\n",
    "        out = self.b3(out)\n",
    "        out = self.maxpool(out)\n",
    "        out = self.a4(out)\n",
    "        out = self.b4(out)\n",
    "        out = self.c4(out)\n",
    "        out = self.d4(out)\n",
    "        out = self.e4(out)\n",
    "        out = self.maxpool(out)\n",
    "        out = self.a5(out)\n",
    "        out = self.b5(out)\n",
    "        out = self.avgpool(out)\n",
    "        out = out.view(out.size(0), -1)\n",
    "        out = self.linear(out)\n",
    "        return out"
   ]
  },
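  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, the channel bookkeeping of the first block above, `a3 = Inception(192, 64, 96, 128, 16, 32, 32)`, can be traced by hand: its four branches output 64, 128, 32 and 32 channels, and `torch.cat` joins them into 256 (dummy tensors stand in for the branch outputs):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Stand-ins for the four branch outputs of a3; all share the same height/width.\n",
    "b1 = torch.randn(1, 64, 28, 28)   # 1x1 branch\n",
    "b2 = torch.randn(1, 128, 28, 28)  # 1x1 -> 3x3 branch\n",
    "b3 = torch.randn(1, 32, 28, 28)   # 1x1 -> 5x5 (as two 3x3) branch\n",
    "b4 = torch.randn(1, 32, 28, 28)   # pool -> 1x1 branch\n",
    "out = torch.cat([b1, b2, b3, b4], dim=1)\n",
    "print(out.shape)  # torch.Size([1, 256, 28, 28]); 256 matches the in_planes of the next block"
   ]
  },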
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 总结：  \n",
    "- 使用1x1的卷积，原因一是为了减少通道数，降低模型复杂度，二是为了提取更丰富的特征\n",
    "- 在构建神经网络层的时候，不想决定池化层是使用1×1，3×3还是5×5的过滤器，那么Inception模块就是最好的选择。我们可以应用各种类型的过滤器，只需要把输出连接起来。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. ResNet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "论文：《Deep Residual Learning for Image Recognition》  \n",
    "我们都知道一个卷积神经网络的网络深度是很重要的，通常增加网络深度可以提升网络的准确率，但是很深的网络层，由于参数初始化一般更靠近0，这样在训练的过程中更新浅层网络的参数时，很容易随着网络的深入而导致**梯度消失**，浅层的参数无法更新。同时，随着网络深度的增加，就会**出现准确度变得饱和，然后开始下降的退化问题**。而ResNet利用残差网络结构解决了深层神经网络出现的梯度消失和网络退化的问题。   \n",
    "在ImageNet数据集上ResNet的作者们评估了深度高达152层的残差网络——比VGG深8倍但仍具有较低的复杂度。这些残差网络的集合在ImageNet测试集上取得了3.57%的错误率。这个结果在ILSVRC 2015分类任务上赢得了第一名。  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ResNet特点：  \n",
    "- 解决了深层神经网络的退化问题  \n",
    "我们知道在同一个训练集上，深层网络的性能无论如何都不应该比浅层网络差：假设A是一个56层的网络，B是一个20层的网络，我们只需要将A中的20层替换为B，然后将A中剩下的36层全部优化为恒等映射（即输入=输出），那么A就应该和B拥有同样的性能。但是实验证明，无论是在训练集还是在测试集上，多层的网络性能远差于少层的网络，即随着网络深度的增加，准确度变得饱和，然后开始下降，这一问题被称为深度网络的退化问题。 \n",
    "![](https://ss3.bdstatic.com/70cFv8Sh_Q1YnxGkpoWK1HF6hhy/it/u=2739196887,861631746&fm=15&gp=0.jpg)  \n",
    "但是通过分析可以发现，这种退化并不是由过拟合引起的，因为我们发现训练精度和测试精度都在下降，这说明了更深的网络效果不好是因为网络没有被训练好。其原因是后面的多层非线性网络无法通过训练去逼近恒等映射网络。  \n",
    "ResNet通过引入残差网络结构（residual network），解决了退化问题。其主要思想是将堆叠的非线性层从拟合原来的最优解映射输出H(X)变成去拟合输出和输入的差$F(X) = H(X) - X$，$X$为输入，此时原最优解映射$H(X)$就可以改写成$F(X) + X$。  \n",
    "![](https://ss0.bdstatic.com/70cFuHSh_Q1YnxGkpoWK1HF6hhy/it/u=2490474125,3688154013&fm=26&gp=0.jpg)  \n",
    "残差映射跟原映射相比更容易被优化。比如把5映射到5.1，那么引入残差前是$F’(5) = 5.1$，引入残差后是$H(5) = 5.1$, $H(5) = F(5) + 5$, $F(5) = 0.1$。这里的$F’$和$F$都表示网络参数映射，引入残差后的映射对输出的变化更敏感。比如$s$输出从5.1变到5.2，映射$F’$的输出增加了1/51=2%，而对于残差结构输出从5.1到5.2，映射F是从0.1到0.2，增加了100%。后者输出变化明显对权重的调整作用更大，所以效果更好。  \n",
    "$F(X)+X$的公式可以通过在前馈网络中做一个**shortcut connections**来实现 ，**shortcut connections**跳过一个或多个层。在我们的用例中，**shortcut connections**执行简单的恒等映射，它们的输出被添加到叠加层的输出中。 **Identity shortcut connections**既不会添加额外的参数也不会增加计算复杂度。整个网络依然可以用SGD+反向传播来做端到端的训练。  \n",
    "- 解决了深层网络梯度消失的问题\n",
    "ResNet最终更新某一个节点的参数时，由于H(x)=F(x)+x，使得链式求导后的结果如图所示，不管括号内右边部分的求导参数有多小，因为左边的1的存在，并且将原来的链式求导中的连乘变成了连加状态，都能保证该节点参数更新不会发生梯度消失或梯度爆炸现象。"
   ]
  },
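  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Stripped down to the $F(X)+X$ idea, a minimal residual block looks like this (a sketch: the channel count is arbitrary and there is no downsampling, so the identity shortcut applies directly):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "class TinyResidual(nn.Module):\n",
    "    def __init__(self, channels):\n",
    "        super().__init__()\n",
    "        # F(x): the stacked nonlinear layers fit the residual, not the full mapping.\n",
    "        self.f = nn.Sequential(\n",
    "            nn.Conv2d(channels, channels, 3, padding=1),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2d(channels, channels, 3, padding=1),\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        return torch.relu(self.f(x) + x)  # H(x) = F(x) + x via the identity shortcut\n",
    "\n",
    "x = torch.randn(1, 8, 16, 16)\n",
    "block = TinyResidual(8)\n",
    "print(block(x).shape)  # torch.Size([1, 8, 16, 16]): shape preserved, so the skip adds cleanly"
   ]
  },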
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://ss1.bdstatic.com/70cFuXSh_Q1YnxGkpoWK1HF6hhy/it/u=3814884971,2702173834&fm=15&gp=0.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ResNet更多细节和其他变种："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://ss3.bdstatic.com/70cFv8Sh_Q1YnxGkpoWK1HF6hhy/it/u=1701762258,3273388825&fm=26&gp=0.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "基础结构残差传递的流程图（借用知乎@小哼哼的图）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://pic3.zhimg.com/80/v2-7a091e06e72c0e06a1c77209bfd5c47e_720w.jpg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "# \n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# 3x3 convolution\n",
    "def conv3x3(in_channels, out_channels, stride=1):\n",
    "    return nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)\n",
    "\n",
    "# Residual block \n",
    "class ResidualBlock(nn.Module):\n",
    "    def __init__(self, in_channels, out_channels, stride=1, downsample=None):\n",
    "        super(ResidualBlock, self).__init__()\n",
    "        self.conv1 = conv3x3(in_channels, out_channels, stride)\n",
    "        self.bn1 = nn.BatchNorm2d(out_channels)\n",
    "        self.relu = nn.Relu(inplace=True)\n",
    "        self.conv2 = conv3x3(out_channels, out_channels)\n",
    "        self.bn2 = nn.BatchNorm2d(out_channels)\n",
    "        # 基础结构有一个从x直接连到下面的线，如果上一个ResidualBlock的输出维度和当前的ResidualBlock的维度不一样\n",
    "        # 那就对这个x进行downSample操作，如果维度一样，直接加就行，直接out+=residual\n",
    "        self.downsample = downsample \n",
    "    \n",
    "    def forward(self, x):\n",
    "        residual = x\n",
    "        out = self.conv1(x)\n",
    "        out = self.bn1(out)\n",
    "        out = self.relu(out)\n",
    "        out = self.conv2(out)\n",
    "        out = self.bn2(out)\n",
    "        if self.downsample:\n",
    "            residual = self.downsample(x)\n",
    "        out += residual\n",
    "        out = self.relu(out)\n",
    "        return out\n",
    "\n",
    "# ResNet\n",
    "class Net(nn.Module):\n",
    "    def __init__(self, block, layers, num_classes=10):\n",
    "        super(Net, self).__init__()\n",
    "        self.in_channels = 16\n",
    "        self.conv = conv3x3(3, 16)\n",
    "        self.bn = nn.BatchNorm2d(16)\n",
    "        self.relu = nn.ReLU(True)\n",
    "        self.layer1 = self.make_layer(block, 16, layers[0])\n",
    "        self.layer2 = self.make_layer(block, 32, layers[1], 2)\n",
    "        self.layer3 = self.make_layer(block, 64, layers[2], 2)\n",
    "        self.avg_pool = nn.AvgPool2d(8)\n",
    "        self.fc = nn.Linear(64, num_classes)\n",
    "    \n",
    "    def make_layer(self, block, out_channels, blocks, stride=1):\n",
    "        downsample = None\n",
    "        if (stride != 1) or (self.in_channels != out_channels):\n",
    "            downsample = nn.Sequential(\n",
    "                conv3x3(self.in_channels, out_channels, stride=stride),\n",
    "                nn.BatchNorm2d(out_channels))\n",
    "        layers = []\n",
    "        layers.append(block(self.in_channels, out_channels, stride, downsample))\n",
    "        self.in_channels = out_channels\n",
    "        for i in range(1, blocks):\n",
    "            layers.append(block(out_channels, out_channels))\n",
    "        return nn.Sequential(*layers)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        out = self.conv(x)\n",
    "        out = self.bn(out)\n",
    "        out = self.relu(out)\n",
    "        out = self.layer1(out)\n",
    "        out = self.layer2(out)\n",
    "        out = self.layer3(out)\n",
    "        out = self.avg_pool(out)\n",
    "        out = out.view(out.size(0), -1)\n",
    "        out = self.fc(out)\n",
    "        return out"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  },
  "toc-autonumbering": false,
  "toc-showcode": false,
  "toc-showmarkdowntxt": false,
  "toc-showtags": false
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
