{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 一、整体结构\n",
    "\n",
    "CNN和之前介绍的神经网络一样，可以像乐高积木一样通过组装层来构建。不过，CNN中新出现了卷积层（Convolution层）和池化层（Pooling层）。卷积层和池化层将在下一节详细介绍，这里我们先看一下如何组装层以构建CNN。\n",
    "\n",
    "之前介绍的神经网络中，相邻层的所有神经元之间都有连接，这称为全连接（fully-connected）。全连接层的实现就是之前的Affine层。\n",
    "\n",
    "按照神经网络的结构构造一个网络，其结构如下：\n",
    "\n",
    "![image-20230412162453899](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230412162453899.png)\n",
    "\n",
    "按照CNN的结构构造网络，其结构如下：\n",
    "\n",
    "![image-20230412162527511](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230412162527511.png)\n",
    "\n",
    "如图所示，CNN 中新增了 Convolution 层 和 Pooling 层。CNN 的层连接顺序是“Convolution - ReLU -（Pooling）”（Pooling层有时会被省略）。此外，靠近输出的层中使用了之前的“Affine - ReLU”组合。\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 二、卷积层"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.1 全连接层的问题\n",
    "\n",
    "全连接层存在的问题：数据的形状被“忽视”了。\n",
    "\n",
    "比如，输入数据是图像时，图像通常是高、长、通道方向上的3维形状。但是，**向全连接层输入时，需要将3维数据拉平为1维数据**。实际上，前面提到的使用了MNIST数据集的例子中，输入图像就是1通道、高28像素、长28像素的（1, 28, 28）形状，但却被排成1列，以784个数据的形式输入到最开始的Affine层。**图像是3维形状，这个形状中应该含有重要的空间信息**。比如，空间上邻近的像素为相似的值、RBG的各个通道之间分别有密切的关联性、相距较远的像素之间没有什么关联等，3维形状中可能隐藏有值得提取的本质模式。**但是，因为全连接层会忽视形状，将全部的输入数据作为相同的神经元（同一维度的神经元）处理，所以无法利用与形状相关的信息。**\n",
    "\n",
    "而**卷积层可以保持形状不变**。当输入数据是图像时，卷积层会以3维数据的形式接收输入数据，并同样以3维数据的形式输出至下一层。\n",
    "\n",
    "另外，CNN 中，有时将卷积层的输入输出数据称为**特征图（feature map）**。其中，卷积层的输入数据称为**输入特征图（input feature map）**，输出数据称为**输出特征图（output feature map）**。\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.2卷积运算\n",
    "\n",
    "卷积层进行的处理就是卷积运算。卷积运算相当于图像处理中的“**滤波器运算**”。在介绍卷积运算时，我们来看一个具体的例子：\n",
    "\n",
    "![image-20230412210525047](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230412210525047.png)\n",
    "\n",
    "如上图所示，卷积运算对输入数据应用滤波器。其实滤波器就是一个用若干个权重参数组成的小方块，在上图中滤波器的尺寸是3*3，所以权重参数一共有9个。卷积运算就是把这个滤波器（小方块）移到输入数据上，然后对应位置的元素相乘并求和，求和结果作为输出矩阵的一个元素，该过程如下图所示：\n",
    "\n",
    "![image-20230413095923102](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230413095923102.png)\n",
    "\n",
    "其实和Affine层做的运算是类似的，都是乘权重然后求和，只不过这里数据的形状做了点变化，原来是一维的，现在是二维的。\n",
    "\n",
    "对应的偏置项如下：\n",
    "\n",
    "![image-20230413100151920](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230413100151920.png)"
   ]
  },
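   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The sliding multiply-and-sum described above can be sketched as a small NumPy loop (a minimal illustration with stride 1 and no padding; `conv2d_naive` is a hypothetical helper name, not the book's implementation):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def conv2d_naive(x, w, b=0.0):\n",
     "    # Slide the filter w over the 2D input x; at each position,\n",
     "    # multiply the overlapping elements and sum them.\n",
     "    H, W = x.shape\n",
     "    fh, fw = w.shape\n",
     "    out = np.zeros((H - fh + 1, W - fw + 1))\n",
     "    for i in range(out.shape[0]):\n",
     "        for j in range(out.shape[1]):\n",
     "            out[i, j] = np.sum(x[i:i+fh, j:j+fw] * w) + b\n",
     "    return out\n",
     "\n",
     "x = np.array([[1., 2., 3., 0.],\n",
     "              [0., 1., 2., 3.],\n",
     "              [3., 0., 1., 2.],\n",
     "              [2., 3., 0., 1.]])\n",
     "w = np.array([[2., 0., 1.],\n",
     "              [0., 1., 2.],\n",
     "              [1., 0., 2.]])\n",
     "print(conv2d_naive(x, w))  # a (2, 2) output: [[15. 16.] [ 6. 15.]]\n",
     "```"
    ]
   },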
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.3填充\n",
    "\n",
    "根据上述的例子可以发现，卷积运算之后，输出特征图的尺寸是会变小的。所以如果要保持输入和输出的尺寸不变或者控制输出尺寸变大一点，就需要用到填充。\n",
    "\n",
    "在进行卷积层的处理之前，有时要向输入数据的周围填入固定的数据（比如0等），这称为填充（padding），是卷积运算中经常会用到的处理。比如，在图7-6的例子中，对大小为(4, 4)的输入数据应用了幅度为1的填充。**“幅度为1的填充”是指用幅度为1像素的0填充周围。**\n",
    "\n",
    "![image-20230413103845755](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230413103845755.png)\n",
    "\n",
    ">**使用填充主要是为了调整输出的大小**。比如，对大小为(4, 4)的输入数据应用(3, 3)的滤波器时，输出大小变为(2, 2)，相当于输出大小比输入大小缩小了 2个元素。这在反复进行多次卷积运算的深度网络中会成为问题。为什么呢？因为**如果每次进行卷积运算都会缩小空间，那么在某个时刻输出大小就有可能变为1，导致无法再应用卷积运算**。为了避免出现这样的情况，就要使用填充。在刚才的例子中，将填充的幅度设为 1，那么相对于输入大小(4, 4)，输出大小也保持为原来的(4, 4)。因此，卷积运算就可以在保持空间大小不变的情况下将数据传给下一层。"
   ]
  },
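   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The size bookkeeping can be checked directly with NumPy's `np.pad` (a minimal sketch; the numbers follow the (4, 4) input with width-1 padding from the example above):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "x = np.arange(16).reshape(4, 4)\n",
     "# Surround the (4, 4) array with a 1-pixel border of zeros.\n",
     "padded = np.pad(x, pad_width=1, mode='constant', constant_values=0)\n",
     "print(padded.shape)      # (6, 6)\n",
     "# A 3x3 filter at stride 1 then yields (6 - 3) // 1 + 1 = 4,\n",
     "# so the output is (4, 4) -- the same spatial size as the input.\n",
     "print((6 - 3) // 1 + 1)  # 4\n",
     "```"
    ]
   },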
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.4步幅\n",
    "\n",
    "应用滤波器的位置间隔称为步幅（stride）：\n",
    "\n",
    "![image-20230413155419080](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230413155419080.png)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.5关于尺寸的计算\n",
    "\n",
    "在设计CNN网络结构的时候，往往需要预先知道输出特征图的尺寸，所以输入特征图尺寸、步幅、滤波器尺寸、输出特征图尺寸之间的关系就很重要，下面来推导一下他们之间的关系式：\n",
    "\n",
    "![image-20230413162547835](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230413162547835.png)\n",
    "\n",
    "如图所示，$N$表示输入特征图的边长（这里边长将输入特征图的$height$和$width$统一写成了$N$，用的时候宽高不一定相等），$No$表示输出特征图的边长，$stride$表示步幅，$n$是滤波器边长。现在到移动停止为止需要移动$k$次，则可知$k$次后，滤波器第一列在输入特征图中所处位置为：\n",
    "\n",
    "$$\n",
    "1+stride*k\n",
    "$$\n",
    "\n",
    "所以，最后一列在输入特征图中所处位置为：\n",
    "\n",
    "$$\n",
    "1+stride*k+n-1=stride*k+n\n",
    "$$\n",
    "\n",
    "同时，第$k$次是最后一次移动的判断条件是：第$k$次移动后，滤波器的最后一列不会超出输入特征图的最后一列；且第$k+1$次移动后，最后一列超出最后一列：\n",
    "\n",
    "$$\n",
    "stride*k+n<=N \\\\\n",
    "stride*(k+1)+n>N\n",
    "$$\n",
    "\n",
    "整理可得：\n",
    "\n",
    "$$\n",
    "\\frac{N-n}{stride}-1<k<=\\frac{N-n}{stride}\n",
    "$$\n",
    "\n",
    "得出$k$的取值为：\n",
    "\n",
    "$$\n",
    "k=\\lfloor \\frac{N-n}{stride} \\rfloor\n",
    "$$\n",
    "\n",
    "最后，输入特征图的边长表示如下：\n",
    "\n",
    "$$\n",
    "No=\\lfloor \\frac{N-n}{stride} \\rfloor+1\n",
    "$$\n",
    "\n",
    "【注】：上述公式没加填充的因素，如果要加，就在输入特征图边长$N$上多加一个$2*padding$（$padding$是填充量）。\n",
    "\n",
    "【举例】：输入大小：(28, 31)；填充：2；步幅：3；滤波器大小：(5, 5)\n",
    "\n",
    "$$\n",
    "W=\\frac{(28+2*2)-5}{3}+1=10 \\\\\n",
    "H=\\frac{(31+2*2)-5}{3}+1=11\n",
    "$$"
   ]
  },
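   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The formula above can be written as a small helper (`conv_output_size` is a hypothetical name, used here just to check the worked example):\n",
     "\n",
     "```python\n",
     "def conv_output_size(n, filter_size, stride=1, pad=0):\n",
     "    # floor((N + 2*pad - filter_size) / stride) + 1\n",
     "    return (n + 2 * pad - filter_size) // stride + 1\n",
     "\n",
     "# input (28, 31), padding 2, stride 3, filter (5, 5)\n",
     "print(conv_output_size(28, 5, stride=3, pad=2))  # 10\n",
     "print(conv_output_size(31, 5, stride=3, pad=2))  # 11\n",
     "```"
    ]
   },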
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.6三维数据的卷积运算\n",
    "\n",
    "在讲三维数据之前，首先需要了解一下一张彩色图像是怎么构成的？ 众所周知，一张图像是由很多个像素点构成的。根据三原色原理，任何颜色都能用红、绿、蓝三原色调出来，所以彩色图像的每个像素点也都是要用这三种颜色调出来的，这三原色在计算机中称为R(red)、G(green)、B(blue)三通道。每个像素点都由三个个颜色组成，所以一张彩色图就由三个通道堆叠组成，如下图所示：\n",
    "\n",
    "![image-20230413174253336](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230413174253336.png)\n",
    "\n",
    "下面回到卷积的内容上，通道方向上有多个特征图时，会按通道进行输入数据和滤波器的卷积运算，并将结果相加，从而得到输出。\n",
    "\n",
    "![image-20230413174432909](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230413174432909.png)\n",
    "\n",
    "![image-20230413174510385](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230413174510385.png)\n",
    "\n",
    "【注意】：在3维数据的卷积运算中，**输入数据和滤波器的通道数要设为相同的值**。同时，**每个通道的滤波器大小要全部相同**。"
   ]
  },
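   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The channel-by-channel multiply-sum-then-add rule can be sketched as follows (`conv3d_naive` is a hypothetical helper; stride 1, no padding; input and filter share the channel count C):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def conv3d_naive(x, w):\n",
     "    # x: (C, H, W) input, w: (C, fh, fw) filter with the same channel count.\n",
     "    # Convolve each channel separately, then add the per-channel results.\n",
     "    C, H, W = x.shape\n",
     "    _, fh, fw = w.shape\n",
     "    out = np.zeros((H - fh + 1, W - fw + 1))\n",
     "    for c in range(C):\n",
     "        for i in range(out.shape[0]):\n",
     "            for j in range(out.shape[1]):\n",
     "                out[i, j] += np.sum(x[c, i:i+fh, j:j+fw] * w[c])\n",
     "    return out\n",
     "\n",
     "x = np.random.rand(3, 4, 4)  # 3-channel input\n",
     "w = np.random.rand(3, 3, 3)  # 3-channel filter\n",
     "print(conv3d_naive(x, w).shape)  # (2, 2): a single-channel output map\n",
     "```"
    ]
   },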
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.7三维数据的形状表示\n",
    "\n",
    "把3维数据表示为多维数组时，书写顺序为（channel, height, width）。比如，通道数为C、高度为H、长度为W的数据的形状可以写成（C, H, W）。滤波器也一样，要按（channel, height, width）的顺序书写。比如，通道数为C、滤波器高度为FH（Filter Height）、长度为FW（Filter Width）时，可以写成（C, FH, FW）。\n",
    "\n",
    "![image-20230414090817565](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414090817565.png)\n",
    "\n",
    "在上图中，数据输出是1张特征图。所谓1张特征图，换句话说，就是通道数为1的特征图。那么，如果要在通道方向上也拥有多个卷积运算的输出，该怎么做呢？为此，就需要用到**多个滤波器**（权重）。\n",
    "\n",
    "![image-20230414091118487](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414091118487.png)\n",
    "\n",
    "如上图所示，关于卷积运算的滤波器，也必须考虑滤波器的数量。因此，作为4维数据，滤波器的权重数据要按 **(output_channel, input_channel, height, width)** 的顺序书写。比如，通道数为3、大小为5 × 5的滤波器有20个时，可以写成(20, 3, 5, 5)。\n",
    "\n",
    "卷积运算中（和全连接层一样）存在偏置。在上图的例子中，如果进一步追加偏置的加法运算处理，则结果如下图所示：\n",
    "\n",
    "![image-20230414091802851](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414091802851.png)\n",
    "\n",
    "这里看到偏置项和第一次的输出的形状不同，怎么能实现相加？这里就运用到**Numpy的广播机制**：NumPy中，形状不同的数组之间也可以进行运算。\n",
    "\n",
    "![image-20230414094207979](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414094207979.png)\n",
    "\n",
    "总而言之，就是先扩展，然后实现元素级别运算。回到卷积运算中，偏置项$(FN,1,1)$会水平，竖直扩展成$(FN,OH,OW)$，简单来说就是**每一个通道，都要加上同一个值**。"
   ]
  },
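   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The bias broadcast can be checked directly (a minimal sketch of the $(FN,1,1)$ to $(FN,OH,OW)$ expansion):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "FN, OH, OW = 2, 3, 3\n",
     "out = np.zeros((FN, OH, OW))                 # one output map per filter\n",
     "b = np.array([1.0, 10.0]).reshape(FN, 1, 1)  # one bias per output channel\n",
     "\n",
     "y = out + b  # b is broadcast from (FN, 1, 1) to (FN, OH, OW)\n",
     "print(y[0])  # every element of channel 0 is 1.0\n",
     "print(y[1])  # every element of channel 1 is 10.0\n",
     "```"
    ]
   },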
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.8批处理\n",
    "\n",
    "之前的全连接神经网络的实现对应了批处理，我们希望卷积运算也同样对应批处理。为此，需要将在**各层间传递的数据保存为4维数据（一条数据是3维，有N条所以是4维，多的一维是数据的条数）**。具体地讲，就是按 **(batch_num, channel, height, width)** 的顺序保存数据。\n",
    "\n",
    "![image-20230414101144961](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414101144961.png)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 三、池化层\n",
    "\n",
    "池化是缩小高、长方向上的空间的运算。下图进行的是将2 × 2的区域集约成1个元素的处理，缩小空间大小，按步幅2进行2 × 2的Max池化时的处理顺序。\n",
    "\n",
    "![image-20230414103648363](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414103648363.png)\n",
    "\n",
    "一般来说，**池化的窗口大小会和步幅设定成相同的值**。比如，3 × 3的窗口的步幅会设为3，4 × 4的窗口的步幅会设为4等。\n",
    "\n",
    ">除了Max池化之外，还有Average池化等。相对于Max池化是从目标区域中取出最大值，Average池化则是计算目标区域的平均值。在图像识别领域，主要使用Max池化。因此，本书中说到“池化层”时，指的是Max池化。\n",
    "\n",
    "**池化层的特征**\n",
    "\n",
    "- 没有要学习的参数\n",
    "\n",
    "- 通道数不发生变化\n",
    "\n",
    "    ![image-20230414104229565](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414104229565.png)\n",
    "\n",
    "- 对微小的位置变化具有鲁棒性（健壮）\n",
    "\n",
    "    输入数据发生微小偏差时，池化仍会返回相同的结果。因此，池化对输入数据的微小偏差具有鲁棒性。比如，3 × 3的池化的情况下，如下图所示，池化会吸收输入数据的偏差：\n",
    "\n",
    "    ![image-20230414104347053](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414104347053.png)\n",
    "\n",
    "    输入数据在宽度方向上只偏离1个元素时，输出仍为相同的结果。"
   ]
  },
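   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The 2 × 2, stride-2 Max pooling above can be sketched for a single channel (`max_pool2d` is a hypothetical helper name):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def max_pool2d(x, size=2, stride=2):\n",
     "    # Take the maximum of each size x size window.\n",
     "    H, W = x.shape\n",
     "    out_h = (H - size) // stride + 1\n",
     "    out_w = (W - size) // stride + 1\n",
     "    out = np.zeros((out_h, out_w))\n",
     "    for i in range(out_h):\n",
     "        for j in range(out_w):\n",
     "            hs, ws = i * stride, j * stride\n",
     "            out[i, j] = np.max(x[hs:hs+size, ws:ws+size])\n",
     "    return out\n",
     "\n",
     "x = np.array([[1., 2., 3., 0.],\n",
     "              [0., 1., 2., 4.],\n",
     "              [1., 0., 4., 2.],\n",
     "              [3., 2., 0., 1.]])\n",
     "print(max_pool2d(x))  # [[2. 4.] [3. 4.]]\n",
     "```"
    ]
   },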
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 四、卷积层与池化层的实现\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4.1四维数组\n",
    "\n",
    "首先来看一下四维数据的使用："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(10, 1, 28, 28)\n",
      "[[[0.89477601 0.42078335 0.83068344 0.31242428 0.89812909 0.90619721\n",
      "   0.54501538 0.25691006 0.8878828  0.64946369 0.64018247 0.41833143\n",
      "   0.12655946 0.18512659 0.69615588 0.06668205 0.34518392 0.47677963\n",
      "   0.7773527  0.97389499 0.40991714 0.37686073 0.27789164 0.94737628\n",
      "   0.71636654 0.11933569 0.80524605 0.42224317]\n",
      "  [0.20851567 0.96926236 0.78410033 0.5513441  0.08519034 0.91275512\n",
      "   0.6066     0.61426685 0.7881379  0.82330811 0.66509579 0.90337829\n",
      "   0.31987286 0.49873569 0.86187703 0.54181071 0.04532741 0.70551471\n",
      "   0.09109872 0.81281964 0.9154662  0.02895876 0.2807245  0.68508463\n",
      "   0.80744845 0.44591847 0.26894088 0.81431507]\n",
      "  [0.61465449 0.28246968 0.04949361 0.52256625 0.74277186 0.19768757\n",
      "   0.05975234 0.62252855 0.39429718 0.11465035 0.06841329 0.05937435\n",
      "   0.36725144 0.25865115 0.82749424 0.76793746 0.89127583 0.69345718\n",
      "   0.1371296  0.48625334 0.90851392 0.46619112 0.01788243 0.7622822\n",
      "   0.52148608 0.84683201 0.80228181 0.02643945]\n",
      "  [0.36592758 0.71840651 0.90341958 0.83380214 0.34933007 0.07546315\n",
      "   0.8364125  0.33871497 0.28556964 0.47568806 0.44238631 0.27372015\n",
      "   0.64077302 0.56463763 0.93714642 0.24462779 0.70516352 0.51857415\n",
      "   0.15185196 0.05948992 0.70266043 0.48849743 0.57192588 0.02294758\n",
      "   0.26188214 0.38594867 0.57469823 0.26821665]\n",
      "  [0.7523919  0.92330265 0.47731039 0.31634707 0.57121255 0.1265382\n",
      "   0.13747394 0.84025315 0.51605125 0.75668206 0.77326611 0.33140197\n",
      "   0.66660304 0.65007485 0.8026404  0.62293928 0.10684237 0.71635597\n",
      "   0.84423272 0.50115684 0.3484431  0.30921804 0.43603871 0.40058475\n",
      "   0.94977061 0.29940884 0.14501162 0.25318582]\n",
      "  [0.67844268 0.29364686 0.38919796 0.43301853 0.30860331 0.81201467\n",
      "   0.44822746 0.75615181 0.17109872 0.17149466 0.30713868 0.05019453\n",
      "   0.51664071 0.27942072 0.75786861 0.4394154  0.4953648  0.10562174\n",
      "   0.82099666 0.05348664 0.89138621 0.45125637 0.73548963 0.94917427\n",
      "   0.33770318 0.79665626 0.60237369 0.39691697]\n",
      "  [0.19971041 0.98300976 0.06706808 0.08452768 0.48411652 0.25687674\n",
      "   0.44461174 0.25534537 0.15841541 0.78881021 0.76113798 0.47702151\n",
      "   0.34683705 0.6075827  0.88440681 0.12754308 0.38378713 0.80896294\n",
      "   0.81082296 0.69085985 0.78534525 0.70544967 0.74023495 0.24159059\n",
      "   0.08231929 0.57461076 0.26787432 0.03902454]\n",
      "  [0.04893575 0.1394306  0.84250779 0.5698432  0.11931529 0.66175246\n",
      "   0.39011588 0.38243686 0.21634095 0.89719521 0.6363264  0.39530019\n",
      "   0.9629894  0.02876939 0.96460708 0.44920609 0.33953318 0.94039744\n",
      "   0.42579029 0.25781795 0.52061062 0.88299226 0.6254698  0.60089787\n",
      "   0.92936198 0.27743358 0.64682355 0.62459648]\n",
      "  [0.46322765 0.9167663  0.08323219 0.88994134 0.08275009 0.18821972\n",
      "   0.63422464 0.02011717 0.836208   0.07561582 0.58554844 0.54432021\n",
      "   0.18292083 0.39613625 0.67718506 0.88174653 0.30515473 0.00445776\n",
      "   0.70307203 0.9813052  0.52974981 0.14058178 0.74200397 0.0427594\n",
      "   0.47556161 0.88933924 0.41390136 0.76295653]\n",
      "  [0.07763414 0.87118186 0.95974355 0.52858962 0.44103882 0.53509864\n",
      "   0.11188039 0.98297149 0.99314915 0.30381175 0.42962408 0.62406008\n",
      "   0.92315939 0.24310798 0.20678662 0.87052264 0.37870532 0.59180929\n",
      "   0.33137369 0.76639126 0.12588553 0.57404188 0.52457665 0.54315172\n",
      "   0.2217676  0.79279245 0.87106016 0.12355044]\n",
      "  [0.87134458 0.91803037 0.44531851 0.80327356 0.79876464 0.72342475\n",
      "   0.14023088 0.77903254 0.83126709 0.23701669 0.77635289 0.51378388\n",
      "   0.35182854 0.57614892 0.57969238 0.87723503 0.07467325 0.73397405\n",
      "   0.06261478 0.1040751  0.74736182 0.58353357 0.3492302  0.2779219\n",
      "   0.55071217 0.27254119 0.10684571 0.61596189]\n",
      "  [0.99101848 0.58879147 0.79282637 0.4999448  0.32027103 0.51991979\n",
      "   0.02050738 0.68401355 0.82116235 0.82782997 0.35838792 0.5614431\n",
      "   0.06839678 0.05910571 0.91182781 0.67784899 0.09886633 0.32748415\n",
      "   0.00390558 0.08346346 0.88174419 0.75891237 0.77115626 0.57782995\n",
      "   0.76097376 0.73147069 0.37530966 0.74676254]\n",
      "  [0.6899985  0.91248951 0.82160083 0.28614685 0.68073695 0.79649286\n",
      "   0.30201859 0.76231041 0.45610043 0.65262122 0.8529736  0.2660845\n",
      "   0.15028408 0.78274768 0.63846174 0.76546099 0.54336312 0.04092128\n",
      "   0.19713559 0.03852192 0.7280426  0.15187698 0.44524725 0.94492347\n",
      "   0.38989765 0.82641813 0.14006352 0.05444277]\n",
      "  [0.86299881 0.91035634 0.39966708 0.69800403 0.21312901 0.81520678\n",
      "   0.32535084 0.27217121 0.92064319 0.53698478 0.41869264 0.03327589\n",
      "   0.01263896 0.97214326 0.75625576 0.54397478 0.02907644 0.38237648\n",
      "   0.79547382 0.70710502 0.9634618  0.30211588 0.91972453 0.60109536\n",
      "   0.90112578 0.15851686 0.86784734 0.53094852]\n",
      "  [0.25710833 0.67739642 0.06711421 0.56058386 0.365625   0.07159294\n",
      "   0.7629627  0.88254675 0.45892138 0.46526641 0.41161762 0.68222772\n",
      "   0.93808928 0.59640077 0.54948193 0.97724106 0.66761155 0.97468129\n",
      "   0.05316125 0.22203714 0.91248638 0.22683721 0.56573058 0.85636209\n",
      "   0.01690438 0.64009605 0.31966573 0.31715885]\n",
      "  [0.05821983 0.03932398 0.16858662 0.09631394 0.84629812 0.62136589\n",
      "   0.40660309 0.30126358 0.51268819 0.2432694  0.00252158 0.54478941\n",
      "   0.08423669 0.72369286 0.80746709 0.50565345 0.16273847 0.16984896\n",
      "   0.41460016 0.59437714 0.46173276 0.44348928 0.3330968  0.50436322\n",
      "   0.40741954 0.44337101 0.98692535 0.45119139]\n",
      "  [0.00626779 0.03601114 0.8942715  0.04488802 0.63551831 0.26909043\n",
      "   0.21070506 0.13926264 0.24929295 0.8093025  0.42705994 0.17819103\n",
      "   0.88412758 0.14880619 0.1543305  0.46899777 0.37399624 0.6671359\n",
      "   0.5288611  0.81459756 0.63906728 0.69668148 0.40414367 0.66875806\n",
      "   0.62975444 0.11524692 0.47879743 0.56687162]\n",
      "  [0.06048522 0.8074432  0.83162393 0.47765681 0.38371281 0.38623462\n",
      "   0.47910826 0.40299787 0.95893014 0.84061808 0.66034208 0.52845211\n",
      "   0.10624951 0.64207496 0.65523385 0.20846226 0.07877298 0.65429943\n",
      "   0.06981391 0.16907617 0.014041   0.46017734 0.85782155 0.17928828\n",
      "   0.005993   0.76310192 0.20806434 0.20397643]\n",
      "  [0.36074566 0.69943152 0.7289739  0.0607895  0.88952913 0.90768529\n",
      "   0.92871228 0.17199366 0.67858531 0.81130901 0.92586442 0.60069336\n",
      "   0.19467803 0.99462283 0.9047641  0.67990392 0.37354061 0.14113846\n",
      "   0.12254567 0.64561348 0.52292623 0.71441612 0.47764199 0.63653564\n",
      "   0.62538177 0.97466316 0.68036969 0.93784836]\n",
      "  [0.1637844  0.58175397 0.934378   0.88099484 0.5807166  0.73086123\n",
      "   0.4961762  0.85958782 0.46452021 0.13216117 0.0288009  0.38491769\n",
      "   0.20943135 0.08372372 0.40432007 0.75277931 0.82869656 0.76054366\n",
      "   0.70163731 0.84091152 0.62921336 0.03356398 0.22836828 0.05629627\n",
      "   0.57666441 0.12406106 0.74518007 0.38914489]\n",
      "  [0.41926511 0.49998884 0.81155763 0.41400958 0.46374171 0.38706831\n",
      "   0.20395219 0.33459951 0.1224038  0.71443098 0.26432947 0.1089553\n",
      "   0.563134   0.27563231 0.45619074 0.87825272 0.33141024 0.78684014\n",
      "   0.94979353 0.24742042 0.41307437 0.35261094 0.34252735 0.94975904\n",
      "   0.48069342 0.53643772 0.81783476 0.43987963]\n",
      "  [0.93559106 0.19665097 0.52630166 0.8810552  0.21910178 0.34560477\n",
      "   0.59148597 0.9773665  0.55953155 0.65151758 0.46897074 0.81730565\n",
      "   0.23851887 0.41774136 0.58870206 0.62504959 0.44072253 0.13888841\n",
      "   0.37239804 0.29837356 0.19043774 0.42739745 0.36065026 0.42462695\n",
      "   0.56613214 0.95667934 0.24562963 0.90223124]\n",
      "  [0.82850061 0.75068617 0.87515027 0.75396896 0.92001226 0.81851025\n",
      "   0.2165742  0.92600584 0.38726755 0.12326184 0.91285437 0.03642893\n",
      "   0.8656344  0.73380138 0.05771607 0.70388864 0.44992521 0.97600809\n",
      "   0.57297588 0.85545136 0.19496232 0.71317758 0.82922551 0.42074798\n",
      "   0.75710613 0.07398833 0.26052018 0.86796052]\n",
      "  [0.98913006 0.67297912 0.7037425  0.61022497 0.48760311 0.59493598\n",
      "   0.75803618 0.1727801  0.83922932 0.1894733  0.25056284 0.0997731\n",
      "   0.40960938 0.46096598 0.85090565 0.63328811 0.09914588 0.1994895\n",
      "   0.29372178 0.62361208 0.47610051 0.59962725 0.81287177 0.52567863\n",
      "   0.86488024 0.92136286 0.79500678 0.12100776]\n",
      "  [0.75504312 0.38323547 0.06167084 0.86400642 0.84066528 0.58010472\n",
      "   0.40060364 0.3949128  0.74157401 0.96521053 0.42755523 0.80918141\n",
      "   0.27402812 0.90906157 0.5019731  0.30021928 0.32788243 0.96008252\n",
      "   0.61946642 0.2214393  0.25776974 0.20132011 0.62571069 0.45320409\n",
      "   0.23707678 0.01143668 0.75435204 0.82080246]\n",
      "  [0.13770741 0.09861826 0.19716974 0.40438426 0.71681418 0.19293565\n",
      "   0.70547091 0.71802759 0.00289717 0.44593006 0.9995023  0.00224216\n",
      "   0.11491291 0.03292976 0.11174399 0.42948739 0.61016359 0.27204911\n",
      "   0.94084653 0.15734273 0.34652461 0.6137508  0.31188875 0.30118946\n",
      "   0.04523321 0.56118165 0.6879391  0.19124835]\n",
      "  [0.01958482 0.36120338 0.0702429  0.22779748 0.44468221 0.45716397\n",
      "   0.01166294 0.60169237 0.39256835 0.21842837 0.82367675 0.41148083\n",
      "   0.94713305 0.43239666 0.06922194 0.78066246 0.32622634 0.29945474\n",
      "   0.75721359 0.22301189 0.09446642 0.34007404 0.17944679 0.99031401\n",
      "   0.73691376 0.21519586 0.03126503 0.55781139]\n",
      "  [0.50731192 0.39814511 0.98714663 0.55905389 0.32880712 0.37677402\n",
      "   0.93539891 0.16614403 0.42213576 0.12946302 0.92993771 0.83756642\n",
      "   0.65023715 0.05360225 0.87809094 0.30641206 0.48743779 0.53709845\n",
      "   0.14924156 0.08413936 0.51050501 0.56912445 0.76643776 0.36563234\n",
      "   0.41988594 0.03697139 0.64877828 0.76494524]]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "x=np.random.rand(10,1,28,28) # 10个单通道28*28数据（10个灰度图）\n",
    "\n",
    "print(x.shape) # 查看x的形状\n",
    "\n",
    "print(x[0]) # 查看x中第一条数据"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4.2 基于im2col的展开\n",
    "\n",
    "如果老老实实地实现卷积运算，要重复好几层的for语句，这样的实现有点麻烦，而且，NumPy中存在使用for语句后处理变慢的缺点（**NumPy中，访问元素时最好不要用for语句**）。这里，我们不使用for语句，而是使用im2col这个便利的函数进行简单的实现。\n",
    "\n",
    "im2col是一个函数，**将输入数据展开以适合滤波器**（权重）。如下图所示，对3维的输入数据应用im2col后，数据转换为2维矩阵（正确地讲，是把包含批数量的4维数据转换成了2维数据）。\n",
    "\n",
    "![image-20230414135358651](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414135358651.png)\n",
    "\n",
    "im2col会在所有应用滤波器的地方进行这个展开处理，\n",
    "\n",
    "![image-20230414164642417](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414164642417.png)\n",
    "\n",
    "使用im2col展开输入数据后，之后就只需将卷积层的滤波器（权重）纵向展开为1列，并计算2个矩阵的乘积即可。这和全连接层的Affine层进行的处理基本相同。\n",
    "\n",
    "![image-20230414164734607](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414164734607.png)\n",
    "\n",
    "卷积运算的滤波器处理的细节：将滤波器纵向展开为1列，并计算和im2col展开的数据的矩阵乘积，最后转换（reshape）为输出数据的大小"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(8, 75)\n"
     ]
    }
   ],
   "source": [
    "def im2col(input_data, filter_h, filter_w, stride=1, pad=0):\n",
    "    \"\"\"\n",
    "\n",
    "    Parameters\n",
    "    ----------\n",
    "    input_data : 由(数据量, 通道, 高, 长)的4维数组构成的输入数据\n",
    "    filter_h : 滤波器的高\n",
    "    filter_w : 滤波器的长\n",
    "    stride : 步幅\n",
    "    pad : 填充\n",
    "\n",
    "    Returns\n",
    "    -------\n",
    "    col : 2维数组\n",
    "    \"\"\"\n",
    "\n",
    "    # 先计算输出特征图的尺寸\n",
    "    N, C, H, W = input_data.shape\n",
    "    out_h = (H + 2*pad - filter_h)//stride + 1 #'//'表示先做除法，再向下取整\n",
    "    out_w = (W + 2*pad - filter_w)//stride + 1\n",
    "\n",
    "    \n",
    "    img = np.pad(input_data, [(0,0), (0,0), (pad, pad), (pad, pad)], 'constant')\n",
    "    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))\n",
    "\n",
    "    for y in range(filter_h):\n",
    "        y_max = y + stride*out_h\n",
    "        for x in range(filter_w):\n",
    "            x_max = x + stride*out_w\n",
    "            col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]\n",
    "            \n",
    "    col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)\n",
    "    return col\n",
    "\n",
    "x1 = np.random.rand(2, 3, 7, 7)\n",
    "col1 = im2col(x1, 5, 5, stride=2, pad=0)\n",
    "print(col1.shape)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上述代码的解释：\n",
    "\n",
    "```py\n",
    "out_h = (H + 2*pad - filter_h)//stride + 1\n",
    "out_w = (W + 2*pad - filter_w)//stride + 1\n",
    "```\n",
    "\n",
    "这两行计算输出特征图的高度 out_h 和宽度 out_w。\n",
    "\n",
    "```py\n",
    "img = np.pad(input_data, [(0,0), (0,0), (pad, pad), (pad, pad)], 'constant')\n",
    "```\n",
    "\n",
    "这一行使用 np.pad 函数在输入数据的四个维度上添加填充。它接收四个参数：`input_data`（要填充的数组）、`[(0,0), (0,0), (pad, pad), (pad, pad)]`（指定在每个轴上添加的填充数量）、`'constant'`（指定填充类型为常数）和 `0`（指定填充值为 0）。\n",
    "\n",
    "`[(0,0), (0,0), (pad, pad), (pad, pad)]` 是 np.pad 函数的第二个参数 `pad_width` 的值。它指定了在每个轴上添加的填充数量。在这种情况下，输入数据是一个四维数组，因此 `pad_width` 参数也是一个四元组，每个元素对应一个轴。每个元组中的两个值分别指定了在该轴的开头和结尾添加的填充数量。\n",
    "\n",
    "例如，在 im2col 函数中，pad_width 的值为 `[(0,0), (0,0), (pad, pad), (pad, pad)]`。这意味着在第一个轴和第二个轴上不添加填充（填充数量为 0），而在第三个轴和第四个轴上，在开头和结尾都添加 pad 个填充值。\n",
    "\n",
    "`col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))`的作用是保存滤波器每个位置移动过程中，所经过的位置。现在对这个可能无法理解，在后面会详细讲解。\n",
    "\n",
    "```py\n",
    "for y in range(filter_h):\n",
    "    y_max = y + stride*out_h\n",
    "    for x in range(filter_w):\n",
    "        x_max = x + stride*out_w\n",
    "        col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]\n",
    "```\n",
    "\n",
    "这个for循环是该函数的关键。`y_max = y + stride*out_h`的意思是，行初始位置`y`移动`out_h`次后，到达最终停止位置。实际上，应该是**到达的最终位置的后stride行**（类似**左闭右开**的意思，多移了一次）。因为我们在（2.5）节计算尺寸的时候，算出输出特征图的边长`out_h`是比移动次数`k`多1的。这里要多往后取一次的原因是，数组的切片索引都是左闭右开`y:y_max:stride`。\n",
    "\n",
    "所以，\n",
    "\n",
    "```py\n",
    "y_max = y + stride*out_h\n",
    "x_max = x + stride*out_w\n",
    "```\n",
    "\n",
    "这两句话用两个for循环共同确定了滤波器中每个位置的元素，在经过移动后，最终到达的位置。求最终到达的位置`x_max`,`y_max`的作用是什么呢？是为了使用切片索引`x:x_max:stride`,`y:y_max:stride`，从而求出滤波器中每个元素在移动过程中经过的位置。\n",
    "\n",
    "例如：现有输入特征图$shape=(N=1,C=3,H=7,W=7)$，步幅$stride=2$，滤波器$shape=(C=3,H=5,W=5)$，填充$pad=0$：\n",
    "\n",
    "![image-20230415195618867](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230415195618867.png)\n",
    "\n",
    "通过计算得到输出特征图$(C=3,out\\_h=2,out\\_w=2)$，假设现在取第一行第二个元素$y=0$，$x=1$。所以，$y\\_max=y+stride*out\\_h=4$，$x\\_max=x+stride*out\\_w=5$：\n",
    "\n",
    "![image-20230415200525482](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230415200525482.png)\n",
    "\n",
    "所以，`img[:, :, y:y_max:stride, x:x_max:stride]`取到的元素如下：\n",
    "\n",
    "![image-20230415210509263](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230415210509263.png)\n",
    "\n",
    "同时，由于$InputImg$是一个四维矩阵$(1,3,7,7)$，即1张3通道的图像。所以，`img[:, :, y:y_max:stride, x:x_max:stride]`得到的$Img\\_$的中，图像数量和通道数量都保持不变，即：新矩阵$shape=(1,3,2,2)$。\n",
    "\n",
    "现在，获取到的这个矩阵$Img\\_$要存在哪里呢？这里就用到之前讲的六维矩阵$col$。观察新矩阵$Img\\_$的形状$shape=(1,3,2,2)$，很容易发现，1就是$N$的取值，3就是$C$的取值，2就是输出特征图$h$和$w$的长度，所以$Img\\_$的形状$shape=(N,C,out\\_h,out\\_w)$。又因为保存的时候，$col$某几维的形状要和$Img\\_$的形状相同，所以$col$要设计成$(N, C, filter\\_h, filter\\_w, out\\_h, out\\_w)$，其中$filter\\_w, out\\_h$表示是滤波器的哪一个位置的元素。\n",
    "\n",
    "所以，保存$Img\\_$应该写为：\n",
    "\n",
    "```py\n",
    "col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]\n",
    "```\n",
    "\n",
    "其中，$y,x$指示的是滤波器中第几行第几列的元素，在上面的例子中，$y=0,x=1$。\n",
    "\n",
    "总而言之，上面这么多步骤，就是把滤波器中每个元素在移动过程中的可能取值全部保存下来了。\n",
    "\n",
    "```py\n",
    "col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)\n",
    "```\n",
    "\n",
    "$transpose(0, 4, 5, 1, 2, 3)$ 是一个在 NumPy数组上调用的方法，它重新排列数组的维度。括号内的数字表示轴的新顺序。在这种情况下，第一个轴（0）保持不变，第二个轴（1）移动到第四个位置，第三个轴（2）移动到第五个位置，依此类推。即：$(N, C, filter\\_h, filter\\_w, out\\_h, out\\_w)$变成$(N, out\\_h, out\\_w, C, filter\\_h, filter\\_w)$\n",
    "\n",
    "reshape函数执行的过程如下$(x.shape=(2,3,7,7),stride=2,pad=0,fileter.shape=(3,5,5))$：\n",
    "\n",
    "<img src=\"https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230416111416405.png\" alt=\"image-20230416111416405\" style=\"zoom:150%;\" />\n",
    "\n",
    "由上图可知，六维数组$col$就可以看成是**两个三维数组的嵌套**（一个三维数组的元素还是一个三维数组），所以**六维数组变二维就是做两次三维数组到一维的转换**（图中的2处3D to 1D）。同时，很容易就能发现，三维数组$(C,filter\\_h,filter\\_w)$就是输入特征图中**每一次应用滤波器的地方(即与滤波器进行计算的部分)**，因为有8个这样的三维数组，所以最后输出的纵轴长度是8，横轴方向就是把这个三维数组拉成一维数组。\n",
    "\n",
    "![image-20230414164642417](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414164642417.png)\n",
    "\n",
    "![image-20230414164734607](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230414164734607.png)\n",
    "\n",
    "有了上面的解释，现在就能理解为什么在$reshape$之前要对$col$进行`transpose(0, 4, 5, 1, 2, 3)`了。因为一开始col数组是针对滤波器中每一个元素在移动过程中会应用的地方，而通过`transpose(0, 4, 5, 1, 2, 3)`转换之后，就整合成了针对一个完整滤波器在移动过程中会应用的地方。即：从局部到整体的变化。"
   ]
  },
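   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a cross-check of the whole expansion, the matrix product of the im2col output with an unrolled filter should equal a direct sliding-window convolution. A self-contained sketch (the im2col body is repeated so this cell runs on its own; the shapes follow the (1, 3, 7, 7) example with a 5 × 5 filter and stride 2):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def im2col(input_data, filter_h, filter_w, stride=1, pad=0):\n",
     "    N, C, H, W = input_data.shape\n",
     "    out_h = (H + 2*pad - filter_h)//stride + 1\n",
     "    out_w = (W + 2*pad - filter_w)//stride + 1\n",
     "    img = np.pad(input_data, [(0,0), (0,0), (pad, pad), (pad, pad)], 'constant')\n",
     "    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))\n",
     "    for y in range(filter_h):\n",
     "        y_max = y + stride*out_h\n",
     "        for x in range(filter_w):\n",
     "            x_max = x + stride*out_w\n",
     "            col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]\n",
     "    return col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)\n",
     "\n",
     "np.random.seed(0)\n",
     "x = np.random.rand(1, 3, 7, 7)\n",
     "w = np.random.rand(1, 3, 5, 5)  # a single filter\n",
     "\n",
     "col = im2col(x, 5, 5, stride=2)    # (4, 75): 4 filter positions, 75 values each\n",
     "out = col.dot(w.reshape(1, -1).T)  # (4, 1): one product-sum per position\n",
     "\n",
     "# naive reference: slide the filter and multiply-sum directly\n",
     "ref = np.array([[np.sum(x[0, :, i*2:i*2+5, j*2:j*2+5] * w[0])\n",
     "                 for j in range(2)] for i in range(2)])\n",
     "print(np.allclose(out.reshape(2, 2), ref))  # True\n",
     "```"
    ]
   },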
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面用im2col实现卷积层："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys,os\n",
    "\n",
    "current_file_path = os.path.dirname(os.path.realpath('__file__')) # 获取ipynb文件的路径\n",
    "Demo_path = os.path.join(current_file_path, 'Demo') # 组合成Demo文件夹所在路径\n",
    "sys.path.append(Demo_path) # 添加路径\n",
    "\n",
    "from common.util import im2col\n",
    "\n",
    "class Convolution:\n",
    "    def __init__(self,W,b,stride=1,pad=0) -> None:\n",
    "        self.W=W\n",
    "        self.b=b\n",
    "        self.stride=stride\n",
    "        self.pad=pad\n",
    "\n",
    "    def forward(self,x):\n",
    "        FN,C,FH,FW=self.W.shape\n",
    "        N,C,H,W=x.shape\n",
    "        \n",
    "        out_h = int(1 + (H + 2*self.pad - FH) / self.stride)\n",
    "        out_w = int(1 + (W + 2*self.pad - FW) / self.stride)\n",
    "\n",
    "        col = im2col(x, FH, FW, self.stride, self.pad)\n",
    "        col_W = self.W.reshape(FN, -1).T # 滤波器的展开\n",
    "        out = np.dot(col, col_W) + self.b\n",
    "\n",
    "        out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2)\n",
    "\n",
    "        return out"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4.3池化层的实现\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Pooling:\n",
    "    def __init__(self, pool_h, pool_w, stride=1, pad=0):\n",
    "        self.pool_h = pool_h\n",
    "        self.pool_w = pool_w\n",
    "        self.stride = stride\n",
    "        self.pad = pad\n",
    "    def forward(self, x):\n",
    "        N, C, H, W = x.shape\n",
    "        out_h = int(1 + (H - self.pool_h) / self.stride)\n",
    "        out_w = int(1 + (W - self.pool_w) / self.stride)\n",
    "        # 展开(1)\n",
    "        col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad)\n",
    "        col = col.reshape(-1, self.pool_h*self.pool_w)\n",
    "        # 最大值(2)\n",
    "        out = np.max(col, axis=1)\n",
    "        # 转换(3)\n",
    "        out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)\n",
    "        return out\n",
    "    \n",
    "# x=np.array([[[[1,2,3,0],\n",
    "#               [0,1,2,4],\n",
    "#               [1,0,4,2],\n",
    "#               [3,2,0,1]],\n",
    "#              [[3,0,6,5],\n",
    "#               [4,2,4,3],\n",
    "#               [3,0,1,0],\n",
    "#               [2,3,3,1]],\n",
    "#              [[4,2,1,2],\n",
    "#               [0,1,0,4], \n",
    "#               [3,0,6,2],\n",
    "#               [4,2,4,5]]]])\n",
    "# pooling = Pooling(2,2,2,0)\n",
    "# out=pooling.forward(x)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "原图配的有误，所以整个计算过程我自己重新画了一张：\n",
    "\n",
    "<img src=\"https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230417145858649.png\" alt=\"image-20230417145858649\" style=\"zoom:150%;\" />\n",
    "\n",
    "这里解释一下tranpose的实现原理，上图中是四维的数据，这里就省略了维度$N$，仅讨论三维的情况：\n",
    "\n",
    "![image-20230417155148754](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20230417155148754.png)\n",
    "\n",
    "从上图中可以发现，$transpose$就是确定了每个维度（轴axis）的先后顺序。$transpose$前，`(C,H,W)=(axis0,axis1,axis2)`，现在想要将输出图像变成如图所示的样子，就要调整轴的位置，新图的$C'$就是原来的$W$，$H'$就是原来的$C$，$W'$就是原来的$H$。调整后，`(C',H',W')=(W,H,C)=(axis2,axis0,axis1)`，所以`transpose(2,0,1)`"
   ]
  }
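   ,
   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The axis reordering can be verified in a few lines (a minimal sketch of `transpose(2, 0, 1)` on a small (C, H, W) array):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "C, H, W = 2, 3, 4\n",
     "a = np.arange(C * H * W).reshape(C, H, W)\n",
     "b = a.transpose(2, 0, 1)  # new order: old W axis first, then C, then H\n",
     "print(b.shape)  # (4, 2, 3), i.e. (W, C, H)\n",
     "# element check: b[w, c, h] equals a[c, h, w]\n",
     "print(b[1, 0, 2] == a[0, 2, 1])  # True\n",
     "```"
    ]
   }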
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pythonProject",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.8"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
