{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this simple baseline model for image tasks, each module can be swapped out to change the model's performance and behavior."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class SiameseNetwork(nn.Module):  \n",
    "    def __init__(self):  \n",
    "        super(SiameseNetwork, self).__init__()  \n",
    "        self.convnet = nn.Sequential(  \n",
    "            nn.Conv2d(1, 64, 10),  \n",
    "            # ReLU avoids the vanishing-gradient problem that sigmoid activations can suffer from\n",
    "            nn.ReLU(inplace=True),  \n",
    "            nn.MaxPool2d(2),  \n",
    "            nn.Conv2d(64, 128, 7),  \n",
    "            nn.ReLU(inplace=True),  \n",
    "            nn.MaxPool2d(2),  \n",
    "            nn.Conv2d(128, 128, 4),  \n",
    "            nn.ReLU(inplace=True),  \n",
    "            nn.MaxPool2d(2),  \n",
    "            nn.Conv2d(128, 256, 4),  \n",
    "            nn.ReLU(inplace=True)  \n",
    "        )  \n",
    "  \n",
    "        self.fc = nn.Sequential(  \n",
    "            nn.Linear(256, 4096),  \n",
    "            # Sigmoid squashes each of the 4096 embedding features into (0, 1);\n",
    "            # for a multi-class head, Softmax over class logits would be used instead\n",
    "            nn.Sigmoid()  \n",
    "        )  \n",
    "  \n",
    "    def forward_one(self, x):  \n",
    "        x = self.convnet(x)  \n",
    "        x = x.view(x.size(0), -1)  # flatten the tensor  \n",
    "        x = self.fc(x)  \n",
    "        return x  \n",
    "  \n",
    "    def forward(self, input1):  \n",
    "        # In a Siamese setup the same weights embed each input: call forward\n",
    "        # once per image and compare the resulting embeddings\n",
    "        output1 = self.forward_one(input1)  \n",
    "        return output1"
   ]
  },
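  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before working through the shape arithmetic by hand, we can trace it empirically. The sketch below (assuming PyTorch is installed) rebuilds the same stack as `convnet` above so the cell is self-contained, feeds a dummy `64x64` grayscale batch through it one layer at a time, and prints each intermediate shape; the final `Conv2d` is expected to fail because its kernel is larger than its input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# The same layer stack as SiameseNetwork.convnet above, rebuilt here\n",
    "# so this cell can run on its own.\n",
    "convnet = nn.Sequential(\n",
    "    nn.Conv2d(1, 64, 10), nn.ReLU(), nn.MaxPool2d(2),\n",
    "    nn.Conv2d(64, 128, 7), nn.ReLU(), nn.MaxPool2d(2),\n",
    "    nn.Conv2d(128, 128, 4), nn.ReLU(), nn.MaxPool2d(2),\n",
    "    nn.Conv2d(128, 256, 4), nn.ReLU(),\n",
    ")\n",
    "\n",
    "x = torch.randn(1, 1, 64, 64)  # dummy grayscale batch\n",
    "shapes = []\n",
    "for layer in convnet:\n",
    "    try:\n",
    "        x = layer(x)\n",
    "    except RuntimeError as err:\n",
    "        print(f\"{layer.__class__.__name__} failed: {err}\")\n",
    "        break\n",
    "    shapes.append(tuple(x.shape))\n",
    "    print(f\"{layer.__class__.__name__:<10} -> {tuple(x.shape)}\")"
   ]
  },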
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This code defines the layer structure of a convolutional network consisting of convolutional layers, ReLU activations, and max-pooling layers. To understand how these layers affect the `shape` of the input tensor, we can walk through how each operation transforms it, layer by layer.\n",
    "\n",
    "Suppose the initial input is a single-channel `64x64` image with shape `(batch_size, channels, height, width)`, where `batch_size` is set to `1`. The layer-by-layer analysis is as follows:\n",
    "\n",
    "### 1. Initial input\n",
    "\n",
    "- **Input shape**: `(1, 1, 64, 64)`, i.e. batch size `1`, `1` channel (grayscale), image size `64x64`.\n",
    "\n",
    "---\n",
    "\n",
    "### 2. Layer 1: `nn.Conv2d(1, 64, 10)`\n",
    "\n",
    "- **Parameters**:\n",
    "  - `in_channels=1`: the input has `1` channel (grayscale).\n",
    "  - `out_channels=64`: there are `64` convolution kernels, so the output will have `64` channels.\n",
    "  - `kernel_size=10`: each kernel is `10x10`.\n",
    "- **Output shape calculation**:\n",
    "  - With a `10x10` kernel, `stride=1`, and no `padding` (the defaults),\n",
    "  - the new width and height are given by `output_size = (input_size - kernel_size) + 1`.\n",
    "  - So both dimensions become `64 - 10 + 1 = 55`.\n",
    "- **Output shape**: `(1, 64, 55, 55)`.\n",
    "\n",
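    "The formula above can be wrapped in a small helper (plain Python, no framework needed) to check each step; `conv2d_out` is an illustrative name introduced here:\n",
    "\n",
    "```python\n",
    "def conv2d_out(size, kernel, stride=1, padding=0):\n",
    "    # Output-size formula shared by Conv2d and MaxPool2d (floor division)\n",
    "    return (size + 2 * padding - kernel) // stride + 1\n",
    "\n",
    "print(conv2d_out(64, 10))            # conv layer 1: 55\n",
    "print(conv2d_out(55, 2, stride=2))   # 2x2 max pool: 27\n",
    "```\n",
    "\n",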
    "---\n",
    "\n",
    "### 3. Layer 2: `nn.ReLU(inplace=True)`\n",
    "\n",
    "- **Effect**: the ReLU activation does not change the shape; it only sets negative values to `0`.\n",
    "- **Output shape**: `(1, 64, 55, 55)`.\n",
    "\n",
    "---\n",
    "\n",
    "### 4. Layer 3: `nn.MaxPool2d(2)`\n",
    "\n",
    "- **Effect**: max pooling takes the maximum over each `2x2` region.\n",
    "- **Output shape calculation**:\n",
    "  - `kernel_size=2` means a `2x2` pooling window with a default `stride=2`, so width and height are halved.\n",
    "  - The new width and height are `55 / 2 = 27` (rounded down).\n",
    "- **Output shape**: `(1, 64, 27, 27)`.\n",
    "\n",
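    "The floor behaviour of pooling can be checked directly (the full pooling formula, with window size `2` and stride `2`):\n",
    "\n",
    "```python\n",
    "print((55 - 2) // 2 + 1)  # 27: MaxPool2d(2) output size for a 55-wide input\n",
    "```\n",
    "\n",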
    "---\n",
    "\n",
    "### 5. Layer 4: `nn.Conv2d(64, 128, 7)`\n",
    "\n",
    "- **Parameters**:\n",
    "  - `in_channels=64`: the input has `64` channels.\n",
    "  - `out_channels=128`: there are `128` kernels, so the output will have `128` channels.\n",
    "  - `kernel_size=7`: each kernel is `7x7`.\n",
    "- **Output shape calculation**:\n",
    "  - The new width and height are `27 - 7 + 1 = 21`.\n",
    "- **Output shape**: `(1, 128, 21, 21)`.\n",
    "\n",
    "---\n",
    "\n",
    "### 6. Layer 5: `nn.ReLU(inplace=True)`\n",
    "\n",
    "- **Effect**: the ReLU activation does not change the shape.\n",
    "- **Output shape**: `(1, 128, 21, 21)`.\n",
    "\n",
    "---\n",
    "\n",
    "### 7. Layer 6: `nn.MaxPool2d(2)`\n",
    "\n",
    "- **Effect**: max pooling takes the maximum over each `2x2` region.\n",
    "- **Output shape calculation**:\n",
    "  - The new width and height are `21 / 2 = 10` (rounded down).\n",
    "- **Output shape**: `(1, 128, 10, 10)`.\n",
    "\n",
    "---\n",
    "\n",
    "### 8. Layer 7: `nn.Conv2d(128, 128, 4)`\n",
    "\n",
    "- **Parameters**:\n",
    "  - `in_channels=128`: the input has `128` channels.\n",
    "  - `out_channels=128`: there are `128` kernels, so the channel count stays at `128`.\n",
    "  - `kernel_size=4`: each kernel is `4x4`.\n",
    "- **Output shape calculation**:\n",
    "  - The new width and height are `10 - 4 + 1 = 7`.\n",
    "- **Output shape**: `(1, 128, 7, 7)`.\n",
    "\n",
    "---\n",
    "\n",
    "### 9. Layer 8: `nn.ReLU(inplace=True)`\n",
    "\n",
    "- **Effect**: the ReLU activation does not change the shape.\n",
    "- **Output shape**: `(1, 128, 7, 7)`.\n",
    "\n",
    "---\n",
    "\n",
    "### 10. Layer 9: `nn.MaxPool2d(2)`\n",
    "\n",
    "- **Effect**: max pooling takes the maximum over each `2x2` region.\n",
    "- **Output shape calculation**:\n",
    "  - The new width and height are `7 / 2 = 3` (rounded down).\n",
    "- **Output shape**: `(1, 128, 3, 3)`.\n",
    "\n",
    "---\n",
    "\n",
    "### 11. Layer 10: `nn.Conv2d(128, 256, 4)`\n",
    "\n",
    "- **Parameters**:\n",
    "  - `in_channels=128`: the input has `128` channels.\n",
    "  - `out_channels=256`: there are `256` kernels, so the output would have `256` channels.\n",
    "  - `kernel_size=4`: each kernel is `4x4`.\n",
    "- **Output shape calculation**:\n",
    "  - The new width and height would be `3 - 4 + 1 = 0`, which is a problem: the `4x4` kernel is larger than the `3x3` input, so this convolution cannot be applied and PyTorch raises a runtime error.\n",
    "\n",
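    "Since this failure depends only on the input resolution, we can search for input sizes that survive every layer. The sketch below applies the same output-size formula in plain Python (`conv2d_out` and `final_size` are illustrative names introduced here):\n",
    "\n",
    "```python\n",
    "def conv2d_out(size, kernel, stride=1, padding=0):\n",
    "    # Output-size formula shared by Conv2d and MaxPool2d (floor division)\n",
    "    return (size + 2 * padding - kernel) // stride + 1\n",
    "\n",
    "def final_size(s):\n",
    "    # The four conv stages; the first three are each followed by 2x2 pooling\n",
    "    for kernel, pooled in [(10, True), (7, True), (4, True), (4, False)]:\n",
    "        s = conv2d_out(s, kernel)\n",
    "        if s < 1:\n",
    "            return 0  # kernel larger than its input: the forward pass fails\n",
    "        if pooled:\n",
    "            s = conv2d_out(s, 2, stride=2)\n",
    "    return s\n",
    "\n",
    "print([s for s in range(60, 130) if final_size(s) == 1])  # [65, 66, ..., 72]\n",
    "```\n",
    "\n",
    "Inputs from `65x65` to `72x72` reach a `1x1` feature map, i.e. `256` flattened features, which is exactly what `nn.Linear(256, 4096)` in `self.fc` expects.\n",
    "\n",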
    "### Summary\n",
    "\n",
    "At layer 10, the convolution kernel is larger than the `3x3` input, so the computed output size is `0`. To avoid this, reduce the kernel size, enlarge the input image, or add padding in the convolution or pooling layers to keep the spatial dimensions large enough."
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
