{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.23.5\n",
      "2.0.1\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "print(np.__version__)\n",
    "print(torch.__version__)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<h3 style=\"color: Salmon;\">Convolution Operations</h3>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Convolution</span>\n",
    "\n",
     "Convolution is a classic digital signal processing operation: a designed kernel is convolved with the target signal to achieve filtering. For discrete images, convolution can be pictured as sliding a fixed-size window over the image in a regular pattern. Traditional methods hand-craft the kernel weights for tasks such as edge detection and sharpening, whereas neural networks learn the kernel weights via gradient updates. Convolution effectively extracts local semantic features from images and comes in many variants. The examples in this section follow the PyTorch documentation, [using 2D convolution as an example](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d).\n",
    "\n",
    "```\n",
    "class torch.nn.Conv2d(\n",
    "        in_channels, \n",
    "        out_channels, \n",
    "        kernel_size, \n",
    "        stride=1, \n",
    "        padding=0, \n",
    "        dilation=1, \n",
    "        groups=1, \n",
    "        bias=True, \n",
    "        padding_mode='zeros'\n",
    "    )\n",
    "```\n",
    "\n",
     "Given an input of shape $(N, C_{in}, H_{in}, W_{in})$, the output of shape $(N, C_{out}, H_{out}, W_{out})$ is given by:\n",
    "\n",
    "$\n",
    "\\mathrm{out}(N_i, C_{\\mathrm{out}_j}) = \\mathrm{bias}(C_{\\mathrm{out}_j}) + \\sum_{k = 0}^{C_{\\mathrm{in}} - 1} \\mathrm{weight}(C_{\\mathrm{out}_j}, k) \\star \\mathrm{input}(N_i, k)\n",
    "$\n",
    "\n",
     "Parameters:\n",
     "- stride controls the stride for the cross-correlation, a single number or a tuple.\n",
     "- padding controls the amount of padding applied to the input. It can be either a string {'valid', 'same'} or an int / a tuple of ints giving the amount of implicit padding applied on both sides.\n",
     "- dilation controls the spacing between the kernel points; also known as the à trous algorithm. The conv_arithmetic animations below give a nice visualization of what dilation does.\n",
     "- groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups.\n",
    "\n",
     "Number of parameters:\n",
    "\n",
    "- $para\\_num=C_{out} \\times C_{in} \\times K \\times K + C_{out}$\n",
    "\n",
     "Input/output size relation:\n",
    "\n",
     "- $H_{out}=\\lfloor \\frac{H_{in} + 2P - K}{S} \\rfloor + 1, W_{out}=\\lfloor \\frac{W_{in} + 2P - K}{S} \\rfloor + 1$\n",
    "\n",
     "FLOPs:\n",
    "\n",
    "- $FLOPs = 2 \\times C_{in} \\times K \\times K \\times H_{out} \\times W_{out} \\times C_{out}$\n",
    "\n",
     "Common settings:\n",
    "\n",
     "- Keep spatial size unchanged: kernel=3, stride=1, padding=1\n",
     "- Downsample by 2x: kernel=3, stride=2, padding=1\n"
   ]
  },
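  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numerical check of the two formulas above (a minimal sketch; the layer sizes below are illustrative, not from the original):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: H_out = floor((H_in + 2P - K) / S) + 1 and\n",
    "# para_num = C_out * C_in * K * K + C_out (with bias)\n",
    "c_in, c_out, k, s, p = 3, 8, 3, 2, 1\n",
    "conv_check = nn.Conv2d(c_in, c_out, k, stride=s, padding=p)\n",
    "\n",
    "h_out = (32 + 2 * p - k) // s + 1  # 16\n",
    "assert conv_check(torch.randn(1, c_in, 32, 32)).shape == (1, c_out, h_out, h_out)\n",
    "assert sum(w.numel() for w in conv_check.parameters()) == c_out * c_in * k * k + c_out"
   ]
  },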
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_N.B.: Blue maps are inputs, and cyan maps are outputs._\n",
    "<table style=\"width:100%; table-layout:fixed;\">\n",
    "  <tr>\n",
    "    <td><img width=\"150px\" src=\"./image_src/no_padding_no_strides.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/arbitrary_padding_no_strides.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/same_padding_no_strides.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/full_padding_no_strides.gif\"></td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td>No padding, no strides</td>\n",
    "    <td>Arbitrary padding, no strides</td>\n",
    "    <td>Half padding, no strides</td>\n",
    "    <td>Full padding, no strides</td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td><img width=\"150px\" src=\"./image_src/no_padding_strides.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/padding_strides.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/padding_strides_odd.gif\"></td>\n",
    "    <td></td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td>No padding, strides</td>\n",
    "    <td>Padding, strides</td>\n",
    "    <td>Padding, strides (odd)</td>\n",
    "    <td></td>\n",
    "  </tr>\n",
     "  <tr>\n",
     "    <td colspan=\"4\">hacked from https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md</td>\n",
     "  </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[1., 1., 1., 1., 1.],\n",
       "          [2., 2., 2., 2., 2.],\n",
       "          [3., 3., 3., 3., 3.],\n",
       "          [4., 4., 4., 4., 4.],\n",
       "          [5., 5., 5., 5., 5.]]]])"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Build a 1x1x5x5 input whose rows are constant: row i holds the value i + 1\n",
     "inps = torch.tensor([[1], [2], [3], [4], [5]], dtype=torch.float32).repeat(1, 5).unsqueeze(0).unsqueeze(0)\n",
    "inps"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Parameter containing:\n",
       "tensor([[[[ 0.0552, -0.1481, -0.2646],\n",
       "          [ 0.1739, -0.1821,  0.0855],\n",
       "          [ 0.0093,  0.2590, -0.0340]]]], requires_grad=True)"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1\n",
     "conv1 = nn.Conv2d(1, 1, 3, 1, 1, bias=False)\n",
    "conv1.weight"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'torch.FloatTensor'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conv1.weight.type()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[ 1.,  0., -1.],\n",
       "          [ 2.,  0., -2.],\n",
       "          [ 1.,  0., -1.]]]])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Hand-craft a Sobel-style kernel (horizontal gradient / vertical edges)\n",
     "weights = torch.zeros_like(conv1.weight)\n",
    "weights[:, :, 0, 0] = 1.\n",
    "weights[:, :, 1, 0] = 2.\n",
    "weights[:, :, 2, 0] = 1.\n",
    "\n",
    "weights[:, :, 0, -1] = -1.\n",
    "weights[:, :, 1, -1] = -2.\n",
    "weights[:, :, 2, -1] = -1.\n",
    "\n",
    "weights"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[ -4.,   0.,   0.,   0.,   4.],\n",
       "          [ -8.,   0.,   0.,   0.,   8.],\n",
       "          [-12.,   0.,   0.,   0.,  12.],\n",
       "          [-16.,   0.,   0.,   0.,  16.],\n",
       "          [-14.,   0.,   0.,   0.,  14.]]]])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Load the hand-crafted weights into the conv layer\n",
     "conv1.weight.data.copy_(weights)\n",
    "out = conv1(inps)\n",
    "out.data"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Dilated Convolution</span>\n",
     "\n",
     "Dilated (atrous) convolution enlarges the receptive field of a kernel without adding parameters. It is widely used in the DeepLab family of methods and is enabled through the dilation argument of the convolution."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_N.B.: Blue maps are inputs, and cyan maps are outputs._\n",
     "<table style=\"width:25%; table-layout:fixed;\">\n",
    "  <tr>\n",
    "    <td><img width=\"150px\" src=\"./image_src/dilation.gif\"></td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td>No padding, no stride, dilation</td>\n",
    "  </tr>\n",
     "  <tr>\n",
     "    <td>hacked from https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md</td>\n",
     "  </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Parameter containing:\n",
       "tensor([[[[-0.1941, -0.0134, -0.2447],\n",
       "          [-0.0113, -0.1411,  0.0135],\n",
       "          [-0.1850, -0.0847, -0.1009]]]], requires_grad=True)"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# kernel=3 with dilation=2 spans 5 pixels, so a 5x5 input with padding=1 gives a 3x3 output\n",
     "conv_dila = nn.Conv2d(1, 1, 3, 1, 1, dilation=2, bias=False)\n",
    "conv_dila.weight"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "weights_dila = torch.zeros_like(conv_dila.weight)\n",
    "weights_dila[:, :, 0, 0] = 1.\n",
    "weights_dila[:, :, 1, 0] = 2.\n",
    "weights_dila[:, :, 2, 0] = 1.\n",
    "\n",
    "weights_dila[:, :, 0, -1] = -1.\n",
    "weights_dila[:, :, 1, -1] = -2.\n",
    "weights_dila[:, :, 2, -1] = -1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[ -8.,   0.,   8.],\n",
       "          [-12.,   0.,  12.],\n",
       "          [-10.,   0.,  10.]]]])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conv_dila.weight.data = weights_dila\n",
    "out = conv_dila(inps)\n",
    "out.data"
   ]
  },
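  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 3×3 output above follows from the enlarged effective kernel: with dilation $d$, a $K \\times K$ kernel spans $K + (K-1)(d-1)$ pixels per side. A small illustrative check:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Effective kernel size of a dilated conv: K_eff = K + (K - 1) * (d - 1)\n",
    "k, d, s, p, h_in = 3, 2, 1, 1, 5\n",
    "k_eff = k + (k - 1) * (d - 1)            # 3 -> 5\n",
    "h_out = (h_in + 2 * p - k_eff) // s + 1  # (5 + 2 - 5) // 1 + 1 = 3\n",
    "\n",
    "conv_check = nn.Conv2d(1, 1, k, stride=s, padding=p, dilation=d, bias=False)\n",
    "assert conv_check(torch.randn(1, 1, h_in, h_in)).shape == (1, 1, h_out, h_out)"
   ]
  },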
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Transposed Convolution</span>\n",
    "\n",
    "```\n",
    "class torch.nn.ConvTranspose2d(\n",
    "        in_channels, \n",
    "        out_channels, \n",
    "        kernel_size, \n",
    "        stride=1, \n",
    "        padding=0, \n",
    "        output_padding=0, \n",
    "        groups=1, \n",
    "        bias=True, \n",
    "        dilation=1, \n",
    "        padding_mode='zeros'\n",
    "    )\n",
    "```\n",
    "\n",
     "Input/output size relation:\n",
     "$H_{out} = (H_{in} - 1) \\times S - 2P + K$\n",
    "\n",
    "\n",
     "Number of parameters:\n",
     "\n",
     "- $para\\_num=C_{out} \\times C_{in} \\times K \\times K + C_{out}$\n",
    "\n",
     "FLOPs:\n",
    "\n",
    "- $FLOPs = 2 \\times C_{in} \\times K \\times K \\times H_{in} \\times W_{in} \\times C_{out}$\n",
    "\n",
     "Common settings:\n",
     "\n",
     "- Upsample by 2x: kernel=4, stride=2, padding=1\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "_N.B.: Blue maps are inputs, and cyan maps are outputs._\n",
    "\n",
    "<table style=\"width:100%; table-layout:fixed;\">\n",
    "  <tr>\n",
    "    <td><img width=\"150px\" src=\"./image_src/no_padding_no_strides_transposed.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/arbitrary_padding_no_strides_transposed.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/same_padding_no_strides_transposed.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/full_padding_no_strides_transposed.gif\"></td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td>No padding, no strides, transposed</td>\n",
    "    <td>Arbitrary padding, no strides, transposed</td>\n",
    "    <td>Half padding, no strides, transposed</td>\n",
    "    <td>Full padding, no strides, transposed</td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td><img width=\"150px\" src=\"./image_src/no_padding_strides_transposed.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/padding_strides_transposed.gif\"></td>\n",
    "    <td><img width=\"150px\" src=\"./image_src/padding_strides_odd_transposed.gif\"></td>\n",
    "    <td></td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td>No padding, strides, transposed</td>\n",
    "    <td>Padding, strides, transposed</td>\n",
    "    <td>Padding, strides, transposed (odd)</td>\n",
    "    <td></td>\n",
    "  </tr>\n",
     "  <tr>\n",
     "    <td colspan=\"4\">hacked from https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md</td>\n",
     "  </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Parameter containing:\n",
       "tensor([[[[ 0.1978,  0.2337, -0.1017, -0.1180],\n",
       "          [-0.1132,  0.0494,  0.0247,  0.0605],\n",
       "          [-0.0425,  0.0865, -0.1358,  0.1604],\n",
       "          [ 0.0798, -0.1157, -0.0878, -0.1580]]]], requires_grad=True)"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conv_transconv = nn.ConvTranspose2d(1, 1, 4, 2, 1, bias=False)\n",
    "conv_transconv.weight"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([1, 1, 5, 5]), torch.Size([1, 1, 10, 10]))"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "out = conv_transconv(inps)\n",
    "inps.size(), out.size()"
   ]
  },
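  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 5×5 → 10×10 result above matches the transposed-convolution size formula; a direct check (illustrative sketch):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Verify H_out = (H_in - 1) * S - 2P + K for ConvTranspose2d\n",
    "k, s, p, h_in = 4, 2, 1, 5\n",
    "h_out = (h_in - 1) * s - 2 * p + k  # 4 * 2 - 2 + 4 = 10\n",
    "\n",
    "deconv_check = nn.ConvTranspose2d(1, 1, k, stride=s, padding=p, bias=False)\n",
    "assert deconv_check(torch.randn(1, 1, h_in, h_in)).shape == (1, 1, h_out, h_out)"
   ]
  },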
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<h3 style=\"color: Salmon;\">Self-Attention</h3>\n",
    "\n",
    "![Vit](./image_src/qlc_vit.png)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Bidirectional Self-Attention</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "from einops import rearrange"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "to_qkv = nn.Linear(in_features=512, out_features=512 * 3, bias = False)\n",
    "softmax = nn.Softmax(dim = -1)\n",
    "to_out = nn.Linear(in_features=512, out_features=512)\n",
    "\n",
    "dim = 512\n",
    "\n",
    "x = torch.randn((2, 4, 512), dtype=torch.float32)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([2, 4, 512])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 4, 512])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "heads = 1\n",
    "\n",
    "scale = (dim / heads) ** -0.5\n",
    "\n",
    "qkv = to_qkv(x).chunk(3, dim = -1)\n",
     "q, k, v = map(lambda t: rearrange(t, 'b n d -> b n d'), qkv)  # identity here; kept to mirror the multi-head version\n",
    "print(q.size())\n",
    "dots = torch.matmul(q, k.transpose(-1, -2)) * scale\n",
    "\n",
    "attn = softmax(dots)\n",
    "\n",
    "out = torch.matmul(attn, v)\n",
    "out = rearrange(out, 'b n d -> b n d')\n",
    "\n",
    "out = to_out(out)\n",
    "\n",
    "out.size()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Multi-Head Self-Attention</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([2, 8, 4, 64])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "(torch.Size([2, 4, 512]), torch.Size([2, 8, 4, 4]))"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "heads = 8\n",
    "\n",
    "scale = (dim / heads) ** -0.5\n",
    "\n",
    "qkv = to_qkv(x).chunk(3, dim = -1)\n",
    "q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h = heads), qkv)\n",
    "print(q.size())\n",
    "dots = torch.matmul(q, k.transpose(-1, -2)) * scale\n",
    "\n",
    "attn = softmax(dots)\n",
    "\n",
    "out = torch.matmul(attn, v)\n",
    "out = rearrange(out, 'b h n d -> b n (h d)')\n",
    "\n",
    "out = to_out(out)\n",
    "\n",
    "out.size(), dots.size()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Each head's attention scores depend only on that head's slice of the features\n",
     "q_1 = q[:, 0]\n",
     "k_1 = k[:, 0]\n",
     "\n",
     "dots_1 = torch.matmul(q_1, k_1.transpose(-1, -2)) * scale\n",
     "\n",
     "torch.allclose(dots[:, 0], dots_1, atol=1e-6)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Causal Self-Attention</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 4, 512])"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Build the causal mask: -inf above the diagonal blocks attention to future positions\n",
    "causal_mask = torch.triu(\n",
    "    torch.ones(4, 4, device=q.device) * float('-inf'), \n",
    "    diagonal=1\n",
    ")\n",
    "\n",
     "# Use PyTorch's fused scaled-dot-product attention\n",
    "output = F.scaled_dot_product_attention(\n",
    "    q, k, v, \n",
    "    attn_mask=causal_mask,\n",
    "    dropout_p=0.0\n",
    ")\n",
    "\n",
     "# Merge heads: (B, H, N, D) -> (B, N, H*D)\n",
     "output = output.transpose(1, 2).contiguous().view(\n",
    "    2, 4, 512\n",
    ")\n",
    "\n",
    "output.size()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Cross-Attention</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 4, 512])"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Queries come from y; keys and values (k, v) are reused from the multi-head example above\n",
     "to_Q = nn.Linear(in_features=512, out_features=512, bias=False)\n",
     "\n",
     "dim = 512\n",
     "\n",
     "y = torch.randn((2, 4, 512), dtype=torch.float32)\n",
     "\n",
     "Q = to_Q(y)\n",
     "Q = Q.view(2, 4, 8, 64).transpose(1, 2)  # (B, N, H*D) -> (B, H, N, D)\n",
     "\n",
     "# Use PyTorch's fused scaled-dot-product attention\n",
     "output = F.scaled_dot_product_attention(\n",
     "    Q, k, v,\n",
     ")\n",
     "\n",
     "# Merge the heads\n",
     "output = output.transpose(1, 2).contiguous().view(\n",
     "    2, 4, 512\n",
     ")\n",
     "\n",
     "output.size()"
   ]
  },
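  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To tie the fused call back to the explicit formulation used in the earlier cells, the check below (an illustrative sketch with random tensors) confirms that `F.scaled_dot_product_attention` matches the manual `softmax(QK^T / sqrt(d)) V` computation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# scaled_dot_product_attention equals the manual computation (no mask, no dropout)\n",
    "q_c = torch.randn(2, 8, 4, 64)\n",
    "k_c = torch.randn(2, 8, 4, 64)\n",
    "v_c = torch.randn(2, 8, 4, 64)\n",
    "\n",
    "manual = torch.softmax(q_c @ k_c.transpose(-1, -2) / 64 ** 0.5, dim=-1) @ v_c\n",
    "fused = F.scaled_dot_product_attention(q_c, k_c, v_c)\n",
    "assert torch.allclose(manual, fused, atol=1e-5)"
   ]
  },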
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<h3 style=\"color: Salmon;\">Parameter-Free Layers</h3>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Upsampling</span>\n",
    "\n",
    "```\n",
    "torch.nn.functional.interpolate(\n",
    "        input, \n",
    "        size=None, \n",
    "        scale_factor=None, \n",
    "        mode='nearest', \n",
    "        align_corners=None, \n",
    "        recompute_scale_factor=None, \n",
    "        antialias=False\n",
    "    )\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 2, 8, 8])"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Bilinear interpolation, 2x upsampling\n",
    "inps = torch.randn((2, 2, 4, 4))\n",
    "\n",
    "out = torch.nn.functional.interpolate(\n",
    "        inps, \n",
    "        scale_factor=2, \n",
    "        mode='bilinear'\n",
    "    )\n",
    "\n",
    "out.size()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 2, 16, 16])"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Bilinear interpolation, 4x upsampling\n",
    "inps = torch.randn((2, 2, 4, 4))\n",
    "\n",
    "out = torch.nn.functional.interpolate(\n",
    "        inps, \n",
    "        scale_factor=4, \n",
    "        mode='bilinear'\n",
    "    )\n",
    "\n",
    "out.size()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 2, 8, 8, 8])"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Trilinear interpolation for 5D (N, C, D, H, W) inputs\n",
    "inps = torch.randn((2, 2, 4, 4, 4))\n",
    "\n",
    "out = torch.nn.functional.interpolate(\n",
    "        inps, \n",
    "        scale_factor=2, \n",
    "        mode='trilinear'\n",
    "    )\n",
    "\n",
    "out.size()"
   ]
  },
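  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides `scale_factor`, `interpolate` also accepts an explicit target `size` (the non-square size here is just for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# interpolate with an explicit output size instead of a scale factor\n",
    "inps = torch.randn(2, 2, 4, 4)\n",
    "out = F.interpolate(inps, size=(6, 10), mode='bilinear', align_corners=False)\n",
    "assert out.shape == (2, 2, 6, 10)"
   ]
  },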
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Pooling</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([20, 16, 24, 15])"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# pool of square window of size=3, stride=2\n",
    "m = nn.MaxPool2d(3, stride=2)\n",
    "\n",
    "input = torch.randn(20, 16, 50, 32)\n",
    "output = m(input)\n",
    "\n",
    "output.size()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([20, 16, 24, 31])"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# pool of non-square window\n",
    "m = nn.MaxPool2d((3, 2), stride=(2, 1))\n",
    "input = torch.randn(20, 16, 50, 32)\n",
    "output = m(input)\n",
    "\n",
    "output.size()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([20, 16, 24, 31])"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# pool of non-square window of size=(3, 2), stride=(2, 1)\n",
     "m = nn.AvgPool2d((3, 2), stride=(2, 1))\n",
    "input = torch.randn(20, 16, 50, 32)\n",
    "output = m(input)\n",
    "\n",
    "output.size()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">Adaptive Pooling</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([1, 64, 5, 7])"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# target output size of 5x7\n",
    "m = nn.AdaptiveAvgPool2d((5, 7))\n",
    "input = torch.randn(1, 64, 8, 9)\n",
    "output = m(input)\n",
    "\n",
    "output.size()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([1, 64, 7, 7])"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# target output size of 7x7 (square)\n",
    "m = nn.AdaptiveAvgPool2d(7)\n",
    "input = torch.randn(1, 64, 10, 9)\n",
    "output = m(input)\n",
    "\n",
    "output.size()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([1, 64, 10, 7])"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# target output size of 10x7\n",
    "m = nn.AdaptiveAvgPool2d((None, 7))\n",
    "input = torch.randn(1, 64, 10, 9)\n",
    "output = m(input)\n",
    "\n",
    "output.size()"
   ]
  },
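  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A special case worth noting (illustrative check): `AdaptiveAvgPool2d(1)` is global average pooling, i.e. the mean over the spatial dimensions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# AdaptiveAvgPool2d(1) == mean over the spatial dimensions\n",
    "gap = nn.AdaptiveAvgPool2d(1)\n",
    "x = torch.randn(2, 64, 7, 7)\n",
    "assert torch.allclose(gap(x), x.mean(dim=(2, 3), keepdim=True), atol=1e-6)"
   ]
  },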
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<h3 style=\"color: Salmon;\">Normalization Layers</h3>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Visualization of four normalization methods. Image from [Group Normalization](https://arxiv.org/pdf/1803.08494).\n",
    "\n",
    "![Image from [Group Normalization](https://arxiv.org/pdf/1803.08494)](./image_src/normalize.png)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">BatchNorm</span>\n",
     "\n",
     "For a single image in a batch, batch normalization means the normalized result depends on the features of every image in the same batch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_data = torch.tensor([\n",
    "    # Batch 1\n",
    "    [[\n",
    "        [ 1.0, 2.0, 3.0],\n",
    "        [ 4.0, 5.0, 6.0],\n",
    "        [7.0, 8.0, 9.0],\n",
    "\n",
    "    ]],\n",
    "    # Batch 2\n",
    "    [[\n",
    "        [-1.0, -2.0, -3.0],\n",
    "        [-4.0, -5.0, -6.0],\n",
    "        [-7.0, -8.0, -9.0],\n",
    "    ]]\n",
    "], dtype=torch.float32)  # (2, 1, 3, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[ 1.0000,  1.0000,  1.0000],\n",
       "          [ 1.0000,  1.0000,  1.0000],\n",
       "          [ 1.0000,  1.0000,  1.0000]]],\n",
       "\n",
       "\n",
       "        [[[-1.0000, -1.0000, -1.0000],\n",
       "          [-1.0000, -1.0000, -1.0000],\n",
       "          [-1.0000, -1.0000, -1.0000]]]])"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Input: two single-channel 3x3 images with opposite signs\n",
    "input_data = torch.tensor([\n",
    "    # Batch 1\n",
    "    [[\n",
    "        [1.0, 1.0, 1.0],\n",
    "        [1.0, 1.0, 1.0],\n",
    "        [1.0, 1.0, 1.0],\n",
    "\n",
    "    ]],\n",
    "    # Batch 2\n",
    "    [[\n",
    "        [-1.0, -1.0, -1.0],\n",
    "        [-1.0, -1.0, -1.0],\n",
    "        [-1.0, -1.0, -1.0],\n",
    "    ]]\n",
    "], dtype=torch.float32)  # (2, 1, 3, 3)\n",
    "\n",
     "# BatchNorm with affine=False disables the learnable scale/shift\n",
     "bn_norm = torch.nn.BatchNorm2d(num_features=1, affine=False)  # number of channels C=1\n",
    "bn_out = bn_norm(input_data)\n",
    "\n",
    "bn_out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor(0.),\n",
       " tensor(1.),\n",
       " tensor([[[[ 1.0000,  1.0000,  1.0000],\n",
       "           [ 1.0000,  1.0000,  1.0000],\n",
       "           [ 1.0000,  1.0000,  1.0000]]],\n",
       " \n",
       " \n",
       "         [[[-1.0000, -1.0000, -1.0000],\n",
       "           [-1.0000, -1.0000, -1.0000],\n",
       "           [-1.0000, -1.0000, -1.0000]]]]))"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# With a single channel, the per-channel BatchNorm statistics reduce to the global mean and variance\n",
    "mean = input_data.mean()\n",
    "var = input_data.var(unbiased=False) \n",
    "\n",
    "# Manual normalization\n",
    "output_manual = (input_data - mean) / torch.sqrt(var + 1e-5)\n",
    "\n",
    "mean, var, output_manual"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
       "Manual normalization:\n",
      " tensor([[[[ 1.0000,  1.0000,  1.0000],\n",
      "          [ 1.0000,  1.0000,  1.0000],\n",
      "          [ 1.0000,  1.0000,  1.0000]]],\n",
      "\n",
      "\n",
      "        [[[-1.0000, -1.0000, -1.0000],\n",
      "          [-1.0000, -1.0000, -1.0000],\n",
      "          [-1.0000, -1.0000, -1.0000]]]])\n",
      "\n",
       "API result matches the manual calculation: True\n"
     ]
    }
   ],
   "source": [
     "print(\"\\nManual normalization:\\n\", output_manual)\n",
     "print(\"\\nAPI result matches the manual calculation:\", \n",
     "      torch.allclose(bn_out, output_manual, atol=1e-6))"
   ]
  },
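  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One caveat the cells above do not show (a sketch on assumed toy data): in eval mode, BatchNorm switches from batch statistics to the running estimates accumulated during training, so calling `.eval()` at inference time matters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# In eval mode, BatchNorm uses running statistics instead of batch statistics\n",
    "torch.manual_seed(0)\n",
    "bn = nn.BatchNorm2d(1)\n",
    "bn.train()\n",
    "for _ in range(100):\n",
    "    bn(torch.randn(8, 1, 4, 4) * 2 + 3)  # stream of data with mean ~3, var ~4\n",
    "\n",
    "bn.eval()\n",
    "# the running estimates should approach the data statistics\n",
    "assert abs(bn.running_mean.item() - 3) < 0.5\n",
    "assert abs(bn.running_var.item() - 4) < 1.0"
   ]
  },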
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "<span style=\"color: DodgerBlue;\">LayerNorm</span>\n",
     "\n",
     "For a single image in a batch, layer normalization means the normalized result depends only on that image's own features, across all of its channels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
     "input_data = torch.tensor([\n",
     "    # Batch 1\n",
     "    [\n",
     "        # channel 1\n",
     "        [\n",
     "            [1.0, 1.0, 1.0],\n",
     "            [2.0, 2.0, 2.0],\n",
     "        ],\n",
     "        # channel 2\n",
     "        [\n",
     "            [-1.0, -1.0, -1.0],\n",
     "            [-1.0, -1.0, -1.0],\n",
     "        ]\n",
     "    ],\n",
     "    # Batch 2\n",
     "    [\n",
     "        # channel 1\n",
     "        [\n",
     "            [1.0, 2.0, 3.0],\n",
     "            [3.0, 6.0, 9.0],\n",
     "        ],\n",
     "        # channel 2\n",
     "        [\n",
     "            [-1.0, -1.0, -1.0],\n",
     "            [-1.0, -1.0, -1.0],\n",
     "        ]\n",
     "    ]\n",
     "], dtype=torch.float32)  # (2, 2, 2, 3)\n",
    "\n",
    "nlp_input = input_data.view(2, 2, 6)\n",
    "image_input = input_data.clone()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[-1.0000, -1.0000, -1.0000,  1.0000,  1.0000,  1.0000],\n",
       "         [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]],\n",
       "\n",
       "        [[-1.1078, -0.7385, -0.3693, -0.3693,  0.7385,  1.8464],\n",
       "         [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]])"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# NLP Example\n",
    "layer_norm = nn.LayerNorm(normalized_shape=6, elementwise_affine=False)  \n",
    "nlp_output = layer_norm(nlp_input)\n",
    "\n",
    "nlp_output.data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[[ 1.5000],\n",
       "          [-1.0000]],\n",
       " \n",
       "         [[ 4.0000],\n",
       "          [-1.0000]]]),\n",
       " tensor([[[0.5000],\n",
       "          [0.0000]],\n",
       " \n",
       "         [[2.7080],\n",
       "          [0.0000]]]))"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "mean = nlp_input.mean(-1, keepdim=True)              \n",
    "std = nlp_input.std(-1, unbiased=False, keepdim=True)\n",
    "\n",
    "mean, std"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[-1.0000, -1.0000, -1.0000,  1.0000,  1.0000,  1.0000],\n",
       "         [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]],\n",
       "\n",
       "        [[-1.1078, -0.7385, -0.3693, -0.3693,  0.7385,  1.8464],\n",
       "         [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]])"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# nn.LayerNorm divides by sqrt(var + eps); std + eps is a close approximation here\n",
     "cal = (nlp_input - mean) / (std + 1e-5)\n",
    "cal.data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[ 0.5773,  0.5773,  0.5773],\n",
       "          [ 1.3471,  1.3471,  1.3471]],\n",
       "\n",
       "         [[-0.9622, -0.9622, -0.9622],\n",
       "          [-0.9622, -0.9622, -0.9622]]],\n",
       "\n",
       "\n",
       "        [[[-0.1588,  0.1588,  0.4763],\n",
       "          [ 0.4763,  1.4290,  2.3817]],\n",
       "\n",
       "         [[-0.7939, -0.7939, -0.7939],\n",
       "          [-0.7939, -0.7939, -0.7939]]]])"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Image Example\n",
    "layer_norm = torch.nn.LayerNorm(normalized_shape=(2, 2, 3), elementwise_affine=False)\n",
    "ln_out = layer_norm(input_data)\n",
    "\n",
    "ln_out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LayerNorm 结果: tensor([[[[ 0.5773,  0.5773,  0.5773],\n",
      "          [ 1.3471,  1.3471,  1.3471]],\n",
      "\n",
      "         [[-0.9622, -0.9622, -0.9622],\n",
      "          [-0.9622, -0.9622, -0.9622]]],\n",
      "\n",
      "\n",
      "        [[[-0.1588,  0.1588,  0.4763],\n",
      "          [ 0.4763,  1.4290,  2.3817]],\n",
      "\n",
      "         [[-0.7939, -0.7939, -0.7939],\n",
      "          [-0.7939, -0.7939, -0.7939]]]])\n",
      "LayerNorm 结果一致性验证: True\n"
     ]
    }
   ],
   "source": [
    "manual_ln = (image_input - image_input.mean(dim=(1, 2, 3), keepdim=True)) / \\\n",
    "            torch.sqrt(image_input.var(dim=(1, 2, 3), unbiased=False, keepdim=True) + 1e-5)\n",
    "\n",
    "print(\"LayerNorm 结果:\", manual_ln.data)\n",
    "print(\"LayerNorm 结果一致性验证:\", torch.allclose(ln_out, manual_ln, atol=1e-6))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[0.2500]]],\n",
       "\n",
       "\n",
       "        [[[1.5000]]]])"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "input_data.mean(dim=(1, 2, 3), keepdim=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[0.0000, 0.0000, 0.0000],\n",
       "          [0.5000, 0.5000, 0.5000]]],\n",
       "\n",
       "\n",
       "        [[[0.0000, 0.5000, 1.0000],\n",
       "          [1.0000, 2.5000, 4.0000]]]])"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "input_data.mean(dim=(1), keepdim=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[0.2500, 0.2500, 0.2500]]],\n",
       "\n",
       "\n",
       "        [[[0.5000, 1.5000, 2.5000]]]])"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "input_data.mean(dim=(1), keepdim=True).mean(dim=(2), keepdim=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[0.2500]]],\n",
       "\n",
       "\n",
       "        [[[1.5000]]]])"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "input_data.mean(dim=(1), keepdim=True).mean(dim=(2), keepdim=True).mean(dim=(3), keepdim=True)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<span style=\"color: DodgerBlue;\">GroupNorm</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "group_norm_g2 = torch.nn.GroupNorm(num_groups=1, num_channels=2, affine=False)\n",
    "gn_g2_out = group_norm_g2(input_data)\n",
    "\n",
    "torch.allclose(ln_out, gn_g2_out, atol=1e-6)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[-1.0000, -1.0000, -1.0000],\n",
       "          [ 1.0000,  1.0000,  1.0000]],\n",
       "\n",
       "         [[ 0.0000,  0.0000,  0.0000],\n",
       "          [ 0.0000,  0.0000,  0.0000]]],\n",
       "\n",
       "\n",
       "        [[[-1.1078, -0.7385, -0.3693],\n",
       "          [-0.3693,  0.7385,  1.8464]],\n",
       "\n",
       "         [[ 0.0000,  0.0000,  0.0000],\n",
       "          [ 0.0000,  0.0000,  0.0000]]]])"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "group_norm_g2 = torch.nn.GroupNorm(num_groups=2, num_channels=2, affine=False)\n",
    "gn_g2_out = group_norm_g2(input_data)\n",
    "\n",
    "gn_g2_out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始数据: 1.0\n",
      "Group=2 结果: -0.9999799728393555\n",
      "\n",
      "Group=2 结果一致性验证: True\n"
     ]
    }
   ],
   "source": [
    "manual_gn_g2 = input_data.clone()\n",
    "for b in range(input_data.size(0)):      # through each sample\n",
    "    for c in range(input_data.size(1)):  # through each channel\n",
    "        channel_data = input_data[b, c]\n",
    "        mean = channel_data.mean()\n",
    "        var = channel_data.var(unbiased=False)\n",
    "        manual_gn_g2[b, c] = (channel_data - mean) / torch.sqrt(var + 1e-5)\n",
    "\n",
    "print(\"原始数据:\", input_data[0, 0, 0, 0].item())\n",
    "print(\"Group=2 结果:\", gn_g2_out[0, 0, 0, 0].item())\n",
    "print(\"\\nGroup=2 结果一致性验证:\", torch.allclose(gn_g2_out, manual_gn_g2, atol=1e-6))"
   ]
  },
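  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-channel loop above generalizes to any number of groups: reshape the input to (N, G, -1) and normalize over the last dimension. The sketch below is self-contained (it builds its own random tensor; the shapes and the eps=1e-5 default are illustrative assumptions):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# GroupNorm via reshape: each of the G groups is normalized over its own elements\n",
    "x = torch.randn(2, 4, 3, 3)   # (N, C, H, W), C divisible by G\n",
    "G = 2\n",
    "gn = torch.nn.GroupNorm(num_groups=G, num_channels=4, affine=False)\n",
    "\n",
    "xg = x.reshape(2, G, -1)      # flatten C//G channels together with H*W per group\n",
    "manual = (xg - xg.mean(-1, keepdim=True)) / torch.sqrt(xg.var(-1, unbiased=False, keepdim=True) + 1e-5)\n",
    "manual = manual.reshape_as(x)\n",
    "\n",
    "torch.allclose(gn(x), manual, atol=1e-6)"
   ]
  },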
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "manual_ln = (input_data - input_data.mean(dim=(2, 3), keepdim=True)) / \\\n",
    "            torch.sqrt(input_data.var(dim=(2, 3), unbiased=False, keepdim=True) + 1e-5)\n",
    "\n",
    "torch.allclose(gn_g2_out, manual_ln, atol=1e-6)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([[[[ 1.5000]],\n",
       " \n",
       "          [[-1.0000]]],\n",
       " \n",
       " \n",
       "         [[[ 4.0000]],\n",
       " \n",
       "          [[-1.0000]]]]),\n",
       " tensor([[[[0.2500]],\n",
       " \n",
       "          [[0.0000]]],\n",
       " \n",
       " \n",
       "         [[[7.3333]],\n",
       " \n",
       "          [[0.0000]]]]))"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "input_data.mean(dim=(2, 3), keepdim=True), input_data.var(dim=(2, 3), unbiased=False, keepdim=True)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<h3 style=\"color: Salmon;\">激活层</h3>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<table style=\"width:100%; table-layout:fixed;\">\n",
    "  <tr>\n",
    "    <td><img width=\"450px\" src=\"./image_src/ReLU.png\"></td>\n",
    "    <td><img width=\"450px\" src=\"./image_src/LeakyReLU.png\"></td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td>ReLU</td>\n",
    "    <td>LeakyReLU</td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td><img width=\"450px\" src=\"./image_src/Sigmoid.png\"></td>\n",
    "    <td><img width=\"450px\" src=\"./image_src/Tanh.png\"></td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td>Sigmoid</td>\n",
    "    <td>Tanh</td>\n",
    "  </tr>\n",
    "</table>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<span style=\"color: DodgerBlue;\">ReLU</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.0000, 0.0000, 0.0000, 0.1000, 0.5000, 1.0000]])"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m = nn.ReLU()\n",
    "input = torch.tensor([[-0.1, -0.5, -1.0, 0.1, 0.5, 1.0]])\n",
    "output = m(input)\n",
    "\n",
    "output"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<span style=\"color: DodgerBlue;\">LeakyReLU</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0.0100, -0.0500, -0.1000,  0.1000,  0.5000,  1.0000]])"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m = nn.LeakyReLU(0.1)\n",
    "input = torch.tensor([[-0.1, -0.5, -1.0, 0.1, 0.5, 1.0]])\n",
    "output = m(input)\n",
    "\n",
    "output"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<span style=\"color: DodgerBlue;\">Sigmoid</span>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.5025, 0.5250, 0.6225, 0.6682, 0.7311]])"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m = nn.Sigmoid()\n",
    "input = torch.tensor([[0.01, 0.1, 0.5, 0.7, 1.0]])\n",
    "output = m(input)\n",
    "\n",
    "output"
   ]
  },
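  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Sigmoid is defined as $\\sigma(x)=\\frac{1}{1+\\exp(-x)}$; a minimal self-contained check against the module (input values are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.tensor([[0.01, 0.1, 0.5, 0.7, 1.0]])\n",
    "manual = 1.0 / (1.0 + torch.exp(-x))\n",
    "\n",
    "torch.allclose(manual, torch.sigmoid(x))"
   ]
  },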
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<span style=\"color: DodgerBlue;\">Softmax</span>\n",
    "\n",
    "$Softmax(x_{i})=\\frac{exp(x_{i})}{\\sum_{j}exp(x_{j})}$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.3333, 0.3333, 0.3333]])"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m = nn.Softmax(dim=1)\n",
    "input = torch.tensor([[1.0, 1.0, 1.0]])\n",
    "output = m(input)\n",
    "\n",
    "output"
   ]
  },
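  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Softmax formula above can be verified directly with torch.exp; a minimal self-contained sketch (input values are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.tensor([[1.0, 2.0, 3.0]])\n",
    "manual = torch.exp(x) / torch.exp(x).sum(dim=1, keepdim=True)\n",
    "\n",
    "torch.allclose(manual, F.softmax(x, dim=1))"
   ]
  },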
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<span style=\"color: DodgerBlue;\">Tan</span>\n",
    "\n",
    "$Tanh(x) = \\frac{exp(x)-exp(-x)}{exp(x)+exp(-x)}$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.0997, 0.4621, 0.7616]])"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m = nn.Tanh()\n",
    "input = torch.tensor([[0.1, 0.5, 1.0]])\n",
    "output = m(input)\n",
    "\n",
    "output"
   ]
  }
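  ,
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Tanh formula above can be checked element-wise; a minimal self-contained sketch (input values are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.tensor([[0.1, 0.5, 1.0]])\n",
    "manual = (torch.exp(x) - torch.exp(-x)) / (torch.exp(x) + torch.exp(-x))\n",
    "\n",
    "torch.allclose(manual, torch.tanh(x))"
   ]
  }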
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
