{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Layers and Blocks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A block can describe a single layer, a component made up of multiple layers, or the entire model itself.<br>\n",
    "One benefit of abstracting with blocks is that blocks can be combined into larger components, a process that is often recursive.<br>\n",
    "By writing code that generates blocks of arbitrary complexity on demand, we can implement complex neural networks with concise code."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Recap: the multilayer perceptron**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0.0485, -0.0130, -0.3521,  0.1993, -0.1517, -0.2736,  0.1027, -0.0378,\n",
       "          0.0723,  0.0304],\n",
       "        [-0.0785,  0.1899, -0.1656, -0.0083,  0.0440, -0.0350,  0.0368, -0.0955,\n",
       "          0.1334,  0.1328]], grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "\"\"\"one hidden layer and one output layer\"\"\"\n",
    "net = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))\n",
    "\n",
    "X = torch.rand(2, 20)  # a 2x20 tensor of uniform random values in [0, 1)\n",
    "\n",
    "net(X)"
   ]
  },
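  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check (a sketch, reusing `net` and `X` from the cell above): two input examples of 20 features each should map to two rows of 10 outputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Shape check: 2 examples in, 10 outputs per example.\n",
    "net(X).shape  # torch.Size([2, 10])"
   ]
  },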
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Custom blocks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our custom `__init__` function calls the parent class's `__init__` via `super().__init__()`, sparing us the pain of restating boilerplate code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.nn import functional as F\n",
    "\n",
    "class MLP(nn.Module):\n",
    "    # Declare the layers with model parameters.\n",
    "    def __init__(self):\n",
    "        # Call the parent class Module's constructor to perform the necessary initialization.\n",
    "        # Other arguments (e.g. model parameters `params`) could also be accepted at instantiation.\n",
    "        super().__init__()\n",
    "        self.hidden = nn.Linear(20, 256)  # hidden layer\n",
    "        self.out = nn.Linear(256, 10)  # output layer\n",
    "\n",
    "    def forward(self, X):\n",
    "        \"\"\"Define the forward propagation.\"\"\"\n",
    "        # Use the ReLU activation function, defined in nn.functional.\n",
    "        return self.out(F.relu(self.hidden(X)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.1690,  0.0153, -0.0964, -0.0818,  0.0074,  0.0223, -0.2114, -0.0065,\n",
       "         -0.0710, -0.0343],\n",
       "        [ 0.2081,  0.0339, -0.0867,  0.0355, -0.1134,  0.0732, -0.1869, -0.0818,\n",
       "         -0.0383, -0.1742]], grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Try it out\n",
    "net=MLP()\n",
    "net(X)"
   ]
  },
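  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A side effect worth checking (a sketch): because `super().__init__()` set up the machinery that tracks attributes assigned in `__init__`, the parameters of `hidden` and `out` were registered automatically."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The registered parameter names of the MLP instance above.\n",
    "[name for name, _ in net.named_parameters()]  # ['hidden.weight', 'hidden.bias', 'out.weight', 'out.bias']"
   ]
  },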
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The sequential block"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can take a closer look at how the Sequential class works. Recall that Sequential was designed to chain other modules together.<br>\n",
    "To build our own simplified MySequential, we only need to define two key functions:\n",
    "\n",
    "1. a function that appends blocks one by one to a list;\n",
    "2. a forward propagation function that passes the input through the chain of blocks, in the order they were appended.\n",
    "\n",
    "The MySequential class below provides the same functionality as the default Sequential class."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: every Module has a `_modules` attribute.<br>The main advantage of `_modules` is that during parameter initialization, the system knows to look in the `_modules` dictionary for the child blocks whose parameters need initializing."
   ]
  },
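  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see why `_modules` matters, here is a minimal sketch (the `BadSequential` name is purely illustrative): child modules kept only in a plain Python list are not registered, so their parameters are invisible to `parameters()` and to parameter initialization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: storing submodules in a plain list does NOT register them.\n",
    "class BadSequential(nn.Module):\n",
    "    def __init__(self, *args):\n",
    "        super().__init__()\n",
    "        self.blocks = list(args)  # plain list: nn.Module does not track these\n",
    "    def forward(self, x):\n",
    "        for block in self.blocks:\n",
    "            x = block(x)\n",
    "        return x\n",
    "\n",
    "bad = BadSequential(nn.Linear(20, 10))\n",
    "len(list(bad.parameters()))  # 0 -- the Linear's weight and bias are untracked"
   ]
  },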
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"A simplified reimplementation of nn.Sequential\"\"\"\n",
    "class mySequential(nn.Module):\n",
    "    def __init__(self, *args):\n",
    "        super().__init__()\n",
    "        for idx, module in enumerate(args):\n",
    "            self._modules[str(idx)] = module  # register the module in _modules\n",
    "\n",
    "    def forward(self, x):\n",
    "        \"\"\"Pass the input x through every child module, in order, for the forward propagation.\"\"\"\n",
    "        for block in self._modules.values():\n",
    "            x = block(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.0093,  0.1858,  0.1102,  0.2864,  0.0419, -0.1708,  0.0791,  0.1248,\n",
       "         -0.0759,  0.0145],\n",
       "        [ 0.1279,  0.0361,  0.0618,  0.1381,  0.1129, -0.0908,  0.2479,  0.1116,\n",
       "         -0.2244, -0.0215]], grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "net = mySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))\n",
    "net(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (*Important) Executing code in the forward propagation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Sometimes, however, we may want to incorporate terms that are neither the result of a previous layer nor updatable parameters; we call these constant parameters.<br>\n",
    "For example, suppose we need a layer that computes $f(\\mathbf{x}, \\mathbf{w}) = c \\cdot \\mathbf{w}^\\top \\mathbf{x}$, where $\\mathbf{x}$ is the input, $\\mathbf{w}$ is a parameter, and $c$ is a specified constant that is not updated during optimization.<br>\n",
    "To do this, we implement the fixedHiddenMLP class below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "class fixedHiddenMLP(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        # Random weights that require no gradient, so they stay constant during training.\n",
    "        self.rand_weight = torch.rand((20, 20), requires_grad=False)\n",
    "        self.linear = nn.Linear(20, 20)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.linear(x)\n",
    "        # Use the constant parameters: matrix multiply (torch.mm), then the ReLU activation.\n",
    "        x = F.relu(torch.mm(x, self.rand_weight) + 1)\n",
    "        # Reuse the fully connected layer: the two layers share the same weights.\n",
    "        x = self.linear(x)\n",
    "\n",
    "        # Control flow (not useful per se; it just shows that arbitrary code can run in the forward pass).\n",
    "        while x.abs().sum() > 1:\n",
    "            x /= 2\n",
    "        return x.abs().sum()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(0.7944, grad_fn=<SumBackward0>)"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "net=fixedHiddenMLP()\n",
    "net(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Mixing and matching blocks: an MLP (2 hidden layers + 1 output layer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(0.7583, grad_fn=<SumBackward0>)"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "class nestMLP(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        # two hidden layers\n",
    "        self.net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),\n",
    "                                 nn.Linear(64, 32), nn.ReLU())\n",
    "        # output layer (fully connected)\n",
    "        self.linear = nn.Linear(32, 16)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.linear(self.net(x))  # hidden (20,64) => hidden (64,32) => output (32,16)\n",
    "\n",
    "\"\"\"Chain everything together\"\"\"\n",
    "# nestMLP: (20,64) => (64,32) => (32,16) ==> Linear(16,20) ==> fixedHiddenMLP: (20,20) => (20,20)\n",
    "chama = nn.Sequential(nestMLP(), nn.Linear(16, 20), fixedHiddenMLP())\n",
    "chama(X)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "DL_pytorch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
