{"cells":[{"cell_type":"markdown","source":["# Development Environment Setup"],"metadata":{"id":"2piGg5Edjohs"},"id":"2piGg5Edjohs"},
{"cell_type":"code","source":["!pip install git+https://github.com/d2l-ai/d2l-zh@release  # installing d2l\n","!pip install git+https://github.com/ipython/matplotlib-inline"],"metadata":{"id":"oc0B6VH_jnuI","colab":{"base_uri":"https://localhost:8080/","height":1000},"executionInfo":{"status":"ok","timestamp":1664151466726,"user_tz":-480,"elapsed":50546,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}},"outputId":"8f31d0cb-a3c5-4ba7-8331-df90dcc7196a"},"id":"oc0B6VH_jnuI","execution_count":1,"outputs":[]},
{"cell_type":"markdown","id":"ec327923","metadata":{"origin_pos":0,"id":"ec327923"},"source":["# Layers and Blocks\n","\n","\n","When we first introduced neural networks, we focused on linear models with a single output.\n","Here, the entire model has just one output.\n","Note that a single neuron\n","(1) takes some set of inputs;\n","(2) generates a corresponding scalar output;\n","(3) has a set of associated *parameters* that can be updated to optimize some objective function.\n","\n","Then, once we started thinking about networks with multiple outputs,\n","we leveraged vectorized arithmetic to characterize an entire layer of neurons.\n","Just like individual neurons, a layer (1) takes a set of inputs,\n","(2) generates corresponding outputs,\n","(3) and is described by a set of tunable parameters.\n","When we worked through softmax regression, a single layer was itself the model.\n","However, even when we subsequently introduced multilayer perceptrons, we could still think of the model as retaining this same basic architecture.\n","\n","For multilayer perceptrons, both the entire model and its constituent layers share this architecture.\n","The entire model takes in raw inputs (the features), generates outputs (the predictions),\n","and possesses parameters (the combined parameters of all constituent layers).\n","Likewise, each individual layer ingests inputs (supplied by the previous layer),\n","generates outputs (the inputs to the subsequent layer), and possesses a set of tunable parameters\n","that are updated according to the signal that flows backwards from the subsequent layer.\n","\n","It turns out that it is often worthwhile to discuss components that are larger than an individual layer but smaller than the entire model.\n","For example, the ResNet-152 architecture, which is wildly popular in computer vision, possesses hundreds of layers\n","that consist of repeating patterns of *groups of layers*.\n","This ResNet architecture won the 2015 ImageNet and COCO computer vision competitions\n","for both recognition and detection :cite:`He.Zhang.Ren.ea.2016`.\n","It remains a go-to architecture for many vision tasks today.\n","In other domains, such as natural language processing and speech,\n","similar architectures in which layers are arranged in various repeating patterns are now ubiquitous as well.\n","\n","To implement these complex networks, we introduce the concept of a neural network *block*.\n","A *block* could describe a single layer, a component consisting of multiple layers, or the entire model itself.\n","One benefit of working with the block abstraction is that blocks can be combined into larger components,\n","often recursively, as illustrated in :numref:`fig_blocks`.\n","By defining code to generate blocks of arbitrary complexity on demand,\n","we can write surprisingly compact code and still implement complex neural networks.\n","\n","![Multiple layers are combined into blocks, forming larger models](http://d2l.ai/_images/blocks.svg)\n",":label:`fig_blocks`\n","\n","From a programming standpoint, a block is represented by a *class*.\n","Any subclass of it must define a forward propagation function that transforms its input into output,\n","and must store any necessary parameters.\n","Note that some blocks do not require any parameters at all.\n","Finally, a block must possess a backpropagation function for the purpose of calculating gradients.\n","Fortunately, when defining our own blocks, automatic differentiation (introduced in\n",":numref:`sec_autograd`)\n","supplies some behind-the-scenes implementation, so we only need to worry about the forward propagation function and the necessary parameters.\n","\n","Before constructing a custom block, (**we briefly revisit the code for multilayer perceptrons**)\n","(:numref:`sec_mlp_concise`).\n","The following code generates a network with one fully-connected hidden layer with 256 units and a ReLU activation function,\n","followed by a fully-connected output layer with 10 units and no activation function.\n"]},
{"cell_type":"code","execution_count":3,"id":"af24a23e","metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:50:12.993032Z","iopub.status.busy":"2022-07-31T02:50:12.992570Z","iopub.status.idle":"2022-07-31T02:50:13.720179Z","shell.execute_reply":"2022-07-31T02:50:13.719531Z"},"origin_pos":2,"tab":["pytorch"],"id":"af24a23e","outputId":"11a0073f-39c3-4a40-8719-87f1d2c0b63c","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664151585617,"user_tz":-480,"elapsed":555,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"stream","name":"stdout","text":["tensor([[0.3438, 0.5677, 0.5957, 0.9018, 0.0208, 0.3681, 0.7795, 0.3265, 0.0403,\n","         0.3306, 0.0580, 0.7684, 0.4169, 0.4288, 0.8688, 0.4614, 0.2868, 0.3857,\n","         0.7730, 0.8075],\n","        [0.9251, 0.5322, 0.3217, 0.2564, 0.5812, 0.0470, 0.7109, 0.4858, 0.7685,\n","         0.2785, 0.1976, 0.5883, 0.6504, 0.2557, 0.3962, 0.4557, 0.6195, 0.6096,\n","         0.8156, 0.7765]])\n"]},{"output_type":"execute_result","data":{"text/plain":["tensor([[ 0.2354,  0.0522, -0.1223,  0.0333, -0.0035, -0.0987,  0.1724,  0.0887,\n","         -0.1242, -0.1881],\n","        [ 0.1382,  0.0182, -0.0814, -0.0106, -0.0048, -0.1363,  0.1475, -0.0260,\n","         -0.0653, -0.4086]], grad_fn=<AddmmBackward0>)"]},"metadata":{},"execution_count":3}],"source":["import torch\n","from torch import nn\n","from torch.nn import functional as F\n","\n","net = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))\n","\n","X = torch.rand(2, 
20)\n","print(X)\n","net(X)"]},
{"cell_type":"markdown","id":"6b1fe8c4","metadata":{"origin_pos":5,"tab":["pytorch"],"id":"6b1fe8c4"},"source":["In this example, we constructed our model by instantiating an `nn.Sequential`,\n","with the layers passed as arguments in the order in which they should be executed.\n","In short, (**`nn.Sequential` defines a special kind of `Module`**),\n","the class that represents a block in PyTorch.\n","It maintains an ordered list of constituent `Module`s.\n","Note that each of the two fully-connected layers is an instance of the `Linear` class,\n","which is itself a subclass of `Module`.\n","Also note that until now, we have been invoking our model via `net(X)` to obtain its output.\n","This is actually just shorthand for `net.__call__(X)`.\n","The forward propagation function here is remarkably simple:\n","it chains each block in the list together, passing the output of each block as the input to the next.\n"]},
{"cell_type":"markdown","id":"c4f4fa20","metadata":{"origin_pos":7,"id":"c4f4fa20"},"source":["## [**A Custom Block**]\n","\n","Perhaps the easiest way to develop intuition about how a block works is to implement one ourselves.\n","Before implementing our own custom block, we briefly summarize the basic functionality that each block must provide:\n"]},
{"cell_type":"markdown","id":"b8ae6690","metadata":{"origin_pos":9,"tab":["pytorch"],"id":"b8ae6690"},"source":["1. Ingest input data as arguments to its forward propagation function.\n","1. Generate an output via the forward propagation function. Note that the output may have a different shape from the input. For example, the first fully-connected layer in our model above ingests a 20-dimensional input but returns a 256-dimensional output.\n","1. Calculate the gradient of its output with respect to its input, which can be accessed via its backpropagation function. Typically this happens automatically.\n","1. Store and provide access to the parameters necessary for the forward propagation computation.\n","1. Initialize model parameters as needed.\n"]},
{"cell_type":"markdown","id":"6a9fc07d","metadata":{"origin_pos":10,"id":"6a9fc07d"},"source":["In the following snippet, we code up a block from scratch.\n","It contains a multilayer perceptron with a hidden layer of 256 hidden units and a 10-dimensional output layer.\n","Note that the `MLP` class below inherits from the class that represents a block.\n","Our implementation only needs to provide our own constructor (the `__init__` function in Python) and the forward propagation function.\n"]},
{"cell_type":"code","execution_count":4,"id":"7e5462a9","metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:50:13.723811Z","iopub.status.busy":"2022-07-31T02:50:13.723426Z","iopub.status.idle":"2022-07-31T02:50:13.728507Z","shell.execute_reply":"2022-07-31T02:50:13.727819Z"},"origin_pos":12,"tab":["pytorch"],"id":"7e5462a9","executionInfo":{"status":"ok","timestamp":1664151715935,"user_tz":-480,"elapsed":536,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[],"source":["class MLP(nn.Module):\n","    # Declare layers with model parameters. Here, we declare two fully-connected layers\n","    def __init__(self):\n","        # Call the constructor of the MLP parent class Module to perform the necessary initialization.\n","        # In this way, other function arguments can also be specified during class instantiation, such as the model parameters params (to be described later)\n","        super().__init__()\n","        
self.hidden = nn.Linear(20, 256)  # Hidden layer\n","        self.out = nn.Linear(256, 10)  # Output layer\n","\n","    # Define the forward propagation of the model, i.e., how to return the required model output based on the input X\n","    def forward(self, X):\n","        # Note that here we use the functional version of ReLU, defined in the nn.functional module.\n","        return self.out(F.relu(self.hidden(X)))"]},
{"cell_type":"markdown","id":"5cd54120","metadata":{"origin_pos":14,"id":"5cd54120"},"source":["Let us first look at the forward propagation function. It takes `X` as input,\n","calculates the hidden representation with the activation function applied, and outputs its unnormalized output values.\n","In this `MLP` implementation, both layers are instance variables.\n","To see why this is reasonable, imagine instantiating two multilayer perceptrons (`net1` and `net2`)\n","and training them on different data.\n","Naturally, we would expect them to represent two different learned models.\n","\n","We then [**instantiate the MLP's layers in the constructor and subsequently invoke these layers on each call to the forward propagation function**].\n","Note a few key details:\n","First, our customized `__init__` function invokes the parent class's `__init__` function\n","via `super().__init__()`,\n","sparing us the pain of restating boilerplate code.\n","Then, we instantiate our two fully-connected layers,\n","assigning them to `self.hidden` and `self.out`.\n","Note that unless we implement a new operator,\n","we need not worry about the backpropagation function or parameter initialization;\n","the system will generate these automatically.\n","\n","Let us try this out:\n"]},
{"cell_type":"code","execution_count":5,"id":"56e96e47","metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:50:13.731919Z","iopub.status.busy":"2022-07-31T02:50:13.731330Z","iopub.status.idle":"2022-07-31T02:50:13.737308Z","shell.execute_reply":"2022-07-31T02:50:13.736611Z"},"origin_pos":16,"tab":["pytorch"],"id":"56e96e47","outputId":"dd531de6-7a62-40df-dff0-c6645b1ff96a","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664151719466,"user_tz":-480,"elapsed":536,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["tensor([[ 0.0261, -0.2320,  0.0537,  0.1173, -0.2813,  0.1853,  0.0691, -0.0380,\n","          0.0689,  0.0634],\n","        [-0.0784, -0.3086,  0.0671,  0.1314, -0.2424,  0.1889, -0.1356, -0.0359,\n","          0.0746,  0.0523]], grad_fn=<AddmmBackward0>)"]},"metadata":{},"execution_count":5}],"source":["net = 
MLP()\n","net(X)"]},
{"cell_type":"markdown","id":"87a911a8","metadata":{"origin_pos":18,"id":"87a911a8"},"source":["A key virtue of the block abstraction is its versatility.\n","We can subclass a block to create layers (such as the fully-connected layer class),\n","entire models (such as the `MLP` class above), or various components of intermediate complexity.\n","We exploit this versatility throughout the following chapters,\n","such as when addressing convolutional neural networks.\n","\n","## [**The Sequential Block**]\n","\n","We can now take a closer look at how the `Sequential` class works.\n","Recall that `Sequential` was designed to chain other modules together.\n","To build our own simplified `MySequential`,\n","we just need to define two key functions:\n","\n","1. A function to append blocks one by one to a list.\n","1. A forward propagation function to pass an input through the chain of blocks, in the same order as they were appended.\n","\n","The following `MySequential` class delivers the same functionality as the default `Sequential` class.\n"]},
{"cell_type":"code","execution_count":null,"id":"8263a94c","metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:50:13.740368Z","iopub.status.busy":"2022-07-31T02:50:13.739908Z","iopub.status.idle":"2022-07-31T02:50:13.744530Z","shell.execute_reply":"2022-07-31T02:50:13.743920Z"},"origin_pos":20,"tab":["pytorch"],"id":"8263a94c"},"outputs":[],"source":["class MySequential(nn.Module):\n","    def __init__(self, *args):\n","        super().__init__()\n","        for idx, module in enumerate(args):\n","            # Here, module is an instance of a Module subclass. We save it in the member\n","            # variable _modules of the Module class. The type of _modules is OrderedDict\n","            self._modules[str(idx)] = module\n","\n","    def forward(self, X):\n","        # OrderedDict guarantees that members will be traversed in the order they were added\n","        for block in self._modules.values():\n","            X = block(X)\n","        return 
X"]},
{"cell_type":"markdown","id":"28f82c0b","metadata":{"origin_pos":23,"tab":["pytorch"],"id":"28f82c0b"},"source":["The `__init__` function adds each module one by one to the ordered dictionary `_modules`.\n","You might wonder why every `Module` possesses a `_modules` attribute,\n","and why we used it rather than just defining a Python list ourselves.\n","In short, the chief advantage of `_modules` is that\n","during our modules' parameter initialization,\n","the system knows to look inside the `_modules` dictionary to find the sub-blocks whose parameters need to be initialized.\n"]},
{"cell_type":"markdown","id":"0d236ce9","metadata":{"origin_pos":24,"id":"0d236ce9"},"source":["When `MySequential`'s forward propagation function is invoked,\n","each added block is executed in the order in which it was added.\n","We can now reimplement a multilayer perceptron using our `MySequential` class.\n"]},
{"cell_type":"code","execution_count":null,"id":"05c7a29f","metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:50:13.747460Z","iopub.status.busy":"2022-07-31T02:50:13.746970Z","iopub.status.idle":"2022-07-31T02:50:13.753046Z","shell.execute_reply":"2022-07-31T02:50:13.752424Z"},"origin_pos":26,"tab":["pytorch"],"id":"05c7a29f","outputId":"e6735e75-cf7d-4290-ac4b-5462aec957ad"},"outputs":[{"data":{"text/plain":["tensor([[-0.0077,  0.0073, -0.1978,  0.0472, -0.0503,  0.0230, -0.3637, -0.0708,\n","          0.1103, -0.0739],\n","        [ 0.0431, -0.0212, -0.1294,  0.1650,  0.0585, -0.0102, -0.3153, -0.1349,\n","          0.0603, -0.0431]], grad_fn=<AddmmBackward0>)"]},"execution_count":5,"metadata":{},"output_type":"execute_result"}],"source":["net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))\n","net(X)"]},
{"cell_type":"markdown","id":"369b1274","metadata":{"origin_pos":28,"id":"369b1274"},"source":["Note that this use of `MySequential` is identical to the code we previously wrote for the `Sequential` class\n","(as described in :numref:`sec_mlp_concise`).\n","\n","## [**Executing Code in the Forward Propagation Function**]\n","\n","The `Sequential` class makes model construction easy,\n","allowing us to assemble new architectures without having to define our own class.\n","However, not all architectures are simple sequential chains.\n","When greater flexibility is required, we will want to define our own blocks.\n","For example, we might want to execute Python's control flow within the forward propagation function.\n","Moreover, we might want to perform arbitrary mathematical operations,\n","not simply relying on predefined neural network layers.\n","\n","Until now,\n","all of the operations in our networks have acted upon the network's activations and its parameters.\n","Sometimes, however, we might want to incorporate terms that are neither the result of previous layers nor updatable parameters.\n","We call these *constant parameters*.\n","Say, for example, that we need a layer that calculates the function\n","$f(\\mathbf{x},\\mathbf{w}) = c \\cdot \\mathbf{w}^\\top 
\\mathbf{x}$,\n","where $\\mathbf{x}$ is the input,\n","$\\mathbf{w}$ is our parameter,\n","and $c$ is some specified constant that is not updated during optimization.\n","We therefore implement a `FixedHiddenMLP` class as follows:\n"]},
{"cell_type":"code","execution_count":null,"id":"89ffa5f9","metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:50:13.755995Z","iopub.status.busy":"2022-07-31T02:50:13.755532Z","iopub.status.idle":"2022-07-31T02:50:13.761032Z","shell.execute_reply":"2022-07-31T02:50:13.760415Z"},"origin_pos":30,"tab":["pytorch"],"id":"89ffa5f9"},"outputs":[],"source":["class FixedHiddenMLP(nn.Module):\n","    def __init__(self):\n","        super().__init__()\n","        # Random weight parameters that do not compute gradients and therefore remain constant during training\n","        self.rand_weight = torch.rand((20, 20), requires_grad=False)\n","        self.linear = nn.Linear(20, 20)\n","\n","    def forward(self, X):\n","        X = self.linear(X)\n","        # Use the created constant parameters, as well as the relu and mm functions\n","        X = F.relu(torch.mm(X, self.rand_weight) + 1)\n","        # Reuse the fully-connected layer. This is equivalent to two fully-connected layers sharing parameters\n","        X = self.linear(X)\n","        # Control flow\n","        while X.abs().sum() > 1:\n","            X /= 2\n","        return X.sum()"]},
{"cell_type":"markdown","id":"d6cffadb","metadata":{"origin_pos":32,"id":"d6cffadb"},"source":["In this `FixedHiddenMLP` model, we implement a hidden layer whose weights\n","(`self.rand_weight`) are initialized randomly at instantiation and are thereafter constant.\n","This weight is not a model parameter, so it is never updated by backpropagation.\n","The network then passes the output of this fixed layer through a fully-connected layer.\n","\n","Note that before returning the output, the model does something unusual:\n","it runs a while-loop that, as long as the $L_1$ norm is larger than $1$,\n","divides the output vector by $2$ until the condition is satisfied.\n","Finally, the model returns the sum of all the entries in `X`.\n","Note that this operation may not be useful in any real-world task;\n","we only show how to integrate arbitrary code into the flow of your neural network computations.\n"]},
{"cell_type":"code","execution_count":null,"id":"1752f8a5","metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:50:13.763950Z","iopub.status.busy":"2022-07-31T02:50:13.763487Z","iopub.status.idle":"2022-07-31T02:50:13.769452Z","shell.execute_reply":"2022-07-31T02:50:13.768697Z"},"origin_pos":34,"tab":["pytorch"],"id":"1752f8a5","outputId":"b85a4ca1-6123-4dfc-cec2-9f3de5f5cafc"},"outputs":[{"data":{"text/plain":["tensor(-0.0949, 
grad_fn=<SumBackward0>)"]},"execution_count":7,"metadata":{},"output_type":"execute_result"}],"source":["net = FixedHiddenMLP()\n","net(X)"]},
{"cell_type":"markdown","id":"5df281a0","metadata":{"origin_pos":35,"id":"5df281a0"},"source":["We can [**mix and match various ways of assembling blocks together**].\n","In the following example, we nest blocks in some creative ways.\n"]},
{"cell_type":"code","execution_count":null,"id":"51bff270","metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:50:13.773177Z","iopub.status.busy":"2022-07-31T02:50:13.772720Z","iopub.status.idle":"2022-07-31T02:50:13.781728Z","shell.execute_reply":"2022-07-31T02:50:13.781098Z"},"origin_pos":37,"tab":["pytorch"],"id":"51bff270","outputId":"c9b5cd21-fff0-4302-9986-9de2c6870c44"},"outputs":[{"data":{"text/plain":["tensor(-0.1322, grad_fn=<SumBackward0>)"]},"execution_count":8,"metadata":{},"output_type":"execute_result"}],"source":["class NestMLP(nn.Module):\n","    def __init__(self):\n","        super().__init__()\n","        self.net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),\n","                                 nn.Linear(64, 32), nn.ReLU())\n","        self.linear = nn.Linear(32, 16)\n","\n","    def forward(self, X):\n","        return self.linear(self.net(X))\n","\n","chimera = nn.Sequential(NestMLP(), nn.Linear(16, 20), FixedHiddenMLP())\n","chimera(X)"]},
{"cell_type":"markdown","id":"bed7bc76","metadata":{"origin_pos":39,"id":"bed7bc76"},"source":["## Efficiency\n"]},
{"cell_type":"markdown","id":"602c48c1","metadata":{"origin_pos":41,"tab":["pytorch"],"id":"602c48c1"},"source":["You might start to worry about efficiency.\n","After all, we are performing lots of dictionary lookups,\n","code execution, and lots of other Pythonic things in what is supposed to be a high-performance deep learning library.\n","The problems of Python's [global interpreter lock](https://wiki.python.org/moin/GlobalInterpreterLock)\n","are well known.\n","In the context of deep learning, we worry that our extremely fast GPU(s) may have to wait until the CPU has run the Python code before they can be given another job to run.\n"]},
{"cell_type":"markdown","id":"d29cacb2","metadata":{"origin_pos":43,"id":"d29cacb2"},"source":["## Summary\n","\n","* A block can consist of many layers; a block can consist of many blocks.\n","* A block can contain code.\n","* Blocks take care of lots of housekeeping, including parameter initialization and backpropagation.\n","* The sequential concatenation of layers and blocks is handled by the `Sequential` block.\n","\n","## Exercises\n","\n","1. 
What kinds of problems will occur if you change `MySequential` to store blocks in a Python list?\n","1. Implement a block that takes two blocks as arguments, say `net1` and `net2`, and returns the concatenated output of both networks in the forward propagation. This is also called a parallel block.\n","1. Assume that you want to concatenate multiple instances of the same network. Implement a function that generates multiple instances of the same block and build a larger network from them.\n"]},
{"cell_type":"markdown","id":"b2e463fd","metadata":{"origin_pos":45,"tab":["pytorch"],"id":"b2e463fd"},"source":["[Discussions](https://discuss.d2l.ai/t/1827)\n"]},
{"cell_type":"markdown","metadata":{"origin_pos":0,"id":"5dd9a2c1"},"source":["# Parameter Management\n","\n","Once we have chosen an architecture and set our hyperparameters, we proceed to the training stage.\n","Here, our goal is to find parameter values that minimize the loss function.\n","After training, we will need these parameters in order to make future predictions.\n","Additionally, we will sometimes wish to extract the parameters, either to reuse them in some other context,\n","to save the model to disk so that it may be executed in other software,\n","or to examine them in the hope of gaining scientific understanding.\n","\n","So far, we have relied solely on the deep learning framework to do the training work,\n","ignoring the nitty-gritty details of how parameters are manipulated.\n","In this section, we cover the following:\n","\n","* Accessing parameters for debugging, diagnostics, and visualization.\n","* Parameter initialization.\n","* Sharing parameters across different model components.\n","\n","(**We start by focusing on a multilayer perceptron with one hidden layer.**)\n"],"id":"5dd9a2c1"},
{"cell_type":"code","execution_count":6,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:26.869753Z","iopub.status.busy":"2022-07-31T02:34:26.869490Z","iopub.status.idle":"2022-07-31T02:34:27.572375Z","shell.execute_reply":"2022-07-31T02:34:27.571664Z"},"origin_pos":2,"tab":["pytorch"],"id":"3ed99574","outputId":"ef4f5019-85f4-49c0-d4ca-35ecd7e68650","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664152133002,"user_tz":-480,"elapsed":570,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["tensor([[0.0735],\n","        [0.1160]], grad_fn=<AddmmBackward0>)"]},"metadata":{},"execution_count":6}],"source":["import torch\n","from torch import nn\n","\n","net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))\n","X = torch.rand(size=(2, 4))\n","net(X)"],"id":"3ed99574"},{"cell_type":"markdown","metadata":{"origin_pos":4,"id":"f802b767"},"source":["## 
[**Parameter Access**]\n","\n","We start with how to access parameters from the models we already know.\n","When a model is defined via the `Sequential` class,\n","we can access any layer by indexing into the model as though it were a list.\n","Each layer's parameters are conveniently located in its attributes.\n","As shown below, we can inspect the parameters of the second fully-connected layer.\n"],"id":"f802b767"},
{"cell_type":"code","execution_count":7,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.575738Z","iopub.status.busy":"2022-07-31T02:34:27.575342Z","iopub.status.idle":"2022-07-31T02:34:27.580762Z","shell.execute_reply":"2022-07-31T02:34:27.580035Z"},"origin_pos":6,"tab":["pytorch"],"id":"d716f025","outputId":"a6e9d8da-1211-4cbd-e4a9-e69b8b8ec264","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664152139986,"user_tz":-480,"elapsed":553,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"stream","name":"stdout","text":["OrderedDict([('weight', tensor([[-0.0364, -0.2193, -0.1377,  0.2372,  0.1205,  0.0513, -0.1917, -0.0727]])), ('bias', tensor([0.0584]))])\n"]}],"source":["print(net[2].state_dict())"],"id":"d716f025"},
{"cell_type":"markdown","metadata":{"origin_pos":8,"id":"5f874631"},"source":["The output tells us a few important things.\n","First, this fully-connected layer contains two parameters, the weights and the biases of that layer.\n","Both are stored as single-precision floats (float32).\n","Note that the parameter names allow us to uniquely identify each parameter, even in a network containing hundreds of layers.\n","\n","### [**Targeted Parameters**]\n","\n","Note that each parameter is represented as an instance of the parameter class.\n","To do anything useful with the parameters, we first need to access the underlying numerical values.\n","There are several ways to do this. Some are simpler, while others are more general.\n","The following code extracts the bias from the second fully-connected layer (i.e., the third neural network layer),\n","which returns a parameter class instance, and then further accesses that parameter's value.\n"],"id":"5f874631"},
{"cell_type":"code","execution_count":8,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.583948Z","iopub.status.busy":"2022-07-31T02:34:27.583344Z","iopub.status.idle":"2022-07-31T02:34:27.589583Z","shell.execute_reply":"2022-07-31T02:34:27.588617Z"},"origin_pos":10,"tab":["pytorch"],"id":"0037f9c3","outputId":"06112c4e-e21f-4955-8231-61f6ea9eeab4","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664152196194,"user_tz":-480,"elapsed":568,"user":{"displayName":"Geeks 
Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"stream","name":"stdout","text":["<class 'torch.nn.parameter.Parameter'>\n","Parameter containing:\n","tensor([0.0584], requires_grad=True)\n","tensor([0.0584])\n"]}],"source":["print(type(net[2].bias))\n","print(net[2].bias)\n","print(net[2].bias.data)"],"id":"0037f9c3"},{"cell_type":"markdown","metadata":{"origin_pos":12,"tab":["pytorch"],"id":"3f13fc04"},"source":["参数是复合的对象，包含值、梯度和额外信息。\n","这就是我们需要显式参数值的原因。\n","除了值之外，我们还可以访问每个参数的梯度。\n","在上面这个网络中，由于我们还没有调用反向传播，所以参数的梯度处于初始状态。\n"],"id":"3f13fc04"},{"cell_type":"code","execution_count":9,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.592937Z","iopub.status.busy":"2022-07-31T02:34:27.592216Z","iopub.status.idle":"2022-07-31T02:34:27.598557Z","shell.execute_reply":"2022-07-31T02:34:27.597589Z"},"origin_pos":14,"tab":["pytorch"],"id":"07a112c8","outputId":"6f7993ba-b701-43ea-c5a7-698b72018d45","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664152213352,"user_tz":-480,"elapsed":655,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["True"]},"metadata":{},"execution_count":9}],"source":["net[2].weight.grad == None"],"id":"07a112c8"},{"cell_type":"markdown","metadata":{"origin_pos":15,"id":"5e5615e4"},"source":["### 
[**一次性访问所有参数**]\n","\n","当我们需要对所有参数执行操作时，逐个访问它们可能会很麻烦。\n","当我们处理更复杂的块（例如，嵌套块）时，情况可能会变得特别复杂，\n","因为我们需要递归整个树来提取每个子块的参数。\n","下面，我们将通过演示来比较访问第一个全连接层的参数和访问所有层。\n"],"id":"5e5615e4"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.601458Z","iopub.status.busy":"2022-07-31T02:34:27.601058Z","iopub.status.idle":"2022-07-31T02:34:27.606788Z","shell.execute_reply":"2022-07-31T02:34:27.605795Z"},"origin_pos":17,"tab":["pytorch"],"id":"e8b57f24","outputId":"c65b86f2-8c80-447d-9164-766e6e6989a4"},"outputs":[{"name":"stdout","output_type":"stream","text":["('weight', torch.Size([8, 4])) ('bias', torch.Size([8]))\n","('0.weight', torch.Size([8, 4])) ('0.bias', torch.Size([8])) ('2.weight', torch.Size([1, 8])) ('2.bias', torch.Size([1]))\n"]}],"source":["print(*[(name, param.shape) for name, param in net[0].named_parameters()])\n","print(*[(name, param.shape) for name, param in net.named_parameters()])"],"id":"e8b57f24"},{"cell_type":"markdown","metadata":{"origin_pos":19,"id":"5ec219a7"},"source":["这为我们提供了另一种访问网络参数的方式，如下所示。\n"],"id":"5ec219a7"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.610770Z","iopub.status.busy":"2022-07-31T02:34:27.610298Z","iopub.status.idle":"2022-07-31T02:34:27.615718Z","shell.execute_reply":"2022-07-31T02:34:27.614944Z"},"origin_pos":21,"tab":["pytorch"],"id":"22d6d41c","outputId":"72e749ce-f907-42b1-9a9e-f771cfc29b54"},"outputs":[{"data":{"text/plain":["tensor([0.2871])"]},"execution_count":6,"metadata":{},"output_type":"execute_result"}],"source":["net.state_dict()['2.bias'].data"],"id":"22d6d41c"},{"cell_type":"markdown","metadata":{"origin_pos":23,"id":"29ac48d4"},"source":["### 
[**从嵌套块收集参数**]\n","\n","让我们看看，如果我们将多个块相互嵌套，参数命名约定是如何工作的。\n","我们首先定义一个生成块的函数（可以说是“块工厂”），然后将这些块组合到更大的块中。\n"],"id":"29ac48d4"},{"cell_type":"code","execution_count":10,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.619790Z","iopub.status.busy":"2022-07-31T02:34:27.619312Z","iopub.status.idle":"2022-07-31T02:34:27.629178Z","shell.execute_reply":"2022-07-31T02:34:27.628478Z"},"origin_pos":25,"tab":["pytorch"],"id":"bfc3a497","outputId":"6b50fbfb-a8b2-4f5e-b2c0-0ecffed67e21","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664152526639,"user_tz":-480,"elapsed":728,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["tensor([[0.3002],\n","        [0.3000]], grad_fn=<AddmmBackward0>)"]},"metadata":{},"execution_count":10}],"source":["def block1():\n","    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(),\n","                         nn.Linear(8, 4), nn.ReLU())\n","\n","def block2():\n","    net = nn.Sequential()\n","    for i in range(4):\n","        # 在这里嵌套\n","        net.add_module(f'block {i}', block1())\n","    return net\n","\n","rgnet = nn.Sequential(block2(), nn.Linear(4, 1))\n","rgnet(X)"],"id":"bfc3a497"},{"cell_type":"markdown","metadata":{"origin_pos":27,"id":"ab151cbc"},"source":["[**设计了网络后，我们看看它是如何工作的。**]\n"],"id":"ab151cbc"},{"cell_type":"code","execution_count":11,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.633107Z","iopub.status.busy":"2022-07-31T02:34:27.632632Z","iopub.status.idle":"2022-07-31T02:34:27.636665Z","shell.execute_reply":"2022-07-31T02:34:27.635958Z"},"origin_pos":29,"tab":["pytorch"],"id":"fab91a23","outputId":"c849d566-7e1f-4477-cda1-89aa4c514c07","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664152534211,"user_tz":-480,"elapsed":576,"user":{"displayName":"Geeks 
Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"stream","name":"stdout","text":["Sequential(\n","  (0): Sequential(\n","    (block 0): Sequential(\n","      (0): Linear(in_features=4, out_features=8, bias=True)\n","      (1): ReLU()\n","      (2): Linear(in_features=8, out_features=4, bias=True)\n","      (3): ReLU()\n","    )\n","    (block 1): Sequential(\n","      (0): Linear(in_features=4, out_features=8, bias=True)\n","      (1): ReLU()\n","      (2): Linear(in_features=8, out_features=4, bias=True)\n","      (3): ReLU()\n","    )\n","    (block 2): Sequential(\n","      (0): Linear(in_features=4, out_features=8, bias=True)\n","      (1): ReLU()\n","      (2): Linear(in_features=8, out_features=4, bias=True)\n","      (3): ReLU()\n","    )\n","    (block 3): Sequential(\n","      (0): Linear(in_features=4, out_features=8, bias=True)\n","      (1): ReLU()\n","      (2): Linear(in_features=8, out_features=4, bias=True)\n","      (3): ReLU()\n","    )\n","  )\n","  (1): Linear(in_features=4, out_features=1, bias=True)\n",")\n"]}],"source":["print(rgnet)"],"id":"fab91a23"},{"cell_type":"markdown","metadata":{"origin_pos":31,"id":"6116e82e"},"source":["因为层是分层嵌套的，所以我们也可以像通过嵌套列表索引一样访问它们。\n","下面，我们访问第一个主要的块中、第二个子块的第一层的偏置项。\n"],"id":"6116e82e"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.641182Z","iopub.status.busy":"2022-07-31T02:34:27.640699Z","iopub.status.idle":"2022-07-31T02:34:27.646040Z","shell.execute_reply":"2022-07-31T02:34:27.645323Z"},"origin_pos":33,"tab":["pytorch"],"id":"14bd585b","outputId":"30a5291c-23cc-49b5-e744-a299e531bcf7"},"outputs":[{"data":{"text/plain":["tensor([-0.0444, -0.4451, -0.4149,  0.0549, -0.0969,  0.2053, -0.2514,  0.0220])"]},"execution_count":9,"metadata":{},"output_type":"execute_result"}],"source":["rgnet[0][1][0].bias.data"],"id":"14bd585b"},{"cell_type":"markdown","metadata":{"origin_pos":35,"id":"2d6670e3"},"source":["## 
参数初始化\n","\n","知道了如何访问参数后，现在我们看看如何正确地初始化参数。\n","我们在 :numref:`sec_numerical_stability`中讨论了良好初始化的必要性。\n","深度学习框架提供默认随机初始化，\n","也允许我们创建自定义初始化方法，\n","满足我们通过其他规则实现初始化权重的需求。\n"],"id":"2d6670e3"},{"cell_type":"markdown","metadata":{"origin_pos":37,"tab":["pytorch"],"id":"d322b8ae"},"source":["默认情况下，PyTorch会根据一个范围均匀地初始化权重和偏置矩阵，\n","这个范围是根据输入和输出维度计算出的。\n","PyTorch的`nn.init`模块提供了多种预置初始化方法。\n"],"id":"d322b8ae"},{"cell_type":"markdown","metadata":{"origin_pos":39,"id":"aedc0411"},"source":["### [**内置初始化**]\n","\n","让我们首先调用内置的初始化器。\n","下面的代码将所有权重参数初始化为标准差为0.01的高斯随机变量，\n","且将偏置参数设置为0。\n"],"id":"aedc0411"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.649191Z","iopub.status.busy":"2022-07-31T02:34:27.648729Z","iopub.status.idle":"2022-07-31T02:34:27.655690Z","shell.execute_reply":"2022-07-31T02:34:27.654985Z"},"origin_pos":41,"tab":["pytorch"],"id":"2015e1c5","outputId":"9e19d1a9-c470-4b69-c236-39557fd6e846"},"outputs":[{"data":{"text/plain":["(tensor([-0.0145,  0.0053,  0.0055, -0.0044]), tensor(0.))"]},"execution_count":10,"metadata":{},"output_type":"execute_result"}],"source":["def init_normal(m):\n","    if type(m) == nn.Linear:\n","        nn.init.normal_(m.weight, mean=0, std=0.01)\n","        nn.init.zeros_(m.bias)\n","net.apply(init_normal)\n","net[0].weight.data[0], net[0].bias.data[0]"],"id":"2015e1c5"},{"cell_type":"markdown","metadata":{"origin_pos":43,"id":"4b58497c"},"source":["我们还可以将所有参数初始化为给定的常数，比如初始化为1。\n"],"id":"4b58497c"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.658675Z","iopub.status.busy":"2022-07-31T02:34:27.658217Z","iopub.status.idle":"2022-07-31T02:34:27.665187Z","shell.execute_reply":"2022-07-31T02:34:27.664442Z"},"origin_pos":45,"tab":["pytorch"],"id":"1ec562a8","outputId":"cc9f1569-7d1e-41d4-80d7-34bfabb8605c"},"outputs":[{"data":{"text/plain":["(tensor([1., 1., 1., 1.]), 
tensor(0.))"]},"execution_count":11,"metadata":{},"output_type":"execute_result"}],"source":["def init_constant(m):\n","    if type(m) == nn.Linear:\n","        nn.init.constant_(m.weight, 1)\n","        nn.init.zeros_(m.bias)\n","net.apply(init_constant)\n","net[0].weight.data[0], net[0].bias.data[0]"],"id":"1ec562a8"},{"cell_type":"markdown","metadata":{"origin_pos":47,"id":"35facca4"},"source":["我们还可以[**对某些块应用不同的初始化方法**]。\n","例如，下面我们使用Xavier初始化方法初始化第一个神经网络层，\n","然后将第三个神经网络层初始化为常量值42。\n"],"id":"35facca4"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.668259Z","iopub.status.busy":"2022-07-31T02:34:27.667805Z","iopub.status.idle":"2022-07-31T02:34:27.674356Z","shell.execute_reply":"2022-07-31T02:34:27.673656Z"},"origin_pos":49,"tab":["pytorch"],"id":"b09bef28","outputId":"77528620-0749-4665-c103-26d8d6d45a89"},"outputs":[{"name":"stdout","output_type":"stream","text":["tensor([-0.4792,  0.4968,  0.6094,  0.3063])\n","tensor([[42., 42., 42., 42., 42., 42., 42., 42.]])\n"]}],"source":["def init_xavier(m):\n","    if type(m) == nn.Linear:\n","        nn.init.xavier_uniform_(m.weight)\n","def init_42(m):\n","    if type(m) == nn.Linear:\n","        nn.init.constant_(m.weight, 42)\n","\n","net[0].apply(init_xavier)\n","net[2].apply(init_42)\n","print(net[0].weight.data[0])\n","print(net[2].weight.data)"],"id":"b09bef28"},{"cell_type":"markdown","metadata":{"origin_pos":51,"id":"5a6a9dd4"},"source":["### [**自定义初始化**]\n","\n","有时，深度学习框架没有提供我们需要的初始化方法。\n","在下面的例子中，我们使用以下的分布为任意权重参数$w$定义初始化方法：\n","\n","$$\n","\\begin{aligned}\n","    w \\sim \\begin{cases}\n","        U(5, 10) & \\text{ 可能性 } \\frac{1}{4} \\\\\n","            0    & \\text{ 可能性 } \\frac{1}{2} \\\\\n","        U(-10, -5) & \\text{ 可能性 } \\frac{1}{4}\n","    
\\end{cases}\n","\\end{aligned}\n","$$\n"],"id":"5a6a9dd4"},{"cell_type":"markdown","metadata":{"origin_pos":53,"tab":["pytorch"],"id":"b12896c5"},"source":["同样，我们实现了一个`my_init`函数来应用到`net`。\n"],"id":"b12896c5"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.677539Z","iopub.status.busy":"2022-07-31T02:34:27.677071Z","iopub.status.idle":"2022-07-31T02:34:27.685091Z","shell.execute_reply":"2022-07-31T02:34:27.684375Z"},"origin_pos":56,"tab":["pytorch"],"id":"56e64516","outputId":"955b96d7-b3e2-41fa-ae65-1c09875d8872"},"outputs":[{"name":"stdout","output_type":"stream","text":["Init weight torch.Size([8, 4])\n","Init weight torch.Size([1, 8])\n"]},{"data":{"text/plain":["tensor([[-6.9027,  7.6638, -0.0000, -0.0000],\n","        [-0.0000,  5.5632, -6.1899,  0.0000]], grad_fn=<SliceBackward0>)"]},"execution_count":13,"metadata":{},"output_type":"execute_result"}],"source":["def my_init(m):\n","    if type(m) == nn.Linear:\n","        print(\"Init\", *[(name, param.shape)\n","                        for name, param in m.named_parameters()][0])\n","        nn.init.uniform_(m.weight, -10, 10)\n","        m.weight.data *= m.weight.data.abs() >= 5\n","\n","net.apply(my_init)\n","net[0].weight[:2]"],"id":"56e64516"},{"cell_type":"markdown","metadata":{"origin_pos":58,"id":"745864eb"},"source":["注意，我们始终可以直接设置参数。\n"],"id":"745864eb"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.688187Z","iopub.status.busy":"2022-07-31T02:34:27.687700Z","iopub.status.idle":"2022-07-31T02:34:27.693849Z","shell.execute_reply":"2022-07-31T02:34:27.693173Z"},"origin_pos":60,"tab":["pytorch"],"id":"53788268","outputId":"e27b697a-52ac-44d2-cf03-4046b89c52ee"},"outputs":[{"data":{"text/plain":["tensor([42.0000,  8.6638,  1.0000,  1.0000])"]},"execution_count":14,"metadata":{},"output_type":"execute_result"}],"source":["net[0].weight.data[:] += 1\n","net[0].weight.data[0, 0] = 
42\n","net[0].weight.data[0]"],"id":"53788268"},{"cell_type":"markdown","metadata":{"origin_pos":63,"id":"d41c97da"},"source":["## [**参数绑定**]\n","\n","有时我们希望在多个层间共享参数：\n","我们可以定义一个稠密层，然后使用它的参数来设置另一个层的参数。\n"],"id":"d41c97da"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:34:27.696921Z","iopub.status.busy":"2022-07-31T02:34:27.696459Z","iopub.status.idle":"2022-07-31T02:34:27.705214Z","shell.execute_reply":"2022-07-31T02:34:27.704526Z"},"origin_pos":65,"tab":["pytorch"],"id":"c1defe46","outputId":"1cfa8023-4fa9-42e6-a7d7-83e71a691e89"},"outputs":[{"name":"stdout","output_type":"stream","text":["tensor([True, True, True, True, True, True, True, True])\n","tensor([True, True, True, True, True, True, True, True])\n"]}],"source":["# 我们需要给共享层一个名称，以便可以引用它的参数\n","shared = nn.Linear(8, 8)\n","net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),\n","                    shared, nn.ReLU(),\n","                    shared, nn.ReLU(),\n","                    nn.Linear(8, 1))\n","net(X)\n","# 检查参数是否相同\n","print(net[2].weight.data[0] == net[4].weight.data[0])\n","net[2].weight.data[0, 0] = 100\n","# 确保它们实际上是同一个对象，而不只是有相同的值\n","print(net[2].weight.data[0] == net[4].weight.data[0])"],"id":"c1defe46"},{"cell_type":"markdown","metadata":{"origin_pos":68,"tab":["pytorch"],"id":"58307879"},"source":["这个例子表明第三个和第五个神经网络层的参数是绑定的。\n","它们不仅值相等，而且由相同的张量表示。\n","因此，如果我们改变其中一个参数，另一个参数也会改变。\n","你可能会思考：当参数绑定时，梯度会发生什么情况？\n","答案是由于模型参数包含梯度，因此在反向传播期间第二个隐藏层\n","（即第三个神经网络层）和第三个隐藏层（即第五个神经网络层）的梯度会加在一起。\n"],"id":"58307879"},{"cell_type":"markdown","metadata":{"origin_pos":69,"id":"050094b4"},"source":["## 小结\n","\n","* 我们有几种方法可以访问、初始化和绑定模型参数。\n","* 我们可以使用自定义初始化方法。\n","\n","## 练习\n","\n","1. 使用 :numref:`sec_model_construction` 中定义的`FancyMLP`模型，访问各个层的参数。\n","1. 查看初始化模块文档以了解不同的初始化方法。\n","1. 构建包含共享参数层的多层感知机并对其进行训练。在训练过程中，观察模型各层的参数和梯度。\n","1. 
为什么共享参数是个好主意？\n"],"id":"050094b4"},{"cell_type":"markdown","metadata":{"origin_pos":0,"id":"8cecf865"},"source":["# 自定义层\n","\n","深度学习成功背后的一个因素是神经网络的灵活性：\n","我们可以用创造性的方式组合不同的层，从而设计出适用于各种任务的架构。\n","例如，研究人员发明了专门用于处理图像、文本、序列数据和执行动态规划的层。\n","未来，你会遇到或要自己发明一个现在在深度学习框架中还不存在的层。\n","在这些情况下，你必须构建自定义层。在本节中，我们将向你展示如何构建。\n","\n","## 不带参数的层\n","\n","首先，我们(**构造一个没有任何参数的自定义层**)。\n","如果你还记得我们在 :numref:`sec_model_construction`对块的介绍，\n","这应该看起来很眼熟。\n","下面的`CenteredLayer`类要从其输入中减去均值。\n","要构建它，我们只需继承基础层类并实现前向传播功能。\n"],"id":"8cecf865"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:17:48.531563Z","iopub.status.busy":"2022-07-31T02:17:48.531009Z","iopub.status.idle":"2022-07-31T02:17:49.285343Z","shell.execute_reply":"2022-07-31T02:17:49.284238Z"},"origin_pos":2,"tab":["pytorch"],"id":"9e287c3e"},"outputs":[],"source":["import torch\n","import torch.nn.functional as F\n","from torch import nn\n","\n","\n","class CenteredLayer(nn.Module):\n","    def __init__(self):\n","        super().__init__()\n","\n","    def forward(self, X):\n","        return X - X.mean()"],"id":"9e287c3e"},{"cell_type":"markdown","metadata":{"origin_pos":4,"id":"0f796008"},"source":["让我们向该层提供一些数据，验证它是否能按预期工作。\n"],"id":"0f796008"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:17:49.290569Z","iopub.status.busy":"2022-07-31T02:17:49.289905Z","iopub.status.idle":"2022-07-31T02:17:49.309521Z","shell.execute_reply":"2022-07-31T02:17:49.308533Z"},"origin_pos":6,"tab":["pytorch"],"id":"fba9cb49","outputId":"4cd53e69-9410-45f7-fae3-3b10bbd3792b"},"outputs":[{"data":{"text/plain":["tensor([-2., -1.,  0.,  1.,  2.])"]},"execution_count":2,"metadata":{},"output_type":"execute_result"}],"source":["layer = CenteredLayer()\n","layer(torch.FloatTensor([1, 2, 3, 4, 
5]))"],"id":"fba9cb49"},{"cell_type":"markdown","metadata":{"origin_pos":8,"id":"6ab2e9fa"},"source":["现在，我们可以[**将层作为组件合并到更复杂的模型中**]。\n"],"id":"6ab2e9fa"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:17:49.313838Z","iopub.status.busy":"2022-07-31T02:17:49.313258Z","iopub.status.idle":"2022-07-31T02:17:49.319338Z","shell.execute_reply":"2022-07-31T02:17:49.318354Z"},"origin_pos":10,"tab":["pytorch"],"id":"e979f3f1"},"outputs":[],"source":["net = nn.Sequential(nn.Linear(8, 128), CenteredLayer())"],"id":"e979f3f1"},{"cell_type":"markdown","metadata":{"origin_pos":12,"id":"0a6243c7"},"source":["作为额外的健全性检查，我们可以在向该网络发送随机数据后，检查均值是否为0。\n","由于我们处理的是浮点数，因为存储精度的原因，我们仍然可能会看到一个非常小的非零数。\n"],"id":"0a6243c7"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:17:49.323305Z","iopub.status.busy":"2022-07-31T02:17:49.322738Z","iopub.status.idle":"2022-07-31T02:17:49.332536Z","shell.execute_reply":"2022-07-31T02:17:49.331588Z"},"origin_pos":14,"tab":["pytorch"],"id":"6df8ad4a","outputId":"04d8d0c2-caca-4649-fd2e-51d96bb36d17"},"outputs":[{"data":{"text/plain":["tensor(0., grad_fn=<MeanBackward0>)"]},"execution_count":4,"metadata":{},"output_type":"execute_result"}],"source":["Y = net(torch.rand(4, 8))\n","Y.mean()"],"id":"6df8ad4a"},{"cell_type":"markdown","metadata":{"origin_pos":16,"id":"912f4e12"},"source":["## 
[**带参数的层**]\n","\n","以上我们知道了如何定义简单的层，下面我们继续定义具有参数的层，\n","这些参数可以通过训练进行调整。\n","我们可以使用内置函数来创建参数，这些函数提供一些基本的管理功能。\n","比如管理访问、初始化、共享、保存和加载模型参数。\n","这样做的好处之一是：我们不需要为每个自定义层编写自定义的序列化程序。\n","\n","现在，让我们实现自定义版本的全连接层。\n","回想一下，该层需要两个参数，一个用于表示权重，另一个用于表示偏置项。\n","在此实现中，我们使用修正线性单元作为激活函数。\n","该层需要输入参数：`in_units`和`units`，分别表示输入数和输出数。\n"],"id":"912f4e12"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:17:49.336805Z","iopub.status.busy":"2022-07-31T02:17:49.336323Z","iopub.status.idle":"2022-07-31T02:17:49.343528Z","shell.execute_reply":"2022-07-31T02:17:49.342640Z"},"origin_pos":18,"tab":["pytorch"],"id":"b8eac37b"},"outputs":[],"source":["class MyLinear(nn.Module):\n","    def __init__(self, in_units, units):\n","        super().__init__()\n","        self.weight = nn.Parameter(torch.randn(in_units, units))\n","        self.bias = nn.Parameter(torch.randn(units,))\n","    def forward(self, X):\n","        # 直接使用参数本身（而不是 .data），这样梯度才能在训练时正确地反向传播\n","        linear = torch.matmul(X, self.weight) + self.bias\n","        return F.relu(linear)"],"id":"b8eac37b"},{"cell_type":"markdown","metadata":{"origin_pos":21,"tab":["pytorch"],"id":"bffb1d19"},"source":["接下来，我们实例化`MyLinear`类并访问其模型参数。\n"],"id":"bffb1d19"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:17:49.347044Z","iopub.status.busy":"2022-07-31T02:17:49.346545Z","iopub.status.idle":"2022-07-31T02:17:49.354965Z","shell.execute_reply":"2022-07-31T02:17:49.354032Z"},"origin_pos":23,"tab":["pytorch"],"id":"b74e5205","outputId":"c078a483-6cea-4a25-e874-82d828f2410a"},"outputs":[{"data":{"text/plain":["Parameter containing:\n","tensor([[-1.4779, -0.6027, -0.2225],\n","        [ 1.1270, -0.6127, -0.2008],\n","        [-2.1864, -1.0548,  0.2558],\n","        [ 0.0225,  0.0553,  0.4876],\n","        [ 0.3558,  1.1427,  1.0245]], requires_grad=True)"]},"execution_count":6,"metadata":{},"output_type":"execute_result"}],"source":["linear = MyLinear(5, 
3)\n","linear.weight"],"id":"b74e5205"},{"cell_type":"markdown","metadata":{"origin_pos":25,"id":"9f2b3c45"},"source":["我们可以[**使用自定义层直接执行前向传播计算**]。\n"],"id":"9f2b3c45"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:17:49.358920Z","iopub.status.busy":"2022-07-31T02:17:49.358274Z","iopub.status.idle":"2022-07-31T02:17:49.366967Z","shell.execute_reply":"2022-07-31T02:17:49.366204Z"},"origin_pos":27,"tab":["pytorch"],"id":"e3e2d09d","outputId":"ab436466-4a9a-4088-ccca-dbd4d8cae110"},"outputs":[{"data":{"text/plain":["tensor([[0.0000, 0.0000, 0.2187],\n","        [0.0000, 0.0000, 0.0000]])"]},"execution_count":7,"metadata":{},"output_type":"execute_result"}],"source":["linear(torch.rand(2, 5))"],"id":"e3e2d09d"},{"cell_type":"markdown","metadata":{"origin_pos":29,"id":"72b661dc"},"source":["我们还可以(**使用自定义层构建模型**)，就像使用内置的全连接层一样使用自定义层。\n"],"id":"72b661dc"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:17:49.370583Z","iopub.status.busy":"2022-07-31T02:17:49.369984Z","iopub.status.idle":"2022-07-31T02:17:49.378565Z","shell.execute_reply":"2022-07-31T02:17:49.377692Z"},"origin_pos":31,"tab":["pytorch"],"id":"febc961d","outputId":"1b79be15-1e20-40f3-bb3f-826e142cef1c"},"outputs":[{"data":{"text/plain":["tensor([[ 7.4571],\n","        [12.7505]])"]},"execution_count":8,"metadata":{},"output_type":"execute_result"}],"source":["net = nn.Sequential(MyLinear(64, 8), MyLinear(8, 1))\n","net(torch.rand(2, 64))"],"id":"febc961d"},{"cell_type":"markdown","metadata":{"origin_pos":33,"id":"5341acfa"},"source":["## 小结\n","\n","* 我们可以通过基本层类设计自定义层。这允许我们定义灵活的新层，其行为与深度学习框架中的任何现有层不同。\n","* 在自定义层定义完成后，我们就可以在任意环境和网络架构中调用该自定义层。\n","* 层可以有局部参数，这些参数可以通过内置函数创建。\n","\n","## 练习\n","\n","1. 设计一个接受输入并计算张量降维的层，它返回$y_k = \\sum_{i, j} W_{ijk} x_i x_j$。\n","1. 
设计一个返回输入数据的傅立叶系数前半部分的层。\n"],"id":"5341acfa"},{"cell_type":"markdown","metadata":{"origin_pos":0,"id":"3e089410"},"source":["# 读写文件\n","\n","到目前为止，我们讨论了如何处理数据，\n","以及如何构建、训练和测试深度学习模型。\n","然而，有时我们希望保存训练的模型，\n","以备将来在各种环境中使用（比如在部署中进行预测）。\n","此外，当运行一个耗时较长的训练过程时，\n","最佳的做法是定期保存中间结果，\n","以确保在服务器电源被不小心断掉时，我们不会损失几天的计算结果。\n","因此，现在是时候学习如何加载和存储权重向量和整个模型了。\n","\n","## (**加载和保存张量**)\n","\n","对于单个张量，我们可以直接调用`load`和`save`函数分别读写它们。\n","这两个函数都要求我们提供一个名称，`save`要求将要保存的变量作为输入。\n"],"id":"3e089410"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:32:55.803857Z","iopub.status.busy":"2022-07-31T02:32:55.803417Z","iopub.status.idle":"2022-07-31T02:32:56.504212Z","shell.execute_reply":"2022-07-31T02:32:56.503478Z"},"origin_pos":2,"tab":["pytorch"],"id":"37d8dbd1"},"outputs":[],"source":["import torch\n","from torch import nn\n","from torch.nn import functional as F\n","\n","x = torch.arange(4)\n","torch.save(x, 'x-file')"],"id":"37d8dbd1"},{"cell_type":"markdown","metadata":{"origin_pos":4,"id":"baf2885d"},"source":["我们现在可以将存储在文件中的数据读回内存。\n"],"id":"baf2885d"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:32:56.507830Z","iopub.status.busy":"2022-07-31T02:32:56.507438Z","iopub.status.idle":"2022-07-31T02:32:56.518747Z","shell.execute_reply":"2022-07-31T02:32:56.518155Z"},"origin_pos":6,"tab":["pytorch"],"id":"b80fe2ae","outputId":"a18d25fe-84d8-465f-bfc1-57583238ea85"},"outputs":[{"data":{"text/plain":["tensor([0, 1, 2, 3])"]},"execution_count":2,"metadata":{},"output_type":"execute_result"}],"source":["x2 = 
torch.load('x-file')\n","x2"],"id":"b80fe2ae"},{"cell_type":"markdown","metadata":{"origin_pos":8,"id":"f68d2fa6"},"source":["我们可以[**存储一个张量列表，然后把它们读回内存。**]\n"],"id":"f68d2fa6"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:32:56.521551Z","iopub.status.busy":"2022-07-31T02:32:56.521348Z","iopub.status.idle":"2022-07-31T02:32:56.528208Z","shell.execute_reply":"2022-07-31T02:32:56.527619Z"},"origin_pos":10,"tab":["pytorch"],"id":"e7b0261e","outputId":"8b50be48-2171-46f0-93a1-700cb3cc73ba"},"outputs":[{"data":{"text/plain":["(tensor([0, 1, 2, 3]), tensor([0., 0., 0., 0.]))"]},"execution_count":3,"metadata":{},"output_type":"execute_result"}],"source":["y = torch.zeros(4)\n","torch.save([x, y],'x-files')\n","x2, y2 = torch.load('x-files')\n","(x2, y2)"],"id":"e7b0261e"},{"cell_type":"markdown","metadata":{"origin_pos":12,"id":"4227d24b"},"source":["我们甚至可以(**写入或读取从字符串映射到张量的字典**)。\n","当我们要读取或写入模型中的所有权重时，这很方便。\n"],"id":"4227d24b"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:32:56.530996Z","iopub.status.busy":"2022-07-31T02:32:56.530638Z","iopub.status.idle":"2022-07-31T02:32:56.536985Z","shell.execute_reply":"2022-07-31T02:32:56.536389Z"},"origin_pos":14,"tab":["pytorch"],"id":"9d62889d","outputId":"f0778f7d-284e-465b-ebac-840efd2d0447"},"outputs":[{"data":{"text/plain":["{'x': tensor([0, 1, 2, 3]), 'y': tensor([0., 0., 0., 0.])}"]},"execution_count":4,"metadata":{},"output_type":"execute_result"}],"source":["mydict = {'x': x, 'y': y}\n","torch.save(mydict, 'mydict')\n","mydict2 = torch.load('mydict')\n","mydict2"],"id":"9d62889d"},{"cell_type":"markdown","metadata":{"origin_pos":16,"id":"9b254c7b"},"source":["## 
[**加载和保存模型参数**]\n","\n","保存单个权重向量（或其他张量）确实有用，\n","但是如果我们想保存整个模型，并在以后加载它们，\n","单独保存每个向量则会变得很麻烦。\n","毕竟，我们可能有数百个参数散布在各处。\n","因此，深度学习框架提供了内置函数来保存和加载整个网络。\n","需要注意的一个重要细节是，这将保存模型的参数而不是保存整个模型。\n","例如，如果我们有一个3层多层感知机，我们需要单独指定架构。\n","因为模型本身可以包含任意代码，所以模型本身难以序列化。\n","因此，为了恢复模型，我们需要用代码生成架构，\n","然后从磁盘加载参数。\n","让我们从熟悉的多层感知机开始尝试一下。\n"],"id":"9b254c7b"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:32:56.539741Z","iopub.status.busy":"2022-07-31T02:32:56.539403Z","iopub.status.idle":"2022-07-31T02:32:56.545387Z","shell.execute_reply":"2022-07-31T02:32:56.544765Z"},"origin_pos":18,"tab":["pytorch"],"id":"44bdd6df"},"outputs":[],"source":["class MLP(nn.Module):\n","    def __init__(self):\n","        super().__init__()\n","        self.hidden = nn.Linear(20, 256)\n","        self.output = nn.Linear(256, 10)\n","\n","    def forward(self, x):\n","        return self.output(F.relu(self.hidden(x)))\n","\n","net = MLP()\n","X = torch.randn(size=(2, 20))\n","Y = net(X)"],"id":"44bdd6df"},{"cell_type":"markdown","metadata":{"origin_pos":20,"id":"416bb902"},"source":["接下来，我们[**将模型的参数存储在一个叫做“mlp.params”的文件中。**]\n"],"id":"416bb902"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:32:56.549285Z","iopub.status.busy":"2022-07-31T02:32:56.548944Z","iopub.status.idle":"2022-07-31T02:32:56.553472Z","shell.execute_reply":"2022-07-31T02:32:56.552867Z"},"origin_pos":22,"tab":["pytorch"],"id":"0c11c100"},"outputs":[],"source":["torch.save(net.state_dict(), 
'mlp.params')"],"id":"0c11c100"},{"cell_type":"markdown","metadata":{"origin_pos":24,"id":"7bef8dab"},"source":["为了恢复模型，我们[**实例化了原始多层感知机模型的一个备份。**]\n","这里我们不需要随机初始化模型参数，而是(**直接读取文件中存储的参数。**)\n"],"id":"7bef8dab"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:32:56.556338Z","iopub.status.busy":"2022-07-31T02:32:56.556009Z","iopub.status.idle":"2022-07-31T02:32:56.562475Z","shell.execute_reply":"2022-07-31T02:32:56.561831Z"},"origin_pos":26,"tab":["pytorch"],"id":"3b5367f0","outputId":"be76f52e-16aa-4afa-b6e5-409c71a3c5d3"},"outputs":[{"data":{"text/plain":["MLP(\n","  (hidden): Linear(in_features=20, out_features=256, bias=True)\n","  (output): Linear(in_features=256, out_features=10, bias=True)\n",")"]},"execution_count":7,"metadata":{},"output_type":"execute_result"}],"source":["clone = MLP()\n","clone.load_state_dict(torch.load('mlp.params'))\n","clone.eval()"],"id":"3b5367f0"},{"cell_type":"markdown","metadata":{"origin_pos":28,"id":"c2965c95"},"source":["由于两个实例具有相同的模型参数，在输入相同的`X`时，\n","两个实例的计算结果应该相同。\n","让我们来验证一下。\n"],"id":"c2965c95"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T02:32:56.565379Z","iopub.status.busy":"2022-07-31T02:32:56.565041Z","iopub.status.idle":"2022-07-31T02:32:56.570481Z","shell.execute_reply":"2022-07-31T02:32:56.569880Z"},"origin_pos":30,"tab":["pytorch"],"id":"bfd32641","outputId":"6d5cfe55-ee8b-4246-85a1-f06192df2e3e"},"outputs":[{"data":{"text/plain":["tensor([[True, True, True, True, True, True, True, True, True, True],\n","        [True, True, True, True, True, True, True, True, True, True]])"]},"execution_count":8,"metadata":{},"output_type":"execute_result"}],"source":["Y_clone = clone(X)\n","Y_clone == Y"],"id":"bfd32641"},{"cell_type":"markdown","metadata":{"origin_pos":32,"id":"2092c4f8"},"source":["## 小结\n","\n","* `save`和`load`函数可用于张量对象的文件读写。\n","* 我们可以通过参数字典保存和加载网络的全部参数。\n","* 
保存架构必须在代码中完成，而不是在参数中完成。\n","\n","## 练习\n","\n","1. 即使不需要将经过训练的模型部署到不同的设备上，存储模型参数还有什么实际的好处？\n","1. 假设我们只想复用网络的一部分，以将其合并到不同的网络架构中。比如说，如果你想在一个新的网络中使用之前网络的前两层，你该怎么做？\n","1. 如何同时保存网络架构和参数？你会对架构加上什么限制？\n"],"id":"2092c4f8"},{"cell_type":"markdown","metadata":{"origin_pos":0,"id":"17458a72"},"source":["# GPU\n",":label:`sec_use_gpu`\n","\n","在 :numref:`tab_intro_decade`中，\n","我们回顾了过去20年计算能力的快速增长。\n","简而言之，自2000年以来，GPU性能每十年增长1000倍。\n","\n","本节，我们将讨论如何利用这种计算性能进行研究。\n","首先是如何使用单个GPU，然后是如何使用多个GPU和多个服务器（具有多个GPU）。\n","\n","我们先看看如何使用单个NVIDIA GPU进行计算。\n","首先，确保你至少安装了一个NVIDIA GPU。\n","然后，下载[NVIDIA驱动和CUDA](https://developer.nvidia.com/cuda-downloads)\n","并按照提示设置适当的路径。\n","当这些准备工作完成，就可以使用`nvidia-smi`命令来(**查看显卡信息。**)\n"],"id":"17458a72"},{"cell_type":"code","execution_count":12,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:45.686782Z","iopub.status.busy":"2022-07-31T03:18:45.686164Z","iopub.status.idle":"2022-07-31T03:18:46.588389Z","shell.execute_reply":"2022-07-31T03:18:46.587649Z"},"origin_pos":1,"tab":["pytorch"],"id":"c27d1e15","outputId":"7f7c2a62-fc56-4187-d253-5daaa9a31a11","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664153650557,"user_tz":-480,"elapsed":685,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"stream","name":"stdout","text":["NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. 
Make sure that the latest NVIDIA driver is installed and running.\n","\n"]}],"source":["!nvidia-smi"],"id":"c27d1e15"},{"cell_type":"markdown","metadata":{"origin_pos":3,"tab":["pytorch"],"id":"ed72ec96"},"source":["在PyTorch中，每个数组都有一个设备（device），\n","我们通常将其称为上下文（context）。\n","默认情况下，所有变量和相关的计算都分配给CPU。\n","有时上下文可能是GPU。\n","当我们跨多个服务器部署作业时，事情会变得更加棘手。\n","通过智能地将数组分配给上下文，\n","我们可以最大限度地减少在设备之间传输数据的时间。\n","例如，当在带有GPU的服务器上训练神经网络时，\n","我们通常希望模型的参数在GPU上。\n"],"id":"ed72ec96"},{"cell_type":"markdown","metadata":{"origin_pos":4,"id":"287aa32e"},"source":["要运行此部分中的程序，至少需要两个GPU。\n","注意，对于大多数桌面计算机来说，这可能是奢侈的，但在云中很容易获得。\n","例如，你可以使用AWS EC2的多GPU实例。\n","本书的其他章节大都不需要多个GPU，\n","而本节只是为了展示数据如何在不同的设备之间传递。\n","\n","## [**计算设备**]\n","\n","我们可以指定用于存储和计算的设备，如CPU和GPU。\n","默认情况下，张量是在内存中创建的，然后使用CPU计算它。\n"],"id":"287aa32e"},{"cell_type":"markdown","metadata":{"origin_pos":6,"tab":["pytorch"],"id":"663a2155"},"source":["在PyTorch中，CPU和GPU可以用`torch.device('cpu')`\n","和`torch.device('cuda')`表示。\n","应该注意的是，`cpu`设备意味着所有物理CPU和内存，\n","这意味着PyTorch的计算将尝试使用所有CPU核心。\n","然而，`gpu`设备只代表一个卡和相应的显存。\n","如果有多个GPU，我们使用`torch.device(f'cuda:{i}')`\n","来表示第$i$块GPU（$i$从0开始）。\n","另外，`cuda:0`和`cuda`是等价的。\n"],"id":"663a2155"},{"cell_type":"code","execution_count":1,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:46.591853Z","iopub.status.busy":"2022-07-31T03:18:46.591637Z","iopub.status.idle":"2022-07-31T03:18:47.237946Z","shell.execute_reply":"2022-07-31T03:18:47.237306Z"},"origin_pos":8,"tab":["pytorch"],"id":"33eee1e0","outputId":"befe938e-6e42-46d6-ab3b-4ac30b3b2fd2","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664153756409,"user_tz":-480,"elapsed":2339,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["(device(type='cpu'), device(type='cuda'), device(type='cuda', index=1))"]},"metadata":{},"execution_count":1}],"source":["import torch\n","from torch import 
nn\n","\n","torch.device('cpu'), torch.device('cuda'), torch.device('cuda:1')"],"id":"33eee1e0"},{"cell_type":"markdown","metadata":{"origin_pos":10,"id":"280a2bef"},"source":["We can (**query the number of available GPUs.**)\n"],"id":"280a2bef"},{"cell_type":"code","execution_count":2,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:47.240931Z","iopub.status.busy":"2022-07-31T03:18:47.240722Z","iopub.status.idle":"2022-07-31T03:18:47.375362Z","shell.execute_reply":"2022-07-31T03:18:47.374682Z"},"origin_pos":12,"tab":["pytorch"],"id":"61e0359c","outputId":"4166c41f-63f6-4095-a3af-3e6965344a12","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664153767509,"user_tz":-480,"elapsed":595,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["1"]},"metadata":{},"execution_count":2}],"source":["torch.cuda.device_count()"],"id":"61e0359c"},{"cell_type":"markdown","metadata":{"origin_pos":14,"id":"ff0380b3"},"source":["Now we define two convenient functions that\n","[**allow us to run code even if the requested GPUs do not exist.**]\n"],"id":"ff0380b3"},{"cell_type":"code","execution_count":3,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:47.378665Z","iopub.status.busy":"2022-07-31T03:18:47.378198Z","iopub.status.idle":"2022-07-31T03:18:47.388311Z","shell.execute_reply":"2022-07-31T03:18:47.387565Z"},"origin_pos":16,"tab":["pytorch"],"id":"cef42bd4","outputId":"c75cc9f3-fd48-4845-c334-e1e989b4e875","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664153914336,"user_tz":-480,"elapsed":4,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["(device(type='cuda', index=0),\n"," device(type='cpu'),\n"," [device(type='cuda', index=0)])"]},"metadata":{},"execution_count":3}],"source":["def try_gpu(i=0):  \n","    \"\"\"Return gpu(i) if it exists, otherwise return cpu()\"\"\"\n","    if torch.cuda.device_count() >= i + 
1:\n","        return torch.device(f'cuda:{i}')\n","    return torch.device('cpu')\n","\n","def try_all_gpus():  \n","    \"\"\"Return all available GPUs, or [cpu(),] if no GPU exists\"\"\"\n","    devices = [torch.device(f'cuda:{i}')\n","             for i in range(torch.cuda.device_count())]\n","    return devices if devices else [torch.device('cpu')]\n","\n","try_gpu(), try_gpu(10), try_all_gpus()"],"id":"cef42bd4"},{"cell_type":"markdown","metadata":{"origin_pos":18,"id":"05bc6a4e"},"source":["## Tensors and GPUs\n","\n","We can [**query the device where the tensor is located.**]\n","By default, tensors are created on the CPU.\n"],"id":"05bc6a4e"},{"cell_type":"code","execution_count":4,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:47.391143Z","iopub.status.busy":"2022-07-31T03:18:47.390873Z","iopub.status.idle":"2022-07-31T03:18:47.396662Z","shell.execute_reply":"2022-07-31T03:18:47.395899Z"},"origin_pos":20,"tab":["pytorch"],"id":"16ea6117","outputId":"38841bf5-ca9a-4fc9-e404-754658604944","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664153937802,"user_tz":-480,"elapsed":561,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["device(type='cpu')"]},"metadata":{},"execution_count":4}],"source":["x = torch.tensor([1, 2, 3])\n","x.device"],"id":"16ea6117"},{"cell_type":"markdown","metadata":{"origin_pos":22,"id":"c7461551"},"source":["It is important to note that whenever we want to operate on multiple terms,\n","they all need to be on the same device.\n","For example, if we sum two tensors,\n","we need to make sure that both tensors live on the same device;\n","otherwise the framework would not know where to store the result, or even where to perform the computation.\n","\n","### 
[**Storage on the GPU**]\n","\n","There are several ways to store a tensor on the GPU.\n","For example, we can specify a storage device when creating a tensor.\n","Next, we create the tensor variable `X` on the first `gpu`.\n","A tensor created on a GPU only consumes the memory of that GPU.\n","We can use the `nvidia-smi` command to view GPU memory usage.\n","In general, we need to make sure that we do not create data that exceeds the GPU memory limit.\n"],"id":"c7461551"},{"cell_type":"code","execution_count":5,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:47.400064Z","iopub.status.busy":"2022-07-31T03:18:47.399547Z","iopub.status.idle":"2022-07-31T03:18:50.838186Z","shell.execute_reply":"2022-07-31T03:18:50.837531Z"},"origin_pos":24,"tab":["pytorch"],"id":"e620d5dc","outputId":"cfc642b9-cd6d-423b-f2d4-17f936bd24c6","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664153967745,"user_tz":-480,"elapsed":4111,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["tensor([[1., 1., 1.],\n","        [1., 1., 1.]], device='cuda:0')"]},"metadata":{},"execution_count":5}],"source":["X = torch.ones(2, 3, device=try_gpu())\n","X"],"id":"e620d5dc"},{"cell_type":"markdown","metadata":{"origin_pos":26,"id":"81dedc6f"},"source":["Assuming that you have at least two GPUs, the following code will (**create a random tensor on the second GPU.**)\n"],"id":"81dedc6f"},{"cell_type":"code","execution_count":8,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:50.841391Z","iopub.status.busy":"2022-07-31T03:18:50.840900Z","iopub.status.idle":"2022-07-31T03:18:53.684969Z","shell.execute_reply":"2022-07-31T03:18:53.684340Z"},"origin_pos":28,"tab":["pytorch"],"id":"b330feee","outputId":"14d898e8-7ef6-4272-9555-79ff28f62322","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664154067177,"user_tz":-480,"elapsed":620,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["tensor([[0.4677, 0.9099, 0.3235],\n","        [0.7283, 0.3766, 0.8041]], device='cuda:0')"]},"metadata":{},"execution_count":8}],"source":["Y = torch.rand(2, 3, 
device=try_gpu(1))\n","Y"],"id":"b330feee"},{"cell_type":"markdown","metadata":{"origin_pos":30,"id":"2da3d1c6"},"source":["### Copying\n","\n","If we [**want to compute `X + Y`, we need to decide where to perform this operation**].\n","For instance, as shown in :numref:`fig_copyto`,\n","we can transfer `X` to the second GPU and perform the operation there.\n","*Do not* simply add `X` and `Y`, since this will result in an exception:\n","the runtime engine would not know what to do, because it cannot find the data on the same device, and it fails.\n","Since `Y` lives on the second GPU, we need to move `X` there\n","before we can perform the addition.\n","\n","![Copy data to perform an operation on the same device](http://d2l.ai/_images/copyto.svg)\n",":label:`fig_copyto`\n"],"id":"2da3d1c6"},{"cell_type":"code","execution_count":9,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:53.688112Z","iopub.status.busy":"2022-07-31T03:18:53.687623Z","iopub.status.idle":"2022-07-31T03:18:53.694122Z","shell.execute_reply":"2022-07-31T03:18:53.693481Z"},"origin_pos":32,"tab":["pytorch"],"id":"7fdafec6","outputId":"855e8937-6d4d-448b-a40e-91645637c711","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664154104840,"user_tz":-480,"elapsed":5,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"stream","name":"stdout","text":["tensor([[1., 1., 1.],\n","        [1., 1., 1.]], device='cuda:0')\n","tensor([[0.4677, 0.9099, 0.3235],\n","        [0.7283, 0.3766, 0.8041]], device='cuda:0')\n"]}],"source":["Z = 
Y.cuda(0)\n","print(X)\n","print(Z)"],"id":"7fdafec6"},{"cell_type":"markdown","metadata":{"origin_pos":34,"id":"5df7cf7c"},"source":["[**Now that the data are on the same GPU (both `Z` and `Y` are), we can add them up.**]\n"],"id":"5df7cf7c"},{"cell_type":"code","execution_count":10,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:53.697013Z","iopub.status.busy":"2022-07-31T03:18:53.696584Z","iopub.status.idle":"2022-07-31T03:18:53.701827Z","shell.execute_reply":"2022-07-31T03:18:53.701203Z"},"origin_pos":35,"tab":["pytorch"],"id":"4c8a5da2","outputId":"43645dec-d60f-468f-f5e3-8033e96d3382","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664154116792,"user_tz":-480,"elapsed":586,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["tensor([[0.9354, 1.8198, 0.6470],\n","        [1.4567, 0.7531, 1.6082]], device='cuda:0')"]},"metadata":{},"execution_count":10}],"source":["Y + Z"],"id":"4c8a5da2"},{"cell_type":"markdown","metadata":{"origin_pos":37,"tab":["pytorch"],"id":"4fdbea89"},"source":["Imagine that the variable `Z` already lives on the second GPU.\n","What happens if we still call `Z.cuda(1)`?\n","It will return `Z` instead of making a copy and allocating new memory.\n"],"id":"4fdbea89"},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:53.704739Z","iopub.status.busy":"2022-07-31T03:18:53.704236Z","iopub.status.idle":"2022-07-31T03:18:53.708695Z","shell.execute_reply":"2022-07-31T03:18:53.708074Z"},"origin_pos":40,"tab":["pytorch"],"id":"a25213ee","outputId":"14b3dddd-0e95-49a5-bac8-16c3a5f2a403"},"outputs":[{"data":{"text/plain":["True"]},"execution_count":10,"metadata":{},"output_type":"execute_result"}],"source":["Z.cuda(1) is Z"],"id":"a25213ee"},{"cell_type":"markdown","metadata":{"origin_pos":42,"id":"c798fbf3"},"source":["### 
Side Notes\n","\n","People use GPUs to do machine learning because a single GPU can run these computations relatively fast.\n","But transferring data between devices (CPUs, GPUs, and other machines) is much slower than computation.\n","It also makes parallelization harder, since we have to wait for data to be sent (or rather received)\n","before we can proceed with more operations.\n","This is why copy operations should be taken with great care.\n","As a rule of thumb, many small operations are much worse than one big operation.\n","Moreover, several operations at a time are much better than many single operations interspersed in the code (unless you know what you are doing).\n","If one device has to wait for another before it can do something else,\n","such operations can block.\n","It is a bit like ordering your coffee in a queue rather than pre-ordering it by phone:\n","when you arrive, the coffee is already there.\n","\n","Last, when we print tensors or convert tensors to the NumPy format,\n","if the data is not in main memory, the framework will copy it to main memory first,\n","which incurs additional transmission overhead.\n","Even worse, it is now subject to the global interpreter lock, which makes everything wait for Python to complete.\n","\n","## [**Neural Networks and GPUs**]\n","\n","Similarly, a neural network model can specify devices.\n","The following code puts the model parameters on the GPU.\n"],"id":"c798fbf3"},{"cell_type":"code","execution_count":11,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:53.711623Z","iopub.status.busy":"2022-07-31T03:18:53.711122Z","iopub.status.idle":"2022-07-31T03:18:53.715257Z","shell.execute_reply":"2022-07-31T03:18:53.714604Z"},"origin_pos":44,"tab":["pytorch"],"id":"3fbe1c61","executionInfo":{"status":"ok","timestamp":1664154269204,"user_tz":-480,"elapsed":1343,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[],"source":["net = nn.Sequential(nn.Linear(3, 1))\n","net = net.to(device=try_gpu())"],"id":"3fbe1c61"},{"cell_type":"markdown","metadata":{"origin_pos":46,"id":"a11ea8fc"},"source":["In the following chapters,\n","we will see many more examples of how to run models on GPUs,\n","simply because they will become somewhat more computationally intensive.\n","\n","When the input is a tensor on the GPU, the model will compute the result on the same GPU.\n"],"id":"a11ea8fc"},{"cell_type":"code","execution_count":12,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:53.718150Z","iopub.status.busy":"2022-07-31T03:18:53.717639Z","iopub.status.idle":"2022-07-31T03:18:53.723782Z","shell.execute_reply":"2022-07-31T03:18:53.723172Z"},"origin_pos":47,"tab":["pytorch"],"id":"19966105","outputId":"03f48886-ba9c-4707-ea85-921f61ec9fba","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664154287116,"user_tz":-480,"elapsed":2846,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["tensor([[-0.5594],\n","   
     [-0.5594]], device='cuda:0', grad_fn=<AddmmBackward0>)"]},"metadata":{},"execution_count":12}],"source":["net(X)"],"id":"19966105"},{"cell_type":"markdown","metadata":{"origin_pos":48,"id":"8bb7b51a"},"source":["Let us (**confirm that the model parameters are stored on the same GPU.**)\n"],"id":"8bb7b51a"},{"cell_type":"code","execution_count":13,"metadata":{"execution":{"iopub.execute_input":"2022-07-31T03:18:53.726602Z","iopub.status.busy":"2022-07-31T03:18:53.726178Z","iopub.status.idle":"2022-07-31T03:18:53.730715Z","shell.execute_reply":"2022-07-31T03:18:53.730080Z"},"origin_pos":50,"tab":["pytorch"],"id":"91c4c6fe","outputId":"55577d9f-b0ca-4b46-c951-4feb4b190613","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1664154290985,"user_tz":-480,"elapsed":1067,"user":{"displayName":"Geeks Z","userId":"18417645384289412694"}}},"outputs":[{"output_type":"execute_result","data":{"text/plain":["device(type='cuda', index=0)"]},"metadata":{},"execution_count":13}],"source":["net[0].weight.data.device"],"id":"91c4c6fe"},{"cell_type":"markdown","metadata":{"origin_pos":52,"id":"f1db38e0"},"source":["In short, as long as all data and parameters are on the same device,\n","we can learn models efficiently.\n","In the following chapters we will see several such examples.\n","\n","## Summary\n","\n","* We can specify devices for storage and calculation, such as the CPU or GPU. By default, data is created in main memory and the CPU is used for calculations.\n","* The deep learning framework requires all input data for a calculation to be on the same device, be it CPU or the same GPU.\n","* You can lose significant performance by moving data without care. A typical mistake is as follows: computing the loss for every minibatch on the GPU and reporting it back to the user on the command line (or logging it in a NumPy `ndarray`) will trigger the global interpreter lock, which stalls all GPUs. It is much better to allocate memory for logging inside the GPU and only move larger logs.\n","\n","## Exercises\n","\n","1. Try a larger computation task, such as the multiplication of large matrices, and see the difference in speed between the CPU and GPU. What about a task with a small amount of computation?\n","1. How should we read and write model parameters on the GPU?\n","1. Measure the time it takes to compute 1000 matrix-matrix multiplications of $100 \\times 100$ matrices and log the Frobenius norm of the output matrix one result at a time, as opposed to keeping a log on the GPU and transferring only the final result.\n","1. 
Measure how much time it takes to perform two matrix-matrix multiplications on two GPUs at the same time, as opposed to performing them in sequence on one GPU. Hint: you should see almost linear scaling.\n"],"id":"f1db38e0"}],"metadata":{"kernelspec":{"display_name":"Python 3","name":"python3"},"language_info":{"name":"python"},"colab":{"provenance":[{"file_id":"https://github.com/d2l-ai/d2l-zh-pytorch-colab/blob/master/chapter_deep-learning-computation/model-construction.ipynb","timestamp":1664003328096}],"toc_visible":true},"accelerator":"GPU"},"nbformat":4,"nbformat_minor":5}