{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "caf33317",
   "metadata": {},
   "source": [
    "Procedure:  \n",
    "1. Define a neural network with some learnable parameters (weights)  \n",
    "2. Iterate over a dataset of inputs  \n",
    "3. Compute the loss (how far the output is from being correct)  \n",
    "4. Propagate gradients back into the network's parameters  \n",
    "5. Update the weights, typically with a simple rule: weight = weight - learning_rate * gradient"
   ]
  },
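  {
   "cell_type": "markdown",
   "id": "a1f20c3e",
   "metadata": {},
   "source": [
    "These five steps can be sketched end to end on a toy one-layer model (a hedged sketch: `toy_net`, `toy_opt`, and `toy_loss_fn` are illustrative names, not part of this notebook):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e41d5f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "toy_net = nn.Linear(4, 2)                      # 1. network with learnable weights\n",
    "toy_opt = torch.optim.SGD(toy_net.parameters(), lr=0.01)\n",
    "toy_loss_fn = nn.MSELoss()\n",
    "\n",
    "for x, y in [(torch.randn(1, 4), torch.randn(1, 2)) for _ in range(3)]:  # 2. iterate over inputs\n",
    "    toy_opt.zero_grad()\n",
    "    loss = toy_loss_fn(toy_net(x), y)          # 3. compute the loss\n",
    "    loss.backward()                            # 4. backpropagate gradients\n",
    "    toy_opt.step()                             # 5. weight = weight - lr * gradient"
   ]
  },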
  {
   "cell_type": "markdown",
   "id": "b311b71c",
   "metadata": {},
   "source": [
    "### Define the network  \n",
    "torch.Tensor: a multi-dimensional array that supports autograd operations such as backward().\n",
    "nn.Module: the neural network module; a convenient way of encapsulating parameters, with helpers for moving them to the GPU, exporting, loading, etc.  \n",
    "nn.Parameter: a kind of Tensor that is automatically registered as a parameter when assigned as an attribute of a Module.  \n",
    "autograd.Function: implements the forward and backward definitions of an autograd operation. Every Tensor operation creates at least one Function node that connects to the functions that created the Tensor and encodes its history.  "
   ]
  },
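  {
   "cell_type": "markdown",
   "id": "c3d52e6a",
   "metadata": {},
   "source": [
    "The Function nodes described above can be observed directly through `grad_fn` (a minimal sketch):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e63f7b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Every operation on a tensor that requires grad creates a Function node;\n",
    "# grad_fn points at the node that produced the tensor and encodes its history.\n",
    "a = torch.ones(2, 2, requires_grad=True)\n",
    "b = a * 3\n",
    "c = b.mean()\n",
    "print(b.grad_fn)  # a MulBackward0 node\n",
    "print(c.grad_fn)  # a MeanBackward0 node\n",
    "print(a.grad_fn)  # None: leaf tensors created by the user have no grad_fn"
   ]
  },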
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "ada93dfc",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Net(\n",
      "  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))\n",
      "  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))\n",
      "  (fc1): Linear(in_features=400, out_features=120, bias=True)\n",
      "  (fc2): Linear(in_features=120, out_features=84, bias=True)\n",
      "  (fc3): Linear(in_features=84, out_features=10, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "\n",
    "class Net(nn.Module):\n",
    "\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        # torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)\n",
    "        # First conv layer: 1 input channel, 6 output channels, 5x5 kernel\n",
    "        self.conv1 = nn.Conv2d(1, 6, 5)\n",
    "        # Second conv layer: 6 input channels, 16 output channels, 5x5 kernel\n",
    "        self.conv2 = nn.Conv2d(6, 16, 5)\n",
    "        # an affine operation: y = Wx + b\n",
    "        # Fully connected layers\n",
    "        # torch.nn.Linear(in_features, out_features, bias=True)\n",
    "        # fc1: 400 (= 16 * 5 * 5) inputs, 120 outputs\n",
    "        self.fc1 = nn.Linear(16 * 5 * 5, 120)\n",
    "        # fc2: 120 inputs, 84 outputs\n",
    "        self.fc2 = nn.Linear(120, 84)\n",
    "        # fc3: 84 inputs, 10 outputs\n",
    "        self.fc3 = nn.Linear(84, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        # torch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\n",
    "        # torch.nn.functional.relu(input, inplace=False)\n",
    "        # 2x2 max pooling halves the spatial dimensions; ReLU is the activation\n",
    "        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))\n",
    "        x = F.max_pool2d(F.relu(self.conv2(x)), 2)\n",
    "        x = torch.flatten(x, 1) # flatten all dimensions except the batch dimension\n",
    "        x = F.relu(self.fc1(x))\n",
    "        x = F.relu(self.fc2(x))\n",
    "        x = self.fc3(x)\n",
    "        return x\n",
    "\n",
    "\n",
    "net = Net()\n",
    "print(net)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5f5ce8d7",
   "metadata": {},
   "source": [
    "net.parameters() returns the model's learnable parameters.  \n",
    "class torch.nn.Parameter()  \n",
    "A kind of Tensor, typically used for module parameters.  \n",
    "Parameter is a subclass of Tensor (of Variable in older PyTorch versions). It behaves specially when used with Modules: when a Parameter is assigned as a Module attribute, it is automatically added to the Module's **parameter list** (i.e. it appears in the parameters() iterator). Assigning a plain Tensor to a Module attribute has no such effect. The reason for this design is that we sometimes need to cache temporary state, such as the last hidden state of an RNN inside a model; without the Parameter class, those temporary tensors would also be registered as model parameters.  \n",
    "Another difference is that a Parameter has requires_grad=True by default, whereas a plain Tensor defaults to requires_grad=False."
   ]
  },
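  {
   "cell_type": "markdown",
   "id": "e5f74a8c",
   "metadata": {},
   "source": [
    "The registration difference can be checked directly (a minimal sketch; `Demo` is a hypothetical module, not part of this notebook):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6a85b9d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class Demo(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        # a Parameter assigned as an attribute is registered automatically\n",
    "        self.w = nn.Parameter(torch.randn(3))\n",
    "        # a plain tensor assigned the same way is NOT registered\n",
    "        self.state = torch.randn(3)\n",
    "\n",
    "demo = Demo()\n",
    "print([name for name, _ in demo.named_parameters()])  # ['w']\n",
    "print(demo.w.requires_grad)      # True: Parameter defaults to requires_grad=True\n",
    "print(demo.state.requires_grad)  # False: plain tensors default to requires_grad=False"
   ]
  },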
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "0356aca0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "10\n",
      "torch.Size([6, 1, 5, 5])\n"
     ]
    }
   ],
   "source": [
    "params = list(net.parameters())\n",
    "print(len(params))\n",
    "print(params[0].size())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0daae46",
   "metadata": {},
   "source": [
    "Try a random 32×32 input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "08e7c255",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 0.0927,  0.0143, -0.0838, -0.0139,  0.0998,  0.1315,  0.1057,  0.1590,\n",
      "          0.0096,  0.0377]], grad_fn=<AddmmBackward0>)\n"
     ]
    }
   ],
   "source": [
    "# torch.randn(*sizes, out=None) → Tensor\n",
    "input = torch.randn(1, 1, 32, 32)\n",
    "out = net(input)\n",
    "print(out)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "c9196967",
   "metadata": {},
   "outputs": [],
   "source": [
    "# zero_grad(): sets the gradients of all model parameters in this module to zero.\n",
    "net.zero_grad()\n",
    "# Compute gradients. When backward() is called, the whole graph is differentiated,\n",
    "# and every tensor in the graph with requires_grad=True accumulates gradients into its .grad attribute.\n",
    "out.backward(torch.randn(1, 10))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "25f481f7",
   "metadata": {},
   "source": [
    "**Note**: torch.nn only supports mini-batches. If you have a single sample, add a fake batch dimension with input.unsqueeze(0)."
   ]
  },
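  {
   "cell_type": "markdown",
   "id": "a7b96c1e",
   "metadata": {},
   "source": [
    "For example, a single 32×32 image gains the fake batch dimension like this (a minimal sketch):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8c07d2f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "single = torch.randn(1, 32, 32)   # one sample: channels x height x width\n",
    "batched = single.unsqueeze(0)     # add a fake batch dimension at position 0\n",
    "print(single.shape)               # torch.Size([1, 32, 32])\n",
    "print(batched.shape)              # torch.Size([1, 1, 32, 32])"
   ]
  },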
  {
   "cell_type": "markdown",
   "id": "1fb0bfab",
   "metadata": {},
   "source": [
    "Next we add a loss function. The nn package provides several loss functions. A simple one is nn.MSELoss, which computes the mean squared error between the output and the target.  \n",
    "class torch.nn.MSELoss(reduction='mean')  \n",
    "x and y can be of any shape, each containing n elements. The squared differences of the n element pairs are summed, and the result is divided by n. If the instance is constructed with reduction='sum' (replacing the deprecated size_average=False), the sum of squares is not divided by n."
   ]
  },
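  {
   "cell_type": "markdown",
   "id": "c9d18e3a",
   "metadata": {},
   "source": [
    "The reduction behaviour can be verified on a tiny hand-computed example (a sketch; the values are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0e29f4b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "pred = torch.tensor([1.0, 2.0, 3.0])\n",
    "true = torch.tensor([1.0, 0.0, 0.0])\n",
    "# squared differences: 0, 4, 9\n",
    "print(nn.MSELoss(reduction='sum')(pred, true))   # tensor(13.)\n",
    "print(nn.MSELoss(reduction='mean')(pred, true))  # 13 / 3, i.e. tensor(4.3333)"
   ]
  },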
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "645f4c14",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(1.5999, grad_fn=<MseLossBackward0>)\n"
     ]
    }
   ],
   "source": [
    "output = net(input)\n",
    "target = torch.randn(10)\n",
    "# view(*shape) -> Tensor: returns a new tensor with the same data but a different shape. It must contain the same number of elements as the original, and the original must be contiguous() to be viewed.\n",
    "# Reshape target to the same shape as output; equivalently target.view(output.size())\n",
    "target = target.view(1, -1)\n",
    "criterion = nn.MSELoss()\n",
    "\n",
    "loss = criterion(output, target)\n",
    "print(loss)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "a6e24b7b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<MseLossBackward0 object at 0x0000026A9CF09040>\n",
      "<AddmmBackward0 object at 0x0000026A9CF09A30>\n",
      "<AccumulateGrad object at 0x0000026A9CF09040>\n"
     ]
    }
   ],
   "source": [
    "print(loss.grad_fn)  # MSELoss\n",
    "print(loss.grad_fn.next_functions[0][0])  # Linear (Addmm)\n",
    "print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # AccumulateGrad (the bias leaf)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "67ded56d",
   "metadata": {},
   "source": [
    "### 反向传播"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "99fadc87",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "conv1.bias.grad before backward\n",
      "tensor([0., 0., 0., 0., 0., 0.])\n",
      "conv1.bias.grad after backward\n",
      "tensor([ 0.0069,  0.0256,  0.0015, -0.0068,  0.0150,  0.0049])\n"
     ]
    }
   ],
   "source": [
    "# Clear the gradients\n",
    "net.zero_grad()\n",
    "\n",
    "print('conv1.bias.grad before backward')\n",
    "print(net.conv1.bias.grad)\n",
    "# To backpropagate the error, all we have to do is call loss.backward()\n",
    "loss.backward()\n",
    "\n",
    "print('conv1.bias.grad after backward')\n",
    "print(net.conv1.bias.grad)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6f51193",
   "metadata": {},
   "source": [
    "### Update the weights\n",
    "The simplest update rule used in practice is stochastic gradient descent (SGD):  \n",
    "weight = weight - learning_rate * gradient"
   ]
  },
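  {
   "cell_type": "markdown",
   "id": "e1f30a5c",
   "metadata": {},
   "source": [
    "Before reaching for torch.optim, the rule can be applied by hand (a sketch on a hypothetical toy model, not the `net` above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2a41b6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# Manual form of weight = weight - learning_rate * gradient on a toy model.\n",
    "model = nn.Linear(3, 1)\n",
    "before = model.weight.detach().clone()\n",
    "model(torch.randn(1, 3)).sum().backward()   # populate .grad on the parameters\n",
    "\n",
    "learning_rate = 0.01\n",
    "with torch.no_grad():                       # parameter updates must not be tracked\n",
    "    for f in model.parameters():\n",
    "        f.sub_(f.grad * learning_rate)      # in-place: f <- f - lr * f.grad"
   ]
  },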
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "6143cf9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.optim as optim\n",
    "\n",
    "# To use torch.optim, first construct an optimizer object, which holds the current parameter state and updates the parameters based on the computed gradients.\n",
    "# To construct an Optimizer, give it an iterable of the parameters to optimize, then set options such as the learning rate, weight decay, etc.\n",
    "optimizer = optim.SGD(net.parameters(), lr=0.01)\n",
    "\n",
    "# zero_grad() clears the gradients of all optimized parameters.\n",
    "optimizer.zero_grad()\n",
    "output = net(input)\n",
    "loss = criterion(output, target)\n",
    "loss.backward()\n",
    "\n",
    "# Every optimizer implements a step() method, which updates all the parameters.\n",
    "# Call step() once the gradients have been computed, e.g. by backward().\n",
    "optimizer.step()\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
