{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
     "## Computing Gradients"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "---"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "#### Introduction"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "This experiment first explains what a gradient is and how to compute one, then introduces the relevant PyTorch functions, covering how to mark tensors for gradient tracking, compute gradients, clear gradients, and disable gradient computation."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "#### Key Points"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "- Tensor attributes\n",
     "- Computation graphs\n",
     "- Gradient computation"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "---"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "### Gradient Computation"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "#### Computing the Gradient of a Tensor"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "For a function of one variable, the gradient at a point is simply the derivative at that point. For a multivariable function, the gradient at a point is the vector formed by the partial derivatives with respect to each variable. For example, the gradient of $f(x,y,z)$ is $(\\frac{\\partial f}{\\partial x},\\frac{\\partial f}{\\partial y},\\frac{\\partial f}{\\partial z})$."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "The gradient points in the direction in which the function value increases fastest."
   ],
   "metadata": {}
  },
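  {
   "cell_type": "markdown",
   "source": [
    "To make the definition concrete, here is a small sketch of our own (the function and the point are illustrative, not part of the original experiment): we compute the gradient of $f(x,y,z)=x^2+3y+xz$ at $(1,2,3)$ with autograd and check it against the hand-computed partials $(2x+z,\\ 3,\\ x)=(5,3,1)$."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import torch\r\n",
    "# f(x, y, z) = x^2 + 3y + xz, so the gradient is (2x + z, 3, x)\r\n",
    "x = torch.tensor(1.0, requires_grad=True)\r\n",
    "y = torch.tensor(2.0, requires_grad=True)\r\n",
    "z = torch.tensor(3.0, requires_grad=True)\r\n",
    "f = x**2 + 3*y + x*z\r\n",
    "f.backward()\r\n",
    "print(x.grad, y.grad, z.grad)  # expect tensor(5.) tensor(3.) tensor(1.)"
   ],
   "outputs": [],
   "metadata": {}
  },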
  {
   "cell_type": "markdown",
   "source": [
     "We can use `torch.autograd.backward()` (usually invoked through the `Tensor.backward()` method) to compute gradients automatically; it takes the partial derivatives of the output with respect to the relevant tensors. To tell PyTorch which tensors need partial derivatives and which do not, we pass `requires_grad=True` when defining a tensor, indicating that partial derivatives should be computed with respect to it."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "source": [
    "import torch\r\n",
    "x = torch.randn(1, requires_grad=True)\r\n",
    "y = torch.randn(1)\r\n",
    "z = torch.randn(1)\r\n",
    "f1 = 2*x+y\r\n",
    "f2 = y+z\r\n",
     "# check whether each tensor has a gradient function attached\r\n",
    "print(f1.grad_fn)\r\n",
    "print(f2.grad_fn)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<AddBackward0 object at 0x00000290ECA736A0>\n",
      "None\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "source": [
    "import torch\r\n",
    "x = torch.tensor(1.0,requires_grad=True)\r\n",
    "y = torch.tensor(1.0)\r\n",
    "z1 = 2*x\r\n",
    "z2 = 2*y\r\n",
    "print(z1.grad_fn)\r\n",
    "print(z2.grad_fn)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<MulBackward0 object at 0x00000290EC9570D0>\n",
      "None\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "From the result we can see that x was defined as a tensor that requires gradients, so the tensor f1 computed from it is differentiable, which we can check through its `grad_fn` attribute."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "Next, let's use `f1.backward()` to compute the gradient of f1 (i.e., its partial derivatives with respect to all tracked variables), and then use `x.grad` to display the value of $\\frac{\\partial f_1}{\\partial x}$."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "source": [
    "f1.backward()\r\n",
    "print(x.grad)  # df1/dx"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor([2.])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "source": [
     "# f1.backward()  # would raise an error: the computation graph is freed\r\n",
     "# after the first backward pass, so we cannot backpropagate a second time\r\n",
    "print(x.grad)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor([2.])\n"
     ]
    }
   ],
   "metadata": {}
  },
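  {
   "cell_type": "markdown",
   "source": [
    "The error above occurs because PyTorch frees the computation graph after the first backward pass. As a small sketch of our own (not part of the original experiment), passing `retain_graph=True` keeps the graph alive so `backward()` can be called again; note that the two passes then accumulate in `x.grad`:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import torch\r\n",
    "x = torch.tensor(1.0, requires_grad=True)\r\n",
    "f = 2 * x\r\n",
    "f.backward(retain_graph=True)  # keep the graph for a second pass\r\n",
    "f.backward()                   # works because the graph was retained\r\n",
    "print(x.grad)  # expect tensor(4.): the gradient 2 accumulated twice"
   ],
   "outputs": [],
   "metadata": {}
  },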
  {
   "cell_type": "markdown",
   "source": [
     "Beyond the simple single-variable case above, we can use the same approach to compute the partial derivatives of a composite function:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "source": [
     "x = torch.randn(3, requires_grad=True)  # x holds three variables x1, x2, x3\r\n",
    "y = x + 2\r\n",
    "z = y * y * 3\r\n",
    "z = z.mean()\r\n",
    "print(z)\r\n",
    "print(z.grad_fn)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor(13.9170, grad_fn=<MeanBackward0>)\n",
      "<MeanBackward0 object at 0x00000290EC957A60>\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "source": [
    "x = torch.tensor([1,2],requires_grad=True,dtype=torch.float32)\r\n",
    "y = x+2\r\n",
    "z = y**2\r\n",
    "z = z.max()\r\n",
    "print(z)\r\n",
    "z.backward()\r\n",
    "print(z.grad_fn)\r\n",
    "print(x.grad)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor(16., grad_fn=<MaxBackward1>)\n",
      "<MaxBackward1 object at 0x00000290EC957A60>\n",
      "tensor([0., 8.])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "In the code above, we defined z as a multivariable composite function of the variables in x:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "$$y = x+2$$"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "$$z = \\frac{1}{n}\\sum_{i=1}^n 3y_i^2$$"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "Let's compute the partial derivatives of $z$ with respect to $x$ by hand. First, expand $z$ (here $n=3$):"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "$$z = \\frac{1}{n}\\sum_{i=1}^n 3y_i^2=\\frac{1}{3}(3y_{1}^2+3y_{2}^2+3y_{3}^2)$$"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "In particular, let's compute the partial derivative of $z$ with respect to $x_{1}$: $\\frac{\\partial z}{\\partial x_{1}} = \\frac{\\partial z}{\\partial y_{1}} \\cdot \\frac{\\partial y_{1}}{\\partial x_{1}}$. The other components are computed in the same way."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "First, compute $\\frac{\\partial z}{\\partial y_{1}}$:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "$$\\frac{\\partial z}{\\partial y_{1}} = \\frac{1}{3}\\cdot 3\\cdot 2y_{1}=2y_{1}$$"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "Next, compute $\\frac{\\partial y_{1}}{\\partial x_{1}}$:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "$$\\frac{\\partial y_{1}}{\\partial x_{1}} = 1$$"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "So finally, $\\frac{\\partial z}{\\partial x_{1}}$ is:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "$$\\frac{\\partial z}{\\partial x_{1}} = \\frac{\\partial z}{\\partial y_{1}} \\cdot \\frac{\\partial y_{1}}{\\partial x_{1}} = 2y_{1}=2(x_{1}+2)$$"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "We can also use `z.backward()` to compute the gradient; the result is stored in the `grad` attribute of each tensor involved. Let's compare the gradient computed by `z.backward()` with the result we derived above."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "source": [
    "z.backward()\r\n",
    "print(x.grad)  # dz/dx\r\n",
    "\r\n",
     "print(2*(x+2))  # compare with the directly computed result"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor([1.3244, 3.3350, 6.5415])\n",
      "tensor([1.3244, 3.3350, 6.5415], grad_fn=<MulBackward0>)\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "source": [
    "print(x.grad==2*(x+2))"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor([True, True, True])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "The result above is the gradient vector of the function $z$, i.e., the partial derivatives of $z$ with respect to $x_1,x_2,x_3$."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "Simply put, `torch.autograd.backward` applies the chain rule to compute the partial derivatives. The function also accepts a gradient-weighting argument (`grad_tensors` in current PyTorch, formerly named `grad_variables`), which effectively applies an elementwise weight to the resulting gradient."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "If we call `k.backward(p)`, then the resulting `x.grad` is:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
    "$$x.grad=p\\cdot \\frac{dk}{dx}$$"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "source": [
    "x = torch.randn(3, requires_grad=True)\r\n",
    "\r\n",
    "k = x * 2\r\n",
    "for _ in range(10):\r\n",
    "    k = k * 2\r\n",
    "\r\n",
    "print(k)\r\n",
    "print(k.shape)\r\n",
    "p = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float32)\r\n",
    "\r\n",
    "k.backward(p)\r\n",
    "print(x.grad)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor([ -248.0860, -4058.3335, -1546.8904], grad_fn=<MulBackward0>)\n",
      "torch.Size([3])\n",
      "tensor([2.0480e+02, 2.0480e+03, 2.0480e-01])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "source": [
     "# Passing a tensor to backward() weights the gradient:\r\n",
     "# the computed gradient is multiplied elementwise by this weight\r\n",
    "a = torch.tensor([1,1,1],requires_grad=True,dtype=torch.float32)\r\n",
    "b = 2*a\r\n",
    "rate = torch.tensor([1,2,3],dtype=torch.float32)\r\n",
    "b.backward(rate)\r\n",
    "print(a.grad)\r\n"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "tensor([2., 4., 6.])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "#### Stopping Gradient Computation"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "If we do not need gradients for certain tensors, we can use any of the following three methods to stop gradient computation:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "- `x.requires_grad_(False)`\n",
     "- `x.detach()`\n",
     "- `with torch.no_grad():`"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "Use `x.requires_grad_(...)` to change the flag in place:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "source": [
    "a = torch.randn(2, 2, requires_grad=True)\r\n",
    "b = ((a * 3) / (a - 1))\r\n",
     "print(b.grad_fn)  # the gradient function exists, so b is differentiable\r\n",
     "a.requires_grad_(False)\r\n",
     "b = ((a * 3) / (a - 1))\r\n",
     "print(b.grad_fn)  # the gradient function no longer exists"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<DivBackward0 object at 0x00000290F0F27D00>\n",
      "None\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "source": [
     "# Method 1: call tensor.requires_grad_(False) to stop gradient tracking\r\n",
     "# (b below keeps its grad_fn because it was created before the flag change)\r\n",
     "a = torch.tensor([1,2,3],requires_grad=True,dtype=torch.float32)\r\n",
    "b = (2*a).max()\r\n",
    "b.backward()\r\n",
    "print(b.grad_fn)\r\n",
    "a.requires_grad_(False)\r\n",
    "print(b.grad_fn)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<MaxBackward1 object at 0x00000290EC951850>\n",
      "<MaxBackward1 object at 0x00000290EC951850>\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "Use `x.detach()` to obtain a new tensor with the same content but with gradient tracking disabled:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "source": [
    "a = torch.randn(2, 2, requires_grad=True)\r\n",
    "b = a.detach()\r\n",
    "print(a.requires_grad)\r\n",
    "print(b.requires_grad)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "True\n",
      "False\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "source": [
     "# Method 2: detach() returns a tensor detached from the graph;\r\n",
     "# the detached tensor does not require gradients\r\n",
     "a = torch.tensor([1,2],dtype=torch.float32,requires_grad=True)\r\n",
     "b = a.detach()\r\n",
     "print(a.requires_grad)\r\n",
     "print(b.requires_grad)\r\n",
     "# a tensor can also detach itself in place\r\n",
     "a.detach_()\r\n",
    "print(a.requires_grad)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "True\n",
      "False\n",
      "False\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "Operations performed inside a `with torch.no_grad()` block produce tensors that do not track gradients:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "source": [
    "a = torch.randn(2, 2, requires_grad=True)\r\n",
    "print((a ** 2).requires_grad)\r\n",
     "with torch.no_grad():  # tensors produced in this scope do not track gradients\r\n",
    "    print((a ** 2).requires_grad)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "True\n",
      "False\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "source": [
    "a = torch.randn([1,3],requires_grad=True)\r\n",
    "print(a.requires_grad)\r\n",
    "with torch.no_grad():\r\n",
     "    # computations under torch.no_grad() do not track gradients\r\n",
    "    print((a**2).requires_grad)"
   ],
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "True\n",
      "False\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "#### Clearing Gradients"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "In PyTorch, when we use `torch.autograd.backward` to compute a tensor's gradient, running it multiple times causes the computed gradients to accumulate, as shown below:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "source": [
    "x = torch.ones(4, requires_grad=True)\r\n",
    "y = (2*x+1).sum()\r\n",
    "z = (2*x).sum()\r\n",
    "y.backward()\r\n",
     "print(\"First backward:\", x.grad)  # dy/dx\r\n",
     "z.backward()\r\n",
     "print(\"Second backward:\", x.grad)  # dy/dx + dz/dx"
    ],
    "outputs": [
     {
      "output_type": "stream",
      "name": "stdout",
      "text": [
       "First backward: tensor([2., 2., 2., 2.])\n",
       "Second backward: tensor([4., 4., 4., 4.])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "From the results above, when we compute the gradients of y and z separately, their partial derivatives with respect to x are both written into `x.grad`, and so they accumulate."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "To avoid this, we usually clear the gradient (i.e., clear `x.grad`) after using it, before computing the gradient of any other tensor."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "We can clear the gradient with `x.grad.zero_()`:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "source": [
    "x = torch.ones(4, requires_grad=True)\r\n",
    "y = (2*x+1).sum()\r\n",
    "z = (2*x).sum()\r\n",
    "y.backward()\r\n",
     "print(\"First backward:\", x.grad)  # dy/dx\r\n",
     "# clear the gradient here to prevent accumulation\r\n",
     "x.grad.zero_()\r\n",
     "z.backward()\r\n",
     "print(\"Second backward:\", x.grad)  # dz/dx"
    ],
    "outputs": [
     {
      "output_type": "stream",
      "name": "stdout",
      "text": [
       "First backward: tensor([2., 2., 2., 2.])\n",
       "Second backward: tensor([2., 2., 2., 2.])\n"
     ]
    }
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "This behavior is very important, especially for the gradient descent algorithm we will study later."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "Because training a model requires computing gradients in a loop, the results would be meaningless if gradients kept accumulating. We therefore use the method above to clear the tensor's partial derivatives between iterations."
   ],
   "metadata": {}
  },
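  {
   "cell_type": "markdown",
   "source": [
    "As a minimal sketch of why this matters (our own illustration, not part of the original experiment), here are a few steps of gradient descent on $f(x)=x^2$, clearing `x.grad` at the end of every iteration so that stale gradients do not leak into the next update:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "source": [
    "import torch\r\n",
    "x = torch.tensor(3.0, requires_grad=True)\r\n",
    "lr = 0.1  # learning rate\r\n",
    "for _ in range(50):\r\n",
    "    f = x ** 2            # objective to minimize\r\n",
    "    f.backward()          # compute df/dx = 2x\r\n",
    "    with torch.no_grad():\r\n",
    "        x -= lr * x.grad  # gradient descent step\r\n",
    "    x.grad.zero_()        # clear the gradient for the next iteration\r\n",
    "print(x)  # x has moved close to 0, the minimizer of x^2"
   ],
   "outputs": [],
   "metadata": {}
  },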
  {
   "cell_type": "markdown",
   "source": [
     "Besides the gradient-clearing method on tensors, optimizers also provide such a function: `zero_grad()`."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "source": [
     "# create the optimizer\r\n",
     "optimizer = torch.optim.SGD([x],lr=0.1)\r\n",
     "# take one optimization step\r\n",
     "optimizer.step()\r\n",
     "# clear the gradients\r\n",
    "optimizer.zero_grad()"
   ],
   "outputs": [],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "We will cover optimizers in detail in a later experiment; for now, it is enough to know that optimizers also need their gradients cleared."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "### Summary"
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "This experiment first explained the meaning of the gradient, then used PyTorch to define tensors that support automatic differentiation, computed their gradients, and finally discussed why clearing gradients is both important and necessary. In the next experiment, we will use these gradient functions to walk through forward and backward propagation in neural networks."
   ],
   "metadata": {}
  },
  {
   "cell_type": "markdown",
   "source": [
     "<hr><div style=\"color: #999; font-size: 12px;\"><i class=\"fa fa-copyright\" aria-hidden=\"true\"> The content of this course is copyrighted by 蓝桥云课; reproduction, downloading, and unlawful distribution are prohibited.</i></div>"
   ],
   "metadata": {}
  }
 ],
 "metadata": {
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3.8.0 64-bit ('pytorch': conda)"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  },
  "interpreter": {
   "hash": "95edf26445b41d81dc60008cc593bb3c243ca80a3a822915e2b6f7013280bc10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}