{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Lab 1: Handwritten Digit Recognition with a Three-Layer Perceptron\n",
    "**Objectives**\n",
    "\n",
    "This lab covers the design principles of neural networks and their training and inference procedures. Using `python`, you will implement the training and inference of a three-layer fully connected neural network that classifies handwritten digits. Specifically:\n",
    "\n",
    "1. Implement a three-layer neural network for handwritten digit classification and build a complete network pipeline, in order to understand the role of each basic module and how the modules connect, laying the groundwork for building more complex networks later.\n",
    "\n",
    "2. Implement the forward and backward passes of the network's basic building blocks in `python`, deepening your understanding of those blocks (fully connected layers, activation functions, loss functions, and so on).\n",
    "\n",
    "3. Implement the gradient descent algorithm used for training in `python`, deepening your understanding of the training process.\n",
    "\n",
    "**Contents**\n",
    "\n",
    "Using `python` + `numpy`, we design a neural network composed of three fully connected layers for handwritten digit classification. The network has two hidden layers and one output layer; the hidden-layer sizes are hyperparameters you set yourself, while the output size equals the number of classes. In this lab they are 128, 64, and 10 respectively.\n",
    "\n",
    "The dataset is `MNIST` (http://yann.lecun.com/exdb/mnist/), written by 250 different people, with 70,000 images in total: 60,000 for training and 10,000 for testing. Each image is 28×28 pixels.\n",
    "\n",
    "**Environment**\n",
    "\n",
    "Device: `CPU`\n",
    "\n",
    "Dependencies: `python == 3.7.5`; `numpy == 1.21.5`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Fully connected layer\n",
    "**Forward pass**:\n",
    "$$\n",
    "Y=XW^T+b\n",
    "$$\n",
    "**Backward pass**:\n",
    "$$\n",
    "\\nabla_WL=X^T\\nabla_YL\\\\\n",
    "\\nabla_bL=\\mathbf{1}\\nabla_YL\\\\\n",
    "\\nabla_XL=\\nabla_YLW\n",
    "$$\n",
    "Taking the first fully connected layer as an example (with a batch of 64 samples for illustration):\n",
    "\n",
    "In the forward pass, the input $X$ has `shape` (64, 784), the weight matrix $W$ has `shape` (128, 784), and the bias $b$ has `shape` (128,), replicated (broadcast) across the batch to (64, 128); the output $Y$ has `shape` (64, 128).\n",
    "\n",
    "In the backward pass, the incoming gradient $\\nabla_YL$ has `shape` (64, 128), the weight gradient $\\nabla_WL$ has `shape` (784, 128), the bias gradient $\\nabla_bL$ has `shape` (128,), and the outgoing gradient $\\nabla_XL$ has `shape` (64, 784).\n",
    "\n",
    "The bias gradient $\\nabla_bL$ is the all-ones row vector $\\mathbf{1}$ times $\\nabla_YL$, i.e. the (1, 64) ones vector multiplied by the (64, 128) gradient gives a (1, 128) result, squeezed to (128,). This is simply a sum over the batch dimension.\n",
    "\n",
    "Note: **numpy.matmul()** returns the matrix product of two arrays. Under numpy's broadcasting rules, if either argument is 1-D it is promoted to a matrix by adding a dimension of length 1 (prepended for the first argument, appended for the second), which is removed again after the multiplication."
   ]
  },
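  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (illustrative shapes only) of the note above: with `np.matmul`, the all-ones 1-D vector in the bias-gradient formula is promoted to a (1, 64) matrix, and the product is simply a column sum of the upstream gradient:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "grad_y = np.random.randn(64, 128)  # upstream gradient of shape (64, 128)\n",
    "ones = np.ones(64)                 # 1-D ones vector\n",
    "\n",
    "# matmul promotes the 1-D argument to (1, 64), multiplies, then drops the extra 1\n",
    "grad_b = np.matmul(ones, grad_y)\n",
    "\n",
    "assert grad_b.shape == (128,)\n",
    "assert np.allclose(grad_b, grad_y.sum(axis=0))\n",
    "```"
   ]
  },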
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FullyConnectLayer(object):\n",
    "    def __init__(self, in_features, out_features, has_bias=True):\n",
    "        # Initialize the weight and bias\n",
    "        self.weight = np.random.normal(loc=0, scale=0.01, size=(out_features, in_features))\n",
    "        self.bias = np.zeros(out_features) if has_bias else None\n",
    "        self.has_bias = has_bias  # whether to use a bias, True by default\n",
    "\n",
    "        self.inputs = None\n",
    "        self.grad_weight = None\n",
    "        self.grad_bias = None\n",
    "\n",
    "    def forward(self, inputs):\n",
    "        # Forward pass: Y = X W^T + b\n",
    "        self.inputs = inputs\n",
    "        outputs = np.dot(inputs, self.weight.T)\n",
    "        if self.has_bias:\n",
    "            outputs = outputs + self.bias  # (out,) bias broadcast across the batch\n",
    "        return outputs\n",
    "\n",
    "    def backward(self, in_grad):\n",
    "        # Backward pass, following the formulas above\n",
    "        self.grad_weight = np.dot(self.inputs.T, in_grad)  # X^T dY, shape (in, out)\n",
    "        self.grad_bias = np.sum(in_grad, axis=0)           # ones-vector product = column sum\n",
    "        out_grad = np.dot(in_grad, self.weight)            # dY W\n",
    "        return out_grad\n",
    "\n",
    "    def update_params(self, lr):\n",
    "        # Gradient descent parameter update\n",
    "        self.weight = self.weight - lr * self.grad_weight.T\n",
    "        if self.has_bias:\n",
    "            self.bias = self.bias - lr * self.grad_bias\n",
    "\n",
    "    def load_params(self, weight, bias):\n",
    "        # Load a saved weight and bias\n",
    "        assert self.weight.shape == weight.shape\n",
    "        self.weight = weight\n",
    "        if self.has_bias:\n",
    "            assert self.bias.shape == bias.shape\n",
    "            self.bias = bias"
   ]
  },
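  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The backward formulas can be sanity-checked against numerical finite differences. A self-contained sketch (plain numpy mirroring the layer, with a made-up scalar loss rather than the class above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x = rng.normal(size=(4, 5))   # tiny batch: 4 samples, 5 input features\n",
    "w = rng.normal(size=(3, 5))   # 3 output features\n",
    "b = rng.normal(size=3)\n",
    "\n",
    "def loss(w_):\n",
    "    return np.sum((x @ w_.T + b) ** 2)  # toy scalar loss L = sum(Y^2)\n",
    "\n",
    "grad_y = 2 * (x @ w.T + b)    # dL/dY for this toy loss\n",
    "grad_w = (x.T @ grad_y).T     # analytic (X^T dY)^T, shape (3, 5)\n",
    "\n",
    "# compare one entry against a finite difference\n",
    "eps = 1e-6\n",
    "w_pert = w.copy()\n",
    "w_pert[1, 2] += eps\n",
    "numeric = (loss(w_pert) - loss(w)) / eps\n",
    "assert abs(numeric - grad_w[1, 2]) < 1e-3\n",
    "```"
   ]
  },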
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Activation layer\n",
    "**`ReLU` forward pass**:\n",
    "$$\n",
    "y=\\max(0,x)\n",
    "$$\n",
    "**`ReLU` backward pass**:\n",
    "$$\n",
    "\\nabla_{x(i)}L=\\begin{cases} \n",
    "\\nabla_{y(i)}L\\qquad x(i)\\geq0\\\\\n",
    "0\\qquad \\qquad x(i)<0\\\\\n",
    "\\end{cases}\n",
    "$$\n",
    "where $\\nabla_{y(i)}L$ is the incoming gradient and $\\nabla_{x(i)}L$ the outgoing gradient; the test $x(i)\\geq 0$ is applied elementwise to the **input**, not to the gradient."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ReluLayer(object):\n",
    "    def __init__(self):\n",
    "        self.inputs = None\n",
    "\n",
    "    def forward(self, inputs):\n",
    "        # Forward pass: y = max(0, x)\n",
    "        self.inputs = inputs\n",
    "        outputs = np.maximum(self.inputs, 0)\n",
    "        return outputs\n",
    "\n",
    "    def backward(self, in_grad):\n",
    "        # Backward pass: pass the gradient through where the input was positive.\n",
    "        # Build the mask without mutating self.inputs in place.\n",
    "        mask = (self.inputs > 0).astype(in_grad.dtype)\n",
    "        out_grad = in_grad * mask\n",
    "        return out_grad"
   ]
  },
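  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick check of the ReLU rule with made-up numbers: the gradient passes through exactly where the stored input was positive:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "x = np.array([[-2.0, 0.0, 3.0]])\n",
    "grad_in = np.array([[10.0, 10.0, 10.0]])\n",
    "\n",
    "out = np.maximum(x, 0)        # forward: [[0., 0., 3.]]\n",
    "grad_out = grad_in * (x > 0)  # backward: mask on the input, not the gradient\n",
    "\n",
    "assert np.allclose(out, [[0.0, 0.0, 3.0]])\n",
    "assert np.allclose(grad_out, [[0.0, 0.0, 10.0]])\n",
    "```\n",
    "\n",
    "At exactly $x=0$ the subgradient is a matter of convention; the mask here uses 0, which differs harmlessly from the $\\geq$ case written in the formula."
   ]
  },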
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Loss function\n",
    "**Cross-entropy forward pass**:\n",
    "$$\n",
    "softmax(i,j) =\\frac{e^{X(i,j)}}{\\sum_j{e^{X(i,j)}}}\\\\\n",
    "logsoftmax(i,j) =\\ln(softmax(i,j))\\\\\n",
    "loss = -\\frac{1}{p}\\sum_{i,j}Y(i,j)\\,logsoftmax(i,j)\n",
    "$$\n",
    "where $X(i,j)$ (64, 10) is the previous layer's output; taking the log of its $softmax$ gives $logsoftmax$ (64, 10);\n",
    "\n",
    "$Y(i,j)$ (64, 10) is the one-hot encoding of the true labels; $p$ is the number of samples in the batch (64 here).\n",
    "\n",
    "**Cross-entropy backward pass**:\n",
    "$$\n",
    "\\nabla_{x}L=\\frac{1}{p}(\\hat{Y}-Y)\n",
    "$$\n",
    "where $\\hat{Y}$ (64, 10) is the $softmax$ output from the forward pass and $Y$ (64, 10) the one-hot labels, the two having the same `shape`; $\\nabla_{x}L$ (64, 10) is the outgoing gradient."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CrossEntropy(object):\n",
    "    def __init__(self, dim=1):\n",
    "        self.softmax_out = None\n",
    "        self.label_onehot = None\n",
    "        self.batch_size = None\n",
    "        self.dim = dim\n",
    "\n",
    "    def _softmax(self, inputs, dim=1):\n",
    "        # Subtract the row max before exponentiating for numerical stability;\n",
    "        # softmax is invariant to this shift.\n",
    "        shifted = inputs - np.max(inputs, axis=dim, keepdims=True)\n",
    "        input_exp = np.exp(shifted)\n",
    "        result = input_exp / np.sum(input_exp, axis=dim, keepdims=True)\n",
    "        return result\n",
    "\n",
    "    def forward(self, inputs, labels):\n",
    "        # Forward pass of the cross-entropy loss\n",
    "        self.softmax_out = self._softmax(inputs, dim=self.dim)\n",
    "        self.batch_size, out_size = self.softmax_out.shape\n",
    "        self.label_onehot = np.eye(out_size)[labels]  # one-hot encode the labels\n",
    "        log_softmax = np.log(self.softmax_out)\n",
    "        outputs = -np.sum(self.label_onehot * log_softmax) / self.batch_size\n",
    "        return outputs\n",
    "\n",
    "    def backward(self, in_grad=None):\n",
    "        # Backward pass: (softmax - onehot) / batch_size.\n",
    "        # in_grad is unused because the loss is the end of the chain.\n",
    "        out_grad = (self.softmax_out - self.label_onehot) / self.batch_size\n",
    "        return out_grad"
   ]
  },
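  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In practice, softmax implementations subtract the per-row maximum before exponentiating: softmax is invariant to that shift, and it keeps `np.exp` from overflowing on large logits. A small sketch of the idea:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "logits = np.array([[1000.0, 1001.0, 1002.0]])  # naive np.exp(1000.0) overflows to inf\n",
    "\n",
    "shifted = logits - logits.max(axis=1, keepdims=True)  # [[-2., -1., 0.]]\n",
    "probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)\n",
    "\n",
    "assert np.isfinite(probs).all()\n",
    "assert np.allclose(probs.sum(axis=1), 1.0)\n",
    "```"
   ]
  },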
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Building the MLP architecture"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MlpMnistModel(object):\n",
    "    def __init__(self, input_size, hidden1, hidden2, out_size):\n",
    "        self.input_size = input_size\n",
    "        self.hidden1 = hidden1\n",
    "        self.hidden2 = hidden2\n",
    "        self.out_size = out_size\n",
    "\n",
    "        # Build the network's components\n",
    "        self.fc1 = FullyConnectLayer(self.input_size, self.hidden1)\n",
    "        self.fc2 = FullyConnectLayer(self.hidden1, self.hidden2)\n",
    "        self.fc3 = FullyConnectLayer(self.hidden2, self.out_size)\n",
    "        self.relu1 = ReluLayer()\n",
    "        self.relu2 = ReluLayer()\n",
    "\n",
    "        # Layers whose parameters need updating\n",
    "        self.update_layer_list = [self.fc1, self.fc2, self.fc3]\n",
    "\n",
    "    def forward(self, x):\n",
    "        # Forward pass: fc1 -> relu1 -> fc2 -> relu2 -> fc3\n",
    "        x = self.fc1.forward(x)\n",
    "        x = self.relu1.forward(x)\n",
    "        x = self.fc2.forward(x)\n",
    "        x = self.relu2.forward(x)\n",
    "        x = self.fc3.forward(x)\n",
    "        return x\n",
    "\n",
    "    def backward(self, dloss):\n",
    "        # Backward pass, in reverse order of the forward pass\n",
    "        dh2 = self.fc3.backward(dloss)\n",
    "        dh2 = self.relu2.backward(dh2)\n",
    "        dh1 = self.fc2.backward(dh2)\n",
    "        dh1 = self.relu1.backward(dh1)\n",
    "        dh1 = self.fc1.backward(dh1)\n",
    "\n",
    "    def step(self, lr):\n",
    "        # Update the parameters of every trainable layer\n",
    "        for layer in self.update_layer_list:\n",
    "            layer.update_params(lr)\n",
    "\n",
    "    def save_model(self, param_dir):\n",
    "        # Save the weights and biases\n",
    "        params = {}\n",
    "        params['w1'], params['b1'] = self.fc1.weight, self.fc1.bias\n",
    "        params['w2'], params['b2'] = self.fc2.weight, self.fc2.bias\n",
    "        params['w3'], params['b3'] = self.fc3.weight, self.fc3.bias\n",
    "        np.save(param_dir, params)\n",
    "\n",
    "    def load_model(self, params):\n",
    "        # Load the weights and biases\n",
    "        self.fc1.load_params(params['w1'], params['b1'])\n",
    "        self.fc2.load_params(params['w2'], params['b2'])\n",
    "        self.fc3.load_params(params['w3'], params['b3'])"
   ]
  },
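  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A shape-level smoke test of the same three-layer architecture, written as a self-contained sketch with plain numpy matrices (not the classes above) so it runs on its own:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x = rng.normal(size=(32, 784))  # one batch of flattened 28x28 images\n",
    "\n",
    "w1 = rng.normal(scale=0.01, size=(128, 784))\n",
    "w2 = rng.normal(scale=0.01, size=(64, 128))\n",
    "w3 = rng.normal(scale=0.01, size=(10, 64))\n",
    "\n",
    "h1 = np.maximum(x @ w1.T, 0)   # fc1 + relu1 -> (32, 128)\n",
    "h2 = np.maximum(h1 @ w2.T, 0)  # fc2 + relu2 -> (32, 64)\n",
    "out = h2 @ w3.T                # fc3         -> (32, 10) class scores\n",
    "\n",
    "assert out.shape == (32, 10)\n",
    "```"
   ]
  },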
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Main program"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train Epoch 0, iter 0, loss: 2.314244\n",
      "Train Epoch 0, iter 100, loss: 0.445818\n",
      "Train Epoch 0, iter 200, loss: 0.135425\n",
      "Train Epoch 0, iter 300, loss: 0.514313\n",
      "Train Epoch 0, iter 400, loss: 0.444857\n",
      "Train Epoch 0, iter 500, loss: 0.181411\n",
      "Train Epoch 0, iter 600, loss: 0.379765\n",
      "Train Epoch 0, iter 700, loss: 0.141388\n",
      "Train Epoch 0, iter 800, loss: 0.238649\n",
      "Train Epoch 0, iter 900, loss: 0.096070\n",
      "Train Epoch 0, iter 1000, loss: 0.163236\n",
      "Train Epoch 0, iter 1100, loss: 0.039669\n",
      "Train Epoch 0, iter 1200, loss: 0.083990\n",
      "Train Epoch 0, iter 1300, loss: 0.220094\n",
      "Train Epoch 0, iter 1400, loss: 0.271700\n",
      "Train Epoch 0, iter 1500, loss: 0.015027\n",
      "Train Epoch 0, iter 1600, loss: 0.136117\n",
      "Train Epoch 0, iter 1700, loss: 0.010086\n",
      "Train Epoch 0, iter 1800, loss: 0.115411\n",
      "Val Epoch 0, iter 0, loss: 0.064812\n",
      "Val Epoch 0, iter 100, loss: 0.129036\n",
      "Val Epoch 0, iter 200, loss: 0.142440\n",
      "Val Epoch 0, iter 300, loss: 0.024741\n",
      "Val Epoch: 0, Loss: 0.176204, Acc: 0.944000\n",
      "Train Epoch 1, iter 0, loss: 0.253199\n",
      "Train Epoch 1, iter 100, loss: 0.292653\n",
      "Train Epoch 1, iter 200, loss: 0.040122\n",
      "Train Epoch 1, iter 300, loss: 0.049599\n",
      "Train Epoch 1, iter 400, loss: 0.027004\n",
      "Train Epoch 1, iter 500, loss: 0.094200\n",
      "Train Epoch 1, iter 600, loss: 0.172076\n",
      "Train Epoch 1, iter 700, loss: 0.185049\n",
      "Train Epoch 1, iter 800, loss: 0.107957\n",
      "Train Epoch 1, iter 900, loss: 0.177564\n",
      "Train Epoch 1, iter 1000, loss: 0.006090\n",
      "Train Epoch 1, iter 1100, loss: 0.042359\n",
      "Train Epoch 1, iter 1200, loss: 0.038127\n",
      "Train Epoch 1, iter 1300, loss: 0.128464\n",
      "Train Epoch 1, iter 1400, loss: 0.027210\n",
      "Train Epoch 1, iter 1500, loss: 0.064802\n",
      "Train Epoch 1, iter 1600, loss: 0.084117\n",
      "Train Epoch 1, iter 1700, loss: 0.170503\n",
      "Train Epoch 1, iter 1800, loss: 0.022453\n",
      "Val Epoch 1, iter 0, loss: 0.017847\n",
      "Val Epoch 1, iter 100, loss: 0.138990\n",
      "Val Epoch 1, iter 200, loss: 0.024009\n",
      "Val Epoch 1, iter 300, loss: 0.008905\n",
      "Val Epoch: 1, Loss: 0.100089, Acc: 0.968700\n",
      "Train Epoch 2, iter 0, loss: 0.041713\n",
      "Train Epoch 2, iter 100, loss: 0.025159\n",
      "Train Epoch 2, iter 200, loss: 0.152387\n",
      "Train Epoch 2, iter 300, loss: 0.195642\n",
      "Train Epoch 2, iter 400, loss: 0.214555\n",
      "Train Epoch 2, iter 500, loss: 0.001358\n",
      "Train Epoch 2, iter 600, loss: 0.067088\n",
      "Train Epoch 2, iter 700, loss: 0.046674\n",
      "Train Epoch 2, iter 800, loss: 0.018486\n",
      "Train Epoch 2, iter 900, loss: 0.112662\n",
      "Train Epoch 2, iter 1000, loss: 0.075830\n",
      "Train Epoch 2, iter 1100, loss: 0.015297\n",
      "Train Epoch 2, iter 1200, loss: 0.077860\n",
      "Train Epoch 2, iter 1300, loss: 0.026735\n",
      "Train Epoch 2, iter 1400, loss: 0.001841\n",
      "Train Epoch 2, iter 1500, loss: 0.109834\n",
      "Train Epoch 2, iter 1600, loss: 0.008230\n",
      "Train Epoch 2, iter 1700, loss: 0.026454\n",
      "Train Epoch 2, iter 1800, loss: 0.065553\n",
      "Val Epoch 2, iter 0, loss: 0.004484\n",
      "Val Epoch 2, iter 100, loss: 0.080202\n",
      "Val Epoch 2, iter 200, loss: 0.041934\n",
      "Val Epoch 2, iter 300, loss: 0.014702\n",
      "Val Epoch: 2, Loss: 0.109136, Acc: 0.965700\n",
      "Train Epoch 3, iter 0, loss: 0.009645\n",
      "Train Epoch 3, iter 100, loss: 0.014086\n",
      "Train Epoch 3, iter 200, loss: 0.149974\n",
      "Train Epoch 3, iter 300, loss: 0.117464\n",
      "Train Epoch 3, iter 400, loss: 0.015313\n",
      "Train Epoch 3, iter 500, loss: 0.041434\n",
      "Train Epoch 3, iter 600, loss: 0.011246\n",
      "Train Epoch 3, iter 700, loss: 0.141698\n",
      "Train Epoch 3, iter 800, loss: 0.160388\n",
      "Train Epoch 3, iter 900, loss: 0.052170\n",
      "Train Epoch 3, iter 1000, loss: 0.002074\n",
      "Train Epoch 3, iter 1100, loss: 0.247828\n",
      "Train Epoch 3, iter 1200, loss: 0.009622\n",
      "Train Epoch 3, iter 1300, loss: 0.004717\n",
      "Train Epoch 3, iter 1400, loss: 0.033758\n",
      "Train Epoch 3, iter 1500, loss: 0.018507\n",
      "Train Epoch 3, iter 1600, loss: 0.022411\n",
      "Train Epoch 3, iter 1700, loss: 0.055340\n",
      "Train Epoch 3, iter 1800, loss: 0.040307\n",
      "Val Epoch 3, iter 0, loss: 0.005817\n",
      "Val Epoch 3, iter 100, loss: 0.022899\n",
      "Val Epoch 3, iter 200, loss: 0.023250\n",
      "Val Epoch 3, iter 300, loss: 0.001286\n",
      "Val Epoch: 3, Loss: 0.082961, Acc: 0.974500\n",
      "Train Epoch 4, iter 0, loss: 0.031212\n",
      "Train Epoch 4, iter 100, loss: 0.179406\n",
      "Train Epoch 4, iter 200, loss: 0.242891\n",
      "Train Epoch 4, iter 300, loss: 0.000330\n",
      "Train Epoch 4, iter 400, loss: 0.087641\n",
      "Train Epoch 4, iter 500, loss: 0.003320\n",
      "Train Epoch 4, iter 600, loss: 0.010110\n",
      "Train Epoch 4, iter 700, loss: 0.079785\n",
      "Train Epoch 4, iter 800, loss: 0.146859\n",
      "Train Epoch 4, iter 900, loss: 0.027680\n",
      "Train Epoch 4, iter 1000, loss: 0.052627\n",
      "Train Epoch 4, iter 1100, loss: 0.025902\n",
      "Train Epoch 4, iter 1200, loss: 0.008461\n",
      "Train Epoch 4, iter 1300, loss: 0.009691\n",
      "Train Epoch 4, iter 1400, loss: 0.005374\n",
      "Train Epoch 4, iter 1500, loss: 0.018217\n",
      "Train Epoch 4, iter 1600, loss: 0.019230\n",
      "Train Epoch 4, iter 1700, loss: 0.030483\n",
      "Train Epoch 4, iter 1800, loss: 0.072367\n",
      "Val Epoch 4, iter 0, loss: 0.021142\n",
      "Val Epoch 4, iter 100, loss: 0.028326\n",
      "Val Epoch 4, iter 200, loss: 0.211722\n",
      "Val Epoch 4, iter 300, loss: 0.083515\n",
      "Val Epoch: 4, Loss: 0.108051, Acc: 0.970400\n"
     ]
    }
   ],
   "source": [
    "from dataloader import DataLoader\n",
    "\n",
    "mnist_npy_dir = './mnist'  # dataset directory\n",
    "epochs = 5  # number of training epochs\n",
    "batch_size = 32  # batch size\n",
    "lr = 0.01  # learning rate\n",
    "print_freq = 100  # logging frequency (iterations)\n",
    "train_data_loader = DataLoader(mnist_npy_dir, batch_size=batch_size, mode='train')\n",
    "val_data_loader = DataLoader(mnist_npy_dir, batch_size=batch_size, mode='val')\n",
    "\n",
    "model = MlpMnistModel(input_size=784, hidden1=128, hidden2=64, out_size=10)  # build the model\n",
    "criterion = CrossEntropy()  # build the loss function\n",
    "\n",
    "os.makedirs('ckpts', exist_ok=True)  # make sure the checkpoint directory exists\n",
    "best_loss = float('inf')\n",
    "for idx_epoch in range(epochs):\n",
    "    # Training\n",
    "    train_data_loader.shuffle_data()  # reshuffle the data at the start of each epoch\n",
    "    for id_1 in range(train_data_loader.batch_nums):\n",
    "        train_data, train_labels = train_data_loader.get_data(id_1)  # fetch a training batch\n",
    "        output = model.forward(train_data)  # forward pass\n",
    "        loss = criterion.forward(output, train_labels)  # compute the loss\n",
    "        dloss = criterion.backward(loss)  # gradient of the loss\n",
    "        model.backward(dloss)  # backward pass\n",
    "        model.step(lr)  # parameter update\n",
    "\n",
    "        if id_1 % print_freq == 0:\n",
    "            print('Train Epoch %d, iter %d, loss: %.6f' % (idx_epoch, id_1, loss))\n",
    "    # Validation\n",
    "    mean_val_loss = []\n",
    "    pred_results = np.zeros([val_data_loader.input_data.shape[0]])  # holds the predictions\n",
    "    for id_2 in range(val_data_loader.batch_nums):\n",
    "        val_data, val_labels = val_data_loader.get_data(id_2)  # fetch a validation batch\n",
    "        prob = model.forward(val_data)  # forward pass (inference)\n",
    "        val_loss = criterion.forward(prob, val_labels)  # compute the loss\n",
    "        mean_val_loss.append(val_loss)\n",
    "        pred_labels = np.argmax(prob, axis=1)  # predicted classes\n",
    "        pred_results[id_2 * val_labels.shape[0]:(id_2 + 1) * val_labels.shape[0]] = pred_labels  # store them\n",
    "\n",
    "        if id_2 % print_freq == 0:\n",
    "            print('Val Epoch %d, iter %d, loss: %.6f' % (idx_epoch, id_2, val_loss))\n",
    "\n",
    "    accuracy = np.mean(pred_results == val_data_loader.input_label)  # accuracy over the validation set\n",
    "    mean_val_loss = np.array(mean_val_loss).mean()  # mean validation loss\n",
    "    print('Val Epoch: %d, Loss: %.6f, Acc: %.6f' % (idx_epoch, mean_val_loss, accuracy))\n",
    "\n",
    "    # Keep the best model so far\n",
    "    if mean_val_loss <= best_loss:\n",
    "        best_loss = mean_val_loss\n",
    "        model.save_model(os.path.join('ckpts', 'epoch_%d_loss_%.6f.npy' % (idx_epoch, mean_val_loss)))"
   ]
  },
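  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`save_model` stores the parameter dict with `np.save`, which wraps the dict in a 0-d object array; loading it back for `load_model` therefore needs `allow_pickle=True` plus `.item()`. A minimal round-trip sketch with dummy arrays (the temp-file path is made up for illustration):\n",
    "\n",
    "```python\n",
    "import os\n",
    "import tempfile\n",
    "import numpy as np\n",
    "\n",
    "params = {'w1': np.ones((128, 784)), 'b1': np.zeros(128)}  # dummy stand-ins\n",
    "\n",
    "path = os.path.join(tempfile.mkdtemp(), 'ckpt.npy')\n",
    "np.save(path, params)  # the dict is pickled into a 0-d object array\n",
    "\n",
    "loaded = np.load(path, allow_pickle=True).item()  # unwrap back into a dict\n",
    "assert np.array_equal(loaded['w1'], params['w1'])\n",
    "assert np.array_equal(loaded['b1'], params['b1'])\n",
    "```"
   ]
  },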
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
