{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9a664fca",
   "metadata": {},
   "source": [
    "## Report1: 交通事故理赔审核预测\n",
    "\n",
    "### 1. 问题描述及实现目标\n",
    "\n",
    "#### 1.1 问题内容\n",
    "\n",
    "  在交通摩擦（事故）发生后，理赔员会前往现场勘察、采集信息，这些信息往往影响着车主是否能够得到保险公司的理赔。训练集数据包括理赔人员在现场对该事故方采集的36条信息，信息已经被编码，以及该事故方最终是否获得理赔。我们的任务是根据这36条信息预测该事故方没有被理赔的概率。\n",
    "  \n",
    "#### 1.2 数据形式\n",
    "\n",
    "所给出的数据分为训练集和测试集，训练集中共有200000条样本，预测集中有80000条样本，变量名称的解释如图1.1所示。\n",
    "\n",
    "![1.1](figure/1.1.png)\n",
    "<center> 图1.1 数据集中的变量名称解释</center>\n",
    "\n",
    "#### 1.3 程序实现目标\n",
    "\n",
    "需要建立模型，完成根据这36条信息预测该事故方没有被理赔的概率，提交的结果需要是每个测试样本未通过审核的概率，也就是Evaluation为1的概率。评价方法为精度-召回曲线下面积(Precision-Recall AUC)，以下简称PR-AUC。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dba8b8f1",
   "metadata": {},
   "source": [
    "### 2. 程序实现所需的基础知识和背景\n",
    "\n",
    "#### 2.1 模型指标的评估方法简介\n",
    "\n",
    "##### 2.1.1 混淆矩阵 Confusion Matrix\n",
    "在二分类模型当中，我们经常将实例分成正类（Positive）和负类（Negative），一般来说我们将关注的类都归为正类，所以根据模型的预测结果和实际结果，我们可以得到以下的四种结果：\n",
    "\n",
    ">**TP**（True positive）：将正类预测为正类 \n",
    "\n",
    ">**TN**（True negative）：将负类预测为负类\n",
    "\n",
    ">**FP**（False positive）：将负类预测为正类 \n",
    "\n",
    ">**FN**（False negative）：将正类预测为负类\n",
    "\n",
    "用矩阵形式表示如下，我们一般将该矩阵称为混淆矩阵（Confusion Matrix）\n",
    "\n",
    "![1.1](figure/1.1.1.png)\n",
    "\n",
    "##### 2.1.2 Precision and Recall\n",
    "\n",
    "在定义混淆矩阵的基础之上，我们还可以继续定义几个检验模型的度量值：\n",
    "\n",
    "**精确度（Precision）**：正类预测值中正确的比例，用来衡量模型的**查准率**\n",
    "\n",
    "$$P=\\frac{TP}{TP+FP}$$\n",
    "\n",
    "**召回率（Recall）**：预测为正类的占总的正类的比例，用来衡量模型的**查全率**\n",
    "\n",
    "$$R=\\frac{TP}{TP+FN}$$\n",
    "\n",
    "对于复杂的任务来说，精确度和召回率一般不会同时达到较高水平，二者会相互影响，对于不同的问题，我们所预期的precision和recall两个指标也是不同的，比如对于地震的预测即便会产生误报但也希望达到很高的召回率，而对于罪犯的定罪则宁愿放过一些真正的罪犯也不想误判一个无辜的人，也即达到很高的精确度。\n",
    "\n",
    "##### 2.1.3 ROC曲线、AUC\n",
    "如果我们仅仅使用Precision-Recall的方法来衡量模型的优劣，会存在一定的局限性，比如当给出的样本中Positive占了绝大多数的时候，即便将全部样本都预测成Positive也可以达到很高的精确度和召回率，但是这样的做法并没有意义。又比如当降低正类判断的阈值时，识别出来的正类势必增多，但是同时也会增加误判率从而降低精度，在模型调试的过程中我们希望可以反映出这一变化趋势，所以就需要引进一些新的评价手段。\n",
    "\n",
    "* **ROC曲线（Receiver Operating Characteristic Curve）**\n",
    "\n",
    "ROC曲线是由FPR和TPR两个指标作为横轴和纵轴画出的一条变化曲线\n",
    "\n",
    "$$\n",
    "FPR=\\frac{FP}{TN+FP}\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ TPR=\\frac{TP}{TP+FN}\n",
    "$$\n",
    "\n",
    "这两个变量分别表示假阳性率和真阳性率（等同于Recall），显然FPR越小越好，TPR越大越好，我们以FPR为X轴、TPR为Y轴就可以画出ROC曲线，如图2.1(a)所示，可以看到，曲线越接近于Y轴，就代表了模型的分类效果越好，如图2.1(b)所示，用目测的方法很难看出究竟哪条曲线和Y轴更加接近，所以引入AUC指标进行评估。\n",
    "\n",
    "![1.2.1.a](figure/1.2.1.a.png)\n",
    "![1.2.1.b](figure/1.2.1.b.png)\n",
    "\n",
    "<center>图1.2.1 (a) ROC曲线示意 (b) 三条ROC曲线示意</center>\n",
    "\n",
    "* **AUC（Area Under Curve）**\n",
    "\n",
    "AUC表示的即为曲线下方和坐标轴所围成的面积，因为ROC曲线的横纵坐标都在[0,1]范围之内，所以AUC的值也应该在0~1，当ROC曲线越靠近Y轴，AUC的值就越靠近1，所以说AUC的值越大，模型的分类效果就会越好，在0.5<AUC<1的区间范围之内，我们认为此时用模型进行评估是优于随机进行猜测的。如果AUC=1，那么就是完美的分类器，不过这种情况在实际应用中一般不会存在。\n",
    "\n",
    "而AUC的计算，可以直接调用sklearn.metrics中的roc_auc_score函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "af4874b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import roc_auc_score"
   ]
  },
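  {
   "cell_type": "markdown",
   "id": "a1f2e3d4",
   "metadata": {},
   "source": [
    "As a quick illustration of the metrics above, the following toy example (labels and scores invented here, not taken from the competition data) computes the confusion-matrix counts, precision, recall, and ROC-AUC with scikit-learn:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f2e3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import confusion_matrix, precision_score, recall_score, roc_auc_score\n",
    "\n",
    "# Toy ground-truth labels and predicted scores (invented for illustration)\n",
    "y_true = [1, 0, 1, 1, 0, 0, 1, 0]\n",
    "y_score = [0.9, 0.4, 0.7, 0.3, 0.2, 0.6, 0.8, 0.1]\n",
    "y_pred = [1 if s >= 0.5 else 0 for s in y_score]  # threshold at 0.5\n",
    "\n",
    "tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()\n",
    "print('TP=%d TN=%d FP=%d FN=%d' % (tp, tn, fp, fn))\n",
    "print('Precision = %f' % precision_score(y_true, y_pred))  # TP / (TP + FP)\n",
    "print('Recall    = %f' % recall_score(y_true, y_pred))     # TP / (TP + FN)\n",
    "print('ROC-AUC   = %f' % roc_auc_score(y_true, y_score))   # computed from the raw scores"
   ]
  },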
  {
   "cell_type": "markdown",
   "id": "fbf10315",
   "metadata": {},
   "source": [
    "### 3. 程序实现的原理\n",
    "\n",
    "#### 3.1 随机森林算法 Random Forest\n",
    "\n",
    "##### 3.1.1 算法主要内容\n",
    "\n",
    "Random Forest算法是1995年由贝尔实验室提出的，并由Leo Breiman和Adele Cutler发展出推论并注册商标。Random Forest是一种高度灵活的机器学习方法，可以进行回归和分类，是集成学习（Ensemble Learning）的一种方法，他的基本单元是决策树。直观地讲，Random Forest就是由N棵树构成的一个森林，当输入一个样本的时候就会产生N个结果，集成所有分类投票的结果就是Random Forest最终的输出，这也是bagging思想的很好的体现。\n",
    "\n",
    "##### 3.1.2 决策树 Decision Tree\n",
    "\n",
    "决策树学习是统计学、数据挖掘和机器学习中使用的一种预测建模方法。它使用决策树作为预测模型，从样本的观测数据（对应决策树的分支）推断出该样本的预测结果（对应决策树的叶节点）。他是一种简单并且广泛使用的分类器，可以高效地对未知数据进行分类，他有以下两个主要的优点：\n",
    "\n",
    "* 模型可读性好，具有描述性，有助于人工分析\n",
    "\n",
    "\n",
    "* 效率高，只需要一次构建、反复使用，每次最大计算次数不超过决策树的深度\n",
    "\n",
    "一般情况下，决策树主要有两种类型：\n",
    "\n",
    "* **分类树** 输出是样本的类标。\n",
    "\n",
    "\n",
    "* **回归树** 输出是一个实数，也称CART决策树，就是随机森林中使用的树\n",
    "\n",
    "##### 3.1.3 Bagging算法\n",
    "\n",
    "Bagging算法的全称为Bootstrap Aggregating（引导聚合算法），他是一种集成学习的算法，最初就是在1994年由Leo Breiman提出的，他可以和其他的分类与回归算法相结合，提高准确率和稳定性，并且还可以降低方差防止过拟合现象的发生。\n",
    "Bagging算法基本步骤就是子啊给定一个大小为n的训练集$D$时，从中均匀、又放回地抽出m个大小为$n^\\prime$的子集$D_i$，作为新的训练集，然后在这m个新的训练集上使用相应的分类、回归算法得到m个相应的模型，再通过取平均值、统计多数得票的方法得到Bagging的最终结果，数学的表示形式如下：\n",
    "\n",
    "假定训练集为 $X=x_1,x_2\\ldots x_n$，最终的目标$\\ Y=y_1,y_2\\ldots y_n$，将bagging的方法重复$B$次，那么就会得到相应的模型 $X_b,Y_{b\\ }\\ \\left(b=1,2\\ldots B\\right)$，在这$B$个模型训练结束之后，对未知样本$x^\\prime$的预测就可以通过对所有的单个回归树预测值求平均来实现\n",
    "\n",
    "$$\n",
    "f=\\frac{1}{B}\\ \\sum_{b=1}^{B}{f_b\\left(x^\\prime\\right)}\n",
    "$$\n",
    "\n",
    "##### 3.1.4 从 Bagging 到 Random Forest\n",
    "\n",
    "上述的Bagging算法是比较原始的方法，但是还会存在一些相关性的问题，比如在b个分类特征中，如果某一个特征预测目标值的能力很强，就会导致这一个特征被这b个模型中的很多个模型所选择，这样一来就会导致这些模型之间相关性的增加，这显然是会影响到每个树相对独立的决策的，所以在Random Forest的算法当中做了改进，在scikit-learn的介绍文档中是这样描述的：\n",
    "\n",
    ">“ Furthermore, when splitting each node during the construction of a tree, the best split is found either from all input features or a random subset of size. ”\n",
    "\n",
    "也就是说，每棵树在生成的时候都是使用了整个学习的样本进行训练，自上而下的划分也都是随机的。而在此基础之上又演变出来了更有的极限树算法（Extremely Random Forest），这种方法不再像之前一样试图去寻找一个全局最优的阈值，而是针对每一个参数特征去选取各自的最优阈值（Threshold），从而得到最优的划分。\n",
    "\n",
    "##### 3.1.5 Sklearn中的 RandomForestClassifier函数\n",
    "\n",
    "输入如下的命令就可以调用sklearn.ensemble中的随机森林分类函数\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "1cd2f897",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import RandomForestClassifier"
   ]
  },
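  {
   "cell_type": "markdown",
   "id": "b2c3d4e5",
   "metadata": {},
   "source": [
    "Before applying it to the competition data, here is a minimal, self-contained sketch of how RandomForestClassifier is typically used (the dataset below is synthetic, generated only for illustration): fit on a feature matrix and labels, then call predict_proba to obtain per-class probabilities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "\n",
    "# Synthetic binary-classification data (for illustration only)\n",
    "X, y = make_classification(n_samples=200, n_features=8, random_state=0)\n",
    "\n",
    "clf = RandomForestClassifier(n_estimators=50, random_state=0)\n",
    "clf.fit(X, y)\n",
    "\n",
    "proba = clf.predict_proba(X)  # shape (n_samples, 2): P(class 0), P(class 1)\n",
    "print(proba[:3, 1])           # predicted probability of the positive class\n",
    "print(clf.feature_importances_.sum())  # feature importances sum to 1"
   ]
  },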
  {
   "cell_type": "markdown",
   "id": "84cf2905",
   "metadata": {},
   "source": [
    "函数中的参数如下"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "e2b86455",
   "metadata": {},
   "outputs": [
    {
     "ename": "SyntaxError",
     "evalue": "invalid syntax (<ipython-input-10-b59af439a524>, line 2)",
     "output_type": "error",
     "traceback": [
      "\u001b[1;36m  File \u001b[1;32m\"<ipython-input-10-b59af439a524>\"\u001b[1;36m, line \u001b[1;32m2\u001b[0m\n\u001b[1;33m    class sklearn.ensemble.RandomForestClassifier\u001b[0m\n\u001b[1;37m                 ^\u001b[0m\n\u001b[1;31mSyntaxError\u001b[0m\u001b[1;31m:\u001b[0m invalid syntax\n"
     ]
    }
   ],
   "source": [
    "#仅供展示用\n",
    "class sklearn.ensemble.RandomForestClassifier\n",
    "(n_estimators=100, *, criterion='gini', max_depth=None, \n",
    " min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, \n",
    " max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, \n",
    " bootstrap=True, oob_score=False, n_jobs=None, random_state=None, \n",
    " verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d7a1c33",
   "metadata": {},
   "source": [
    "主要参数的意义如下：\n",
    "\n",
    "**n_estimatorsint: int, default=100** 森林中决策树的数量\n",
    "\n",
    "**criterion: {“gini”, “entropy”}, default=”gini”** 决策树评价函数，有基尼函数和熵函数\n",
    "\n",
    "**bootstrap: bool, default=True**  false时使用整个数据集来生成每个决策树\n",
    "\n",
    "**random_state: int, RandomState instance or None, default=None**\n",
    "控制在数据集生成时的随机性(if bootstrap=True) \n",
    "\n",
    "在此基础之上我们就可以调用sklearn中的函数直接使用随机森林算法（程序见4.1）\n",
    "\n",
    "#### 3.2 自适应增强算法（AdaBoost）\n",
    "\n",
    "##### 3.2.1 算法主要内容\n",
    "\n",
    "AdaBoost是自适应增强（Adaptive Boosting）的缩写，这种算法由Yoav Freund和Robert Schapire提出，同样它也可以和许多其他类型的学习算法结合以提高性能。他的基本原理就是从训练集首先用初始权重训练出一个弱学习机（就是指比随即猜测稍好一点的模型，比如一些比较小规模的决策树），然后利用这些弱学习机的学习误差来更新权重，不断迭代，最终使得弱学习机的学习数达到制定的目标，然后将这些弱学习机集成起来，就可以得到最终效果较好的强学习机。\n",
    "\n",
    "##### 3.2.2 Sklearn中的AdaBoostClassifier函数\n",
    "\n",
    "同样在scikit-learn中也可以直接用一行命令调用sklearn.ensemble中的函数："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "0ca0ac57",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import AdaBoostClassifier"
   ]
  },
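  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "As with the random forest, a minimal usage sketch on synthetic data (invented for illustration) shows the basic workflow; by default sklearn's AdaBoostClassifier boosts depth-1 decision trees (decision stumps):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3d4e5f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import AdaBoostClassifier\n",
    "\n",
    "# Synthetic binary-classification data (for illustration only)\n",
    "X, y = make_classification(n_samples=200, n_features=8, random_state=0)\n",
    "\n",
    "clf = AdaBoostClassifier(n_estimators=50, random_state=0)\n",
    "clf.fit(X, y)\n",
    "\n",
    "print(clf.score(X, y))       # training accuracy\n",
    "print(len(clf.estimators_))  # number of fitted weak learners (at most n_estimators)"
   ]
  },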
  {
   "cell_type": "markdown",
   "id": "d09b4176",
   "metadata": {},
   "source": [
    "函数中的参数如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b65e3457",
   "metadata": {},
   "outputs": [],
   "source": [
    "class sklearn.ensemble.AdaBoostClassifier\n",
    "(base_estimator=None, *, n_estimators=50,\n",
    "learning_rate=1.0,algorithm='SAMME.R', random_state=None)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9a0efb7",
   "metadata": {},
   "source": [
    "主要参数的意义如下：\n",
    "\n",
    "**n_estimators: int, default=50**\n",
    "boosting算法的最大估计器数量。在完全适合的情况下，学习过程会提前停止。\n",
    "\n",
    "**learning_rate: float, default=1.0**\n",
    "权值更新的学习率，默认值为1.0\n",
    "\n",
    "**algorithm: {‘SAMME’, ‘SAMME.R’}, default=’SAMME.R’**  算法选择\n",
    "\n",
    "**random_state: int, RandomState instance or None, default=None** 随机种子\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c846be1a",
   "metadata": {},
   "source": [
    "### 4. 实验主要程序及运行结果\n",
    "\n",
    "#### 4.1 实验主要程序\n",
    "根据第3节中介绍的算法原理，用python写出如下所示的实验程序以实现实验的目标\n",
    "##### 4.1.1 Random Forest算法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "d8474b2c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Feature importances and predictions of RF\n",
      "[0.00177294 0.00207449 0.00187096 0.00471492 0.00443815 0.0029538\n",
      " 0.00364967 0.00652341 0.00235713 0.00739511 0.00245106 0.00106103\n",
      " 0.0007513  0.00090631 0.00150727 0.0037793  0.00183821 0.00196833\n",
      " 0.00209665 0.00726069 0.00816243 0.00107563 0.00559247 0.00766561\n",
      " 0.00760666 0.00028462 0.00025573 0.18472067 0.25559838 0.21436631\n",
      " 0.0425301  0.00662325 0.00297955 0.03148822 0.03907383 0.13060584]\n",
      "[0 0 0 ... 0 0 0]\n",
      "Feature importances and predictions of EF\n",
      "[0.00177294 0.00207449 0.00187096 0.00471492 0.00443815 0.0029538\n",
      " 0.00364967 0.00652341 0.00235713 0.00739511 0.00245106 0.00106103\n",
      " 0.0007513  0.00090631 0.00150727 0.0037793  0.00183821 0.00196833\n",
      " 0.00209665 0.00726069 0.00816243 0.00107563 0.00559247 0.00766561\n",
      " 0.00760666 0.00028462 0.00025573 0.18472067 0.25559838 0.21436631\n",
      " 0.0425301  0.00662325 0.00297955 0.03148822 0.03907383 0.13060584]\n",
      "[0 0 0 ... 0 0 0]\n",
      "acc_train_RF = 0.931525\n",
      "auc_train_RF = 0.846770\n",
      "acc_train_EF = 0.931525\n",
      "auc_train_EF = 0.844546\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.ensemble import ExtraTreesClassifier\n",
    "\"\"\"\n",
    "此程序使用的是随机森林的方法，调用了sklearn中的RandomForestClassifier4\n",
    "和ExtraTreesClassifier两个分类函数，得到的精度均在93%左右，AUC的值均在84.5左右\n",
    "可以说预测的结果比较理想\n",
    "\"\"\"\n",
    "# read data from the file\n",
    "train = pd.read_csv(\"data/train.csv\")\n",
    "test = pd.read_csv(\"data/test.csv\")\n",
    "submit = pd.read_csv(\"data/sample_submit.csv\")\n",
    "\n",
    "# delete id\n",
    "train.drop('CaseId', axis=1, inplace=True)\n",
    "test.drop('CaseId', axis=1, inplace=True)\n",
    "\n",
    "# extract y from train set\n",
    "y_train=train.pop('Evaluation')\n",
    "\n",
    "# Using RandomForest Classifier and ExtraTreesClassifier\n",
    "# 随机森林和极限树森林算法（从结果上看两种方法结果相差不大）\n",
    "clf_RF=RandomForestClassifier(n_estimators=100,random_state=0)\n",
    "clf_RF.fit(train, y_train)\n",
    "y_pred_RF = clf_RF.predict_proba(test)[:, 1]\n",
    "y_train_pred_RF=clf_RF.predict(train)\n",
    "\n",
    "clf_EF=ExtraTreesClassifier(n_estimators=100,random_state=0)\n",
    "clf_EF.fit(train, y_train)\n",
    "y_pred_EF = clf_EF.predict_proba(test)[:, 1]\n",
    "y_train_pred_EF=clf_EF.predict(train)\n",
    "\n",
    "# output predictive results to csv files\n",
    "submit['Evaluation'] = y_pred_RF\n",
    "submit.to_csv('my_RF_prediction_RandomForest.csv', index=False)\n",
    "submit['Evaluation'] = y_pred_EF\n",
    "submit.to_csv('my_RF_prediction_ExtremelyRandomForest.csv', index=False)\n",
    "\n",
    "# print freature importances\n",
    "print(\"Feature importances and predictions of RF\")\n",
    "print(clf_RF.feature_importances_)\n",
    "print(y_train_pred_RF)\n",
    "print(\"Feature importances and predictions of EF\")\n",
    "print(clf_RF.feature_importances_)\n",
    "print(y_train_pred_EF)\n",
    "\n",
    "# Prediction evaluations\n",
    "# Using accuracy and AUC respectively\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.metrics import roc_auc_score\n",
    "\n",
    "acc_train_RF = accuracy_score(y_train, y_train_pred_RF)\n",
    "print(\"acc_train_RF = %f\" % (acc_train_RF))\n",
    "auc_train_RF=roc_auc_score(y_train, y_train_pred_RF)\n",
    "print(\"auc_train_RF = %f\"%(auc_train_RF))\n",
    "\n",
    "acc_train_EF = accuracy_score(y_train, y_train_pred_EF)\n",
    "print(\"acc_train_EF = %f\" % (acc_train_EF))\n",
    "auc_train_EF=roc_auc_score(y_train, y_train_pred_EF)\n",
    "print(\"auc_train_EF = %f\"%(auc_train_EF))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f958ab09",
   "metadata": {},
   "source": [
    "##### 4.1.2 AdaBoost算法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "d02e8261",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Feature importances and predictions of AdaBoost\n",
      "[0.01 0.01 0.01 0.02 0.02 0.02 0.   0.03 0.01 0.03 0.01 0.02 0.   0.\n",
      " 0.01 0.01 0.01 0.01 0.01 0.01 0.   0.   0.02 0.05 0.02 0.   0.   0.07\n",
      " 0.25 0.07 0.1  0.03 0.   0.02 0.03 0.09]\n",
      "[0 0 0 ... 0 0 0]\n",
      "acc_train = 0.921385\n",
      "auc_train = 0.820942\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "from sklearn.ensemble import AdaBoostClassifier\n",
    "\"\"\"\n",
    "此程序使用的是自适应增强的算法，调用了sklearn中的AdaBoostClassifier\n",
    "得到的精度和AUC相比于随机森林都有小幅的下降，但整体的结果还属于较好的水平\n",
    "\"\"\"\n",
    "# read data from the file\n",
    "train = pd.read_csv(\"data/train.csv\")\n",
    "test = pd.read_csv(\"data/test.csv\")\n",
    "submit = pd.read_csv(\"data/sample_submit.csv\")\n",
    "\n",
    "# delete id\n",
    "train.drop('CaseId', axis=1, inplace=True)\n",
    "test.drop('CaseId', axis=1, inplace=True)\n",
    "\n",
    "# extract y from train set\n",
    "y_train=train.pop('Evaluation')\n",
    "\n",
    "# Using AdaBoostClassifier to classify\n",
    "clf=AdaBoostClassifier(n_estimators=100,random_state=0)\n",
    "clf.fit(train,y_train)\n",
    "y_pred=clf.predict_proba(test)[:,1]\n",
    "y_train_pred=clf.predict(train)\n",
    "\n",
    "# output predictive results to csv files\n",
    "submit['Evaluation']=y_pred\n",
    "submit.to_csv('my_AdaBoost_prediction.csv',index=False)\n",
    "\n",
    "print(\"Feature importances and predictions of AdaBoost\")\n",
    "print(clf.feature_importances_)\n",
    "print(y_train_pred)\n",
    "\n",
    "# Prediction evaluations\n",
    "# Using accuracy and AUC respectively\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.metrics import roc_auc_score\n",
    "\n",
    "acc_train=accuracy_score(y_train,y_train_pred)\n",
    "print(\"acc_train = %f\" % (acc_train))\n",
    "auc_train=roc_auc_score(y_train,y_train_pred)\n",
    "print(\"auc_train = %f\" % (auc_train))"
   ]
  },
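  {
   "cell_type": "markdown",
   "id": "d4e5f6a7",
   "metadata": {},
   "source": [
    "Note that the competition metric stated in Section 1.3 is PR-AUC rather than ROC-AUC. scikit-learn summarizes the precision-recall curve with the average_precision_score function; the toy example below (labels and scores invented for illustration) shows how it is called, and the same call works with y_train and the predicted probabilities from the programs above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import average_precision_score, precision_recall_curve\n",
    "\n",
    "# Toy labels and scores (invented for illustration; for the real task one would\n",
    "# pass y_train and the model's predicted probabilities instead)\n",
    "y_true = [1, 0, 1, 1, 0, 0, 1, 0]\n",
    "y_score = [0.9, 0.4, 0.7, 0.3, 0.2, 0.6, 0.8, 0.1]\n",
    "\n",
    "pr_auc = average_precision_score(y_true, y_score)  # area under the PR curve\n",
    "print('PR-AUC = %f' % pr_auc)\n",
    "\n",
    "precision, recall, thresholds = precision_recall_curve(y_true, y_score)\n",
    "print(precision)\n",
    "print(recall)"
   ]
  },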
  {
   "cell_type": "markdown",
   "id": "df4736be",
   "metadata": {},
   "source": [
    "##### 4.1.3 运行结果分析\n",
    "\n",
    "从两个程序、两种方法得到的最终结果来看，随机森林的算法得到的效果较好，无论是从精度角度评估，还是从AUC角度评估，随机森林的方法得分都高于AdaBoost，但是实际上通过多次改变参数设计可以发现，这两种方法得到的结果相差并不是很大，得到的预测结果都是比较理想的。\n",
    "\n",
    "### 5. 体会与感悟 \n",
    "\n",
    "这个问题相对来说整体的设计并不复杂，重点就是要明确不同的分类算法内部的机理，以及评估不同模型效率具体指标的内在机理，在此之上调用已有的sklearn库函数来实现相关功能。只要注意到这些细节，在理解了算法机理的基础之上，不断地对模型进行调试和优化，最终也比较容易得到一个较好的预测结果。\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
