{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "293f884d",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "#  19\\.  决策树实现与应用  # \n",
    "\n",
    "##  19.1.  介绍  # \n",
    "\n",
    "决策树是机器学习中简单又经典的算法，本次实验将带领大家学习决策树的原理，通过代码详细了解决策分类的特征选择，通过 Python 实现决策树的算法流程，并学习使用 scikit-learn 构建决策树模型，最后使用该模型对真实数据进行分类预测。 \n",
    "\n",
    "##  19.2.  知识点  # \n",
    "\n",
    "  * 决策树算法原理 \n",
    "\n",
    "  * 信息增益 \n",
    "\n",
    "  * 决策树算法实现 \n",
    "\n",
    "  * 学生成绩分类预测 \n",
    "\n",
    "  * 决策树可视化 \n",
    "\n",
    "##  19.3.  什么是决策树  # \n",
    "\n",
    "决策树是一种特殊的树形结构，一般由节点和有向边组成。其中，节点表示特征、属性或者一个类，而有向边包含判断条件。决策树从根节点开始延伸，经过不同的判断条件后，到达不同的子节点。而上层子节点又可以作为父节点被进一步划分为下层子节点。一般情况下，我们从根节点输入数据，经过多次判断后，这些数据就会被分为不同的类别。这就构成了一颗简单的分类决策树。 \n",
    "\n",
    "举一个通俗的例子，假设工作多年仍然单身的小楼和他母亲在给他介绍对象时的一段对话： \n",
    "\n",
    "> 母亲：小楼，你都 28 了还是单身，明天亲戚家要来个姑娘要不要去见见。   \n",
    ">  小楼：多大年纪？   \n",
    ">  母亲：26。   \n",
    ">  小楼：有多高？   \n",
    ">  母亲：165厘米。   \n",
    ">  小楼：长的好看不。   \n",
    ">  母亲：还行，比较朴素。   \n",
    ">  小楼：温柔不？ 母亲：看起来挺温柔的，很有礼貌。 小楼：好，去见见。 \n",
    "\n",
    "作为程序员的小楼的思考逻辑就是典型的决策树分类逻辑，将年龄，身高，长相，是否温柔作为特征，并最后对见或者不见进行决策。其决策逻辑如图所示： \n",
    "\n",
    "[ ![https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6671timestamp1531711818867.png](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6671timestamp1531711818867.png) ](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid6671timestamp1531711818867.png)\n",
    "\n",
    "##  19.4.  决策树算法原理  # \n",
    "\n",
    "其实决策树算法如同上面场景一样，其思想非常容易理解，具体的算法流程为： \n",
    "\n",
    "  1. 数据准备 → 通过数据清洗和数据处理，将数据整理为没有缺省值的向量。 \n",
    "\n",
    "  2. 寻找最佳特征 → 遍历每个特征的每一种划分方式，找到最好的划分特征。 \n",
    "\n",
    "  3. 生成分支 → 划分成两个或多个节点。 \n",
    "\n",
    "  4. 生成决策树 → 对分裂后的节点分别继续执行 2-3 步，直到每个节点只有一种类别。 \n",
    "\n",
    "  5. 决策分类 → 根据训练决策树模型，将预测数据进行分类。 \n",
    "\n",
    "下面我们依照决策树的算法流程，用 Python 来实现决策树构建和分类。首先生成一组数据，数据包含两个类别 man 和 woman，特征分别为：hair（头发长短），voice（声音粗细），height（身高），ear_stud（是否带有耳钉）。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0836364",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "def create_data():\n",
    "    # 生成示例数据\n",
    "    data_value = np.array(\n",
    "        [\n",
    "            [\"long\", \"thick\", 175, \"no\", \"man\"],\n",
    "            [\"short\", \"medium\", 168, \"no\", \"man\"],\n",
    "            [\"short\", \"thin\", 178, \"yes\", \"man\"],\n",
    "            [\"short\", \"thick\", 172, \"no\", \"man\"],\n",
    "            [\"long\", \"medium\", 163, \"no\", \"man\"],\n",
    "            [\"short\", \"thick\", 180, \"no\", \"man\"],\n",
    "            [\"long\", \"thick\", 173, \"yes\", \"man\"],\n",
    "            [\"short\", \"thin\", 174, \"no\", \"man\"],\n",
    "            [\"long\", \"thin\", 164, \"yes\", \"woman\"],\n",
    "            [\"long\", \"medium\", 158, \"yes\", \"woman\"],\n",
    "            [\"long\", \"thick\", 161, \"yes\", \"woman\"],\n",
    "            [\"short\", \"thin\", 166, \"yes\", \"woman\"],\n",
    "            [\"long\", \"thin\", 158, \"no\", \"woman\"],\n",
    "            [\"short\", \"medium\", 163, \"no\", \"woman\"],\n",
    "            [\"long\", \"thick\", 161, \"yes\", \"woman\"],\n",
    "            [\"long\", \"thin\", 164, \"no\", \"woman\"],\n",
    "            [\"short\", \"medium\", 172, \"yes\", \"woman\"],\n",
    "        ]\n",
    "    )\n",
    "    columns = np.array([\"hair\", \"voice\", \"height\", \"ear_stud\", \"labels\"])\n",
    "    data = pd.DataFrame(data_value.reshape(17, 5), columns=columns)\n",
    "    return data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4bfdaf33",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "在创建好数据之后，加载并打印出这些数据 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3467c307",
   "metadata": {},
   "outputs": [],
   "source": [
    "data = create_data()\n",
    "data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "824e72de",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "|  hair  |  voice  |  height  |  ear_stud  |  labels   \n",
    "---|---|---|---|---|---  \n",
    "0  |  long  |  thick  |  175  |  no  |  man   \n",
    "1  |  short  |  medium  |  168  |  no  |  man   \n",
    "2  |  short  |  thin  |  178  |  yes  |  man   \n",
    "3  |  short  |  thick  |  172  |  no  |  man   \n",
    "4  |  long  |  medium  |  163  |  no  |  man   \n",
    "5  |  short  |  thick  |  180  |  no  |  man   \n",
    "6  |  long  |  thick  |  173  |  yes  |  man   \n",
    "7  |  short  |  thin  |  174  |  no  |  man   \n",
    "8  |  long  |  thin  |  164  |  yes  |  woman   \n",
    "9  |  long  |  medium  |  158  |  yes  |  woman   \n",
    "10  |  long  |  thick  |  161  |  yes  |  woman   \n",
    "11  |  short  |  thin  |  166  |  yes  |  woman   \n",
    "12  |  long  |  thin  |  158  |  no  |  woman   \n",
    "13  |  short  |  medium  |  163  |  no  |  woman   \n",
    "14  |  long  |  thick  |  161  |  yes  |  woman   \n",
    "15  |  long  |  thin  |  164  |  no  |  woman   \n",
    "16  |  short  |  medium  |  172  |  yes  |  woman   \n",
    "  \n",
    "在得到数据后，根据算法流程，接下来需要寻找最优的划分特征，随着划分的不断进行，我们尽可能的将划分的分支所包含的样本归于同一类别，即结点的「纯度」越来越高。而常用的特征划分方式为信息增益和增益率。 \n",
    "\n",
    "##  19.5.  信息增益（ID3）  # \n",
    "\n",
    "在介绍信息增益之前，先引入「信息熵」的概念。信息熵是度量样本纯度最常用的一种指标，其公式为： \n",
    "\n",
    "$$ \\operatorname{Ent}(\\operatorname{D})=-\\sum_{k=1}^{\\left |y \\right |}p_{k}\\; log_{2}p_{k} \\tag{1} $$ \n",
    "\n",
    "其中  $\\operatorname{D}$  表示样本集合，  $p_{k}$  表示第  $k$  类样本所占的比例。其中  $\\operatorname{Ent}(\\operatorname{D})$  的值越小，则  $\\operatorname{D}$  的纯度越高。根据以上数据，在计算数据集的信息熵时，  $\\left | y \\right |$  显然只有 man，woman 共 2 种，其中为 man 的概率为  $\\frac{8}{17}$  ， woman 的概率为  $\\frac{9}{17}$  ，则根据公式  $(1)$  得到数据集的纯度为： \n",
    "\n",
    "$$ \\operatorname{Ent}(\\operatorname{data})=-\\sum_{k=1}^{2}p_{k}\\;log_{2}p_{k}=-(\\frac{8}{17}log_{2}\\frac{8}{17}+\\frac{9}{17}log_{2}\\frac{9}{17})=0.9975 $$ \n",
    "\n",
    "接下来，定义计算信息熵的函数。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a7758f8d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def get_Ent(data):\n",
    "    \"\"\"\n",
    "    参数:\n",
    "    data -- 数据集\n",
    "\n",
    "    返回:\n",
    "    Ent -- 信息熵\n",
    "    \"\"\"\n",
    "    # 计算信息熵\n",
    "    num_sample = len(data)  # 样本个数\n",
    "    label_counts = {}  # 初始化标签统计字典\n",
    "    for i in range(num_sample):\n",
    "        each_data = data.iloc[i, :]\n",
    "        current_label = each_data[\"labels\"]  # 得到当前元素的标签（label）\n",
    "\n",
    "        # 如果标签不在当前字典中，添加该类标签并初始化 value=0,否则该类标签 value+1\n",
    "        if current_label not in label_counts.keys():\n",
    "            label_counts[current_label] = 0\n",
    "        label_counts[current_label] += 1\n",
    "\n",
    "    Ent = 0.0  # 初始化信息熵\n",
    "    for key in label_counts:\n",
    "        prob = float(label_counts[key]) / num_sample\n",
    "        Ent -= prob * math.log(prob, 2)  # 应用信息熵公式计算信息熵\n",
    "    return Ent"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "89285622",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "通过计算信息熵函数，计算根节点的信息熵： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "90b3cc34",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "base_ent = get_Ent(data)\n",
    "base_ent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "157f7a16",
   "metadata": {},
   "outputs": [],
   "source": [
    "0.9975025463691153"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8806d29",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "信息增益 就是建立在信息熵的基础上，在离散特征  $x$  有  $M$  个取值，如果用  $x$  对样本  $\\operatorname{D}$  来进行划分，就会产生  $M$  个分支，其中第  $m$  个分支包含了集合  $\\operatorname{D}$  的所有在特征  $x$  上取值为  $m$  的样本，记为  $\\operatorname{D}^{m}$  （例如：根据以上生成数据，如果我们用 hair 进行划分，则会产生 long，short 两个分支，每一个分支中分别包含了整个集合中属于 long 或者 short 的数据）。 \n",
    "\n",
    "考虑到不同分支节点包含样本数不同，给分支赋予权重  $\\frac{\\left | \\operatorname{D}^{m}\\right |}{\\left | \\operatorname{D} \\right |}$  ，使得样本越多的分支节点影响越大，则 信息增益 的公式就可以得到： \n",
    "\n",
    "$$ \\operatorname{Gain}(\\operatorname{D},x)=\\operatorname{Ent}(\\operatorname{D})- \\sum_{m=1}^{M}\\frac{\\left | \\operatorname{D}^{m}\\right |}{\\left | \\operatorname{D} \\right |}\\operatorname{Ent}(\\operatorname{D}^{m}) \\tag{2} $$ \n",
    "\n",
    "一般情况下，信息增益越大，则说明用  $x$  来划分样本集合  $\\operatorname{D}$  的纯度越高。以 hair 为例，其中它有 short 和 long 两个可能取值，则分别用  $\\operatorname{D}^{1} $  （hair = long）和  $\\operatorname{D}^{2} $  （hair = short）来表示。 \n",
    "\n",
    "其中为  $\\operatorname{D}^{1}$  的数据编号为  $\\\\{0，4，6，8，9，10，12，14，15\\\\}$  共 9 个，在这之中为 man 的有  $\\\\{0，4，6\\\\}$  共 3 个占比为  $\\frac{3}{9}$  ，为 woman 的有  $\\\\{8, 9，10，12，14，15\\\\}$  共 6 个占比为  $\\frac{6}{9}$  。 \n",
    "\n",
    "同样  $\\operatorname{D}^{2}$  编号为  $\\\\{1，2，3，5，7，11，13, 16\\\\}$  共 8 个，其中为 man 的有  $\\\\{1，2，3，5，7\\\\}$  共 5 个占比  $\\frac{5}{8}$  ，为 woman 的有  $\\\\{11，13, 16\\\\}$  共 3 个占比  $\\frac{3}{8}$  ，若按照 hair 进行划分，则两个分支点的信息熵为： \n",
    "\n",
    "$$\\operatorname{Ent}(\\operatorname{D}^{1})=-(\\frac{3}{9}\\;log_{2}\\frac{3}{9}+\\frac{6}{9}\\;log_{2}\\frac{6}{9})=0.449$$ \n",
    "\n",
    "$$\\operatorname{Ent}(\\operatorname{D}^{2})=-(\\frac{5}{8}\\;log_{2}\\frac{5}{8}+\\frac{3}{8}\\;log_{2}\\frac{3}{8})=0.486$$ \n",
    "\n",
    "根据信息增益的公式可以计算出 hair 的信息增益为： \n",
    "\n",
    "$$ \\operatorname{Gain}(\\operatorname{D}, \\text {hair})=\\operatorname{Ent}(\\operatorname{D})-\\sum_{m=1}^{2} \\frac{\\left|\\operatorname{D}^{m}\\right|}{|\\operatorname{D}|} \\operatorname{Ent}\\left(\\operatorname{D}^{m}\\right)=0.9975-\\left(\\frac{9}{17} * 0.449+\\frac{8}{17} * 0.486\\right)=0.062 $$ \n",
    "\n",
    "下面我们用 Python 来实现信息增益（ID3）算法： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a439c451",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_gain(data, base_ent, feature):\n",
    "    \"\"\"\n",
    "    参数:\n",
    "    data -- 数据集\n",
    "    base_ent -- 根节点的信息熵\n",
    "    feature -- 计算信息增益的特征\n",
    "\n",
    "    返回:\n",
    "    Ent -- 信息熵\n",
    "    \"\"\"\n",
    "    # 计算信息增益\n",
    "    feature_list = data[feature]  # 得到一个特征的全部取值\n",
    "    unique_value = set(feature_list)  # 特征取值的类别\n",
    "    feature_ent = 0.0\n",
    "\n",
    "    for each_feature in unique_value:\n",
    "        temp_data = data[data[feature] == each_feature]\n",
    "        weight = len(temp_data) / len(feature_list)  # 计算该特征的权重值\n",
    "        temp_ent = weight * get_Ent(temp_data)\n",
    "        feature_ent = feature_ent + temp_ent\n",
    "\n",
    "    gain = base_ent - feature_ent  # 信息增益\n",
    "    return gain"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26f73a9b",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "完成信息增益函数后，尝试计算特征 hair 的信息增益值。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "011b964c",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "get_gain(data, base_ent, \"hair\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2318d9e0",
   "metadata": {},
   "outputs": [],
   "source": [
    "0.062200515199107964"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47b847fb",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "##  19.6.  信息增益率（C4.5）  # \n",
    "\n",
    "信息增益也存在许多不足之处，经过大量的实验发现，当信息增益作为标准时，易偏向于取值较多的特征，为了避免这种偏好给预测结果带来的不好影响，可以使用增益率来选择最优划分。增益率的公式定义为: \n",
    "\n",
    "$$ \\operatorname{GainRatio}(\\operatorname{D},a)=\\frac{\\operatorname{Gain}(\\operatorname{D},a)}{\\operatorname{IV}(a)} \\tag{3} $$ \n",
    "\n",
    "其中： \n",
    "\n",
    "$$ \\operatorname{IV}(a)=-\\sum_{m=1}^{M}\\frac{\\left | \\operatorname{D}^{m} \\right |}{\\left | \\operatorname{D} \\right |}\\; log_{2}\\frac{\\left | \\operatorname{D}^{m} \\right |}{\\left | \\operatorname{D} \\right |} \\tag{4} $$ \n",
    "\n",
    "$\\operatorname{IV}(a)$  称为特征  $a$  的固有值，当  $a$  的取值数目越多，则  $\\operatorname{IV}(a)$  的值通常会比较大。例如： \n",
    "\n",
    "$$\\operatorname{IV}(\\operatorname{hair})= -\\frac{9}{17}\\; log_{2}\\frac{9}{17}-\\frac{8}{17}\\; log_{2}\\frac{8}{17}=0.998 $$ \n",
    "\n",
    "$$\\operatorname{IV}(\\operatorname{voice})= -\\frac{7}{17}\\; log_{2}\\frac{7}{17} -\\frac{5}{17}\\; log_{2}\\frac{5}{17} - \\frac{5}{17}\\; log_{2}\\frac{5}{17} = 1.566$$ \n",
    "\n",
    "##  19.7.  连续值处理  # \n",
    "\n",
    "在前面介绍的特征选择中，都是对离散型数据进行处理，但在实际的生活中数据常常会出现连续值的情况，如生成数据中的身高，当数据较少时，可以将每一个值作为一个类别，但当数据量大时，这样是不可取的，在 C4.5 算法中采用二分法对连续值进行处理。 \n",
    "\n",
    "对于连续的属性  $X$  假设共出现了  $n$  个不同的取值，将这些取值从小到大排序  $\\\\{x_{1},x_{2},x_{3},…,x_{n} \\\\} $  ，其中找一点作为划分点  $t$  ，则将数据划分为两类，大于  $t$  的为一类，小于  $t$  的为另一类。而  $t$  的取值通常为相邻两点的平均数  $t=\\frac{x_{i}+x_{i+1}}{2}$  。 \n",
    "\n",
    "则在  $n$  个连续值之中，可以作为划分点的  $t$  有  $n-1$  个。通过遍历可以像离散型一样来考察这些划分点。 \n",
    "\n",
    "$$ \\operatorname{Gain}(\\operatorname{D}, X)=\\operatorname{Ent}(\\operatorname{D})-\\frac{\\left|\\operatorname{D}<t\\right|}{|\\operatorname{D}|} \\operatorname{Ent}\\left(\\operatorname{D}<t\\right)-\\frac{\\left|\\operatorname{D}>t\\right|}{|\\operatorname{D}|} \\operatorname{Ent}\\left(\\operatorname{D}>t\\right) \\tag{5} $$ \n",
    "\n",
    "其中得到样本  $\\operatorname{D}$  基于划分点  $t$  二分后的信息增益，于是我们可以选择使得  $\\operatorname{Gain}(\\operatorname{D},X)$  值最大的划分点。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "47f1a06d",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_splitpoint(data, base_ent, feature):\n",
    "    \"\"\"\n",
    "    参数:\n",
    "    data -- 数据集\n",
    "    base_ent -- 根节点的信息熵\n",
    "    feature -- 需要划分的连续特征\n",
    "\n",
    "    返回:\n",
    "    final_t -- 连续值最优划分点\n",
    "    \"\"\"\n",
    "    # 将连续值进行排序并转化为浮点类型\n",
    "    continues_value = data[feature].sort_values().astype(np.float64)\n",
    "    continues_value = [i for i in continues_value]  # 不保留原来的索引\n",
    "    t_set = []\n",
    "    t_ent = {}\n",
    "\n",
    "    # 得到划分点 t 的集合\n",
    "    for i in range(len(continues_value) - 1):\n",
    "        temp_t = (continues_value[i] + continues_value[i + 1]) / 2\n",
    "        t_set.append(temp_t)\n",
    "\n",
    "    # 计算最优划分点\n",
    "    for each_t in t_set:\n",
    "        # 将大于划分点的分为一类\n",
    "        temp1_data = data[data[feature].astype(np.float64) > each_t]\n",
    "        # 将小于划分点的分为一类\n",
    "        temp2_data = data[data[feature].astype(np.float64) < each_t]\n",
    "        weight1 = len(temp1_data) / len(data)\n",
    "        weight2 = len(temp2_data) / len(data)\n",
    "        # 计算每个划分点的信息增益\n",
    "        temp_ent = (\n",
    "            base_ent - weight1 * get_Ent(temp1_data) - weight2 * get_Ent(temp2_data)\n",
    "        )\n",
    "        t_ent[each_t] = temp_ent\n",
    "    print(\"t_ent:\", t_ent)\n",
    "    final_t = max(t_ent, key=t_ent.get)\n",
    "    return final_t"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d82e45cc",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "实现连续值最优划分点的函数后，寻找 height 连续特征值的划分点。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8dc0ea1",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "final_t = get_splitpoint(data, base_ent, \"height\")\n",
    "final_t"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f46567dd",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "t_ent: {158.0: 0.1179805181500242, 159.5: 0.1179805181500242, 161.0: 0.2624392604045631, 162.0: 0.2624392604045631, 163.0: 0.3856047022157598, 163.5: 0.15618502398692893, 164.0: 0.3635040117533678, 165.0: 0.33712865788827096, 167.0: 0.4752766311586692, 170.0: 0.32920899348970845, 172.0: 0.5728389611412551, 172.5: 0.4248356349861979, 173.5: 0.3165383509071513, 174.5: 0.22314940393447813, 176.5: 0.14078143361499595, 179.0: 0.06696192680347068}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7562ba3e",
   "metadata": {},
   "outputs": [],
   "source": [
    "172.0"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2bc58c81",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "##  19.8.  决策树算法实现  # \n",
    "\n",
    "在对决策树中最佳特征选择和连续值处理之后，接下来就是对决策树的构建。 \n",
    "\n",
    "首先我们将连续值进行处理，在找到最佳划分点之后，将  $<t$  的值设为 0，将  $>=t$  的值设为 1。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3056caad",
   "metadata": {},
   "outputs": [],
   "source": [
    "def choice_1(x, t):\n",
    "    if x > t:\n",
    "        return \">{}\".format(t)\n",
    "    else:\n",
    "        return \"<{}\".format(t)\n",
    "\n",
    "deal_data = data.copy()\n",
    "# 使用lambda和map函数将 height 按照final_t划分为两个类别\n",
    "deal_data[\"height\"] = pd.Series(\n",
    "    map(lambda x: choice_1(int(x), final_t), deal_data[\"height\"])\n",
    ")\n",
    "deal_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "38329e64",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "|  hair  |  voice  |  height  |  ear_stud  |  labels   \n",
    "---|---|---|---|---|---  \n",
    "0  |  long  |  thick  |  >172.0  |  no  |  man   \n",
    "1  |  short  |  medium  |  <172.0  |  no  |  man   \n",
    "2  |  short  |  thin  |  >172.0  |  yes  |  man   \n",
    "3  |  short  |  thick  |  <172.0  |  no  |  man   \n",
    "4  |  long  |  medium  |  <172.0  |  no  |  man   \n",
    "5  |  short  |  thick  |  >172.0  |  no  |  man   \n",
    "6  |  long  |  thick  |  >172.0  |  yes  |  man   \n",
    "7  |  short  |  thin  |  >172.0  |  no  |  man   \n",
    "8  |  long  |  thin  |  <172.0  |  yes  |  woman   \n",
    "9  |  long  |  medium  |  <172.0  |  yes  |  woman   \n",
    "10  |  long  |  thick  |  <172.0  |  yes  |  woman   \n",
    "11  |  short  |  thin  |  <172.0  |  yes  |  woman   \n",
    "12  |  long  |  thin  |  <172.0  |  no  |  woman   \n",
    "13  |  short  |  medium  |  <172.0  |  no  |  woman   \n",
    "14  |  long  |  thick  |  <172.0  |  yes  |  woman   \n",
    "15  |  long  |  thin  |  <172.0  |  no  |  woman   \n",
    "16  |  short  |  medium  |  <172.0  |  yes  |  woman   \n",
    "  \n",
    "将数据进行预处理之后，接下来就是选择最优的划分特征。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "308bdd0f",
   "metadata": {},
   "outputs": [],
   "source": [
    "def choose_feature(data):\n",
    "    \"\"\"\n",
    "    参数:\n",
    "    data -- 数据集\n",
    "\n",
    "    返回:\n",
    "    best_feature -- 最优的划分特征\n",
    "    \"\"\"\n",
    "    # 选择最优划分特征\n",
    "    num_features = len(data.columns) - 1  # 特征数量\n",
    "    base_ent = get_Ent(data)\n",
    "    best_gain = 0.0  # 初始化信息增益\n",
    "    best_feature = data.columns[0]\n",
    "    for i in range(num_features):  # 遍历所有特征\n",
    "        temp_gain = get_gain(data, base_ent, data.columns[i])  # 计算信息增益\n",
    "        if temp_gain > best_gain:  # 选择最大的信息增益\n",
    "            best_gain = temp_gain\n",
    "            best_feature = data.columns[i]\n",
    "    return best_feature  # 返回最优特征"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cef78ded",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "完成函数之后，我们首先看看数据集中信息增益值最大的特征是什么？ "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "57292cee",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "choose_feature(deal_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6654505c",
   "metadata": {},
   "outputs": [],
   "source": [
    "'height'"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9b5b1a9e",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "在将所有的子模块构建好之后，最后就是对核心决策树的构建，本次实验采用信息增益（ID3）的方式构建决策树。在构建的过程中，根据算法流程，我们反复遍历数据集，计算每一个特征的信息增益，通过比较将最好的特征作为父节点，根据特征的值确定分支子节点，然后重复以上操作，直到某一个分支全部属于同一类别，或者遍历完所有的数据特征，当遍历到最后一个特征时，若分支数据依然「不纯」，就将其中数量较多的类别作为子节点。 \n",
    "\n",
    "因此最好采用递归的方式来构建决策树。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1afed4ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_tree(data):\n",
    "    \"\"\"\n",
    "    参数:\n",
    "    data -- 数据集\n",
    "\n",
    "    返回:\n",
    "    tree -- 以字典的形式返回决策树\n",
    "    \"\"\"\n",
    "    # 构建决策树\n",
    "    feature_list = data.columns[:-1].tolist()\n",
    "    label_list = data.iloc[:, -1]\n",
    "    if len(data[\"labels\"].value_counts()) == 1:\n",
    "        leaf_node = data[\"labels\"].mode().values\n",
    "        return leaf_node  # 第一个递归结束条件：所有的类标签完全相同\n",
    "    if len(feature_list) == 1:\n",
    "        leaf_node = data[\"labels\"].mode().values\n",
    "        return leaf_node  # 第二个递归结束条件：用完了所有特征\n",
    "    best_feature = choose_feature(data)  # 最优划分特征\n",
    "    tree = {best_feature: {}}\n",
    "    feat_values = data[best_feature]\n",
    "    unique_value = set(feat_values)\n",
    "    for value in unique_value:\n",
    "        temp_data = data[data[best_feature] == value]\n",
    "        temp_data = temp_data.drop([best_feature], axis=1)\n",
    "        tree[best_feature][value] = create_tree(temp_data)\n",
    "    return tree"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0cc8c208",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "完成创建决策树函数后，接下来对我们第一棵树进行创建。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1f95fe94",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "tree = create_tree(deal_data)\n",
    "tree"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "69b667c8",
   "metadata": {},
   "outputs": [],
   "source": [
    "{'height': {'>172.0': array(['man'], dtype=object),\n",
    "  '<172.0': {'ear_stud': {'yes': array(['woman'], dtype=object),\n",
    "    'no': {'voice': {'thin': array(['woman'], dtype=object),\n",
    "      'medium': array(['man'], dtype=object),\n",
    "      'thick': array(['man'], dtype=object)}}}}}}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5c1efab1",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "通过字典的方式表示构建好的树，可以通过图像的方式更加直观的了解。 \n",
    "\n",
    "![image](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1548036774075.png)\n",
    "\n",
    "通过图形可以看出，在构建决策树时不一定每一个特征都会成为树的节点（如同 hair）。 \n",
    "\n",
    "在构建好决策树之后，最终就可以使用未知样本进行预测分类。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d995920",
   "metadata": {},
   "outputs": [],
   "source": [
    "def classify(tree, test):\n",
    "    \"\"\"\n",
    "    参数:\n",
    "    data -- 数据集\n",
    "    test -- 需要测试的数据\n",
    "\n",
    "    返回:\n",
    "    class_label -- 分类结果\n",
    "    \"\"\"\n",
    "    # 决策分类\n",
    "    first_feature = list(tree.keys())[0]  # 获取根节点\n",
    "    feature_dict = tree[first_feature]  # 根节点下的树\n",
    "    labels = test.columns.tolist()\n",
    "    value = test[first_feature][0]\n",
    "    for key in feature_dict.keys():\n",
    "        if value == key:\n",
    "            if type(feature_dict[key]).__name__ == \"dict\":  # 判断该节点是否为叶节点\n",
    "                class_label = classify(feature_dict[key], test)  # 采用递归直到遍历到叶节点\n",
    "            else:\n",
    "                class_label = feature_dict[key]\n",
    "    return class_label"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5f163950",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "在分类函数完成之后，接下来我们尝试对未知数据进行分类。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2c94e426",
   "metadata": {},
   "outputs": [],
   "source": [
    "test = pd.DataFrame(\n",
    "    {\"hair\": [\"long\"], \"voice\": [\"thin\"], \"height\": [163], \"ear_stud\": [\"yes\"]}\n",
    ")\n",
    "test"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e15dc7a4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "|  hair  |  voice  |  height  |  ear_stud   \n",
    "---|---|---|---|---  \n",
    "0  |  long  |  thin  |  163  |  yes   \n",
    "  \n",
    "对连续值进行预处理： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "87c3a557",
   "metadata": {},
   "outputs": [],
   "source": [
    "test[\"height\"] = pd.Series(map(lambda x: choice_1(int(x), final_t), test[\"height\"]))\n",
    "test"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e69fdd2c",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "|  hair  |  voice  |  height  |  ear_stud   \n",
    "---|---|---|---|---  \n",
    "0  |  long  |  thin  |  <172.0  |  yes   \n",
    "  \n",
    "分类预测： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "93173d12",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "classify(tree, test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "16b6900c",
   "metadata": {},
   "outputs": [],
   "source": [
    "array(['woman'], dtype=object)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5143558e",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "一个身高 163 厘米，长发，带着耳钉且声音纤细的人，在我们构建的决策树判断后预测为一名女性。 \n",
    "\n",
    "上面的实验中，我们没有考虑  $=划分点$  的情况，你可以自行尝试将  $>=划分点$  或  $<=划分点$  归为一类，看看结果又有哪些不同？ \n",
    "\n",
    "##  19.9.  预剪枝和后剪枝  # \n",
    "\n",
    "在决策树的构建过程中，特别在数据特征非常多时，为了尽可能正确的划分每一个训练样本，结点的划分就会不停的重复，则一棵决策树的分支就非常多。对于训练集而言，拟合出来的模型是非常完美的。但是，这种完美就使得整体模型的复杂度变高，同时对其他数据集的预测能力下降，也就是我们常说的过拟合使得模型的泛化能力变弱。为了避免过拟合问题的出现，在决策树中最常见的两种方法就是预剪枝和后剪枝。 \n",
    "\n",
    "##  19.10.  预剪枝  # \n",
    "\n",
    "预剪枝，顾名思义预先减去枝叶，在构建决策树模型的时候，每一次对数据划分之前进行估计，如果当前节点的划分不能带来决策树泛化的提升，则停止划分并将当前节点标记为叶节点。例如前面构造的决策树，按照决策树的构建原则，通过 ` height  ` 特征进行划分后  $<172$  分支中又按照 ` ear_stud  ` 特征值进行继续划分。如果应用预剪枝，则当通过 ` height  ` 进行特征划分之后，对  $<172$  分支是否进行 ` ear_stud  ` 特征进行划分时计算划分前后的准确度，如果划分后的更高则按照 ` ear_stud  ` 继续划分，如果更低则停止划分。 \n",
    "\n",
    "##  19.11.  后剪枝  # \n",
    "\n",
    "跟预剪枝在构建决策树的过程中判断是否继续特征划分所不同的是，后剪枝在决策树构建好之后对树进行修剪。如果说预剪枝是自顶向下的修剪，那么后剪枝就是自底向上进行修剪。后剪枝将最后的分支节点替换为叶节点，判断是否带来决策树泛化的提升，是则进行修剪，并将该分支节点替换为叶节点，否则不进行修剪。例如在前面构建好决策树之后，  $>172$  分支的 voice 特征，将其替换为叶节点如（man），计算替换前后划分准确度，如果替换后准确度更高则进行修剪（用叶节点替换分支节点），否则不修剪。 \n",
    "\n",
    "##  19.12.  学生成绩分类预测  # \n",
    "\n",
    "在前面我们使用 Python 将决策树的特征选择，连续值处理和预测分类做了详细的讲解。接下来我们应用决策树模型对真实的数据进行分类预测。 \n",
    "\n",
    "本次应用到的数据为学生成绩数据集 ` course-13-student.csv  ` ，一共有 395 条数据，26 个特征。首先加载并预览数据集： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c58a2f00",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "wget -nc https://cdn.aibydoing.com/aibydoing/files/course-13-student.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "668eacb0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入数据集并预览\n",
    "stu_grade = pd.read_csv(\"course-13-student.csv\")\n",
    "stu_grade.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "968bc8fb",
   "metadata": {},
   "source": [
    "|  school  |  sex  |  address  |  Pstatus  |  Pedu  |  reason  |  guardian  |  traveltime  |  studytime  |  schoolsup  |  ...  |  famrel  |  freetime  |  goout  |  Dalc  |  Walc  |  health  |  absences  |  G1  |  G2  |  G3   \n",
    "---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---  \n",
    "0  |  GP  |  F  |  U  |  A  |  4.0  |  course  |  mother  |  2  |  2  |  yes  |  ...  |  4  |  3  |  4  |  1  |  1  |  3  |  6  |  5  |  6  |  6   \n",
    "1  |  GP  |  F  |  U  |  T  |  1.0  |  course  |  father  |  1  |  2  |  no  |  ...  |  5  |  3  |  3  |  1  |  1  |  3  |  4  |  5  |  5  |  6   \n",
    "2  |  GP  |  F  |  U  |  T  |  1.0  |  other  |  mother  |  1  |  2  |  yes  |  ...  |  4  |  3  |  2  |  2  |  3  |  3  |  10  |  7  |  8  |  10   \n",
    "3  |  GP  |  F  |  U  |  T  |  3.0  |  home  |  mother  |  1  |  3  |  no  |  ...  |  3  |  2  |  2  |  1  |  1  |  5  |  2  |  15  |  14  |  15   \n",
    "4  |  GP  |  F  |  U  |  T  |  3.0  |  home  |  father  |  1  |  2  |  no  |  ...  |  4  |  3  |  2  |  1  |  2  |  5  |  4  |  6  |  10  |  10   \n",
    "  \n",
    "5 rows × 27 columns \n",
    "\n",
    "由于特征过多，我们选择部分特征作为决策树模型的分类特征,分别为： \n",
    "\n",
    "  * school：学生所读学校（GP，MS） \n",
    "\n",
    "  * sex：性别（F：女，M：男） \n",
    "\n",
    "  * address：家庭住址（U：城市，R：郊区） \n",
    "\n",
    "  * Pstatus：父母状态（A：同居，T：分居） \n",
    "\n",
    "  * Pedu：父母学历由低到高 \n",
    "\n",
    "  * reason：选择这所学校的原因（home：家庭，course：课程设计，reputation：学校地位，other：其他） \n",
    "\n",
    "  * guardian：监护人（mother：母亲，father：父亲，other：其他） \n",
    "\n",
    "  * studytime：周末学习时长 \n",
    "\n",
    "  * schoolsup：额外教育支持（yes：有，no：没有） \n",
    "\n",
    "  * famsup：家庭教育支持（yes：有，no：没有） \n",
    "\n",
    "  * paid：是否上补习班（yes：是，no：否） \n",
    "\n",
    "  * higher：是否想受更好的教育（yes：是，no：否） \n",
    "\n",
    "  * internet：是否家里联网（yes：是，no：否） \n",
    "\n",
    "  * G1：一阶段测试成绩 \n",
    "\n",
    "  * G2：二阶段测试成绩 \n",
    "\n",
    "  * G3：最终成绩 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f692babb",
   "metadata": {},
   "outputs": [],
   "source": [
    "new_data = stu_grade.iloc[:, [0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 14, 15, 24, 25, 26]]\n",
    "new_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6a8707d6",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "|  school  |  sex  |  address  |  Pstatus  |  Pedu  |  reason  |  guardian  |  studytime  |  schoolsup  |  famsup  |  paid  |  higher  |  internet  |  G1  |  G2  |  G3   \n",
    "---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---  \n",
    "0  |  GP  |  F  |  U  |  A  |  4.0  |  course  |  mother  |  2  |  yes  |  no  |  no  |  yes  |  no  |  5  |  6  |  6   \n",
    "1  |  GP  |  F  |  U  |  T  |  1.0  |  course  |  father  |  2  |  no  |  yes  |  no  |  yes  |  yes  |  5  |  5  |  6   \n",
    "2  |  GP  |  F  |  U  |  T  |  1.0  |  other  |  mother  |  2  |  yes  |  no  |  yes  |  yes  |  yes  |  7  |  8  |  10   \n",
    "3  |  GP  |  F  |  U  |  T  |  3.0  |  home  |  mother  |  3  |  no  |  yes  |  yes  |  yes  |  yes  |  15  |  14  |  15   \n",
    "4  |  GP  |  F  |  U  |  T  |  3.0  |  home  |  father  |  2  |  no  |  yes  |  yes  |  yes  |  no  |  6  |  10  |  10   \n",
    "  \n",
    "首先我们将成绩 G1，G2，G3 根据分数进行等级划分，将 0-4 划分为 bad，5-9 划分为 medium ， "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d27a679a",
   "metadata": {},
   "outputs": [],
   "source": [
    "def choice_2(x):\n",
    "    x = int(x)\n",
    "    if x < 5:\n",
    "        return \"bad\"\n",
    "    elif x >= 5 and x < 10:\n",
    "        return \"medium\"\n",
    "    elif x >= 10 and x < 15:\n",
    "        return \"good\"\n",
    "    else:\n",
    "        return \"excellent\"\n",
    "\n",
    "stu_data = new_data.copy()\n",
    "stu_data[\"G1\"] = pd.Series(map(lambda x: choice_2(x), stu_data[\"G1\"]))\n",
    "stu_data[\"G2\"] = pd.Series(map(lambda x: choice_2(x), stu_data[\"G2\"]))\n",
    "stu_data[\"G3\"] = pd.Series(map(lambda x: choice_2(x), stu_data[\"G3\"]))\n",
    "stu_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76268edf",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "|  school  |  sex  |  address  |  Pstatus  |  Pedu  |  reason  |  guardian  |  studytime  |  schoolsup  |  famsup  |  paid  |  higher  |  internet  |  G1  |  G2  |  G3   \n",
    "---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---  \n",
    "0  |  GP  |  F  |  U  |  A  |  4.0  |  course  |  mother  |  2  |  yes  |  no  |  no  |  yes  |  no  |  medium  |  medium  |  medium   \n",
    "1  |  GP  |  F  |  U  |  T  |  1.0  |  course  |  father  |  2  |  no  |  yes  |  no  |  yes  |  yes  |  medium  |  medium  |  medium   \n",
    "2  |  GP  |  F  |  U  |  T  |  1.0  |  other  |  mother  |  2  |  yes  |  no  |  yes  |  yes  |  yes  |  medium  |  medium  |  good   \n",
    "3  |  GP  |  F  |  U  |  T  |  3.0  |  home  |  mother  |  3  |  no  |  yes  |  yes  |  yes  |  yes  |  excellent  |  good  |  excellent   \n",
    "4  |  GP  |  F  |  U  |  T  |  3.0  |  home  |  father  |  2  |  no  |  yes  |  yes  |  yes  |  no  |  medium  |  good  |  good   \n",
    "  \n",
    "We bin Pedu (parents' education level) in the same way. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e648a291",
   "metadata": {},
   "outputs": [],
   "source": [
    "def choice_3(x):\n",
    "    \"\"\"Map parents' education level (an integer) to low / medium / high.\"\"\"\n",
    "    x = int(x)\n",
    "    if x > 3:\n",
    "        return \"high\"\n",
    "    elif x >= 2:\n",
    "        return \"medium\"\n",
    "    else:\n",
    "        return \"low\"\n",
    "\n",
    "stu_data[\"Pedu\"] = stu_data[\"Pedu\"].map(choice_3)\n",
    "stu_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "86b2aa68",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "|  school  |  sex  |  address  |  Pstatus  |  Pedu  |  reason  |  guardian  |  studytime  |  schoolsup  |  famsup  |  paid  |  higher  |  internet  |  G1  |  G2  |  G3   \n",
    "---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---  \n",
    "0  |  GP  |  F  |  U  |  A  |  high  |  course  |  mother  |  2  |  yes  |  no  |  no  |  yes  |  no  |  medium  |  medium  |  medium   \n",
    "1  |  GP  |  F  |  U  |  T  |  low  |  course  |  father  |  2  |  no  |  yes  |  no  |  yes  |  yes  |  medium  |  medium  |  medium   \n",
    "2  |  GP  |  F  |  U  |  T  |  low  |  other  |  mother  |  2  |  yes  |  no  |  yes  |  yes  |  yes  |  medium  |  medium  |  good   \n",
    "3  |  GP  |  F  |  U  |  T  |  medium  |  home  |  mother  |  3  |  no  |  yes  |  yes  |  yes  |  yes  |  excellent  |  good  |  excellent   \n",
    "4  |  GP  |  F  |  U  |  T  |  medium  |  home  |  father  |  2  |  no  |  yes  |  yes  |  yes  |  no  |  medium  |  good  |  good   \n",
    "  \n",
    "After the level binning, the categorical feature values must be replaced with integer codes so that the data conforms to scikit-learn's input requirements. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "32773f35",
   "metadata": {},
   "outputs": [],
   "source": [
    "def replace_feature(data):\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "    data -- dataset\n",
    "\n",
    "    Returns:\n",
    "    data -- dataset with feature values replaced by integer codes\n",
    "    \"\"\"\n",
    "    for each in data.columns:  # iterate over every feature (column) name\n",
    "        feature_list = data[each]\n",
    "        unique_value = set(feature_list)\n",
    "        i = 0\n",
    "        # Note: set iteration order is arbitrary, so which category gets\n",
    "        # which code is arbitrary too; only consistency matters here\n",
    "        for fea_value in unique_value:\n",
    "            data[each] = data[each].replace(fea_value, i)\n",
    "            i += 1\n",
    "    return data"
   ]
  },
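  {
   "cell_type": "markdown",
   "id": "fa0c01e1",
   "metadata": {},
   "source": [
    "As an aside (this alternative is not used in the experiment), `pandas.factorize` performs the same category-to-integer encoding in a single call and, unlike the loop above, also returns the mapping it used:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "s = pd.Series([\"GP\", \"MS\", \"GP\", \"GP\"])\n",
    "codes, uniques = pd.factorize(s)  # codes assigned in order of first appearance\n",
    "print(list(codes))    # [0, 1, 0, 0]\n",
    "print(list(uniques))  # ['GP', 'MS']\n",
    "```"
   ]
  },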
  {
   "cell_type": "markdown",
   "id": "435a77c0",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Replace the feature values and display the result. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e328d939",
   "metadata": {},
   "outputs": [],
   "source": [
    "stu_data = replace_feature(stu_data)\n",
    "stu_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6678392d",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "|  school  |  sex  |  address  |  Pstatus  |  Pedu  |  reason  |  guardian  |  studytime  |  schoolsup  |  famsup  |  paid  |  higher  |  internet  |  G1  |  G2  |  G3   \n",
    "---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---  \n",
    "0  |  0  |  1  |  0  |  1  |  1  |  0  |  0  |  1  |  0  |  1  |  1  |  0  |  1  |  0  |  0  |  0   \n",
    "1  |  0  |  1  |  0  |  0  |  0  |  0  |  2  |  1  |  1  |  0  |  1  |  0  |  0  |  0  |  0  |  0   \n",
    "2  |  0  |  1  |  0  |  0  |  0  |  1  |  0  |  1  |  0  |  1  |  0  |  0  |  0  |  0  |  0  |  3   \n",
    "3  |  0  |  1  |  0  |  0  |  2  |  3  |  0  |  2  |  1  |  0  |  0  |  0  |  0  |  1  |  3  |  1   \n",
    "4  |  0  |  1  |  0  |  0  |  2  |  3  |  2  |  1  |  1  |  0  |  0  |  0  |  1  |  0  |  3  |  3   \n",
    "  \n",
    "With the preprocessed dataset loaded, we again need to split it into a training set and a test set before applying the decision tree algorithm. As a rule of thumb, the training set takes 70% of the data and the test set 30%. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d37dca2",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "X_train, X_test, y_train, y_test = train_test_split(\n",
    "    stu_data.iloc[:, :-1], stu_data[\"G3\"], test_size=0.3, random_state=5\n",
    ")\n",
    "\n",
    "X_test.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "086d58bc",
   "metadata": {},
   "source": [
    "|  school  |  sex  |  address  |  Pstatus  |  Pedu  |  reason  |  guardian  |  studytime  |  schoolsup  |  famsup  |  paid  |  higher  |  internet  |  G1  |  G2   \n",
    "---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---  \n",
    "306  |  0  |  0  |  0  |  1  |  2  |  0  |  1  |  0  |  1  |  1  |  1  |  0  |  1  |  1  |  1   \n",
    "343  |  0  |  1  |  0  |  1  |  2  |  3  |  2  |  1  |  1  |  0  |  1  |  0  |  0  |  0  |  0   \n",
    "117  |  0  |  0  |  0  |  0  |  2  |  3  |  2  |  0  |  1  |  1  |  1  |  0  |  0  |  3  |  3   \n",
    "50  |  0  |  1  |  0  |  0  |  2  |  0  |  0  |  1  |  1  |  0  |  0  |  0  |  0  |  3  |  3   \n",
    "316  |  0  |  1  |  0  |  0  |  0  |  0  |  0  |  1  |  1  |  0  |  0  |  0  |  0  |  0  |  0   \n",
    "  \n",
    "With the dataset split, the next step is prediction. Earlier in this experiment we implemented the decision tree in plain Python; here we build it with scikit-learn instead. The commonly used parameters of the scikit-learn decision tree class ` DecisionTreeClassifier(criterion='gini',  random_state=None)  ` are: \n",
    "\n",
    "  * ` criterion  ` selects the feature-splitting criterion. The default is gini (discussed below); entropy (information gain) is also available. \n",
    "\n",
    "  * ` random_state  ` is the random seed. When there are very many features, scikit-learn examines them in a random order for efficiency when searching for a good split, so fixing the seed makes the learned tree reproducible. "
   ]
  },
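  {
   "cell_type": "markdown",
   "id": "b51c7a90",
   "metadata": {},
   "source": [
    "The role of `random_state` can be checked directly: with the seed fixed, repeated fits on the same data produce the same tree. A minimal sketch on synthetic data (not the student dataset):\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.tree import DecisionTreeClassifier, export_text\n",
    "\n",
    "X, y = make_classification(n_samples=100, n_features=8, random_state=0)\n",
    "t1 = DecisionTreeClassifier(criterion=\"entropy\", random_state=34).fit(X, y)\n",
    "t2 = DecisionTreeClassifier(criterion=\"entropy\", random_state=34).fit(X, y)\n",
    "# Same seed, same data -> identical trees\n",
    "print(export_text(t1) == export_text(t2))\n",
    "```"
   ]
  },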
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "346b1274",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "from sklearn.tree import DecisionTreeClassifier\n",
    "\n",
    "dt_model = DecisionTreeClassifier(criterion=\"entropy\", random_state=34)\n",
    "dt_model.fit(X_train, y_train)  # train the model on the training set"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d70e0274",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "##  19.13.  Decision Tree Visualization  # \n",
    "\n",
    "Having built the tree, we can visualize it by importing ` export_graphviz  ` to draw the graph. \n",
    "\n",
    "The code below generates the tree image; the tree is fairly large, so drag the scroll bar to inspect it. \n",
    "\n",
    "Note \n",
    "\n",
    "```bash\n",
    "pip install graphviz   # install the graphviz Python package\n",
    "brew install graphviz  # macOS: install the Graphviz binaries\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e70e3644",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.tree import export_graphviz\n",
    "import graphviz\n",
    "\n",
    "img = export_graphviz(\n",
    "    dt_model,\n",
    "    out_file=None,\n",
    "    feature_names=stu_data.columns[:-1].values.tolist(),  # feature names\n",
    "    class_names=np.array([\"bad\", \"medium\", \"good\", \"excellent\"]),  # class names (order must match the sorted integer codes)\n",
    "    filled=True,\n",
    "    node_ids=True,\n",
    "    rounded=True,\n",
    ")\n",
    "\n",
    "graphviz.Source(img)  # render the decision tree"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "00ae5331",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "[ ![../_images/e057fb33dddf78ba714e20d244097987bbb2c576e49a7e63b3fad0221dfb4266.svg](../_images/e057fb33dddf78ba714e20d244097987bbb2c576e49a7e63b3fad0221dfb4266.svg) ](../_images/e057fb33dddf78ba714e20d244097987bbb2c576e49a7e63b3fad0221dfb4266.svg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1d7cdc99",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "y_pred = dt_model.predict(X_test)  # predict on the test set with the trained model\n",
    "y_pred"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb8c3c63",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "```\n",
    "array([1, 2, 3, 3, 0, 3, 0, 3, 3, 3, 0, 0, 1, 1, 3, 1, 0, 0, 3, 3, 3, 0,\n",
    "       1, 1, 3, 0, 1, 1, 3, 1, 0, 0, 0, 3, 2, 0, 3, 3, 1, 0, 1, 1, 0, 3,\n",
    "       0, 3, 3, 3, 3, 3, 1, 3, 3, 0, 1, 0, 1, 3, 0, 2, 1, 0, 3, 3, 3, 1,\n",
    "       1, 1, 3, 0, 0, 2, 2, 1, 3, 0, 0, 1, 0, 1, 0, 1, 3, 2, 1, 2, 0, 2,\n",
    "       0, 3, 2, 2, 3, 3, 3, 1, 0, 0, 1, 0, 0, 3, 1, 3, 2, 3, 3, 0, 3, 3,\n",
    "       0, 3, 0, 3, 3, 2, 3, 1, 3])\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "36d57709",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [],
   "source": [
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "# Compute the classification accuracy\n",
    "accuracy_score(y_test, y_pred)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e7129dbf",
   "metadata": {},
   "source": [
    "```\n",
    "0.680672268907563\n",
    "```"
   ]
  },
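  {
   "cell_type": "markdown",
   "id": "c3d94e27",
   "metadata": {},
   "source": [
    "Accuracy alone hides per-class behavior. As an optional extra (not part of the original experiment), scikit-learn's `classification_report` breaks precision, recall and F1 down by class; a toy sketch of its use (the labels below are made up for illustration):\n",
    "\n",
    "```python\n",
    "from sklearn.metrics import classification_report\n",
    "\n",
    "y_true = [0, 0, 1, 1, 2, 2]\n",
    "y_hat = [0, 1, 1, 1, 2, 0]\n",
    "print(classification_report(y_true, y_hat, zero_division=0))\n",
    "```\n",
    "\n",
    "Passing `y_test` and `y_pred` from the cells above gives the same per-level breakdown for the student grades.\n"
   ]
  },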
  {
   "cell_type": "markdown",
   "id": "c26bf131",
   "metadata": {},
   "source": [
    "##  19.14.  CART Decision Trees  # \n",
    "\n",
    "The classification and regression tree (CART) is another widely used decision tree learning algorithm. Like ID3 and C4.5 it splits on features, and it consists of tree generation and tree pruning. As its name says, it can perform both classification and regression; since this experiment is about classification trees, readers interested in regression trees can consult further material on their own. \n",
    "\n",
    "Building a CART tree follows much the same procedure as ID3 and C4.5, but for choosing the split CART uses the Gini index. The purity of a dataset  $\\operatorname{D}$  can be measured by its Gini value: \n",
    "\n",
    "$$ \\operatorname{Gini}(\\operatorname{D})=\\sum_{k=1}^{|y|} \\sum_{k^{\\prime} \\neq k} p_{k} p_{k^{\\prime}}=1-\\sum_{k=1}^{|y|} p_{k}^{2} \\tag{7} $$ \n",
    "\n",
    "The Gini value is the probability that two samples drawn at random from the dataset belong to different classes, so the smaller it is, the purer the dataset. The Gini index of an attribute is then defined analogously to the measures used by ID3 and C4.5: \n",
    "\n",
    "$$ \\operatorname{Gini\\_index}(\\operatorname{D},a)=\\sum_{m=1}^{M}\\frac{\\left |\\operatorname{D}^{m} \\right |}{\\left |\\operatorname{D} \\right |}\\operatorname{Gini}(\\operatorname{D}^{m}) \\tag{8} $$ \n",
    "\n",
    "When splitting, the attribute with the smallest Gini index is chosen as the optimal split point. \n",
    "\n",
    "In practice the Gini index is the most commonly used splitting criterion, chiefly because it is much cheaper to compute than the measures of ID3 and C4.5 (no logarithms are involved). \n",
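    "\n",
    "As a quick numeric illustration (the numbers are made up, not from the text), suppose a split sends 10 samples into two branches: one with 4 samples all of class A, the other with 2 of class A and 4 of class B. A minimal sketch of the computation:\n",
    "\n",
    "```python\n",
    "def gini(counts):\n",
    "    \"\"\"Gini value of a node, given the per-class sample counts.\"\"\"\n",
    "    total = sum(counts)\n",
    "    return 1 - sum((c / total) ** 2 for c in counts)\n",
    "\n",
    "def gini_index(branches):\n",
    "    \"\"\"Sample-weighted Gini index of a split (one count list per branch).\"\"\"\n",
    "    n = sum(sum(b) for b in branches)\n",
    "    return sum(sum(b) / n * gini(b) for b in branches)\n",
    "\n",
    "print(gini([4, 0]))                  # pure branch: 0.0\n",
    "print(gini([2, 4]))                  # mixed branch: 0.444...\n",
    "print(gini_index([[4, 0], [2, 4]]))  # 0.4 * 0.0 + 0.6 * 0.444... = 0.266...\n",
    "```\n",
    "\n",
    "The candidate split whose weighted Gini index is smallest would be preferred.\n",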
    "\n",
    "##  19.15.  Summary  # \n",
    "\n",
    "In this experiment we studied the principles and algorithmic flow of decision trees, explaining the information gain and gain ratio splitting criteria with both formulas and worked examples. We implemented a decision tree from scratch in Python, then used scikit-learn to build a tree and classify real data. \n",
    "\n",
    "Related links \n",
    "\n",
    "  * [ Decision tree - Wikipedia ](https://zh.wikipedia.org/zh-hans/%E5%86%B3%E7%AD%96%E6%A0%91)\n"
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "main_language": "python",
   "notebook_metadata_filter": "-all"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
