{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<br>\n",
    "<center><font face=\"黑体\" size=4>Lab Manual for the Course \"Fundamentals of Machine Learning Practice\"</font></center>\n",
    "<br>\n",
    "<br>\n",
    "<center><font face=\"黑体\" size=4>Chapter 2  Decision Tree Models</font></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{1. Experiment Objectives}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Understand and master the construction principles of decision tree models and how to implement them."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2. Experiment Content}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.1 Basic Principles of the Decision Tree Model}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Decision trees are a common class of machine learning algorithms that make decisions based on a tree structure. Each leaf node corresponds to a decision outcome, while the root node and every internal node correspond to an attribute test. The goal of decision tree learning is to produce a tree with strong generalization ability, i.e., one that handles unseen examples well. Figures 2.1 and 2.2 illustrate a decision tree model built from training data.\n",
    "\n",
    "A decision tree model reaches its final prediction by repeatedly applying logical tests to the feature attributes of the input, a process very similar to human decision making. For a given decision problem, the model performs a sequence of tests, or sub-decisions, and the final conclusion of the tree is the result we want to predict. Each sub-decision tests some attribute of the object being predicted; each test either yields a final conclusion or leads to a further test, whose scope is restricted by the outcome of the previous one."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=picture/3.1.png>\n",
    "<center><font size=3>Figure 2.1  A dataset for watermelon classification</font></center>\n",
    "<img src=picture/3.2.png>\n",
    "<center><font size=3>Figure 2.2  A decision tree model</font></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.2 Basic Procedure for Building a Decision Tree}$\n",
    "\n",
    "$\\textbf{The general process of building a decision tree model:}$\n",
    "\n",
    "(1) Create a node. If all samples at this node belong to the same class, stop: make the node a leaf and label it with that class.\n",
    "\n",
    "(2) Otherwise, choose the attribute that best separates the training samples as the test attribute of this node.\n",
    "\n",
    "(3) Create one branch for each value of the test attribute and partition the samples accordingly.\n",
    "\n",
    "(4) Apply the above steps recursively, building the tree top-down until at least one of the following stopping conditions holds:\n",
    "\n",
    "A. All samples at the node belong to the same class;\n",
    "\n",
    "B. No attributes remain for further splitting;\n",
    "\n",
    "C. No samples remain.\n",
    "\n",
    "This process shows that the key to decision tree learning is how to choose the optimal splitting attribute: one attribute must be selected from many as the splitting criterion at the current node, and different quantitative evaluation methods for this choice give rise to different decision tree algorithms. The goal of the split is that, after partitioning the dataset by the chosen attribute, each data subset should be purer than the original dataset $D$, i.e., its uncertainty should be lower than that of $D$."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.3 Choosing the Splitting Attribute}$\n",
    "\n",
    "The most important operation in decision tree learning is repeatedly selecting the optimal attribute from many candidates to partition the data. The purpose of each split is to reduce the uncertainty of the resulting child nodes with respect to the decision outcome, so that the \"purity\" of the nodes keeps increasing. This section introduces several commonly used criteria for choosing the splitting attribute.\n",
    "\n",
    "$\\textbf{(1) Information gain}$\n",
    "\n",
    "In information theory, entropy measures the uncertainty of a system: the larger the entropy, the greater the uncertainty. In decision tree models, entropy measures how dispersed the class labels in a sample set are, i.e., the purity of the set. It is computed as in Eq. (2.1), where $|Y|$ is the number of classes in the sample set $D$ and $p_i$ is the proportion of samples in class $i$.\n",
    "\n",
    "$Ent(D)=-\\sum_{i=1}^{|Y|}p_{i}\\log_{2}p_i$. Eq. (2.1)\n",
    "\n",
    "Suppose attribute $x_j$ takes $k$ distinct values $\\{x_{j1},x_{j2},\\ldots,x_{jk}\\}$; splitting on this attribute produces $k$ branch nodes, with the $l$-th branch containing the sample subset $D^l$. The difference between the entropy before and after the split is called the information gain, given by Eq. (2.2). The larger the gain, the more the class uncertainty decreases after the split and the purer the resulting partitions. Therefore, among all attributes we can choose the one with the largest information gain as the optimal splitting attribute.\n",
    "\n",
    "$Gain(D,x_j)=Ent(D)-\\sum_{l=1}^{k}\\frac{|D^l|}{|D|}Ent(D^l)$. Eq. (2.2)\n",
    "\n",
    "$\\textbf{(2) Gain ratio}$\n",
    "\n",
    "Information gain is biased toward attributes with many values. To reduce the adverse effect of this bias, the gain ratio can be used to select the optimal splitting attribute. It is computed as in Eq. (2.3):\n",
    "\n",
    "$GainRatio(D,x_j) = \\frac{Gain(D,x_j)}{IV(x_j)}$, Eq. (2.3)\n",
    "\n",
    "$IV(x_j) = -\\sum_{l=1}^{k}\\frac{|D^l|}{|D|}\\log_{2}\\frac{|D^l|}{|D|}$, \n",
    "\n",
    "where $k$ is the number of possible values of attribute $x_j$; splitting on the attribute produces $k$ branch nodes, and the $l$-th branch contains the sample subset $D^l$.\n",
    "\n",
    "$\\textbf{(3) Gini index}$\n",
    "\n",
    "The Gini index is the probability that two samples drawn at random from the dataset carry different class labels; hence the smaller the Gini index, the purer the dataset. It is computed as in Eqs. (2.4) and (2.5).\n",
    "\n",
    "$Gini(D) = \\sum_{y=1}^{|Y|}\\sum_{t\\neq y,t \\in Y}p_{y}p_{t}=1-\\sum_{y=1}^{|Y|}p_{y}^{2}$. Eq. (2.4)\n",
    "\n",
    "Here $Y$ is the set of all classes in dataset $D$, $t$ and $y$ are two distinct classes in $Y$, and $p_t$ and $p_y$ are the fractions of all samples in $D$ that belong to classes $t$ and $y$, respectively.\n",
    "\n",
    "Given a splitting attribute $x_j$, the Gini index of the split on this attribute is\n",
    "\n",
    "$Gini(D,x_j)=\\sum_{l=1}^{k}\\frac{|D^l|}{|D|}Gini(D^l)$, Eq. (2.5)\n",
    "\n",
    "where $k$ is the number of possible values of attribute $x_j$; splitting on the attribute produces $k$ branch nodes, and the $l$-th branch contains the sample subset $D^l$."
   ]
  },
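  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of the criteria above, the short sketch below computes the entropy, information gain, gain ratio, and Gini index by hand for a hypothetical dataset of 5 samples (3 positive, 2 negative) split by a made-up binary attribute into subsets with class counts (2, 0) and (1, 2); all counts are assumptions chosen only for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from math import log\n",
    "\n",
    "def ent(counts):\n",
    "    #entropy of a class-count list, as in Eq. (2.1)\n",
    "    n = sum(counts)\n",
    "    return -sum(c/n * log(c/n, 2) for c in counts if c > 0)\n",
    "\n",
    "#hypothetical dataset D: 3 positive and 2 negative samples\n",
    "ent_D = ent([3, 2])\n",
    "#a made-up attribute splits D into subsets with class counts (2,0) and (1,2)\n",
    "subsets = [[2, 0], [1, 2]]\n",
    "n = 5\n",
    "gain = ent_D - sum(sum(s)/n * ent(s) for s in subsets)  #Eq. (2.2)\n",
    "iv = ent([sum(s) for s in subsets])                     #IV in Eq. (2.3)\n",
    "gain_ratio = gain / iv                                  #Eq. (2.3)\n",
    "gini_D = 1 - sum((c/n)**2 for c in [3, 2])              #Eq. (2.4)\n",
    "print(ent_D, gain, gain_ratio, gini_D)"
   ]
  },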
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.4 Handling Continuous Attributes}$\n",
    "\n",
    "When an attribute takes discrete values, splitting on it always yields a finite number of branch nodes. When the attribute is continuous, however, its values range over the real line, so the number of possible branches is no longer finite and nodes cannot be split directly on the attribute's values. Decision tree models typically discretize continuous attributes by bisection, although other methods, such as data binning, can also be used.\n",
    "\n",
    "This section describes how to handle continuous attributes by bisection. Given a sample set $D$ and a continuous attribute $x_j$, suppose $x_j$ takes $n$ distinct values on $D$; sort them in ascending order as $\\{x_{j1},x_{j2},\\ldots,x_{jn}\\}$. For each interval $[x_{jk},x_{j,k+1})$ we take its midpoint $(x_{jk}+x_{j,k+1})/2$ as a candidate split point, forming a candidate set $T_{x_j}$ with $n-1$ elements. These split points can then be examined just like discrete attribute values, and the optimal one is selected according to Eq. (2.6), where $D_{t}^{0}$ is the set of samples whose value of $x_j$ is at most the split point $t$ and $D_{t}^{1}$ is the set of samples whose value of $x_j$ exceeds $t$. \n",
    "\n",
    "$Gain(D,x_j)=\\underset{t \\in T_{x_j}}\\max Gain(D,x_j,t)=\\underset{t \\in T_{x_j}}\\max \\left(Ent(D)-\\sum_{\\lambda \\in \\{0,1\\}}\\frac{|D_{t}^{\\lambda}|}{|D|}Ent(D_{t}^{\\lambda})\\right)$. Eq. (2.6)"
   ]
  },
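  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The candidate split points of the bisection method can be sketched in a couple of lines; the attribute values below are made up for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#hypothetical values of a continuous attribute on D\n",
    "values = sorted(set([0.697, 0.774, 0.634, 0.608, 0.556]))\n",
    "#midpoints of adjacent values form the candidate set described in Section 2.4\n",
    "candidates = [(values[k] + values[k+1]) / 2 for k in range(len(values) - 1)]\n",
    "print(candidates)"
   ]
  },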
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.5 Implementing the Decision Tree Model}$\n",
    "\n",
    "A key question when implementing a decision tree model is how to represent the tree structure effectively. In Python, nested dictionaries can be used to represent the structure of a decision tree. A simple example is shown in the figure below.\n",
    "\n",
    "<img src=picture/3.5.png>"
   ]
  },
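  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, a small tree could be written as the nested dictionary below (the attribute names and class labels are hypothetical): each key of an inner dictionary is an attribute value leading to a subtree, and a plain string value is a leaf holding the predicted class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#a decision tree stored as nested dictionaries (hypothetical attributes and classes)\n",
    "tree = {\"texture\": {\"clear\": {\"density<0.38\": {\"Y\": \"bad\", \"N\": \"good\"}},\n",
    "                    \"blurry\": \"bad\"}}\n",
    "print(tree[\"texture\"][\"blurry\"])"
   ]
  },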
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The implementation of the decision tree model, referred to in this chapter as C4.5, is given below. Note that it selects the optimal splitting attribute by information gain (strictly speaking the ID3 criterion; standard C4.5 refines this with the gain ratio) and handles continuous attributes by bisection."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from math import log \n",
    "import operator\n",
    "import numpy as np\n",
    "\n",
    "class DecisionTree:\n",
    "    #compute the information entropy of a dataset\n",
    "    def cal_entropy(self,data):\n",
    "        \"\"\"\n",
    "        Parameter data: the dataset\n",
    "        Returns: the information entropy of the dataset\n",
    "        \"\"\"\n",
    "        inst_num = len(data)#number of samples in the dataset\n",
    "        labelcounts = {}#occurrence counts of each class\n",
    "        for featVect in data:\n",
    "            #the last column of a feature vector is the class label\n",
    "            current_label = featVect[-1]\n",
    "            if current_label not in labelcounts.keys():\n",
    "                #initialize the label count to 0\n",
    "                labelcounts[current_label]=0\n",
    "            #count the occurrences of each class\n",
    "            labelcounts[current_label] += 1 \n",
    "        entropy = 0.0\n",
    "        for key in labelcounts:\n",
    "            #frequency of each class\n",
    "            prob = float(labelcounts[key])/inst_num \n",
    "            #accumulate the entropy, as in Eq. (2.1)\n",
    "            entropy -= prob* log(prob,2)\n",
    "        return entropy\n",
    "    #split the dataset according to a splitting attribute and its value\n",
    "    def split_data(self,data,feature_index,\n",
    "                   feature_value,feature_type):\n",
    "        \"\"\"       \n",
    "        Parameter data: the dataset before the split\n",
    "        Parameter feature_index: the index of the splitting attribute\n",
    "        Parameter feature_value: the value of the splitting attribute\n",
    "        Parameter feature_type: the type of the splitting attribute\n",
    "        Returns splitedData: the data subset after the split\n",
    "        \"\"\"\n",
    "        splitedData = []#holds the data subset after the split\n",
    "        if feature_type == \"D\":#discrete attribute\n",
    "            for featVect in data:\n",
    "                if featVect[feature_index] == feature_value:\n",
    "                    reducedFev = featVect[:feature_index]\n",
    "                    reducedFev.extend(featVect[feature_index+1:])\n",
    "                    splitedData.append(reducedFev)\n",
    "        if feature_type == \"L\":#continuous attribute, \"<=\" branch\n",
    "            for featVect in data:\n",
    "                if featVect[feature_index] <= feature_value:\n",
    "                    splitedData.append(featVect)\n",
    "        if feature_type == \"R\":#continuous attribute, \">\" branch\n",
    "            for featVect in data:\n",
    "                if featVect[feature_index] > feature_value:\n",
    "                    splitedData.append(featVect)            \n",
    "        return splitedData    \n",
    "    #choose the optimal splitting attribute\n",
    "    def choose_best_feature_to_split(self,data):\n",
    "        \"\"\" \n",
    "        Parameter data: the current dataset\n",
    "        Returns best_feature: the chosen optimal splitting attribute\n",
    "        Returns best_div_value: the split point of that attribute (continuous case)\n",
    "        \"\"\"\n",
    "        #number of currently available features\n",
    "        feature_num = len(data[0])-1\n",
    "        #entropy of the dataset before the split\n",
    "        baseEnt = self.cal_entropy(data)\n",
    "        #initialize the maximal information gain, the optimal\n",
    "        #splitting attribute and the split point\n",
    "        bestInforGain = 0.0\n",
    "        best_feature = -1\n",
    "        best_div_value = 0\n",
    "        #for every available attribute, compute the information gain\n",
    "        #and select the attribute with the largest gain\n",
    "        for i in range(feature_num):\n",
    "            if isinstance(data[0][i],str):#discrete attribute\n",
    "                #collect all possible values of the discrete attribute\n",
    "                featureList = [example[i] for example in data]\n",
    "                uniqueValues = set(featureList)\n",
    "                newEnt = 0.0\n",
    "                #split the data on each value and accumulate the entropy\n",
    "                for value in uniqueValues:\n",
    "                    #one data subset per discrete value\n",
    "                    subdata = self.split_data(data, i, value,\"D\")\n",
    "                    prob = float(len(subdata))/len(data)\n",
    "                    #entropy of the data subset\n",
    "                    newEnt += prob*self.cal_entropy(subdata)\n",
    "                #information gain of this attribute\n",
    "                inforGain = baseEnt - newEnt\n",
    "                #keep the attribute with the largest information gain\n",
    "                if inforGain > bestInforGain:\n",
    "                    bestInforGain = inforGain\n",
    "                    best_feature = i\n",
    "            else:#continuous attribute\n",
    "                #deduplicate the values and sort them in ascending order\n",
    "                featureList = [example[i] for example in data]\n",
    "                uniqueValues = set(featureList)\n",
    "                sort_uniqueValues = sorted(uniqueValues)\n",
    "                minEnt = np.inf\n",
    "                feat_div_value = 0#best split point of this attribute\n",
    "                #handle the continuous attribute by bisection\n",
    "                for j in range(len(sort_uniqueValues)-1):\n",
    "                    #the midpoint of two adjacent values is a candidate split point\n",
    "                    div_value = (sort_uniqueValues[j]+sort_uniqueValues[j+1])/2\n",
    "                    #subset with attribute value <= split point\n",
    "                    subdata_left = self.split_data(data, i, div_value, \"L\")\n",
    "                    #subset with attribute value > split point\n",
    "                    subdata_right = self.split_data(data, i, div_value, \"R\")\n",
    "                    prob_left = float(len(subdata_left))/len(data)\n",
    "                    prob_right = float(len(subdata_right))/len(data)\n",
    "                    #the weighted entropy term of Eq. (2.6)\n",
    "                    ent = prob_left*self.cal_entropy(subdata_left)+\\\n",
    "                        prob_right*self.cal_entropy(subdata_right)\n",
    "                    if ent < minEnt:\n",
    "                        minEnt = ent\n",
    "                        feat_div_value = div_value\n",
    "                #information gain of this attribute\n",
    "                inforGain = baseEnt - minEnt\n",
    "                #keep the attribute with the largest information gain,\n",
    "                #together with that attribute's own best split point\n",
    "                if inforGain > bestInforGain:\n",
    "                    bestInforGain = inforGain\n",
    "                    best_feature = i\n",
    "                    best_div_value = feat_div_value\n",
    "                \n",
    "        return best_feature,best_div_value\n",
    "    #find the class that occurs most often\n",
    "    def majorityCount(self,classList):\n",
    "        \"\"\"\n",
    "        Parameter classList: the list of class labels of the dataset\n",
    "        Returns: the most frequent class\n",
    "        \"\"\"\n",
    "        classcount = {}\n",
    "        for vote in classList:\n",
    "            if vote not in classcount.keys():\n",
    "                classcount[vote] = 0\n",
    "            classcount[vote] += 1\n",
    "        sortedclasscount = sorted(classcount.items(),\n",
    "                              key=operator.itemgetter(1),reverse=True)\n",
    "        return sortedclasscount[0][0]\n",
    "    #build the decision tree\n",
    "    def create_decision_tree(self,data,labels):\n",
    "        \"\"\"\n",
    "        Parameter data: the training dataset\n",
    "        Parameter labels: the list of attribute names\n",
    "        Returns model_Tree: the decision tree model as a dictionary\n",
    "        \"\"\"\n",
    "        #list of the class labels of all samples\n",
    "        classList = [example[-1] for example in data]\n",
    "        #if all samples share one class, stop splitting and return that class\n",
    "        if classList.count(classList[0]) == len(classList):\n",
    "            return classList[0]\n",
    "        #if all features are used up, return the most frequent class\n",
    "        if len(data[0]) == 1: \n",
    "            return self.majorityCount(classList)\n",
    "        #get the best splitting feature bestFeat and its split point best_div_value\n",
    "        bestFeat,best_div_value= self.choose_best_feature_to_split(data) \n",
    "        if isinstance(data[0][bestFeat],str):#discrete attribute\n",
    "            bestFeatLabel = labels[bestFeat] #name of the feature\n",
    "            #create a tree node\n",
    "            model_Tree = {bestFeatLabel:{}} \n",
    "            del(labels[bestFeat])#remove the used attribute\n",
    "            #all possible values of the discrete attribute\n",
    "            featValues = [example[bestFeat] for example in data]\n",
    "            uniqueVals = set(featValues)\n",
    "            #create one child node per value and build the subtree recursively\n",
    "            for value in uniqueVals:\n",
    "                subLabels = labels[:]          \n",
    "                model_Tree[bestFeatLabel][value] = \\\n",
    "                    self.create_decision_tree(self.split_data(data, \n",
    "                                                              bestFeat, \n",
    "                                                              value,\"D\"),\n",
    "                                              subLabels)\n",
    "        else:#continuous attribute\n",
    "            bestFeatLabel = labels[bestFeat]+\"<\"+str(best_div_value)\n",
    "            model_Tree ={bestFeatLabel:{}}#create a tree node\n",
    "            #pass copies of labels, since the recursion may modify the list\n",
    "            #recursively build the left subtree\n",
    "            model_Tree[bestFeatLabel][\"Y\"]= \\\n",
    "                self.create_decision_tree(self.split_data(data, \n",
    "                                                          bestFeat, \n",
    "                                                          best_div_value, \"L\"),\n",
    "                                          labels[:])\n",
    "            #recursively build the right subtree\n",
    "            model_Tree[bestFeatLabel][\"N\"]=\\\n",
    "                self.create_decision_tree(self.split_data(data, \n",
    "                                                          bestFeat, \n",
    "                                                          best_div_value, \"R\"),\n",
    "                                          labels[:])\n",
    "        return model_Tree\n",
    "    #prediction function of the decision tree\n",
    "    def predict(self,tree_model,feature_names,test_vect):    \n",
    "        \"\"\"\n",
    "        Parameter tree_model: the decision tree model stored as a dictionary\n",
    "        Parameter feature_names: the names of the features\n",
    "        Parameter test_vect: the feature vector of the test sample\n",
    "        Returns classLabel: the predicted class label\n",
    "        \"\"\"\n",
    "        firstStr = list(tree_model.keys())[0]#the root node\n",
    "        lessIndex = str(firstStr).find('<') #look for a \"<\" in the root node\n",
    "        #the splitting attribute is continuous; firstStr has the form \"feat_name<div_value\"\n",
    "        if lessIndex > -1:\n",
    "            secondDict = tree_model[firstStr]#children of the root node\n",
    "            feat_name = str(firstStr)[:lessIndex] #name of the continuous attribute\n",
    "            featIndex = feature_names.index(feat_name)\n",
    "            div_value = float(str(firstStr)[lessIndex+1:])\n",
    "            #descend into the left subtree and classify recursively\n",
    "            if test_vect[featIndex] <= div_value:\n",
    "                if isinstance(secondDict[\"Y\"], dict):#internal node: recurse\n",
    "                    classLabel = self.predict(secondDict[\"Y\"], \n",
    "                                                    feature_names, test_vect)\n",
    "                else:#leaf node: return its class label directly\n",
    "                    classLabel = secondDict[\"Y\"]\n",
    "            else:#descend into the right subtree and classify recursively\n",
    "                if isinstance(secondDict[\"N\"], dict):#internal node: recurse\n",
    "                    classLabel = self.predict(secondDict[\"N\"], \n",
    "                                                    feature_names, test_vect)\n",
    "                else:#leaf node: return its class label directly\n",
    "                    classLabel = secondDict[\"N\"]\n",
    "            return classLabel\n",
    "        else:#discrete attribute\n",
    "            secondDict = tree_model[firstStr]#children of the root node\n",
    "            featIndex = feature_names.index(firstStr)\n",
    "            key = test_vect[featIndex]\n",
    "            valueOfFeat = secondDict[key]\n",
    "            if isinstance(valueOfFeat, dict):#internal node: recurse\n",
    "                classLabel = self.predict(valueOfFeat, feature_names, test_vect)\n",
    "            else:#leaf node: return its class label directly\n",
    "                classLabel = valueOfFeat\n",
    "            return classLabel\n"
   ]
  },
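  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal usage sketch of the DecisionTree class is shown below, on a tiny made-up dataset with one discrete and one continuous attribute (all names and values are assumptions for illustration). Note that create_decision_tree modifies the lists it receives, so copies are passed in."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#hypothetical toy data: [age_group, income, label]\n",
    "data = [[\"young\", 20.0, \"no\"],\n",
    "        [\"young\", 35.0, \"yes\"],\n",
    "        [\"old\",   40.0, \"yes\"],\n",
    "        [\"old\",   15.0, \"no\"]]\n",
    "feature_names = [\"age_group\", \"income\"]\n",
    "dt = DecisionTree()\n",
    "#pass copies, since create_decision_tree mutates its arguments\n",
    "model = dt.create_decision_tree([row[:] for row in data], feature_names[:])\n",
    "print(model)\n",
    "print(dt.predict(model, feature_names, [\"young\", 30.0]))"
   ]
  },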
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.6 Practice Tasks}$\n",
    "\n",
    "Task 1: The DecisionTree class implemented in this chapter selects the optimal splitting attribute by information gain. Using the DecisionTree class as a reference, implement CART, a decision tree model that selects the optimal splitting attribute by the Gini index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Write code here to implement the CART decision tree model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Task 2: Banks and other financial institutions often use a customer's personal profile, assets, and similar information to predict whether a borrower will default, supporting pre-loan review, in-loan management, and post-loan default handling. The file \"客户信息及违约表现.csv\" contains 1000 customer records with default outcomes. The feature variables are income, age, gender, historical credit limit, and number of past defaults; the target variable indicates default, labeled 1 for default and 0 otherwise. Using this data, learn decision tree models with both the C4.5 and CART algorithms to build a customer default prediction model, with the following requirements:\n",
    "\n",
    "(1) Load the customer dataset and split it into a training set (70% of the records) and a test set (the remaining 30%);\n",
    "\n",
    "(2) Train decision tree models with the C4.5 algorithm implemented in this chapter and the CART algorithm implemented in Task 1, then evaluate and compare their predictive performance on the test set;\n",
    "\n",
    "(3) Train the decision tree model provided by the sklearn library on the same training set, and compare its prediction accuracy on the test set with that of the decision tree models implemented in this chapter.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Write code here to complete Task 2"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
