{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "## 决策树\n",
    "\n",
    "> 理论 《统计学习方法》第5章 决策树\n",
    ">\n",
    "> 代码 numpy version && torch version\n",
    ">\n",
    "> Python3.7"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "### 模型\n",
    "\n",
    "决策树由节点、有向边组成。\n",
    "\n",
    "内部节点表示一个特征或属性，叶节点表示类\n",
    "\n",
    "使用决策树分类时，从根节点开始对实例的某一特征进行测试，根据测试结果将其分配给子节点，如此递归的进行测试并分配，直到达到叶节点，也就可以将该实例分到叶节点对应的类中。\n",
    "\n",
    "#### 决策树学习\n",
    "\n",
    "决策树学习本质上是从训练数据集中归纳出一组分类规则。\n",
    "\n",
    "决策树学习用损失函数表示这一目标。且损失函数通常是正则化的极大似然函数。\n",
    "\n",
    "构造的决策树可能会出现过拟合，需要对已生成的树自下而上的进行剪枝，从而有更好的泛化能力。\n",
    "\n",
    "决策树的学习算法包括了：特征选择/决策树生成/剪枝。\n",
    "\n",
    "常用的算法有ID3，C4.5，CART"
   ],
   "metadata": {
    "collapsed": false
   }
  },
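  {
   "cell_type": "markdown",
   "source": [
    "The classification procedure above can be sketched in a few lines. This is an illustrative sketch, not the book's algorithm: it assumes a tree stored as a nested dict `{feature_index: {feature_value: subtree}}` with plain class labels as leaves; `classify` and the toy tree below are made-up names for the example."
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "def classify(tree, sample):\n",
    "    # A leaf stores a plain class label\n",
    "    if not isinstance(tree, dict):\n",
    "        return tree\n",
    "    featIndex = next(iter(tree))                  # feature tested at this internal node\n",
    "    subtree = tree[featIndex][sample[featIndex]]  # follow the edge matching the feature value\n",
    "    return classify(subtree, sample)\n",
    "\n",
    "# Toy tree: test feature 0; if it equals 1, then test feature 1\n",
    "toyTree = {0: {0: 'no', 1: {1: {0: 'no', 1: 'yes'}}}}\n",
    "print(classify(toyTree, [1, 1]))  # yes\n",
    "print(classify(toyTree, [0, 1]))  # no"
   ],
   "metadata": {
    "collapsed": false
   }
  },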
  {
   "cell_type": "markdown",
   "source": [
    "### 特征选择\n",
    "\n",
    "#### 熵entropy\n",
    "\n",
    "定义为信息的期望值,表示随机变量不确定性的度量\n",
    "\n",
    "$$ H = - \\Sigma p(x_i)\\log_2 p(x_i) $$\n",
    "\n",
    "熵只依赖于X的分布，和X的取值无关。\n",
    "\n",
    "熵越大，随机变量X的不确定性就越大。\n",
    "\n",
    "条件熵$H(Y|X)$表示在已知随机变量X的条件下随机变量Y的不确定性。\n",
    "\n",
    "定义为X给定情况下，Y的条件概率分布的熵对X的期望值。\n",
    "$$H(Y|X) = \\Sigma P(X=x_i) H(Y|X=x_i)$$\n",
    "\n",
    "#### 信息增益information gain\n",
    "\n",
    "特征A对训练数据集D的信息增益$g(D,A)$定义为集合D的经验熵$H(D)$与特征A给定条件下D的条件经验熵$H(D|A)$之差。\n",
    "\n",
    "$$g(D,A) = H(D) - H(D|A) $$\n",
    "\n",
    "信息增益表示已知特征X的信息而使得类Y的信息的不确定性减少的程度。\n",
    "\n"
   ],
   "metadata": {
    "collapsed": false
   }
  },
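  {
   "cell_type": "markdown",
   "source": [
    "As a quick worked example (numbers made up for illustration): for a set $D$ of 10 samples, 6 positive and 4 negative,\n",
    "\n",
    "$$H(D) = -\\tfrac{6}{10}\\log_2\\tfrac{6}{10} - \\tfrac{4}{10}\\log_2\\tfrac{4}{10} \\approx 0.971$$\n",
    "\n",
    "If a binary feature $A$ splits $D$ into one subset of 5 all-positive samples and another of 1 positive and 4 negatives, then\n",
    "\n",
    "$$H(D|A) = \\tfrac{5}{10}\\cdot 0 + \\tfrac{5}{10}\\left(-\\tfrac{1}{5}\\log_2\\tfrac{1}{5} - \\tfrac{4}{5}\\log_2\\tfrac{4}{5}\\right) \\approx 0.361$$\n",
    "\n",
    "so $g(D,A) = H(D) - H(D|A) \\approx 0.610$: knowing $A$ removes most of the uncertainty about the class."
   ],
   "metadata": {
    "collapsed": false
   }
  },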
  {
   "cell_type": "code",
   "execution_count": 1,
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import torch\n",
    "from math import log\n",
    "from sklearn.datasets import make_blobs\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "from sklearn.model_selection import train_test_split"
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "# 计算熵\n",
    "def calcEntropy(dataset):\n",
    "    size = len(dataset)\n",
    "    labelCounts = {}\n",
    "    for data in dataset:\n",
    "        curLabel = data[-1] # dataset最后一个表示类别\n",
    "        if curLabel not in labelCounts.keys():\n",
    "            labelCounts[curLabel] = 0\n",
    "        labelCounts[curLabel] += 1\n",
    "    entropy = 0.0\n",
    "    for key,value in labelCounts:\n",
    "        prob = value / size\n",
    "        entropy -= prob * log(prob,2)\n",
    "    return entropy"
   ],
   "metadata": {
    "collapsed": false
   }
  }
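  ,
  {
   "cell_type": "markdown",
   "source": [
    "Building on the definitions above, a minimal sketch of the empirical conditional entropy $H(D|A)$ and information gain $g(D,A)$. The helper names `calcCondEntropy` / `infoGain` and the toy dataset are made up for illustration."
   ],
   "metadata": {
    "collapsed": false
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "from math import log\n",
    "\n",
    "def _entropy(labels):\n",
    "    size = len(labels)\n",
    "    return -sum(c / size * log(c / size, 2) for c in Counter(labels).values())\n",
    "\n",
    "# H(D|A): split the dataset on feature `axis` and weight each subset's entropy\n",
    "def calcCondEntropy(dataset, axis):\n",
    "    subsets = {}\n",
    "    for data in dataset:\n",
    "        subsets.setdefault(data[axis], []).append(data[-1])\n",
    "    return sum(len(sub) / len(dataset) * _entropy(sub) for sub in subsets.values())\n",
    "\n",
    "# g(D,A) = H(D) - H(D|A)\n",
    "def infoGain(dataset, axis):\n",
    "    return _entropy([data[-1] for data in dataset]) - calcCondEntropy(dataset, axis)\n",
    "\n",
    "# Toy dataset: [feature0, feature1, class]\n",
    "toyData = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]\n",
    "print(infoGain(toyData, 0), infoGain(toyData, 1))"
   ],
   "metadata": {
    "collapsed": false
   }
  }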
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
