{
 "nbformat": 4,
 "nbformat_minor": 2,
 "metadata": {
  "language_info": {
   "name": "python",
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   }
  },
  "orig_nbformat": 2,
  "file_extension": ".py",
  "mimetype": "text/x-python",
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
  "version": "3"
 },
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Decision trees:\n",
    "+ Well suited to analyzing discrete data.\n",
    "+ Continuous data should be discretized before analysis."
   ]
  },
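  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of that discretization step, `np.digitize` maps continuous values to bin indices (the values and bin edges here are made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Continuous values -> discrete bin indices (bin edges are arbitrary here)\n",
    "values = np.array([3.2, 17.5, 42.0, 61.8])\n",
    "bins = np.array([10, 30, 50])  # edges separating 4 bins\n",
    "print(np.digitize(values, bins))  # [0 1 2 3]"
   ]
  },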
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn import tree"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the data\n",
    "data = np.genfromtxt(\"data.csv\", delimiter=\",\")\n",
    "x_data = data[:, 0, np.newaxis]\n",
    "y_data = data[:, 1, np.newaxis]\n",
    "plt.scatter(x_data, y_data)\n",
    "plt.show()\n",
    "\n",
    "# Fit a depth-limited regression tree\n",
    "model = tree.DecisionTreeRegressor(max_depth=5)\n",
    "model.fit(x_data, y_data)\n",
    "x_test = np.linspace(20, 80, 100)\n",
    "x_test = x_test[:, np.newaxis]\n",
    "\n",
    "# Plot the fitted step function over the data\n",
    "plt.plot(x_data, y_data, 'b.')\n",
    "plt.plot(x_test, model.predict(x_test), 'r')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import graphviz  # http://www.graphviz.org/\n",
    "\n",
    "# Export the regression tree; it has a single feature (x) and,\n",
    "# being a regressor, no class names\n",
    "dot_data = tree.export_graphviz(model,\n",
    "                                out_file=None,\n",
    "                                feature_names=['x'],\n",
    "                                filled=True,\n",
    "                                rounded=True,\n",
    "                                special_characters=True)\n",
    "graph = graphviz.Source(dot_data)\n",
    "graph"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Information entropy:\n",
    "+ The amount of information carried by a message is directly related to its uncertainty.\n",
    "+ $H[x] = -\\sum\\limits_{x}p(x)\\log_2 p(x)$"
   ]
  },
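  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numerical check of the entropy formula above (the probability vectors are made-up examples):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def entropy(p):\n",
    "    # H = -sum_x p(x) * log2(p(x)), skipping zero-probability terms\n",
    "    p = np.asarray(p, dtype=float)\n",
    "    p = p[p > 0]\n",
    "    return -np.sum(p * np.log2(p))\n",
    "\n",
    "print(entropy([0.5, 0.5]))  # 1.0: a fair coin carries one bit"
   ]
  },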
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "ID3 algorithm: the decision tree splits a node on the attribute that maximizes the information gain.<br>\n",
    "Information gain:\n",
    "+ $Info(D)=-\\sum\\limits_{i=1}^{m}p_i\\log_2(p_i)$\n",
    "+ $Info_A(D)=\\sum\\limits_{j=1}^{v}\\frac{\\left | D_j\\right |}{\\left | D\\right |}\\,Info(D_j)$\n",
    "+ $Gain(A) = Info(D) - Info_A(D)$"
   ]
  },
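  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The gain formulas above can be sketched directly in NumPy (the four-sample dataset is made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def info(labels):\n",
    "    # Info(D) = -sum_i p_i * log2(p_i) over the class distribution of D\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    p = counts / counts.sum()\n",
    "    return -np.sum(p * np.log2(p))\n",
    "\n",
    "def gain(feature, labels):\n",
    "    # Gain(A) = Info(D) - Info_A(D), where Info_A(D) weights each\n",
    "    # subset D_j by |D_j| / |D|\n",
    "    feature, labels = np.asarray(feature), np.asarray(labels)\n",
    "    info_a = sum((feature == v).mean() * info(labels[feature == v])\n",
    "                 for v in np.unique(feature))\n",
    "    return info(labels) - info_a\n",
    "\n",
    "# made-up example: a binary feature that perfectly separates the labels\n",
    "print(gain(['a', 'a', 'b', 'b'], [0, 0, 1, 1]))  # 1.0"
   ]
  },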
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_extraction import DictVectorizer\n",
    "from sklearn import tree\n",
    "from sklearn import preprocessing\n",
    "\n",
    "# Sample records (placeholders; the original featureList/labelList\n",
    "# were presumably built from a loan-application dataset)\n",
    "featureList = [\n",
    "    {'house': 'yes', 'married': 'single', 'income': 'high'},\n",
    "    {'house': 'no', 'married': 'married', 'income': 'middle'},\n",
    "    {'house': 'no', 'married': 'single', 'income': 'low'},\n",
    "    {'house': 'yes', 'married': 'divorced', 'income': 'high'},\n",
    "]\n",
    "labelList = ['no', 'no', 'yes', 'no']\n",
    "\n",
    "# One-hot encode the categorical features\n",
    "vec = DictVectorizer()\n",
    "x_data = vec.fit_transform(featureList).toarray()\n",
    "print(\"x_data: \" + str(x_data))\n",
    "print(vec.get_feature_names_out())  # encoded feature names\n",
    "print(\"labelList: \" + str(labelList))  # labels\n",
    "\n",
    "# Binarize the labels\n",
    "lb = preprocessing.LabelBinarizer()\n",
    "y_data = lb.fit_transform(labelList)\n",
    "print(\"y_data: \" + str(y_data))\n",
    "\n",
    "# Build an ID3-style tree (entropy criterion)\n",
    "model = tree.DecisionTreeClassifier(criterion='entropy')\n",
    "model.fit(x_data, y_data)\n",
    "\n",
    "# Predict on one training sample as a sanity check\n",
    "x_test = x_data[0]\n",
    "print(\"x_test: \" + str(x_test))\n",
    "predict = model.predict(x_test.reshape(1, -1))\n",
    "print(\"predict: \" + str(predict))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "C4.5 algorithm:\n",
    "+ Information gain is biased toward attributes with many distinct values.\n",
    "+ C4.5 corrects this with the gain ratio:\n",
    "+ $SplitInfo_A(D)=-\\sum\\limits_{j=1}^{v}\\frac{\\left | D_j\\right |}{\\left | D\\right |}\\log_2\\left(\\frac{\\left | D_j\\right |}{\\left | D\\right |}\\right)$\n",
    "+ $GainRate(A)=\\frac{Gain(A)}{SplitInfo_A(D)}$"
   ]
  },
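  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The SplitInfo term can be sketched in NumPy; a feature with many distinct values gets a larger SplitInfo, which is exactly what shrinks its gain ratio (both feature columns below are made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def split_info(feature):\n",
    "    # SplitInfo_A(D) = -sum_j (|D_j|/|D|) * log2(|D_j|/|D|)\n",
    "    _, counts = np.unique(feature, return_counts=True)\n",
    "    p = counts / counts.sum()\n",
    "    return -np.sum(p * np.log2(p))\n",
    "\n",
    "print(split_info(['a', 'a', 'b', 'b']))  # 1.0\n",
    "print(split_info(['a', 'b', 'c', 'd']))  # 2.0: many values are penalized"
   ]
  },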
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "CART algorithm:\n",
    "+ CART builds a binary decision tree recursively.\n",
    "+ CART selects features by the Gini-index minimization criterion, producing a binary tree.\n",
    "\n",
    "\n",
    "Gini index:\n",
    "+ $Gini(D)=1-\\sum\\limits_{i=1}^{m}p_i^2$\n",
    "+ $Gini_A(D)=\\frac{\\left | D_1\\right |}{\\left | D\\right |}Gini(D_1)+\\frac{\\left | D_2\\right |}{\\left | D\\right |}Gini(D_2)$\n",
    "+ $\\Delta Gini(A)=Gini(D)-Gini_A(D)$"
   ]
  },
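  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick check of the Gini formula above on made-up label vectors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def gini(labels):\n",
    "    # Gini(D) = 1 - sum_i p_i^2\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    p = counts / counts.sum()\n",
    "    return 1.0 - np.sum(p ** 2)\n",
    "\n",
    "print(gini([0, 0, 1, 1]))  # 0.5: a maximally impure binary node\n",
    "print(gini([0, 0, 0, 0]))  # 0.0: a pure node"
   ]
  },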
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build a CART tree (the default Gini criterion)\n",
    "model = tree.DecisionTreeClassifier()\n",
    "# Fit it on the encoded data\n",
    "model.fit(x_data, y_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import graphviz  # http://www.graphviz.org/\n",
    "\n",
    "dot_data = tree.export_graphviz(model,\n",
    "                                out_file=None,\n",
    "                                feature_names=vec.get_feature_names_out(),\n",
    "                                class_names=['no', 'yes'],\n",
    "                                filled=True,\n",
    "                                rounded=True,\n",
    "                                special_characters=True)\n",
    "graph = graphviz.Source(dot_data)\n",
    "graph.render('cart')"
   ]
  }
 ]
}