{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Decision Tree\n",
    "A decision tree is a basic supervised-learning method for both regression and classification. I once built a decision-tree module during an internship at datakeen in Paris, but at the time I was just copying code and did not really understand the theory behind it. \n",
    "Decision trees can be used for classification and for regression; here we start with classification. Classification is easy to picture: a decision tree is essentially a tree structure, and with a set of well-designed questions the data can be split into classes.\n",
    "\n",
    "\n",
    "A decision tree can be viewed as a set of if-then rules. Converting a tree into if-then rules works like this:\n",
    "\n",
    "- Each path from the root node to a leaf node becomes one rule.\n",
    "\n",
    "- The features tested at the internal nodes along the path form the rule's conditions, and the class label at the leaf is the rule's conclusion.\n",
    "\n",
    "- The paths of a decision tree (equivalently, its if-then rules) have an important property: they are mutually exclusive and exhaustive. Every instance is covered by exactly one path (one rule), where \"covered\" means the instance's features match the features along the path, i.e. the instance satisfies the rule's conditions.\n",
    "\n",
    "## Preparation\n",
    "- First, collect enough data; without it there will not be enough features to build a decision tree with a low error rate.\n",
    "\n",
    "- Even with plenty of features, not knowing which ones to use can also prevent building a tree that classifies well.\n",
    "\n",
    "To address these two problems, building a decision tree takes three steps: \n",
    "- Select good features\n",
    "    - Feature selection means deciding which feature to split the feature space on; the goal is to pick features that actually discriminate between classes in the training data, which makes tree learning more efficient. If splitting on a feature gives results barely better than a random split, the feature has no discriminative power, and experience says dropping it rarely hurts accuracy. So how do we pick the best feature to split on? In general, as splitting proceeds we want the samples in each branch node to belong to the same class as much as possible, i.e. the purity of the nodes should keep increasing.\n",
    "\n",
    "- Generate the tree\n",
    "- Prune it\n",
    "\n",
    "\n",
    "### Feature selection\n",
    "In practice we usually measure impurity. There are several impurity measures, such as entropy, gain ratio, and the Gini index. Here we use entropy, also known as Shannon entropy. Entropy is defined as the expected value of the information content; in information theory and statistics it measures the uncertainty of a random variable.   \n",
    "\n",
    "Self-information and entropy: $l(x_i) = -\\log_2 p(x_i)$    >>    $Entropy = -\\sum_{i=1}^n p(x_i) \\log_2 p(x_i)$     \n",
    "\n",
    "The formula gives the information content of $x_i$, where $p(x_i)$ is the probability of $x_i$. The larger $l(x_i)$ is, the more information $x_i$ carries. This is not entirely intuitive; think of it this way: the more disordered and uncertain a system is, the more information each outcome conveys.\n",
    "\n",
    "#### Where the self-information formula comes from\n",
    "The formula is not obvious at first glance, and the log looks counter-intuitive. After reading a derivation: the log is not there because of 0 and 1, but for convenience of computation and reading.\n",
    "- The lower the probability, the larger the information content, so we need something that moves in the opposite direction; hence the minus sign.\n",
    "- Probabilities combine by multiplication, and with many events (large $n$) the product becomes tiny, so taking $\\log_2$ turns the product into a sum."
   ]
  },
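  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The self-information formula above can be sanity-checked in a few lines. This is a minimal sketch using only the standard `math` module, independent of the rest of the notebook:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "# Self-information l(x) = -log2(p(x)): the rarer the outcome, the more bits.\n",
    "for p in [0.5, 0.25, 0.01]:\n",
    "    print(f\"p={p}: {-math.log2(p):.4f} bits\")\n",
    "\n",
    "# Entropy is the expected self-information; a fair coin carries exactly 1 bit.\n",
    "print(-0.5 * math.log2(0.5) - 0.5 * math.log2(0.5))"
   ]
  },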
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Sample\n",
    "The examples below use the loan-application training data from p. 71 of 统计学习方法 (Statistical Learning Methods): 15 samples with four features, age (young/middle/old), work (0/1), house (0/1) and credit (great/good/general), plus a label result (0/1)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 119,
   "metadata": {},
   "outputs": [],
   "source": [
    "age = ['young'] * 5 + ['middle'] * 5 + ['old'] * 5\n",
    "work = [0, 0 , 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0]\n",
    "house = [0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]\n",
    "credit = ['general'] + ['good'] * 2 + ['general'] * 3 + ['good'] * 2 + ['great'] * 3 + ['good'] * 2 + ['great'] + ['general']\n",
    "result = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0]\n",
    "dataset = {'age': age, 'work': work, 'house': house, 'credit': credit}\n",
    "target = {'result': result}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Entropy in Python\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def get_feat(data):\n",
    "    \"\"\"Count occurrences of each value in data.\"\"\"\n",
    "    feat = {}\n",
    "    for ele in data:\n",
    "        feat[ele] = feat.get(ele, 0) + 1\n",
    "    return feat\n",
    "\n",
    "def entropy(data):\n",
    "    \"\"\"\n",
    "    :param data: list of labels\n",
    "    :return: Shannon entropy of the label distribution\n",
    "    \"\"\"\n",
    "    ent = 0\n",
    "    feat = get_feat(data)\n",
    "    for ele in feat:\n",
    "        prob = feat[ele] / len(data)\n",
    "        ent += -prob * np.log2(prob)\n",
    "    return ent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Result Entropy:  0.9709505944546686\n"
     ]
    }
   ],
   "source": [
    "ent = entropy(target['result'])\n",
    "print(\"Result Entropy: \", ent)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The entropy of the loan decision comes out to 0.971. Next, the information gain of each feature."
   ]
  },
  {
   "attachments": {
    "1c5ce970-4b99-4485-a5c8-bfd17239b5b7.png": {
     "image/png": "iVBORw0KGgoAAAANSUhEUgAAAlIAAABQCAYAAADSiJtRAAAXA0lEQVR4Ae2dv2vbWhvH9WdoNWR4Ax1utgjeJYIONXS4hg41dLiEdwjmHYrpUMxdgulQTIcS3qGYDAV3uOAOBXcoOEvBHQruUHCHgjN08JBBQwYNHb4vR9KRj2T9tGRbSb4Xei1LOj/0OYr09fM85zka+B8JkAAJkAAJkAAJkMBaBLS1SrEQCZAACZAACZAACZAAKKR4E5AACZAACZAACZDAmgQopNYEx2IkQAIkQAIkQAIkQCHFe4AESIAESIAESIAE1iRAIbUmOBYjARIgARIgARIgAQop3gMkQAIkQAIkQAIksCYBCqk1wbEYCZAACZAACZAACVBI8R4gARIgARIgARIggTUJUEitCY7FSIAESIAESIAESIBCivcACZAACZAACZAACaxJgEJqTXAsRgIkQAIkQAIkQAIUUrwHSIAESIAESIAESGBNAhRSa4JjMRIggV0SmGPwHxPGngZN07F/1ED3wnI7dD1G57AGTdNQOzTR/ujt32V32TYJkMCtJUAhdWuHlhdGArefwORUCCkDZ9+D12q9P4bxbIxFcDe/kQAJkEDpBCikSkfKCkmABLZFYPGu4Vieul+UFq/HaB+1MPql7OMmCZAACWyIAIXUhsCyWhIggc0TsD+1HSHV+iDddzYmp3W03s833zhbIAESIAEAFFK8DUiABG4uga9d6JqGg1dT9xq+9WA+GaB0GbUYoql34bWyVV7TFzqa/9BJuVXobIwEchCgkMoBi6eSAAlUjMCvARqaBu10AmCO/p919H9soI9OO12IVlb+uxriWPQh6d+egePTIWbScLZSSfwOEQfWeEchFU+IR0hgtwQopHbLn62TAAkUIjBBRwiYkyGmbxuov5rCLlRfTOEkIeUVkW5G7X4/ZBGzsbjooq5r0PaOMbyUbcwxenaM5sMD1J6PYcPC6ESH9mQYCJKnkJK8+EkC1SRAIVXNcWGvSIAEMhGYoveHmwJBP+picp2pUP6TMgip6asDxyqlO9ax1SaE0BJuSP1kBGGYsj62HOE3eXkA7dHAEU/Wxzb0wzPMlOIUUgoMbpJABQlQSFVwUNglEiCBrAQWGDxyUyC0ZR6prEXznJcqpGQ/NLQ/xdjE7DHajvuvhZFQUtcWrKsxOroOP1j+coDjZ2NHaMnuUUhJEvwkgWoSoJCq5riwVyRAAhkJCKEhckatEX6UsQUAaULKF0kNDGLTLkzQdYSUju5Xt2nHHai3MLpyv8/PG0tR5fWOQir7MPFMEtgFAQqpXVBnmyRAAjeLQJqQ8mYPan/04mf2+WJLg8x7JWbk+WVE/quHPUxD7kkKqZt1q7C3d48AhdTdG3NeMQmQQF4CKUJq/sZMjI9ymvvWw4FjkWpgIAPOL0doH9VQf9pB66SL4Y9VtyCFVN7B4vkksF0CFFLb5c3WSIAEbiKBRCG1wPCJm/7Aj3WKuEaZhd23QEWcE7WLQiqKCveRQHUIUEhVZyzYExIggaoSSBJSvstOsTStXIeF4V+u2Dp4mS+tJ4XUCkzuIIFKEaCQqtRwsDMkQAKCgG+9cVxhrgBJTHiZ+7w6+j9zsE4SUtJllxQf9bOPuuij3sQwNhg9uj8UUtFcuJcEqkKAQqoqI8F+kAAJLAmIwGuRwFIKJL2NcSgIe3lyzNZvG5ZlYfFjgtG7M7Qf7zt5nGSduSxDCUJKxkdpz0RSzaj/bIyf6dA0Hc13+RevoZCKYsp9JFAdAhRS1RkL9oQESEAhYH/pwpBCSiSyLGMNvaspBk9NV1DpHYyjlY/SC28zVkilxUfZmL1pQtf0tbOuU0itDgf3kECVCFBIVWk02BcSIAGFgI3pC2NplVrToqNU6G9an7swdQ3H7zNmn4oTUtcjtByxZ4ZchTasHyP0HtWg6Sba7/NbomRnKaQkCX6SQDUJUEhVc1y21yuRWfmoif7PrD/Nt
9c1wMbk1ETzzSzGZbLNvlS8rdLHcY7BExOdTxmFxsbwzL3M5dLNZ6D3rZzGnCVbkuKa1GbCQkrmjVIsZtJl6H7q2D86Rvt8jEXBP60bK6RKvyfVAYneFlZM80kfs7xu4OjquJcEMhGgkMqEKcdJvy3MPp6h9chATT5k9+povZ6462t96jhZmAs+W3N0KOHU6wm6h3rMkhbLWUbBF4R8oYnPGoy/uhh+3+TLdo7+g/ViSxKuvPCh2WvVUqIyidpOynZduCvApsbRiVMy0P2y47v1coCmGi91WNaaehYmbzrof8kwBmEhlaFIWadkFVJ34Z6cf2jj+HEdB3uuW9b60IKuNTFcLGnP39TLcQMvq+QWCSQSoJBKxJPv4OKih8aeBv1eE72PM1jy/WMvMHpuwHjUcGI+Gu+Uv/p8TQCXQxyLNv4chFaYz1uRjcnfB/4CqrGl/andYdcFYP8ao/tQBNHWcPzP+q6L2LblgW89GFrOWVay7EY/l+urRY7p1RRnzjpwXUwK9WOB4YkJ87m72G2wqs2Oo/OiWifQO9jJwt/cF+ZSpBqnk+1aKW+AkHIh3+J70hqhJTK/fxaJTb0fJ9YIbd3A2Xf1Fpuid6ih/maDzyS1OW7feQIUUqXcAhYmL9wAVuP5CIvfUZXKVeoPCrkm7M8d1DQd5ouCLxJnOnaGvsip3XqMGJCzq5T1wqKuvtg+1zqmn0QJiWI1Fyrti8x4jkIAaPf7xUTvzz5MYd38O0KObXwc3fvWeJUv91EhrpGFhZtXtQIqC/1Gnl/yzpsipG7zPenNwhw/15UfgHMM/mpjHDKKW++PoW30mVTy/cXqbjQBCqnCw7d8wOsnQyTZmiZ/izwyMYKkcD/yVGBDPIyyvOD9fD6JU7tdS0FSVuc8vYs611nctWpWqTiRadtLa8nnDrQnyfdF1PWq++QYrPLdzjg60/vzzHBTO1/mtuPCXFqlRE4mf6mVMtuJquumCKnbfk86QlER0T/7aET9wPIEZf2cVqmo25n7yiVAIVWQpxOwKqwFGX79iFiH+FwzBTuSp7g9RkfXcJBqZRD5b9wXV6TrymvTuS4xPf3FBq0WwqyvaTArZK6Pzh8kXCu6vyitEEHFuFgYnYgxWHWtYlvj6L2c25+krzrPzVbyuTKxpYw/fFTUxZ2xf4shmno3fkHijNWsc5pY2Lj5T9JPtGWtt/6edIL8pQVYPJ/q6H2Lui+9v5ui1uAlWm6RQCwBCqlYNFkOuL54EYydJbmf9WOM6a/VP3rr+wDthwaMIxP79xo4+zhC/+UZxkoGZPtHH8dHJoy9Ghqvp0uLx7WIwzGwf6+G/cdnmFoWJq9bqB+ZMA9rqB220A8vhOrNOFq1cISveYquE+QrH1zh4+L7UmxppxGup6gict/vBcYvj2EcGjAP92E+HWD0/gy9d8r1yXPhuUajfn3652xzQwocDarItC5E7qN2dH4isUDtAwP7ezXUn40wt2YYnjZhirES4/ewi/GVdw3ixS3FQuBTCazd1jh6IjZdeG+H//ydyMskxKX7z3gRdb9spy/VauUu3JNzjJ6aqD1so3PSQvd9/Ize6asDaFoLo5Dbr1pjxt7cBgIUUkVG8UvXe6BHWAsy1uu+FAx0P3t/7f4vbkW8/Bqiuee6MRz3oDJLZfrKgPm/GSBcSJqG2p6BYz978gQd8bJ5EIzRcV1FOrpfUzopY3MS3ZEy9ktD420OM7rnpnGSLDoxZTL7swYtUix5iQ93ZBVYJeWxVV7o8sUe3X/xktPR+mhj/rbhZLmu3asvx/1ygIaw6j0PZceWrpoI1+r2xtET1AVdlKsM190j7gUxyUGKqQrMLFz3Ukotd5fuyXRwi3+azt9Z6nMuvSqeQQKJBCikEvEkH3RfiOJhvl7ck/3VzdysB16SE3SdF8TSqjF9eQDX1++JFl/YTNDV3dls0qQf/HXuPVhDuXLcX2rpU/LdB1GcsPHYOLEjgoEi/JKxiZXUvBdhcCaej
AWKc9+5LsQsvzBtjP82Yd7P+e9ljgB+KXD8sQDwe4az+zHuR8HJiTOSFryQu8YTUmERJpmoVi+Jd3vj6M0EC91Hsh87+byeoPOHFFLCtb7GEjI76fgGG71T92QGjl+6jthOt7xnqIunkEACAQqpBDhph2RskPZosBpkHuea8S0qcpryMp7GaU9agRSrjG1ZsH8D9kXHsYAt3YhiLTHhKpR1hVaf9+oKu2TcfqcJKfnCT7Y0ObNjhPDL85KVlrwAN9levCDL1u+0USvnuC+ilXEC5hj8qVr6hNu1jZFw1zkzjsRYCfErBEAnkBLBDabXEHzoJzPJxkPWUWQc5f2V/IPB/tTJJ14f9zErMByrS8gUC+ov0JVKFL1b92QG5J6QivoRkqE0TyGBzAQopDKjWj1RWoEihZR/+gxn/3Z/OQeWo/CF1tLyJIo40+Ujg6q9GVoxQcdtIWZ8keY2Pj+vu6btUMLBbC9g6bKLFzZCOPQfiGsLWVf8a4/eEMGzwi0TtDzJ9oI81Bqy9VstsaltywsAD4sTC7OLqZ/J2hG+AbEIQArlgJtMjm1Y3HouNdXqpVxSNh6Sa5FxzCaklK5tbVO4tqWLr5S1+JSey3qr9ql0Udm8a/ekculxmxRScWS4v2QCFFJFgErLSsyLzqnaC9TVRAK5S6Ux7488ODVeWg+8l55tLZN6Xg1xLMSSF+/kWKlkdbIfARehJ3Jk365dq5YoksklJF/4cYHTIsz8U9uxkOV7gcmXsmq5UQSGZ+Gx/Wym8iIBVzhs1rW3bC1pS1qVQtcQKOJeZ0A8C9vhOxEfFRKR3uw7Kcj9sZXuPn9cLcy/zf2JBtsbR2/M8lgdAyw2+OX3BN1/adBKy3a+wb5utOq7dk9mgEnXXgZIPKUMAhRShSgqFhk/wDtYof2x5f5iDr+EZDxDYKabfBgKq4wQVUu3n3ShOS9mIaqUvD7SMhYwYXv1u25AYZVYxiNlCcJMi48SswjFsh26yDQcsa6V/WuG2dXqDEVA/nIOWl8CAkOIuLAlx4mrEtavZPdSkP6GvsmxSxCZi/ct6MoYuT2RQnk5rmJ/3NhK66QcV0e4/jV0lhoS5bYxjm6/PcvYypi4R3f3fy+HmxBRd31m1p27J9PvOvkM64Ys8ukleQYJ5CNAIZWP1+rZlyO0DsUL3kDrfOK7deCsuddFfa+GmoiJ8a0KsgovdYJ8Mf4Wi8R6M5HE8i8hseS6w4RVy315LGfIebPZQi4/+RDpfAbEzMCa2r730A3G48h+iU8bo/+KawpZTsSRqxlGLxtudvWnQ8yjsrj7D/WleFNrd5f7WLqarM9u0L1oT/R3GVyvlvJcVAGXmHp8e9tS9GlRy/TYC0zOW85SQMtYNtk3mU4i6L4Mj61c2kKOofMisCboHoUSUG56HGW3PatqsXxYsrLyPudvm9C3mZSzvK6XXtOduyczEHQttkq6kAxleAoJrEOAQmodauEyIh/S6xYahzU/XqN2uFz53f7aQztqfT0nr5DI9WTCPGqid7GAWJTT1GvYP6yjc6H8zBZr7IlcQ/cM1E/HvlUCcF/OelhgXE/Qe1BD7Z4B4z+h1dC9rL/hIHRZV1JMiH7PxPGzPsYR+bB8LCJdgxNQHcyx5B+HjenrJvb39p0cSvVnA8x+icDsGvR7+zAeRQQhey/z3WUqtjD6rzrl3hWa8azC638JE5KbGyo4sxLu+olRY2tLJgaMB20MwvnANj2OcsAcwaajcxFlYZQnbffTTxuyqwWVxViGYhJLIZCr3jt8T6bCttw4xlDql9RiPIEE1iBAIbUGtJtfJPvSIutfq43Zm0bmjMxp7biz2qItXGllb+/xbYwj4LiOV9yUu6PqztbT0Yxxp2+lZ0lLxsh4RieNSYzgFvneToeYKb+VnH4n1buVCyvayHbuydRecomYVEQ8oTwCFFLlsbxZNX3rwciV+ynv5YmFhpfuu7ylg+d7ixar7sngCXf328bH0XWpSnfjzkFfD
tDUddRf7TibeQbBI1NarK5paWNx0UVdWG33jjFUJ6FkqHfnY5DWgY3fk2kd8OIOmVssHRTPKIUAhVQpGG9iJW4wux7Ig1TidYiHaVlmdefBHJr1WGJXb3ZVmx1HJ57tjw4mERMKts7Ni0c0TnMkTk3spI3F1yHG39dwWWYQPG6MjgY9MKFk2SF/1qv6N5ih3mUNVd3a7D2ZftVu/OkyjjS9BM8ggSIEKKSK0LvpZZ1lWnSUvxjtDGcPGuj/LAOQmBm5YzdOGZexyTo2NY7XY7T1iiy/oi4pVBZLkV5iXZdlquCRaT60+L8vz/0UWA8utd6yLn7D9WzqnszQ7fmbOvKlZMlQKU8hgQQCFFIJcO7EoasxOkdN9H+u8at844DEDEUTzTfxC5NuvAs3pYHSx1HMIjXR+RQO4tkFEG9Ga5m5osQsyMOItQ2zXl6a4PFFUjDNR7B6uRyUko8srd5gBdX+Vvo9mX65In7OfBKaXJNejGeQQCECFFKF8LEwCZDAZglYmJwabsLNkjTd4qKHxp4IAg/m88p1HWmC56u3oHk4f5zaiC+2NPi5jtLqVctzmwRIoBIEKKQqMQzsBAmQwCoBL+Fm0VxRYp3DyymG52007ykpLJJEzmpngntSBI9MkhsXH+VU5udbU+L/UuoNdoLfSIAEqkCAQqoKo8A+kAAJrBBwc0WJmW1GvsWQ75vu+Uf7zhJGcbm+CuUkSxQ8MklueBHq4CX6STRVQZdYb7A8v5EACVSDAIVUNcaBvSABElAI2F+X2e7jhFCx/QVzkiUJHt9lp1ialGtzN92UHuIaAhnwk+pdqYM7SIAEqkCAQqoKo8A+kAAJ3CwCSYJHuuxUS1P46n72URcJO/Umhr+Ug0n1KqdxkwRIoDoEKKSqMxbsCQmQwE0hkCB4ZHzU6vqa8uLcPEsi2H0lO3tCvbI0P0mABKpFgEKqWuPB3pAACdwEArGCJy0+Siyd1ISuxWRnj633JkBhH0ngbhKgkLqb486rJoG7SWAxRv9/YyyKXn2c4LkeoeWssWeGEtLasH6M0HtUg6abaL+fR/cgrt7os7mXBEigAgQopCowCOwCCZDAZgksPrRhPqzDFPmjkmKXsnYjLHhk3qjYhYp17B8do30+xiIp92243qz94XkkQAI7I0AhtTP0bJgESGDbBCanGxJSZV0IhVRZJFkPCWyNAIXU1lCzIRIggV0ToJDa9QiwfRK4fQQopG7fmPKKSODWEJh/aOP4cR0Hex2MbcD60IKuNTFcM8iJQurW3Bq8EBKoDAEKqcoMBTtCAiQQIGCN0HrYw/RzDweat/ivNUJbN3D23cLkvIfey5R/ocByCqkAYX4hARIogQCFVAkQWQUJkMAGCIg18iwL4+c69JMR3DWL5xj81cZ4zQWMSxNSiyGaehfTsi97U/WW3U/WRwIk4BOgkPJRcIMESKByBJzlVnS0PnjK6WcfDUdUZbRIvZ1CnSRXmpCqHCh2iARIYFcEKKR2RZ7tkgAJpBNw0gocoPdNnCoygtfR+6ZKo/Qq1DMopFQa3CYBEiiDAIVUGRRZBwmQwIYIzDF6aqL2sI3OSQvd97OAhSlzo1/PYN43YYg8UloNxn0TzfNZ5uI8kQRIgATiCFBIxZHhfhIgARIgARIgARJIIUAhlQKIh0mABEiABEiABEggjgCFVBwZ7icBEiABEiABEiCBFAIUUimAeJgESIAESIAESIAE4ghQSMWR4X4SIAESIAESIAESSCFAIZUCiIdJgARIgARIgARIII4AhVQcGe4nARIgARIgARIggRQCFFIpgHiYBEiABEiABEiABOIIUEjFkeF+EiABEiABEiABEkghQCGVAoiHSYAESIAESIAESCCOAIVUHBnuJwESIAESIAESIIEUAhRSKYB4mARIgARIgARIgATiCPwf9kDp3l6nUZ4AAAAASUVORK5CYII="
    }
   },
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Information gain\n",
    "![image.png](attachment:1c5ce970-4b99-4485-a5c8-bfd17239b5b7.png)     \n",
    "\n",
    "$Gain(D, A) = H(D) - H(D|A)$   \n",
    "where the conditional entropy is     \n",
    "    $H(D|A) = \\sum_{i=1}^n p_i H(D|A=a_i)$\n",
    "i.e. the expectation, over the values of A, of the entropy of D conditioned on A. In plain terms: the entropy we expect D to still have once the value of A is known.   \n",
    "Information gain is therefore **how much knowing feature A reduces the uncertainty about the label**.\n",
    "\n",
    "Here A is the attribute used to split the dataset, taking V distinct discrete values (note: the discussion of information gain, gain ratio, and the Gini index here covers discrete features only). The formula says: for attribute A, split D into V subsets according to the value of A, then subtract from the entropy of D the weighted sum of the subset entropies, the weight of each subset being its share of the samples. The ID3 algorithm picks the attribute A that maximizes this gain.\n",
    "Why does that make sense? Entropy measures impurity, and we want each subset after the split to be as pure as possible, i.e. the subtracted term as small as possible. Since H(D) is fixed, minimizing the subtracted term is exactly maximizing Gain(D, A).\n",
    "\n",
    "In the loan example we already have the empirical entropy H(D). Next we compute each feature's information gain on D, writing $A_1, A_2, A_3, A_4$ for age, work, house, credit:      \n",
    "$g(D, A_1) = H(D) - [5/15\\, H(D_1) + 5/15\\, H(D_2) + 5/15\\, H(D_3)] = 0.971 - [1/3(-2/5 \\log_{2}{2/5} - 3/5 \\log_{2}{3/5}) + ...]$     \n",
    "\n",
    "where $H(D_1) = -2/5 \\log_{2}{2/5} - 3/5 \\log_{2}{3/5}$: among the 5 young applicants, 2 got the loan and 3 did not."
   ]
  },
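  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before writing the general function, $g(D, A_1)$ can be checked by hand. A self-contained sketch, with the counts read off the table above (young: 2 approved / 3 denied, middle: 3/2, old: 4/1, overall: 9/6):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def h(pos, neg):\n",
    "    \"\"\"Entropy of a binary split with pos/neg counts.\"\"\"\n",
    "    n = pos + neg\n",
    "    ent = 0.0\n",
    "    for k in (pos, neg):\n",
    "        if k:\n",
    "            ent -= k / n * math.log2(k / n)\n",
    "    return ent\n",
    "\n",
    "hd = h(9, 6)                                            # H(D): 9 approved, 6 denied\n",
    "cond = (5 * h(2, 3) + 5 * h(3, 2) + 5 * h(4, 1)) / 15   # H(D|age)\n",
    "print(hd - cond)  # ~0.083, matching g(D, A_1)"
   ]
  },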
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Gains:  {'age': 0.08300749985576883, 'work': 0.32365019815155627, 'house': 0.4199730940219749, 'credit': 0.36298956253708536}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'age': 0.05237190142858302,\n",
       " 'work': 0.3524465495205019,\n",
       " 'house': 0.4325380677663126,\n",
       " 'credit': 0.23185388128724224}"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def get_condition_feat(data, target):\n",
    "    \"\"\"Group the target labels by feature value.\"\"\"\n",
    "    feat = {}\n",
    "    for i in range(len(data)):\n",
    "        if feat.get(data[i]):\n",
    "            feat[data[i]].append(target[i])\n",
    "        else:\n",
    "            feat[data[i]] = [target[i]]\n",
    "    return feat\n",
    "\n",
    "def gain(data, target):\n",
    "    \"\"\"Information gain of the feature column `data` w.r.t. the labels `target`.\"\"\"\n",
    "    if len(data) != len(target):\n",
    "        raise ValueError(\"Data and target lengths differ!\")\n",
    "    for ele in target:\n",
    "        if ele not in [0, 1]:\n",
    "            raise ValueError(\"Target value is not 0 or 1!\")\n",
    "    ent = entropy(target)\n",
    "    feat = get_feat(data)\n",
    "    condFeat = get_condition_feat(data, target)\n",
    "    condEnt = 0\n",
    "    for ele in feat:\n",
    "        condP = feat[ele] / len(data)\n",
    "        condEnt += condP * entropy(condFeat[ele])\n",
    "    return ent - condEnt\n",
    "\n",
    "gains = {}\n",
    "for ele in dataset:\n",
    "    gains[ele] = gain(dataset[ele], target['result'])\n",
    "print(\"Gains: \", gains)\n",
    "\n",
    "def gain_ratio(dataset, gains):\n",
    "    \"\"\"Gain ratio: information gain divided by the feature's own entropy (split info).\"\"\"\n",
    "    gainRatio = {}\n",
    "    for ele in gains:\n",
    "        gainRatio[ele] = gains[ele] / entropy(dataset[ele])\n",
    "    return gainRatio\n",
    "\n",
    "gain_ratio(dataset, gains)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So house (whether the applicant owns a house) is the best feature to split on first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 121,
   "metadata": {},
   "outputs": [],
   "source": [
    "class DTree:\n",
    "    \"\"\"A simple ID3 decision-tree node.\"\"\"\n",
    "    def __init__(self, colname='', gain=0):\n",
    "        self.colname = colname  # feature this node splits on\n",
    "        self.gain = gain        # information gain of the split\n",
    "        self.choice = {}        # feature value -> child DTree\n",
    "        self.parent = ''\n",
    "\n",
    "    def tostr(self):\n",
    "        print(\"DTree(Feature:\", self.colname, \", Gain/Ent:\", str(self.gain), \", choice:\", str(self.choice.keys()), \" parent:\", self.parent)\n",
    "        for ele in self.choice:\n",
    "            print(\"Choice \" + str(ele) + \" :\")\n",
    "            self.choice[ele].tostr()\n",
    "\n",
    "def dict_slice(dataset, indexes):\n",
    "    \"\"\"Keep only the rows at the given indexes, for every column.\"\"\"\n",
    "    return {ele: [dataset[ele][x] for x in indexes] for ele in dataset}\n",
    "\n",
    "def feat_index(data):\n",
    "    \"\"\"Map each feature value to the list of row indexes where it occurs.\"\"\"\n",
    "    res = {}\n",
    "    for index, ele in enumerate(data):\n",
    "        if res.get(ele):\n",
    "            res[ele].append(index)\n",
    "        else:\n",
    "            res[ele] = [index]\n",
    "    return res\n",
    "\n",
    "def classify_by_feat(dataset, target, colname):\n",
    "    \"\"\"Split dataset and target into sub-dicts, one per value of colname.\"\"\"\n",
    "    res = {}\n",
    "    targetRes = {}\n",
    "    featIndex = feat_index(dataset[colname])\n",
    "    for ele in featIndex:\n",
    "        res[ele] = dict_slice(dataset, featIndex[ele])\n",
    "        targetRes[ele] = dict_slice(target, featIndex[ele])\n",
    "    return res, targetRes\n",
    "\n",
    "def gen_dtree(dataset, target, parent):\n",
    "    \"\"\"Recursively build an ID3 tree: split on the max-gain feature until gain is 0.\"\"\"\n",
    "    tree = DTree()\n",
    "    tree.parent = parent\n",
    "    if len(dataset) == 0:\n",
    "        return tree\n",
    "    for ele in target:\n",
    "        targetName = ele\n",
    "        targetVal = target[ele]\n",
    "    gains = {}\n",
    "    maxCol = ''\n",
    "    maxGain = 0\n",
    "    for ele in dataset:\n",
    "        gains[ele] = gain(dataset[ele], targetVal)\n",
    "        if gains[ele] > maxGain:\n",
    "            maxGain = gains[ele]\n",
    "            maxCol = ele\n",
    "    if maxGain <= 0:  # pure node or no informative feature: stop recursing\n",
    "        return tree\n",
    "    tree.colname = maxCol\n",
    "    tree.gain = maxGain\n",
    "    print(\"Max Gains in \" + targetName, maxGain, maxCol)\n",
    "    subData, subTarget = classify_by_feat(dataset, target, maxCol)\n",
    "    for ele in subData:\n",
    "        print(\"Gen tree: \" + str(ele) + \" for feature \" + maxCol)\n",
    "        tree.choice[ele] = gen_dtree(subData[ele], subTarget[ele], maxCol)\n",
    "    return tree"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 122,
   "metadata": {},
   "outputs": [],
   "source": [
    "subDataset = {}\n",
    "subDataset['age'] = dataset['age']\n",
    "subDataset['house'] = dataset['house']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 123,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Max Gains in result 0.4199730940219749 house\n",
      "Gen tree: 0 for feature house\n",
      "Max Gains in result 0.9182958340544896 work\n",
      "Gen tree: 0 for feature work\n",
      "Gen tree: 1 for feature work\n",
      "Gen tree: 1 for feature house\n"
     ]
    }
   ],
   "source": [
    "tree = gen_dtree(dataset, target, 'root')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 118,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "DTree(Feature: house , Gain/Ent: 0.4199730940219749 , choice: dict_keys([0, 1])  parent: root\n",
      "Choice 0 :\n",
      "DTree(Feature: work , Gain/Ent: 0.9182958340544896 , choice: dict_keys([0, 1])  parent: house\n",
      "Choice 0 :\n",
      "DTree(Feature:  , Gain/Ent: 0 , choice: dict_keys([])  parent: work\n",
      "Choice 1 :\n",
      "DTree(Feature:  , Gain/Ent: 0 , choice: dict_keys([])  parent: work\n",
      "Choice 1 :\n",
      "DTree(Feature:  , Gain/Ent: 0 , choice: dict_keys([])  parent: house\n"
     ]
    }
   ],
   "source": [
    "dummy = tree\n",
    "dummy.tostr()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And that is it, although the leaves do not record the final decision (loan or not); still, the resulting tree matches the textbook. Seen this way, decision trees are easy to understand; it is only the recursive implementation that takes a little care. \n",
    "Compute the empirical entropy, use it to pick the split and compute the conditional entropies, and recurse until no choices remain or every node has entropy 0."
   ]
  },
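  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The missing piece, reading a prediction off the tree, can be sketched as follows. Since the leaves above store no class label, this hypothetical `predict` helper works on a plain nested-dict encoding of the learned tree (house, then work), with the leaf labels filled in from the training table by hand; it is a sketch of the lookup, not part of the ID3 code above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hand-encoded version of the learned tree: split on house, then on work.\n",
    "# Leaves are plain labels (1 = loan approved, 0 = denied), read off the table.\n",
    "learned = {'house': {1: 1, 0: {'work': {1: 1, 0: 0}}}}\n",
    "\n",
    "def predict(tree, sample):\n",
    "    \"\"\"Walk the nested-dict tree with a sample until a leaf label is reached.\"\"\"\n",
    "    while isinstance(tree, dict):\n",
    "        feature = next(iter(tree))  # the feature this node splits on\n",
    "        tree = tree[feature][sample[feature]]\n",
    "    return tree\n",
    "\n",
    "print(predict(learned, {'house': 0, 'work': 1}))  # no house but has a job -> 1"
   ]
  },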
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
