{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 第三题：实现决策树"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "实验内容：  \n",
    "使用LendingClub Safe Loans数据集：\n",
    "1. 实现信息增益、信息增益率、基尼指数三种划分标准\n",
    "2. 使用给定的训练集完成三种决策树的训练过程\n",
    "3. 计算三种决策树在最大深度为10时在训练集和测试集上的精度，查准率，查全率，F1值"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在这部分，我们会实现一个很简单的二叉决策树"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. 读取数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入类库\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入数据\n",
    "loans = pd.read_csv('data/lendingclub/lending-club-data.csv', low_memory=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "数据中有两列是我们想预测的指标，一项是safe_loans，一项是bad_loans，分别表示正例和负例，我们对其进行处理，将正例的safe_loans设为1，负例设为-1，删除bad_loans这列"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 对数据进行预处理，将safe_loans作为标记\n",
    "loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)\n",
    "del loans['bad_loans']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们只使用grade, term, home_ownership, emp_length这四列作为特征，safe_loans作为标记，只保留loans中的这五列"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "features = ['grade',              # grade of the loan\n",
    "            'term',               # the term of the loan\n",
    "            'home_ownership',     # home_ownership status: own, mortgage or rent\n",
    "            'emp_length',         # number of years of employment\n",
    "           ]\n",
    "target = 'safe_loans'\n",
    "loans = loans[features + [target]]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "查看前五行数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loans.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. 划分训练集和测试集"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.utils import shuffle\n",
    "loans = shuffle(loans, random_state = 34)\n",
    "\n",
    "split_line = int(len(loans) * 0.6)\n",
    "train_data = loans.iloc[: split_line]\n",
    "test_data = loans.iloc[split_line:]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. 特征预处理"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以看到所有的特征都是离散类型的特征，需要对数据进行预处理，使用one-hot编码对其进行处理。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "one-hot编码的思想就是将离散特征变成向量，假设特征$A$有三种取值$\\{a, b, c\\}$，这三种取值等价，如果我们使用1,2,3三个数字表示这三种取值，那么在计算时就会产生偏差，有一些涉及距离度量的算法会认为，2和1离得近，3和1离得远，但这三个值应该是等价的，这种表示方法会造成模型在判断上出现偏差。解决方案就是使用一个三维向量表示他们，用$[1, 0, 0]$表示a，$[0, 1, 0]$表示b，$[0, 0, 1]$表示c，这样三个向量之间的距离就都是相等的了，任意两个向量在欧式空间的距离都是$\\sqrt{2}$。这就是one-hot编码是思想。"
   ]
  },
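  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of the idea above (using a hypothetical toy column, not the loans data), pd.get_dummies maps each distinct value of a categorical column to its own 0/1 column:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy example: one-hot encode a three-valued categorical column 'A'\n",
    "toy = pd.DataFrame({'A': ['a', 'b', 'c', 'a']})\n",
    "pd.get_dummies(toy['A'], prefix = 'A')  # produces columns A_a, A_b, A_c"
   ]
  },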
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "pandas中使用get_dummies生成one-hot向量"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def one_hot_encoding(data, features_categorical):\n",
    "    '''\n",
    "    Parameter\n",
    "    ----------\n",
    "    data: pd.DataFrame\n",
    "    \n",
    "    features_categorical: list(str)\n",
    "    '''\n",
    "    \n",
    "    # 对所有的离散特征遍历\n",
    "    for cat in features_categorical:\n",
    "        \n",
    "        # 对这列进行one-hot编码，前缀为这个变量名\n",
    "        one_encoding = pd.get_dummies(data[cat], prefix = cat)\n",
    "        \n",
    "        # 将生成的one-hot编码与之前的dataframe拼接起来\n",
    "        data = pd.concat([data, one_encoding],axis=1)\n",
    "        \n",
    "        # 删除掉原始的这列离散特征\n",
    "        del data[cat]\n",
    "    \n",
    "    return data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "首先对训练集生成one-hot向量，然后对测试集生成one-hot向量，这里需要注意的是，如果训练集中，特征$A$的取值为$\\{a, b, c\\}$，这样我们生成的特征就有三列，分别为$A\\_a$, $A\\_b$, $A\\_c$，然后我们使用这个训练集训练模型，模型就就会考虑这三个特征，在测试集中如果有一个样本的特征$A$的值为$d$，那它的$A\\_a$，$A\\_b$，$A\\_c$就都为0，我们不去考虑$A\\_d$，因为这个特征在训练模型的时候是不存在的。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_data = one_hot_encoding(train_data, features)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "获取所有特征的名字"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "one_hot_features = train_data.columns.tolist()\n",
    "one_hot_features.remove(target)\n",
    "one_hot_features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来是对测试集进行one_hot编码，但只要保留出现在one_hot_features中的特征即可·"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_data_tmp = one_hot_encoding(test_data, features)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建一个空的DataFrame\n",
    "test_data = pd.DataFrame(columns = train_data.columns)\n",
    "for feature in train_data.columns:\n",
    "    # 如果训练集中当前特征在test_data_tmp中出现了，将其复制到test_data中\n",
    "    if feature in test_data_tmp.columns:\n",
    "        test_data[feature] = test_data_tmp[feature].copy()\n",
    "    else:\n",
    "        # 否则就用全为0的列去替代\n",
    "        test_data[feature] = np.zeros(test_data_tmp.shape[0], dtype = 'uint8')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_data.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_data.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**处理完后，所有的特征都是0和1，标记是1和-1**，以上就是数据预处理流程"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. 实现3种特征划分准则"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "决策树中有很多常用的特征划分方法，比如信息增益、信息增益率、基尼指数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们需要实现一个函数，它的作用是，给定决策树的某个结点内的所有样本的标记，让它计算出对应划分指标的值是多少"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来我们会实现上述三种划分指标"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**这里我们约定，将所有特征取值为0的样本，划分到左子树，特征取值为1的样本，划分到右子树**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.1 信息增益\n",
    "信息熵：\n",
    "$$\n",
    "\\mathrm{Ent}(D) = - \\sum^{\\vert \\mathcal{Y} \\vert}_{k = 1} p_k \\mathrm{log}_2 p_k\n",
    "$$\n",
    "\n",
    "信息增益：\n",
    "$$\n",
    "\\mathrm{Gain}(D, a) = \\mathrm{Ent}(D) - \\sum^{V}_{v=1} \\frac{\\vert D^v \\vert}{\\vert D \\vert} \\mathrm{Ent}(D^v)\n",
    "$$\n",
    "\n",
    "计算信息熵时约定：若$p = 0$，则$p \\log_2p = 0$"
   ]
  },
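  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the entropy formula (worked by hand, independently of the function you will write below): for the labels [-1, -1, 1, 1, 1] we have $p_+ = 3/5$ and $p_- = 2/5$, so $\\mathrm{Ent}(D) = -(0.6 \\log_2 0.6 + 0.4 \\log_2 0.4) \\approx 0.97095$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Worked example of the entropy formula for p_+ = 0.6, p_- = 0.4\n",
    "p_pos, p_neg = 3 / 5, 2 / 5\n",
    "print(- (p_pos * np.log2(p_pos) + p_neg * np.log2(p_neg)))  # approximately 0.97095"
   ]
  },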
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**下面的函数需要填写两个部分**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def information_entropy(labels_in_node):\n",
    "    '''\n",
    "    求当前结点的信息熵\n",
    "    \n",
    "    Parameter\n",
    "    ----------\n",
    "    labels_in_node: np.ndarray, 如[-1, 1, -1, 1, 1]\n",
    "    \n",
    "    Returns\n",
    "    ----------\n",
    "    float: information entropy\n",
    "    '''\n",
    "    # 统计样本总个数\n",
    "    num_of_samples = labels_in_node.shape[0]\n",
    "    \n",
    "    if num_of_samples == 0:\n",
    "        return 0\n",
    "    \n",
    "    # 统计出标记为1的个数\n",
    "    num_of_positive = len(labels_in_node[labels_in_node == 1])\n",
    "    \n",
    "    # 统计出标记为-1的个数\n",
    "    num_of_negative =                                                                     # YOUR CODE HERE\n",
    "    \n",
    "    # 统计正例的概率\n",
    "    prob_positive = num_of_positive / num_of_samples\n",
    "    \n",
    "    # 统计负例的概率\n",
    "    prob_negative =                                                                       # YOUR CODE HERE\n",
    "    \n",
    "    if prob_positive == 0:\n",
    "        positive_part = 0\n",
    "    else:\n",
    "        positive_part = prob_positive * np.log2(prob_positive)\n",
    "    \n",
    "    if prob_negative == 0:\n",
    "        negative_part = 0\n",
    "    else:\n",
    "        negative_part = prob_negative * np.log2(prob_negative)\n",
    "    \n",
    "    return - ( positive_part + negative_part )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面是6个测试样例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 信息熵测试样例1\n",
    "example_labels = np.array([-1, -1, 1, 1, 1])\n",
    "print(information_entropy(example_labels)) # 0.97095\n",
    "\n",
    "# 信息熵测试样例2\n",
    "example_labels = np.array([-1, -1, 1, 1, 1, 1, 1])\n",
    "print(information_entropy(example_labels)) # 0.86312\n",
    "    \n",
    "# 信息熵测试样例3\n",
    "example_labels = np.array([-1, -1, -1, -1, -1, 1, 1])\n",
    "print(information_entropy(example_labels)) # 0.86312\n",
    "\n",
    "# 信息熵测试样例4\n",
    "example_labels = np.array([-1] * 9 + [1] * 8)\n",
    "print(information_entropy(example_labels)) # 0.99750\n",
    "\n",
    "# 信息熵测试样例5\n",
    "example_labels = np.array([1] * 8)\n",
    "print(information_entropy(example_labels)) # 0\n",
    "\n",
    "# 信息熵测试样例6\n",
    "example_labels = np.array([])\n",
    "print(information_entropy(example_labels)) # 0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来完成计算所有特征的信息增益的函数  \n",
    "**需要填写三个部分**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def compute_information_gains(data, features, target, annotate = False):\n",
    "    '''\n",
    "    计算所有特征的信息增益\n",
    "    \n",
    "    Parameter\n",
    "    ----------\n",
    "        data: pd.DataFrame，传入的样本，带有特征和标记的dataframe\n",
    "        \n",
    "        features: list(str)，特征名组成的list\n",
    "        \n",
    "        target: str, 标记(label)的名字\n",
    "        \n",
    "        annotate, boolean，是否打印所有特征的信息增益值，默认为False\n",
    "        \n",
    "    Returns\n",
    "    ----------\n",
    "        information_gains: dict, key: str, 特征名\n",
    "                                 value: float，信息增益\n",
    "    '''\n",
    "    \n",
    "    # 我们将每个特征划分的信息增益值存储在一个dict中\n",
    "    # 键是特征名，值是信息增益值\n",
    "    information_gains = dict()\n",
    "    \n",
    "    # 对所有的特征进行遍历，使用信息增益对每个特征进行计算\n",
    "    for feature in features:\n",
    "        \n",
    "        # 左子树保证所有的样本的这个特征取值为0\n",
    "        left_split_target = data[data[feature] == 0][target]\n",
    "        \n",
    "        # 右子树保证所有的样本的这个特征取值为1\n",
    "        right_split_target =  data[data[feature] == 1][target]\n",
    "            \n",
    "        # 计算左子树的信息熵\n",
    "        left_entropy = information_entropy(left_split_target)\n",
    "        \n",
    "        # 计算左子树的权重\n",
    "        left_weight = len(left_split_target) / (len(left_split_target) + len(right_split_target))\n",
    "\n",
    "        # 计算右子树的信息熵\n",
    "        right_entropy =                                                                 # YOUR CODE HERE\n",
    "        \n",
    "        # 计算右子树的权重\n",
    "        right_weight =                                                                  # YOUR CODE HERE\n",
    "        \n",
    "        # 计算当前结点的信息熵\n",
    "        current_entropy = information_entropy(data[target])\n",
    "            \n",
    "        # 计算使用当前特征划分的信息增益\n",
    "        gain =                                                                          # YOUR CODE HERE\n",
    "        \n",
    "        # 将特征名与增益值以键值对的形式存储在information_gains中\n",
    "        information_gains[feature] = gain\n",
    "        \n",
    "        if annotate:\n",
    "            print(\" \", feature, gain)\n",
    "            \n",
    "    return information_gains"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 信息增益测试样例1\n",
    "print(compute_information_gains(train_data, one_hot_features, target)['grade_A']) # 0.01759\n",
    "\n",
    "# 信息增益测试样例2\n",
    "print(compute_information_gains(train_data, one_hot_features, target)['term_ 60 months']) # 0.01429\n",
    "\n",
    "# 信息增益测试样例3\n",
    "print(compute_information_gains(train_data, one_hot_features, target)['grade_B']) # 0.00370"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.2 信息增益率\n",
    "信息增益率：\n",
    "\n",
    "$$\n",
    "\\mathrm{Gain\\_ratio}(D, a) = \\frac{\\mathrm{Gain}(D, a)}{\\mathrm{IV}(a)}\n",
    "$$\n",
    "\n",
    "其中\n",
    "\n",
    "$$\n",
    "\\mathrm{IV}(a) = - \\sum^V_{v=1} \\frac{\\vert D^v \\vert}{\\vert D \\vert} \\log_2 \\frac{\\vert D^v \\vert}{\\vert D \\vert}\n",
    "$$"
   ]
  },
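  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small worked example of the IV term (with hypothetical weights, for illustration only): a binary split sending 70% of the samples to one side and 30% to the other has $\\mathrm{IV}(a) = -(0.7 \\log_2 0.7 + 0.3 \\log_2 0.3) \\approx 0.88129$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Worked example of the IV formula for hypothetical weights 0.7 and 0.3\n",
    "w_left, w_right = 0.7, 0.3\n",
    "print(- (w_left * np.log2(w_left) + w_right * np.log2(w_right)))  # approximately 0.88129"
   ]
  },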
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "完成计算所有特征信息增益率的函数  \n",
    "**这里要完成五个部分**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def compute_information_gain_ratios(data, features, target, annotate = False):\n",
    "    '''\n",
    "    计算所有特征的信息增益率并保存起来\n",
    "    \n",
    "    Parameter\n",
    "    ----------\n",
    "    data: pd.DataFrame, 带有特征和标记的数据\n",
    "    \n",
    "    features: list(str)，特征名组成的list\n",
    "    \n",
    "    target: str， 特征的名字\n",
    "    \n",
    "    annotate: boolean, default False，是否打印注释\n",
    "    \n",
    "    Returns\n",
    "    ----------\n",
    "    gain_ratios: dict, key: str, 特征名\n",
    "                       value: float，信息增益率\n",
    "    '''\n",
    "    \n",
    "    gain_ratios = dict()\n",
    "    \n",
    "    # 对所有的特征进行遍历，使用当前的划分方法对每个特征进行计算\n",
    "    for feature in features:\n",
    "        \n",
    "        # 左子树保证所有的样本的这个特征取值为0\n",
    "        left_split_target = data[data[feature] == 0][target]\n",
    "        \n",
    "        # 右子树保证所有的样本的这个特征取值为1\n",
    "        right_split_target =  data[data[feature] == 1][target]\n",
    "            \n",
    "        # 计算左子树的信息熵\n",
    "        left_entropy = information_entropy(left_split_target)\n",
    "        \n",
    "        # 计算左子树的权重\n",
    "        left_weight = len(left_split_target) / (len(left_split_target) + len(right_split_target))\n",
    "\n",
    "        # 计算右子树的信息熵\n",
    "        right_entropy =                                                                     # YOUR CODE HERE\n",
    "        \n",
    "        # 计算右子树的权重\n",
    "        right_weight =                                                                      # YOUR CODE HERE\n",
    "        \n",
    "        # 计算当前结点的信息熵\n",
    "        current_entropy = information_entropy(data[target])\n",
    "        \n",
    "        # 计算当前结点的信息增益\n",
    "        \n",
    "        gain =                                                                              # YOUR CODE HERE\n",
    "        \n",
    "        # 计算IV公式中，当前特征为0的值\n",
    "        if left_weight == 0:\n",
    "            left_IV = 0\n",
    "        else:\n",
    "            left_IV =                                                                       # YOUR CODE HERE\n",
    "        \n",
    "        # 计算IV公式中，当前特征为1的值\n",
    "        if right_weight == 0:\n",
    "            right_IV = 0\n",
    "        else:\n",
    "            right_IV =                                                                      # YOUR CODE HERE\n",
    "        \n",
    "        # IV 等于所有子树IV之和的相反数\n",
    "        IV = - (left_IV + right_IV)\n",
    "            \n",
    "        # 计算使用当前特征划分的信息增益率\n",
    "        # 这里为了防止IV是0，导致除法得到np.inf（无穷），在分母加了一个很小的小数\n",
    "        gain_ratio = gain / (IV + np.finfo(np.longdouble).eps)\n",
    "        \n",
    "        # 信息增益率的存储\n",
    "        gain_ratios[feature] = gain_ratio\n",
    "        \n",
    "        if annotate:\n",
    "            print(\" \", feature, gain_ratio)\n",
    "            \n",
    "    return gain_ratios"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 信息增益率测试样例1\n",
    "print(compute_information_gain_ratios(train_data, one_hot_features, target)['grade_A']) # 0.02573\n",
    "\n",
    "# 信息增益率测试样例2\n",
    "print(compute_information_gain_ratios(train_data, one_hot_features, target)['grade_B']) # 0.00417\n",
    "\n",
    "# 信息增益率测试样例3\n",
    "print(compute_information_gain_ratios(train_data, one_hot_features, target)['term_ 60 months']) # 0.01970"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.3 基尼指数\n",
    "数据集$D$的基尼值：\n",
    "\n",
    "$$\n",
    "\\begin{aligned}\n",
    "\\mathrm{Gini}(D) & = \\sum^{\\vert \\mathcal{Y} \\vert}_{k=1} \\sum_{k' \\neq k} p_k p_{k'}\\\\\n",
    "& = 1 - \\sum^{\\vert \\mathcal{Y} \\vert}_{k=1} p^2_k.\n",
    "\\end{aligned}\n",
    "$$\n",
    "\n",
    "属性$a$的基尼指数：\n",
    "\n",
    "$$\n",
    "\\mathrm{Gini\\_index}(D, a) = \\sum^V_{v = 1} \\frac{\\vert D^v \\vert}{\\vert D \\vert} \\mathrm{Gini}(D^v)\n",
    "$$"
   ]
  },
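  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the Gini formula (worked by hand): for the labels [-1, -1, 1, 1, 1], $\\mathrm{Gini}(D) = 1 - (0.6^2 + 0.4^2) = 0.48$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Worked example of the Gini value formula for p_+ = 0.6, p_- = 0.4\n",
    "p_pos, p_neg = 3 / 5, 2 / 5\n",
    "print(1 - (p_pos ** 2 + p_neg ** 2))  # 0.48"
   ]
  },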
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "完成数据集基尼值的计算  \n",
    "**这里需要填写三部分**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def gini(labels_in_node):\n",
    "    '''\n",
    "    计算一个结点内样本的基尼指数\n",
    "    \n",
    "    Paramters\n",
    "    ----------\n",
    "    label_in_data: np.ndarray, 样本的标记，如[-1, -1, 1, 1, 1]\n",
    "    \n",
    "    Returns\n",
    "    ---------\n",
    "    gini: float，基尼指数\n",
    "    '''\n",
    "    \n",
    "    # 统计样本总个数\n",
    "    num_of_samples = labels_in_node.shape[0]\n",
    "    \n",
    "    if num_of_samples == 0:\n",
    "        return 0\n",
    "    \n",
    "    # 统计出1的个数\n",
    "    num_of_positive = len(labels_in_node[labels_in_node == 1])\n",
    "    \n",
    "    # 统计出-1的个数\n",
    "    num_of_negative =                                                   # YOUR CODE HERE\n",
    "    \n",
    "    # 统计正例的概率\n",
    "    prob_positive = num_of_positive / num_of_samples\n",
    "    \n",
    "    # 统计负例的概率\n",
    "    prob_negative =                                                     # YOUR CODE HERE\n",
    "    \n",
    "    # 计算基尼值\n",
    "    gini =                                                              # YOUR CODE HERE\n",
    "    \n",
    "    return gini"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 基尼值测试样例1\n",
    "example_labels = np.array([-1, -1, 1, 1, 1])\n",
    "print(gini(example_labels)) # 0.48\n",
    "\n",
    "# 基尼值测试样例2\n",
    "example_labels = np.array([-1, -1, 1, 1, 1, 1, 1])\n",
    "print(gini(example_labels)) # 0.40816\n",
    "    \n",
    "# 基尼值测试样例3\n",
    "example_labels = np.array([-1, -1, -1, -1, -1, 1, 1])\n",
    "print(gini(example_labels)) # 0.40816\n",
    "\n",
    "# 基尼值测试样例4\n",
    "example_labels = np.array([-1] * 9 + [1] * 8)\n",
    "print(gini(example_labels)) # 0.49827\n",
    "\n",
    "# 基尼值测试样例5\n",
    "example_labels = np.array([1] * 8)\n",
    "print(gini(example_labels)) # 0\n",
    "\n",
    "# 基尼值测试样例6\n",
    "example_labels = np.array([])\n",
    "print(gini(example_labels)) # 0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "然后计算所有特征的基尼指数  \n",
    "**这里需要填写三部分**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def compute_gini_indices(data, features, target, annotate = False):\n",
    "    '''\n",
    "    计算使用各个特征进行划分时，各特征的基尼指数\n",
    "    \n",
    "    Parameter\n",
    "    ----------\n",
    "    data: pd.DataFrame, 带有特征和标记的数据\n",
    "    \n",
    "    features: list(str)，特征名组成的list\n",
    "    \n",
    "    target: str， 特征的名字\n",
    "    \n",
    "    annotate: boolean, default False，是否打印注释\n",
    "    \n",
    "    Returns\n",
    "    ----------\n",
    "    gini_indices: dict, key: str, 特征名\n",
    "                       value: float，基尼指数\n",
    "    '''\n",
    "    \n",
    "    gini_indices = dict()\n",
    "    # 对所有的特征进行遍历，使用当前的划分方法对每个特征进行计算\n",
    "    for feature in features:\n",
    "        # 左子树保证所有的样本的这个特征取值为0\n",
    "        left_split_target = data[data[feature] == 0][target]\n",
    "        \n",
    "        # 右子树保证所有的样本的这个特征取值为1\n",
    "        right_split_target =  data[data[feature] == 1][target]\n",
    "            \n",
    "        # 计算左子树的基尼值\n",
    "        left_gini = gini(left_split_target)\n",
    "        \n",
    "        # 计算左子树的权重\n",
    "        left_weight = len(left_split_target) / (len(left_split_target) + len(right_split_target))\n",
    "\n",
    "        # 计算右子树的基尼值\n",
    "        right_gini =                                                               # YOUR CODE HERE\n",
    "        \n",
    "        # 计算右子树的权重\n",
    "        right_weight =                                                             # YOUR CODE HERE\n",
    "        \n",
    "        # 计算当前结点的基尼指数\n",
    "        gini_index =                                                               # YOUR CODE HERE\n",
    "        \n",
    "        # 存储\n",
    "        gini_indices[feature] = gini_index\n",
    "        \n",
    "        if annotate:\n",
    "            print(\" \", feature, gini_index)\n",
    "            \n",
    "    return gini_indices"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 基尼指数测试样例1\n",
    "print(compute_gini_indices(train_data, one_hot_features, target)['grade_A']) # 0.30095\n",
    "\n",
    "# 基尼指数测试样例2\n",
    "print(compute_gini_indices(train_data, one_hot_features, target)['grade_B']) # 0.30568\n",
    "\n",
    "# 基尼指数测试样例3\n",
    "print(compute_gini_indices(train_data, one_hot_features, target)['term_ 36 months']) # 0.30055"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. 完成最优特征的选择 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "到此，我们完成了三种划分策略的实现，接下来就是完成获取最优特征的函数  \n",
    "**这里需要填写三个部分**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def best_splitting_feature(data, features, target, criterion = 'gini', annotate = False):\n",
    "    '''\n",
    "    给定划分方法和数据，找到最优的划分特征\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    data: pd.DataFrame, 带有特征和标记的数据\n",
    "    \n",
    "    features: list(str)，特征名组成的list\n",
    "    \n",
    "    target: str， 特征的名字\n",
    "    \n",
    "    criterion: str, 使用哪种指标，三种选项: 'information_gain', 'gain_ratio', 'gini'\n",
    "    \n",
    "    annotate: boolean, default False，是否打印注释\n",
    "    \n",
    "    Returns\n",
    "    ----------\n",
    "    best_feature: str, 最佳的划分特征的名字\n",
    "    \n",
    "    '''\n",
    "    if criterion == 'information_gain':\n",
    "        if annotate:\n",
    "            print('using information gain')\n",
    "        \n",
    "        # 得到当前所有特征的信息增益\n",
    "        information_gains = compute_information_gains(data, features, target, annotate)\n",
    "    \n",
    "        # information_gains是一个dict类型的对象，我们要找值最大的那个元素的键是谁\n",
    "        # 根据这些特征和他们的信息增益，找到最佳的划分特征\n",
    "        best_feature =                                                                      # YOUR CODE HERE\n",
    "        \n",
    "        return best_feature\n",
    "\n",
    "    elif criterion == 'gain_ratio':\n",
    "        if annotate:\n",
    "            print('using information gain ratio')\n",
    "        \n",
    "        # 得到当前所有特征的信息增益率\n",
    "        gain_ratios = compute_information_gain_ratios(data, features, target, annotate)\n",
    "    \n",
    "        # 根据这些特征和他们的信息增益率，找到最佳的划分特征\n",
    "        best_feature =                                                                      # YOUR CODE HERE\n",
    "\n",
    "        return best_feature\n",
    "    \n",
    "    elif criterion == 'gini':\n",
    "        if annotate:\n",
    "            print('using gini')\n",
    "        \n",
    "        # 得到当前所有特征的基尼指数\n",
    "        gini_indices = compute_gini_indices(data, features, target, annotate)\n",
    "        \n",
    "        # 根据这些特征和他们的基尼指数，找到最佳的划分特征\n",
    "        best_feature =                                                                      # YOUR CODE HERE\n",
    "\n",
    "        return best_feature\n",
    "    else:\n",
    "        raise Exception(\"传入的criterion不合规!\", criterion)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. 判断结点内样本的类别是否为同一类"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**这里需要填写两个部分**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def intermediate_node_num_mistakes(labels_in_node):\n",
    "    '''\n",
    "    求树的结点中，样本数少的那个类的样本有多少，比如输入是[1, 1, -1, -1, 1]，返回2\n",
    "    \n",
    "    Parameter\n",
    "    ----------\n",
    "    labels_in_node: np.ndarray, pd.Series\n",
    "    \n",
    "    Returns\n",
    "    ----------\n",
    "    int：个数\n",
    "    \n",
    "    '''\n",
    "    # 如果传入的array为空，返回0\n",
    "    if len(labels_in_node) == 0:\n",
    "        return 0\n",
    "    \n",
    "    # 统计1的个数\n",
    "    num_of_one =                                                                      # YOUR CODE HERE\n",
    "    \n",
    "    # 统计-1的个数\n",
    "    num_of_minus_one =                                                                # YOUR CODE HERE\n",
    "    \n",
    "    return num_of_one if num_of_minus_one > num_of_one else num_of_minus_one"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 测试样例1\n",
    "print(intermediate_node_num_mistakes(np.array([1, 1, -1, -1, -1]))) # 2\n",
    "\n",
    "# 测试样例2\n",
    "print(intermediate_node_num_mistakes(np.array([]))) # 0\n",
    "\n",
    "# 测试样例3\n",
    "print(intermediate_node_num_mistakes(np.array([1]))) # 0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. 创建叶子结点"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_leaf(target_values):\n",
    "    '''\n",
    "    计算出当前叶子结点的标记是什么，并且将叶子结点信息保存在一个dict中\n",
    "    \n",
    "    Parameter:\n",
    "    ----------\n",
    "    target_values: pd.Series, 当前叶子结点内样本的标记\n",
    "\n",
    "    Returns:\n",
    "    ----------\n",
    "    leaf: dict，表示一个叶结点，\n",
    "            leaf['splitting_features'], None，叶结点不需要划分特征\n",
    "            leaf['left'], None，叶结点没有左子树\n",
    "            leaf['right'], None，叶结点没有右子树\n",
    "            leaf['is_leaf'], True, 是否是叶子结点\n",
    "            leaf['prediction'], int, 表示该叶子结点的预测值\n",
    "    '''\n",
    "    # 创建叶子结点\n",
    "    leaf = {'splitting_feature' : None,\n",
    "            'left' : None,\n",
    "            'right' : None,\n",
    "            'is_leaf': True}\n",
    "   \n",
    "    # 数结点内-1和+1的个数\n",
    "    num_ones = len(target_values[target_values == +1])\n",
    "    num_minus_ones = len(target_values[target_values == -1])    \n",
    "\n",
    "    # 叶子结点的标记使用少数服从多数的原则，为样本数多的那类的标记，保存在 leaf['prediction']\n",
    "    if num_ones > num_minus_ones:\n",
    "        leaf['prediction'] = 1\n",
    "    else:\n",
    "        leaf['prediction'] = -1\n",
    "\n",
    "    # 返回叶子结点\n",
    "    return leaf"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 8. 递归地创建决策树\n",
    "递归的创建决策树  \n",
    "递归算法终止的三个条件：\n",
    "1. 如果结点内所有的样本的标记都相同，该结点就不需要再继续划分，直接做叶子结点即可\n",
    "2. 如果结点所有的特征都已经在之前使用过了，在当前结点无剩余特征可供划分样本，该结点直接做叶子结点\n",
    "3. 如果当前结点的深度已经达到了我们限制的树的最大深度，直接做叶子结点"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**这里需要填写七个部分**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def decision_tree_create(data, features, target, criterion = 'gini', current_depth = 0, max_depth = 10, annotate = False):\n",
    "    '''\n",
    "    Parameter:\n",
    "    ----------\n",
    "    data: pd.DataFrame, 数据\n",
    "\n",
    "    features: iterable, 特征组成的可迭代对象，比如一个list\n",
    "\n",
    "    target: str, 标记的名字\n",
    "\n",
    "    criterion: 'str', 特征划分方法，只支持三种：'information_gain', 'gain_ratio', 'gini'\n",
    "\n",
    "    current_depth: int, 当前深度，递归的时候需要记录\n",
    "\n",
    "    max_depth: int, 树的最大深度，我们设定的树的最大深度，达到最大深度需要终止递归\n",
    "\n",
    "    Returns:\n",
    "    ----------\n",
    "    dict, dict['is_leaf']          : False, 当前顶点不是叶子结点\n",
    "          dict['prediction']       : None, 不是叶子结点就没有预测值\n",
    "          dict['splitting_feature']: splitting_feature, 当前结点是使用哪个特征进行划分的\n",
    "          dict['left']             : dict\n",
    "          dict['right']            : dict\n",
    "    '''\n",
    "    \n",
    "    if criterion not in ['information_gain', 'gain_ratio', 'gini']:\n",
    "        raise Exception(\"传入的criterion不合规!\", criterion)\n",
    "    \n",
    "    # 复制一份特征，存储起来，每使用一个特征进行划分，我们就删除一个\n",
    "    remaining_features = features[:]\n",
    "    \n",
    "    # 取出标记值\n",
    "    target_values = data[target]\n",
    "    print(\"-\" * 50)\n",
    "    print(\"Subtree, depth = %s (%s data points).\" % (current_depth, len(target_values)))\n",
    "\n",
    "    # 终止条件1\n",
    "    # 如果当前结点内所有样本同属一类，即这个结点中，各类别样本数最小的那个等于0\n",
    "    # 使用前面写的intermediate_node_num_mistakes来完成这个判断\n",
    "    if                                                                                  # YOUR CODE HERE\n",
    "        print(\"Stopping condition 1 reached.\")\n",
    "        return create_leaf(target_values)   # 创建叶子结点\n",
    "    \n",
    "    # 终止条件2\n",
    "    # 如果已经没有剩余的特征可供分割，即remaining_features为空\n",
    "    \n",
    "    if                                                                                  # YOUR CODE HERE\n",
    "        print(\"Stopping condition 2 reached.\")\n",
    "        return create_leaf(target_values)   # 创建叶子结点\n",
    "    \n",
    "    # 终止条件3\n",
    "    # 如果已经到达了我们要求的最大深度，即当前深度达到了最大深度\n",
    "    \n",
    "    if                                                                                  # YOUR CODE HERE\n",
    "        print(\"Reached maximum depth. Stopping for now.\")\n",
    "        return create_leaf(target_values)   # 创建叶子结点\n",
    "\n",
    "    # 找到最优划分特征\n",
    "    # 使用best_splitting_feature这个函数\n",
    "    \n",
    "    splitting_feature =                                                                 # YOUR CODE HERE\n",
    "    \n",
    "    # 使用我们找到的最优特征将数据划分成两份\n",
    "    # 左子树的数据\n",
    "    left_split = data[data[splitting_feature] == 0]\n",
    "    \n",
    "    # Data for the right subtree\n",
    "    right_split =                                                                       # YOUR CODE HERE\n",
    "    \n",
    "    # The split is done, so remove this feature from the remaining features\n",
    "    remaining_features.remove(splitting_feature)\n",
    "    \n",
    "    # Print the splitting feature and the number of samples in the left and right subtrees\n",
    "    print(\"Split on feature %s. (%s, %s)\" % (\\\n",
    "                      splitting_feature, len(left_split), len(right_split)))\n",
    "    \n",
    "    # If this feature sends every sample into a single subtree, turn that subtree into a leaf directly\n",
    "    # Check whether the left split is \"perfect\"\n",
    "    if len(left_split) == len(data):\n",
    "        print(\"Creating leaf node.\")\n",
    "        return create_leaf(left_split[target])\n",
    "    \n",
    "    # Check whether the right split is \"perfect\"\n",
    "    if len(right_split) == len(data):\n",
    "        print(\"Creating leaf node.\")\n",
    "        return                                                                          # YOUR CODE HERE\n",
    "\n",
    "    # Recursively build the left subtree\n",
    "    left_tree = decision_tree_create(left_split, remaining_features, target, criterion, current_depth + 1, max_depth, annotate)\n",
    "    \n",
    "    # Recursively build the right subtree\n",
    "    \n",
    "    right_tree =                                                                        # YOUR CODE HERE\n",
    "\n",
    "    # Return an internal (non-leaf) node\n",
    "    return {'is_leaf'          : False, \n",
    "            'prediction'       : None,\n",
    "            'splitting_feature': splitting_feature,\n",
    "            'left'             : left_tree, \n",
    "            'right'            : right_tree}"
   ]
  },
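  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged reference for the blanks above (a minimal sketch, not the required answer), the three stopping conditions could test a purity count, the remaining feature list, and the depth. The `num_mistakes` helper below is a hypothetical stand-in for the notebook's `intermediate_node_num_mistakes`:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def num_mistakes(labels):\n",
    "    # size of the minority class; 0 means the node is pure\n",
    "    pos = int(np.sum(np.asarray(labels) == 1))\n",
    "    neg = len(labels) - pos\n",
    "    return min(pos, neg)\n",
    "\n",
    "# Stopping condition 1 would check: num_mistakes(target_values) == 0\n",
    "print(num_mistakes([1, 1, 1]))       # 0: pure node, stop\n",
    "print(num_mistakes([1, -1, 1, 1]))   # 1: still mixed\n",
    "# Stopping condition 2 would check: remaining_features == []\n",
    "# Stopping condition 3 would check: current_depth >= max_depth\n",
    "```"
   ]
  },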
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train a model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "my_decision_tree = decision_tree_create(train_data, one_hot_features, target, 'gini', max_depth = 6, annotate = False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The model is now trained"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 9. Prediction"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we implement the prediction function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def classify(tree, x, annotate = False):\n",
    "    '''\n",
    "    Predict recursively; classifies one sample at a time\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    tree: dict\n",
    "    \n",
    "    x: pd.Series, the sample to classify\n",
    "    \n",
    "    annotate: boolean, whether to print trace messages\n",
    "    \n",
    "    Returns\n",
    "    ----------\n",
    "    the predicted label\n",
    "    '''\n",
    "    if tree['is_leaf']:\n",
    "        if annotate:\n",
    "            print(\"At leaf, predicting %s\" % tree['prediction'])\n",
    "        return tree['prediction']\n",
    "    else:\n",
    "        split_feature_value = x[tree['splitting_feature']]\n",
    "        if annotate:\n",
    "            print(\"Split on %s = %s\" % (tree['splitting_feature'], split_feature_value))\n",
    "        if split_feature_value == 0:\n",
    "            return classify(tree['left'], x, annotate)\n",
    "        else:\n",
    "            return classify(tree['right'], x, annotate)"
   ]
  },
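  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the recursion concrete, here is a self-contained toy example with a hand-built two-leaf tree in the same dict format (plain dicts stand in for pd.Series rows; the tree and the feature name `f` are made up for illustration, and `classify` is restated so the snippet runs on its own):\n",
    "\n",
    "```python\n",
    "def classify(tree, x):\n",
    "    # same recursive logic as above: descend left on 0, right otherwise\n",
    "    if tree['is_leaf']:\n",
    "        return tree['prediction']\n",
    "    if x[tree['splitting_feature']] == 0:\n",
    "        return classify(tree['left'], x)\n",
    "    return classify(tree['right'], x)\n",
    "\n",
    "leaf_neg = {'is_leaf': True, 'prediction': -1}\n",
    "leaf_pos = {'is_leaf': True, 'prediction': +1}\n",
    "toy_tree = {'is_leaf': False, 'prediction': None, 'splitting_feature': 'f',\n",
    "            'left': leaf_neg, 'right': leaf_pos}\n",
    "\n",
    "print(classify(toy_tree, {'f': 0}))   # -1\n",
    "print(classify(toy_tree, {'f': 1}))   # 1\n",
    "```"
   ]
  },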
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Take the first sample in the test set to try it out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_sample = test_data.iloc[0]\n",
    "print(test_sample)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('True class: %s ' % (test_sample['safe_loans']))\n",
    "print('Predicted class: %s ' % classify(my_decision_tree, test_sample))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Print the decision path the tree follows"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "classify(my_decision_tree, test_sample, annotate=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 10. Evaluate the model on the test set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.metrics import precision_score\n",
    "from sklearn.metrics import recall_score\n",
    "from sklearn.metrics import f1_score"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we write a batch prediction function: it takes a pd.DataFrame like the whole test set and returns an np.ndarray holding the model's predictions  \n",
    "**One part needs to be filled in here**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def predict(tree, data):\n",
    "    '''\n",
    "    Iterate over the rows of data, predict each sample, store the results in predictions, and finally return an np.ndarray\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    tree: dict, the model\n",
    "    \n",
    "    data: pd.DataFrame, the data\n",
    "    \n",
    "    Returns\n",
    "    ----------\n",
    "    predictions: np.ndarray, the model's predictions for these samples\n",
    "    '''\n",
    "    predictions = np.zeros(len(data)) # same length as data\n",
    "    \n",
    "    # YOUR CODE HERE\n",
    "    \n",
    "    return predictions"
   ]
  },
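  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible way to fill in the loop (a sketch only; the single-sample `classify` is restated so the snippet is self-contained, and the toy tree and DataFrame are hypothetical):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "def classify(tree, x):\n",
    "    # single-sample prediction, as implemented above\n",
    "    if tree['is_leaf']:\n",
    "        return tree['prediction']\n",
    "    branch = 'left' if x[tree['splitting_feature']] == 0 else 'right'\n",
    "    return classify(tree[branch], x)\n",
    "\n",
    "def predict(tree, data):\n",
    "    predictions = np.zeros(len(data))\n",
    "    # classify each row in turn and store the result\n",
    "    for i, (_, row) in enumerate(data.iterrows()):\n",
    "        predictions[i] = classify(tree, row)\n",
    "    return predictions\n",
    "\n",
    "toy_tree = {'is_leaf': False, 'splitting_feature': 'f',\n",
    "            'left':  {'is_leaf': True, 'prediction': -1},\n",
    "            'right': {'is_leaf': True, 'prediction': +1}}\n",
    "df = pd.DataFrame({'f': [0, 1, 1, 0]})\n",
    "print(predict(toy_tree, df))   # [-1.  1.  1. -1.]\n",
    "```"
   ]
  },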
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 11. Compute the four metrics for the model under each splitting criterion and fill them in the table below\n",
    "**The maximum tree depth is 6**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# YOUR CODE HERE"
   ]
  },
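  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the four sklearn metrics on a hypothetical pair of label arrays in the same -1/+1 encoding as safe_loans (`pos_label` defaults to 1, which matches this encoding; the arrays below are made up for illustration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score\n",
    "\n",
    "# made-up ground truth and predictions in the -1/+1 encoding\n",
    "y_true = np.array([1, 1, -1, -1, 1, -1])\n",
    "y_pred = np.array([1, -1, -1, 1, 1, -1])\n",
    "\n",
    "print('accuracy :', accuracy_score(y_true, y_pred))    # 4 of 6 correct\n",
    "print('precision:', precision_score(y_true, y_pred))   # 2 TP out of 3 predicted positives\n",
    "print('recall   :', recall_score(y_true, y_pred))      # 2 TP out of 3 actual positives\n",
    "print('f1       :', f1_score(y_true, y_pred))\n",
    "```"
   ]
  },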
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Maximum tree depth: 6  \n",
    "\n",
    "###### Double-click here to edit\n",
    "\n",
    "Criterion|Accuracy|Precision|Recall|F1\n",
    "-|-|-|-|-\n",
    "Information gain|0.0|0.0|0.0|0.0\n",
    "Gain ratio|0.0|0.0|0.0|0.0\n",
    "Gini index|0.0|0.0|0.0|0.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Optional: Plot the decision tree with ECharts"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can plot the trained decision tree with ECharts using the pyecharts library:\n",
    "[pyecharts](http://pyecharts.org/#/)  \n",
    "pyecharts integrates seamlessly with Jupyter Notebook and can render charts directly in the notebook.  \n",
    "**Note: pyecharts does not yet support JupyterLab**\n",
    "\n",
    "pyecharts documentation: https://pyecharts.org/#/zh-cn/intro\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "How to install?\n",
    "\n",
    "**Install with pip**: $ pip(3) install pyecharts\n",
    "\n",
    "(Anaconda users can run this in the Anaconda Prompt)\n",
    "\n",
    "**Install from source**:\n",
    "\n",
    "$ git clone https://github.com/pyecharts/pyecharts.git\n",
    "\n",
    "$ cd pyecharts\n",
    "\n",
    "$ pip install -r requirements.txt\n",
    "\n",
    "$ python setup.py install\n",
    "\n",
    "(or run python install.py)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import the Tree chart\n",
    "from pyecharts import Tree"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "ECharts tree charts require data in the following form  \n",
    "```\n",
    "  [\n",
    "      {\n",
    "          value: 1212,    # value\n",
    "          # child nodes\n",
    "          children: [\n",
    "              {\n",
    "                  # child node value\n",
    "                  value: 2323,\n",
    "                  # child node name\n",
    "                  name: 'description of this node',\n",
    "                  children: [...],\n",
    "              },\n",
    "              {\n",
    "                  value: 4545,\n",
    "                  name: 'description of this node',\n",
    "                  children: [\n",
    "                      {\n",
    "                          value: 5656,\n",
    "                          name: 'description of this node',\n",
    "                          children: [...]\n",
    "                      },\n",
    "                      ...\n",
    "                  ]\n",
    "              }\n",
    "          ]\n",
    "      },\n",
    "      ...\n",
    "  ]\n",
    "```\n",
    "Documentation for the pyecharts tree chart: [pyecharts Tree](http://pyecharts.org/#/zh-cn/charts?id=tree%EF%BC%88%E6%A0%91%E5%9B%BE%EF%BC%89)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is similar to the tree structure we trained, except that each node has a \"name\" attribute (the node's name), a \"value\" (its value), and a \"children\" list containing more dicts of the same shape. We can write a recursive function to generate this data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_echarts_data(tree):\n",
    "    \n",
    "    # dict for the current node\n",
    "    value = dict()\n",
    "    \n",
    "    # If the tree passed in is already a leaf\n",
    "    if tree['is_leaf']:\n",
    "        \n",
    "        # Its value is the predicted label\n",
    "        value['value'] = tree['prediction']\n",
    "        \n",
    "        # Its name is \"label: <prediction>\"\n",
    "        value['name'] = 'label: %s' % (tree['prediction'])\n",
    "        \n",
    "        # Return this dict directly\n",
    "        return value\n",
    "    \n",
    "    # If tree is not a leaf, its name is this node's splitting feature and its subtrees go into a list\n",
    "    # Add the left and right subtrees to children\n",
    "    value['name'] = tree['splitting_feature']\n",
    "    value['children'] = [generate_echarts_data(tree['left']), generate_echarts_data(tree['right'])]\n",
    "    return value"
   ]
  },
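  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick standalone check of the conversion (the toy tree is made up, and `to_echarts` below restates `generate_echarts_data` so the snippet runs on its own):\n",
    "\n",
    "```python\n",
    "def to_echarts(tree):\n",
    "    # same recursive conversion as generate_echarts_data above\n",
    "    if tree['is_leaf']:\n",
    "        return {'value': tree['prediction'], 'name': 'label: %s' % tree['prediction']}\n",
    "    return {'name': tree['splitting_feature'],\n",
    "            'children': [to_echarts(tree['left']), to_echarts(tree['right'])]}\n",
    "\n",
    "toy_tree = {'is_leaf': False, 'splitting_feature': 'grade_A',\n",
    "            'left':  {'is_leaf': True, 'prediction': -1},\n",
    "            'right': {'is_leaf': True, 'prediction': +1}}\n",
    "\n",
    "print(to_echarts(toy_tree))\n",
    "# {'name': 'grade_A', 'children': [{'value': -1, 'name': 'label: -1'}, {'value': 1, 'name': 'label: 1'}]}\n",
    "```"
   ]
  },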
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = generate_echarts_data(my_decision_tree)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Use the code below to draw the tree. After rendering, the nodes are clickable; clicking a node expands its subtree"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tree = Tree(width=800, height=400)\n",
    "tree.add(\"\",\n",
    "         [data],\n",
    "         tree_collapse_interval=5,\n",
    "         tree_top=\"15%\",\n",
    "         tree_right=\"20%\",\n",
    "         tree_symbol='rect',\n",
    "         tree_symbol_size=20,\n",
    "         )\n",
    "tree.render()\n",
    "tree"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## Optional: Plot the other decision trees"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# YOUR CODE HERE"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
