{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 《数据挖掘导论》第四章笔记"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 一、基本概念"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1. 分类任务概述**\n",
    "分类任务是数据挖掘中的一种监督学习方法，目标是通过分析带有类别标记的训练数据，构建一个模型用于预测未知数据的类别。具体来说，分类涉及将记录（或称实例）映射到一个预定义的类标号（即类别）。每条记录包含若干属性和一个类标号，形式为(x, y)，其中x是属性集合，y是类标号。\n",
    "\n",
    "**2. 训练集和测试集**\n",
    "训练集用于构建分类模型，包含已标记的数据样本；\n",
    "测试集用于评估模型的性能，包含未在训练过程中见过的数据。\n",
    "\n",
    "**3. 描述性建模与预测性建模**\n",
    "描述性建模：对历史数据进行描述，总结规律和趋势。\n",
    "预测性建模：基于现有数据预测未来的趋势和行为。\n",
    "\n",
    "**4. 常见的分类方法**\n",
    "决策树分类法\n",
    "基于规则的分类法\n",
    "神经网络\n",
    "支持向量机\n",
    "朴素贝叶斯分类法\n",
    "\n",
    "**5. 混淆矩阵**\n",
    "混淆矩阵是一种评价分类模型性能的工具，特别适用于二分类问题。其形式如下：\n",
    "预测为正类\t预测为负类\n",
    "实际正类\tTP (真正例)\tFN (假负例)\n",
    "实际负类\tFP (假正例)\tTN (真负例)\n",
    "其中，TP表示正确预测的正类，FN表示错误预测为负类的正类，FP表示错误预测为正类的负类，TN表示正确预测的负类。\n",
    "\n",
    "**6. 准确率**\n",
    "准确率是衡量分类模型整体性能的一个指标，计算公式为：\n",
    "Accuracy=正确预测数总预测数"
   ]
  },
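  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal illustration of the definitions above (the counts here are made up for the example), accuracy can be computed directly from the four confusion-matrix cells:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical confusion-matrix counts for a binary classifier\n",
    "tp, fn = 40, 10   # actual positives: predicted positive / negative\n",
    "fp, tn = 5, 45    # actual negatives: predicted positive / negative\n",
    "\n",
    "# Accuracy = correct predictions / total predictions\n",
    "accuracy = (tp + tn) / (tp + tn + fp + fn)\n",
    "print(\"accuracy:\", accuracy)"
   ]
  },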
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 二、决策树"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1. 决策树原理**\n",
    "决策树是一种树形结构，由根节点、内部节点和叶节点组成。每个内部节点表示一个属性测试，每个分枝代表一个测试输出，每个叶节点代表一个类标号或类分布。决策树的目的是通过一系列属性测试条件，将数据集逐步划分成更纯的子集。\n",
    "\n",
    "**2. 决策树的生成**\n",
    "决策树的生成通常包括两个阶段：构建决策树和剪枝。构建决策树时，从根节点开始，递归地选择最优属性进行划分，直到满足停止条件。常用的算法有Hunt算法、ID3算法和C4.5算法。\n",
    "\n",
    "2.1. Hunt算法\n",
    "Hunt算法采用贪心策略，在选择划分数据的属性时，使用一系列局部最优决策来构造决策树。具体步骤如下：\n",
    "如果当前数据集的所有记录都属于同一类，则该节点为叶节点，标记为该类。\n",
    "否则，选择一个属性进行划分，使得划分后的子集纯度最高。\n",
    "对每个划分后的子集递归调用Hunt算法。\n",
    "\n",
    "2.2. ID3算法\n",
    "ID3算法使用信息增益作为选择最佳划分的度量标准。信息增益衡量的是划分前后数据集纯度的变化\n",
    "\n",
    "2.3. C4.5算法\n",
    "C4.5算法是ID3算法的改进版，使用信息增益比作为选择标准，解决了ID3算法倾向于选择多值属性的问题。信息增益比计算公式为：\n",
    "Gain Ratio=Information GainSplit Information\n",
    "\n",
    "其中，Split Information是划分本身的信息量。\n",
    "\n",
    "**3. 属性测试条件**\n",
    "根据属性的不同类型，可以指定不同的测试条件：\n",
    "二元属性：产生两个分支。\n",
    "标称属性：可以进行多路划分或二元划分。\n",
    "序数属性：通常进行二元划分，但必须保持顺序。\n",
    "连续属性：可以选择二元划分或多路划分，需要选择合适的划分点。\n",
    "\n",
    "**4. 选择最佳划分的度量**\n",
    "选择最佳划分的度量通常根据划分后子节点的不纯度来决定。常用的不纯度度量包括熵、Gini指数和分类误差。\n",
    "\n",
    "4.1. 熵\n",
    "熵是度量数据集纯度的一个重要指标\n",
    "\n",
    "4.2. Gini指数\n",
    "Gini指数是另一种度量不纯度的方法Gini指数越小，数据集的纯度越高。\n",
    "\n",
    "4.3. 分类误差\n",
    "分类误差表示错分类的比例\n"
   ]
  },
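  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch comparing the three impurity measures on a few example class distributions (the function names here are our own, not from the text). All three peak at the uniform distribution and vanish at a pure node:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def entropy(p):\n",
    "    p = np.asarray(p, dtype=float)\n",
    "    p = p[p > 0]  # treat 0 * log2(0) as 0\n",
    "    return 0.0 - np.sum(p * np.log2(p))\n",
    "\n",
    "def gini(p):\n",
    "    return 1 - np.sum(np.asarray(p, dtype=float) ** 2)\n",
    "\n",
    "def classification_error(p):\n",
    "    return 1 - max(p)\n",
    "\n",
    "for dist in ([0.5, 0.5], [0.9, 0.1], [1.0, 0.0]):\n",
    "    print(dist,\n",
    "          \"entropy:\", round(entropy(dist), 4),\n",
    "          \"gini:\", round(gini(dist), 4),\n",
    "          \"error:\", round(classification_error(dist), 4))"
   ]
  },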
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 三、模型评估"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1. 过拟合与欠拟合**\n",
    "过拟合是指模型在训练数据上表现很好，但在测试数据上表现不佳。这是因为模型过于复杂，捕捉到了训练数据中的噪声。欠拟合是指模型在训练数据和测试数据上都表现不好，通常是因为模型过于简单，无法捕捉数据的真实趋势。\n",
    "\n",
    "**2. 评估方法**\n",
    "为了准确评估分类器的性能，可以采用以下几种方法：\n",
    "\n",
    "2.1. 保持方法（Holdout Method）\n",
    "将数据集分为训练集和验证集两部分，通常按70%训练数据和30%验证数据划分。用训练集构建模型，用验证集评估模型性能。\n",
    "\n",
    "2.2. 交叉验证（Cross-Validation）\n",
    "将数据集分为k个子集，每次用k-1个子集训练模型，剩下的一个子集用于验证。常用的是k折交叉验证（k = 10）。\n",
    "\n",
    "2.3. 自助法（Bootstrap）\n",
    "通过有放回抽样生成多个训练集和测试集，用这些数据集分别训练和评估模型，最后取平均性能作为最终评估结果。\n",
    "\n",
    "**3. 性能度量指标**\n",
    "除了准确率外，还有多种指标可以评估分类器的性能，如精确率、召回率、F1分数、ROC曲线和AUC值等。\n",
    "\n",
    "3.1. 精确率（Precision）\n",
    "精确率是指在所有被预测为正类的样本中，真正为正类的比例。\n",
    "Precision=TPTP+FP\n",
    "\n",
    "3.2. 召回率（Recall）\n",
    "召回率是指在所有实际为正类的样本中，被正确预测为正类的比例。\n",
    "ecall=TPTP+FP\n",
    "\n",
    "3.3. F1分数（F1 Score）\n",
    "F1分数是精确率和召回率的调和平均值，综合了两者的表现。\n",
    "F1 Score=2×Precision×RecallPrecision+Recall\n",
    "\n",
    "3.4. ROC曲线（Receiver Operating Characteristic Curve）\n",
    "ROC曲线展示了不同阈值下的真正类率（TPR）和假正类率（FPR）的关系。AUC（Area Under Curve）值是ROC曲线下的面积，用于衡量模型的整体性能。\n"
   ]
  },
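  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the k-fold cross-validation bookkeeping described in 2.2, using only NumPy. The \"model\" is a stand-in majority-class predictor on synthetic data, just to keep the example self-contained (a real classifier would be trained on X[train_idx]):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "X = rng.normal(size=(20, 2))     # synthetic feature matrix\n",
    "y = rng.integers(0, 2, size=20)  # synthetic binary labels\n",
    "\n",
    "k = 5\n",
    "folds = np.array_split(rng.permutation(len(y)), k)  # k disjoint index sets\n",
    "\n",
    "accuracies = []\n",
    "for i in range(k):\n",
    "    test_idx = folds[i]\n",
    "    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])\n",
    "    # stand-in model: always predict the majority class of the training folds\n",
    "    majority = np.bincount(y[train_idx]).argmax()\n",
    "    accuracies.append(np.mean(y[test_idx] == majority))\n",
    "\n",
    "print(\"fold accuracies:\", [round(a, 2) for a in accuracies])\n",
    "print(\"mean accuracy:\", round(np.mean(accuracies), 4))"
   ]
  },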
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 习题"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.考虑表 4-7 中二元分类问题的训练样本。\n",
    "\n",
    "(a) 计算整个训练样本集的 Gini 指标值。\n",
    "\n",
    "(b) 计算属性顾客 ID 的 Gini 指标值。\n",
    "\n",
    "(c) 计算属性性别的 Gini 指标值。\n",
    "\n",
    "(d) 计算使用多路划分属性车型的 Gini 指标值。\n",
    "\n",
    "(e) 计算使用多路划分属性衬衣尺码的 Gini 指标值。\n",
    "\n",
    "(f) 下面哪个属性更好，性别、车型还是衬衣尺码？\n",
    "\n",
    "(g) 解释为什么属性顾客 ID 的 Gini 值最低，但却不能作为属性测试条件。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "表4-7数据集"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|顾客ID|性别|车型|衬衣尺码|类|\n",
    "|----|----|----|----|----|\n",
    "|1|男|家用|小|C0|\n",
    "|2|男|运动|中|C0|\n",
    "|3|男|运动|中|C0|\n",
    "|4|男|运动|大|C0|\n",
    "|5|男|运动|加大|C0|\n",
    "|6|男|运动|加大|C0|\n",
    "|7|女|运动|小|C0|\n",
    "|8|女|运动|小|C0|\n",
    "|9|女|运动|中|C0|\n",
    "|10|女|豪华|大|C0|\n",
    "|11|男|家用|大|C1|\n",
    "|12|男|家用|加大|C1|\n",
    "|13|男|家用|中|C1|\n",
    "|14|男|豪华|加大|C1|\n",
    "|15|女|豪华|小|C1|\n",
    "|16|女|豪华|小|C1|\n",
    "|17|女|豪华|中|C1|\n",
    "|18|女|豪华|中|C1|\n",
    "|19|女|豪华|中|C1|\n",
    "|20|女|豪华|大|C1|"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(a) 整个训练样本集的Gini指标值: 0.5\n",
      "(b) 属性顾客ID的Gini指标值: 0.95\n",
      "(c) 属性性别的Gini指标值: 0.5\n",
      "(d) 属性车型的Gini指标值: 0.6399999999999999\n",
      "(e) 属性衬衣尺码的Gini指标值: 0.735\n",
      "(f) 性别、车型和衬衣尺码中，Gini值最小的属性是： 性别\n",
      "(g) 属性顾客ID的Gini值最低是因为每个顾客ID都是唯一的，它不能提供任何关于类别分布的有用信息，无法用于对数据进行有意义的划分和分类，所以不能作为属性测试条件。\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# 数据\n",
    "data = np.array([\n",
    "    ['1', '男', '家用', '小', 'C0'],\n",
    "    ['2', '男', '运动', '中', 'C0'],\n",
    "    ['3', '男', '运动', '中', 'C0'],\n",
    "    ['4', '男', '运动', '大', 'C0'],\n",
    "    ['5', '男', '运动', '加大', 'C0'],\n",
    "    ['6', '男', '运动', '加大', 'C0'],\n",
    "    ['7', '女', '运动', '小', 'C0'],\n",
    "    ['8', '女', '运动', '小', 'C0'],\n",
    "    ['9', '女', '运动', '中', 'C0'],\n",
    "    ['10', '女', '豪华', '大', 'C0'],\n",
    "    ['11', '男', '家用', '大', 'C1'],\n",
    "    ['12', '男', '家用', '加大', 'C1'],\n",
    "    ['13', '男', '家用', '中', 'C1'],\n",
    "    ['14', '男', '豪华', '加大', 'C1'],\n",
    "    ['15', '女', '豪华', '小', 'C1'],\n",
    "    ['16', '女', '豪华', '小', 'C1'],\n",
    "    ['17', '女', '豪华', '中', 'C1'],\n",
    "    ['18', '女', '豪华', '中', 'C1'],\n",
    "    ['19', '女', '豪华', '中', 'C1'],\n",
    "    ['20', '女', '豪华', '大', 'C1']\n",
    "])\n",
    "\n",
    "# 计算Gini指标值的函数\n",
    "def gini_index(labels):\n",
    "    if len(labels) == 0:\n",
    "        return 0\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    probs = counts / len(labels)\n",
    "    return 1 - np.sum(probs ** 2)\n",
    "\n",
    "# (a) 计算整个训练样本集的Gini指标值\n",
    "all_labels = data[:, -1]\n",
    "print(\"(a) 整个训练样本集的Gini指标值:\", gini_index(all_labels))\n",
    "\n",
    "# (b) 计算属性顾客ID的Gini指标值（顾客ID是唯一的，Gini值为0）\n",
    "customer_ids = data[:, 0]\n",
    "print(\"(b) 属性顾客ID的Gini指标值:\", gini_index(customer_ids))\n",
    "\n",
    "# (c) 计算属性性别的Gini指标值\n",
    "genders = data[:, 1]\n",
    "print(\"(c) 属性性别的Gini指标值:\", gini_index(genders))\n",
    "\n",
    "# (d) 计算使用多路划分属性车型的Gini指标值\n",
    "car_types = data[:, 2]\n",
    "print(\"(d) 属性车型的Gini指标值:\", gini_index(car_types))\n",
    "\n",
    "# (e) 计算使用多路划分属性衬衣尺码的Gini指标值\n",
    "shirt_sizes = data[:, 3]\n",
    "print(\"(e) 属性衬衣尺码的Gini指标值:\", gini_index(shirt_sizes))\n",
    "\n",
    "# (f) 比较哪个属性更好（Gini值越小越好，这里比较性别、车型和衬衣尺码）\n",
    "gender_gini = gini_index(genders)\n",
    "car_type_gini = gini_index(car_types)\n",
    "shirt_size_gini = gini_index(shirt_sizes)\n",
    "print(\"(f) 性别、车型和衬衣尺码中，Gini值最小的属性是：\", min([('性别', gender_gini), ('车型', car_type_gini), ('衬衣尺码', shirt_size_gini)], key=lambda x: x[1])[0])\n",
    "\n",
    "# (g) 解释为什么属性顾客ID的Gini值最低，但却不能作为属性测试条件\n",
    "print(\"(g) 属性顾客ID的Gini值最低是因为每个顾客ID都是唯一的，它不能提供任何关于类别分布的有用信息，无法用于对数据进行有意义的划分和分类，所以不能作为属性测试条件。\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3.考虑表 4-8 中的二元分类问题的训练样本集。\n",
    "(a) 整个训练样本集关于类属性的熵是多少？\n",
    "(b) 关于这些训练样本，a1 和 a2 的信息增益是多少？\n",
    "(c) 对于连续属性 a3，计算所有可能的划分的信息增益。\n",
    "(d) 根据信息增益，哪个是最佳划分（在 a1、a2 和 a3 中）？\n",
    "(e) 根据分类错误率，哪个是最佳划分（在 a1 和 a2 中）？\n",
    "(f) 根据 Gini 指标，哪个是最佳划分（在 a1 和 a2 中）？\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|实例|a1|a2|a3|目标类|\n",
    "|----|----|----|----|----|\n",
    "|1|T|T|1.0|+|\n",
    "|2|T|T|6.0|+|\n",
    "|3|T|F|5.0|-|\n",
    "|4|F|F|4.0|+|\n",
    "|5|F|T|7.0|-|\n",
    "|6|F|T|3.0|-|\n",
    "|7|F|F|8.0|-|\n",
    "|8|T|F|7.0|+|\n",
    "|9|F|T|5.0|-|"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(a) 整个训练样本集关于类属性的熵: 0.9910760598382222\n",
      "(b) a1的信息增益: 0.22943684069673975\n",
      "(b) a2的信息增益: 0.007214618474517431\n",
      "(c) 连续属性a3的最佳划分点: 7.0 ，信息增益: 0.2247875095893599\n",
      "(d) 根据信息增益，最佳划分是: a1\n",
      "(e) 根据分类错误率，最佳划分是: a1\n",
      "(f) 根据Gini指标，最佳划分是: a1\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "from collections import Counter\n",
    "\n",
    "# 数据\n",
    "data = np.array([\n",
    "    ['T', 'T', 1.0, '+'],\n",
    "    ['T', 'T', 6.0, '+'],\n",
    "    ['T', 'F', 5.0, '-'],\n",
    "    ['F', 'F', 4.0, '+'],\n",
    "    ['F', 'T', 7.0, '-'],\n",
    "    ['F', 'T', 3.0, '-'],\n",
    "    ['F', 'F', 8.0, '-'],\n",
    "    ['T', 'F', 7.0, '+'],\n",
    "    ['F', 'T', 5.0, '-']\n",
    "])\n",
    "\n",
    "# 计算熵的函数\n",
    "def entropy(labels):\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    probs = counts / len(labels)\n",
    "    return -np.sum(probs * np.log2(probs))\n",
    "\n",
    "# (a) 整个训练样本集关于类属性的熵\n",
    "class_labels = data[:, -1]\n",
    "print(\"(a) 整个训练样本集关于类属性的熵:\", entropy(class_labels))\n",
    "\n",
    "# 计算信息增益的函数\n",
    "def information_gain(parent_labels, left_child_labels, right_child_labels):\n",
    "    parent_entropy = entropy(parent_labels)\n",
    "    left_entropy = entropy(left_child_labels)\n",
    "    right_entropy = entropy(right_child_labels)\n",
    "    left_weight = len(left_child_labels) / len(parent_labels)\n",
    "    right_weight = len(right_child_labels) / len(parent_labels)\n",
    "    return parent_entropy - (left_weight * left_entropy + right_weight * right_entropy)\n",
    "\n",
    "# (b) a1的信息增益\n",
    "a1_true_indices = np.where(data[:, 0] == 'T')[0]\n",
    "a1_false_indices = np.where(data[:, 0] == 'F')[0]\n",
    "a1_true_labels = class_labels[a1_true_indices]\n",
    "a1_false_labels = class_labels[a1_false_indices]\n",
    "print(\"(b) a1的信息增益:\", information_gain(class_labels, a1_true_labels, a1_false_labels))\n",
    "\n",
    "# a2的信息增益\n",
    "a2_true_indices = np.where(data[:, 1] == 'T')[0]\n",
    "a2_false_indices = np.where(data[:, 1] == 'F')[0]\n",
    "a2_true_labels = class_labels[a2_true_indices]\n",
    "a2_false_labels = class_labels[a2_false_indices]\n",
    "print(\"(b) a2的信息增益:\", information_gain(class_labels, a2_true_labels, a2_false_labels))\n",
    "\n",
    "# (c) 对于连续属性a3，计算所有可能的划分的信息增益\n",
    "a3_values = data[:, 2].astype(float)\n",
    "sorted_indices = np.argsort(a3_values)\n",
    "best_gain_a3 = 0\n",
    "best_split_a3 = 0\n",
    "for i in range(1, len(data)):\n",
    "    left_indices = sorted_indices[:i]\n",
    "    right_indices = sorted_indices[i:]\n",
    "    left_labels = class_labels[left_indices]\n",
    "    right_labels = class_labels[right_indices]\n",
    "    gain = information_gain(class_labels, left_labels, right_labels)\n",
    "    if gain > best_gain_a3:\n",
    "        best_gain_a3 = gain\n",
    "        best_split_a3 = (a3_values[left_indices[-1]] + a3_values[right_indices[0]]) / 2\n",
    "print(\"(c) 连续属性a3的最佳划分点:\", best_split_a3, \"，信息增益:\", best_gain_a3)\n",
    "\n",
    "# (d) 根据信息增益，确定最佳划分（在a1、a2和a3中）\n",
    "a1_gain = information_gain(class_labels, a1_true_labels, a1_false_labels)\n",
    "a2_gain = information_gain(class_labels, a2_true_labels, a2_false_labels)\n",
    "print(\"(d) 根据信息增益，最佳划分是:\", max([('a1', a1_gain), ('a2', a2_gain), ('a3', best_gain_a3)], key=lambda x: x[1])[0])\n",
    "\n",
    "# 计算分类错误率的函数\n",
    "def classification_error_rate(labels):\n",
    "    counter = Counter(labels)\n",
    "    majority_class = counter.most_common(1)[0][0]\n",
    "    error_rate = 1 - counter[majority_class] / len(labels)\n",
    "    return error_rate\n",
    "\n",
    "# (e) 根据分类错误率，确定a1和a2中的最佳划分\n",
    "a1_true_error_rate = classification_error_rate(a1_true_labels)\n",
    "a1_false_error_rate = classification_error_rate(a1_false_labels)\n",
    "a1_error_rate = len(a1_true_indices) * a1_true_error_rate + len(a1_false_indices) * a1_false_error_rate\n",
    "a2_true_error_rate = classification_error_rate(a2_true_labels)\n",
    "a2_false_error_rate = classification_error_rate(a2_false_labels)\n",
    "a2_error_rate = len(a2_true_indices) * a2_true_error_rate + len(a2_false_indices) * a2_false_error_rate\n",
    "print(\"(e) 根据分类错误率，最佳划分是:\", min([('a1', a1_error_rate), ('a2', a2_error_rate)], key=lambda x: x[1])[0])\n",
    "\n",
    "# 计算Gini指标的函数\n",
    "def gini_index(labels):\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    probs = counts / len(labels)\n",
    "    return 1 - np.sum(probs ** 2)\n",
    "\n",
    "# (f) 根据Gini指标，确定a1和a2中的最佳划分\n",
    "a1_true_gini = gini_index(a1_true_labels)\n",
    "a1_false_gini = gini_index(a1_false_labels)\n",
    "a1_gini = len(a1_true_indices) * a1_true_gini + len(a1_false_indices) * a1_false_gini\n",
    "a2_true_gini = gini_index(a2_true_labels)\n",
    "a2_false_gini = gini_index(a2_false_labels)\n",
    "a2_gini = len(a2_true_indices) * a2_true_gini + len(a2_false_indices) * a2_false_gini\n",
    "print(\"(f) 根据Gini指标，最佳划分是:\", min([('a1', a1_gini), ('a2', a2_gini)], key=lambda x: x[1])[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "5.考虑如下二元分类的数据集"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) 计算按照属性A和B划分时的信息增益。决策树归纳算法将会选择哪个属性？\n",
    "\n",
    "(b) 计算按照属性A和B划分时Gini指标。决策树归纳算法将会选择哪个属性？\n",
    "\n",
    "(c) 从图4-13可以看出熵和Gini指标在区间[0, 0.5]都是单调递增的，而在区间[0.5, 1]都是单调递减的。有没有可能信息增益和Gini指标增益支持不同的属性？解释你的理由。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|A|B|类标号|\n",
    "|----|----|----|\n",
    "|T|F|+|\n",
    "|T|T|+|\n",
    "|T|T|+|\n",
    "|T|F|-|\n",
    "|T|T|+|\n",
    "|F|F|-|\n",
    "|F|F|-|\n",
    "|F|F|-|\n",
    "|T|T|-|\n",
    "|T|F|-|"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(a) 按照属性A划分的信息增益: 0.2812908992306925\n",
      "(a) 按照属性B划分的信息增益: 0.256425891682003\n",
      "(a) 决策树归纳算法将会选择属性A\n",
      "(b) 按照属性A划分的Gini指标: 0.3428571428571429\n",
      "(b) 按照属性B划分的Gini指标: 0.31666666666666665\n",
      "决策树归纳算法将会选择属性B\n",
      "(c) 有可能信息增益和Gini指标增益支持不同的属性。理由如下：\n",
      "信息增益和Gini指标是两种不同的衡量属性划分好坏的标准。信息增益主要基于熵的减少来衡量，而Gini指标是通过计算划分后子集的不纯度来评估。\n",
      "在某些情况下，一个属性可能会使熵大幅减少，但划分后的子集Gini指标可能不是最优的，反之亦然。这是因为它们对数据分布的敏感度和计算方式不同，所以可能会导致在选择最佳划分属性时出现不一致的情况。\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "from collections import Counter\n",
    "\n",
    "# 数据\n",
    "data = np.array([\n",
    "    ['T', 'F', '+'],\n",
    "    ['T', 'T', '+'],\n",
    "    ['T', 'T', '+'],\n",
    "    ['T', 'F', '-'],\n",
    "    ['T', 'T', '+'],\n",
    "    ['F', 'F', '-'],\n",
    "    ['F', 'F', '-'],\n",
    "    ['F', 'F', '-'],\n",
    "    ['T', 'T', '-'],\n",
    "    ['T', 'F', '-']\n",
    "])\n",
    "\n",
    "# 计算熵的函数\n",
    "def entropy(labels):\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    probs = counts / len(labels)\n",
    "    return -np.sum(probs * np.log2(probs))\n",
    "\n",
    "# 计算信息增益的函数\n",
    "def information_gain(parent_labels, left_child_labels, right_child_labels):\n",
    "    parent_entropy = entropy(parent_labels)\n",
    "    left_entropy = entropy(left_child_labels)\n",
    "    right_entropy = entropy(right_child_labels)\n",
    "    left_weight = len(left_child_labels) / len(parent_labels)\n",
    "    right_weight = len(right_child_labels) / len(parent_labels)\n",
    "    return parent_entropy - (left_weight * left_entropy + right_weight * right_entropy)\n",
    "\n",
    "# 计算Gini指标的函数\n",
    "def gini_index(labels):\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    probs = counts / len(labels)\n",
    "    return 1 - np.sum(probs ** 2)\n",
    "\n",
    "# 计算按照属性划分的Gini指标\n",
    "def gini_index_split(attribute_values, class_labels):\n",
    "    gini_indices = []\n",
    "    for value in np.unique(attribute_values):\n",
    "        indices = np.where(attribute_values == value)[0]\n",
    "        left_labels = class_labels[indices]\n",
    "        right_indices = np.setdiff1d(np.arange(len(class_labels)), indices)\n",
    "        right_labels = class_labels[right_indices]\n",
    "        left_gini = gini_index(left_labels)\n",
    "        right_gini = gini_index(right_labels)\n",
    "        left_weight = len(left_labels) / len(class_labels)\n",
    "        right_weight = len(right_labels) / len(class_labels)\n",
    "        gini_indices.append(left_weight * left_gini + right_weight * right_gini)\n",
    "    return min(gini_indices)\n",
    "\n",
    "# (a) 计算按照属性A和B划分时的信息增益\n",
    "A_values = data[:, 0]\n",
    "B_values = data[:, 1]\n",
    "class_labels = data[:, 2]\n",
    "\n",
    "A_true_indices = np.where(A_values == 'T')[0]\n",
    "A_false_indices = np.where(A_values == 'F')[0]\n",
    "A_true_labels = class_labels[A_true_indices]\n",
    "A_false_labels = class_labels[A_false_indices]\n",
    "\n",
    "B_true_indices = np.where(B_values == 'T')[0]\n",
    "B_false_indices = np.where(B_values == 'F')[0]\n",
    "B_true_labels = class_labels[B_true_indices]\n",
    "B_false_labels = class_labels[B_false_indices]\n",
    "\n",
    "gain_A = information_gain(class_labels, A_true_labels, A_false_labels)\n",
    "gain_B = information_gain(class_labels, B_true_labels, B_false_labels)\n",
    "\n",
    "print(\"(a) 按照属性A划分的信息增益:\", gain_A)\n",
    "print(\"(a) 按照属性B划分的信息增益:\", gain_B)\n",
    "print(\"(a) 决策树归纳算法将会选择属性A\" if gain_A > gain_B else \"决策树归纳算法将会选择属性B\")\n",
    "\n",
    "# (b) 计算按照属性A和B划分时的Gini指标\n",
    "gini_A = gini_index_split(A_values, class_labels)\n",
    "gini_B = gini_index_split(B_values, class_labels)\n",
    "\n",
    "print(\"(b) 按照属性A划分的Gini指标:\", gini_A)\n",
    "print(\"(b) 按照属性B划分的Gini指标:\", gini_B)\n",
    "print(\"(b) 决策树归纳算法将会选择属性A\" if gini_A < gini_B else \"决策树归纳算法将会选择属性B\")\n",
    "\n",
    "# (c) 解释信息增益和Gini指标增益支持不同属性的可能性及理由\n",
    "print(\"(c) 有可能信息增益和Gini指标增益支持不同的属性。理由如下：\")\n",
    "print(\"信息增益和Gini指标是两种不同的衡量属性划分好坏的标准。信息增益主要基于熵的减少来衡量，而Gini指标是通过计算划分后子集的不纯度来评估。\")\n",
    "print(\"在某些情况下，一个属性可能会使熵大幅减少，但划分后的子集Gini指标可能不是最优的，反之亦然。这是因为它们对数据分布的敏感度和计算方式不同，所以可能会导致在选择最佳划分属性时出现不一致的情况。\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "6.考虑如下训练样本集"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|X|Y|Z|C1类样本数|C2类样本数|\n",
    "|----|----|----|----|----|\n",
    "|0|0|0|5|40|\n",
    "|0|0|1|0|15|\n",
    "|0|1|0|10|5|\n",
    "|0|1|1|45|0|\n",
    "|1|0|0|10|5|\n",
    "|1|0|1|25|0|\n",
    "|1|1|0|5|20|\n",
    "|1|1|1|0|15|"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) 用本章所介绍的贪心法计算两层的决策树。使用分类错误率作为划分标准。决策树的总错误率是多少？\n",
    "\n",
    "(b) 使用 X 作为第一个划分属性，两个后继结点分别在剩余的属性中选择最佳的划分属性，重复步骤 (a)。所构造决策树的错误率是多少？\n",
    "\n",
    "(c) 比较 (a) 和 (b) 的结果。评述在划分属性选择上启发式贪心法的作用。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(a) 决策树的总错误率: 0.575\n",
      "(b) 所构造决策树的错误率: 0.36458333333333337\n",
      "(c) 比较结果：\n",
      "使用指定X作为第一个划分属性的方法得到的决策树错误率更低，这表明贪心法虽然是一种启发式方法，在某些情况下可能会陷入局部最优，而通过特定的属性选择顺序可能会得到更好的结果。但贪心法的优点是计算效率高，在大规模数据上具有较好的实用性。\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# 数据\n",
    "data = np.array([\n",
    "    [0, 0, 0, 5, 40],\n",
    "    [0, 0, 1, 0, 15],\n",
    "    [0, 1, 0, 10, 5],\n",
    "    [0, 1, 1, 45, 0],\n",
    "    [1, 0, 0, 10, 5],\n",
    "    [1, 0, 1, 25, 0],\n",
    "    [1, 1, 0, 5, 20],\n",
    "    [1, 1, 1, 0, 15]\n",
    "])\n",
    "\n",
    "# 计算分类错误率的函数\n",
    "def classification_error_rate(labels):\n",
    "    total_samples = np.sum(labels)\n",
    "    majority_class_count = np.max(labels)\n",
    "    error_rate = 1 - majority_class_count / total_samples\n",
    "    return error_rate\n",
    "\n",
    "# 计算信息增益的函数（这里以分类错误率作为信息增益的度量）\n",
    "def information_gain(parent_labels, left_child_labels, right_child_labels):\n",
    "    parent_error_rate = classification_error_rate(parent_labels)\n",
    "    left_error_rate = classification_error_rate(left_child_labels)\n",
    "    right_error_rate = classification_error_rate(right_child_labels)\n",
    "    left_weight = np.sum(left_child_labels) / np.sum(parent_labels)\n",
    "    right_weight = np.sum(right_child_labels) / np.sum(parent_labels)\n",
    "    gain = parent_error_rate - (left_weight * left_error_rate + right_weight * right_error_rate)\n",
    "    return gain\n",
    "\n",
    "# (a) 用贪心法计算两层的决策树，使用分类错误率作为划分标准\n",
    "def build_decision_tree_a(data):\n",
    "    X = data[:, 0]\n",
    "    Y = data[:, 1]\n",
    "    Z = data[:, 2]\n",
    "    C1 = data[:, 3]\n",
    "    C2 = data[:, 4]\n",
    "\n",
    "    # 计算按X划分的信息增益\n",
    "    X_true_indices = np.where(X == 1)[0]\n",
    "    X_false_indices = np.where(X == 0)[0]\n",
    "    X_true_labels = np.array([C1[X_true_indices], C2[X_true_indices]])\n",
    "    X_false_labels = np.array([C1[X_false_indices], C2[X_false_indices]])\n",
    "    gain_X = information_gain(np.array([C1, C2]), X_true_labels, X_false_labels)\n",
    "\n",
    "    # 计算按Y划分的信息增益\n",
    "    Y_true_indices = np.where(Y == 1)[0]\n",
    "    Y_false_indices = np.where(Y == 0)[0]\n",
    "    Y_true_labels = np.array([C1[Y_true_indices], C2[Y_true_indices]])\n",
    "    Y_false_labels = np.array([C1[Y_false_indices], C2[Y_false_indices]])\n",
    "    gain_Y = information_gain(np.array([C1, C2]), Y_true_labels, Y_false_labels)\n",
    "\n",
    "    # 计算按Z划分的信息增益\n",
    "    Z_true_indices = np.where(Z == 1)[0]\n",
    "    Z_false_indices = np.where(Z == 0)[0]\n",
    "    Z_true_labels = np.array([C1[Z_true_indices], C2[Z_true_indices]])\n",
    "    Z_false_labels = np.array([C1[Z_false_indices], C2[Z_false_indices]])\n",
    "    gain_Z = information_gain(np.array([C1, C2]), Z_true_labels, Z_false_labels)\n",
    "\n",
    "    # 选择信息增益最大的属性作为根节点划分属性\n",
    "    best_attribute = np.argmax([gain_X, gain_Y, gain_Z])\n",
    "    if best_attribute == 0:\n",
    "        root_split = X\n",
    "        left_indices = X_true_indices\n",
    "        right_indices = X_false_indices\n",
    "    elif best_attribute == 1:\n",
    "        root_split = Y\n",
    "        left_indices = Y_true_indices\n",
    "        right_indices = Y_false_indices\n",
    "    else:\n",
    "        root_split = Z\n",
    "        left_indices = Z_true_indices\n",
    "        right_indices = Z_false_indices\n",
    "\n",
    "    # 构建左子树\n",
    "    left_data = data[left_indices]\n",
    "    left_error_rate = classification_error_rate(np.array([left_data[:, 3], left_data[:, 4]]))\n",
    "\n",
    "    # 构建右子树\n",
    "    right_data = data[right_indices]\n",
    "    right_error_rate = classification_error_rate(np.array([right_data[:, 3], right_data[:, 4]]))\n",
    "\n",
    "    total_error_rate = (len(left_data) * left_error_rate + len(right_data) * right_error_rate) / len(data)\n",
    "    return total_error_rate\n",
    "\n",
    "# (b) 使用X作为第一个划分属性，两个后继结点分别在剩余的属性中选择最佳的划分属性\n",
    "def build_decision_tree_b(data):\n",
    "    X = data[:, 0]\n",
    "    Y = data[:, 1]\n",
    "    Z = data[:, 2]\n",
    "    C1 = data[:, 3]\n",
    "    C2 = data[:, 4]\n",
    "\n",
    "    # 按X划分\n",
    "    X_true_indices = np.where(X == 1)[0]\n",
    "    X_false_indices = np.where(X == 0)[0]\n",
    "    left_data = data[X_true_indices]\n",
    "    right_data = data[X_false_indices]\n",
    "\n",
    "    # 左子树\n",
    "    Y_left_true_indices = np.where(left_data[:, 1] == 1)[0]\n",
    "    Y_left_false_indices = np.where(left_data[:, 1] == 0)[0]\n",
    "    Y_left_true_labels = np.array([left_data[Y_left_true_indices, 3], left_data[Y_left_true_indices, 4]])\n",
    "    Y_left_false_labels = np.array([left_data[Y_left_false_indices, 3], left_data[Y_left_false_indices, 4]])\n",
    "    gain_Y_left = information_gain(np.array([left_data[:, 3], left_data[:, 4]]), Y_left_true_labels, Y_left_false_labels)\n",
    "\n",
    "    Z_left_true_indices = np.where(left_data[:, 2] == 1)[0]\n",
    "    Z_left_false_indices = np.where(left_data[:, 2] == 0)[0]\n",
    "    Z_left_true_labels = np.array([left_data[Z_left_true_indices, 3], left_data[Z_left_true_indices, 4]])\n",
    "    Z_left_false_labels = np.array([left_data[Z_left_false_indices, 3], left_data[Z_left_false_indices, 4]])\n",
    "    gain_Z_left = information_gain(np.array([left_data[:, 3], left_data[:, 4]]), Z_left_true_labels, Z_left_false_labels)\n",
    "\n",
    "    best_attribute_left = np.argmax([gain_Y_left, gain_Z_left])\n",
    "    if best_attribute_left == 0:\n",
    "        left_split = left_data[:, 1]\n",
    "        left_left_indices = Y_left_true_indices\n",
    "        left_right_indices = Y_left_false_indices\n",
    "    else:\n",
    "        left_split = left_data[:, 2]\n",
    "        left_left_indices = Z_left_true_indices\n",
    "        left_right_indices = Z_left_false_indices\n",
    "\n",
    "    left_left_data = left_data[left_left_indices]\n",
    "    left_left_error_rate = classification_error_rate(np.array([left_left_data[:, 3], left_left_data[:, 4]]))\n",
    "\n",
    "    left_right_data = left_data[left_right_indices]\n",
    "    left_right_error_rate = classification_error_rate(np.array([left_right_data[:, 3], left_right_data[:, 4]]))\n",
    "\n",
    "    left_error_rate = (len(left_left_data) * left_left_error_rate + len(left_right_data) * left_right_error_rate) / len(left_data)\n",
    "\n",
    "    # 右子树\n",
    "    Y_right_true_indices = np.where(right_data[:, 1] == 1)[0]\n",
    "    Y_right_false_indices = np.where(right_data[:, 1] == 0)[0]\n",
    "    Y_right_true_labels = np.array([right_data[Y_right_true_indices, 3], right_data[Y_right_true_indices, 4]])\n",
    "    Y_right_false_labels = np.array([right_data[Y_right_false_indices, 3], right_data[Y_right_false_indices, 4]])\n",
    "    gain_Y_right = information_gain(np.array([right_data[:, 3], right_data[:, 4]]), Y_right_true_labels, Y_right_false_labels)\n",
    "\n",
    "    Z_right_true_indices = np.where(right_data[:, 2] == 1)[0]\n",
    "    Z_right_false_indices = np.where(right_data[:, 2] == 0)[0]\n",
    "    Z_right_true_labels = np.array([right_data[Z_right_true_indices, 3], right_data[Z_right_true_indices, 4]])\n",
    "    Z_right_false_labels = np.array([right_data[Z_right_false_indices, 3], right_data[Z_right_false_indices, 4]])\n",
    "    gain_Z_right = information_gain(np.array([right_data[:, 3], right_data[:, 4]]), Z_right_true_labels, Z_right_false_labels)\n",
    "\n",
    "    best_attribute_right = np.argmax([gain_Y_right, gain_Z_right])\n",
    "    if best_attribute_right == 0:\n",
    "        right_split = right_data[:, 1]\n",
    "        right_left_indices = Y_right_true_indices\n",
    "        right_right_indices = Y_right_false_indices\n",
    "    else:\n",
    "        right_split = right_data[:, 2]\n",
    "        right_left_indices = Z_right_true_indices\n",
    "        right_right_indices = Z_right_false_indices\n",
    "\n",
    "    right_left_data = right_data[right_left_indices]\n",
    "    right_left_error_rate = classification_error_rate(np.array([right_left_data[:, 3], right_left_data[:, 4]]))\n",
    "\n",
    "    right_right_data = right_data[right_right_indices]\n",
    "    right_right_error_rate = classification_error_rate(np.array([right_right_data[:, 3], right_right_data[:, 4]]))\n",
    "\n",
    "    right_error_rate = (len(right_left_data) * right_left_error_rate + len(right_right_data) * right_right_error_rate) / len(right_data)\n",
    "\n",
    "    total_error_rate = (len(left_data) * left_error_rate + len(right_data) * right_error_rate) / len(data)\n",
    "    return total_error_rate\n",
    "\n",
    "# 计算(a)的总错误率\n",
    "error_rate_a = build_decision_tree_a(data)\n",
    "print(\"(a) 决策树的总错误率:\", error_rate_a)\n",
    "\n",
    "# 计算(b)的总错误率\n",
    "error_rate_b = build_decision_tree_b(data)\n",
    "print(\"(b) 所构造决策树的错误率:\", error_rate_b)\n",
    "\n",
    "# (c) 比较(a)和(b)的结果并评述启发式贪心法的作用\n",
    "print(\"(c) 比较结果：\")\n",
    "if error_rate_a < error_rate_b:\n",
    "    print(\"使用贪心法计算得到的决策树错误率更低，说明贪心法在选择划分属性时能够更有效地找到全局较优的划分方式，减少了决策树的错误率。\")\n",
    "else:\n",
    "    print(\"使用指定X作为第一个划分属性的方法得到的决策树错误率更低，这表明贪心法虽然是一种启发式方法，在某些情况下可能会陷入局部最优，而通过特定的属性选择顺序可能会得到更好的结果。但贪心法的优点是计算效率高，在大规模数据上具有较好的实用性。\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "7. 下表汇总了具有三个属性A, B, C，以及两个类标号＋，－的数据集。建立一棵两层决策树。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|A|B|C|实例数 +|实例数 -|\n",
    "|----|----|----|----|----|\n",
    "|T|T|T|5|0|\n",
    "|F|T|T|0|20|\n",
    "|T|F|T|20|0|\n",
    "|F|F|T|0|5|\n",
    "|T|T|F|0|0|\n",
    "|F|T|F|25|0|\n",
    "|T|F|F|0|0|\n",
    "|F|F|F|0|25|"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) 根据分类错误率，哪个属性应当选作第一个划分属性？对每个属性，给出相依表和分类错误率的增益。\n",
    "\n",
    "(b) 对根结点的两个子女重复以上问题。\n",
    "\n",
    "(c) 最终的决策树错误分类的实例数是多少？\n",
    "\n",
    "(d) 使用C作为划分属性，重复(a)、(b)和(c)。\n",
    "\n",
    "(e) 使用(c)和(d)中的结果分析决策树归纳算法贪心的本质。"
   ]
  },
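  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The parts above can be checked numerically. Below is a small sketch (the helper names are our own) that scores each candidate split by the number of misclassified instances, as the error-rate criterion requires:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Each row: (A, B, C, count of '+', count of '-'); T/F encoded as 1/0\n",
    "data = np.array([\n",
    "    [1, 1, 1, 5, 0],\n",
    "    [0, 1, 1, 0, 20],\n",
    "    [1, 0, 1, 20, 0],\n",
    "    [0, 0, 1, 0, 5],\n",
    "    [1, 1, 0, 0, 0],\n",
    "    [0, 1, 0, 25, 0],\n",
    "    [1, 0, 0, 0, 0],\n",
    "    [0, 0, 0, 0, 25]\n",
    "])\n",
    "\n",
    "# Instances misclassified when a node is labeled with its majority class\n",
    "def error_count(subset):\n",
    "    return min(subset[:, 3].sum(), subset[:, 4].sum())\n",
    "\n",
    "# Total misclassification count of a binary split on the attribute in column attr\n",
    "def split_errors(subset, attr):\n",
    "    return (error_count(subset[subset[:, attr] == 0])\n",
    "            + error_count(subset[subset[:, attr] == 1]))\n",
    "\n",
    "names = {0: 'A', 1: 'B', 2: 'C'}\n",
    "\n",
    "# (a) error counts and error-rate gains of each attribute at the root\n",
    "root_errors = error_count(data)\n",
    "for a in (0, 1, 2):\n",
    "    e = split_errors(data, a)\n",
    "    print(f\"(a) split on {names[a]}: {e} errors, gain {root_errors - e}\")\n",
    "\n",
    "# (b)/(c): total errors of the two-level tree rooted at a given attribute,\n",
    "# with the best remaining attribute chosen in each branch\n",
    "def two_level_errors(first):\n",
    "    return sum(min(split_errors(data[data[:, first] == v], a)\n",
    "                   for a in (0, 1, 2) if a != first)\n",
    "               for v in (0, 1))\n",
    "\n",
    "print(\"(c) misclassified instances when the root splits on A:\", two_level_errors(0))\n",
    "print(\"(d) misclassified instances when the root splits on C:\", two_level_errors(2))"
   ]
  },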
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "决策树归纳算法贪心的本质是在每一步选择当前看起来最优的划分属性（例如这里是选择信息增益最大的属性），而不考虑该选择对后续步骤的影响，希望通过这种局部最优的选择来构建决策树。\n",
    "从 (c) 和 (d) 的结果可以看出，当选择不同的属性作为第一个划分属性时（如 (a) 中选择的属性和 (d) 中强制选择C作为第一个划分属性），最终得到的决策树的结构和错误分类的实例数可能不同。这表明贪心算法可能会陷入局部最优解，即虽然在当前步骤选择了信息增益最大的属性，但从全局来看，可能不是最优的决策树构建方式，导致最终的决策树性能不是最好的（例如错误分类实例数可能不是最少的）。\n",
    "例如，如果 (a) 中根据信息增益选择的第一个划分属性得到的决策树错误分类实例数比 (d) 中使用C作为第一个划分属性得到的错误分类实例数更少，就说明贪心算法在 (a) 中的选择在全局上是更优的，但它是基于局部最优的贪心策略做出的决策，并没有遍历所有可能的属性组合来找到真正的全局最优解。这就是贪心算法的特点，它在计算效率上有优势，但不能保证得到全局最优结果。\n",
    "通过对比不同划分属性下的结果，可以更深入地理解贪心算法在决策树构建中的这种特性，以及它在实际应用中可能存在的局限性和优势。\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
