{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "dd29a815-2bb7-4200-a51a-9984bbe3c1af",
   "metadata": {},
   "source": [
    "Big Data: Assignment 3 (姚龙飞)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "89b9f3ed-2c25-49b0-baa4-5e7e76cd62eb",
   "metadata": {},
   "source": [
    "### 3. Consider the training set for a binary classification problem shown in Table 4-8.  \n",
    "(a) What is the entropy of the whole training set with respect to the class attribute?  \n",
    "(b) What are the information gains of $a_1$ and $a_2$ relative to these training examples?  \n",
    "(c) For the continuous attribute $a_3$, compute the information gain for every possible split.  \n",
    "(d) According to information gain, which is the best split (among $a_1$, $a_2$, and $a_3$)?"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "46aa1181-ec6f-4e68-8e4c-db8c0f89450a",
   "metadata": {},
   "source": [
    "| Instance |  $ a_1 $  |  $ a_2 $  |  $ a_3 $  | Target class |\n",
    "|------|-----------|-----------|-----------|--------|\n",
    "| 1    | T         | T         | 1.0       | +      |\n",
    "| 2    | T         | T         | 6.0       | +      |\n",
    "| 3    | T         | F         | 5.0       | -      |\n",
    "| 4    | F         | F         | 4.0       | +      |\n",
    "| 5    | F         | T         | 7.0       | -      |\n",
    "| 6    | F         | T         | 3.0       | -      |\n",
    "| 7    | F         | F         | 8.0       | -      |\n",
    "| 8    | T         | F         | 7.0       | +      |\n",
    "| 9    | F         | T         | 5.0       | -      |"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "672d346c-341d-440b-94c7-ac84e1539fef",
   "metadata": {},
   "source": [
    "Answer:\n",
    "- (a) What is the entropy of the whole training set with respect to the class attribute?\n",
    "\n",
    "The entropy is given by\n",
    "$$ H(S) = -\\sum_{i=1}^{n} p_i \\log_2(p_i), $$\n",
    "where $p_i$ is the fraction of examples belonging to class $i$.\n",
    "The data set contains 4 positive (+) and 5 negative (-) examples.  \n",
    "Therefore the entropy is:\n",
    "\n",
    " $$ H(S) = -\\left(\\frac{4}{9} \\log_2 \\frac{4}{9} + \\frac{5}{9} \\log_2 \\frac{5}{9}\\right) $$ \n",
    "\n",
    " $$ H(S) = -\\left(\\frac{4}{9} \\times (-1.1699) + \\frac{5}{9} \\times (-0.8480)\\right) $$ \n",
    "\n",
    " $$ H(S) = 0.9911 $$ \n",
    "\n",
    "\n",
    "- (b) What are the information gains of $a_1$ and $a_2$ relative to these training examples?\n",
    "\n",
    "The information gain is defined as\n",
    "$$ IG(S, A) = H(S) - \\sum_{v \\in \\mathrm{values}(A)} \\frac{|S_v|}{|S|} H(S_v), $$\n",
    "where $S_v$ is the subset of examples for which attribute $A$ has value $v$.\n",
    "\n",
    "For $a_1$:\n",
    "\n",
    "- The $T$ branch covers 4 examples: 3 positive, 1 negative.\n",
    "- The $F$ branch covers 5 examples: 1 positive, 4 negative.\n",
    "\n",
    " $$ H(S_{a_1=T}) = -\\left(\\frac{3}{4} \\log_2 \\frac{3}{4} + \\frac{1}{4} \\log_2 \\frac{1}{4}\\right) = 0.8113 $$ \n",
    "\n",
    " $$ H(S_{a_1=F}) = -\\left(\\frac{1}{5} \\log_2 \\frac{1}{5} + \\frac{4}{5} \\log_2 \\frac{4}{5}\\right) = 0.7219 $$ \n",
    "\n",
    " $$ IG(S, a_1) = H(S) - \\left(\\frac{4}{9} H(S_{a_1=T}) + \\frac{5}{9} H(S_{a_1=F})\\right) $$ \n",
    "\n",
    " $$ IG(S, a_1) = 0.9911 - \\left(\\frac{4}{9} \\times 0.8113 + \\frac{5}{9} \\times 0.7219\\right) $$ \n",
    "\n",
    " $$ IG(S, a_1) = 0.9911 - 0.7617 = 0.2294 $$ \n",
    "\n",
    "For $a_2$:\n",
    "\n",
    "- The $T$ branch covers 5 examples: 2 positive, 3 negative.\n",
    "- The $F$ branch covers 4 examples: 2 positive, 2 negative.\n",
    "\n",
    " $$ H(S_{a_2=T}) = -\\left(\\frac{2}{5} \\log_2 \\frac{2}{5} + \\frac{3}{5} \\log_2 \\frac{3}{5}\\right) = 0.9710 $$ \n",
    "\n",
    " $$ H(S_{a_2=F}) = -\\left(\\frac{2}{4} \\log_2 \\frac{2}{4} + \\frac{2}{4} \\log_2 \\frac{2}{4}\\right) = 1 $$ \n",
    "\n",
    " $$ IG(S, a_2) = H(S) - \\left(\\frac{5}{9} H(S_{a_2=T}) + \\frac{4}{9} H(S_{a_2=F})\\right) $$ \n",
    "\n",
    " $$ IG(S, a_2) = 0.9911 - \\left(\\frac{5}{9} \\times 0.9710 + \\frac{4}{9} \\times 1\\right) $$ \n",
    "\n",
    " $$ IG(S, a_2) = 0.9911 - 0.9839 = 0.0072 $$\n",
    "\n",
    "\n",
    "- (c) For the continuous attribute $a_3$, compute the information gain for every possible split.\n",
    "\n"
   ]
  },
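  {
   "cell_type": "markdown",
   "id": "2e7c1f4a-8b3d-4c5e-9f0a-6a7b8c9d0e1f",
   "metadata": {},
   "source": [
    "A quick numerical check of parts (a) and (b) with numpy, computed directly from the 9 instances in the table above (a sketch; attributes are encoded as 1 = T, 0 = F)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d4e2a7b-1c3f-4a5d-8e6b-2f3a4b5c6d7e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Class labels and attributes a1, a2 from the table (1 = T, 0 = F).\n",
    "y = np.array(['+', '+', '-', '+', '-', '-', '-', '+', '-'])\n",
    "a1 = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0])\n",
    "a2 = np.array([1, 1, 0, 0, 1, 1, 0, 0, 1])\n",
    "\n",
    "def H(labels):\n",
    "    # entropy of a label array\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    p = counts / counts.sum()\n",
    "    return -(p * np.log2(p)).sum()\n",
    "\n",
    "def gain(attr):\n",
    "    # information gain of splitting on a binary attribute\n",
    "    return H(y) - sum((attr == v).mean() * H(y[attr == v]) for v in (0, 1))\n",
    "\n",
    "print(f'H(S)   = {H(y):.4f}')    # 0.9911\n",
    "print(f'IG(a1) = {gain(a1):.4f}')  # 0.2294\n",
    "print(f'IG(a2) = {gain(a2):.4f}')  # 0.0072"
   ]
  },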
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "80439b76-cd38-4657-bfcf-7b33dd9eefab",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "a3 <= 1.0: gain = 0.1427\n",
      "a3 <= 3.0: gain = 0.0026\n",
      "a3 <= 4.0: gain = 0.0728\n",
      "a3 <= 5.0: gain = 0.0072\n",
      "a3 <= 6.0: gain = 0.0183\n",
      "a3 <= 7.0: gain = 0.1022\n",
      "a3 <= 8.0: gain = 0.0000\n",
      "Best split value for a3: 1.0, with information gain: 0.14269027946047552\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# a1 and a2 are encoded as 1 = T, 0 = F; a3 is continuous; y holds the class labels.\n",
    "X = np.array([\n",
    "    [1, 1, 1.0],\n",
    "    [1, 1, 6.0],\n",
    "    [1, 0, 5.0],\n",
    "    [0, 0, 4.0],\n",
    "    [0, 1, 7.0],\n",
    "    [0, 1, 3.0],\n",
    "    [0, 0, 8.0],\n",
    "    [1, 0, 7.0],\n",
    "    [0, 1, 5.0],\n",
    "])\n",
    "y = np.array(['+', '+', '-', '+', '-', '-', '-', '+', '-'])\n",
    "\n",
    "\n",
    "def entropy(labels):\n",
    "    # Entropy of an array of class labels; an empty subset contributes 0.\n",
    "    if len(labels) == 0:\n",
    "        return 0.0\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    probabilities = counts / counts.sum()\n",
    "    return -np.sum(probabilities * np.log2(probabilities))\n",
    "\n",
    "\n",
    "def information_gain(X, y, attribute, split_value):\n",
    "    # Information gain of the binary split attribute <= split_value.\n",
    "    parent_entropy = entropy(y)\n",
    "    left = y[X[:, attribute] <= split_value]\n",
    "    right = y[X[:, attribute] > split_value]\n",
    "    left_weight = len(left) / len(y)\n",
    "    right_weight = len(right) / len(y)\n",
    "    return parent_entropy - (left_weight * entropy(left) + right_weight * entropy(right))\n",
    "\n",
    "\n",
    "# Evaluate every observed value of a3 as a candidate threshold.\n",
    "best_info_gain = 0\n",
    "best_split = None\n",
    "\n",
    "for value in np.unique(X[:, 2]):\n",
    "    info_gain = information_gain(X, y, 2, value)\n",
    "    print(f\"a3 <= {value}: gain = {info_gain:.4f}\")\n",
    "    if info_gain > best_info_gain:\n",
    "        best_info_gain = info_gain\n",
    "        best_split = value\n",
    "\n",
    "print(f\"Best split value for a3: {best_split}, with information gain: {best_info_gain}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c4fe983c-c8ff-4653-a1e8-56bf78065656",
   "metadata": {},
   "source": [
    "- (d) According to information gain, which is the best split (among $a_1$, $a_2$, and $a_3$)?\n",
    "\n",
    "From the calculations above, $a_1$ has the highest information gain (0.2294, versus 0.0072 for $a_2$ and 0.1427 for the best split on $a_3$), so $a_1$ is the best split.\n",
    "\n",
    "- (e) According to the classification error rate, which is the best split (between $a_1$ and $a_2$)?\n",
    "\n",
    "For each branch, the error rate is the fraction of examples that do not belong to the majority class.\n",
    "\n",
    "For $a_1$:\n",
    "\n",
    "- Error rate of the $T$ branch = $\\frac{1}{4}$\n",
    "- Error rate of the $F$ branch = $\\frac{1}{5}$\n",
    "\n",
    " $$ Error(S, a_1) = \\frac{4}{9} \\times \\frac{1}{4} + \\frac{5}{9} \\times \\frac{1}{5} = \\frac{1}{9} + \\frac{1}{9} = \\frac{2}{9} $$ \n",
    "\n",
    "For $a_2$:\n",
    "\n",
    "- Error rate of the $T$ branch = $\\frac{2}{5}$\n",
    "- Error rate of the $F$ branch = $\\frac{2}{4}$\n",
    "\n",
    " $$ Error(S, a_2) = \\frac{5}{9} \\times \\frac{2}{5} + \\frac{4}{9} \\times \\frac{2}{4} = \\frac{2}{9} + \\frac{2}{9} = \\frac{4}{9} $$ \n",
    "\n",
    "$a_1$ has the lower weighted error rate, so $a_1$ is the best split.\n",
    "\n",
    "- (f) According to the Gini index, which is the best split (between $a_1$ and $a_2$)?\n",
    "\n",
    "For $a_1$:\n",
    "\n",
    " $$ Gini(S, a_1) = \\frac{4}{9} \\times \\left(1 - \\left(\\frac{3}{4}\\right)^2 - \\left(\\frac{1}{4}\\right)^2\\right) + \\frac{5}{9} \\times \\left(1 - \\left(\\frac{1}{5}\\right)^2 - \\left(\\frac{4}{5}\\right)^2\\right) $$ \n",
    "\n",
    " $$ Gini(S, a_1) = \\frac{4}{9} \\times \\frac{3}{8} + \\frac{5}{9} \\times \\frac{8}{25} = \\frac{1}{6} + \\frac{8}{45} = \\frac{31}{90} \\approx 0.3444 $$ \n",
    "\n",
    "For $a_2$:\n",
    "\n",
    " $$ Gini(S, a_2) = \\frac{5}{9} \\times \\left(1 - \\left(\\frac{2}{5}\\right)^2 - \\left(\\frac{3}{5}\\right)^2\\right) + \\frac{4}{9} \\times \\left(1 - \\left(\\frac{2}{4}\\right)^2 - \\left(\\frac{2}{4}\\right)^2\\right) $$ \n",
    "\n",
    " $$ Gini(S, a_2) = \\frac{5}{9} \\times \\frac{12}{25} + \\frac{4}{9} \\times \\frac{1}{2} = \\frac{4}{15} + \\frac{2}{9} = \\frac{22}{45} \\approx 0.4889 $$ \n",
    "\n",
    "$a_1$ has the lower weighted Gini index, so $a_1$ is the best split."
   ]
  },
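  {
   "cell_type": "markdown",
   "id": "7b5a3c1e-2d4f-4e6a-b8c0-9e8d7c6b5a4f",
   "metadata": {},
   "source": [
    "A quick check of parts (e) and (f): the weighted error rate and weighted Gini index of $a_1$ and $a_2$, computed from the same 9 instances (a sketch; 1 = T, 0 = F)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4f8e6d2c-7a9b-4c1d-a3e5-0b1c2d3e4f5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "y = np.array(['+', '+', '-', '+', '-', '-', '-', '+', '-'])\n",
    "a1 = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0])\n",
    "a2 = np.array([1, 1, 0, 0, 1, 1, 0, 0, 1])\n",
    "\n",
    "def error_rate(labels):\n",
    "    # fraction of examples outside the majority class\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    return 1 - counts.max() / counts.sum()\n",
    "\n",
    "def gini(labels):\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    p = counts / counts.sum()\n",
    "    return 1 - (p ** 2).sum()\n",
    "\n",
    "def weighted(metric, attr):\n",
    "    # weighted average of the metric over the two branches\n",
    "    return sum((attr == v).mean() * metric(y[attr == v]) for v in (0, 1))\n",
    "\n",
    "print(f'Error(a1) = {weighted(error_rate, a1):.4f}')  # 2/9 = 0.2222\n",
    "print(f'Error(a2) = {weighted(error_rate, a2):.4f}')  # 4/9 = 0.4444\n",
    "print(f'Gini(a1)  = {weighted(gini, a1):.4f}')  # 31/90 = 0.3444\n",
    "print(f'Gini(a2)  = {weighted(gini, a2):.4f}')  # 22/45 = 0.4889"
   ]
  },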
  {
   "cell_type": "markdown",
   "id": "1b6b0082-af37-48bd-ad62-182cec085377",
   "metadata": {},
   "source": [
    "### 5. Consider the following data set for a binary classification problem.\n",
    "\n",
    "| A | B | Class label |\n",
    "|---|---|---|\n",
    "| T | F | + |\n",
    "| T | T | + |\n",
    "| T | T | + |\n",
    "| T | F | - |\n",
    "| T | T | + |\n",
    "| F | F | - |\n",
    "| F | F | - |\n",
    "| F | F | - |\n",
    "| T | T | - |\n",
    "| T | F | - |\n",
    "\n",
    "(a) Compute the information gain when splitting on attributes A and B. Which attribute would the decision tree induction algorithm choose?\n",
    "\n",
    "(b) Compute the Gini index when splitting on attributes A and B. Which attribute would the decision tree induction algorithm choose?\n",
    "\n",
    "(c) Figure 4-13 shows that both entropy and the Gini index increase monotonically on the interval [0, 0.5] and decrease monotonically on [0.5, 1]. Is it possible for information gain and the Gini index gain to favor different attributes? Explain."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf490f1b-0573-4566-8492-7149ead6b159",
   "metadata": {},
   "source": [
    "Answer:\n",
    "- (a) Information gain when splitting on attributes A and B\n",
    "\n",
    "Overall entropy $E$ (4 positive and 6 negative examples):\n",
    "\n",
    " $$ E = -\\frac{4}{10} \\log_2 \\frac{4}{10} - \\frac{6}{10} \\log_2 \\frac{6}{10} = 0.9710 $$ \n",
    "\n",
    "Conditional entropy $E(A)$ of attribute A:\n",
    "\n",
    "- A=T: 7 examples, 4 positive and 3 negative.\n",
    "- A=F: 3 examples, 0 positive and 3 negative.\n",
    "\n",
    " $$ E(A) = \\frac{7}{10} \\times \\left(-\\frac{4}{7} \\log_2 \\frac{4}{7} - \\frac{3}{7} \\log_2 \\frac{3}{7}\\right) + \\frac{3}{10} \\times 0 $$ \n",
    "\n",
    " $$ E(A) = 0.7 \\times 0.9852 = 0.6897 $$ \n",
    "\n",
    "Information gain $IG(A)$:\n",
    "\n",
    " $$ IG(A) = E - E(A) = 0.9710 - 0.6897 = 0.2813 $$ \n",
    "\n",
    "Conditional entropy $E(B)$ of attribute B:\n",
    "\n",
    "- B=T: 4 examples, 3 positive and 1 negative.\n",
    "- B=F: 6 examples, 1 positive and 5 negative.\n",
    "\n",
    " $$ E(B) = \\frac{4}{10} \\times \\left(-\\frac{3}{4} \\log_2 \\frac{3}{4} - \\frac{1}{4} \\log_2 \\frac{1}{4}\\right) + \\frac{6}{10} \\times \\left(-\\frac{1}{6} \\log_2 \\frac{1}{6} - \\frac{5}{6} \\log_2 \\frac{5}{6}\\right) $$ \n",
    "\n",
    " $$ E(B) = 0.4 \\times 0.8113 + 0.6 \\times 0.6500 = 0.7145 $$ \n",
    "\n",
    "Information gain $IG(B)$:\n",
    "\n",
    " $$ IG(B) = E - E(B) = 0.9710 - 0.7145 = 0.2565 $$ \n",
    "\n",
    "The decision tree induction algorithm chooses attribute A, because $IG(A) > IG(B)$.\n",
    "\n",
    "- (b) Gini index when splitting on attributes A and B\n",
    "\n",
    "Weighted Gini index $G(A)$ of attribute A:\n",
    "\n",
    " $$ G(A) = \\frac{7}{10} \\times \\left(1 - \\left(\\frac{4}{7}\\right)^2 - \\left(\\frac{3}{7}\\right)^2\\right) + \\frac{3}{10} \\times 0 $$ \n",
    "\n",
    " $$ G(A) = 0.7 \\times 0.4898 = 0.3429 $$ \n",
    "\n",
    "Weighted Gini index $G(B)$ of attribute B:\n",
    "\n",
    " $$ G(B) = \\frac{4}{10} \\times \\left(1 - \\left(\\frac{3}{4}\\right)^2 - \\left(\\frac{1}{4}\\right)^2\\right) + \\frac{6}{10} \\times \\left(1 - \\left(\\frac{1}{6}\\right)^2 - \\left(\\frac{5}{6}\\right)^2\\right) $$ \n",
    "\n",
    " $$ G(B) = 0.4 \\times 0.3750 + 0.6 \\times 0.2778 = 0.3167 $$ \n",
    "\n",
    "The decision tree induction algorithm chooses attribute B, because $G(B) < G(A)$.\n",
    "\n",
    "- (c) Can information gain and the Gini index gain favor different attributes?\n",
    "\n",
    "Yes. Although both impurity measures rise on [0, 0.5] and fall on [0.5, 1], they weight impure subsets differently, so the weighted averages over the children can rank attributes differently. This data set is itself an example: information gain prefers A (0.2813 vs 0.2565), while the Gini index prefers B (0.3167 vs 0.3429)."
   ]
  },
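  {
   "cell_type": "markdown",
   "id": "1a2b3c4d-5e6f-4a7b-8c9d-e0f1a2b3c4d5",
   "metadata": {},
   "source": [
    "A quick check of parts (a) and (b) with numpy, assuming the textbook version of the table above (instances 2, 3, and 5 are positive); attributes are encoded as 1 = T, 0 = F."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6c5d4e3f-2a1b-4d9e-8f7a-b6c5d4e3f2a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "A = np.array([1, 1, 1, 1, 1, 0, 0, 0, 1, 1])\n",
    "B = np.array([0, 1, 1, 0, 1, 0, 0, 0, 1, 0])\n",
    "y = np.array(['+', '+', '+', '-', '+', '-', '-', '-', '-', '-'])\n",
    "\n",
    "def H(labels):\n",
    "    # entropy of a label array\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    p = counts / counts.sum()\n",
    "    return -(p * np.log2(p)).sum()\n",
    "\n",
    "def gini(labels):\n",
    "    _, counts = np.unique(labels, return_counts=True)\n",
    "    p = counts / counts.sum()\n",
    "    return 1 - (p ** 2).sum()\n",
    "\n",
    "def weighted(metric, attr):\n",
    "    # weighted average of the metric over the two branches\n",
    "    return sum((attr == v).mean() * metric(y[attr == v]) for v in (0, 1))\n",
    "\n",
    "print(f'IG(A) = {H(y) - weighted(H, A):.3f}')  # 0.281, information gain picks A\n",
    "print(f'IG(B) = {H(y) - weighted(H, B):.3f}')  # 0.256\n",
    "print(f'G(A)  = {weighted(gini, A):.3f}')  # 0.343\n",
    "print(f'G(B)  = {weighted(gini, B):.3f}')  # 0.317, Gini picks B"
   ]
  },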
  {
   "cell_type": "markdown",
   "id": "8b1ef975-be73-41c7-828e-743ecd7c64d0",
   "metadata": {},
   "source": [
    "### 7. The table below summarizes a data set with three attributes $ A, B, C $ and two class labels $ +, - $. Build a two-level decision tree.\n",
    "\n",
    "| A | B | C | Instances $ + $ | Instances $ - $ |\n",
    "|---|---|---|--------------|--------------|\n",
    "| T | T | T | 5            | 0            |\n",
    "| F | T | T | 0            | 20           |\n",
    "| T | F | T | 20           | 0            |\n",
    "| F | F | T | 0            | 5            |\n",
    "| T | T | F | 0            | 0            |\n",
    "| F | T | F | 25           | 0            |\n",
    "| T | F | F | 0            | 0            |\n",
    "| F | F | F | 0            | 25           |\n",
    "\n",
    "#### (a) According to the classification error rate, which attribute should be chosen as the first splitting attribute? For each attribute, give the contingency table and the gain in classification error rate.\n",
    "\n",
    "#### (b) Repeat the question for the two children of the root node.\n",
    "\n",
    "#### (c) How many instances are misclassified by the final decision tree?\n",
    "\n",
    "#### (d) Repeat parts (a), (b), and (c) using C as the splitting attribute.\n",
    "\n",
    "#### (e) Use the results of parts (c) and (d) to comment on the greedy nature of the decision tree induction algorithm."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9b3401ae-780c-41e5-9ff8-f2e9878941c3",
   "metadata": {},
   "source": [
    "Answer:\n",
    "- (a) Choosing the first splitting attribute\n",
    "\n",
    "There are 100 instances in total (50 +, 50 -), so the error rate before splitting is $\\frac{50}{100} = 0.5$.\n",
    "\n",
    "**Attribute A** (A=T: 25+, 0-; A=F: 25+, 50-):\n",
    "- Error rate = $\\frac{25}{100} \\times 0 + \\frac{75}{100} \\times \\frac{25}{75} = 0.25$, gain = $0.5 - 0.25 = 0.25$\n",
    "\n",
    "**Attribute B** (B=T: 30+, 20-; B=F: 20+, 30-):\n",
    "- Error rate = $\\frac{50}{100} \\times \\frac{20}{50} + \\frac{50}{100} \\times \\frac{20}{50} = 0.4$, gain = $0.5 - 0.4 = 0.1$\n",
    "\n",
    "**Attribute C** (C=T: 25+, 25-; C=F: 25+, 25-):\n",
    "- Error rate = $\\frac{50}{100} \\times \\frac{25}{50} + \\frac{50}{100} \\times \\frac{25}{50} = 0.5$, gain = $0.5 - 0.5 = 0$\n",
    "\n",
    "**Attribute A is chosen as the first splitting attribute, because its error-rate gain is the largest.**\n",
    "\n",
    "- (b) Repeating the question for the two children of the root\n",
    "\n",
    "**Child 1 (A=T):**\n",
    "- Instances: 25(+), 0(-); the node is pure, so no further split is needed (0 errors).\n",
    "\n",
    "**Child 2 (A=F):** 25(+), 50(-)\n",
    "- Splitting on B: B=T gives 25+, 20- (20 errors) and B=F gives 0+, 30- (0 errors), 20 errors in total.\n",
    "- Splitting on C: C=T gives 0+, 25- (0 errors) and C=F gives 25+, 25- (25 errors), 25 errors in total.\n",
    "- B is chosen for this child.\n",
    "\n",
    "- (c) Number of instances misclassified by the final tree\n",
    "\n",
    "- Leaf A=T: 0 errors\n",
    "- Leaf A=F, B=T (labeled +): 20 errors\n",
    "- Leaf A=F, B=F: 0 errors\n",
    "- Total errors = 20 (error rate 0.2)\n",
    "\n",
    "- (d) Repeating (a), (b), and (c) with C as the first splitting attribute\n",
    "\n",
    "At the root, splitting on C gives no gain: both C=T (25+, 25-) and C=F (25+, 25-) still have error rate 0.5.\n",
    "\n",
    "**Child 1 (C=T):** 25(+), 25(-)\n",
    "- Splitting on A: A=T gives 25+, 0- and A=F gives 0+, 25-; both leaves are pure, 0 errors.\n",
    "\n",
    "**Child 2 (C=F):** 25(+), 25(-)\n",
    "- Splitting on B: B=T gives 25+, 0- and B=F gives 0+, 25-; both leaves are pure, 0 errors.\n",
    "\n",
    "- Total errors = 0\n",
    "\n",
    "- (e) The greedy nature of decision tree induction\n",
    "\n",
    "Splitting first on A is the locally optimal choice (the largest error-rate gain at the root), yet the resulting two-level tree misclassifies 20 instances. Splitting first on C shows no gain at all at the root, yet it leads to a two-level tree with 0 errors. Greedy decision tree induction picks the locally best attribute at each step, so it can miss the globally optimal tree; this illustrates the limitation of the greedy strategy."
   ]
  }
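  ,
  {
   "cell_type": "markdown",
   "id": "8e7f6a5b-4c3d-4e2f-9a1b-c0d1e2f3a4b5",
   "metadata": {},
   "source": [
    "A quick check of parts (c) and (d): counting the misclassifications of both two-level trees directly from the counts in the table (a sketch; 1 = T, 0 = F)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d2c1b0a-9f8e-4d7c-b6a5-f4e3d2c1b0a9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each entry: (A, B, C, number of +, number of -).\n",
    "cells = [\n",
    "    (1, 1, 1, 5, 0), (0, 1, 1, 0, 20), (1, 0, 1, 20, 0), (0, 0, 1, 0, 5),\n",
    "    (1, 1, 0, 0, 0), (0, 1, 0, 25, 0), (1, 0, 0, 0, 0), (0, 0, 0, 0, 25),\n",
    "]\n",
    "\n",
    "def errors(leaf_key):\n",
    "    # misclassifications when each leaf predicts its majority class\n",
    "    leaves = {}\n",
    "    for a, b, c, n_pos, n_neg in cells:\n",
    "        k = leaf_key(a, b, c)\n",
    "        p, m = leaves.get(k, (0, 0))\n",
    "        leaves[k] = (p + n_pos, m + n_neg)\n",
    "    return sum(min(p, m) for p, m in leaves.values())\n",
    "\n",
    "# Tree 1: split on A, then on B under A = F (A = T is already pure).\n",
    "print('errors, A then B:', errors(lambda a, b, c: (a,) if a == 1 else (a, b)))  # 20\n",
    "# Tree 2: split on C, then on A under C = T and on B under C = F.\n",
    "print('errors, C first :', errors(lambda a, b, c: (c, a) if c == 1 else (c, b)))  # 0"
   ]
  }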
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
