{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f1027463",
   "metadata": {},
   "source": [
    "# 习题\n",
    "## 习题3.1：参照图3.1，在二维空间中给出实例点，画出k为1和2时的k近邻法构成的空间划分，并对其进行比较，体会k值选择与模型复杂度及预测准确率的关系。\n",
    "解答：\n",
    "\n",
    "解答思路：\n",
    "\n",
    "- 参照图3.1，使用已给的实例点，采用sklearn的KNeighborsClassifier分类器，对k=1和2时的模型进行训练\n",
    "- 使用matplotlib的contourf和scatter，画出k为1和2时的k近邻法构成的空间划分\n",
    "- 根据模型得到的预测结果，计算预测准确率，并设置图形标题\n",
    "- 根据程序生成的图，比较k为1和2时，k值选择与模型复杂度、预测准确率的关系\n",
    "\n",
    "解答步骤：\n",
    "\n",
    "第1、2、3步：使用已给的实例点，对k为1和2时的k近邻模型进行训练，并绘制空间划分\n",
    "\n",
    "\n",
    "### sklearn.neighbors中的KNeighborsClassifier\n",
    "KNeighborsClassifier 是 scikit-learn（sklearn）库中的一个近邻分类器类，用于基于最近邻算法进行分类。\n",
    "\n",
    "这个类的构造函数有以下参数：\n",
    "\n",
    "n_neighbors：int，默认为 5。指定要考虑的最近邻数。在进行分类时，模型会查找与样本最近的 n_neighbors 个邻居，并根据它们的类标签进行投票。\n",
    "\n",
    "weights：{'uniform', 'distance'} 或者可调用对象，默认为 'uniform'。指定近邻的权重。如果设置为 'uniform'，则所有邻居的权重都相等；如果设置为 'distance'，则权重与距离成反比，即越近的邻居权重越大；如果设置为一个可调用对象，则调用该对象以根据距离计算权重。\n",
    "\n",
    "algorithm：{'auto', 'ball_tree', 'kd_tree', 'brute'}，默认为 'auto'。指定用于计算最近邻的算法。可以选择自动选择最佳算法（'auto'），或者使用球树算法（'ball_tree'）、kd 树算法（'kd_tree'）或者暴力搜索算法（'brute'）。\n",
    "\n",
    "leaf_size：int，默认为 30。指定用于构造球树或 kd 树的叶子节点的大小。较小的叶子节点将会使树构造更快，但是查询速度可能会变慢。\n",
    "\n",
    "p：int，默认为 2。指定用于计算距离的距离度量。如果 p = 1，则使用曼哈顿距离（L1 距离）；如果 p = 2，则使用欧氏距离（L2 距离）；如果设置为其他整数，则使用闵可夫斯基距离。\n",
    "\n",
    "metric：str 或可调用对象，默认为 'minkowski'。指定用于计算距离的距离度量。可以是字符串（例如 'euclidean'、'manhattan'、'chebyshev'、'minkowski' 等）或者可调用对象，如果为可调用对象，则将使用其来计算样本之间的距离。\n",
    "\n",
    "metric_params：dict，默认为 None。指定用于计算距离的附加参数。如果 metric 参数为字符串，则该参数将被忽略；如果 metric 参数为可调用对象，则将传递给该对象。\n",
    "\n",
    "n_jobs：int 或 None，默认为 None。指定并行运行的作业数。如果设置为 -1，则使用所有可用的 CPU 核心；如果设置为 None 或 1，则不并行运行。如果 n_neighbors 大于样本数，则不会并行运行。\n",
    "\n",
    "**kwargs：关键字参数。其他参数传递给基础分类器。\n",
    "\n",
    "KNeighborsClassifier 是一个简单但有效的分类器，特别适用于小型数据集和低维特征空间。选择合适的参数值是使用该算法的关键。例如，n_neighbors 的选择会影响分类器的性能和计算成本，weights 参数会影响邻居的投票权重，algorithm 参数会影响算法的计算速度等等。通常情况下，需要根据数据集的特征和大小，以及任务的要求来调整这些参数。"
   ]
  },
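  {
   "cell_type": "markdown",
   "id": "a1f20001",
   "metadata": {},
   "source": [
    "The parameters above can be seen in action on a tiny made-up dataset (a sketch for illustration only; the points, labels, and query point are invented and not part of the exercise):\n",
    "\n",
    "```python\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "import numpy as np\n",
    "\n",
    "X = np.array([[0, 0], [1, 1], [9, 9], [10, 10]])\n",
    "y = np.array([0, 0, 1, 1])\n",
    "\n",
    "clf1 = KNeighborsClassifier(n_neighbors=1).fit(X, y)\n",
    "clf3 = KNeighborsClassifier(n_neighbors=3, weights='distance').fit(X, y)\n",
    "\n",
    "# (2, 2) sits next to the class-0 cluster, so both models assign class 0\n",
    "print(clf1.predict([[2, 2]])[0], clf3.predict([[2, 2]])[0])\n",
    "```\n",
    "\n",
    "With weights='distance' the two nearby class-0 points easily outvote the single far class-1 neighbor, so increasing n_neighbors does not change the prediction here."
   ]
  },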
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "86b445a6",
   "metadata": {},
   "source": [
    "from matplotlib.colors import ListedColormap\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "import numpy as np\n",
    "%matplotlib inline"
   ],
   "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "4c727337",
   "metadata": {},
   "source": [
    "data = np.array([[5, 12, 1],\n",
    "                 [6, 21, 0],\n",
    "                 [14, 5, 0],\n",
    "                 [16, 10, 0],\n",
    "                 [13, 19, 0],\n",
    "                 [13, 32, 1],\n",
    "                 [17, 27, 1],\n",
    "                 [18, 24, 1],\n",
    "                 [20, 20, 0],\n",
    "                 [23, 14, 1],\n",
    "                 [23, 25, 1],\n",
    "                 [23, 31, 1],\n",
    "                 [26, 8, 0],\n",
    "                 [30, 17, 1],\n",
    "                 [30, 26, 1],\n",
    "                 [34, 8, 0],\n",
    "                 [34, 19, 1],\n",
    "                 [37, 28, 1]])\n",
    "# 得到特征向量\n",
    "X_train = data[:, 0:2]\n",
    "# 得到类别向量\n",
    "y_train = data[:, 2]\n",
    "\n",
    "#（1）使用已给的实例点，采用sklearn的KNeighborsClassifier分类器，\n",
    "# 对k=1和2时的模型进行训练\n",
    "# 分别构造k=1和k=2的k近邻模型\n",
    "models = (KNeighborsClassifier(n_neighbors=1, n_jobs=-1), KNeighborsClassifier(n_neighbors=2, n_jobs=-1))\n",
    "\n",
    "# 模型训练\n",
    "models = (clf.fit(X_train, y_train) for clf in models)"
   ],
   "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "0d96c86d",
   "metadata": {},
   "source": [
    "# 设置图形标题\n",
    "titles = ('K Neighbors with k=1',\n",
    "          'K Neighbors with k=2')\n",
    "\n",
    "# 设置图形的大小和图间距\n",
    "# 用于创建一个新的图形窗口，并返回一个 Figure 对象。该函数的参数非常多，以下是一些常用的参数：\n",
    "# figsize：tuple，指定图形的宽度和高度，单位为英寸。\n",
    "# dpi：int，指定图形的分辨率，单位为每英寸点数。\n",
    "# facecolor：str，指定图形的背景色。\n",
    "# edgecolor：str，指定图形的边缘颜色。\n",
    "# linewidth：float，指定图形的边缘线宽度。\n",
    "# frameon：bool，指定是否显示图形的边框。\n",
    "# subplotpars：SubplotParams，指定图形的子图参数。\n",
    "# tight_layout：bool 或 dict，指定是否启用紧凑布局。\n",
    "# constrained_layout：bool，指定是否启用约束布局。\n",
    "# subplot_kw：dict，指定用于创建子图的关键字参数。\n",
    "# gridspec_kw：dict，指定用于创建子图网格规范的关键字参数。\n",
    "# clear：bool，指定是否清除当前图形窗口。\n",
    "# num：int 或 str，指定图形的标识号。\n",
    "# fig_id：int 或 str，指定图形的标识号。\n",
    "# id：int 或 str，指定图形的标识号。\n",
    "fig = plt.figure(figsize=(15, 5))\n",
    "# 用于调整子图之间的间距和布局。以下是该函数的常用参数：\n",
    "# left：float，指定子图左边缘与图形左边缘之间的间距，取值范围为 [0, 1]，默认为 0.125。\n",
    "# right：float，指定子图右边缘与图形右边缘之间的间距，取值范围为 [0, 1]，默认为 0.9。\n",
    "# bottom：float，指定子图底边缘与图形底边缘之间的间距，取值范围为 [0, 1]，默认为 0.1。\n",
    "# top：float，指定子图顶边缘与图形顶边缘之间的间距，取值范围为 [0, 1]，默认为 0.9。\n",
    "# wspace：float，指定子图之间的水平间距，取值范围为 [0, ∞)，默认为 0.2。\n",
    "# hspace：float，指定子图之间的垂直间距，取值范围为 [0, ∞)，默认为 0.2。\n",
    "plt.subplots_adjust(wspace=0.4, hspace=0.4)\n",
    "\n",
    "# 分别获取第1个和第2个特征向量\n",
    "X0, X1 = X_train[:, 0], X_train[:, 1]\n",
    "\n",
    "# 得到坐标轴的最小值和最大值\n",
    "x_min, x_max = X0.min() - 1, X0.max() + 1\n",
    "y_min, y_max = X1.min() - 1, X1.max() + 1\n",
    "\n",
    "# 构造网格点坐标矩阵\n",
    "# 设置0.2的目的是生成更多的网格点，数值越小，划分空间之间的分隔线越清晰\n",
    "xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.2),\n",
    "                     np.arange(y_min, y_max, 0.2))\n",
    "\n",
    "# 创建一个包含两个子图的图形，并返回这两个子图的引用。然后通过 .flatten() 方法将这两个子图转换为一个一维数组，方便对它们进行迭代或者索引访问。\n",
    "# 用于将多个可迭代对象（例如列表、元组、集合等）中的对应元素打包成一个元组，并返回一个由这些元组组成的迭代器。\n",
    "for clf, title, ax in zip(models, titles, fig.subplots(1, 2).flatten()):\n",
    "    # （2）使用matplotlib的contourf和scatter，画出k为1和2时的k近邻法构成的空间划分\n",
    "    # 对所有网格点进行预测\n",
    "    #     ravel() 函数是 NumPy 数组的一个方法，用于将多维数组展平为一维数组。\n",
    "    # c_[] 是一个用于按列连接数组的索引器对象。它允许将两个或多个数组按列连接成一个二维数组。\n",
    "    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n",
    "    Z = Z.reshape(xx.shape)\n",
    "    # 设置颜色列表\n",
    "    colors = ('red', 'green', 'lightgreen', 'gray', 'cyan')\n",
    "    # 根据类别数生成颜色\n",
    "    #     unique() 函数用于在数组中找到唯一的元素，并返回已排序的唯一元素。\n",
    "    #     ListedColormap 是 Matplotlib 库中用于创建基于列表的颜色映射的类。颜色映射在可视化中非常重要，它将数值映射到颜色，使得数据可以以视觉方式呈现出来。\n",
    "    cmap = ListedColormap(colors[:len(np.unique(Z))])\n",
    "    # 绘制分隔线，contourf函数用于绘制等高线，alpha表示颜色的透明度，一般设置成0.5\n",
    "    #     contourf 是 Matplotlib 库中用于绘制填充等高线图（Filled Contour Plot）的函数。填充等高线图与普通等高线图类似，但在等高线之间填充颜色，以突出不同区域的数据密度或者数值范围。\n",
    "    # contourf 函数的参数有以下几个：\n",
    "    # X：array_like，二维数组，可选。指定数据的 x 坐标。如果未指定，则默认为列索引。\n",
    "    # Y：array_like，二维数组，可选。指定数据的 y 坐标。如果未指定，则默认为行索引。\n",
    "    # Z：array_like，二维数组，必需。指定要绘制的数据值。数据值通常是一个与 X 和 Y 相同大小的数组。\n",
    "    # levels：int 或 array_like，可选。指定绘制等高线的层数或者等高线的数值。如果是 int 类型，则表示要绘制的等高线的层数，系统会自动生成这些层次的数值；如果是 array_like 类型，则表示要绘制的等高线的数值。\n",
    "    # colors：str 或 array_like，可选。指定等高线的填充颜色。如果是 str 类型，则表示使用一个固定的颜色来填充等高线图；如果是 array_like 类型，则表示为每个等高线层次指定一种颜色。\n",
    "    # cmap：str 或 Colormap 对象，可选。指定用于填充等高线图的颜色映射。默认值为 'viridis'。\n",
    "    # alpha：float，可选。指定填充颜色的透明度，取值范围为 [0, 1]，其中 0 表示完全透明，1 表示完全不透明。\n",
    "    # antialiased：bool，可选。指定是否对等高线进行抗锯齿处理。默认值为 True。\n",
    "    # extend：{'neither', 'both', 'min', 'max'}，可选。指定如何处理超出 levels 范围的数据值。默认值为 'neither'，表示不处理超出范围的数据值；'both' 表示在颜色条的两端添加两个颜色块以指示范围；'min' 表示只在颜色条的最小值端添加颜色块；'max' 表示只在颜色条的最大值端添加颜色块。\n",
    "    # extent：(left, right, bottom, top)，可选。指定 X 和 Y 轴的范围，用于控制绘制等高线图的大小和位置。\n",
    "    # origin：{'upper', 'lower'}, 可选。指定等高线图的起点位置，即数据坐标系中的原点位置。'upper' 表示原点位于左上角，'lower' 表示原点位于左下角。\n",
    "    # locator：ticker.Locator 对象，可选。指定等高线层次的定位器对象，用于控制等高线的间距。\n",
    "    # extendrect：bool，可选。指定是否在颜色条两端添加矩形，以指示超出范围的数据值。默认值为 False。\n",
    "    ax.contourf(xx, yy, Z, cmap=cmap, alpha=0.5)\n",
    "\n",
    "    # 绘制样本点\n",
    "    #     scatter 函数是 Matplotlib 库中用于绘制散点图的函数。散点图是一种用于显示两个变量之间关系的图表类型，其中每个点表示一个数据样本，横坐标和纵坐标分别表示两个变量的取值。\n",
    "    # 以下是 scatter 函数常用的参数：\n",
    "    # x：array_like，指定散点图中点的横坐标。\n",
    "    # y：array_like，指定散点图中点的纵坐标。\n",
    "    # s：scalar or array_like，可选。指定散点的大小。如果是标量，则所有散点的大小相同；如果是数组，则可以为每个点指定不同的大小。\n",
    "    # c：color or sequence of color，可选。指定散点的颜色。可以是颜色名称、颜色缩写、RGB 元组或者颜色列表。\n",
    "    # marker：str，可选。指定散点的标记样式。常用的标记样式包括 'o'（圆圈）、's'（正方形）、'^'（三角形）、'+'（加号）等。\n",
    "    # alpha：float，可选。指定散点的透明度，取值范围为 [0, 1]，其中 0 表示完全透明，1 表示完全不透明。\n",
    "    # cmap：Colormap，可选。指定用于指定散点颜色的颜色映射。默认为 None，表示使用当前的颜色映射。\n",
    "    # norm：Normalize，可选。指定用于对散点大小进行归一化的对象。\n",
    "    # vmin，vmax：float，可选。指定用于归一化散点大小的值的范围。\n",
    "    # linewidths：float or array_like，可选。指定散点边界线的宽度。如果是标量，则所有散点的边界线宽度相同；如果是数组，则可以为每个点指定不同的宽度。\n",
    "    # edgecolors：color or sequence of color，可选。指定散点边界线的颜色。\n",
    "    # label：str，可选。指定散点的标签，用于图例。\n",
    "    ax.scatter(X0, X1, c=y_train, s=50, edgecolors='k', cmap=cmap, alpha=0.5)\n",
    "\n",
    "    # （3）根据模型得到的预测结果，计算预测准确率，并设置图形标题\n",
    "    # 计算预测准确率\n",
    "    acc = clf.score(X_train, y_train)\n",
    "    # 设置标题\n",
    "    ax.set_title(title + ' (Accuracy: %d%%)' % (acc * 100))\n",
    "\n",
    "plt.show()"
   ],
   "outputs": []
  },
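  {
   "cell_type": "markdown",
   "id": "b2e30002",
   "metadata": {},
   "source": [
    "The meshgrid / ravel / np.c_ pipeline used above can be checked on a tiny 2x2 grid (a sketch with made-up numbers):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "xx, yy = np.meshgrid(np.arange(0, 1, 0.5), np.arange(0, 1, 0.5))\n",
    "# one row per grid point, one column per coordinate\n",
    "grid = np.c_[xx.ravel(), yy.ravel()]\n",
    "print(grid.shape)  # (4, 2)\n",
    "```\n",
    "\n",
    "Each row of grid is one (x, y) point, which is exactly the shape clf.predict expects."
   ]
  },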
  {
   "cell_type": "markdown",
   "id": "6fc4a9d2",
   "metadata": {},
   "source": [
    "![image.png](./images/exercise1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "29545c74",
   "metadata": {},
   "source": [
    "## 习题3.2：利用例题3.2构造的kd树求点𝑥=(3,4.5)𝑇 的最近邻点。"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "cf26a03d",
   "metadata": {},
   "source": [
    "![image-2.png](images/exercise2.1.png)\n",
    "\n",
    "在 sklearn.neighbors 中的 KDTree 构造函数中，主要参数包括：\n",
    "- X：数组，形状为 (n_samples, n_features)，必需。表示要构建 KD 树的数据集，每行是一个样本，每列是一个特征。\n",
    "- leaf_size：int，可选。表示叶子节点的最大大小。较小的叶子大小将导致更多的树层次，但会降低建树和查询的速度。默认值为 30。\n",
    "- metric：str or callable，可选。表示用于距离度量的指标。可以是预定义的字符串指标（如 'euclidean'、'manhattan'、'minkowski' 等），也可以是一个可调用的自定义距离函数。默认值为 'minkowski'。\n",
    "- metric_params：dict，可选。表示距离度量的附加参数。如果 metric 是字符串指标，则此参数可用于指定额外的参数（如 p、w 等）。如果 metric 是自定义距离函数，则此参数用于传递给自定义函数的额外参数。\n",
    "- balanced_tree：bool，可选。表示是否使用平衡的 KD 树。平衡的 KD 树在构建过程中会花费更多的时间，但在查询时可能会更快。默认值为 True。\n",
    "- copy_data：bool，可选。表示是否复制输入数据。如果设置为 True，则输入数据将被复制，否则将使用原始数据。默认值为 True。"
   ]
  },
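  {
   "cell_type": "markdown",
   "id": "c3d40003",
   "metadata": {},
   "source": [
    "The Minkowski family mentioned above reduces to the Manhattan distance for p = 1 and the Euclidean distance for p = 2; a quick sketch with made-up points:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "a, b = np.array([1.0, 2.0]), np.array([4.0, 6.0])\n",
    "print(np.linalg.norm(a - b, ord=1))  # Manhattan: |1-4| + |2-6| = 7.0\n",
    "print(np.linalg.norm(a - b, ord=2))  # Euclidean: sqrt(9 + 16) = 5.0\n",
    "```"
   ]
  },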
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "17ba71e7",
   "metadata": {},
   "source": [
    "import numpy as np\n",
    "from sklearn.neighbors import KDTree\n",
    "\n",
    "# 构造例题3.2的数据集\n",
    "train_data = np.array([[2, 3],\n",
    "                       [5, 4],\n",
    "                       [9, 6],\n",
    "                       [4, 7],\n",
    "                       [8, 1],\n",
    "                       [7, 2]])\n",
    "# （1）使用sklearn的KDTree类，构建平衡kd树\n",
    "# 设置leaf_size为2，表示平衡树\n",
    "tree = KDTree(train_data, leaf_size=2)\n",
    "\n",
    "# （2）使用tree.query方法，设置k=1，查找(3, 4.5)的最近邻点\n",
    "# dist表示与最近邻点的距离，ind表示最近邻点在train_data的位置\n",
    "dist, ind = tree.query(np.array([[3, 4.5]]), k=1)\n",
    "node_index = ind[0]\n",
    "\n",
    "# （3）得到最近邻点\n",
    "x1 = train_data[node_index][0][0]\n",
    "x2 = train_data[node_index][0][1]\n",
    "print(\"x点(3,4.5)的最近邻点是({0}, {1})\".format(x1, x2))"
   ],
   "outputs": []
  },
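  {
   "cell_type": "markdown",
   "id": "d4e50004",
   "metadata": {},
   "source": [
    "tree.query can also return several neighbors at once; a sketch on the same six points (k = 2 is an illustrative choice, not part of the exercise):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.neighbors import KDTree\n",
    "\n",
    "points = np.array([[2, 3], [5, 4], [9, 6], [4, 7], [8, 1], [7, 2]])\n",
    "tree2 = KDTree(points, leaf_size=2)\n",
    "# dist and ind both have shape (n_queries, k)\n",
    "dist, ind = tree2.query(np.array([[3, 4.5]]), k=2)\n",
    "print(ind[0])  # indices of the two closest points, nearest first\n",
    "```"
   ]
  },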
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "4a1d3800",
   "metadata": {},
   "source": [
    "![image.png](images/exercise2.2.png)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "79ddaa6d",
   "metadata": {},
   "source": [
    "## 习题3.3：参照算法3.3，写出输出为x的k近邻的算法。\n",
    "![image.png](images/exercise3.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "5bfaab59",
   "metadata": {},
   "source": [
    "import json\n",
    "\n",
    "\n",
    "class Node:\n",
    "    \"\"\"节点类\"\"\"\n",
    "\n",
    "    def __init__(self, value, index, left_child, right_child):\n",
    "        self.value = value.tolist()\n",
    "        self.index = index\n",
    "        self.left_child = left_child\n",
    "        self.right_child = right_child\n",
    "\n",
    "    # __repr__ 函数是 Python 中的特殊方法之一，用于定义对象的字符串表示形式。当我们在交互式环境中输入对象的名称并按下回车时，\n",
    "    # Python 解释器会调用该对象的 __repr__ 方法来获取对象的字符串表示形式，并将其显示在屏幕上。\n",
    "    def __repr__(self):\n",
    "        return json.dumps(self, indent=3, default=lambda obj: obj.__dict__, ensure_ascii=False, allow_nan=False)\n",
    "\n",
    "\n",
    "class KDTree:\n",
    "    \"\"\"kd tree类\"\"\"\n",
    "\n",
    "    def __init__(self, data):\n",
    "        # 数据集\n",
    "        # numpy.asarray() 函数的作用是将输入转换为数组。如果输入已经是一个数组，则该函数会返回输入的副本，如果输入是一个类数组对象（如列表、元组等），则会将其转换为数组。\n",
    "        self.data = np.asarray(data)\n",
    "        # kd树\n",
    "        self.kd_tree = None\n",
    "        # 创建平衡kd树\n",
    "        self._create_kd_tree(data)\n",
    "\n",
    "    def _split_sub_tree(self, data, depth=0):\n",
    "        # 算法3.2第3步：直到子区域没有实例存在时停止\n",
    "        if len(data) == 0:\n",
    "            return None\n",
    "        # 算法3.2第2步：选择切分坐标轴, 从0开始（书中是从1开始）\n",
    "        l = depth % data.shape[1]\n",
    "        # 对数据进行排序\n",
    "        # numpy.argsort() 函数返回的是数组排序后的索引数组。该函数的返回值是一个数组，数组中的元素是原始数组的索引，这些索引用于按照升序对原始数组进行排序。\n",
    "        data = data[data[:, l].argsort()]\n",
    "        # 算法3.2第1步：将所有实例坐标的中位数作为切分点\n",
    "        median_index = data.shape[0] // 2\n",
    "        # 获取结点在数据集中的位置\n",
    "        node_index = [i for i, v in enumerate(\n",
    "            self.data) if list(v) == list(data[median_index])]\n",
    "        return Node(\n",
    "            # 本结点\n",
    "            value=data[median_index],\n",
    "            # 本结点在数据集中的位置\n",
    "            index=node_index[0],\n",
    "            # 左子结点\n",
    "            left_child=self._split_sub_tree(data[:median_index], depth + 1),\n",
    "            # 右子结点\n",
    "            right_child=self._split_sub_tree(\n",
    "                data[median_index + 1:], depth + 1)\n",
    "        )\n",
    "\n",
    "    def _create_kd_tree(self, X):\n",
    "        self.kd_tree = self._split_sub_tree(X)\n",
    "\n",
    "    def query(self, data, k=1):\n",
    "        data = np.asarray(data)\n",
    "        hits = self._search(data, self.kd_tree, k=k, k_neighbor_sets=list())\n",
    "        dd = np.array([hit[0] for hit in hits])\n",
    "        ii = np.array([hit[1] for hit in hits])\n",
    "        return dd, ii\n",
    "\n",
    "    def __repr__(self):\n",
    "        return str(self.kd_tree)\n",
    "\n",
    "    @staticmethod\n",
    "    def _cal_node_distance(node1, node2):\n",
    "        \"\"\"计算两个结点之间的距离\"\"\"\n",
    "        return np.sqrt(np.sum(np.square(node1 - node2)))\n",
    "\n",
    "    def _search(self, point, tree=None, k=1, k_neighbor_sets=None, depth=0):\n",
    "        n = point.shape[1]\n",
    "        if k_neighbor_sets is None:\n",
    "            k_neighbor_sets = []\n",
    "        if tree is None:\n",
    "            return k_neighbor_sets\n",
    "\n",
    "        # (1)找到包含目标点x的叶结点\n",
    "        if tree.left_child is None and tree.right_child is None:\n",
    "            # 更新当前k近邻点集\n",
    "            return self._update_k_neighbor_sets(k_neighbor_sets, k, tree, point)\n",
    "\n",
    "        # 递归地向下访问kd树\n",
    "        if point[0][depth % n] < tree.value[depth % n]:\n",
    "            direct = 'left'\n",
    "            next_branch = tree.left_child\n",
    "        else:\n",
    "            direct = 'right'\n",
    "            next_branch = tree.right_child\n",
    "        if next_branch is not None:\n",
    "            # (3)(b)检查另一子结点对应的区域是否相交\n",
    "            k_neighbor_sets = self._search(point, tree=next_branch, k=k, depth=depth + 1,\n",
    "                                           k_neighbor_sets=k_neighbor_sets)\n",
    "\n",
    "            # 计算目标点与切分点形成的分割超平面的距离\n",
    "            temp_dist = abs(tree.value[depth % n] - point[0][depth % n])\n",
    "\n",
    "            if direct == 'left':\n",
    "                # 判断超球体是否与超平面相交\n",
    "                if not (k_neighbor_sets[0][0] < temp_dist and len(k_neighbor_sets) == k):\n",
    "                    # 如果相交，递归地进行近邻搜索\n",
    "                    # (3)(a) 判断当前结点，并更新当前k近邻点集\n",
    "                    k_neighbor_sets = self._update_k_neighbor_sets(k_neighbor_sets, k, tree, point)\n",
    "                    return self._search(point, tree=tree.right_child, k=k, depth=depth + 1,\n",
    "                                        k_neighbor_sets=k_neighbor_sets)\n",
    "            else:\n",
    "                # 判断超球体是否与超平面相交\n",
    "                if not (k_neighbor_sets[0][0] < temp_dist and len(k_neighbor_sets) == k):\n",
    "                    # 如果相交，递归地进行近邻搜索\n",
    "                    # (3)(a) 判断当前结点，并更新当前k近邻点集\n",
    "                    k_neighbor_sets = self._update_k_neighbor_sets(k_neighbor_sets, k, tree, point)\n",
    "                    return self._search(point, tree=tree.left_child, k=k, depth=depth + 1,\n",
    "                                        k_neighbor_sets=k_neighbor_sets)\n",
    "        else:\n",
    "            return self._update_k_neighbor_sets(k_neighbor_sets, k, tree, point)\n",
    "                \n",
    "        return k_neighbor_sets\n",
    "\n",
    "    def _update_k_neighbor_sets(self, best, k, tree, point):\n",
    "        # 计算目标点与当前结点的距离\n",
    "        node_distance = self._cal_node_distance(point, tree.value)\n",
    "        if len(best) == 0:\n",
    "            best.append((node_distance, tree.index, tree.value))\n",
    "        elif len(best) < k:\n",
    "            # 如果“当前k近邻点集”元素数量小于k\n",
    "            self._insert_k_neighbor_sets(best, tree, node_distance)\n",
    "        else:\n",
    "            # 叶节点距离小于“当前 𝑘 近邻点集”中最远点距离\n",
    "            if best[0][0] > node_distance:\n",
    "                best = best[1:]\n",
    "                self._insert_k_neighbor_sets(best, tree, node_distance)\n",
    "        return best\n",
    "\n",
    "    @staticmethod\n",
    "    def _insert_k_neighbor_sets(best, tree, node_distance):\n",
    "        \"\"\"将距离最远的结点排在前面\"\"\"\n",
    "        n = len(best)\n",
    "        for i, item in enumerate(best):\n",
    "            if item[0] < node_distance:\n",
    "                # 将距离最远的结点插入到前面\n",
    "                best.insert(i, (node_distance, tree.index, tree.value))\n",
    "                break\n",
    "        if len(best) == n:\n",
    "            best.append((node_distance, tree.index, tree.value))\n",
    "\n",
    "# 打印信息\n",
    "def print_k_neighbor_sets(k, ii, dd):\n",
    "    if k == 1:\n",
    "        text = \"x点的最近邻点是\"\n",
    "    else:\n",
    "        text = \"x点的%d个近邻点是\" % k\n",
    "\n",
    "    for i, index in enumerate(ii):\n",
    "        res = X_train[index]\n",
    "        if i == 0:\n",
    "            text += str(tuple(res))\n",
    "        else:\n",
    "            text += \", \" + str(tuple(res))\n",
    "\n",
    "    if k == 1:\n",
    "        text += \"，距离是\"\n",
    "    else:\n",
    "        text += \"，距离分别是\"\n",
    "    for i, dist in enumerate(dd):\n",
    "        if i == 0:\n",
    "            text += \"%.4f\" % dist\n",
    "        else:\n",
    "            text += \", %.4f\" % dist\n",
    "\n",
    "    print(text)\n",
    "\n",
    "    \n",
    "    \n",
    "X_train = np.array([[2, 3],\n",
    "                    [5, 4],\n",
    "                    [9, 6],\n",
    "                    [4, 7],\n",
    "                    [8, 1],\n",
    "                    [7, 2]])\n",
    "kd_tree = KDTree(X_train)\n",
    "# 设置k值\n",
    "k = 1\n",
    "# 查找邻近的结点\n",
    "dists, indices = kd_tree.query(np.array([[3, 4.5]]), k=k)\n",
    "# 打印邻近结点\n",
    "print_k_neighbor_sets(k, indices, dists)"
   ],
   "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "ebd5bf95",
   "metadata": {},
   "source": [
    "# print KDTree\n",
    "print(kd_tree)"
   ],
   "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "914e7370",
   "metadata": {},
   "source": [
    "# 更换数据集，使用更高维度的数据，并设置𝑘=3\n",
    "X_train = np.array([[2, 3, 4],\n",
    "                    [5, 4, 4],\n",
    "                    [9, 6, 4],\n",
    "                    [4, 7, 4],\n",
    "                    [8, 1, 4],\n",
    "                    [7, 2, 4]])\n",
    "kd_tree = KDTree(X_train)\n",
    "# 设置k值\n",
    "k = 3\n",
    "# 查找邻近的结点\n",
    "dists, indices = kd_tree.query(np.array([[3, 4.5, 4]]), k=k)\n",
    "# 打印邻近结点\n",
    "print_k_neighbor_sets(k, indices, dists)"
   ],
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "id": "215f5b56",
   "metadata": {},
   "source": [
    "# iris数据集使用KNN分类"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "21b7668e",
   "metadata": {},
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import train_test_split\n",
    "from collections import Counter\n",
    "\n",
    "# data\n",
    "iris = load_iris()\n",
    "df = pd.DataFrame(iris.data, columns=iris.feature_names)\n",
    "df['label'] = iris.target\n",
    "df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']"
   ],
   "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "c96ac3a7",
   "metadata": {},
   "source": [
    "plt.scatter(df[:50]['sepal length'], df[:50]['sepal width'], label='0')\n",
    "plt.scatter(df[50:100]['sepal length'], df[50:100]['sepal width'], label='1')\n",
    "plt.xlabel('sepal length')\n",
    "plt.ylabel('sepal width')\n",
    "plt.legend()"
   ],
   "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "b1a3e05e",
   "metadata": {},
   "source": [
    "data = np.array(df.iloc[:100, [0, 1, -1]])\n",
    "X, y = data[:,:-1], data[:,-1]\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)"
   ],
   "outputs": []
  },
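  {
   "cell_type": "markdown",
   "id": "e5f60005",
   "metadata": {},
   "source": [
    "Note that train_test_split shuffles randomly, so the accuracy computed below varies between runs; passing random_state (used here purely for illustration, the original cell does not fix it) makes the split reproducible. A sketch on made-up data:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "Xs = np.arange(10).reshape(5, 2)\n",
    "ys = np.arange(5)\n",
    "# With random_state fixed, the same split is produced on every run\n",
    "Xa, Xb, ya, yb = train_test_split(Xs, ys, test_size=0.2, random_state=0)\n",
    "print(len(Xa), len(Xb))  # 4 1\n",
    "```"
   ]
  },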
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "b8ed4ef5",
   "metadata": {},
   "source": [
    "class KNN:\n",
    "    def __init__(self, X_train, y_train, n_neighbors=3, p=2):\n",
    "        \"\"\"\n",
    "        parameter: n_neighbors 临近点个数\n",
    "        parameter: p 距离度量\n",
    "        \"\"\"\n",
    "        self.n = n_neighbors\n",
    "        self.p = p\n",
    "        self.X_train = X_train\n",
    "        self.y_train = y_train\n",
    "\n",
    "    def predict(self, X):\n",
    "        # 取出n个点\n",
    "        knn_list = []\n",
    "        for i in range(self.n):\n",
    "            # np.linalg.norm() 函数用于计算数组的范数。范数是向量空间中的一种度量方式，用于衡量向量的大小。\n",
    "            # 在机器学习和线性代数中经常用到范数，常见的有欧几里德范数（L2 范数）、曼哈顿范数（L1 范数）等。\n",
    "            dist = np.linalg.norm(X - self.X_train[i], ord=self.p)\n",
    "            knn_list.append((dist, self.y_train[i]))\n",
    "\n",
    "        for i in range(self.n, len(self.X_train)):\n",
    "            max_index = knn_list.index(max(knn_list, key=lambda x: x[0]))\n",
    "            dist = np.linalg.norm(X - self.X_train[i], ord=self.p)\n",
    "            if knn_list[max_index][0] > dist:\n",
    "                knn_list[max_index] = (dist, self.y_train[i])\n",
    "\n",
    "        # 统计\n",
    "        knn = [k[-1] for k in knn_list]\n",
    "        # Counter 类是 Python 标准库 collections 模块中的一个类，用于统计可迭代对象中各元素的出现次数。\n",
    "# Counter 类的作用包括：\n",
    "# 计数：统计可迭代对象中各元素出现的次数，返回一个字典，字典的键是元素，值是该元素出现的次数。\n",
    "# 元素频率统计：提供了一种便捷的方式来获取元素的频率，即出现次数与总元素数的比例。\n",
    "# 集合运算：Counter 对象支持数学集合运算，如并集、交集、差集等。\n",
    "# 字典增强功能：除了计数功能外，Counter 对象也可以作为字典来使用，支持字典的各种操作。\n",
    "        count_pairs = Counter(knn)\n",
    "#         max_count = sorted(count_pairs, key=lambda x: x)[-1]\n",
    "        # max_count为经过多数表决后，出现次数最多的类别\n",
    "        max_count = sorted(count_pairs.items(), key=lambda x: x[1])[-1][0]\n",
    "        return max_count\n",
    "\n",
    "    def score(self, X_test, y_test):\n",
    "        right_count = 0\n",
    "        n = 10\n",
    "        for X, y in zip(X_test, y_test):\n",
    "            label = self.predict(X)\n",
    "            if label == y:\n",
    "                right_count += 1\n",
    "        return right_count / len(X_test)\n",
    "    \n",
    "    \n",
    "clf = KNN(X_train, y_train)\n",
    "clf.score(X_test, y_test)"
   ],
   "outputs": []
  },
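  {
   "cell_type": "markdown",
   "id": "f6a70006",
   "metadata": {},
   "source": [
    "The majority vote in predict boils down to picking the most frequent label in knn_list; Counter.most_common does the same thing directly (a sketch with made-up labels):\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "labels = [1.0, 0.0, 1.0]\n",
    "count_pairs = Counter(labels)\n",
    "# most_common(1) returns [(label, count)] for the most frequent label\n",
    "print(count_pairs.most_common(1)[0][0])  # 1.0\n",
    "```"
   ]
  },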
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "4269f536",
   "metadata": {},
   "source": [
    "test_point = [6.0, 3.0]\n",
    "print('Test Point: {}'.format(clf.predict(test_point)))"
   ],
   "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "586480f5",
   "metadata": {},
   "source": [
    "plt.scatter(df[:50]['sepal length'], df[:50]['sepal width'], label='0')\n",
    "plt.scatter(df[50:100]['sepal length'], df[50:100]['sepal width'], label='1')\n",
    "plt.plot(test_point[0], test_point[1], 'bo', label='test_point')\n",
    "plt.xlabel('sepal length')\n",
    "plt.ylabel('sepal width')\n",
    "plt.legend()"
   ],
   "outputs": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
