{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# K-Nearest Neighbors (KNN)\n",
    "K-Nearest Neighbors (KNN) is a simple, intuitive supervised learning method used for both classification and regression. An informal way to picture it:\n",
    "\n",
    "Imagine you have just moved into a new neighborhood:\n",
    "\n",
    "1. Your neighbors vote on who you are. To decide which group or class you belong to, KNN effectively asks your neighbors. Here, \"neighbors\" are the data points closest to you in feature space.\n",
    "2. Find the K nearest people. You locate the K neighbors nearest to your home (say K = 5): compute the distance between your point and every known sample (commonly Euclidean distance, Manhattan distance, or another metric), sort by distance from nearest to farthest, and take the first K.\n",
    "3. Majority or weighted vote. Look at which classes those K neighbors belong to and count them. In classification you are assigned the majority class: if most of your neighbors are basketball fans, you are probably labeled a basketball fan too. Some variants weight the vote so that closer neighbors count more.\n",
    "4. No training phase. Unlike many machine learning methods, KNN does not fit a model in advance; it waits until prediction time and classifies the unknown sample directly from all the stored data.\n",
    "\n",
    "In short, the core idea of KNN is \"birds of a feather flock together\": it compares the query point against the known training samples and decides using the K most similar ones. The method is easy to understand, but it can be computationally expensive on large datasets, is sensitive to outliers, and depends on a good choice of K.\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "23b309dca8d92f98"
  },
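  {
   "cell_type": "markdown",
   "source": [
    "The four steps above can be sketched in a few lines of plain NumPy. This is an illustrative toy example (the points, labels and query are made up); the Iris workflow below uses TensorFlow instead.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "train_x = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0], [1.0, 1.0], [1.1, 0.9]])\n",
    "train_y = np.array([0, 0, 0, 1, 1])  # two classes\n",
    "query = np.array([0.05, 0.05])\n",
    "k = 3\n",
    "\n",
    "# step (2): distance from the query to every training point (Euclidean here)\n",
    "dist = np.sqrt(((train_x - query) ** 2).sum(axis=1))\n",
    "nearest = np.argsort(dist)[:k]  # indices of the k nearest neighbours\n",
    "# step (3): majority vote among their labels\n",
    "pred = np.bincount(train_y[nearest]).argmax()\n",
    "print(pred)  # 0: the query sits among the class-0 points\n",
    "```\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "8f3e21c0a9d4b101"
  },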
  {
   "cell_type": "markdown",
   "source": [
    "## 1. Import the Required Packages"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "9a8269692b6ca6e2"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn import datasets\n",
    "import tensorflow as tf"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-03-28T08:34:31.650907900Z",
     "start_time": "2024-03-28T08:33:58.380803900Z"
    }
   },
   "id": "2b39db05a1a2778b",
   "execution_count": 1
  },
  {
   "cell_type": "markdown",
   "source": [
    "## The Iris Dataset\n",
    "The Iris dataset is a classic dataset in machine learning and statistics, widely used for demonstrations and experiments. It was introduced by the British statistician and biologist Ronald Fisher in 1936 and contains 150 samples, each describing one iris plant from one of three species:\n",
    "\n",
    "- Setosa\n",
    "- Versicolour\n",
    "- Virginica\n",
    "\n",
    "Each sample has four features, all continuous values measured in centimetres:\n",
    "\n",
    "- sepal length\n",
    "- sepal width\n",
    "- petal length\n",
    "- petal width\n",
    "\n",
    "This makes the dataset a convenient choice for multivariate analysis and for testing classification algorithms. Every sample carries a label (the target variable) giving its species.\n",
    "In Python's scikit-learn library, the dataset can be loaded with the following code:\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "e4a2ff2e21e63873"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Load the built-in Iris dataset\n",
    "iris = datasets.load_iris()\n",
    "# Overview of the dataset\n",
    "print(\"Feature data:\", iris.data.shape)  # (150, 4): 150 samples, 4 features each\n",
    "print(\"Label data:\", iris.target.shape)  # (150,): 150 class labels\n",
    "print(\"Feature names:\",\n",
    "      iris.feature_names)  # ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']\n",
    "print(\"Class labels:\", iris.target_names)  # ['setosa' 'versicolor' 'virginica']"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.641201100Z"
    }
   },
   "id": "e315fbec34cc8784",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "x = np.array(iris.data)  # features, shape (150, 4)\n",
    "y = np.array(iris.target)  # class labels"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.642149800Z"
    }
   },
   "id": "17417304b40f6ff6",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "print(len(x))\n",
    "print(x)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.643147100Z"
    }
   },
   "id": "467cb5406b5bca78",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "print(len(y))\n",
    "print(y)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.644220200Z"
    }
   },
   "id": "d81a95f5abc07e2e",
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "source": [
    "## 2. Data Preprocessing"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "eb5b4259eed920da"
  },
  {
   "cell_type": "markdown",
   "source": [
    "### 1. Store the flower labels in a list for later use\n",
    "- Setosa\n",
    "- Versicolour\n",
    "- Virginica"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "ef033b7b49e464f5"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Keep the flower names in a list for later use\n",
    "flower_labels = list(iris.target_names)\n",
    "print(flower_labels)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.644220200Z"
    }
   },
   "id": "616e841cbd9d0c91",
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "source": [
    "### 2. One-hot encode the class labels\n",
    "One-hot encoding converts a categorical variable into numeric data, and is widely used in machine learning and deep learning. For a feature with N possible categories it creates an N-dimensional vector in which:\n",
    "\n",
    "- exactly one position holds 1, corresponding to the category of the current sample;\n",
    "- every other position holds 0.\n",
    "\n",
    "For example, with three colour categories red, blue and green: red is encoded as [1, 0, 0], blue as [0, 1, 0], and green as [0, 0, 1].\n",
    "This makes each category its own dimension, so a model can handle categorical features without being misled by an artificial ordering or by numeric magnitudes. Many algorithms (logistic regression, support vector machines, neural networks, ...) require numeric input, so one-hot encoding is a common preprocessing step.\n",
    "\n",
    "(1) Generate an identity matrix\n",
    "np.eye() is a NumPy function that builds a 2-D identity (or diagonal) matrix: a square matrix with ones on the main diagonal and zeros everywhere else.\n",
    "(2) One-hot encode the labels\n",
    "Using the 1-D array of integer class labels as a row index into the identity matrix selects, for each label, the matching one-hot row, giving the model a clean numeric representation of the discrete classes."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "9096a70522a0df5c"
  },
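  {
   "cell_type": "markdown",
   "source": [
    "As a concrete miniature of steps (1) and (2), with made-up labels:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "labels = np.array([0, 2, 1])  # three samples with classes 0..2\n",
    "eye = np.eye(3)               # 3x3 identity matrix\n",
    "one_hot = eye[labels]         # each label selects its own row\n",
    "print(one_hot)\n",
    "# [[1. 0. 0.]\n",
    "#  [0. 0. 1.]\n",
    "#  [0. 1. 0.]]\n",
    "```\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "8f3e21c0a9d4b102"
  },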
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# One-hot encode the class labels\n",
    "# Build the 3x3 identity matrix (one row per class)\n",
    "array_y = np.eye(len(set(y)))\n",
    "print(array_y)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.645142100Z"
    }
   },
   "id": "ef2992ceb6429516",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "y = array_y[y]  # each integer label selects its one-hot row\n",
    "print(y)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.645142100Z"
    }
   },
   "id": "3932ae7bfa45b208",
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "source": [
    "### 3. Normalization\n",
    "Scale the feature data into the range [0, 1].\n",
    "In NumPy, the axis parameter selects the dimension along which an operation runs. For a 2-D array (matrix):\n",
    "\n",
    "- axis=0 applies the operation down each column (one result per column);\n",
    "- axis=1 applies the operation across each row (one result per row).\n",
    "\n",
    "Min-max normalization: x = (x - x_min) / (x_max - x_min)\n",
    "Step 1: subtract the minimum from every value, so the old minimum becomes 0 and the old maximum becomes n = max - min.\n",
    "Step 2: divide every value by n, so the old maximum becomes 1, the old minimum stays 0, and everything else falls in (0, 1).\n",
    "This is known as linear (min-max) normalization."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "61ffee7d067eb69d"
  },
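  {
   "cell_type": "markdown",
   "source": [
    "The two steps on a tiny made-up column:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "col = np.array([1.0, 2.0, 3.0])\n",
    "scaled = (col - col.min()) / (col.max() - col.min())\n",
    "print(scaled)  # [0.  0.5 1. ]\n",
    "```\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "8f3e21c0a9d4b103"
  },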
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# For a 2-D array:\n",
    "# axis=0 reduces down the rows, giving one value per column\n",
    "# axis=1 reduces across the columns, giving one value per row\n",
    "x_min = x.min(axis=0)  # per-feature minimum\n",
    "x_max = x.max(axis=0)  # per-feature maximum\n",
    "\n",
    "print(x_max)\n",
    "print(x_min)\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-03-28T08:34:31.659877900Z",
     "start_time": "2024-03-28T08:34:31.651898900Z"
    }
   },
   "id": "fbf0d5498bc9831b",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "x = (x - x_min) / (x_max - x_min)  # min-max scaling, column-wise via broadcasting\n",
    "print(x)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-03-28T08:34:31.661872600Z",
     "start_time": "2024-03-28T08:34:31.660875600Z"
    }
   },
   "id": "ce65124b0e9f7f4f",
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "source": [
    "## 3. Split the Dataset"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "ef26a1c85e353d3b"
  },
  {
   "cell_type": "markdown",
   "source": [
    "Randomly select 80% of the samples as the training set and the remaining 20% as the test set.\n",
    "The dataset has 150 rows, so the training set has 120 rows and the test set has 30."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "6953a1491f78cdad"
  },
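  {
   "cell_type": "markdown",
   "source": [
    "For comparison, scikit-learn's train_test_split performs an equivalent random 80/20 split in one call (the random_state value here is arbitrary); the cells below build the split by hand with NumPy instead.\n",
    "\n",
    "```python\n",
    "from sklearn import datasets\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "iris = datasets.load_iris()\n",
    "Xtr, Xte, ytr, yte = train_test_split(\n",
    "    iris.data, iris.target, test_size=0.2, random_state=420)\n",
    "print(Xtr.shape, Xte.shape)  # (120, 4) (30, 4)\n",
    "```\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "8f3e21c0a9d4b104"
  },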
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Fix the random seed so every run gives the same result\n",
    "np.random.seed(420)\n",
    "split = 0.8  # 80/20 split\n",
    "\n",
    "# 150 rows in total: 120 training samples, 30 test samples\n",
    "# Draw random indices for the training set; the remainder form the test set\n",
    "train_indices = np.random.choice(len(x), round(len(x) * split), replace=False)  # 80% for training\n",
    "test_indices = np.setdiff1d(range(len(x)), train_indices)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.684968900Z"
    }
   },
   "id": "591021118566992f",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# The split datasets\n",
    "Xtrain = x[train_indices]  # training features\n",
    "Ytrain = y[train_indices]  # training labels\n",
    "\n",
    "Xtest = x[test_indices]  # test features\n",
    "Ytest = y[test_indices]  # test labels\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.692364300Z"
    }
   },
   "id": "841bdb817b68e25",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Training features\n",
    "print(Xtrain)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.699297800Z"
    }
   },
   "id": "8a02101d0da53923",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Training labels\n",
    "print(Ytrain)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.703284400Z"
    }
   },
   "id": "fedbc68665e55ed5",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Test features\n",
    "print(Xtest)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.709272500Z"
    }
   },
   "id": "4fb19964059a07a4",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Test labels\n",
    "print(Ytest)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.715252Z"
    }
   },
   "id": "ea2cf8da41edec10",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Set K, the number of neighbours\n",
    "k = 5"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.720239200Z"
    }
   },
   "id": "bcc18d9a64268ec1",
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "source": [
    "## 4. Build the Model\n",
    "\n",
    "Using the TensorFlow 2.x API.\n",
    "\n",
    "distances holds every (Manhattan) distance between the 30 test points and the 120 training points, i.e. an array of 30 rows by 120 columns.\n",
    "The Manhattan distance between two feature vectors x1 and x2 is the sum of the absolute differences of their components, i.e. sum(|x1 - x2|).\n",
    "\n",
    "Key functions:\n",
    "\n",
    "- tf.subtract: element-wise subtraction of two tensors.\n",
    "- tf.nn.top_k(input, k=1, sorted=True, name=None): returns the k largest values along the last dimension of the input tensor, together with their indices.\n",
    "- tf.gather: gathers slices of a tensor by index; the output keeps the structure of the input.\n",
    "- tf.argmax(input, axis): returns the index of the largest value along the given axis (per row or per column)."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "130f162f840a7200"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Distances from the 30 test points to the 120 training points (Manhattan)\n",
    "\"\"\"\n",
    "tf.expand_dims(input, axis) inserts a new dimension at the given axis of a tensor. Here it is used to reshape the test points.\n",
    "Xtest has shape (30, 4): 30 test points with four features each. To subtract it element-wise from the (120, 4) training data, Xtest is first reshaped to (30, 1, 4). Broadcasting then matches the size-1 axis against the 120 training rows, so the subtraction compares every test point with every training point along the feature dimension.\n",
    "After tf.expand_dims(Xtest, axis=1), the test tensor has shape (30, 1, 4) and can safely be combined with the training data, as in tf.subtract(Xtrain, d0).\n",
    "The reduce_sum in the next cell then sums the absolute differences over the feature axis, yielding the Manhattan distances.\n",
    "\"\"\"\n",
    "\n",
    "d0 = tf.expand_dims(Xtest, axis=1)  # expand to shape (30, 1, 4)\n",
    "\n",
    "d1 = tf.subtract(Xtrain, d0)  # broadcast subtraction, shape (30, 120, 4)\n",
    "\n",
    "print(d1)\n",
    "\n",
    "d2 = tf.abs(d1)  # absolute differences\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.733205500Z"
    }
   },
   "id": "580cc69efbe4de13",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Sum over the feature axis to get the distances\n",
    "\"\"\"\n",
    "tf.reduce_sum computes the sum of a tensor's elements along the given axes.\n",
    "d2 is the (30, 120, 4) tensor of absolute differences between test and training points; summing over axis 2 (the feature axis) collapses it to the (30, 120) matrix of Manhattan distances.\n",
    "\"\"\"\n",
    "distances = tf.reduce_sum(input_tensor=d2, axis=2)\n",
    "print(distances)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.738535400Z"
    }
   },
   "id": "55292fbabc6b8b32",
   "execution_count": null
  },
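  {
   "cell_type": "markdown",
   "source": [
    "The same broadcasting pattern can be checked in plain NumPy, with randomly generated stand-ins for the real data; the last line also shows how Euclidean distance would drop in as an alternative metric:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "Xtr = np.random.rand(120, 4)  # stand-in for the training features\n",
    "Xte = np.random.rand(30, 4)   # stand-in for the test features\n",
    "\n",
    "diff = Xtr - Xte[:, np.newaxis, :]            # (30, 120, 4) via broadcasting\n",
    "manhattan = np.abs(diff).sum(axis=2)          # (30, 120)\n",
    "euclidean = np.sqrt((diff ** 2).sum(axis=2))  # (30, 120)\n",
    "print(manhattan.shape, euclidean.shape)\n",
    "```\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "8f3e21c0a9d4b105"
  },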
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# tf.negative takes a tensor and returns a new tensor in which every element is the negation of the corresponding input element\n",
    "print(tf.negative(distances))"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-03-28T08:34:31.787794800Z",
     "start_time": "2024-03-28T08:34:31.745081100Z"
    }
   },
   "id": "d3c7bf9cd10e2b2f",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Indices (into the training set) of the k nearest points\n",
    "# tf.negative(distances): negate every distance. Smaller distances are better when looking for nearest neighbours, so after negation the smallest distances become the largest values.\n",
    "\n",
    "# tf.nn.top_k(tf.negative(distances), k=k): feed the negated distances to tf.nn.top_k with k set to the number of values wanted. It returns two results:\n",
    "# values: the k largest values in descending order (here, the negations of the k smallest distances);\n",
    "# indices: where those values sit in the input tensor, i.e. for each test point, the training indices of its nearest neighbours.\n",
    "\n",
    "_, top_k_indices = tf.nn.top_k(tf.negative(distances), k=k)\n",
    "\n",
    "print(top_k_indices)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.749072900Z"
    }
   },
   "id": "90c3b87b4f49f496",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Use the indices to slice out the corresponding training labels (which class each neighbour belongs to)\n",
    "top_k_labels = tf.gather(Ytrain, top_k_indices)\n",
    "\n",
    "print(top_k_labels)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.754059500Z"
    }
   },
   "id": "b4e80a53b871b32c",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Sum the one-hot labels to tally the votes per class\n",
    "predictions_sum = tf.reduce_sum(top_k_labels, axis=1)\n",
    "print(predictions_sum)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.757051800Z"
    }
   },
   "id": "e61da8e2a694721d",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# The index of the largest vote count is the predicted label\n",
    "pred = tf.argmax(input=predictions_sum, axis=1)\n",
    "print(pred)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.763129Z"
    }
   },
   "id": "fe5671be7edf4cc0",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "def prediction(Xtrain, Xtest, Ytrain, k):\n",
    "    # Manhattan distances, shape (n_test, n_train)\n",
    "    distances = tf.reduce_sum(tf.abs(tf.subtract(Xtrain, tf.expand_dims(Xtest, axis=1))), axis=2)\n",
    "    # indices (into the training set) of the k nearest points\n",
    "    _, top_k_indices = tf.nn.top_k(tf.negative(distances), k=k)  # tf.negative() negates the distances\n",
    "    # slice out the labels of those neighbours (which class each belongs to)\n",
    "    top_k_labels = tf.gather(Ytrain, top_k_indices)\n",
    "    # sum the one-hot labels to tally the votes\n",
    "    predictions_sum = tf.reduce_sum(top_k_labels, axis=1)\n",
    "    # the index of the largest vote count is the predicted label\n",
    "    pred = tf.argmax(input=predictions_sum, axis=1)\n",
    "    return pred"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.769021100Z"
    }
   },
   "id": "87b1f7d9220b6640",
   "execution_count": null
  },
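  {
   "cell_type": "markdown",
   "source": [
    "The same pipeline can also be written without TensorFlow. This NumPy sketch (prediction_np is a name introduced here) mirrors prediction step for step, checked on a tiny made-up dataset:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def prediction_np(Xtrain, Xtest, Ytrain, k):\n",
    "    # Manhattan distances, shape (n_test, n_train)\n",
    "    distances = np.abs(Xtrain - Xtest[:, np.newaxis, :]).sum(axis=2)\n",
    "    # indices of the k smallest distances per test point\n",
    "    top_k_indices = np.argsort(distances, axis=1)[:, :k]\n",
    "    # gather the one-hot labels of the neighbours and tally the votes\n",
    "    votes = Ytrain[top_k_indices].sum(axis=1)\n",
    "    return votes.argmax(axis=1)\n",
    "\n",
    "Xtr = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])\n",
    "Ytr = np.eye(2)[np.array([0, 0, 1])]  # one-hot labels for classes 0, 0, 1\n",
    "Xte = np.array([[0.04, 0.0], [0.9, 1.0]])\n",
    "print(prediction_np(Xtr, Xte, Ytr, k=1))  # [0 1]\n",
    "```\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "8f3e21c0a9d4b106"
  },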
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "pred = prediction(Xtrain, Xtest, Ytrain, k)\n",
    "\n",
    "print(pred)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.774007200Z"
    }
   },
   "id": "7b5770e53010c649",
   "execution_count": null
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Compare the predictions with the actual labels\n",
    "i, total = 0, 0\n",
    "results = zip(prediction(Xtrain, Xtest, Ytrain, k), Ytest)\n",
    "for pred, actual in results:\n",
    "    print(i, flower_labels[pred.numpy()], '\\t', flower_labels[np.argmax(actual)])\n",
    "    if pred.numpy() == np.argmax(actual):\n",
    "        total += 1  # count correct predictions\n",
    "    i += 1\n",
    "# Compute the accuracy\n",
    "accuracy = round(total / len(Xtest), 4) * 100\n",
    "print('Accuracy = ', accuracy, '%')\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "start_time": "2024-03-28T08:34:31.781762500Z"
    }
   },
   "id": "2daa205e51937602",
   "execution_count": null
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
