{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c21f394d",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "# 13. K-Nearest Neighbors Regression: Implementation and Application\n",
    "\n",
    "## 13.1 Introduction\n",
    "\n",
    "In the K-nearest neighbors experiment, we learned how to apply the algorithm to classification problems. In fact, K-nearest neighbors can also be used for regression prediction. In this challenge, you will adapt the K-nearest neighbors algorithm and apply it to regression analysis. \n",
    "\n",
    "## 13.2 Key Points\n",
    "\n",
    "  * Introduction to K-nearest neighbors regression \n",
    "\n",
    "  * Implementation of K-nearest neighbors regression \n",
    "\n",
    "## 13.3 Review\n",
    "\n",
    "Let's review what we covered in the K-nearest neighbors experiment. Using K-nearest neighbors for a classification task involves the following steps: \n",
    "\n",
    "  * Data preparation: clean and process the data so that each record becomes a vector. \n",
    "\n",
    "  * Distance computation: compute the distance between the test sample and each training sample. \n",
    "\n",
    "  * Neighbor search: find the K training samples closest to the test sample. \n",
    "\n",
    "  * Classification decision: apply a decision rule to the K neighbors to obtain the class of the test sample. \n",
    "\n",
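    "The distance step above is usually implemented with the Euclidean distance; a minimal NumPy sketch (the two vectors below are made up for illustration): \n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "\n",
    "def euclidean(a, b):\n",
    "    # straight-line (Euclidean) distance between two feature vectors\n",
    "    return np.sqrt(np.sum(np.square(a - b)))\n",
    "\n",
    "\n",
    "print(euclidean(np.array([1.0, 2.0]), np.array([4.0, 6.0])))  # 5.0\n",
    "```\n",
    "\n",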
    "[![K-nearest neighbors classification illustration](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1546417333161.gif)](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1546417333161.gif)\n",
    "\n",
    "Among these steps, the classification decision is the key step that determines the label of the unknown sample. To use K-nearest neighbors for regression prediction, we only need to replace this step with one suited to regression problems: \n",
    "\n",
    "  * Classification: take a majority vote over the classes of the K neighbors to obtain the class of the unknown sample. \n",
    "\n",
    "  * Regression: average the target values of the K neighbors to obtain the predicted value of the unknown sample. \n",
    "\n",
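    "The two decision rules can be illustrated with a tiny example (the neighbor classes and targets below are made up): \n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from collections import Counter\n",
    "\n",
    "k_classes = ['a', 'b', 'a']            # hypothetical classes of the K neighbors\n",
    "k_targets = np.array([2.0, 3.0, 4.0])  # hypothetical targets of the K neighbors\n",
    "\n",
    "print(Counter(k_classes).most_common(1)[0][0])  # classification by majority vote: a\n",
    "print(np.mean(k_targets))                       # regression by averaging: 3.0\n",
    "```\n",
    "\n",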
    "The K-nearest neighbors regression algorithm is illustrated below: \n",
    "\n",
    "[![K-nearest neighbors regression illustration](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1546420986145.jpg)](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1546420986145.jpg)\n",
    "\n",
    "Next, implement the K-nearest neighbors regression algorithm according to the figure and description above, and verify it on sample data. \n",
    "\n",
    "Exercise 13.1 \n",
    "\n",
    "Challenge: implement the K-nearest neighbors regression algorithm according to the figure and description above. \n",
    "\n",
    "Requirement: use the Euclidean distance formula for the distance computation; you may refer to the experiment content for parts of the code. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bbf1c5d7",
   "metadata": {},
   "outputs": [],
   "source": [
    "def knn_regression(train_data, train_labels, test_data, k):\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "    train_data -- training data features, numpy.ndarray (2d)\n",
    "    train_labels -- training data targets, numpy.ndarray (1d)\n",
    "    test_data -- test data features, numpy.ndarray (2d)\n",
    "    k -- the value of k\n",
    "\n",
    "    Returns:\n",
    "    test_labels -- predicted test data targets, numpy.ndarray (1d)\n",
    "    \"\"\"\n",
    "\n",
    "    ### Code starts here ### (≈ 10 lines of code)\n",
    "    test_labels = None\n",
    "    ### Code ends here ###\n",
    "\n",
    "    return test_labels"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "829c0130",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Reference answer: Exercise 13.1 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3fe6301",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "\n",
    "def knn_regression(train_data, train_labels, test_data, k):\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "    train_data -- training data features, numpy.ndarray (2d)\n",
    "    train_labels -- training data targets, numpy.ndarray (1d)\n",
    "    test_data -- test data features, numpy.ndarray (2d)\n",
    "    k -- the value of k\n",
    "\n",
    "    Returns:\n",
    "    test_labels -- predicted test data targets, numpy.ndarray (1d)\n",
    "    \"\"\"\n",
    "\n",
    "    ### Code starts here ### (≈ 10 lines of code)\n",
    "    test_labels = np.array([])  # empty array that accumulates the predictions\n",
    "    for X_test in test_data:\n",
    "        distances = np.array([])\n",
    "        for each_X in train_data:  # measure similarity with the Euclidean distance\n",
    "            d = np.sqrt(np.sum(np.square(X_test - each_X)))\n",
    "            distances = np.append(distances, d)\n",
    "        sorted_distance_index = distances.argsort()  # indices sorted by ascending distance\n",
    "        k_labels = train_labels[sorted_distance_index[:k]]\n",
    "        y_test = np.mean(k_labels)  # the prediction is the mean of the k nearest targets\n",
    "        # How np.append grows test_labels on the sample data used later:\n",
    "        # iteration 1: [] -> [2.0]\n",
    "        # iteration 2: [2.0] -> [2.0, 4.0]\n",
    "        # iteration 3: [2.0, 4.0] -> [2.0, 4.0, 6.0]\n",
    "        # iteration 4: [2.0, 4.0, 6.0] -> [2.0, 4.0, 6.0, 7.0]\n",
    "        test_labels = np.append(test_labels, y_test)\n",
    "    ### Code ends here ###\n",
    "\n",
    "    return test_labels"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b374434a",
   "metadata": {},
   "source": [
    "------**Explanation of the code above**------"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f1c7a104",
   "metadata": {},
   "source": [
    "Why the loops in the KNN regression implementation are necessary:\n",
    "\n",
    "1. **The outer loop**\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49e5d87b",
   "metadata": {},
   "source": [
    "```\n",
    "for X_test in test_data:\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88881e59",
   "metadata": {},
   "source": [
    "- This loop iterates over every test sample that needs a prediction\n",
    "- Each test sample must be predicted independently\n",
    "- One pass handles the distances between a single test sample and all training samples\n",
    "\n",
    "2. **The inner loop**\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f7491cca",
   "metadata": {},
   "source": [
    "```\n",
    "for each_X in train_data:\n",
    "    d = np.sqrt(np.sum(np.square(X_test - each_X)))\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8937b334",
   "metadata": {},
   "source": [
    "- This loop computes the distance between the current test sample and every training sample\n",
    "- All training samples must be examined to find the K nearest neighbors\n",
    "- The Euclidean distance quantifies the similarity between samples\n",
    "\n",
    "3. **Could this be vectorized?**\n",
    "- The loop version is written for clarity and mirrors the algorithm step by step:\n",
    "  - each prediction independently finds its own K nearest neighbors\n",
    "  - each prediction separately averages the targets of its K neighbors\n",
    "- The distance computation and neighbor search can in fact be vectorized with NumPy broadcasting and `argsort`, trading some readability for speed\n",
    "\n",
    "This loop structure reflects the essence of KNN: **for every sample to be predicted, the entire training set is scanned to find the K most similar neighbors**.\n",
    "\n",
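    "For reference, here is a fully vectorized sketch using NumPy broadcasting; the function name `knn_regression_vec` is made up for this illustration, and `kind='stable'` keeps tie-breaking deterministic when two training samples are equally distant: \n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "\n",
    "def knn_regression_vec(train_data, train_labels, test_data, k):\n",
    "    # pairwise differences via broadcasting: shape (n_test, n_train, n_features)\n",
    "    diff = test_data[:, np.newaxis, :] - train_data[np.newaxis, :, :]\n",
    "    # Euclidean distances: shape (n_test, n_train)\n",
    "    distances = np.sqrt(np.sum(np.square(diff), axis=-1))\n",
    "    # indices of the k nearest training samples for each test sample\n",
    "    nearest = np.argsort(distances, axis=1, kind='stable')[:, :k]\n",
    "    # average the neighbors' targets row by row\n",
    "    return train_labels[nearest].mean(axis=1)\n",
    "```\n",
    "\n",
    "On the sample data later in this notebook this returns the same `array([2., 4., 6., 7.])` as the loop version. \n",
    "\n",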
    "------**End of the explanation of the code above**------"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c613a1ed",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Next, we provide a set of test data. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "993adca5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Training sample features\n",
    "train_data = np.array(\n",
    "    [[1, 1], [2, 2], [3, 3], [4, 4], [5, 5], [6, 6], [7, 7], [8, 8], [9, 9], [10, 10]]\n",
    ")\n",
    "# Training sample targets\n",
    "train_labels = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "464ea2d8",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Run the test "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "575e58d3",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([2., 4., 6., 7.])"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Test sample features\n",
    "test_data = np.array([[1.2, 1.3], [3.7, 3.5], [5.5, 6.2], [7.1, 7.9]])\n",
    "# Predict the test sample targets\n",
    "knn_regression(train_data, train_labels, test_data, k=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9722ee10",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Expected output "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bed8d21e",
   "metadata": {},
   "outputs": [],
   "source": [
    "array([2., 4., 6., 7.])"
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "main_language": "python",
   "notebook_metadata_filter": "-all"
  },
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
