<!DOCTYPE html>
<html lang="en-US">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <title>《统计学习方法》第 3 章“k 近邻法”学习笔记 | 算法不好玩</title>
    <meta name="generator" content="VuePress 1.8.2">
    <link rel="icon" href="/book/logo.png">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.7.1/katex.min.css">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/github-markdown-css/2.10.0/github-markdown.min.css">
    <meta name="description" content="">
    
    <link rel="preload" href="/book/assets/css/0.styles.e159a327.css" as="style"><link rel="preload" href="/book/assets/js/app.a4355b0d.js" as="script"><link rel="preload" href="/book/assets/js/2.ba785ee6.js" as="script"><link rel="preload" href="/book/assets/js/43.065f077f.js" as="script"><link rel="prefetch" href="/book/assets/js/10.b6950be1.js"><link rel="prefetch" href="/book/assets/js/11.f4ee7d6e.js"><link rel="prefetch" href="/book/assets/js/12.50fe1ace.js"><link rel="prefetch" href="/book/assets/js/13.42356fcc.js"><link rel="prefetch" href="/book/assets/js/14.e740c742.js"><link rel="prefetch" href="/book/assets/js/15.05c628f7.js"><link rel="prefetch" href="/book/assets/js/16.bd9bc9d5.js"><link rel="prefetch" href="/book/assets/js/17.e120b4fc.js"><link rel="prefetch" href="/book/assets/js/18.398213f8.js"><link rel="prefetch" href="/book/assets/js/19.e29ad0b2.js"><link rel="prefetch" href="/book/assets/js/20.14e30c1c.js"><link rel="prefetch" href="/book/assets/js/21.4f05f6c9.js"><link rel="prefetch" href="/book/assets/js/22.98fbf199.js"><link rel="prefetch" href="/book/assets/js/23.285387f4.js"><link rel="prefetch" href="/book/assets/js/24.852addbe.js"><link rel="prefetch" href="/book/assets/js/25.d30ade13.js"><link rel="prefetch" href="/book/assets/js/26.23dfa040.js"><link rel="prefetch" href="/book/assets/js/27.b6eae7df.js"><link rel="prefetch" href="/book/assets/js/28.97878b08.js"><link rel="prefetch" href="/book/assets/js/29.7217a3d0.js"><link rel="prefetch" href="/book/assets/js/3.f0c15194.js"><link rel="prefetch" href="/book/assets/js/30.0ced5a5a.js"><link rel="prefetch" href="/book/assets/js/31.8f432033.js"><link rel="prefetch" href="/book/assets/js/32.aac9aa62.js"><link rel="prefetch" href="/book/assets/js/33.db835477.js"><link rel="prefetch" href="/book/assets/js/34.a1a59c2a.js"><link rel="prefetch" href="/book/assets/js/35.ed2ef96b.js"><link rel="prefetch" href="/book/assets/js/36.48099c55.js"><link rel="prefetch" 
href="/book/assets/js/37.32827612.js"><link rel="prefetch" href="/book/assets/js/38.58abec0e.js"><link rel="prefetch" href="/book/assets/js/39.f6eb3874.js"><link rel="prefetch" href="/book/assets/js/4.13ea3b32.js"><link rel="prefetch" href="/book/assets/js/40.80523ed8.js"><link rel="prefetch" href="/book/assets/js/41.f7b72b7a.js"><link rel="prefetch" href="/book/assets/js/42.44e40de9.js"><link rel="prefetch" href="/book/assets/js/44.386d9aea.js"><link rel="prefetch" href="/book/assets/js/45.d8a88f16.js"><link rel="prefetch" href="/book/assets/js/46.33bc2083.js"><link rel="prefetch" href="/book/assets/js/47.e7767237.js"><link rel="prefetch" href="/book/assets/js/48.9bb76138.js"><link rel="prefetch" href="/book/assets/js/49.c2ca5c29.js"><link rel="prefetch" href="/book/assets/js/5.32eda204.js"><link rel="prefetch" href="/book/assets/js/50.1242fe24.js"><link rel="prefetch" href="/book/assets/js/51.e8f8117d.js"><link rel="prefetch" href="/book/assets/js/52.eae5648f.js"><link rel="prefetch" href="/book/assets/js/53.39876d83.js"><link rel="prefetch" href="/book/assets/js/6.1c662163.js"><link rel="prefetch" href="/book/assets/js/7.29470445.js"><link rel="prefetch" href="/book/assets/js/8.814b737f.js"><link rel="prefetch" href="/book/assets/js/9.d4211262.js">
    <link rel="stylesheet" href="/book/assets/css/0.styles.e159a327.css">
  </head>
  <body>
    <div id="app" data-server-rendered="true"><div class="theme-container no-sidebar"><header class="navbar"><div class="sidebar-button"><svg xmlns="http://www.w3.org/2000/svg" aria-hidden="true" role="img" viewBox="0 0 448 512" class="icon"><path fill="currentColor" d="M436 124H12c-6.627 0-12-5.373-12-12V80c0-6.627 5.373-12 12-12h424c6.627 0 12 5.373 12 12v32c0 6.627-5.373 12-12 12zm0 160H12c-6.627 0-12-5.373-12-12v-32c0-6.627 5.373-12 12-12h424c6.627 0 12 5.373 12 12v32c0 6.627-5.373 12-12 12zm0 160H12c-6.627 0-12-5.373-12-12v-32c0-6.627 5.373-12 12-12h424c6.627 0 12 5.373 12 12v32c0 6.627-5.373 12-12 12z"></path></svg></div> <a href="/book/" class="home-link router-link-active"><!----> <span class="site-name">算法不好玩</span></a> <div class="links"><div class="search-box"><input aria-label="Search" autocomplete="off" spellcheck="false" value=""> <!----></div> <nav class="nav-links can-hide"><div class="nav-item"><a href="/book/" class="nav-link">
  主页
</a></div><div class="nav-item"><a href="/book/guide/" class="nav-link">
  关于我
</a></div><div class="nav-item"><div class="dropdown-wrapper"><button type="button" aria-label="基础算法与数据结构" class="dropdown-title"><span class="title">基础算法与数据结构</span> <span class="arrow down"></span></button> <button type="button" aria-label="基础算法与数据结构" class="mobile-dropdown-title"><span class="title">基础算法与数据结构</span> <span class="arrow right"></span></button> <ul class="nav-dropdown" style="display:none;"><li class="dropdown-item"><h4>
          排序算法
        </h4> <ul class="dropdown-subitem-wrapper"><li class="dropdown-subitem"><a href="/book/algs/01-binary-search/" class="nav-link">
  第 1 章 二分查找
</a></li><li class="dropdown-subitem"><a href="/book/algs/recursion/" class="nav-link">
  第 2 章 递归
</a></li></ul></li><li class="dropdown-item"><h4>
          数组里的算法
        </h4> <ul class="dropdown-subitem-wrapper"><li class="dropdown-subitem"><a href="/book/algs/01-binary-search/" class="nav-link">
  第 4 章 滑动窗口
</a></li><li class="dropdown-subitem"><a href="/book/algs/02-basic-sorting/" class="nav-link">
  第 5 章 双指针
</a></li></ul></li></ul></div></div><div class="nav-item"><div class="dropdown-wrapper"><button type="button" aria-label="威威道来" class="dropdown-title"><span class="title">威威道来</span> <span class="arrow down"></span></button> <button type="button" aria-label="威威道来" class="mobile-dropdown-title"><span class="title">威威道来</span> <span class="arrow right"></span></button> <ul class="nav-dropdown" style="display:none;"><li class="dropdown-item"><!----> <a href="/book/talk-show/2021-04/" class="nav-link">
  2021 年 4 月
</a></li></ul></div></div><div class="nav-item"><a href="https://leetcode-cn.com/" target="_blank" rel="noopener noreferrer" class="nav-link external">
  力扣
  <span><svg xmlns="http://www.w3.org/2000/svg" aria-hidden="true" focusable="false" x="0px" y="0px" viewBox="0 0 100 100" width="15" height="15" class="icon outbound"><path fill="currentColor" d="M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"></path> <polygon fill="currentColor" points="45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"></polygon></svg> <span class="sr-only">(opens new window)</span></span></a></div><div class="nav-item"><a href="https://gitee.com/liweiwei1419/book/pages" target="_blank" rel="noopener noreferrer" class="nav-link external">
  Gitee 部署页面
  <span><svg xmlns="http://www.w3.org/2000/svg" aria-hidden="true" focusable="false" x="0px" y="0px" viewBox="0 0 100 100" width="15" height="15" class="icon outbound"><path fill="currentColor" d="M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"></path> <polygon fill="currentColor" points="45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"></polygon></svg> <span class="sr-only">(opens new window)</span></span></a></div> <!----></nav></div></header> <div class="sidebar-mask"></div> <aside class="sidebar"><nav class="nav-links"><div class="nav-item"><a href="/book/" class="nav-link">
  主页
</a></div><div class="nav-item"><a href="/book/guide/" class="nav-link">
  关于我
</a></div><div class="nav-item"><div class="dropdown-wrapper"><button type="button" aria-label="基础算法与数据结构" class="dropdown-title"><span class="title">基础算法与数据结构</span> <span class="arrow down"></span></button> <button type="button" aria-label="基础算法与数据结构" class="mobile-dropdown-title"><span class="title">基础算法与数据结构</span> <span class="arrow right"></span></button> <ul class="nav-dropdown" style="display:none;"><li class="dropdown-item"><h4>
          排序算法
        </h4> <ul class="dropdown-subitem-wrapper"><li class="dropdown-subitem"><a href="/book/algs/01-binary-search/" class="nav-link">
  第 1 章 二分查找
</a></li><li class="dropdown-subitem"><a href="/book/algs/recursion/" class="nav-link">
  第 2 章 递归
</a></li></ul></li><li class="dropdown-item"><h4>
          数组里的算法
        </h4> <ul class="dropdown-subitem-wrapper"><li class="dropdown-subitem"><a href="/book/algs/01-binary-search/" class="nav-link">
  第 4 章 滑动窗口
</a></li><li class="dropdown-subitem"><a href="/book/algs/02-basic-sorting/" class="nav-link">
  第 5 章 双指针
</a></li></ul></li></ul></div></div><div class="nav-item"><div class="dropdown-wrapper"><button type="button" aria-label="威威道来" class="dropdown-title"><span class="title">威威道来</span> <span class="arrow down"></span></button> <button type="button" aria-label="威威道来" class="mobile-dropdown-title"><span class="title">威威道来</span> <span class="arrow right"></span></button> <ul class="nav-dropdown" style="display:none;"><li class="dropdown-item"><!----> <a href="/book/talk-show/2021-04/" class="nav-link">
  2021 年 4 月
</a></li></ul></div></div><div class="nav-item"><a href="https://leetcode-cn.com/" target="_blank" rel="noopener noreferrer" class="nav-link external">
  力扣
  <span><svg xmlns="http://www.w3.org/2000/svg" aria-hidden="true" focusable="false" x="0px" y="0px" viewBox="0 0 100 100" width="15" height="15" class="icon outbound"><path fill="currentColor" d="M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"></path> <polygon fill="currentColor" points="45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"></polygon></svg> <span class="sr-only">(opens new window)</span></span></a></div><div class="nav-item"><a href="https://gitee.com/liweiwei1419/book/pages" target="_blank" rel="noopener noreferrer" class="nav-link external">
  Gitee 部署页面
  <span><svg xmlns="http://www.w3.org/2000/svg" aria-hidden="true" focusable="false" x="0px" y="0px" viewBox="0 0 100 100" width="15" height="15" class="icon outbound"><path fill="currentColor" d="M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"></path> <polygon fill="currentColor" points="45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"></polygon></svg> <span class="sr-only">(opens new window)</span></span></a></div> <!----></nav>  <!----> </aside> <main class="page"> <div class="theme-default-content content__default"><div class="custom-block tip"><p class="custom-block-title">TIP</p> <p>本文介绍了 k 近邻法。</p></div> <p><img src="https://ws2.sinaimg.cn/large/006tKfTcly1g0u0rmgzw1j32450u079i.jpg" alt="k 近邻法"></p> <p>“k 近邻法”在算法层面理解容易，可以从使用“k 近邻法”处理分类问题入手，解释机器学习中的各种概念和一般流程。</p> <h3 id="k-近邻法的基本思想"><a href="#k-近邻法的基本思想" class="header-anchor">#</a> k 近邻法的基本思想</h3> <p>“k 近邻法” 几乎是所有机器学习算法中最简单的算法，它用于分类的核心思想就是“物以类聚，人以群分”，即未标记样本的类别由距离其最近的 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 个邻居投票来决定。</p> <p><img src="http://upload-images.jianshu.io/upload_images/414598-7755e3944c95b26c.jpg?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240" alt="说明：图片来自周志华《机器学习》第 10 章第 1 节。"></p> <p>（图片来自周志华《机器学习》第 10 章第 1 节）</p> <h3 id="有监督学习、分类学习、回归"><a href="#有监督学习、分类学习、回归" class="header-anchor">#</a> 有监督学习、分类学习、回归</h3> <p>有监督学习的数据包含了“特征”和“标签”。根据这些数据对新数据的目标变量进行预测：如果待预测的目标变量是离散值，那么这一类问题称之为“分类问题”；如果待预测的目标变量是连续值，那么这一类问题称之为“回归问题”。</p> <h3 id="评估算法时不能使用在训练过程中出现过的数据"><a href="#评估算法时不能使用在训练过程中出现过的数据" class="header-anchor">#</a> 评估算法时不能使用在训练过程中出现过的数据</h3> <p>这一点很像我们以前学习的时候，常常会买一本练习册做题，如果这本练习册没有参考答案，你就不知道自己做对与否。而真正检验你学习水平的大型考试，例如期中考试、期末考试、中考、高考都是重新出题，如果使用以前出现过的题目，则不能检验你学习的真实水平，因为你有可能只是记住了问题的解法，而没有理解它。</p> <p>这就是分离训练数据集和测试数据集的必要性。因此采集到的所有<strong>带标签</strong>的样本，应该分离一部分用于测试。那么评估算法应该采用什么指标呢？</p> <h3 
id="评估算法好坏的指标"><a href="#评估算法好坏的指标" class="header-anchor">#</a> 评估算法好坏的指标</h3> <p>一般地，“<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻法”用于分类问题时，使用“准确率”作为指标。但是在数据分布不平衡的时候，就不能使用准确率了，而应该使用精准率、召回率、混淆矩阵等，甚至应该看看 AUC。</p> <h3 id="超参数"><a href="#超参数" class="header-anchor">#</a> 超参数</h3> <p>超参数是在算法执行之前人为定义的。“<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻法” 中的 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 就是一个超参数，它决定了模型的复杂度。</p> <p>“<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻法” 还有其它超参数：使用的距离是欧氏距离还是闵氏距离、闵氏距离的参数 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="p"></mjx-c></mjx-mi></mjx-math></mjx-container> 取多少、投票的时候是“平权投票”还是“加权投票”。</p> <h3 id="模型的复杂度、过拟合、欠拟合"><a href="#模型的复杂度、过拟合、欠拟合" class="header-anchor">#</a> 模型的复杂度、过拟合、欠拟合</h3> <p><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 越小，模型就越复杂。极端情况下 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi><mjx-mo space="4" class="mjx-n"><mjx-c c="="></mjx-c></mjx-mo><mjx-mn space="4" class="mjx-n"><mjx-c c="1"></mjx-c></mjx-mn></mjx-math></mjx-container> ，新来的预测数据的类别取决于训练样本中离它最近的那个样本的类别，如果这个样本恰好是标记错误的样本，预测就可能发生错误。因为它看不到更多数据，就有可能过拟合，学习到的不是样本数据的一般规律。</p> <p><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 越大，模型就越简单。极端情况下 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi 
class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 等于所有训练样本的个数，此时每新来一个数据做预测的时候，就直接把训练样本中出现最多的那个类别返回就行了。这样的模型过于简单，以致于失去了算法真正的意义：所有的预测数据都返回同一个值，对数据不能很好地预测，这是欠拟合。</p> <h3 id="网格搜索与交叉验证"><a href="#网格搜索与交叉验证" class="header-anchor">#</a> 网格搜索与交叉验证</h3> <p>网格搜索其实就是暴力搜索，把我们认为可能合理的超参数和超参数的组合输入算法。而<strong>评价一组超参数的好坏一定不能使用测试数据集，一般的做法是从训练数据集中再分离出一部分数据用于验证超参数的好坏，并且这种验证超参数好坏的做法要使用不同的训练数据集的子集重复多次，这就是交叉验证</strong>。</p> <p><strong>交叉验证用于选择超参数</strong>，由于分离数据集其实带有一定的随机性，把所有的数据都看一遍、多次取平均的做法，比起一次性随机地使用训练数据集的一部分子集作为验证数据集要靠谱。</p> <p>网格搜索中就用到了交叉验证，它已经被框架封装了起来，不用我们手动实现。</p> <h3 id="数据标准化"><a href="#数据标准化" class="header-anchor">#</a> 数据标准化</h3> <p>数据标准化是我刚开始学习机器学习算法的时候经常忽略的环节。由于 k 近邻法使用距离作为度量，数据在量纲上的统一是十分重要的，数据标准化则可以避免计算出来的距离被量纲大的特征所主导。</p> <p>后面我们可以看到数据标准化在梯度下降中也发挥很大的作用，还有 SVM、K-means 这些基于距离度量的算法，都要求我们对数据进行标准化。</p> <p>例如：《机器学习实战》提供的例子。</p> <p><img src="http://upload-images.jianshu.io/upload_images/414598-a9fddfeaca8c4979.jpg?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240" alt="image-20190217153027062"></p> <p>在数据标准化这件事上，还要注意：训练数据集和测试数据集一定都使用相同的标准化方式，即训练数据集的标准化方式，请看下面的公式。</p>
<p>标准化的训练数据集 = （原始训练数据集数据 − 训练数据集的平均值）÷ 训练数据集的标准差</p>
<p>标准化的测试数据集 = （原始测试数据集数据 − 训练数据集的平均值）÷ 训练数据集的标准差</p>
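<p>下面用一小段 Python 代码示意上面的公式（仅为示意，其中的 X_train、X_test 是为演示自拟的小数据，实际中换成真实的特征矩阵即可）：</p>

```python
import numpy as np

# 示意：测试数据集标准化时，必须使用“训练数据集”的均值和标准差
X_train = np.array([[1.0, 100.0],
                    [2.0, 300.0],
                    [3.0, 500.0]])
X_test = np.array([[2.0, 200.0]])

mean = X_train.mean(axis=0)  # 只在训练数据集上计算平均值
std = X_train.std(axis=0)    # 只在训练数据集上计算标准差

X_train_std = (X_train - mean) / std
X_test_std = (X_test - mean) / std  # 测试集同样减训练集均值、除以训练集标准差

print(X_train_std)
print(X_test_std)
```

<p>实际项目中可以使用 scikit-learn 的 <code>StandardScaler</code>：在训练数据集上 <code>fit</code>，再分别对训练数据集和测试数据集 <code>transform</code>，正好对应这里的两步。</p>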

<p>测试数据集在标准化的时候，一定也要使用“训练数据集”的平均值和“训练数据集”的标准差，而不能使用测试数据集的。</p> <p>原因其实很简单：</p> <p>1、标准化其实可以视为算法的一部分，既然数据集都减去了一个数，然后除以一个数，这两个数对于所有的数据来说，就要一视同仁；</p> <p>2、训练数据集其实很少，在预测新样本的时候，新样本就更少得可怜，如果新样本就一个数据，它的均值就是它自己，标准差是 0 ，这根本就不合理。</p> <h3 id="k-近邻算法的三要素"><a href="#k-近邻算法的三要素" class="header-anchor">#</a> k 近邻算法的三要素</h3> <p>1、超参数：<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> ；</p> <p>2、距离的定义（例如：欧氏距离）；</p> <p>3、决策的规则（例如：投票表决，或者加权投票）。</p> <h3 id="手写-k-近邻法的核心代码"><a href="#手写-k-近邻法的核心代码" class="header-anchor">#</a> 手写 k 近邻法的核心代码</h3> <p>Python 代码：</p> <div class="language-python line-numbers-mode"><pre class="language-python"><code>distances <span class="token operator">=</span> <span class="token punctuation">[</span>np<span class="token punctuation">.</span>linalg<span class="token punctuation">.</span>norm<span class="token punctuation">(</span>point <span class="token operator">-</span> X<span class="token punctuation">)</span> <span class="token keyword">for</span> point <span class="token keyword">in</span> X_train<span class="token punctuation">]</span>
<span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">&quot;打印每个点距离待测点的距离：&quot;</span><span class="token punctuation">)</span>
<span class="token keyword">for</span> index<span class="token punctuation">,</span> distance <span class="token keyword">in</span> <span class="token builtin">enumerate</span><span class="token punctuation">(</span>distances<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">&quot;[{}] {}&quot;</span><span class="token punctuation">.</span><span class="token builtin">format</span><span class="token punctuation">(</span>index<span class="token punctuation">,</span> np<span class="token punctuation">.</span><span class="token builtin">round</span><span class="token punctuation">(</span>distance<span class="token punctuation">,</span> <span class="token number">2</span><span class="token punctuation">)</span><span class="token punctuation">)</span><span class="token punctuation">)</span>

sorted_index <span class="token operator">=</span> np<span class="token punctuation">.</span>argsort<span class="token punctuation">(</span>distances<span class="token punctuation">)</span>
<span class="token keyword">print</span><span class="token punctuation">(</span>y_train<span class="token punctuation">[</span>sorted_index<span class="token punctuation">]</span><span class="token punctuation">)</span>

k <span class="token operator">=</span> <span class="token number">6</span>
topK <span class="token operator">=</span> y_train<span class="token punctuation">[</span>sorted_index<span class="token punctuation">]</span><span class="token punctuation">[</span><span class="token punctuation">:</span>k<span class="token punctuation">]</span>
<span class="token keyword">print</span><span class="token punctuation">(</span>topK<span class="token punctuation">)</span>

<span class="token keyword">from</span> collections <span class="token keyword">import</span> Counter

votes <span class="token operator">=</span> Counter<span class="token punctuation">(</span>topK<span class="token punctuation">)</span>
mc <span class="token operator">=</span> votes<span class="token punctuation">.</span>most_common<span class="token punctuation">(</span>n<span class="token operator">=</span><span class="token number">1</span><span class="token punctuation">)</span>
<span class="token keyword">print</span><span class="token punctuation">(</span>mc<span class="token punctuation">)</span>
<span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">&quot;根据投票得出的点 X 的标签为：&quot;</span><span class="token punctuation">,</span> mc<span class="token punctuation">[</span><span class="token number">0</span><span class="token punctuation">]</span><span class="token punctuation">[</span><span class="token number">0</span><span class="token punctuation">]</span><span class="token punctuation">)</span>
</code></pre> <div class="line-numbers-wrapper"><span class="line-number">1</span><br><span class="line-number">2</span><br><span class="line-number">3</span><br><span class="line-number">4</span><br><span class="line-number">5</span><br><span class="line-number">6</span><br><span class="line-number">7</span><br><span class="line-number">8</span><br><span class="line-number">9</span><br><span class="line-number">10</span><br><span class="line-number">11</span><br><span class="line-number">12</span><br><span class="line-number">13</span><br><span class="line-number">14</span><br><span class="line-number">15</span><br><span class="line-number">16</span><br><span class="line-number">17</span><br><span class="line-number">18</span><br></div></div><h3 id="通过-k-近邻法了解机器学习项目的执行流程"><a href="#通过-k-近邻法了解机器学习项目的执行流程" class="header-anchor">#</a> 通过 k 近邻法了解机器学习项目的执行流程</h3> <p>使用 iris 鸢尾花数据集。</p> <p>1、分割训练数据集和测试数据集；</p> <p>2、只用训练数据集 <code>fit</code> 得到均值和标准差；</p> <p>3、分别对训练数据集和测试数据集进行 <code>transform</code>，注意：这里只需要传入特征矩阵（X_train 或 X_test）就可以了，不用传入标签；</p> <p>4、使用 k 近邻法进行评分，注意：传入的特征矩阵一定要经过数据标准化。</p> <p><img src="http://upload-images.jianshu.io/upload_images/414598-b52452670e124e08.jpg?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240" alt="image-20190217154816935"></p> <h3 id="k​-近邻法的特点"><a href="#k​-近邻法的特点" class="header-anchor">#</a> k​ 近邻法的特点</h3> <p>在整理这部分内容的时候，发现优点和缺点其实要在一定的前提下讨论，所以就干脆放在一起，说一说 k 近邻法的特点。</p> <p>k 近邻法是一个懒惰学习的算法，没有显式的学习过程，即没有训练的步骤，是一个基于记忆的学习算法。</p> <p>k 近邻法是一种在线学习技术，新数据可以直接加入数据集而不必进行重新训练，但与此同时，在线学习的计算量大，对内存的需求也较大。基本的 k 近邻法每预测一个“点”的分类都会重新进行一次<strong>全局</strong>运算，对于样本容量大的数据集计算量比较大。k 近邻法的优化实现：kd 树，即给训练数据建立树结构一样的索引，期望快速找到 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 个邻居，以避免线性扫描。</p> <p>“多数表决”规则等价于“经验风险最小化”，即算法在训练数据集上“风险”最小。</p> <p>对异常值和噪声有较高的容忍度：在计算距离的时候，异常值和噪声离待预测的点的距离会比较远，且个数较少，就不会参与最终结果的投票。</p> <p><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" 
MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法天生就支持多分类，类似还有决策树、朴素贝叶斯分类，它们区别于感知机、逻辑回归、SVM 这类原生只用于二分类问题的算法。</p> <h3 id="维度灾难"><a href="#维度灾难" class="header-anchor">#</a> <strong>维度灾难</strong></h3> <p>在高维空间中，点与点之间的距离普遍会变得非常远，基于距离的度量会逐渐失去区分能力。</p> <p>样本不平衡时，预测偏差会比较大。</p> <p><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 值的大小得依靠经验或者交叉验证来选择，不能自己拍脑门随便指定一个。</p> <p>可以增加邻居的权重：距离越近，权重越高，对应参数：<code>weights</code>。</p> <p>当数据采样不均匀的时候，使用一定半径内的点，该算法可以取得更好的性能，可以参考 <code>from sklearn.neighbors import RadiusNeighborsClassifier</code>。</p> <p>k 近邻法还可以用于回归：找最近的邻居，然后计算它们的平均值就可以了。</p> <h2 id="参考资料"><a href="#参考资料" class="header-anchor">#</a> 参考资料</h2> <p>[1] 李航. 统计学习方法（第 2 版，第 3 章“<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻法”）. 北京：清华大学出版社，2019.
说明：介绍了 kd 树，并给出了例子。</p> <p>[2] 周志华. 机器学习（第 10 章第 1 节）. 北京：清华大学出版社.
说明：只简单介绍了思想，并给出了 k 近邻算法虽简单、但泛化错误率在一定条件下不超过贝叶斯最优分类器错误率两倍的结论（我的这个概括待推敲），没有《统计学习方法》介绍详细。</p> <p>[3] [美] Peter Harrington 著，李锐，李鹏，曲亚东 等译. 机器学习实战（第 2 章）. 北京：人民邮电出版社.
说明：想吐槽一下这本书在这一章给出的示例代码：很简单的一个算法，它给出的代码却写得很复杂，其实并不利于理解 k 近邻算法的基本思想。</p> <p>（本节完）</p> <hr> <p>以下为草稿，我自己留着备用，读者可以忽略，欢迎大家批评指正。</p> <h3 id="想说一说-k-近邻算法-在机器学习中的地位"><a href="#想说一说-k-近邻算法-在机器学习中的地位" class="header-anchor">#</a> 想说一说“k 近邻算法”在机器学习中的地位</h3> <p>“k 近邻算法” 可以说是最容易理解的机器学习算法，因此是用来入门机器学习基本流程的最好材料：我们不需要在理解算法本身上花太多时间。</p> <p>下面简单说说，“k 近邻算法” 给我们带来了什么。</p> <ul><li>超参数：k 就是一个超参数，这是我们得根据经验，在算法运行之前指定的；</li> <li>数据集分离：我们不能把所有的样本数据都用于训练，还要评估算法的性能；即使是同一个算法，不同的超参数也须要评估好坏。因此，必须从数据集中分离出一部分数据，用于验证算法好坏、选择超参数；</li> <li>评估算法好坏的准则：k 近邻算法用于分类问题，一个最容易理解的评价指标就是准确率（或者错误率，因为错误率 = 1 − 准确率）；</li> <li>交叉验证：交叉验证用于选择超参数，比起简单地拿一部分数据作为验证数据集要靠谱，因为分离数据集带有一定随机性；</li> <li>网格搜索：其实就是暴力搜索，把我们认为可能合理的超参数和超参数的组合输入算法，在其中评估每组超参数的好坏时，也用到了交叉验证；</li> <li>数据标准化：这一步是一开始学习机器学习算法的时候经常被忽略的，后面我们可以看到数据标准化在梯度下降中也发挥很大的作用；</li> <li>模型复杂度：理解这句话：<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 的值越小，模型越复杂；<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 的值越大，模型越简单。因为 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 如果和训练数据集一样大的话，其实我们每个预测数据都只能预测为一个类别，即训练数据集中数量最多的那个类别；</li> <li>决策边界：这是分类问题的一个重要且简单的概念。</li></ul> <h3 id="算法执行的步骤"><a href="#算法执行的步骤" class="header-anchor">#</a> 算法执行的步骤</h3> <p>1、选择 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 和距离的度量；
2、计算待标记的数据样本和数据集中每个样本的距离，取距离最近的 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 个样本。待标记的数据样本所属的类别，就由这 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 个距离最近的样本投票产生。</p> <h3 id="k-近邻算法的训练过程-即是利用训练数据集-对特征向量空间进行划分"><a href="#k-近邻算法的训练过程-即是利用训练数据集-对特征向量空间进行划分" class="header-anchor">#</a> k 近邻算法的训练过程，即是利用训练数据集，<strong>对特征向量空间进行划分</strong></h3> <p><img src="https://upload-images.jianshu.io/upload_images/414598-90d7e2f862ea411d.jpg?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240" alt="李航《统计学习方法》P37"></p> <p>说明：从这张图中，你可以看到决策边界。</p> <ul><li><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法是一个懒惰学习的算法，没有显式的学习过程，即没有它没有训练的步骤，是一个基于记忆的学习算法；</li> <li>“多数表决”规则等价于“经验风险最小化”（我们的算法在训练数据集上“风险”最小）；</li> <li><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法的优化实现：kd 树，即是给训练数据建立树结构一样的索引，期望快速找到 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 个邻居，以防止线性扫描。</li></ul> <h3 id="近邻算法的应用领域"><a href="#近邻算法的应用领域" class="header-anchor">#</a> <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法的应用领域</h3> <p>文本分类、模式识别、聚类分析，多分类领域。</p> <h2 id="近邻算法的优点"><a href="#近邻算法的优点" class="header-anchor">#</a> <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法的优点</h2> <ul><li><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c 
c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法是一种在线技术，新数据可以直接加入数据集而不必进行重新训练；</li> <li><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法理论简单，容易实现；</li> <li>准确性高：对异常值和噪声有较高的容忍度；</li> <li><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法天生就支持多分类，区别与感知机、逻辑回归、SVM。</li></ul> <h3 id="近邻算法的缺点"><a href="#近邻算法的缺点" class="header-anchor">#</a> <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法的缺点</h3> <ul><li><p>基本的 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法每预测一个“点”的分类都会重新进行一次<strong>全局</strong>运算，对于样本容量大的数据集计算量比较大；</p></li> <li><p><strong>维度灾难</strong>：在高维空间中计算距离的时候，就会变得非常远；</p></li> <li><p>样本不平衡时，预测偏差比较大，例如：某一类的样本比较少，而其它类样本比较多；</p></li> <li><p><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 值大小的选择得依靠经验或者交叉验证得到。</p></li> <li><p><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 的选择可以使用交叉验证，也可以使用网格搜索（其实是一件事情，网格搜索里面其实就是用的交叉验证）；</p></li> <li><p><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 的值越大，模型的偏差越大，对噪声数据越不敏感，当 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 的值很大的时候，可能造成模型欠拟合；<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 的值越小，模型的方差就会越大，当 
<mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 的值很小的时候，就会造成模型的过拟合。</p></li></ul> <h3 id="从-近邻算法说开"><a href="#从-近邻算法说开" class="header-anchor">#</a> 从 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法说开</h3> <ul><li>增加邻居的权重，距离越近，权重越高，参数：weights；</li></ul> <p>维基百科<a href="https://zh.wikipedia.org/wiki/%E6%9C%80%E8%BF%91%E9%84%B0%E5%B1%85%E6%B3%95" target="_blank" rel="noopener noreferrer">最近邻居法<span><svg xmlns="http://www.w3.org/2000/svg" aria-hidden="true" focusable="false" x="0px" y="0px" viewBox="0 0 100 100" width="15" height="15" class="icon outbound"><path fill="currentColor" d="M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"></path> <polygon fill="currentColor" points="45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"></polygon></svg> <span class="sr-only">(opens new window)</span></span></a>词条中是这样介绍的：</p> <blockquote><p>无论是分类还是回归，衡量邻居的权重都非常有用，使较近邻居的权重比较远邻居的权重大。例如，一种常见的加权方案是给每个邻居权重赋值为 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mfrac><mjx-frac><mjx-num><mjx-nstrut></mjx-nstrut><mjx-mrow size="s"><mjx-mpadded><mjx-block style="margin:0.896em 0 0.313em;"><mjx-mrow></mjx-mrow></mjx-block></mjx-mpadded><mjx-mstyle style="font-size:141.4%;"><mjx-TeXAtom><mjx-mn class="mjx-n"><mjx-c c="1"></mjx-c></mjx-mn></mjx-TeXAtom></mjx-mstyle></mjx-mrow></mjx-num><mjx-dbox><mjx-dtable><mjx-line></mjx-line><mjx-row><mjx-den><mjx-dstrut></mjx-dstrut><mjx-mrow size="s"><mjx-mpadded><mjx-block style="margin:0.896em 0 0.313em;"><mjx-mrow></mjx-mrow></mjx-block></mjx-mpadded><mjx-mstyle style="font-size:141.4%;"><mjx-TeXAtom><mjx-mi class="mjx-i"><mjx-c 
c="d"></mjx-c></mjx-mi></mjx-TeXAtom></mjx-mstyle></mjx-mrow></mjx-den></mjx-row></mjx-dtable></mjx-dbox></mjx-frac></mjx-mfrac></mjx-math></mjx-container>，其中 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="d"></mjx-c></mjx-mi></mjx-math></mjx-container> 是到邻居的距离。</p></blockquote> <ul><li><p>使用一定半径内的点，当数据采样不均匀的时候，该算法可以取得更好的性能：<code>from sklearn.neighbors import RadiusNeighborsClassifier</code>；</p></li> <li><p><mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="k"></mjx-c></mjx-mi></mjx-math></mjx-container> 近邻算法还可以用于回归，找最近的邻居，计算它们的平均值就可以了。</p></li></ul> <h3 id="参考资料-2"><a href="#参考资料-2" class="header-anchor">#</a> 参考资料</h3> <ul><li>理解 kd 树的文章：https://www.cnblogs.com/lesleysbw/p/6074662.html</li> <li>理解 balltree 的文章：https://blog.csdn.net/pipisorry/article/details/53156836</li> <li>https://blog.csdn.net/skyline0623/article/details/8154911</li></ul> <p><img src="http://upload-images.jianshu.io/upload_images/414598-7bcad386879f9081.jpg?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240" alt="image-20190217154320184"></p> <h3 id="在二维平面上绘制决策边界"><a href="#在二维平面上绘制决策边界" class="header-anchor">#</a> 在二维平面上绘制决策边界</h3> <p>方法很简单，就是在整个二维平面上采用一定密度的数据做预测，预测的不同类别使用不同的颜色标记，此时不同的类别之间就会形成决策边界。</p> <p>k 近邻算法的训练过程，即是利用训练数据集，<strong>对特征向量空间进行划分</strong>。</p> <p><img src="http://upload-images.jianshu.io/upload_images/414598-ee6ebbf869b9f82b.jpg?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240" alt="李航《统计学习方法》P37"></p> <p>说明：从这张图中，你可以看到决策边界。</p> <p>维基百科<a href="https://zh.wikipedia.org/wiki/%E6%9C%80%E8%BF%91%E9%84%B0%E5%B1%85%E6%B3%95" target="_blank" rel="noopener noreferrer">最近邻居法<span><svg xmlns="http://www.w3.org/2000/svg" aria-hidden="true" focusable="false" x="0px" y="0px" viewBox="0 0 100 100" width="15" height="15" class="icon outbound"><path fill="currentColor" 
d="M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"></path> <polygon fill="currentColor" points="45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"></polygon></svg> <span class="sr-only">(opens new window)</span></span></a>词条中是这样介绍的：</p> <blockquote><p>无论是分类还是回归，衡量邻居的权重都非常有用，使较近邻居的权重比较远邻居的权重大。例如，一种常见的加权方案是给每个邻居权重赋值为 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mfrac><mjx-frac><mjx-num><mjx-nstrut></mjx-nstrut><mjx-mrow size="s"><mjx-mpadded><mjx-block style="margin:0.896em 0 0.313em;"><mjx-mrow></mjx-mrow></mjx-block></mjx-mpadded><mjx-mstyle style="font-size:141.4%;"><mjx-TeXAtom><mjx-mn class="mjx-n"><mjx-c c="1"></mjx-c></mjx-mn></mjx-TeXAtom></mjx-mstyle></mjx-mrow></mjx-num><mjx-dbox><mjx-dtable><mjx-line></mjx-line><mjx-row><mjx-den><mjx-dstrut></mjx-dstrut><mjx-mrow size="s"><mjx-mpadded><mjx-block style="margin:0.896em 0 0.313em;"><mjx-mrow></mjx-mrow></mjx-block></mjx-mpadded><mjx-mstyle style="font-size:141.4%;"><mjx-TeXAtom><mjx-mi class="mjx-i"><mjx-c c="d"></mjx-c></mjx-mi></mjx-TeXAtom></mjx-mstyle></mjx-mrow></mjx-den></mjx-row></mjx-dtable></mjx-dbox></mjx-frac></mjx-mfrac></mjx-math></mjx-container>，其中 <mjx-container jax="CHTML" class="MathJax"><mjx-math class=" MJX-TEX"><mjx-mi class="mjx-i"><mjx-c c="d"></mjx-c></mjx-mi></mjx-math></mjx-container> 是到邻居的距离。</p></blockquote> <p>8、《机器学习实战》笔记ch02-K近邻算法
https://zhuanlan.zhihu.com/p/21799146</p> <p>9、用Python做科学计算（很不错的资料）
http://old.sebug.net/paper/books/scipydoc/index.html</p></div> <footer class="page-edit"><!----> <div class="last-updated"><span class="prefix">上次更新:</span> <span class="time">4/10/2021, 6:19:58 PM</span></div></footer> <!----> </main></div><div class="global-ui"><!----></div></div>
    <script src="/book/assets/js/app.a4355b0d.js" defer></script><script src="/book/assets/js/2.ba785ee6.js" defer></script><script src="/book/assets/js/43.065f077f.js" defer></script>
  </body>
</html>
