{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "Lecture 5: The k-Nearest Neighbor Method\n",
    "===\n",
    "Instructor: Gao Peng\n",
    "---\n",
    "Office: School of Cyberspace Security, Room 407\n",
    "---\n",
    "Email: pgao@qfnu.edu.cn\n",
    "---\n",
    "Program: Software Engineering (Intelligent Data)\n",
    "---"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Works under both Python 2 and Python 3\n",
    "from __future__ import print_function\n",
    "\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import sklearn\n",
    "from sklearn.datasets import load_iris\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Introduction\n",
    "\n",
    "The $k$-nearest neighbor method ($k$-NN) is a basic method for classification and regression; we discuss only its use for classification. The input of $k$-NN is the feature vector of an instance, i.e. a point in feature space; the output is the instance's class, which may be any of several classes. $k$-NN assumes a training data set in which the class of each instance is already determined. To classify a new instance, it predicts by majority vote (or some other rule) over the classes of the instance's $k$ nearest training instances. $k$-NN therefore has no explicit learning process; in effect it uses the training data set to partition the feature space, and this partition serves as its \"model\". <font color=#ff0000>The choice of $k$</font>, <font color=#ff0000>the distance metric</font> and <font color=#ff0000>the classification decision rule</font> are the three basic elements of $k$-NN. The method was proposed by Cover and Hart in 1968.\n",
    "\n",
    "This lecture first states the $k$-nearest neighbor algorithm, then discusses the $k$-NN model and its three basic elements, and finally presents one implementation of $k$-NN, the kd-tree, with algorithms for constructing and searching it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# The k-Nearest Neighbor Algorithm\n",
    "\n",
    "The $k$-nearest neighbor algorithm is simple and intuitive: given a training data set, for a new input instance, find the $k$ training instances nearest to it; whichever class the majority of these $k$ instances belong to is the class assigned to the input instance. We first state the algorithm and then discuss its details.\n",
    "\n",
    "**Algorithm 1 (k-nearest neighbor method)**\n",
    "\n",
    "Input: a training data set\n",
    "\n",
    "$$\n",
    "T=\\{(x_1,y_1),(x_2,y_2),\\ldots,(x_N,y_N)\\}\n",
    "$$\n",
    "\n",
    "where $x_i\\in\\mathcal{X}=\\mathbb{R}^n$ is the feature vector of an instance and $y_i\\in\\mathcal{Y}=\\{c_1,c_2,\\ldots,c_K\\}$ is its class, $i=1,2,\\ldots,N$; and an instance feature vector $x$;\n",
    "\n",
    "Output: the class $y$ of instance $x$.\n",
    "\n",
    "(1) Using the given distance metric, find the $k$ points in the training set $T$ nearest to $x$, and denote the neighborhood of $x$ covering these $k$ points by $N_k(x)$;\n",
    "\n",
    "(2) In $N_k(x)$, decide the class $y$ of $x$ by the classification decision rule (e.g. majority vote):\n",
    "\n",
    "$$\n",
    "y=\\arg\\max_{c_j}\\sum_{x_i\\in N_k(x)}I(y_i=c_j),\\quad i=1,2,\\ldots,N,\\quad j=1,2,\\ldots,K\n",
    "$$\n",
    "\n",
    "where $I$ is the indicator function: $I=1$ when $y_i=c_j$, and $I=0$ otherwise.\n",
    "\n",
    "The special case $k=1$ is called the nearest neighbor algorithm: for an input instance point (feature vector) $x$, it assigns to $x$ the class of the training point nearest to $x$.\n",
    "\n",
    "The $k$-nearest neighbor method has no explicit learning process."
   ]
  },
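  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two steps of Algorithm 1 can be sketched as a brute-force implementation (a minimal sketch using Euclidean distance; the function name `knn_predict` and the toy data are illustrative assumptions, not the kd-tree method presented later):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "def knn_predict(X_train, y_train, x, k=3):\n",
    "    # Step (1): distances from x to every training point (Euclidean)\n",
    "    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)\n",
    "    # indices of the k nearest neighbors, i.e. N_k(x)\n",
    "    nearest = np.argsort(dists)[:k]\n",
    "    # Step (2): majority vote over the labels in N_k(x)\n",
    "    votes = Counter(np.asarray(y_train)[nearest])\n",
    "    return votes.most_common(1)[0][0]\n",
    "\n",
    "# tiny illustration: two well-separated classes in the plane\n",
    "X_train = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]\n",
    "y_train = [0, 0, 0, 1, 1, 1]\n",
    "print(knn_predict(X_train, y_train, [1, 1], k=3))    # -> 0\n",
    "print(knn_predict(X_train, y_train, [5, 5.5], k=3))  # -> 1"
   ]
  },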
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# The k-Nearest Neighbor Model\n",
    "\n",
    "\n",
    "The model used by the $k$-nearest neighbor method corresponds to a partition of the feature space. The model is determined by three basic elements: the distance metric, the choice of $k$, and the classification decision rule.\n",
    "\n",
    "## The Model\n",
    "\n",
    "In $k$-NN, once the training set, the distance metric (e.g. Euclidean distance), the value of $k$ and the classification decision rule (e.g. majority vote) are fixed, the class of any new input instance is uniquely determined. This amounts to using these elements to partition the feature space into subregions and fixing the class of every point in each subregion. This fact is seen most clearly in the nearest neighbor algorithm.\n",
    "\n",
    "In feature space, for each training instance point $x_i$, the set of all points closer to $x_i$ than to any other training instance point forms a region called a cell. Each training instance point owns one cell, and the cells of all training instance points form a partition of the feature space. The nearest neighbor method takes the class $y_i$ of instance $x_i$ as the class label of all points in its cell, so the class of the instance points in each cell is determined. The figure below shows an example of such a partition of a two-dimensional feature space.\n",
    "\n",
    "<p align=\"center\">\n",
    "  <img width=\"400\" src=\"Lesson5-1.jpg\">\n",
    "</p>\n",
    "\n",
    "## Distance Metrics\n",
    "\n",
    "The distance between two instance points in feature space reflects how similar the two points are. The feature space of the $k$-NN model is generally the $n$-dimensional real vector space $\\mathbb{R}^n$. The distance used is usually the Euclidean distance, but other distances are possible, such as the more general $L_p$ (Minkowski) distance.\n",
    "\n",
    "Let the feature space $\\mathcal{X}$ be the $n$-dimensional real vector space $\\mathbb{R}^n$, and let $x_i,x_j\\in\\mathcal{X}$ with $x_i=(x_i^{(1)},x_i^{(2)},\\ldots,x_i^{(n)})^T$ and $x_j=(x_j^{(1)},x_j^{(2)},\\ldots,x_j^{(n)})^T$. The $L_p$ distance between $x_i$ and $x_j$ is defined as\n",
    "\n",
    "$$\n",
    "L_p(x_i,x_j)=\\left(\\sum^n_{l=1}|x_i^{(l)}-x_j^{(l)}|^p\\right)^{\\frac{1}{p}}\n",
    "$$\n",
    "\n",
    "where $p\\geq 1$.\n",
    "\n",
    "For $p=2$ it is the Euclidean distance:\n",
    "\n",
    "$$\n",
    "L_2(x_i,x_j)=\\left(\\sum^n_{l=1}|x_i^{(l)}-x_j^{(l)}|^2\\right)^{\\frac{1}{2}}\n",
    "$$\n",
    "\n",
    "For $p=1$ it is the Manhattan distance:\n",
    "\n",
    "$$\n",
    "L_1(x_i,x_j)=\\sum^n_{l=1}|x_i^{(l)}-x_j^{(l)}|\n",
    "$$\n",
    "\n",
    "For $p=\\infty$ it is the maximum of the coordinate-wise distances:\n",
    "\n",
    "$$\n",
    "L_\\infty(x_i,x_j)=\\max_l|x_i^{(l)}-x_j^{(l)}|\n",
    "$$\n",
    "\n",
    "The figure below shows, for several values of $p$, the set of points in the plane whose $L_p$ distance to the origin equals 1 ($L_p=1$).\n",
    "\n",
    "<p align=\"center\">\n",
    "  <img width=\"400\" src=\"Lesson5-2.jpg\">\n",
    "</p>\n",
    "\n",
    "### Example\n",
    "\n",
    "Given three points in the plane, $x_1=(1,1)^T$, $x_2=(5,1)^T$ and $x_3=(4,4)^T$, find the nearest neighbor of $x_1$ under the $L_p$ distance for different values of $p$.\n",
    "\n",
    "**Solution**  Since $x_1$ and $x_2$ differ only in the first coordinate, $L_p(x_1,x_2)=4$ for every value of $p$, whereas\n",
    "\n",
    "$$\n",
    "L_1(x_1,x_3)=6,\\quad L_2(x_1,x_3)=4.24,\\quad L_3(x_1,x_3)=3.78,\\quad L_4(x_1,x_3)=3.57\n",
    "$$\n",
    "\n",
    "Hence $x_2$ is the nearest neighbor of $x_1$ when $p$ equals 1 or 2, and $x_3$ is the nearest neighbor when $p\\geq 3$."
   ]
  },
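  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The unit circles in the figure above can be reproduced with matplotlib. This is a minimal sketch (the parametrization, rescaling points on the ordinary circle, is an illustrative choice) that draws the set of points at $L_p$ distance 1 from the origin for several values of $p$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "theta = np.linspace(0, 2 * np.pi, 400)\n",
    "u, v = np.cos(theta), np.sin(theta)  # directions on the ordinary circle\n",
    "for p in [1, 2, 4, np.inf]:\n",
    "    # rescale each direction so that its L_p norm is exactly 1\n",
    "    if np.isinf(p):\n",
    "        norm = np.maximum(np.abs(u), np.abs(v))\n",
    "    else:\n",
    "        norm = (np.abs(u) ** p + np.abs(v) ** p) ** (1.0 / p)\n",
    "    plt.plot(u / norm, v / norm, label='p = {}'.format(p))\n",
    "plt.gca().set_aspect('equal')\n",
    "plt.legend()\n",
    "plt.title('Points at $L_p$ distance 1 from the origin')\n",
    "plt.show()"
   ]
  },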
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import math\n",
    "from itertools import combinations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "x1 = [1, 1]\n",
    "x2 = [5, 1]\n",
    "x3 = [4, 4]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def L(x, y, p=2):\n",
    "    # L_p (Minkowski) distance between two points of equal dimension\n",
    "    if len(x) != len(y) or len(x) < 1:\n",
    "        raise ValueError('x and y must be non-empty and of equal length')\n",
    "    total = 0.0  # avoid shadowing the built-in sum\n",
    "    for i in range(len(x)):\n",
    "        total += math.pow(abs(x[i] - y[i]), p)\n",
    "    return math.pow(total, 1.0 / p)  # 1.0 / p avoids Python 2 integer division"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1, 1] - [5, 1] : 4.0\n",
      "[1, 1] - [5, 1] : 4.0\n",
      "[1, 1] - [5, 1] : 3.9999999999999996\n",
      "[1, 1] - [5, 1] : 4.0\n",
      "[1, 1] - [4, 4] : 6.0\n",
      "[1, 1] - [4, 4] : 4.242640687119285\n",
      "[1, 1] - [4, 4] : 3.7797631496846193\n",
      "[1, 1] - [4, 4] : 3.5676213450081633\n"
     ]
    }
   ],
   "source": [
    "for c in [x2, x3]:\n",
    "    for i in range(1, 5):\n",
    "        r = L(x1, c, p=i)\n",
    "        print(x1, '-', c, ':', r)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Choosing k\n",
    "\n",
    "The choice of $k$ has a major effect on the results of the $k$-nearest neighbor method.\n",
    "\n",
    "Choosing a smaller $k$ amounts to predicting with training instances in a smaller neighborhood. The approximation error of \"learning\" decreases, because only training instances close to (similar to) the input instance affect the prediction. The drawback is that the estimation error of \"learning\" increases: the prediction becomes very sensitive to the neighboring instance points, and if a neighbor happens to be noise, the prediction will be wrong. In other words, decreasing $k$ makes the overall model more complex and prone to overfitting.\n",
    "\n",
    "Choosing a larger $k$ amounts to predicting with training instances in a larger neighborhood. The advantage is a smaller estimation error; the drawback is a larger approximation error, because training instances far from (dissimilar to) the input instance now also affect the prediction and can make it wrong. Increasing $k$ makes the overall model simpler.\n",
    "\n",
    "If $k=N$, then whatever the input instance is, it is simply predicted to belong to the most frequent class in the training set. Such a model is far too simple: it ignores a great deal of useful information in the training instances and is not advisable.\n",
    "\n",
    "In practice $k$ is usually taken to be a fairly small value, and cross-validation is typically used to select the optimal $k$.\n",
    "\n",
    "## The Classification Decision Rule\n",
    "\n",
    "The classification decision rule in $k$-NN is usually majority voting: the class of the input instance is decided by the majority class among its $k$ nearest training instances.\n",
    "\n",
    "The majority voting rule has the following interpretation. If the loss function for classification is the 0-1 loss and the classification function is\n",
    "\n",
    "$$\n",
    "f:\\mathbb{R}^n\\rightarrow\\{c_1,c_2,\\ldots,c_K\\}\n",
    "$$\n",
    "\n",
    "then the probability of misclassification is\n",
    "\n",
    "$$\n",
    "P(Y \\neq f(X))=1-P(Y=f(X))\n",
    "$$\n",
    "\n",
    "For a given instance $x\\in\\mathcal{X}$, let its $k$ nearest training instance points form the set $N_k(x)$. If the region covering $N_k(x)$ is assigned class $c_j$, then the misclassification rate is\n",
    "\n",
    "$$\n",
    "\\frac{1}{k}\\sum_{x_i\\in N_k(x)}I(y_i\\neq c_j)=1-\\frac{1}{k}\\sum_{x_i\\in N_k(x)}I(y_i= c_j)\n",
    "$$\n",
    "\n",
    "To minimize the misclassification rate, i.e. the empirical risk, we must maximize $\\sum_{x_i\\in N_k(x)}I(y_i=c_j)$, so the majority voting rule is equivalent to empirical risk minimization."
   ]
  }
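  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The advice above, selecting $k$ by cross-validation, can be sketched with scikit-learn (using `KNeighborsClassifier` and `cross_val_score` on the iris data is an illustrative assumption, not part of the original text):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "\n",
    "iris = load_iris()\n",
    "scores = {}\n",
    "for k in range(1, 16):\n",
    "    clf = KNeighborsClassifier(n_neighbors=k)\n",
    "    # mean 5-fold cross-validated accuracy for this value of k\n",
    "    scores[k] = cross_val_score(clf, iris.data, iris.target, cv=5).mean()\n",
    "best_k = max(scores, key=scores.get)\n",
    "print('best k =', best_k, 'accuracy =', scores[best_k])"
   ]
  }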
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
