{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## k-Nearest Neighbors (kNN)\n",
    "### Companion notes to the lecture slides"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**A learning task is supervised when the training samples carry class labels.**\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To classify an unknown sample, all training samples are kept as reference points, and the class of the **nearest neighbor(s)** is the sole basis for deciding the unknown sample's class."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Decision tree: can it classify the green shape from the slides? No\n",
    "- k-nearest neighbors has three key ingredients:\n",
    " - the choice of k,\n",
    " - the distance metric,\n",
    " - the classification decision rule\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Choosing k\n",
    " - k is usually chosen odd: 3, 5, 7, ...\n",
    " - Try increasing values (3, 5, 7, ...) and keep the k with the smallest generalization error\n",
    " - If the majority of a sample's **k nearest samples (i.e. closest in feature space)** belong to one class, the sample is assigned to that class\n",
    "+ There is no fixed rule for choosing k. In practice one starts from a fairly small value, guided by the distribution of the data, and selects a suitable k by cross-validation.\n",
    "+ A small k means predicting from a small neighborhood of training instances. The training error shrinks, because only instances close (similar) to the input affect the prediction, but the generalization error grows. In other words, a smaller k makes the overall model more complex and prone to overfitting.\n",
    "+ A large k means predicting from a large neighborhood. This reduces the generalization error, but the training error grows: training instances far from (dissimilar to) the input now also influence the prediction and can make it wrong. A larger k makes the overall model simpler.\n",
    "+ In the extreme, k equals the sample size m and no real classification happens: whatever the input, the prediction is simply the majority class of the training set, and the model is far too simple."
   ]
  },
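  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The k-selection recipe above can be sketched in plain NumPy: hold out part of a toy data set, score each odd candidate k on it, and keep the best. The data set, split ratio and candidate list here are made-up illustrations, not part of the lecture."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "# made-up two-class toy data: class 0 around (0,0), class 1 around (2,2)\n",
    "X0 = rng.normal(0.0, 0.6, size=(40, 2))\n",
    "X1 = rng.normal(2.0, 0.6, size=(40, 2))\n",
    "X = np.vstack([X0, X1])\n",
    "y = np.array([0]*40 + [1]*40)\n",
    "\n",
    "# shuffle, then hold out 25% of the samples for validation\n",
    "idx = rng.permutation(len(X))\n",
    "X, y = X[idx], y[idx]\n",
    "split = int(0.75 * len(X))\n",
    "Xtr, ytr, Xva, yva = X[:split], y[:split], X[split:], y[split:]\n",
    "\n",
    "def knn_predict(x, Xtr, ytr, k):\n",
    "    d = np.sqrt(((Xtr - x) ** 2).sum(axis=1))   # Euclidean distances\n",
    "    nearest = np.argsort(d)[:k]                 # indices of the k nearest\n",
    "    return np.bincount(ytr[nearest]).argmax()   # majority vote\n",
    "\n",
    "scores = {}\n",
    "for k in (1, 3, 5, 7, 9):                       # odd k avoids voting ties\n",
    "    pred = np.array([knn_predict(x, Xtr, ytr, k) for x in Xva])\n",
    "    scores[k] = (pred == yva).mean()\n",
    "best_k = max(scores, key=scores.get)\n",
    "print(best_k, scores[best_k])"
   ]
  },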
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Computing distances\n",
    "  + Manhattan distance\n",
    "  $D(x,y)=|x_1-y_1|+|x_2-y_2|+\\cdots+|x_n-y_n|=\\sum_{i=1}^{n}|x_i-y_i|$\n",
    "  + Euclidean distance\n",
    "  $D(x,y)=\\sqrt{\\sum_{i=1}^{n}(x_i-y_i)^2}$"
   ]
  },
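  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both metrics are one-liners in NumPy. A minimal sketch evaluating the two formulas above on made-up vectors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def manhattan(x, y):\n",
    "    # D(x, y) = sum_i |x_i - y_i|\n",
    "    return np.abs(x - y).sum()\n",
    "\n",
    "def euclidean(x, y):\n",
    "    # D(x, y) = sqrt(sum_i (x_i - y_i)^2)\n",
    "    return np.sqrt(((x - y) ** 2).sum())\n",
    "\n",
    "x = np.array([1.0, 2.0, 3.0])\n",
    "y = np.array([4.0, 0.0, 3.0])\n",
    "print(manhattan(x, y))   # 3 + 2 + 0 = 5.0\n",
    "print(euclidean(x, y))   # sqrt(9 + 4 + 0) = sqrt(13)"
   ]
  },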
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Classification decision rule\n",
    "+ Majority voting\n",
    " + Weighted voting: nearer neighbors get larger weights, farther neighbors smaller weights\n",
    " "
   ]
  },
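  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the weighted rule (function name and data are made up): each of the k nearest neighbors votes with weight 1/distance, so one very close neighbor can outvote several farther ones."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def weighted_vote(distances, labels, k, eps=1e-8):\n",
    "    # weight each of the k nearest neighbors by 1/distance;\n",
    "    # eps guards against division by zero for exact matches\n",
    "    order = np.argsort(distances)[:k]\n",
    "    votes = {}\n",
    "    for i in order:\n",
    "        w = 1.0 / (distances[i] + eps)\n",
    "        votes[labels[i]] = votes.get(labels[i], 0.0) + w\n",
    "    return max(votes, key=votes.get)\n",
    "\n",
    "d = np.array([0.1, 0.5, 0.6, 2.0])\n",
    "lab = ['A', 'B', 'B', 'B']\n",
    "# plain majority voting with k=3 would answer B; the close A wins here\n",
    "print(weighted_vote(d, lab, k=3))"
   ]
  },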
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "+ No explicit training phase is needed (kNN is a lazy learner)\n",
    "+ Input: number of neighbors k, training set d, test set z\n",
    "+ Output: a predicted class label for every test sample in z\n",
    " 1. for each test sample in z:\n",
    " 2. compute its distance to every training sample in d\n",
    " 3. select the k training samples with the smallest distances\n",
    " 4. assign the majority class among those k neighbors"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "+ For the categorical attributes, the distance is 0 if the values match and 1 if they differ.\n",
    "+ d1=1+1+0.28125=2.28125\n",
    "+ d2=0+0+0.125=0.125\n",
    "+ d3=0+1+0.0625=1.0625\n",
    "+ d4=1+0+0.25=1.25\n",
    "+ d5=0+1+0.09375=1.09375\n",
    "+ d6=0+0+0.125=0.125\n",
    "+ d7=1+1+0.875=2.875\n",
    "+ d8=0+1+0.03125=1.03125\n",
    "+ d9=0+0+0.03125=0.03125\n",
    "+ d10=0+1+0.0625=1.0625\n",
    "\n",
    "+ The three nearest neighbors are d2, d6 and d9, all labeled No\n",
    "\n",
    "+ Income uses min-max normalization\n",
    "+ the range is 220-60=160, so 220 maps to 1\n",
    "+ e.g. the income distance between 125 and 80 is (125-80)/(220-60)=0.28125\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "[2.28125, 0.125, 1.0625, 1.25, 1.09375, 0.125, 2.875, 1.03125, 0.03125, 1.0625]\n",
       "Loan-default prediction: N\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "# distance: 0/1 mismatch on each categorical attribute,\n",
    "# plus the income difference scaled by its range (220-60)\n",
    "def dis(vec1,vec2):\n",
    "    ret1=abs(vec1[0]-vec2[0])\n",
    "    ret2=abs(vec1[1]-vec2[1])\n",
    "    ret3=abs(vec1[2]-vec2[2])/(220-60)\n",
    "    return ret1+ret2+ret3\n",
    "def createDataSet():\n",
    "    # categorical attributes: no / unmarried is 0, yes / married is 1\n",
    "    group=np.array([\n",
    "        [1,0,125],\n",
    "        [0,1,100],\n",
    "        [0,0,70],\n",
    "        [1,1,120],\n",
    "        [0,0,95],\n",
    "        [0,1,60],\n",
    "        [1,0,220],\n",
    "        [0,0,85],\n",
    "        [0,1,75],\n",
    "        [0,0,90]\n",
    "    ])\n",
    "    labels=['N','N','N','N','Y','N','N','Y','N','Y']\n",
    "    return group,labels\n",
    "def KNNclassify(newInput,dataset,labels,k):\n",
    "    distance=[]\n",
    "    for vet in dataset:\n",
    "        distance.append(dis(vet,newInput))\n",
    "    # indices that would sort the distances in ascending order\n",
    "    sorteddis=np.argsort(distance)\n",
    "    print(distance)\n",
    "    # majority vote over the k nearest neighbors\n",
    "    classcount={}\n",
    "    for i in range(k):\n",
    "        vetlabel=labels[sorteddis[i]]\n",
    "        classcount[vetlabel]=classcount.get(vetlabel,0)+1\n",
    "    maxcount=0\n",
    "    for key,val in classcount.items():\n",
    "        if val>maxcount:\n",
    "            maxcount=val\n",
    "            maxindex=key\n",
    "    return maxindex\n",
    "dataset,labels=createDataSet()\n",
    "k=3\n",
    "test=np.array([0,1,80])\n",
    "outputlabel=KNNclassify(test,dataset,labels,k)\n",
    "print('Loan-default prediction:',outputlabel)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Brute-force kNN implementation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "+ Steps  \n",
    "  + compute the distance from the query sample to every sample in the training set,\n",
    "  + find the k smallest distances,\n",
    "  + then a majority vote gives the prediction.\n",
    "+ Not suitable when there are many features or many samples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "A\n"
     ]
    }
   ],
   "source": [
    "'''\n",
    "Classify the query point testX by calling KNNClassify:\n",
    "1. build the data set\n",
    "2. define the distance function\n",
    "3. kNN: distance=[] -> np.argsort(distance) -> pick the k nearest -> vote\n",
    "'''\n",
    "import numpy as np\n",
    "# build the data set: 8 samples, 2 classes (A and B)\n",
    "def createDataSet():\n",
    "    # one row per sample\n",
    "    group=np.array([\n",
    "        [1.0,0.9],\n",
    "        [1.0,1.0],\n",
    "        [0.8,0.9],\n",
    "        [0.6,0.65],\n",
    "        [0.1,0.2],\n",
    "        [0.3,0.4],\n",
    "        [0.2,0.3],\n",
    "        [0.0,0.1]\n",
    "    ])\n",
    "    labels=['A','A','A','A','B','B','B','B']\n",
    "    return group,labels\n",
    "# Euclidean distance\n",
    "def euclDistance(vector1,vector2):\n",
    "    return np.sqrt(np.sum(np.power(vector2-vector1,2)))\n",
    "# kNN classifier\n",
    "def KNNClassify(newInput,dataSet,labels,k):\n",
    "    # distance[i] holds the distance from newInput to the i-th sample\n",
    "    distance=[]\n",
    "    for vec in dataSet:\n",
    "        distance.append(euclDistance(newInput,vec))\n",
    "    # argsort returns the indices that sort the distances ascending\n",
    "    sortedDistance=np.argsort(distance)\n",
    "    # count the class votes of the k nearest neighbors\n",
    "    classCount={}\n",
    "    for i in range(k):\n",
    "        voteLabel=labels[sortedDistance[i]]\n",
    "        classCount[voteLabel]=classCount.get(voteLabel,0)+1\n",
    "    # majority vote: return the most frequent class\n",
    "    maxCount=0\n",
    "    for key,val in classCount.items():\n",
    "        if val>maxCount:\n",
    "            maxCount=val\n",
    "            maxkey=key\n",
    "    return maxkey\n",
    "dataSet,labels=createDataSet()\n",
    "testX=np.array([1.2,1.0])\n",
    "outputLabel=KNNClassify(testX,dataSet,labels,3)\n",
    "print(outputLabel)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Improving kNN"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Goal: improve both classification speed and accuracy."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### KD-trees"
   ]
  }
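  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A KD-tree recursively partitions the feature space so that a nearest-neighbor query costs roughly O(log n) instead of the brute-force O(n) scan over all samples. A minimal sketch, assuming SciPy is available; the points are random illustrations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy.spatial import cKDTree\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "pts = rng.random((1000, 2))          # 1000 random 2-D training points\n",
    "tree = cKDTree(pts)                  # build the tree once\n",
    "\n",
    "query = np.array([0.5, 0.5])\n",
    "dist, idx = tree.query(query, k=3)   # 3 nearest neighbors of the query\n",
    "print(idx, dist)"
   ]
  }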
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
