{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "1a43f20a",
   "metadata": {},
   "source": [
    "## The Perceptron\n",
    "\n",
    "The perceptron is a **linear model for binary classification**: its input is an instance's feature vector, and its output is the instance's class (+1 or -1). The perceptron corresponds to a separating hyperplane that divides the input space into two classes, and learning aims to find that hyperplane. To do so, a loss function based on misclassification is introduced and minimized with gradient descent. The perceptron's learning algorithm is simple and easy to implement, and prediction applies the learned perceptron model to new instances, so the perceptron is a discriminative model.\n",
    "\n",
    "* **Biological Interpretation**\n",
    "\n",
    "![neuron](images/neuron.png)\n",
    "\n",
    "* dendrites\n",
    "* nucleus\n",
    "* axon\n",
    "\n",
    "A neuron takes a set of binary inputs (from nearby neurons), multiplies each input by a continuous-valued weight (the synaptic strength of that neighbor), and applies a threshold: if the weighted sum of the inputs exceeds the threshold it outputs 1, otherwise 0 (analogous to whether the neuron fires).\n",
    "\n",
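    "The thresholded unit described above can be sketched in a few lines of Python (a minimal sketch; the inputs, weights, and threshold below are made-up illustrative values):\n",
    "\n",
    "```python\n",
    "def threshold_unit(inputs, weights, threshold):\n",
    "    # weighted sum of the binary inputs; fire (output 1) iff it exceeds the threshold\n",
    "    total = sum(w * x for w, x in zip(weights, inputs))\n",
    "    return 1 if total > threshold else 0\n",
    "\n",
    "print(threshold_unit([1, 0, 1], [0.6, 0.4, 0.3], 0.5))  # 0.6 + 0.3 = 0.9 > 0.5 -> prints 1\n",
    "```\n",
    "\n",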
    "The learning process: **given a training set of input-output examples, the perceptron should 'learn' a function: for each example, if the perceptron's output is much lower than the target, increase the weights; if it is much higher, decrease them.**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "42ae65bc",
   "metadata": {},
   "source": [
    "### 1. The Perceptron Model\n",
    "\n",
    "Assume the input space (of feature vectors) is $X \\subseteq R^n$ and the output space is $Y=\\{-1, +1\\}$. An input $x \\in X$ is an instance's feature vector, corresponding to a point in the input space; the output $y \\in Y$ is the instance's class. The function from input space to output space is\n",
    "\n",
    "$$\n",
    "f(x) = sign(w \\cdot x + b)\n",
    "$$\n",
    "\n",
    ">If $w \\cdot x+b>0$ the point lies above the line (in two dimensions), and below it when the value is negative, so composing with the sign function yields a binary classifier.\n",
    "\n",
    "This model is called the perceptron. The parameter $w$ is the weight vector and $b$ is the bias; $w \\cdot x$ denotes the inner product of $w$ and $x$, and $sign$ is the sign function.\n",
    "![sign_function](images/sign.png)\n",
    "![perceptron_2](images/perceptron_2.PNG)\n",
    "\n",
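    "A tiny numeric sketch of the decision function $f(x)=sign(w \\cdot x + b)$ (the weight vector and bias below are made-up values for illustration):\n",
    "\n",
    "```python\n",
    "def sign(v):\n",
    "    # sign function: +1 for positive input, -1 otherwise\n",
    "    return 1 if v > 0 else -1\n",
    "\n",
    "def perceptron_predict(x, w, b):\n",
    "    # f(x) = sign(w . x + b)\n",
    "    return sign(sum(wi * xi for wi, xi in zip(w, x)) + b)\n",
    "\n",
    "print(perceptron_predict([1, 3], [-2.0, 2.0], 0.0))  # -2*1 + 2*3 = 4 > 0 -> prints 1\n",
    "```\n",
    "\n",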
    ">Note: for a classification problem like the one below, the perceptron's solution is clearly not unique (more than one line separates the two clusters). The perceptron cannot find the optimal one; finding the optimal separating hyperplane requires an SVM.\n",
    "![perceptron_geometry_def](images/perceptron_geometry_def.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d953258e",
   "metadata": {},
   "source": [
    "### 2. Learning Strategy\n",
    "\n",
    "* **Loss function**\n",
    "\n",
    "If the dataset is linearly separable (separable by a straight line in 2D, a plane in 3D, or a hyperplane in higher dimensions), the goal of perceptron learning is to find parameters $w, b$ of a hyperplane that separates the **positive and negative instance points**. To do this we need to define a loss function and then minimize it.\n",
    "\n",
    "There are two natural choices. The first is the total number of misclassified points, but that function is not continuous and differentiable in $w, b$. The second is the total distance from the misclassified points to the separating hyperplane, and this is the definition we adopt.\n",
    "\n",
    "First, the distance from an arbitrary point $x_0$ to the hyperplane is\n",
    "\n",
    "$$\n",
    "\\frac{1}{||w||} | w \\cdot x_0 + b |\n",
    "$$\n",
    "\n",
    "Second, for a misclassified point $(x_i,y_i)$ we have $-y_i(w \\cdot x_i + b) > 0$\n",
    "\n",
    ">Distinguish this from least squares: the least-squares loss takes, for each sample, the difference along the y axis between the sample and the current fit, whereas the perceptron uses the sample's perpendicular distance to the fitted hyperplane!\n",
    "\n",
    ">The predicted label is the sign of $w \\cdot x + b$, so a sample is correctly classified exactly when $y_i$ and $w \\cdot x_i + b$ agree in sign: a point above the line with label +1 is correct, as is a point below it with label -1. Hence every misclassified point satisfies $-y_i(w \\cdot x_i + b) > 0$\n",
    "\n",
    "Thus, letting **M denote the set of misclassified points** for hyperplane S, the total distance from the misclassified points to S is\n",
    "$$\n",
    "-\\frac{1}{||w||} \\sum_{x_i \\in M} y_i (w \\cdot x_i + b)\n",
    "$$\n",
    "Dropping the factor $1/||w||$ gives the perceptron's loss function.\n",
    "\n",
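    "The summed distance above (with the $1/||w||$ factor dropped) can be evaluated directly by accumulating $-y_i(w \\cdot x_i + b)$ over the misclassified points (a minimal sketch; the sample points and parameters below are made-up):\n",
    "\n",
    "```python\n",
    "def perceptron_loss(data, w, b):\n",
    "    # L(w, b) = -sum of y * (w . x + b) over misclassified points\n",
    "    loss = 0.0\n",
    "    for x1, x2, y in data:\n",
    "        margin = y * (w[0] * x1 + w[1] * x2 + b)\n",
    "        if margin <= 0:  # misclassified (or exactly on the hyperplane)\n",
    "            loss -= margin\n",
    "    return loss\n",
    "\n",
    "data = [(1, 3, 1), (3, 1, -1)]\n",
    "print(perceptron_loss(data, [1.0, -1.0], 0.0))   # both points misclassified -> 2.0 + 2.0 = 4.0\n",
    "print(perceptron_loss(data, [-1.0, 1.0], 0.0))   # no misclassified points -> 0.0\n",
    "```\n",
    "\n",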
    "* **[Empirical risk function](https://blog.csdn.net/zhzhx1204/article/details/70163099)**\n",
    "\n",
    "The loss function for learning $sign(w \\cdot x+b)$ is defined as\n",
    "\n",
    "$$\n",
    "L(w, b) = - \\sum_{x_i \\in M} y_i (w \\cdot x_i + b)\n",
    "$$\n",
    "\n",
    "This loss function is precisely the empirical risk of perceptron learning. Clearly $L(w,b)$ is non-negative: the fewer the misclassified points, the smaller its value, and it equals 0 when nothing is misclassified. Note that **for a given training set T, the loss function L(w,b) is a continuous, differentiable function of w and b.**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9cedf6aa",
   "metadata": {},
   "source": [
    "### 3. Learning Algorithm\n",
    "\n",
    "Given a dataset $T = \\{(x_1,y_1), (x_2, y_2), ..., (x_N, y_N)\\}$\n",
    "(where $x_i \\in R^n$, $y_i \\in \\{-1, +1\\}$, $i=1,2,...,N$), find the parameters $w, b$ that minimize the loss function ($M$ is the set of misclassified points):\n",
    "\n",
    "$$\n",
    "min_{w,b} L(w, b) =  - \\sum_{x_i \\in M} y_i (w \\cdot x_i + b)\n",
    "$$\n",
    "\n",
    "Perceptron learning is driven by misclassification: we first choose initial values $w_0$, $b_0$, then repeatedly minimize the objective with gradient descent. Here we use the very widely used **[stochastic gradient descent](https://blog.csdn.net/zbc1090549839/article/details/38149561)**; let us first contrast it with ordinary (batch) gradient descent:\n",
    "\n",
    ">A typical loss function (least squares over $m$ training samples; the factor $\\frac{1}{2m}$ makes the derivative come out clean)\n",
    "$$\n",
    "J(\\theta)=\\frac{1}{2m}\\sum_{i=1}^{m}(h_\\theta(x^{(i)})-y^{(i)})^2\\\\\n",
    "h_\\theta(x)=\\theta_0+\\theta_1x_1+...+\\theta_nx_n\n",
    "$$\n",
    "Batch update rule (the partial derivative is worked out in the logistic regression notes):\n",
    "$$\n",
    "\\theta_j=\\theta_j-\\alpha\\frac{\\partial J(\\theta)}{\\partial\\theta_j}=\\theta_j-\\frac{\\alpha}{m}\\sum_{i=1}^{m}(h_\\theta(x^{(i)})-y^{(i)})x_j^{(i)}\n",
    "$$\n",
    "With batch gradient descent each update costs $O(m)$, growing linearly with the number of samples, so every iteration becomes expensive on large training sets. SGD instead draws a single sample $i$ at random and updates with it alone, cutting the cost per update from $O(m)$ to $O(1)$:\n",
    "$$\n",
    "\\theta_j=\\theta_j-\\alpha(h_\\theta(x^{(i)})-y^{(i)})x_j^{(i)}\n",
    "$$\n",
    "Moreover, the stochastic gradient is an unbiased estimate of the full gradient $\\frac{\\partial J(\\theta)}{\\partial\\theta}$: on average, it is a reasonably good estimate of the true gradient."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5c08d7ef",
   "metadata": {},
   "source": [
    "If the set $M$ of misclassified points is held fixed, the gradient of $L(w,b)$ is\n",
    "\n",
    "$$\n",
    "\\triangledown_w L(w, b) = - \\sum_{x_i \\in M} y_i x_i \\\\\n",
    "\\triangledown_b L(w, b) = - \\sum_{x_i \\in M} y_i \\\\\n",
    "$$\n",
    "\n",
    "On each iteration we randomly pick a single misclassified point $(x_i,y_i)$ and update with it, rather than updating the parameters with every point in turn. The updates of $w$ and $b$ also have an intuitive geometric reading: when an instance is misclassified, $w$ and $b$ are adjusted so that the separating hyperplane moves toward the misclassified point, reducing that point's distance to the hyperplane, until the point is correctly classified.\n",
    "\n",
    "$$\n",
    "w = w + \\eta y_i x_i \\\\\n",
    "b = b + \\eta y_i\n",
    "$$\n",
    "\n",
    "Note that the gradients carry a minus sign, which cancels the minus sign in the gradient descent update, so the rules above add.\n",
    "\n",
    "Here $0 < \\eta \\leq 1$ is the step size, also called the learning rate. If the step size is too large, descent is fast but may overshoot the minimum; if it is too small, training takes too long.\n",
    "\n",
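    "A single update on one misclassified point can be traced numerically ($\\eta$, the point, and the initial parameters below are made-up illustrative values):\n",
    "\n",
    "```python\n",
    "eta = 0.5\n",
    "w, b = [0.0, 0.0], 0.0\n",
    "x, y = (3, 1), -1  # misclassified: y * (w . x + b) = 0 <= 0\n",
    "\n",
    "# w <- w + eta * y * x ;  b <- b + eta * y\n",
    "w = [wi + eta * y * xi for wi, xi in zip(w, x)]\n",
    "b = b + eta * y\n",
    "print(w, b)  # prints [-1.5, -0.5] -0.5\n",
    "```\n",
    "\n",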
    "* **Algorithm summary (learning uses the labels, so the perceptron is still a supervised method)**\n",
    "\n",
    "Input: $T=\\{(x_1,y_1),(x_2,y_2),...,(x_N,y_N)\\}$\n",
    "\n",
    "where $x_i \\in X = R^n$, $y_i \\in Y = \\{-1, +1\\}$, $i=1,2,...,N$; learning rate $\\eta$\n",
    "\n",
    "Output: $w, b$; the perceptron model $f(x)=sign(w \\cdot x+b)$\n",
    "\n",
    "1. Initialize $w_0$, $b_0$\n",
    "\n",
    "2. Pick a sample $(x_i, y_i)$ from the training set\n",
    "\n",
    "3. If $y_i(w \\cdot x_i+b) \\leq 0$:\n",
    "\n",
    "    $w = w + \\eta y_i x_i$\n",
    "    \n",
    "    $b = b + \\eta y_i$\n",
    "\n",
    "4. If every sample is correctly classified, or the number of iterations exceeds the preset limit, stop\n",
    "\n",
    "5. Otherwise, go back to step (2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0d3d5960",
   "metadata": {},
   "source": [
    "### 4. Implementation\n",
    "\n",
    "A simple flowchart\n",
    "\n",
    "![processmap](images/processmap.jpg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "60bef45c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "update weight and bias:  1.0 3.0 0.5\n",
      "update weight and bias:  -2.0 2.0 0.0\n",
      "w =  [-2.0, 2.0]\n",
      "b =  0.0\n",
      "ground_truth:  [1, 1, 1, 1, -1, -1, -1, -1]\n",
      "predicted:     [1, 1, 1, 1, -1, -1, -1, -1]\n"
     ]
    }
   ],
   "source": [
    "import random\n",
    "import numpy as np\n",
    "\n",
    "# sign function: +1 for positive input, -1 otherwise\n",
    "def sign(v):\n",
    "    if v > 0:  return 1\n",
    "    else:      return -1\n",
    "    \n",
    "def perceptron_train(train_data, eta=0.5, n_iter=100):\n",
    "    weight = [0, 0]  # weights\n",
    "    bias = 0  # bias\n",
    "    learning_rate = eta  # learning rate\n",
    "\n",
    "    train_num = n_iter  # number of iterations\n",
    "\n",
    "    for i in range(train_num):\n",
    "        #FIXME: randomly choosing samples one at a time is slow\n",
    "        # random.choice already guarantees a random pick on every iteration\n",
    "        train = random.choice(train_data)\n",
    "        x1, x2, y = train\n",
    "        predict = sign(weight[0] * x1 + weight[1] * x2 + bias)  # model output\n",
    "        #print('train data: x: (%2d, %2d) y: %2d ==> predict: %2d' % (x1, x2, y, predict))\n",
    "          \n",
    "        if y * predict <= 0:  # misclassified point\n",
    "            weight[0] = weight[0] + learning_rate * y * x1  # update weights\n",
    "            weight[1] = weight[1] + learning_rate * y * x2\n",
    "            bias      = bias      + learning_rate * y       # update bias\n",
    "            print(\"update weight and bias: \", \n",
    "                  weight[0], weight[1], bias)\n",
    "\n",
    "    #print(\"stop training: \", weight[0], weight[1], bias)\n",
    "\n",
    "    return weight, bias\n",
    "\n",
    "def perceptron_pred(data, w, b):\n",
    "    # output the final predictions\n",
    "    y_pred = []\n",
    "    for d in data:\n",
    "        x1, x2, y = d\n",
    "        yi = sign(w[0]*x1 + w[1]*x2 + b)\n",
    "        y_pred.append(yi)\n",
    "        \n",
    "    return y_pred\n",
    "\n",
    "# set training data\n",
    "train_data = np.array([[1, 3,  1], [2, 5,  1], [3, 8,  1], [2, 6,  1], \n",
    "                       [3, 1, -1], [4, 1, -1], [6, 2, -1], [7, 3, -1]])\n",
    "\n",
    "# do training\n",
    "w, b = perceptron_train(train_data)\n",
    "print(\"w = \", w)\n",
    "print(\"b = \", b)\n",
    "\n",
    "# predict \n",
    "y_pred = perceptron_pred(train_data, w, b)\n",
    "\n",
    "# compare the ground-truth labels with the predictions\n",
    "print(\"ground_truth: \", list(train_data[:, 2]))\n",
    "print(\"predicted:    \", y_pred)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
