{
 "nbformat": 4,
 "nbformat_minor": 2,
 "metadata": {
  "language_info": {
   "name": "python",
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   }
  },
  "orig_nbformat": 2,
  "file_extension": ".py",
  "mimetype": "text/x-python",
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
  "version": 3
 },
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Logistic regression:\n",
     "+ A supervised binary-classification model\n",
     "+ The hypothesis (prediction) function is $h_\\theta (x) = g(\\theta ^T x)$\n",
     "+ $g(x)$ is the sigmoid function: $g(x)=\\frac{1}{1+e^{-x}}$\n",
     "+ The sigmoid maps every sample point into the interval $(0,1)$\n",
     "+ The sigmoid is continuous and differentiable, with derivative $g'(x)=g(x)(1-g(x))$"
   ]
  },
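  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check of the derivative identity above, the cell below (a minimal sketch, assuming only NumPy) compares $g'(x)=g(x)(1-g(x))$ against a central finite-difference estimate:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1.0/(1 + np.exp(-x))\n",
    "\n",
    "x = np.linspace(-5, 5, 11)\n",
    "analytic = sigmoid(x) * (1 - sigmoid(x))  # g'(x) = g(x)(1 - g(x))\n",
    "eps = 1e-6\n",
    "# central finite-difference approximation of the derivative\n",
    "numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)\n",
    "print(np.max(np.abs(analytic - numeric)))  # should be tiny"
   ]
  },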
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Per-sample cost:<br>\n",
     "$Cost(h_\\theta (x),y)=\\left\\{\\begin{matrix}\n",
     "-\\log(h_\\theta (x)) & y=1\\\\ \n",
     " -\\log(1-h_\\theta (x))& y=0 \n",
     "\\end{matrix}\\right.$<br>\n",
     "Averaging over all $m$ samples gives the cost function:<br>\n",
     "$J(\\theta )= -\\frac{1}{m}\\left [ \\sum\\limits_{i=1}^{m}y^{(i)}\\log h_\\theta (x^{(i)})+(1-y^{(i)})\\log (1-h_\\theta (x^{(i)}))\\right ]$"
   ]
  },
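  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Evaluating the cost function above on some made-up predictions and labels (a minimal sketch, assuming only NumPy):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "h = np.array([0.9, 0.2, 0.8, 0.4])  # hypothetical predictions h_theta(x)\n",
    "y = np.array([1, 0, 1, 0])          # corresponding true labels\n",
    "# J(theta) = -(1/m) * sum( y*log(h) + (1-y)*log(1-h) )\n",
    "J = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))\n",
    "print(J)  # confident, correct predictions keep the cost low"
   ]
  },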
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Minimizing the cost function\n",
     "\n",
     "The cost is minimized with gradient descent; in logistic regression the iterative update takes the form\n",
     "$$\n",
     "\\theta_j:=\\theta_j-\\alpha\\nabla_{\\theta_j} J(\\theta)\n",
     "$$\n",
     "where $\\alpha \\in (0,1]$ is the learning rate. To implement the algorithm we first need the gradient of the cost $J(\\theta)$ with respect to each parameter $\\theta_j$:\n",
    "$$\n",
    "\\begin{aligned}\n",
    "\\nabla_{\\theta_j} J(\\theta) &=- \\dfrac{1}{m}\\nabla_{\\theta_j}\\sum\\limits_{i=1}^m \\left\\{y^{(i)} \\cdot (\\log h_\\theta(x^{(i)})) +(1-y^{(i)})\\cdot \\log(1-h_\\theta(x^{(i)}))\\right\\}\\\\\n",
    "&=- \\dfrac{1}{m}\\sum\\limits_{i=1}^m\\left( y^{(i)} \\cdot \\dfrac{1}{h_\\theta(x^{(i)})}\\nabla_{\\theta_j}h_\\theta(x^{(i)}) \n",
    " - (1-y^{(i)})\\cdot \\dfrac{1}{1-h_\\theta(x^{(i)})} \\nabla_{\\theta_j}h_\\theta(x^{(i)})\\right) \\\\\n",
    " &=- \\dfrac{1}{m}\\sum\\limits_{i=1}^m\\left( y^{(i)} \\cdot \\dfrac{1}{h_\\theta(x^{(i)})} \n",
    " - (1-y^{(i)})\\cdot \\dfrac{1}{1-h_\\theta(x^{(i)})}\\right)\\cdot\\nabla_{\\theta_j}h_\\theta(x^{(i)}) \\\\\n",
    " &=- \\dfrac{1}{m}\\sum\\limits_{i=1}^m\\left( y^{(i)} \\cdot \\dfrac{1}{h_\\theta(x^{(i)})} \n",
    " - (1-y^{(i)})\\cdot \\dfrac{1}{1-h_\\theta(x^{(i)})}\\right)\\cdot h_\\theta(x^{(i)})\\cdot(1-h_\\theta(x^{(i)}))\\nabla_{\\theta_j}\\theta^Tx^{(i)}\\\\\n",
    " &=- \\dfrac{1}{m}\\sum\\limits_{i=1}^m \\left( y^{(i)}\\cdot(1-h_\\theta(x^{(i)})) -  (1-y^{(i)})\\cdot h_\\theta(x^{(i)})\\right)x_j^{(i)}\\\\\n",
    " &=- \\dfrac{1}{m}\\sum\\limits_{i=1}^m \\left( y^{(i)} -   h_\\theta(x^{(i)})\\right)x_j^{(i)}\\\\\n",
    "&=\\dfrac{1}{m}\\sum\\limits_{i=1}^m \\left(h_\\theta(x^{(i)}) - y^{(i)}\\right)x_j^{(i)}\n",
    "\\end{aligned}\n",
    "$$\n",
     "Hence, training on all $m$ samples, the complete batch gradient-descent update for logistic regression is\n",
     "$$\n",
     "\\theta_j:=\\theta_j-\\frac{\\alpha}{m} \\sum\\limits_{i=1}^m(h_\\theta(x^{(i)})-y^{(i)})x_j^{(i)} \\qquad (j = 0, 1, \\ldots, n)\n",
     "$$\n",
     "Note that this iterative update has exactly the same form as the one for linear regression, **but the two models use different hypothesis functions $h_\\theta(x)$**."
   ]
  },
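  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The closed-form gradient $\\nabla_{\\theta_j} J(\\theta)=\\frac{1}{m}\\sum_i (h_\\theta(x^{(i)})-y^{(i)})x_j^{(i)}$ derived above can be verified numerically on small random data (a minimal sketch; all names here are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1.0/(1 + np.exp(-x))\n",
    "\n",
    "def J(theta, X, y):\n",
    "    h = sigmoid(X @ theta)\n",
    "    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "X = rng.normal(size=(20, 3))\n",
    "y = (rng.random(20) > 0.5).astype(float)\n",
    "theta = rng.normal(size=3)\n",
    "\n",
    "# analytic gradient: (1/m) X^T (h - y)\n",
    "analytic = X.T @ (sigmoid(X @ theta) - y) / len(y)\n",
    "# central finite differences along each coordinate direction\n",
    "eps = 1e-6\n",
    "numeric = np.array([(J(theta + eps*e, X, y) - J(theta - eps*e, X, y)) / (2*eps)\n",
    "                    for e in np.eye(3)])\n",
    "print(np.max(np.abs(analytic - numeric)))  # should be tiny"
   ]
  },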
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "import numpy as np\n",
     "from sklearn import preprocessing\n",
     "scale = True  # whether to standardize the data\n",
     "\n",
     "def sigmoid(x):\n",
     "    return 1.0/(1 + np.exp(-x))\n",
     "\n",
     "def cost(xMat, yMat, ws):\n",
     "    # cross-entropy cost J(theta)\n",
     "    z = sigmoid(xMat*ws)\n",
     "    left = np.multiply(yMat, np.log(z))\n",
     "    right = np.multiply(1 - yMat, np.log(1 - z))\n",
     "    return np.sum(left + right) / -len(xMat)\n",
     "\n",
     "def gradAscent(xArr, yArr):\n",
     "    # despite the name, this performs batch gradient descent on J(theta)\n",
     "    if scale == True:\n",
     "        xArr = preprocessing.scale(xArr)\n",
     "    xMat = np.mat(xArr)\n",
     "    yMat = np.mat(yArr)\n",
     "    \n",
     "    lr = 0.001\n",
     "    epochs = 10000\n",
     "    costList = []\n",
     "    # rows = number of samples, columns = number of weights\n",
     "    m, n = np.shape(xMat)\n",
     "    # initialize the weights\n",
     "    ws = np.mat(np.ones((n,1)))\n",
     "    for i in range(epochs+1):\n",
     "        # multiply xMat by the weights, then squash with the sigmoid\n",
     "        h = sigmoid(xMat*ws)\n",
     "        # gradient of the cost: (1/m) * X^T (h - y)\n",
     "        ws_grad = xMat.T*(h - yMat)/m\n",
     "        ws = ws - lr*ws_grad\n",
     "        if i % 50 == 0:\n",
     "            costList.append(cost(xMat, yMat, ws))\n",
     "    return ws, costList"
   ]
  },
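  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal usage sketch for `gradAscent` above, on a made-up two-feature toy set (in practice `x_data`/`y_data` would come from your dataset):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# two made-up 2-D clusters: class 1 around (2, 2), class 0 around (-2, -2)\n",
    "np.random.seed(0)\n",
    "x_toy = np.vstack([np.random.randn(20, 2) + 2, np.random.randn(20, 2) - 2])\n",
    "y_toy = np.vstack([np.ones((20, 1)), np.zeros((20, 1))])\n",
    "\n",
    "ws, costList = gradAscent(x_toy, y_toy)\n",
    "print(ws)                         # learned weight vector\n",
    "print(costList[0], costList[-1])  # the cost should decrease during training"
   ]
  },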
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Precision and recall:\n",
     "+ Precision and recall are two metrics widely used in information retrieval and\n",
     "statistical classification to assess the quality of results.\n",
     "+ Roughly speaking, precision is the fraction of retrieved items that are correct,\n",
     "while recall is the fraction of all correct items that were retrieved.\n",
     "\n",
     "\n",
     "Combined metric:\n",
     "+ The most common combination is the F-measure (also called the F-score):\n",
     "+ $F_\\beta =(1+\\beta ^2)\\cdot \\frac{precision \\cdot recall}{\\beta ^2 \\cdot precision +recall}$\n",
     "+ With $\\beta =1$ this reduces to the familiar F1 score"
   ]
  },
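  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the definitions concrete, a minimal sketch computing precision, recall, and $F_\\beta$ by hand from made-up predictions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # made-up labels\n",
    "y_pred = [1, 0, 1, 0, 1, 0, 1, 0]  # made-up predictions\n",
    "\n",
    "tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives\n",
    "fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives\n",
    "fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives\n",
    "\n",
    "precision = tp / (tp + fp)  # correct fraction of retrieved items\n",
    "recall = tp / (tp + fn)     # retrieved fraction of correct items\n",
    "beta = 1.0                  # beta = 1 gives the F1 score\n",
    "f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)\n",
    "print(precision, recall, f_beta)  # 0.75 0.75 0.75"
   ]
  },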
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "from sklearn.metrics import classification_report\n",
     "\n",
     "def predict(x, ws):  # threshold the sigmoid output at 0.5\n",
     "    x = preprocessing.scale(x) if scale else x  # same standardization as training\n",
     "    return np.where(sigmoid(np.mat(x)*ws) >= 0.5, 1, 0)\n",
     "\n",
     "predictions = predict(X_data, ws)\n",
     "print(classification_report(y_data, predictions))  # precision, recall, f1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# logistic regression with sklearn\n",
     "from sklearn import linear_model\n",
     "\n",
     "logistic = linear_model.LogisticRegression()\n",
     "logistic.fit(x_data, y_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "import numpy as np\n",
     "import matplotlib.pyplot as plt\n",
     "from sklearn import linear_model\n",
     "from sklearn.preprocessing import PolynomialFeatures\n",
     "\n",
     "# define the polynomial features; degree controls how many are generated\n",
     "poly_reg = PolynomialFeatures(degree=5)\n",
     "# feature transformation\n",
     "x_poly = poly_reg.fit_transform(x_data)\n",
     "# define the logistic regression model\n",
     "logistic = linear_model.LogisticRegression()\n",
     "# train the model\n",
     "logistic.fit(x_poly, y_data)\n",
     "\n",
     "# get the range covered by the data\n",
     "x_min, x_max = x_data[:, 0].min() - 1, x_data[:, 0].max() + 1\n",
     "y_min, y_max = x_data[:, 1].min() - 1, x_data[:, 1].max() + 1\n",
     "\n",
     "# build the grid of points\n",
     "xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),\n",
     "                     np.arange(y_min, y_max, 0.02))\n",
     "\n",
     "# np.r_ stacks arrays by row,\n",
     "# np.c_ stacks arrays by column:\n",
     "# >>> a = np.array([1,2,3])\n",
     "# >>> b = np.array([5,2,5])\n",
     "# >>> np.r_[a,b]\n",
     "# array([1, 2, 3, 5, 2, 5])\n",
     "# >>> np.c_[a,b]\n",
     "# array([[1, 5],\n",
     "#        [2, 2],\n",
     "#        [3, 5]])\n",
     "# >>> np.c_[a,[0,0,0],b]\n",
     "# array([[1, 0, 5],\n",
     "#        [2, 0, 2],\n",
     "#        [3, 0, 5]])\n",
     "\n",
     "# ravel, like flatten, turns a multi-dimensional array into 1-D;\n",
     "# flatten always returns a copy, while ravel returns a view when it can\n",
     "z = logistic.predict(poly_reg.transform(np.c_[xx.ravel(), yy.ravel()]))\n",
     "z = z.reshape(xx.shape)\n",
     "# filled contour plot of the decision regions\n",
     "cs = plt.contourf(xx, yy, z)\n",
     "# scatter plot of the samples\n",
     "plt.scatter(x_data[:, 0], x_data[:, 1], c=y_data)\n",
     "plt.show()\n",
     "\n",
     "print('score:', logistic.score(x_poly, y_data))"
   ]
  }
 ]
}