{
 "nbformat": 4,
 "nbformat_minor": 2,
 "metadata": {
  "language_info": {
   "name": "python",
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "version": "3.7.4-final"
  },
  "orig_nbformat": 2,
  "file_extension": ".py",
  "mimetype": "text/x-python",
  "name": "python",
  "npconvert_exporter": "python",
  "pygments_lexer": "ipython3",
  "version": 3,
  "kernelspec": {
   "name": "python37432bit8d8e1828c3004b5281cefc61c8c7a538",
   "display_name": "Python 3.7.4 32-bit"
  }
 },
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "+ Regression analysis builds an equation to model how two or more variables are related.\n",
     "+ The variable being predicted is called the dependent variable, or the output.\n",
     "+ The variable used to make the prediction is called the independent variable, or the input.\n",
     "+ Simple linear regression involves one independent variable and one dependent variable, and models the relationship between them with a straight line.\n",
     "+ If there are two or more independent variables, the method is called multiple regression analysis."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "+ Simple linear regression: $h_{\\theta}(x)=\\theta _0 + \\theta _1x$\n",
     "+ The graph of this equation is a straight line, called the regression line, where $\\theta_1$ is the slope and $\\theta_0$ is the intercept."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Cost function:\n",
     "+ Least squares\n",
     "+ For a true value $y$ and a predicted value $h_{\\theta}(x)$, the squared error is $(y - h_{\\theta}(x))^2$.\n",
     "+ Find the parameters that minimize the sum of squared errors:\n",
     "$J(\\theta _0,\\theta _1)= \\frac{1}{2m} \\sum  \\limits_{i=1}^{{m}}(y^{(i)} - h_{\\theta}(x^{(i)}))^2$\n",
     "+ The correlation coefficient measures the strength of a linear relationship:<br>\n",
     "$r_{xy}=\\frac{\\sum(X_i-\\overline{X})(Y_i-\\overline{Y})}{\\sqrt{\\sum(X_i-\\overline{X})^2\\sum(Y_i-\\overline{Y})^2}}$"
   ]
  },
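   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A quick numerical sketch of the two formulas above on a small made-up dataset (the data and the parameter values are purely illustrative): compute the cost $J(\\theta_0,\\theta_1)$ and the correlation coefficient $r_{xy}$ with NumPy."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "x = np.array([1.0, 2.0, 3.0, 4.0])\n",
     "y = np.array([2.0, 4.1, 5.9, 8.0])  # roughly y = 2x\n",
     "theta0, theta1 = 0.0, 2.0  # illustrative parameter values\n",
     "\n",
     "# cost J = (1/2m) * sum((y - h(x))^2)\n",
     "m = len(x)\n",
     "J = np.sum((y - (theta0 + theta1 * x)) ** 2) / (2 * m)\n",
     "\n",
     "# correlation coefficient r_xy\n",
     "r = np.sum((x - x.mean()) * (y - y.mean())) / np.sqrt(\n",
     "    np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2))\n",
     "print(J, r)"
    ]
   },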
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The correlation coefficient describes the linear relationship between two variables, while the coefficient of determination $R^2$ has a wider<br>\n",
     "scope: it can also describe nonlinear relationships and relationships involving two or more independent variables, so it can be used to evaluate a model.<br>\n",
     "Total sum of squares (SST): $\\sum_{i=1}^{n}(y_i-\\overline{y})^2$<br>\n",
     "Regression sum of squares (SSR): $\\sum_{i=1}^{n}(\\widehat{y}_i-\\overline{y})^2$<br>\n",
     "Residual sum of squares (SSE): $\\sum_{i=1}^{n}(y_i-\\widehat{y}_i)^2$<br>\n",
     "The three are related by SST = SSR + SSE.<br>\n",
     "Coefficient of determination: $R^2=\\frac{SSR}{SST}=1-\\frac{SSE}{SST}$"
   ]
  },
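   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The decomposition above can be checked numerically (the small dataset below and its least-squares fit are purely illustrative): for a least-squares line with an intercept, SST = SSR + SSE holds exactly."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])\n",
     "y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])\n",
     "\n",
     "k, b = np.polyfit(x, y, 1)  # least-squares slope and intercept\n",
     "y_hat = k * x + b\n",
     "\n",
     "SST = np.sum((y - y.mean()) ** 2)      # total sum of squares\n",
     "SSR = np.sum((y_hat - y.mean()) ** 2)  # regression sum of squares\n",
     "SSE = np.sum((y - y_hat) ** 2)         # residual sum of squares\n",
     "R2 = 1 - SSE / SST\n",
     "print(SST, SSR + SSE, R2)"
    ]
   },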
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Gradient descent:<br>\n",
     "while {<br>\n",
     "${\\color{Red}{\\theta _j := \\theta _j - \\alpha \\frac{\\partial}{\\partial \\theta _j}J(\\theta _0,\\theta _1) \\quad (\\text{for } j=0 \\text{ and } j=1)}}$<br>\n",
     "}<br>\n",
     "Cost function:<br>\n",
     "$J(\\theta _0,\\theta _1)= \\frac{1}{2m}\\sum  \\limits_{i=1}^{{m}}( h_{\\theta}(x^{(i)})-y^{(i)})^2$<br>\n",
     "Taking partial derivatives:<br>\n",
     "$\\frac{\\partial}{\\partial \\theta _j}J(\\theta _0,\\theta _1)=\\left\\{\\begin{matrix}\n",
     " \\frac{1}{m}\\sum  \\limits_{i=1}^{{m}}( h_{\\theta}(x^{(i)})-y^{(i)})  & j=0 \\\\ \n",
     " \\frac{1}{m}\\sum  \\limits_{i=1}^{{m}}( h_{\\theta}(x^{(i)})-y^{(i)})x_1^{(i)}  & j=1\n",
     "\\end{matrix}\\right.$<br>\n",
     "Algorithm:<br>\n",
     "while() {<br>\n",
     "$\\theta _0 := \\theta _0-\\alpha  \\frac{1}{m}\\sum  \\limits_{i=1}^{{m}}( h_{\\theta}(x^{(i)})-y^{(i)})\\\\\n",
     "\\theta _1 := \\theta _1-\\alpha  \\frac{1}{m}\\sum  \\limits_{i=1}^{{m}}( h_{\\theta}(x^{(i)})-y^{(i)})x_1^{(i)}$\n",
     "}<br>\n",
     "Multiple linear regression:<br>\n",
     "$\\text{Regression equation: } h_\\theta (x) = \\theta ^Tx = \\theta_0 x_0 + \\theta_1 x_1 + \\cdots + \\theta_n x_n \\quad (x_0 = 1) \\\\\n",
     "\\text{Cost function: } J(\\theta _0,\\theta _1,\\cdots, \\theta _n) =\\frac{1}{2m}\\sum  \\limits_{i=1}^{{m}}( h_{\\theta}(x^{(i)})-y^{(i)})^2 \\\\\n",
     "\\text{Algorithm: } \\\\\n",
     "while()\\{\\\\\n",
     "\\qquad \\theta _j := \\theta _j -\\alpha  \\frac{1}{m}\\sum  \\limits_{i=1}^{{m}}( h_{\\theta}(x^{(i)})-y^{(i)})x_j^{(i)} \\\\\n",
     "\\}$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "import numpy as np\n",
     "import matplotlib.pyplot as plt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "data = np.genfromtxt(\"data.csv\", delimiter=\",\")  # load the data\n",
    "x_data = data[:,0]\n",
    "y_data = data[:,1]\n",
    "plt.scatter(x_data,y_data)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# least-squares cost\n",
     "def compute_error(b, k, x_data, y_data):\n",
     "    totalError = 0\n",
     "    for i in range(0, len(x_data)):\n",
     "        totalError += (y_data[i] - (k * x_data[i] + b)) ** 2\n",
     "    return totalError / float(len(x_data)) / 2.0\n",
     "\n",
     "def gradient_descent_runner(x_data, y_data, lr=0.0001, epochs=50):\n",
     "    b = 0  # intercept\n",
     "    k = 0  # slope\n",
     "    m = float(len(x_data))  # number of samples\n",
     "    for i in range(epochs):  # run epochs iterations\n",
     "        b_grad = 0\n",
     "        k_grad = 0\n",
     "        # sum the per-sample gradients, then average\n",
     "        for j in range(len(x_data)):  # range needs an int, not the float m\n",
     "            z = k * x_data[j] + b - y_data[j]\n",
     "            b_grad += z\n",
     "            k_grad += z * x_data[j]\n",
     "        # update b and k\n",
     "        b = b - lr * b_grad / m\n",
     "        k = k - lr * k_grad / m\n",
     "        # plot every 5 epochs\n",
     "        # if i % 5 == 0:\n",
     "        #     print(\"epochs:\", i)\n",
     "        #     plt.plot(x_data, y_data, 'b.')\n",
     "        #     plt.plot(x_data, k*x_data + b, 'r')\n",
     "        #     plt.show()\n",
     "    return b, k"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "def linearRegression2(x, y, alpha=0.0013, maxCycle=70, epsilon=1e-6):\n",
     "    x = np.ravel(x)\n",
     "    y = np.mat(y).reshape(-1, 1)  # column vector (m, 1)\n",
     "    m = len(x)\n",
     "    X = np.mat(np.column_stack((np.ones(m), x)))  # design matrix [1, x]\n",
     "    weights = np.mat(np.zeros((2, 1)))  # [theta0; theta1]\n",
     "    count = 0\n",
     "    while True:\n",
     "        count += 1\n",
     "        err = X * weights - y  # (m, 1) residuals\n",
     "        weights = weights - alpha / m * (X.T * err)  # gradient step\n",
     "        error = np.sum(np.array(err) ** 2)\n",
     "        if error < epsilon or count > maxCycle:\n",
     "            break\n",
     "    return weights, error"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "b = 0\n",
     "k = 0\n",
     "print(\"Starting b = {0}, k = {1}, error = {2}\".format(b, k, compute_error(b, k, x_data, y_data)))\n",
     "print(\"Running...\")\n",
     "b, k = gradient_descent_runner(x_data, y_data)\n",
     "print(\"After {0} iterations b = {1}, k = {2}, error = {3}\".format(50, b, k, compute_error(b, k, x_data, y_data)))\n",
     "\n",
     "# plot the fitted line\n",
     "# plt.plot(x_data, y_data, 'b.')\n",
     "# plt.plot(x_data, k*x_data + b, 'r')\n",
     "# plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Normal equation method:<br>\n",
     "Set $\\frac{\\partial}{\\partial \\theta _j}J(\\theta) = 0$ for every $j$ and solve for $\\theta _j$.<br>\n",
     "To rewrite the cost function $J(\\theta)=\\dfrac{1}{2m}\\sum\\limits_{i=1}^m(h_\\theta(x^{(i)})-y^{(i)})^2$ as matrix multiplication, first define some matrices:<br>\n",
    "$Y = \\begin{bmatrix}\n",
    "(y^{(1)}) \\\\ (y^{(2)}) \\\\ \\vdots \\\\ (y^{(m)})\n",
    "\\end{bmatrix} \\\\\n",
    "X=\\begin{bmatrix}\n",
    "(x^{(1)})^T \\\\ (x^{(2)})^T \\\\ \\vdots \\\\ (x^{(m)})^T\n",
    "\\end{bmatrix}\n",
    "=\\begin{bmatrix}\n",
    "1 & x_1^{(1)} & x_2^{(1)} & \\cdots & x_n^{(1)} \\\\\n",
    "1 & x_1^{(2)} & x_2^{(2)} & \\cdots & x_n^{(2)} \\\\\n",
    "\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
    "1 & x_1^{(m)} & x_2^{(m)} & \\cdots & x_n^{(m)}\n",
    "\\end{bmatrix}$\n",
     "Since $h_\\theta(x^{(i)})=\\theta^Tx^{(i)}=(x^{(i)})^T\\theta$, it follows that:<br>\n",
    "$X \\theta - Y =\n",
    "\\begin{bmatrix}\n",
    "(x^{(1)})^T\\theta \\\\\n",
    "(x^{(2)})^T\\theta \\\\\n",
    "\\vdots\\\\\n",
    "(x^{(m)})^T\\theta\n",
    "\\end{bmatrix}$-$\\begin{bmatrix}\n",
    "(y^{(1)}) \\\\ (y^{(2)}) \\\\ \\vdots \\\\ (y^{(m)})\n",
    "\\end{bmatrix}$=$\\begin{bmatrix}\n",
    "(x^{(1)})^T\\theta - (y^{(1)}) \\\\ (x^{(2)})^T\\theta -  (y^{(2)}) \\\\ \\vdots \\\\ (x^{(m)})^T\\theta -  (y^{(m)})\n",
    "\\end{bmatrix}$<br>\n",
     "Then, using $\\sum\\limits_{i=1}^n\\phi_i^2=\\phi^T\\phi$ and dropping the constant factor $\\frac{1}{m}$ (which does not change the minimizer), $J(\\theta)$ can be written in matrix form:\n",
    "$$\n",
    "J(\\theta)=\\dfrac{1}{2} \\sum\\limits_{i=1}^m(h_\\theta(x^{(i)})-y^{(i)})^2=\\dfrac{1}{2}(X\\theta - Y)^T(X\\theta-Y)\n",
    "$$\n",
     "Finally, solve $\\begin{aligned}\\mathop{\\arg\\min}_{\\theta} \\dfrac{1}{2}(X\\theta-Y)^T(X\\theta-Y) \\end{aligned}$.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Therefore\n",
    "$$\n",
    "\\begin{aligned}\n",
    "\\nabla_\\theta J(\\theta) &= \\nabla_\\theta \\ \\dfrac{1}{2}(X\\theta-Y)^T(X\\theta-Y) \\\\\n",
    "&=\\dfrac{1}{2} \\nabla_\\theta \\ (\\theta^TX^T-Y^T)(X\\theta-Y) \\\\\n",
    "&=\\dfrac{1}{2} \\nabla_\\theta \\ (\\theta^TX^TX\\theta-\\theta^TX^TY-Y^TX\\theta+Y^TY)\\\\\n",
    "&=\\dfrac{1}{2}(2X^TX\\theta-2X^TY) \\\\\n",
    "&=X^TX\\theta-X^TY\n",
    "\\end{aligned}\n",
    "$$\n",
     "Setting this gradient to zero gives the normal equation $X^TX\\theta=X^TY$, so $\\theta=(X^TX)^{-1}X^TY$. In practice $X^TX$ is rarely singular, and even when it is, the problem can be handled by first applying feature selection or regularization to the original data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# solve for the regression parameters with the normal equation\n",
     "def weights(xArr, yArr):\n",
     "    xMat = np.mat(xArr)\n",
     "    yMat = np.mat(yArr)  # yArr is expected as a column vector\n",
     "    xTx = xMat.T*xMat  # matrix multiplication\n",
     "    # if the determinant is 0, the matrix has no inverse\n",
     "    if np.linalg.det(xTx) == 0.0:\n",
     "        print(\"This matrix cannot do inverse\")\n",
     "        return\n",
     "    ws = xTx.I*xMat.T*yMat  # xTx.I is the inverse of xTx\n",
     "    return ws"
   ]
  },
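   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A minimal check of the `weights` function above on made-up data (assumptions, for illustration only: `xArr` already contains a bias column and `yArr` is passed as a column vector). The result should agree with NumPy's least-squares solver."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "np.random.seed(1)\n",
     "x = np.linspace(0, 1, 20)\n",
     "X = np.column_stack((np.ones_like(x), x))  # bias column + feature\n",
     "y = (3 + 2 * x + 0.01 * np.random.randn(20)).reshape(-1, 1)\n",
     "\n",
     "ws = weights(X, y)  # normal equation solution\n",
     "ref, *_ = np.linalg.lstsq(X, y, rcond=None)  # NumPy reference\n",
     "print(np.ravel(ws), np.ravel(ref))"
    ]
   },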
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Gradient descent:\n",
     "```\n",
     "Cons:\n",
     "requires choosing a suitable learning rate\n",
     "requires many iterations\n",
     "only approximates the optimal solution\n",
     "\n",
     "Pros:\n",
     "works well even when the number of\n",
     "features is very large\n",
     "```\n",
     "Normal equation method:\n",
     "```\n",
     "Pros:\n",
     "no learning rate to choose\n",
     "no iteration\n",
     "gives the global optimum exactly\n",
     "Cons:\n",
     "must compute (X^T X)^-1\n",
     "time complexity is roughly O(n^3),\n",
     "where n is the number of features\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Preventing overfitting:\n",
     "+ 1. Reduce the number of features\n",
     "+ 2. Increase the amount of data\n",
     "+ 3. Regularization\n",
     "    + L2: $J(\\theta )=\\frac{1}{2m}\\left [\\sum\\limits_{i=1}^m(h_\\theta(x^{(i)})-y^{(i)})^2+\\lambda \\sum\\limits_{j=1}^{n}\\theta _j^2 \\right ]$\n",
     "    + L1: $J(\\theta )=\\frac{1}{2m}\\left [\\sum\\limits_{i=1}^m(h_\\theta(x^{(i)})-y^{(i)})^2+\\lambda \\sum\\limits_{j=1}^{n} \\left |\\theta _j \\right | \\right ]$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Ridge regression:<br>\n",
     "$\\theta=(X^TX)^{-1}X^TY$<br>\n",
     "If the data has more features than samples (n features, m samples,\n",
     "with n > m), then computing $(X^TX)^{-1}$ fails: $X^TX$ is not full rank, so it is not invertible.<br>\n",
     "To solve this problem, statisticians introduced ridge regression:<br>\n",
     "$\\theta=(X^TX+\\lambda I)^{-1}X^TY$<br>\n",
     "where $\\lambda$ is the ridge coefficient and $I$ is the identity matrix (ones on the diagonal,\n",
     "zeros elsewhere). The corresponding cost function uses L2 regularization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# ridge regression: solve for the parameters with the normal equation\n",
     "def weights(xArr, yArr, lam=0.2):\n",
     "    xMat = np.mat(xArr)\n",
     "    yMat = np.mat(yArr)  # yArr is expected as a column vector\n",
     "    xTx = xMat.T*xMat  # matrix multiplication\n",
     "    rxTx = xTx + np.eye(xMat.shape[1])*lam\n",
     "    # if the determinant is 0, the matrix has no inverse\n",
     "    if np.linalg.det(rxTx) == 0.0:\n",
     "        print(\"This matrix cannot do inverse\")\n",
     "        return\n",
     "    ws = rxTx.I*xMat.T*yMat  # rxTx.I is the inverse of rxTx\n",
     "    return ws"
   ]
  },
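   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A small sketch of the point above, on synthetic data and assuming the ridge `weights` function from the previous cell: with more features than samples, $X^TX$ is singular, yet the ridge system is still solvable."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "np.random.seed(0)\n",
     "X = np.random.randn(5, 8)  # m=5 samples, n=8 features, n > m\n",
     "y = np.random.randn(5, 1)\n",
     "\n",
     "print(np.linalg.matrix_rank(X.T @ X))  # rank <= 5 < 8: X^T X is singular\n",
     "ws = weights(X, y, lam=0.2)  # ridge still produces a solution\n",
     "print(np.ravel(ws))"
    ]
   },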
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "LASSO:\n",
     "+ Builds a more parsimonious model through a first-order (L1) penalty: it sets the coefficients\n",
     "of some variables exactly to zero (ridge regression almost never estimates a coefficient as exactly zero,\n",
     "which makes variable selection difficult), giving the model strong interpretability.\n",
     "+ Handles data with multicollinearity well; like ridge regression, it is a biased estimator.\n",
     "+ Its cost function uses L1 regularization."
   ]
  }
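   ,
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The variable-selection behavior described above can be sketched with a minimal coordinate-descent LASSO in NumPy. This is a simplified illustration, not a production solver: there is no intercept, the features are assumed roughly standardized, and the data are made up. The soft-thresholding step is what sets small coefficients exactly to zero."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "def soft_threshold(rho, lam):\n",
     "    # shrink toward zero; exactly zero when |rho| <= lam\n",
     "    return np.sign(rho) * max(abs(rho) - lam, 0.0)\n",
     "\n",
     "def lasso_cd(X, y, lam, n_iter=200):\n",
     "    m, n = X.shape\n",
     "    theta = np.zeros(n)\n",
     "    for _ in range(n_iter):\n",
     "        for j in range(n):\n",
     "            # partial residual with feature j removed\n",
     "            r = y - X @ theta + X[:, j] * theta[j]\n",
     "            rho = X[:, j] @ r / m\n",
     "            z = X[:, j] @ X[:, j] / m\n",
     "            theta[j] = soft_threshold(rho, lam) / z\n",
     "    return theta\n",
     "\n",
     "np.random.seed(0)\n",
     "X = np.random.randn(50, 4)\n",
     "y = X @ np.array([2.0, 0.0, 0.0, -1.5])  # only features 0 and 3 matter\n",
     "print(lasso_cd(X, y, lam=0.1))  # coefficients 1 and 2 are driven to (near) zero"
    ]
   }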
 ]
}