{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ebe5e8f9",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "# 11. Implementing and Applying Gradient Descent\n",
    "\n",
    "## 11.1 Introduction\n",
    "\n",
    "In the logistic regression experiment, we learned to solve for parameters with gradient descent. As a general-purpose optimization method, gradient descent applies to a wide range of parameter-estimation problems. To better appreciate its strengths and its workflow, this challenge uses gradient descent to solve the linear regression problem studied earlier.\n",
    "\n",
    "## 11.2 Key Points\n",
    "\n",
    "  * Solving linear regression parameters with ordinary least squares\n",
    "\n",
    "  * Solving linear regression parameters with gradient descent\n",
    "\n",
    "Remember \"linear regression\"? There we used ordinary least squares (OLS) to fit the regression parameters. Depending on how the computation is carried out, OLS can be solved either algebraically or with matrices.\n",
    "\n",
    "First, let's review the algebraic OLS solution. For the univariate linear model:\n",
    "\n",
    "$$ y(x, w) = w_0 + w_1 x \\tag{1} $$\n",
    "\n",
    "define the squared loss function:\n",
    "\n",
    "$$ f = \\sum_{i=1}^{n} \\left( y_i - \\left( w_0 + w_1 x_i \\right) \\right)^2 \\tag{2} $$\n",
    "\n",
    "Next, take the first-order partial derivatives of the squared loss:\n",
    "\n",
    "$$ \\frac{\\partial f}{\\partial w_0} = -2 \\sum_{i=1}^{n} \\left( y_i - \\left( w_0 + w_1 x_i \\right) \\right) \\tag{3a} $$\n",
    "\n",
    "$$ \\frac{\\partial f}{\\partial w_1} = -2 \\sum_{i=1}^{n} x_i \\left( y_i - \\left( w_0 + w_1 x_i \\right) \\right) \\tag{3b} $$\n",
    "\n",
    "To find $w_0$ and $w_1$, we set $\\frac{\\partial f}{\\partial w_{0}}=0$ and $\\frac{\\partial f}{\\partial w_{1}}=0$, which yields:\n",
    "\n",
    "$$\\begin{align*} w_0 &= \\frac{\\sum_{i=1}^n x_i^2 \\sum_{i=1}^n y_i - \\sum_{i=1}^n x_i \\sum_{i=1}^n x_iy_i}{n\\sum_{i=1}^n x_i^2 - \\left(\\sum_{i=1}^n x_i\\right)^2} \\tag{4a} \\\\\\\\ w_1 &= \\frac{n\\sum_{i=1}^n x_iy_i - \\sum_{i=1}^n x_i \\sum_{i=1}^n y_i}{n\\sum_{i=1}^n x_i^2 - \\left(\\sum_{i=1}^n x_i\\right)^2} \\tag{4b} \\end{align*}$$\n",
    "\n",
    "Supplement: a detailed derivation from equations (3) to (4).\n",
    "\n",
    "We start from equations (3a) and (3b) set to zero:\n",
    "\n",
    "$$\\begin{split} \\begin{align} \\frac{\\partial f}{\\partial w_0} &= -2\\sum_{i=1}^{n}(y_i - (w_0 + w_1x_i)) = 0 \\tag{3a}\\\\\\\\ \\frac{\\partial f}{\\partial w_1} &= -2\\sum_{i=1}^{n}x_i(y_i - (w_0 + w_1x_i)) = 0 \\tag{3b} \\end{align} \\end{split}$$\n",
    "\n",
    "Dividing (3a) by $-2$ and expanding the sum gives:\n",
    "\n",
    "$$ \\sum_{i=1}^{n}y_i - nw_0 - w_1\\sum_{i=1}^{n}x_i = 0 \\tag{5} $$\n",
    "\n",
    "Dividing (3b) by $-2$ and expanding likewise gives:\n",
    "\n",
    "$$ \\sum_{i=1}^{n}x_iy_i - w_0\\sum_{i=1}^{n}x_i - w_1\\sum_{i=1}^{n}x_i^2 = 0 \\tag{6} $$\n",
    "\n",
    "We now have two equations, (5) and (6), in the two unknowns $w_0$ and $w_1$, so a little algebra produces closed-form expressions for both.\n",
    "\n",
    "First, solve (5) for $w_0$, giving $w_0 = \\frac{1}{n}\\sum_{i=1}^{n}y_i - \\frac{w_1}{n}\\sum_{i=1}^{n}x_i$, and substitute this into (6):\n",
    "\n",
    "$$\\begin{split} \\begin{align} \\sum_{i=1}^{n}x_iy_i - \\left(\\frac{1}{n}\\sum_{i=1}^{n}y_i - \\frac{w_1}{n}\\sum_{i=1}^{n}x_i\\right)\\sum_{i=1}^{n}x_i - w_1\\sum_{i=1}^{n}x_i^2 &= 0\\\\\\\\ \\sum_{i=1}^{n}x_iy_i - \\frac{1}{n}\\sum_{i=1}^{n}x_i\\sum_{i=1}^{n}y_i + \\frac{w_1}{n}\\left(\\sum_{i=1}^{n}x_i\\right)^2 - w_1\\sum_{i=1}^{n}x_i^2 &= 0\\\\\\\\ \\therefore w_1 &= \\frac{n\\sum_{i=1}^{n}x_iy_i - \\sum_{i=1}^{n}x_i\\sum_{i=1}^{n}y_i}{n\\sum_{i=1}^{n}x_i^2 - \\left(\\sum_{i=1}^{n}x_i\\right)^2} \\tag{4b} \\end{align} \\end{split}$$\n",
    "\n",
    "Substituting (4b) back into (5) via $w_0 = \\frac{1}{n}\\left(\\sum_{i=1}^{n}y_i - w_1\\sum_{i=1}^{n}x_i\\right)$ and simplifying yields:\n",
    "\n",
    "$$\\begin{split} \\begin{align} w_0 &= \\frac{1}{n}\\cdot\\frac{\\sum_{i=1}^{n}y_i\\left(n\\sum_{i=1}^{n}x_i^2 - \\left(\\sum_{i=1}^{n}x_i\\right)^2\\right) - \\sum_{i=1}^{n}x_i\\left(n\\sum_{i=1}^{n}x_iy_i - \\sum_{i=1}^{n}x_i\\sum_{i=1}^{n}y_i\\right)}{n\\sum_{i=1}^{n}x_i^2 - \\left(\\sum_{i=1}^{n}x_i\\right)^2}\\\\\\\\ \\therefore w_0 &= \\frac{\\sum_{i=1}^{n}x_i^2\\sum_{i=1}^{n}y_i - \\sum_{i=1}^{n}x_i\\sum_{i=1}^{n}x_iy_i}{n\\sum_{i=1}^{n}x_i^2 - \\left(\\sum_{i=1}^{n}x_i\\right)^2} \\tag{4a} \\end{align} \\end{split}$$\n",
    "\n",
    "We have now solved the linear model's parameters algebraically. Next, let's implement the algebraic OLS solution in Python.\n",
    "\n",
    "Exercise 11.1\n",
    "\n",
    "Challenge: complete the algebraic ordinary least squares function according to equations (4a) and (4b). "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "da423a1a",
   "metadata": {},
   "outputs": [],
   "source": [
    "def ols_algebra(x, y):\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "    x -- array of the independent variable\n",
    "    y -- array of the dependent variable\n",
    "\n",
    "    Returns:\n",
    "    w1 -- slope of the linear model\n",
    "    w0 -- intercept of the linear model\n",
    "    \"\"\"\n",
    "\n",
    "    ### START CODE ### (≈ 3 lines of code)\n",
    "    w1 = None\n",
    "    w0 = None\n",
    "    ### END CODE ###\n",
    "\n",
    "    return w1, w0"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "338099cc",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Reference answer for Exercise 11.1 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "49042dbd",
   "metadata": {},
   "outputs": [],
   "source": [
    "def ols_algebra(x, y):\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "    x -- array of the independent variable\n",
    "    y -- array of the dependent variable\n",
    "\n",
    "    Returns:\n",
    "    w1 -- slope of the linear model\n",
    "    w0 -- intercept of the linear model\n",
    "    \"\"\"\n",
    "\n",
    "    ### START CODE ### (≈ 3 lines of code)\n",
    "    n = len(x)\n",
    "    w1 = (n * sum(x * y) - sum(x) * sum(y)) / (n * sum(x * x) - sum(x) * sum(x))\n",
    "    w0 = (sum(x * x) * sum(y) - sum(x) * sum(x * y)) / (n * sum(x * x) - sum(x) * sum(x))\n",
    "    ### END CODE ###\n",
    "\n",
    "    return w1, w0"
   ]
  },
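  {
   "cell_type": "markdown",
   "id": "a1f2e3d4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "As a side note, equations (4a) and (4b) can be rewritten in an equivalent, numerically friendlier mean-centered form: $w_1 = \\frac{\\sum_i (x_i-\\bar{x})(y_i-\\bar{y})}{\\sum_i (x_i-\\bar{x})^2}$ and $w_0 = \\bar{y} - w_1\\bar{x}$. Below is a minimal sketch of this variant (the function name `ols_algebra_centered` is ours, not part of the exercise): "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b5c6d7e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def ols_algebra_centered(x, y):\n",
    "    # Mean-centered form of equations (4a)/(4b): dividing the numerator and\n",
    "    # denominator of (4b) by n turns the sums into covariance-like terms\n",
    "    x_mean, y_mean = np.mean(x), np.mean(y)\n",
    "    w1 = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)\n",
    "    w0 = y_mean - w1 * x_mean  # from equation (5): w0 = y_mean - w1 * x_mean\n",
    "    return w1, w0"
   ]
  },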
  {
   "cell_type": "markdown",
   "id": "7f1d2254",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Next, we provide a set of test data. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d4fb6d3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "x = np.array([55, 71, 68, 87, 101, 87, 75, 78, 93, 73])\n",
    "y = np.array([91, 101, 87, 109, 129, 98, 95, 101, 104, 93])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a9bc0b4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Run the test "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "533389cd",
   "metadata": {},
   "outputs": [],
   "source": [
    "w1, w0 = ols_algebra(x, y)\n",
    "round(w1, 3), round(w0, 3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9917cac4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Expected output "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2e8625eb",
   "metadata": {},
   "outputs": [],
   "source": [
    "(0.718, 44.256)"
   ]
  },
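  {
   "cell_type": "markdown",
   "id": "c9d8e7f6",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "As a quick sanity check (this cell is an addition, not part of the original challenge; the names `w1_np` and `w0_np` are ours), NumPy's built-in least squares fit `np.polyfit` should agree with the algebraic solution: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# np.polyfit(x, y, 1) fits a degree-1 polynomial by least squares and\n",
    "# returns the coefficients highest degree first: (slope, intercept)\n",
    "w1_np, w0_np = np.polyfit(x, y, 1)\n",
    "round(w1_np, 3), round(w0_np, 3)"
   ]
  },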
  {
   "cell_type": "markdown",
   "id": "2178c1eb",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "This gives the final result:\n",
    "\n",
    "$$ y = 0.718 x + 44.256 $$\n",
    "\n",
    "Besides ordinary least squares, linear regression can also be solved iteratively. Here we apply the gradient descent method learned in the logistic regression experiment to the linear regression problem.\n",
    "\n",
    "We keep the same squared loss function. With the iterative approach only one step changes: instead of setting $\\frac{\\partial f}{\\partial w_{0}}=0$ and $\\frac{\\partial f}{\\partial w_{1}}=0$, we repeatedly update the parameters along the negative gradient with learning rate $lr$:\n",
    "\n",
    "$$ w_0 = w_0 - lr \\cdot \\frac{\\partial f}{\\partial w_0} \\tag{7a} $$\n",
    "\n",
    "$$ w_1 = w_1 - lr \\cdot \\frac{\\partial f}{\\partial w_1} \\tag{7b} $$\n",
    "\n",
    "Let's implement the gradient descent solution in Python.\n",
    "\n",
    "Exercise 11.2\n",
    "\n",
    "Challenge: complete the gradient descent function according to equations (3) and (7). "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4f4c0168",
   "metadata": {},
   "outputs": [],
   "source": [
    "def ols_gradient_descent(x, y, lr, num_iter):\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "    x -- array of the independent variable\n",
    "    y -- array of the dependent variable\n",
    "    lr -- learning rate\n",
    "    num_iter -- number of iterations\n",
    "\n",
    "    Returns:\n",
    "    w1 -- slope of the linear model\n",
    "    w0 -- intercept of the linear model\n",
    "    \"\"\"\n",
    "\n",
    "    ### START CODE ### (> 5 lines of code)\n",
    "    w1 = None\n",
    "    w0 = None\n",
    "    ### END CODE ###\n",
    "\n",
    "    return w1, w0"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17b7b686",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Reference answer for Exercise 11.2 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "22d34cca",
   "metadata": {},
   "outputs": [],
   "source": [
    "def ols_gradient_descent(x, y, lr, num_iter):\n",
    "    \"\"\"\n",
    "    Parameters:\n",
    "    x -- array of the independent variable\n",
    "    y -- array of the dependent variable\n",
    "    lr -- learning rate\n",
    "    num_iter -- number of iterations\n",
    "\n",
    "    Returns:\n",
    "    w1 -- slope of the linear model\n",
    "    w0 -- intercept of the linear model\n",
    "    \"\"\"\n",
    "\n",
    "    ### START CODE ### (> 5 lines of code)\n",
    "\n",
    "    w1 = 0  # initialize the slope to 0\n",
    "    w0 = 0  # initialize the intercept to 0\n",
    "\n",
    "    for i in range(num_iter):  # gradient descent iterations\n",
    "        # current predictions of the linear model\n",
    "        y_hat = (w1 * x) + w0\n",
    "        # gradients of the loss with respect to each parameter, per equations (3a) and (3b)\n",
    "        w1_gradient = -2 * sum(x * (y - y_hat))\n",
    "        w0_gradient = -2 * sum(y - y_hat)\n",
    "        # update the parameters along the negative gradient\n",
    "        w1 -= lr * w1_gradient\n",
    "        w0 -= lr * w0_gradient\n",
    "\n",
    "    ### END CODE ###\n",
    "\n",
    "    return w1, w0"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44ec2360",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Run the test "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d1529a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "w1_, w0_ = ols_gradient_descent(x, y, lr=0.00001, num_iter=100)\n",
    "round(w1_, 3), round(w0_, 3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "341787a2",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "Expected output "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "64f081b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "(1.264, 0.038)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c2e6f1b7",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "This gives the final result:\n",
    "\n",
    "$$ y = 1.264 x + 0.038 $$\n",
    "\n",
    "You will notice the iterative result differs somewhat from the exact OLS solution above, mainly in the intercept term: with only 100 iterations and a very small learning rate, gradient descent has not yet fully converged, and the intercept in particular converges slowly. Since the intercept has a relatively small effect on the fitted line over this data range, we can compare the two fits by plotting them. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ad405bb1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from matplotlib import pyplot as plt\n",
    "\n",
    "%matplotlib inline\n",
    "\n",
    "fig, axes = plt.subplots(1, 2, figsize=(15, 5))\n",
    "\n",
    "axes[0].scatter(x, y)\n",
    "axes[0].plot(np.array([50, 110]), np.array([50, 110]) * w1 + w0, \"r\")\n",
    "axes[0].set_title(\"OLS\")\n",
    "axes[1].scatter(x, y)\n",
    "axes[1].plot(np.array([50, 110]), np.array([50, 110]) * w1_ + w0_, \"r\")\n",
    "axes[1].set_title(\"Gradient descent\")"
   ]
  },
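  {
   "cell_type": "markdown",
   "id": "e1f2a3b4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
    "The remaining gap is mostly a matter of iteration count: with this small, stable learning rate, each gradient descent step lowers the loss, so running many more iterations should move the parameters, and especially the slowly converging intercept, much closer to the OLS solution. A quick check (this cell and the names `squared_loss`, `w1_more`, `w0_more` are our additions): "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a4b5c6",
   "metadata": {},
   "outputs": [],
   "source": [
    "def squared_loss(w1, w0):\n",
    "    # squared loss from equation (2)\n",
    "    return np.sum((y - (w1 * x + w0)) ** 2)\n",
    "\n",
    "# 1000x more iterations than before, same learning rate\n",
    "w1_more, w0_more = ols_gradient_descent(x, y, lr=0.00001, num_iter=100000)\n",
    "\n",
    "# the loss should drop below the 100-iteration value, and the intercept\n",
    "# should have moved toward the OLS value 44.256\n",
    "squared_loss(w1_, w0_), squared_loss(w1_more, w0_more), round(w0_more, 3)"
   ]
  },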
  {
   "cell_type": "markdown",
   "id": "6ecf121c",
   "metadata": {},
   "source": [
    "As you can see, the two fitted lines are very close.\n",
    "\n",
    "We solve linear regression with ordinary least squares because the minimum of its loss function can be found directly in closed form. Many machine learning problems, however, involve far more complex loss functions whose minima cannot be obtained analytically; they can only be approached iteratively, converging to a local or global minimum. This is why we study iterative methods such as gradient descent. "
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "main_language": "python",
   "notebook_metadata_filter": "-all"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
