{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<br>\n",
    "<center><font face=\"黑体\" size=4>《机器学习基础实践》课程实验指导书</font></center>\n",
    "<br>\n",
    "<br>\n",
    "<center><font face=\"黑体\" size=4>第1章  线性模型</font></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{1.实验目标}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "了解并掌握线性回归、岭回归、Lasso回归以及逻辑回归模型。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.实验内容}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.1 线性模型的基本形式}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "  假设数据集中的每个样本都是由$d$个属性来描述的，即对任意一个样本$\\textbf{x}_i$，可以通过一个$d$维特征向量$\\textbf{x}_i=[x_{i1},x_{i2},…,x_{id}]$表示，其中$x_{ij}$表示样本$\\textbf{x}_i$在第$j$个属性上的取值。线性模型通过将属性进行线性组合，构建模型的预测函数，即\n",
    "\n",
    "$f(\\textbf{x}_i)=w_1x_{i1}+w_2x_{i2}+...+w_dx_{id}+b$,\n",
    "\n",
    "其中$w_1,w_2,...,w_d$是线性模型的系数，$b$是线性模型的常数项。\n",
    "\n",
    "线性模型在机器学习领域有着广泛的应用，下面以分类问题为例来说明线性模型的基本结构。假设我们需要对两种水果(柠檬和苹果)进行分类，每种水果通过两个属性特征进行描述。给定训练集，我们可以将训练集中的样本通过绘制散点图进行可视化，如下图所示。图中，绿色的点表示柠檬，红色的点表示苹果。为了对这两种水果进行分类，我们可以从训练集中学习得到一个线性模型(即图中的直线，其表达式为$f(\\textbf{x})=w_1x_1+w_2x_2+b$)，把位于直线左侧的数据点分类为柠檬，把位于直线右侧的样本点分类为苹果。\n",
    "<img src=picture/图2.1.jpg>"
   ]
  },
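  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面用一段简短的示例代码演示线性模型预测函数的计算过程(示意性质，样本取值与系数取值均为假设值)："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "#假设一个由2个属性描述的样本x，以及假设的系数w和常数项b\n",
    "x = np.array([2.0, 3.0])\n",
    "w = np.array([0.5, -1.0])\n",
    "b = 1.0\n",
    "#线性模型的预测值 f(x) = w1*x1 + w2*x2 + b\n",
    "f_x = np.dot(w, x) + b\n",
    "print(f_x) #0.5*2.0 + (-1.0)*3.0 + 1.0 = -1.0"
   ]
  },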
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.2 线性回归模型的基本原理}$\n",
    "\n",
    "对于回归问题，模型的输出是连续型的数值，即要求机器学习算法将输入特征映射到实数域上。本节将介绍如何使用线性模型解决回归问题，即线性回归模型。\n",
    "\n",
    "给定训练数据集$D=\\{(\\textbf{x}_i,y_i)\\}_{i=1}^{m}$,其中$\\textbf{x}_i=[x_{i1},x_{i2},...,x_{id}]$是第$i$个样本的特征向量，$y_i \\in R$ 是其对应的输出值。线性回归模型试图从训练集$D$中学习得到一个属性的线性组合函数，将输入特征$\\textbf{x}$映射到实数域$R$上。\n",
    "\n",
    "$\\textbf{线性回归模型的基本形式}：$\n",
    "\n",
    "$f(\\textbf{x}_{i})=w_{1}x_{i1}+...+w_dx_{id}+b, f(\\textbf{x}_{i}) \\in R.$   式(2.1)\n",
    "\n",
    "$\\textbf{线性回归模型的参数}：$\n",
    "\n",
    "要确定线性回归模型，需要从训练数据集中学习得到系数$\\textbf{w}=[w_1,w_2,...,w_d]$的值以及常数项$b$的值。\n",
    "\n",
    "$\\textbf{线性回归模型参数的学习}：$\n",
    "\n",
    "为了从训练数据集中学习得到线性回归模型的参数，直观的做法就是让线性回归模型在训练样本上的预测值与训练样本的真实输出值尽可能地接近。可以采用均方误差函数来表示预测值与真实输出值之间的差异，因此从训练数据中学习线性回归模型的参数，可以转化为求解如式(2.2)所示的目标函数。\n",
    "\n",
    "$\\underset{\\textbf{w},b}{\\min} J(\\textbf{w},b)=\\frac{1}{2m}\\sum_{i=1}^{m}(f(\\textbf{x}_i)-y_i)^2$.   式(2.2)\n",
    "\n",
    "其中，$m$表示训练样本的个数，$f(\\textbf{x}_i)$表示线性回归模型的预测输出，$y_i$表示训练样本的真实输出。\n",
    "\n",
    "$\\textbf{梯度下降算法求函数极值}$\n",
    "\n",
    "线性回归模型的目标函数是一个关于系数$\\textbf{w}$和常数项$b$的二次凸函数，根据凸优化理论，可以采用梯度下降算法求得系数和常数项的最优解。梯度下降法是一种寻找函数极小值的方法。该算法在已知当前参数的情况下，沿当前点梯度的反方向，按照一定的步长，对模型的参数进行调整，使得目标函数不断接近一个极小值点。图2.2展示了梯度下降法求解函数极小值的过程。\n",
    "<img src=picture/图2.2.jpg>\n",
    "\n",
    "在梯度下降法中，求函数的极值，从函数的任意一个初始点开始，沿着函数在该点的梯度的反方向，按照一定的步长(即学习速率)迭代到下一个点，不断重复，直到函数收敛到极小值为止。\n",
    "\n",
    "梯度下降算法训练线性回归模型的流程如下所示：\n",
    "\n",
    "$\\textbf{输入：}训练集D=\\{(\\textbf{x}_i,y_i)\\}_{i=1}^{m},学习速率：\\eta, 最大迭代次数：N$\n",
    "\n",
    "$\\textbf{输出：}从训练集中学习得到的线性回归模型参数\\textbf{w}=\\{w_1,w_2,...,w_d\\}以及常数项b的值$\n",
    "\n",
    "$\\textbf{学习过程：}$\n",
    "\n",
    "1.令迭代次数$k=0$,随机初始化系数$\\textbf{w}^{(0)}=[w_{1}^{(0)},w_{2}^{(0)},...,w_{d}^{(0)}]$以及常数项$b^{(0)}$，根据式（2.2）计算当前损失函数值$J(\\textbf{w}^{(0)},b^{(0)})$;\n",
    "\n",
    "2.分别求式(2.2)所示的目标函数在点$(\\textbf{w}^{(k)},b^{(k)})$处关于系数和常数项的偏导数，\n",
    "\n",
    "$\\Delta w_j =\\frac{\\partial{J(\\textbf{w},b)}}{\\partial{w_j}}=\\frac{1}{m}\\sum_{i=1}^{m}(f(\\textbf{x}_i)-y_i)x_{ij}$  式(2.3)\n",
    "\n",
    "$\\Delta b=\\frac{\\partial{J(\\textbf{w},b)}}{\\partial{b}}=\\frac{1}{m}\\sum_{i=1}^{m}(f(\\textbf{x}_i)-y_i)$  式(2.4)\n",
    "\n",
    "3.更新系数和常数项\n",
    "\n",
    "$w_{j}^{(k+1)}= w_{j}^{(k)}-\\eta \\Delta w_j$  式(2.5)\n",
    "\n",
    "$b^{(k+1)}=b^{(k)}-\\eta \\Delta b$ 式(2.6)\n",
    "\n",
    "4.根据式（2.2）计算当前损失函数值$J(\\textbf{w}^{(k+1)},b^{(k+1)})$；\n",
    "\n",
    "5.如果$|J(\\textbf{w}^{(k+1)},b^{(k+1)})-J(\\textbf{w}^{(k)},b^{(k)})| > e$且$k<N$，则令$k=k+1$，转入步骤2继续执行；否则，执行结束。其中$e$是预先设定的收敛阈值。\n"
   ]
  },
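  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "按照上述学习流程，下面在一个人为构造的一维数据集上给出用梯度下降法训练线性回归模型的示意实现(数据、学习速率和迭代次数均为假设值)："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "#人为构造的训练集：y = 2x + 1，训练后w应接近2，b应接近1\n",
    "x_train = np.array([[1.0], [2.0], [3.0], [4.0]])\n",
    "y_train = np.array([3.0, 5.0, 7.0, 9.0])\n",
    "m = x_train.shape[0]\n",
    "\n",
    "w = np.zeros(1) #初始化系数\n",
    "b = 0.0 #初始化常数项\n",
    "eta = 0.05 #学习速率(假设值)\n",
    "\n",
    "for k in range(2000):\n",
    "    f = x_train.dot(w) + b #式(2.1)：模型预测值\n",
    "    dw = (1.0/m)*x_train.T.dot(f - y_train) #式(2.3)：系数的梯度\n",
    "    db = (1.0/m)*np.sum(f - y_train) #式(2.4)：常数项的梯度\n",
    "    w = w - eta*dw #式(2.5)：更新系数\n",
    "    b = b - eta*db #式(2.6)：更新常数项\n",
    "\n",
    "print(w, b) #约为[2.] 1.0"
   ]
  },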
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.3 线性回归模型的实现}$\n",
    "\n",
    "本节介绍如何使用Python编程语言和梯度下降算法训练线性回归模型，实现代码如下，其中有部分重要的代码需要读者按照提示补齐。本节通过实现LinearRegression类来封装线性回归模型的具体实现细节。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "jupyter": {
     "source_hidden": true
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "#导入需要使用的Python工具包\n",
    "import numpy as np\n",
    "#定义LinearRegression类来封装线性回归模型的具体实现细节\n",
    "class LinearRegression:\n",
    "    #初始化线性模型\n",
    "    def __init__(self):\n",
    "        self.w = None #系数\n",
    "        self.b = None #常数项\n",
    "    #利用线性回归模型对输入样本的特征向量x进行预测，返回模型的预测值y\n",
    "    def predict(self, x):\n",
    "        \"\"\" \n",
    "        输入参数 x: 待预测样本\n",
    "        输出结果 y: 待预测样本x的预测输出\n",
    "        \"\"\"\n",
    "        #此处添加一行代码，根据式(2.1)计算线性回归模型对样本x的预测值y\n",
    "        return y\n",
    "    #定义线性回归模型的损失函数\n",
    "    def calculate_loss(self, x_train,y_train):\n",
    "        \"\"\"\n",
    "        输入参数 x_train: 训练集的特征空间\n",
    "        输入参数 y_train: 训练集的真实输出\n",
    "        输出结果 loss: 线性回归模型在训练集上的预测损失\n",
    "        \"\"\"\n",
    "        inst_num = x_train.shape[0]#训练样本的个数\n",
    "        #线性回归模型对训练集样本的预测输出\n",
    "        #此处添加一行代码，根据式(2.1)计算线性回归模型对训练集x_train的预测值y_pred \n",
    "        #计算均方误差\n",
    "        #此处添加一行代码，计算线性回归模型对训练集x_train的预测值y_pred与真实输出值之间的均方误差\n",
    "        return loss\n",
    "    #计算线性回归模型损失函数的梯度\n",
    "    def calculate_grad(self, x_train, y_train):\n",
    "        \"\"\"        \n",
    "        输入参数 x_train：训练集样本的特征输入\n",
    "        输入参数 y_train：训练集样本的真实输出\n",
    "        输出结果 dw,db: 线性回归模型参数的梯度\n",
    "        \"\"\"\n",
    "        inst_num = x_train.shape[0] #训练样本的个数\n",
    "        #线性回归模型对训练集样本的预测输出\n",
    "        y_pred = x_train.dot(self.w) + self.b \n",
    "        #计算w的梯度\n",
    "        #此处添加一行代码，根据式(2.3)计算w各个分量的梯度dw\n",
    "        #计算b的梯度\n",
    "        #此处添加一行代码，根据式(2.4)计算b的梯度db\n",
    "        return dw,db\n",
    "    #采用梯度下降法学习线性回归模型\n",
    "    def fit(self, x_train, y_train, learn_rate, max_iter):         \n",
    "        \"\"\"    \n",
    "        输入参数 x_train：训练集样本的特征输入\n",
    "        输入参数 y_train：训练集样本的真实输出\n",
    "        输入参数 learn_rate: 梯度下降算法的学习速率\n",
    "        输入参数 max_iter: 梯度下降算法的最大迭代次数    \n",
    "        输出结果 loss: 线性回归模型在训练过程中的损失函数值\n",
    "        \"\"\"\n",
    "        feature_num = x_train.shape[1] #特征属性个数\n",
    "        self.w = np.zeros((feature_num,1))#初始化线性回归模型的系数\n",
    "        self.b = 0 #初始化线性回归模型的常数项\n",
    "        loss_list = []#存放训练过程中每次迭代的损失函数值\n",
    "        #迭代训练过程\n",
    "        for i in range(max_iter):\n",
    "            loss = self.calculate_loss(x_train,y_train)\n",
    "            #计算参数的梯度\n",
    "            dw,db = self.calculate_grad(x_train,y_train) \n",
    "            #更新模型参数\n",
    "            #此处添加一行代码，根据式(2.5)更新系数\n",
    "            #此处添加一行代码，根据式(2.6)更新常数项\n",
    "            #添加损失列表\n",
    "            loss_list.append(loss)\n",
    "        return loss_list\n",
    "    #训练过程可视化，绘制模型损失关于迭代次数的曲线\n",
    "    def training_visualize(self,loss_list):\n",
    "        import matplotlib.pyplot as plt\n",
    "        plt.plot(loss_list,color='red')\n",
    "        plt.xlabel(\"iterations\")\n",
    "        plt.ylabel(\"loss\")\n",
    "        plt.show()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.4 逻辑回归模型的基本原理}$\n",
    "\n",
    "对于分类问题，其输出值是离散类型的，而线性模型通过属性的线性组合函数，将输入特征映射到整个实数空间。为了将线性模型应用于解决分类问题，需要采用某种函数将线性模型的实数输出值映射到离散的类别标记上，同时还要求采用的映射函数是可微的，以便于模型的学习。\n",
    "\n",
    "逻辑回归模型采用如式(2.7)所示的逻辑函数将线性模型的连续型输出值与类别标记对应起来，逻辑函数的函数图像如下图所示。\n",
    "\n",
    "$z(x)=\\frac{1}{1+e^{-x}}$   式(2.7)\n",
    "\n",
    "<img src=picture/图2.3.jpg>\n",
    "\n",
    "对于二分类问题，逻辑回归模型通过逻辑函数将线性模型的连续型输出值压缩到开区间(0,1)之间，因此压缩之后的输出值可以看作是数据样本被分类为正类的概率，然后通过对概率值设置阈值，对数据样本进行类别标签的预测。逻辑回归模型解决二分类问题的过程如下图所示。\n",
    "<img src=picture/图2.4.jpg>\n",
    "\n",
    "对于二分类问题，给定数据集$D=\\{(\\textbf{x}_i,y_i)\\}_{i=1}^{m}$,其中$\\textbf{x}_i=[x_{i1},x_{i2},...,x_{id}]$是第$i$个样本的特征向量，$y_i \\in \\{0,1\\}$ 是其对应的类别，1表示正类，0表示负类。\n",
    "\n",
    "$\\textbf{逻辑回归模型的基本形式：}$\n",
    "\n",
    "$f(\\textbf{x}_{i})=w_{1}x_{i1}+...+w_dx_{id}+b, y=\\frac{1}{1+e^{-f(\\textbf{x}_{i})}}$ .   式(2.8)\n",
    "\n",
    "$\\textbf{逻辑回归模型的参数}：$\n",
    "\n",
    "要确定逻辑回归模型，需要从训练数据集中学习得到系数$\\textbf{w}=[w_1,w_2,...,w_d]$的值以及常数项$b$的值。\n",
    "\n",
    "$\\textbf{逻辑回归模型的理解：}$\n",
    "\n",
    "逻辑回归模型中，模型的输出$y$可以表示为样本$\\textbf{x}$被分为正类的概率。\n",
    "\n",
    "$p(y=1|\\textbf{x}) = \\frac{e^{f(\\textbf{x})}}{1+e^{f(\\textbf{x})}},p(y=0|\\textbf{x})=1-p(y=1|\\textbf{x}) = \\frac{1}{1+e^{f(\\textbf{x})}}$\n",
    "\n",
    "$\\textbf{逻辑回归模型的学习：}$\n",
    "\n",
    "逻辑回归模型通过“极大似然法”来估计线性模型的参数。给定训练集$D=\\{(\\textbf{x}_i,y_i)\\}_{i=1}^{m}$，逻辑回归模型最大化如式(2.9)所示的“对数似然函数”来求解模型的参数。\n",
    "\n",
    "$\\underset{\\textbf{w},b}{\\max} J(\\textbf{w},b)= \\sum_{i=1}^{m}ln(p(y_i|\\textbf{w},b,\\textbf{x}_i)), p(y_i|\\textbf{w},b,\\textbf{x}_i)=y_{i}p(y_i=1|\\textbf{x}_i)+(1-y_i)p(y_i=0|\\textbf{x}_i)$ 式(2.9)\n",
    "\n",
    "式(2.9)的求解可以转化为最小化如式(2.10)所示的函数。\n",
    "\n",
    "$\\underset{\\textbf{w},b}{\\min} \\sum_{i=1}^{m} -y_i(\\textbf{w}^{T}\\textbf{x}_i+b)+ln(1+e^{\\textbf{w}^{T}\\textbf{x}_i+b})$. 式(2.10)\n",
    "\n",
    "其中，$\\textbf{w}$是逻辑回归模型系数组成的系数向量，$\\textbf{w}^{T}$是向量$\\textbf{w}$的转置。可以看出，式(2.10)表示的是一个关于系数$\\textbf{w}$和常数项$b$的高阶可导连续函数，可以通过梯度下降法求得最优解。\n",
    "\n",
    "在使用梯度下降法训练逻辑回归模型的过程中，关键的操作在于求系数和常数项关于如式(2.10)所示的目标函数的梯度。为方便理解，式(2.10)所示的目标函数对系数$\\textbf{w}=[w_1,w_2,...,w_d]$和常数项$b$的偏导数计算如下所示：\n",
    "<img src=picture/图2.5.jpg>"
   ]
  },
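  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面的示例代码对上述内容做两点示意性验证(数据为假设的小样本)：一是逻辑函数把任意实数压缩到(0,1)区间；二是由式(2.10)求导得到的解析梯度与数值差分的结果一致。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0/(1.0 + np.exp(-z))\n",
    "\n",
    "#逻辑函数把任意实数压缩到(0,1)区间\n",
    "print(sigmoid(np.array([-5.0, 0.0, 5.0]))) #分别接近0、等于0.5、接近1\n",
    "\n",
    "#假设的小规模数据(最后一列为常数1，已把常数项并入w)\n",
    "X = np.array([[1.0, 2.0, 1.0],\n",
    "              [2.0, 1.0, 1.0],\n",
    "              [-1.0, -1.5, 1.0]])\n",
    "y = np.array([1.0, 1.0, 0.0])\n",
    "w = np.array([0.3, -0.2, 0.1])\n",
    "\n",
    "#式(2.10)的目标函数\n",
    "def loss(w):\n",
    "    z = X.dot(w)\n",
    "    return np.sum(-y*z + np.log(1.0 + np.exp(z)))\n",
    "\n",
    "#对式(2.10)求导得到的解析梯度\n",
    "grad = X.T.dot(sigmoid(X.dot(w)) - y)\n",
    "\n",
    "#用中心差分做数值验证\n",
    "eps = 1e-6\n",
    "num_grad = np.zeros(3)\n",
    "for j in range(3):\n",
    "    e = np.zeros(3)\n",
    "    e[j] = eps\n",
    "    num_grad[j] = (loss(w + e) - loss(w - e))/(2*eps)\n",
    "\n",
    "print(np.max(np.abs(grad - num_grad))) #接近0"
   ]
  },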
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.5 逻辑回归模型的实现}$\n",
    "\n",
    "本节介绍如何使用Python编程语言和梯度下降算法训练逻辑回归模型，实现代码如下，其中有部分重要的代码需要读者按照提示补齐。本节通过实现LogisticRegression类来封装逻辑回归模型的具体实现细节。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "jupyter": {
     "source_hidden": true
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "class LogisticRegression:\n",
    "    #定义逻辑函数\n",
    "    def Sigmoid(self,z):\n",
    "        \"\"\"\n",
    "        输入参数 z: 逻辑函数的输入值\n",
    "        \"\"\"\n",
    "        return 1.0 / (1.0 + np.exp(-z))\n",
    "    #定义逻辑回归模型预测函数\n",
    "    def predict_prob(self,x):\n",
    "        \"\"\"\n",
    "        输入参数 x: 待预测数据样本\n",
    "        输出参数 y: 数据样本被预测为正类的概率\n",
    "        \"\"\"\n",
    "        y = self.Sigmoid(np.dot(x,self.w))\n",
    "        return y\n",
    "    #预测样本类别(注意：与训练时一致，输入x的最后一列应为常数1)\n",
    "    def predict(self,x):\n",
    "        inst_num = x.shape[0]\n",
    "        y = self.predict_prob(x) #计算数据样本被预测为正类的概率\n",
    "        labels = np.zeros(inst_num)\n",
    "        for i in range(inst_num):\n",
    "            if y[i] >=0.5: #如果概率值大于阈值0.5，则预测为正类\n",
    "                labels[i] = 1\n",
    "        return y,labels #返回预测概率和预测标签\n",
    "    #定义逻辑回归模型损失函数\n",
    "    def calculate_loss(self,x_train,y_train):\n",
    "        inst_num = x_train.shape[0]\n",
    "        loss = 0.0\n",
    "        for i in range(inst_num):\n",
    "            z = np.dot(x_train[i,:],self.w)\n",
    "            loss += -y_train[i]*z + np.log(1+np.exp(z))\n",
    "        loss = loss / inst_num\n",
    "        return loss\n",
    "    #计算系数w和常数b的梯度\n",
    "    def calculate_grad(self,x_train,y_train):\n",
    "        m, n = np.shape(x_train) #训练样本个数以及特征维数\n",
    "        grad = np.zeros(n)#初始化的梯度       \n",
    "        probs = self.predict_prob(x_train)\n",
    "        term = np.zeros((m,n))        \n",
    "        #计算系数w和常数项b的梯度\n",
    "        #此处添加代码，计算系数w和常数项b的梯度，注意在本例中通过在训练数据的最后加一列全1，把常数项b吸收到系数w中。       \n",
    "        return grad\n",
    "    #采用梯度下降法学习逻辑回归模型\n",
    "    def fit(self, x_train, y_train, learn_rate, max_iter):\n",
    "        \"\"\"\n",
    "        输入参数 x_train：训练集样本的特征输入\n",
    "        输入参数 y_train：训练集样本的真实输出\n",
    "        输入参数 learn_rate: 梯度下降算法的学习速率\n",
    "        输入参数 max_iter: 梯度下降算法的最大迭代次数    \n",
    "        输出参数 loss: 逻辑回归模型在训练过程中的损失函数值，用于可视化训练过程\n",
    "        \"\"\"\n",
    "        m,n = np.shape(x_train) #训练样本个数，特征属性个数 \n",
    "        X = np.c_[x_train,[1 for x in range(m)]]\n",
    "        self.w = np.zeros(n+1)#初始化逻辑回归模型的参数\n",
    "        grad = np.zeros(n+1) \n",
    "        loss_list = []#存放训练过程中每次迭代的损失函数值\n",
    "        for i in range(max_iter):\n",
    "            loss = self.calculate_loss(X,y_train)\n",
    "            grad = self.calculate_grad(X,y_train) #计算参数的梯度\n",
    "            #更新模型参数\n",
    "            self.w += -learn_rate*grad            \n",
    "            #添加损失列表\n",
    "            loss_list.append(loss)\n",
    "            if i%100 == 0:\n",
    "                print(\"当前损失函数值:%.3f\"%loss)\n",
    "        return loss_list\n",
    "    #训练过程可视化\n",
    "    def training_visualize(self,loss_list):\n",
    "        import matplotlib.pyplot as plt\n",
    "        plt.plot(loss_list,color='red')\n",
    "        plt.xlabel(\"iterations\")\n",
    "        plt.ylabel(\"loss\")\n",
    "        plt.show()\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.6 岭回归模型的基本原理}$\n",
    "\n",
    "在线性回归模型中，模型的参数(即系数和常数项)可以通过最小化如式(2.2)所示的目标函数求得。在训练样本特征向量组成的输入矩阵后面增加一列常数1，形成新的输入矩阵$\\textbf{X}$；训练样本的输出值组成列向量$\\textbf{y}$；线性回归模型的参数组成列向量$\\textbf{W}$。式(2.2)所示的目标函数的向量形式可以表示为如式(2.11)所示。\n",
    "<img src=picture/图2.6.jpg>\n",
    "\n",
    "$\\textbf{W}^{*}=\\underset{\\textbf{W}}{\\arg\\min}\\, J(\\textbf{X},\\textbf{W})=\\underset{\\textbf{W}}{\\arg\\min}\\, (\\textbf{y}-\\textbf{X}\\textbf{W})^{T}(\\textbf{y}-\\textbf{X}\\textbf{W})$. 式(2.11)\n",
    "\n",
    "在式(2.11)所示的目标函数$J(\\textbf{X,W})$上对$\\textbf{W}$求导数，并令导数等于零，得到$\\textbf{W}$的最优解$\\textbf{W}^{*}$如式(2.12)所示。\n",
    "\n",
    "$\\frac{\\partial{J(\\textbf{X,W})}}{\\partial{\\textbf{W}}}=-2\\textbf{X}^{T}(\\textbf{y}-\\textbf{X}\\textbf{W})$\n",
    "\n",
    "令上述偏导数为零，可得\n",
    "\n",
    "$\\textbf{W}^{*}=(\\textbf{X}^{T}\\textbf{X})^{-1}\\textbf{X}^{T}\\textbf{y}$. 式(2.12)\n",
    "\n",
    "根据线性代数的相关知识可知，当$\\textbf{X}^{T}\\textbf{X}$为满秩矩阵时，$\\textbf{W}$有唯一解；当$\\textbf{X}^{T}\\textbf{X}$不是满秩矩阵时(例如特征变量存在多重共线性)，$\\textbf{W}$的解不唯一，选择哪个解作为$\\textbf{W}$的输出，取决于学习算法的归纳偏好。当$\\textbf{X}^{T}\\textbf{X}$的行列式趋近于0时，得出的模型参数趋近于无穷大，此时得到的模型参数是无意义的。解决这一问题的常见方法就是引入正则项。\n",
    "\n",
    "岭回归模型(Ridge Regression)在线性回归模型的基础上，引入$L_2$正则项，通过牺牲模型在训练集上的预测精度，来换取模型参数求解的稳定性，在存在共线性问题和病态数据偏多的研究中有较大的实用价值。\n",
    "\n",
    "$\\textbf{岭回归模型的形式化描述如下所示：}$\n",
    "\n",
    "$\\textbf{岭回归的预测函数：}$\n",
    "\n",
    "$f(\\textbf{x}_i)=w_{1}x_{i1}+w_{2}x_{i2}+...+w_{d}x_{id}+w_{d+1}$\n",
    "\n",
    "$\\textbf{岭回归模型的目标函数：}$\n",
    "\n",
    "$\\underset{\\textbf{W}}{\\min}\\, J(\\textbf{X},\\textbf{W})=\\frac{1}{2m}\\sum_{i=1}^{m}(y_i-f(\\textbf{x}_i))^2+\\lambda \\Vert \\textbf{W}\\Vert_{2}^{2}$. 式(2.13)\n",
    "\n",
    "其中，$\\Vert \\textbf{W}\\Vert_{2}^{2}$是模型参数的$L_2$正则项，$\\lambda$是一个大于等于零的常数，用于在模型精度和模型复杂度之间进行平衡，控制模型的复杂度。\n",
    "\n",
    "$\\textbf{岭回归模型的求解：}$\n",
    "\n",
    "式(2.13)所示的目标函数对于参数$\\textbf{W}$是可导的，因此可以使用梯度下降算法进行求解。"
   ]
  },
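  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面的示例代码按式(2.12)用正规方程直接求解线性回归参数，数据为人为构造(真实参数为$w=2$，$b=1$)。实际应用中通常用np.linalg.solve或np.linalg.lstsq代替显式求逆，数值上更稳定；当$\\textbf{X}^{T}\\textbf{X}$接近奇异时，也正是引入正则项(岭回归)的动机。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "#人为构造的数据：y = 2x + 1\n",
    "x = np.array([[1.0], [2.0], [3.0], [4.0]])\n",
    "y = np.array([3.0, 5.0, 7.0, 9.0])\n",
    "\n",
    "#在输入矩阵后面增加一列常数1，把常数项并入参数向量W\n",
    "X = np.c_[x, np.ones(x.shape[0])]\n",
    "\n",
    "#式(2.12)：W* = (X^T X)^{-1} X^T y\n",
    "W = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)\n",
    "print(W) #约为[2. 1.]"
   ]
  },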
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.7 岭回归模型的实现}$\n",
    "\n",
    "本节介绍如何使用Python编程语言和梯度下降算法训练岭回归模型，实现代码如下，其中有部分重要的代码需要读者按照提示补齐。本节通过实现RidgeRegression类来封装岭回归模型的具体实现细节。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "jupyter": {
     "source_hidden": true
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "class RidgeRegression:\n",
    "    #定义岭回归模型的预测函数\n",
    "    def predict(self, x):\n",
    "        #在预测数据中增加一列1,用于将常数项b统一表示为向量w\n",
    "        x_new = np.c_[x, [1 for x in range(x.shape[0])]]\n",
    "        return x_new.dot(self.w.T)\n",
    "    #定义梯度函数，用于在梯度下降法求解过程中计算模型参数的梯度\n",
    "    def calculate_grad(self, x_train, y_train,Namda):\n",
    "        #根据式(2.13)对w求导数得到梯度\n",
    "        #添加代码，计算式(2.13)对w的导数grad\n",
    "        return grad\n",
    "    #计算代价函数\n",
    "    def calculate_loss(self,x_train, y_train):\n",
    "        return 1/(2*self.m)*((x_train.dot(self.w.reshape(-1,1))-\n",
    "                             y_train.reshape(-1,1))**2).sum()\n",
    "    #采用梯度下降法学习岭回归模型\n",
    "    def fit(self, x_train, y_train, learn_rate, max_iter, Namda):\n",
    "        self.m = x_train.shape[0] #训练样本个数\n",
    "        self.w = np.zeros(x_train.shape[1]+1)#初始化模型参数为零\n",
    "        x_train_new = np.c_[x_train,[1 for x in range(self.m)]] #增加一列1\n",
    "        loss_list = []\n",
    "        for i in range(max_iter):\n",
    "            loss = self.calculate_loss(x_train_new, y_train)\n",
    "            loss_list.append(loss)\n",
    "            grad = self.calculate_grad(x_train_new, y_train,Namda)\n",
    "            self.w = self.w - grad*learn_rate\n",
    "        return loss_list\n",
    "    #训练过程可视化\n",
    "    def training_visualize(self,loss_list):\n",
    "        import matplotlib.pyplot as plt\n",
    "        plt.plot(loss_list,color='red')\n",
    "        plt.xlabel(\"iterations\")\n",
    "        plt.ylabel(\"loss\")\n",
    "        plt.show()     \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.8 Lasso回归模型的基本原理}$\n",
    "\n",
    "在机器学习问题中，通常能够得到数量非常多的特征，特征空间的维度高，但是有些特征对问题的求解并没有作用，甚至可能会对问题求解造成负面影响。因此，需要对特征进行选择，只选择部分对问题求解有益的特征，构建机器学习模型。在线性模型中，模型参数的绝对值大小表征了对应特征对学习问题的重要性，如果能使某些不重要的特征对应的参数值为零，使得不重要的特征与最终的模型无关，则不仅可以降低模型的复杂性，还可以降低模型过拟合的风险，提高模型的性能。岭回归模型可以使得模型的部分参数接近于零，但很难使这些参数恰好为零，因此无法剔除变量、进行特征选择；而Lasso回归模型将惩罚项由$L_2$范数变为$L_1$范数，可以将一些不重要的回归系数缩减为零，达到剔除变量、筛选重要特征的目的。\n",
    "\n",
    "$\\textbf{Lasso回归模型的形式化描述如下：}$\n",
    "\n",
    "$\\textbf{Lasso回归模型的预测函数：}$\n",
    "\n",
    "$f(\\textbf{x}_i)=w_{1}x_{i1}+w_{2}x_{i2}+...+w_{d}x_{id}+w_{d+1}$\n",
    "\n",
    "$\\textbf{Lasso回归模型的目标函数：}$\n",
    "\n",
    "$\\underset{\\textbf{W}}{\\min}\\, J(\\textbf{X},\\textbf{W})=\\frac{1}{2m}\\sum_{i=1}^{m}(y_i-f(\\textbf{x}_i))^2+\\lambda \\Vert \\textbf{W}\\Vert_{1}$. 式(2.14)\n",
    "\n",
    "其中$\\Vert \\textbf{W}\\Vert_{1}$是模型参数的$L_1$正则项，$\\lambda$是一个大于等于零的常数，用于在模型精度和模型复杂度之间进行平衡，控制模型的复杂度。\n",
    "\n",
    "$\\textbf{Lasso回归模型的求解：}$\n",
    "\n",
    "由于$L_1$范数不可导，不能直接使用梯度下降法，但是可以使用坐标下降法求解。在如式(2.14)所示的目标函数中，我们需要求得$d+1$个参数$w_1,w_2,...,w_{d+1}$的最优值。采用坐标下降法求解时，我们保持其他参数$w_k,k\\neq j$不变，即把参数$w_k,k\\neq j$视为常数，分别求式(2.14)中的平方误差部分$(y_i-f(\\textbf{x}_i))^2$和正则化部分$\\Vert \\textbf{W}\\Vert_{1}$关于参数$w_j$的偏导数，并令两部分的偏导数之和为零，即可得出参数$w_j$的解；以此类推，可以求得其他参数的解。保持其他参数$w_k,k\\neq j$不变时，式(2.14)所示目标函数中平方误差和正则化项关于参数$w_j$的偏导数可以表示如下：\n",
    "\n",
    "平方误差关于参数$w_j$的偏导数\n",
    "\n",
    "$\\frac{\\partial{\\sum_{i=1}^{m} \\ (y_i-f(\\textbf{x}_i))^2}}{\\partial{w_j}}= \\frac{\\partial{\\sum_{i=1}^{m} \\ (y_i-\\sum_{k=1}^{d+1}w_{k}x_{ik} \\ )^2}}{\\partial{w_j}}=-2\\sum_{i=1}^{m}x_{ij}(y_i-\\sum_{k \\neq j}w_{k}x_{ik})+2w_{j}\\sum_{i=1}^{m}x_{ij}^{2}$\n",
    "\n",
    "正则化项关于参数$w_j$的偏导数\n",
    "\\begin{equation}\n",
    "\\frac{\\partial{\\lambda \\Vert \\textbf{W}\\Vert_{1}}}{\\partial{w_j}}=\\begin{cases}\\lambda, & if \\ w_j>0 \\cr [-\\lambda, +\\lambda], &if \\ w_j = 0 \\cr -\\lambda & if \\ w_j<0\\end{cases}\n",
    "\\end{equation}\n",
    "\n",
    "令$\\frac{\\partial{\\sum_{i=1}^{m} \\ (y_i-f(\\textbf{x}_i))^2}}{\\partial{w_j}}+\\frac{\\partial{\\lambda \\Vert \\textbf{W}\\Vert_{1}}}{\\partial{w_j}}=0$，可得\n",
    "\n",
    "\n",
    "\\begin{equation}\n",
    "w_{j}=\\begin{cases}\n",
    "\\frac{\\sum_{i=1}^{m}\\ x_{ij} \\ (y_i-\\sum_{k \\neq j} \\ w_{k} \\ x_{ik})+\\frac{\\lambda}{2}}{\\sum_{i=1}^{m}x_{ij}^{2}}, & if \\ \\sum_{i=1}^{m}\\ x_{ij} \\ (y_i-\\sum_{k \\neq j} \\ w_{k} \\ x_{ik})\\ < \\ - \\frac{\\lambda}{2}\n",
    "\\cr 0,& if  \\ \\sum_{i=1}^{m}\\ x_{ij} \\ (y_i-\\sum_{k \\neq j} \\ w_{k} \\ x_{ik})\\ \\in \\ [- \\frac{\\lambda}{2},+ \\frac{\\lambda}{2}] \n",
    "\\cr \\frac{\\sum_{i=1}^{m}\\ x_{ij} \\ (y_i-\\sum_{k \\neq j} \\ w_{k} \\ x_{ik})-\\frac{\\lambda}{2}}{\\sum_{i=1}^{m}x_{ij}^{2}}, & if \\ \\sum_{i=1}^{m}\\ x_{ij} \\ (y_i-\\sum_{k \\neq j} \\ w_{k} \\ x_{ik})\\ > \\  \\frac{\\lambda}{2}\n",
    "\\end{cases}\n",
    "\\end{equation}"
   ]
  },
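  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面关于$w_j$的分段解即所谓的软阈值(soft-thresholding)操作。下面的示例代码演示该更新规则，其中rho对应上式中的$\\sum_{i=1}^{m}x_{ij}(y_i-\\sum_{k \\neq j}w_{k}x_{ik})$，z对应$\\sum_{i=1}^{m}x_{ij}^{2}$，取值均为假设：可以看到，相关项较小的系数被直接压缩为零。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#软阈值更新规则：按上式由rho、z和正则化系数lam计算w_j(取值均为假设)\n",
    "def soft_threshold_update(rho, z, lam):\n",
    "    if rho < -lam/2:\n",
    "        return (rho + lam/2)/z\n",
    "    elif rho > lam/2:\n",
    "        return (rho - lam/2)/z\n",
    "    else:\n",
    "        return 0.0 #相关项较小的系数被压缩为零，实现特征选择\n",
    "\n",
    "print(soft_threshold_update(-3.0, 2.0, 2.0)) #(-3+1)/2 = -1.0\n",
    "print(soft_threshold_update(0.6, 2.0, 2.0))  #|0.6|<=1，结果为0.0\n",
    "print(soft_threshold_update(3.0, 2.0, 2.0))  #(3-1)/2 = 1.0"
   ]
  },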
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.9 Lasso回归模型的实现}$\n",
    "\n",
    "本节介绍如何使用Python编程语言和坐标下降法训练Lasso回归模型，实现代码如下。本节通过实现LassoRegression类来封装Lasso回归模型的具体实现细节。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "class LassoRegression:\n",
    "    #定义Lasso回归模型的预测函数     \n",
    "    def predict(self, x_test):\n",
    "        x_test = np.c_[x_test, [1 for x in range(x_test.shape[0])]]\n",
    "        return x_test.dot(self.w.T)\n",
    "    #计算Lasso回归模型的目标损失函数   \n",
    "    def calculate_loss(self,x_train,y_train,Namda):\n",
    "        return (1/(2*self.m)*((x_train.dot(self.w.reshape(-1,1))-\n",
    "                y_train.reshape(-1,1))**2).sum()\n",
    "                + Namda*(np.abs(self.w)).sum())\n",
    "    #坐标下降法求解Lasso回归模型\n",
    "    def coordinate(self,x_train,y_train,Namda):\n",
    "        m = x_train.shape[1] #参数个数(特征数+1)\n",
    "        for i in range(m):\n",
    "            a = x_train[:,i].dot(x_train[:,i]) #平方和项\n",
    "            dw = np.matrix(np.zeros((1,m)))\n",
    "            dw[0,i]=self.w[i]\n",
    "            b = -2*(x_train.T*(y_train.reshape(-1,1)-\n",
    "                              x_train.dot((self.w-dw).T)))[i,0]\n",
    "            if b < -Namda:\n",
    "                self.w[i]=(-Namda-b)/a/2\n",
    "            elif b > Namda:\n",
    "                self.w[i] = (Namda-b)/a/2\n",
    "            else:\n",
    "                self.w[i] = 0\n",
    "    #训练Lasso回归模型\n",
    "    def fit(self, x_train, y_train,max_iter,Namda):\n",
    "        self.w = np.array([0. for x in range(x_train.shape[1]+1)]) \n",
    "        m = x_train.shape[0]\n",
    "        x_train_new = np.c_[x_train,[1 for x in range(m)]]\n",
    "        for i in range(max_iter):\n",
    "            self.coordinate(x_train_new, y_train, Namda)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{2.10 实践任务}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "本章介绍了线性模型的基本形式，同时也介绍了线性回归模型、逻辑回归模型、岭回归模型、Lasso回归模型的原理、实现方法以及应用。本章涉及的梯度下降算法求极值、坐标下降法求解不可导凸函数的优化问题、正则化等在机器学习领域的其他学习算法中同样有着非常广泛的应用。\n",
    "\n",
    "$\\textbf{实践任务1：把2.3节中的LinearRegression类中缺失的代码补充完整，然后应用LinearRegression类实现的线性回归模型完成客户价值预测模型的搭建和测试。}$\n",
    "\n",
    "实验步骤如下："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(1) 读取客户价值数据，并将数据分成训练集和测试集，训练集占70%的样本，剩下30%的样本作为测试集。客户价值数据集存放在datasets文件夹中。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# 读取数据集\n",
    "f = open('datasets/客户价值.csv')\n",
    "data = pd.read_csv(f)\n",
    "\n",
    "# 划分训练集与测试集\n",
    "from sklearn.model_selection import train_test_split\n",
    "X_train, X_test, Y_train, Y_test = train_test_split(data.iloc[:, 1:], data.iloc[:, 0], test_size=0.3, shuffle=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>历史贷款金额</th>\n",
       "      <th>贷款次数</th>\n",
       "      <th>学历</th>\n",
       "      <th>月收入</th>\n",
       "      <th>性别</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>38</th>\n",
       "      <td>8317</td>\n",
       "      <td>3</td>\n",
       "      <td>2</td>\n",
       "      <td>9217</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>42</th>\n",
       "      <td>6253</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>10567</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>13</th>\n",
       "      <td>7528</td>\n",
       "      <td>3</td>\n",
       "      <td>3</td>\n",
       "      <td>11367</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>112</th>\n",
       "      <td>5062</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>10317</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>50</th>\n",
       "      <td>4376</td>\n",
       "      <td>3</td>\n",
       "      <td>2</td>\n",
       "      <td>11117</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "     历史贷款金额  贷款次数  学历    月收入  性别\n",
       "38     8317     3   2   9217   1\n",
       "42     6253     2   2  10567   0\n",
       "13     7528     3   3  11367   1\n",
       "112    5062     2   2  10317   0\n",
       "50     4376     3   2  11117   1"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "X_train.head()#打印训练集的前5个样本"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(2) 使用LinearRegression类，在训练集上训练线性回归模型，构建客户价值预测模型，并对模型的预测性能进行评估。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "#添加代码，在训练集上构建客户价值预测模型"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(3) 在相同的数据集上使用sklearn库构建线性回归模型，并与本章实现的线性回归模型比较二者在模型参数、预测性能上的差异。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "#添加代码，在测试集上对比sklearn库中的线性回归模型与本章实现的线性回归模型在模型参数，预测准确性(以均方根误差为评价指标)上的差异"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{实践任务2：把2.5节中的LogisticRegression类中缺失的代码补充完成，然后应用LogisticRegression类实现的逻辑回归模型完成股票客户流失预测模型的搭建和测试。}$\n",
    "\n",
    "实验步骤如下："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(1)读取股票客户流失数据，并将数据分成训练集和测试集，训练集占70%的样本，剩下30%的样本作为测试集。股票客户流失数据集存放在datasets文件夹中。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>账户资金（元）</th>\n",
       "      <th>最后一次交易距今时间（天）</th>\n",
       "      <th>上月交易佣金（元）</th>\n",
       "      <th>累计交易佣金（元）</th>\n",
       "      <th>本券商使用时长（年）</th>\n",
       "      <th>是否流失</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>22686.5</td>\n",
       "      <td>297</td>\n",
       "      <td>149.25</td>\n",
       "      <td>2029.85</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>190055.0</td>\n",
       "      <td>42</td>\n",
       "      <td>284.75</td>\n",
       "      <td>3889.50</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>29733.5</td>\n",
       "      <td>233</td>\n",
       "      <td>269.25</td>\n",
       "      <td>2108.15</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>185667.5</td>\n",
       "      <td>44</td>\n",
       "      <td>211.50</td>\n",
       "      <td>3840.75</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>33648.5</td>\n",
       "      <td>213</td>\n",
       "      <td>353.50</td>\n",
       "      <td>2151.65</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    账户资金（元）  最后一次交易距今时间（天）  上月交易佣金（元）  累计交易佣金（元）  本券商使用时长（年）  是否流失\n",
       "0   22686.5            297     149.25    2029.85           0     0\n",
       "1  190055.0             42     284.75    3889.50           2     0\n",
       "2   29733.5            233     269.25    2108.15           0     1\n",
       "3  185667.5             44     211.50    3840.75           3     0\n",
       "4   33648.5            213     353.50    2151.65           0     1"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "f = open('datasets/股票客户流失.csv')\n",
    "data = pd.read_csv(f)\n",
    "data.head()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(2)采用本章实现的LogisticRegression类，构建股票客户流失预测模型，测试模型的预测性能。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "#添加代码"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(3)使用sklearn库中的逻辑回归模型，构建股票客户流失预测模型，测试模型的预测性能。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#添加代码"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
