{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<br>\n",
    "<center><font face=\"黑体\" size=4>Lab Manual for the Course <i>Fundamentals of Machine Learning Practice</i></font></center>\n",
    "<br>\n",
    "<center><font face=\"黑体\" size=4>Chapter 3  Neural Network Models</font></center>\n",
    "\n",
    "$\\textbf{1. Lab Objectives}$\n",
    "\n",
    "Understand the basic structure of neural network models, how they are trained, and how they are implemented, and demonstrate the application of neural network models to image classification through a handwritten digit recognition case study.\n",
    "\n",
    "$\\textbf{2. Lab Content}$\n",
    "\n",
    "$\\textbf{3.1 The Basic Structure of Neural Networks}$\n",
    "\n",
    "Neural networks have grown into a field that crosses many disciplines, and many definitions of them exist. A widely accepted one states: \"A neural network is a broadly interconnected network of simple adaptive units whose organization can simulate the interactive responses of a biological nervous system to real-world objects.\"\n",
    "\n",
    "The most basic component of a neural network is the neuron model. A neuron receives input signals from $n$ other neurons; these signals are passed along weighted connections. The total input received by the neuron is compared with the neuron's threshold, and the result is fed through an activation function to produce the neuron's output. The neuron model is illustrated in the figure below.\n",
    "\n",
    "<img src=picture/4.1.png>\n",
    "\n",
    "$\\textbf{Activation Functions}$\n",
    "\n",
    "In the neuron model, the activation function maps the weighted sum of the inputs to the neuron's output. Most real-world data distributions are nonlinear, whereas the basic computation of a neural network (a weighted sum) is linear; introducing an activation function injects nonlinearity into the network and strengthens its ability to learn. Nonlinearity is therefore the defining property of an activation function, which plays a crucial role in the neuron model. Commonly used activation functions include the following.\n",
    "\n",
    "$\\textbf{The sigmoid function}$\n",
    "\n",
    "The sigmoid function maps any real-valued input to an output between 0 and 1. Its expression is given in Eq. (3.1) and its derivative in Eq. (3.2); the graphs of the function and its derivative are shown below.\n",
    "\n",
    "$f(x) = \\frac{1}{1+e^{-x}}$  (3.1)\n",
    "\n",
    "$\\frac{\\mathrm{d}f(x)}{\\mathrm{d}x}= (1-f(x))f(x)$  (3.2)\n",
    "\n",
    "<img src=picture/4.2.png>\n",
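    "\n",
    "As a quick numerical check (an illustrative sketch added for this manual, not part of the derivation), the identity in Eq. (3.2) can be compared against a central finite-difference approximation of the derivative:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1.0 / (1.0 + np.exp(-x))\n",
    "\n",
    "x = np.linspace(-5.0, 5.0, 101)\n",
    "analytic = sigmoid(x) * (1.0 - sigmoid(x))  # Eq. (3.2)\n",
    "h = 1e-6\n",
    "numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)\n",
    "print(np.max(np.abs(analytic - numeric)))  # very close to 0\n",
    "```\n",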
    "\n",
    "$\\textbf{The tanh function}$\n",
    "\n",
    "Also known as the hyperbolic tangent, this function is similar to the sigmoid except that its output range is $(-1,1)$ instead of $(0,1)$. Its expression is given in Eq. (3.3) and its derivative in Eq. (3.4); the graphs of the function and its derivative are shown below.\n",
    "\n",
    "$f(x) = \\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$  (3.3)\n",
    "\n",
    "$\\frac{\\mathrm{d}f(x)}{\\mathrm{d}x}=1-(f(x))^2$  (3.4)\n",
    "\n",
    "<img src=picture/4.3new.png>\n",
    "\n",
    "$\\textbf{The ReLU function}$\n",
    "\n",
    "ReLU is essentially a maximum-taking function. It is piecewise linear: all negative values are set to 0 while positive values pass through unchanged. This operation is known as one-sided suppression and gives the neurons in the network sparse activations. Its expression is given in Eq. (3.5) and its derivative in Eq. (3.6); the graphs of the function and its derivative are shown below.\n",
    "\n",
    "$f(x) = \\max(0,x)$  (3.5)\n",
    "\n",
    "\\begin{equation}\n",
    "\\frac{\\mathrm{d}f(x)}{\\mathrm{d}x}=\\begin{cases}\n",
    "0, & \\text{if } x \\le 0 \\\\\n",
    "1, & \\text{if } x > 0\n",
    "\\end{cases}\n",
    "\\tag{3.6}\n",
    "\\end{equation}\n",
    "\n",
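    "A short numerical illustration of the one-sided suppression described above (an illustrative sketch, not part of the original derivation):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def relu(x):\n",
    "    return np.maximum(0.0, x)\n",
    "\n",
    "def relu_grad(x):\n",
    "    # Eq. (3.6): derivative is 0 for x <= 0 and 1 for x > 0\n",
    "    return np.where(x > 0, 1.0, 0.0)\n",
    "\n",
    "x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])\n",
    "print(relu(x))       # negative inputs are suppressed to 0\n",
    "print(relu_grad(x))  # gradient passes only where x > 0\n",
    "```\n",
    "\n",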
    "<img src=picture/4.4.png>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{The Organization of a Neural Network Model}$\n",
    "\n",
    "A typical neural network model has a layered structure, as shown in the figure below. Each neuron is fully connected to the neurons of the next layer; there are no connections within a layer and none that skip layers. The input-layer neurons receive external input, the hidden-layer and output-layer neurons process the signals, and the final result is produced by the output layer. Learning in a neural network model means adjusting the connection weights between neurons and the threshold of each functional neuron according to the training data; what the network learns is contained in these connection weights and thresholds.\n",
    "\n",
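    "The layered, fully connected computation described above can be sketched as a chain of matrix operations (an illustrative sketch with arbitrary random weights; the layer sizes are made up for the example):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "# a 4-3-2 network: 4 input, 3 hidden and 2 output neurons\n",
    "sizes = [4, 3, 2]\n",
    "weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]\n",
    "thresholds = [rng.standard_normal((m, 1)) for m in sizes[1:]]\n",
    "\n",
    "a = rng.standard_normal((4, 1))  # one input sample as a column vector\n",
    "for W, theta in zip(weights, thresholds):\n",
    "    a = sigmoid(W @ a - theta)   # weighted sum minus threshold, then activation\n",
    "print(a.shape)  # (2, 1): one value per output neuron\n",
    "```\n",
    "\n",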
    "<img src=picture/4.5.png>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{3.2 The Learning Algorithm for Neural Network Models}$\n",
    "\n",
    "The error backpropagation algorithm (BP algorithm) is the foremost learning algorithm for neural network models; in practice, most neural networks are trained with BP. Given a training set $D=\\{(\\textbf{x}_1,\\textbf{y}_1),(\\textbf{x}_2,\\textbf{y}_2),\\dots,(\\textbf{x}_m,\\textbf{y}_m)\\}$, where $\\textbf{x}_i$ is a $d$-dimensional input feature vector and $\\textbf{y}_i$ is an $l$-dimensional target output vector, we now describe how the BP algorithm learns the single-hidden-layer neural network shown in the figure below. The model consists of an input layer with $d$ neurons (one per input feature), a hidden layer with $q$ neurons, and an output layer with $l$ neurons (one per output target). The input-layer neurons only receive the input; the hidden-layer and output-layer neurons all use the sigmoid activation function.\n",
    "\n",
    "<img src=picture/4.6.png>\n",
    "\n",
    "$\\textbf{Learning a Neural Network Model with the BP Algorithm}$\n",
    "\n",
    "$\\textbf{Input:}$\n",
    "\n",
    "Training set: $D=\\{(\\textbf{x}_1,\\textbf{y}_1),(\\textbf{x}_2,\\textbf{y}_2),\\dots,(\\textbf{x}_m,\\textbf{y}_m)\\}$; learning rate: $\\eta$; maximum number of iterations: $N$\n",
    "\n",
    "$\\textbf{Output:}$\n",
    "\n",
    "The parameters of the neural network model: the input-to-hidden connection weight matrix $\\textbf{V}_{d,q}$, the hidden-neuron thresholds $\\boldsymbol{\\gamma}$, the hidden-to-output connection weight matrix $\\textbf{W}_{q,l}$, and the output-neuron thresholds $\\boldsymbol{\\theta}$.\n",
    "\n",
    "$\\textbf{Procedure:}$\n",
    "\n",
    "(1) Randomly initialize the input-to-hidden connection weight matrix $\\textbf{V}_{d,q}$, the hidden-neuron thresholds $\\boldsymbol{\\gamma}$, the hidden-to-output connection weight matrix $\\textbf{W}_{q,l}$, and the output-neuron thresholds $\\boldsymbol{\\theta}$. Set the current iteration count $T=0$.\n",
    "\n",
    "(2) while $T < N$:\n",
    "\n",
    "(3) for all $(\\textbf{x}_k,\\textbf{y}_k) \\in D$ do\n",
    "\n",
    "(4) Using the current model parameters, propagate the sample $(\\textbf{x}_k,\\textbf{y}_k)$ forward through the network to obtain its predicted output,\n",
    "\n",
    "\\begin{equation}\n",
    "\\bar{\\textbf{y}}_k =[\\bar{y}_{1}^{k},\\bar{y}_{2}^{k},...,\\bar{y}_{l}^{k}]\\\\\n",
    "\\bar{y}_{j}^{k} = f(\\sum_{h=1}^{q}w_{hj}f(\\alpha_{h}-\\gamma_{h})-\\theta_{j})=f(\\sum_{h=1}^{q}w_{hj}f(\\sum_{i=1}^{d}v_{ih}x_{ki}-\\gamma_{h})-\\theta_{j})\n",
    "\\end{equation}\n",
    "\n",
    "(5) Compute the network's prediction error on the sample $(\\textbf{x}_k,\\textbf{y}_k)$,\n",
    "\n",
    "\\begin{equation}\n",
    "E_k = \\frac{1}{2}\\sum_{j=1}^{l}(\\bar{y}_{j}^{k}-y_{j}^{k})^2\n",
    "\\end{equation}\n",
    "\n",
    "(6) Backpropagate the prediction error and compute the partial derivative $\\Delta w_{hj}$ of $E_k$ with respect to the hidden-to-output connection weight $w_{hj}$,\n",
    " \n",
    "<img src=picture/4.7.png> \n",
    " \n",
    "(7) Backpropagate the prediction error and compute the partial derivative $\\Delta \\theta_{j}$ of $E_k$ with respect to the output-neuron threshold $\\theta_{j}$,\n",
    "\n",
    "<img src=picture/4.8.png> \n",
    "\n",
    "\n",
    "(8) Backpropagate the prediction error and compute the partial derivative $\\Delta v_{ih}$ of $E_k$ with respect to the input-to-hidden connection weight $v_{ih}$,\n",
    "\n",
    "<img src=picture/4.9.png> \n",
    "\n",
    "\n",
    "(9) Backpropagate the prediction error and compute the partial derivative $\\Delta \\gamma_{h}$ of $E_k$ with respect to the hidden-neuron threshold $\\gamma_{h}$,\n",
    "\n",
    "<img src=picture/4.10.png> \n",
    "\n",
    "(10) Update the weights $w_{hj}$: $w_{hj} = w_{hj} - \\eta \\Delta w_{hj}$\n",
    "\n",
    "(11) Update the thresholds $\\theta_{j}$: $\\theta_{j} = \\theta_{j} - \\eta \\Delta \\theta_{j}$\n",
    "\n",
    "(12) Update the weights $v_{ih}$: $v_{ih} = v_{ih} - \\eta \\Delta v_{ih}$\n",
    "\n",
    "(13) Update the thresholds $\\gamma_{h}$: $\\gamma_{h} = \\gamma_{h} - \\eta \\Delta \\gamma_{h}$\n",
    "\n",
    "(14) $T = T + 1$\n",
    "\n",
    "(15) End while\n",
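    "\n",
    "The update quantities in steps (6)-(9) can be written out for a small random network and checked against a finite difference of $E_k$ (an illustrative sketch; the dimensions and values below are arbitrary, and the gradient formulas follow from the sigmoid network defined above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "d, q, l = 3, 4, 2                    # input, hidden and output layer sizes\n",
    "V = rng.standard_normal((d, q))      # input-to-hidden weights\n",
    "gamma = rng.standard_normal(q)       # hidden-neuron thresholds\n",
    "W = rng.standard_normal((q, l))      # hidden-to-output weights\n",
    "theta = rng.standard_normal(l)       # output-neuron thresholds\n",
    "x = rng.standard_normal(d)           # one training sample\n",
    "y = np.array([0.0, 1.0])             # its target output\n",
    "\n",
    "def forward_error(W):\n",
    "    b = sigmoid(x @ V - gamma)       # hidden outputs, step (4)\n",
    "    y_hat = sigmoid(b @ W - theta)   # predicted outputs, step (4)\n",
    "    return 0.5 * np.sum((y_hat - y) ** 2), b, y_hat  # error, step (5)\n",
    "\n",
    "E, b, y_hat = forward_error(W)\n",
    "g = (y_hat - y) * y_hat * (1.0 - y_hat)  # output-layer error term\n",
    "dW = np.outer(b, g)                      # step (6): dE_k/dw_hj\n",
    "dtheta = -g                              # step (7): dE_k/dtheta_j\n",
    "e = b * (1.0 - b) * (W @ g)              # hidden-layer error term\n",
    "dV = np.outer(x, e)                      # step (8): dE_k/dv_ih\n",
    "dgamma = -e                              # step (9): dE_k/dgamma_h\n",
    "\n",
    "# finite-difference check on the single weight w_00\n",
    "h = 1e-6\n",
    "W2 = W.copy()\n",
    "W2[0, 0] += h\n",
    "print(abs((forward_error(W2)[0] - E) / h - dW[0, 0]))  # near 0\n",
    "```\n",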
    "\n",
    "$\\textbf{3.3 Implementing the Neural Network Model}$\n",
    "<br>\n",
    "<br>\n",
    "This section implements a NeuralNetwork class in Python that encapsulates the neural network model; the code is shown below.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "class NeuralNetwork:\n",
    "    def __init__(self, layer_sizes):\n",
    "        \"\"\"\n",
    "        Parameters:\n",
    "        layer_sizes: the number of layers and the number of neurons per layer;\n",
    "            e.g. layer_sizes=[3,5,7] means a three-layer network with 3, 5 and 7\n",
    "            neurons in the first, second and third layers respectively\n",
    "        \"\"\"\n",
    "        self.num_layers = len(layer_sizes)  # number of layers\n",
    "        self.layers = layer_sizes  # number of neurons in each layer\n",
    "        # thresholds (biases) of the hidden- and output-layer neurons\n",
    "        self.biases = [np.random.randn(y,1) for y in layer_sizes[1:]]\n",
    "        # connection weight matrices between consecutive layers\n",
    "        self.weights = [np.random.randn(y,x) for x, y in zip(layer_sizes[:-1],layer_sizes[1:])]\n",
    "    \n",
    "    def sigmoid(self,z):\n",
    "        \"\"\"\n",
    "        the sigmoid function\n",
    "        \"\"\"\n",
    "        return 1.0/(1.0 + np.exp(-z))\n",
    "    \n",
    "    def sigmoid_prime(self,z):\n",
    "        \"\"\"\n",
    "        the derivative of the sigmoid function\n",
    "        \"\"\"\n",
    "        return self.sigmoid(z)*(1-self.sigmoid(z))\n",
    "    \n",
    "    def cost_func(self, output_layer_values, y):\n",
    "        \"\"\"      \n",
    "        the derivative of the quadratic cost function\n",
    "        \"\"\"\n",
    "        return (output_layer_values - y)\n",
    "    \n",
    "    def feedforward(self,x):\n",
    "        \"\"\"\n",
    "        forward propagation: compute the network's output\n",
    "        \"\"\"\n",
    "        for w, b in zip(self.weights, self.biases):\n",
    "            x = self.sigmoid(np.dot(w,x)+b)\n",
    "        return x    \n",
    "    \n",
    "    def backpropagation(self, X, Y):\n",
    "        \"\"\"\n",
    "        error backpropagation: compute the update values for the weights and thresholds\n",
    "        \"\"\"\n",
    "        delta_b = [np.zeros(b.shape) for b in self.biases]\n",
    "        delta_w = [np.zeros(w.shape) for w in self.weights]\n",
    "        \n",
    "        activation = np.transpose(X)\n",
    "        activations = [activation]\n",
    "        \n",
    "        zs = []\n",
    "        \n",
    "        for b, w in zip(self.biases,self.weights):\n",
    "            z = np.dot(w,activation) + b\n",
    "            zs.append(z)\n",
    "            activation = self.sigmoid(z)\n",
    "            activations.append(activation)\n",
    "            \n",
    "        Y = np.transpose(Y)\n",
    "        costs = self.cost_func(activations[-1], Y)\n",
    "        z =  zs[-1]\n",
    "        delta = np.multiply(costs,self.sigmoid_prime(z))\n",
    "        delta_b[-1] = np.sum(delta,axis=1,keepdims=True)\n",
    "        delta_w[-1] = np.dot(delta, np.transpose(activations[-2]))\n",
    "        for i in range(2,self.num_layers):\n",
    "            z= zs[-i]\n",
    "            sp = self.sigmoid_prime(z)\n",
    "            delta = np.multiply(np.dot(np.transpose(self.weights[-i+1]),delta),sp)\n",
    "            delta_b[-i] = np.sum(delta,axis=1,keepdims=True)\n",
    "            delta_w[-i] = np.dot(delta,np.transpose(activations[-i-1]))\n",
    "        return delta_b,delta_w\n",
    "    \n",
    "    def fit(self, X,Y,learnrate,mini_batch_size, epochs=1000):\n",
    "        \"\"\"\n",
    "        train the neural network model with mini-batches\n",
    "        \"\"\"\n",
    "        N = len(X)\n",
    "        for i in range(epochs):\n",
    "            randomlist = np.random.randint(0,N-mini_batch_size,int(N/mini_batch_size))\n",
    "            batch_X = [X[k:k+mini_batch_size] for k in randomlist]\n",
    "            batch_Y = [Y[k:k+mini_batch_size] for k in randomlist]\n",
    "            for j in range(len(batch_X)):\n",
    "                delta_b,delta_w = self.backpropagation(batch_X[j], batch_Y[j])\n",
    "                self.weights = [w - (learnrate/mini_batch_size)*dw for w, dw in zip(self.weights,delta_w)]\n",
    "                self.biases = [b - (learnrate/mini_batch_size)*db for b, db in zip(self.biases,delta_b)]\n",
    "            if (i+1)%100 == 0:\n",
    "                labels = self.predict(X)\n",
    "                acc = 0.0\n",
    "                for k in range(len(labels)):\n",
    "                    if Y[k,labels[k]]==1.0:\n",
    "                        acc += 1.0\n",
    "                print(\"iterations %d accuracy %.3f\"%(i+1,acc/len(labels)))\n",
    "    \n",
    "    def predict(self, x):\n",
    "        results =  self.feedforward(x.T)\n",
    "        labels = [np.argmax(results[:,y]) for y in range(results.shape[1])]\n",
    "        return labels"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{3.4 Practice Tasks}$\n",
    "\n",
    "Implement an $n$-layer neural network ($n>3$) in Python and apply it to handwritten digit recognition. The specific requirements are:\n",
    "\n",
    "(1) Load the handwritten digit recognition dataset with `from sklearn.datasets import load_digits`.\n",
    "\n",
    "(2) The neural network must contain at least two hidden layers.\n",
    "\n",
    "(3) Compare the recognition accuracy obtained with different activation functions.\n",
    "\n",
    "(4) Compare your own neural network implementation with the neural network model in the sklearn library."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# add your implementation code here"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
