{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "27e36477",
   "metadata": {},
   "source": [
    "## 多层神经网络\n",
    "\n",
    "上一节讲到的感知机他的激活函数一般是阶跃函数（$1-2U(-x)$），而神经元本质上就是将激活函数换成了sigmoid或者tanh，那么多层神经网络实际上就是许多神经元的线性组合，可以处理非线性程度更高的问题，如何具体地去理解激活函数，我们稍后再进行解释。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3263db6f",
   "metadata": {},
   "source": [
    "### 1.神经元\n",
    "\n",
    "首先我们先展示一个神经元里面的计算\n",
    "![neuron](images/neuron.gif)\n",
    "\n",
    "计算一个神经元的输出的方法和计算一个感知器的输出是一样的。假设神经元的输入是向量$\\vec{x}$，权重向量是$\\vec{w}$(偏置项是$w_0$)，激活函数是sigmoid函数，则其输出y：\n",
    "$$\n",
    "y = sigmod(\\vec{w}^T \\cdot \\vec{x})\n",
    "$$\n",
    "\n",
    "sigmoid函数的定义如下：\n",
    "$$\n",
    "sigmod(x) = \\frac{1}{1+e^{-x}}\n",
    "$$\n",
    "\n",
    "函数图像如下图所示\n",
    "\n",
    "![sigmod_function](images/sigmod.jpg)\n",
    "\n",
    "将输入映射到sigmoid函数上，得到\n",
    "$$\n",
    "y = \\frac{1}{1+e^{-\\vec{w}^T \\cdot \\vec{x}}}\n",
    "$$\n",
    "\n",
    "在前面我们得到了sigmoid的导数的形式为$y'=y(1-y)$在这里的推导中我们还会用到"
   ]
  },
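  {
   "cell_type": "markdown",
   "id": "f1a2b3c4",
   "metadata": {},
   "source": [
    "As a quick sanity check, here is a minimal NumPy sketch of the single-neuron computation above. The weight and input values are made up purely for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1a2b3c5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    # sigmoid(z) = 1 / (1 + exp(-z))\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "# A neuron with 3 inputs; the bias is folded into the weight vector,\n",
    "# so x is extended with a constant 1 that pairs with the bias weight.\n",
    "w = np.array([0.5, -0.3, 0.8, 0.1])  # hypothetical weights (last entry = bias)\n",
    "x = np.array([1.0, 2.0, 0.5, 1.0])   # input vector extended with 1\n",
    "y = sigmoid(w @ x)\n",
    "print(y)  # a single scalar in (0, 1)"
   ]
  },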
  {
   "cell_type": "markdown",
   "id": "8c12c78d",
   "metadata": {},
   "source": [
    "### 2.神经网络\n",
    "![nn1](images/nn1.jpeg)\n",
    "\n",
    "如图所示就是一个神经网络的示意图，这个神经网络是一个全连接的形式（FC，full connected）\n",
    "\n",
    "* 左侧是输入层(input layer)，右侧是输出层(output layer)，中间是隐层（hidden）\n",
    "* 在全连接神经网络中，同一层的神经元之间没有连接关系\n",
    "* 第n层神经元和n-1层的全部神经元相连，并且上一层每个神经元的输入都有一个权值\n",
    "\n",
    "当然在后面还有诸如CNN、RNN(循环)等其他形式的神经网络，他们的连接方式又会有所不同"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b9d519be",
   "metadata": {},
   "source": [
    "### 3.神经网络的正向输出\n",
    "\n",
    "下面我们将逐步实现神经网络的输出\n",
    "\n",
    "\n",
    "正向，也就是我们知道他的权值的情况，然后我们需要去计算他的输出是怎样的，如果我们把整个网络看成一个函数，那么\n",
    "\n",
    "$$\n",
    "\\vec{y}=f_{network} ( \\vec{x} )\n",
    "$$\n",
    "\n",
    "我们对每个神经元进行编号\n",
    "![nn2](images/nn2.png)\n",
    "\n",
    "* 输入层有三个节点，我们将其依次编号为1、2、3；\n",
    "* 隐藏层的4个节点，编号依次为4、5、6、7；\n",
    "* 最后输出层的两个节点编号为8、9。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e4192a2",
   "metadata": {},
   "source": [
    "如果这时要计算一个节点输出，必须得到上有节点的输出，所以对于$a_4$:\n",
    "![eqn_3_4](images/eqn_3_4.png)\n",
    "\n",
    "也就是前面三个的输出权值进行求和，加上一个偏置项，然后套上sigmoid即可\n",
    "\n",
    "注意这里的下标$\\omega_41$表示的是1$\\rightarrow$4的输出权值（把接收的放在前）\n",
    "\n",
    "采用相同的原理我们就可以得到输出的$\\vec{y} = (y_1, y_2)^T$\n",
    "\n",
    "![eqn_5_6](images/eqn_5_6.png)"
   ]
  },
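  {
   "cell_type": "markdown",
   "id": "f1a2b3c6",
   "metadata": {},
   "source": [
    "Following the per-node rule just described (weighted sum of the upstream outputs, plus a bias, passed through the sigmoid), the computation of $a_4$ can be sketched as follows. All numeric values here are invented for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1a2b3c7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "x = np.array([0.5, 0.1, -0.2])   # outputs of input nodes 1, 2, 3 (made up)\n",
    "w4 = np.array([0.2, -0.4, 0.1])  # w_41, w_42, w_43 (hypothetical weights)\n",
    "b4 = 0.05                        # bias of node 4\n",
    "a4 = sigmoid(w4 @ x + b4)        # weighted sum + bias, then sigmoid\n",
    "print(a4)"
   ]
  },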
  {
   "cell_type": "markdown",
   "id": "56d4fa25",
   "metadata": {},
   "source": [
    "* **网络输出的矩阵形式表示**\n",
    "\n",
    "首先我们分别定义输入和权值矩阵\n",
    "\n",
    "$$\n",
    "\\vec{x}=\\begin{bmatrix}\n",
    "x_1 \\\\\n",
    "x_2 \\\\\n",
    "x_3 \\\\\n",
    "1\\\\\n",
    "\\end{bmatrix}\n",
    "$$\n",
    "\n",
    "$$\n",
    "W=\\begin{bmatrix}\n",
    "\\vec{\\omega_4} \\\\\n",
    "\\vec{\\omega_5} \\\\\n",
    "\\vec{\\omega_6} \\\\\n",
    "\\vec{\\omega_7} \\\\\n",
    "\\end{bmatrix}=\\begin{bmatrix}\n",
    "\\omega_{41} & \\omega_{42} & \\omega_{43} & \\omega_{4b} \\\\\n",
    "\\omega_{51} & \\omega_{52} & \\omega_{53} & \\omega_{5b} \\\\\n",
    "\\omega_{61} & \\omega_{62} & \\omega_{63} & \\omega_{6b} \\\\\n",
    "\\omega_{71} & \\omega_{72} & \\omega_{73} & \\omega_{7b} \\\\\n",
    "\\end{bmatrix}\n",
    "$$\n",
    "\n",
    "所以第一层的所有输出我们可以写成矩阵$\\vec{a}$的形式\n",
    "\n",
    "$$\n",
    "\\vec{a}=\\begin{bmatrix}\n",
    "f(\\vec{\\omega_4}\\cdot\\vec{x}） \\\\\n",
    "f(\\vec{\\omega_5}\\cdot\\vec{x}） \\\\\n",
    "f(\\vec{\\omega_6}\\cdot\\vec{x}） \\\\\n",
    "f(\\vec{\\omega_7}\\cdot\\vec{x}） \\\\\n",
    "\\end{bmatrix}=f(W\\cdot\\vec{x})\n",
    "$$\n",
    "\n",
    "$$\n",
    "f=sigmoid\n",
    "$$\n",
    "\n",
    "如果是如图所示的多层神经网络\n",
    "\n",
    "![nn_parameters_demo](images/nn_parameters_demo.png)\n",
    "那么每一层的输出可以表示为\n",
    "![eqn_17_20](images/eqn_17_20.png)\n",
    "\n",
    "最终的输出值就可以写成\n",
    "\n",
    "$$\n",
    "\\vec{y} = f(W4 \\cdot f(W3 \\cdot f(W2 \\cdot f(W1 \\cdot \\vec{x}))))\n",
    "$$\n",
    "\n",
    "正向传播就是这样一层一层地进行计算就可以实现了，下图是动态演示\n",
    "\n",
    "亮处表示激活了该神经元，输出一个高电平\n",
    "\n",
    "![](images/nn-forward.gif)"
   ]
  }
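  ,
  {
   "cell_type": "markdown",
   "id": "f1a2b3c8",
   "metadata": {},
   "source": [
    "The layer-by-layer forward pass above can be sketched in a few lines of NumPy. The layer sizes and the random weights are assumptions chosen only to make the shapes concrete; each weight matrix carries one extra column that multiplies the constant 1 appended for the bias:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1a2b3c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "def forward(x, weight_matrices):\n",
    "    # Compute f(W_n ... f(W_2 f(W_1 x))) one layer at a time.\n",
    "    a = x\n",
    "    for W in weight_matrices:\n",
    "        a = np.append(a, 1.0)  # append constant 1 for the bias column\n",
    "        a = sigmoid(W @ a)     # this layer's activations\n",
    "    return a\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "# Hypothetical layer sizes: 3 inputs -> 4 hidden -> 2 outputs.\n",
    "W1 = rng.normal(size=(4, 3 + 1))  # +1 column for the bias\n",
    "W2 = rng.normal(size=(2, 4 + 1))\n",
    "\n",
    "x = np.array([0.5, 0.1, -0.2])\n",
    "y = forward(x, [W1, W2])\n",
    "print(y.shape)  # (2,)"
   ]
  }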
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
