{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we can think of each neuron as having an activation function that determines whether the neuron fires. We will use the **sigmoid** function, which should be very familiar from logistic regression. Unlike in logistic regression, though, when working with neural networks we also need the derivative of the sigmoid."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1 / (1 + np.exp(-x))\n",
    "\n",
    "def dsigmoid(y):\n",
    "    # note: y is expected to already be a sigmoid output, i.e. y = sigmoid(x)\n",
    "    return y * (1.0 - y)"
   ]
  },
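  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (an illustrative addition, assuming only the definitions above), we can verify that `dsigmoid` agrees with a numerical derivative. Note that `dsigmoid` expects the *output* of the sigmoid, `y = sigmoid(x)`, not `x` itself."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# compare dsigmoid(sigmoid(x)) against a central finite-difference estimate\n",
    "x = 0.5\n",
    "h = 1e-6\n",
    "analytic = dsigmoid(sigmoid(x))\n",
    "numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)\n",
    "print(analytic, numeric)  # the two values should agree closely"
   ]
  },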
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sigmoid function in the network generates an activation from the inputs multiplied by their weights. For example, suppose our input data has two columns (features) and our network has one hidden node (neuron). Each feature is multiplied by its corresponding weight, the products are summed, and the sum is passed through the sigmoid.\n",
    "\n",
    "As we add more hidden units, we also add a path from every input feature to every hidden unit, each multiplied by its corresponding weight. Each hidden unit takes the sum of its inputs times their weights and passes it through the sigmoid, producing that unit's activation."
   ]
  },
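  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of what a single hidden unit computes (the feature and weight values below are made up for illustration): multiply each feature by its weight, sum the products, and pass the sum through the sigmoid."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# two input features feeding one hidden unit\n",
    "features = np.array([0.2, 0.7])\n",
    "weights = np.array([0.4, -0.1])\n",
    "activation = sigmoid(np.dot(features, weights))  # sigmoid(0.2*0.4 + 0.7*-0.1)\n",
    "print(activation)"
   ]
  },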
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following code sets up the arrays that will hold the network's data and initializes some parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MLP_NeuralNetwork(object):\n",
    "    def __init__(self, input, hidden, output):\n",
    "        \"\"\"\n",
    "        :param input: number of input neurons\n",
    "        :param hidden: number of hidden neurons\n",
    "        :param output: number of output neurons\n",
    "        \"\"\"\n",
    "        self.input = input + 1 # add one for a bias node\n",
    "        self.hidden = hidden\n",
    "        self.output = output\n",
    "        \n",
    "        # set up arrays of 1s for the unit activations\n",
    "        self.ai = [1.0] * self.input\n",
    "        self.ah = [1.0] * self.hidden\n",
    "        self.ao = [1.0] * self.output\n",
    "        \n",
    "        # create random weights\n",
    "        self.wi = np.random.randn(self.input, self.hidden)\n",
    "        self.wo = np.random.randn(self.hidden, self.output)\n",
    "        \n",
    "        # create arrays of 0s to hold the previous weight changes\n",
    "        self.ci = np.zeros((self.input, self.hidden))\n",
    "        self.co = np.zeros((self.hidden, self.output))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We do all the computation with matrices because they are fast and very easy to read. Our class takes three inputs: the size of the input layer (the number of features), the size of the hidden layer (a tunable hyperparameter), and the size of the output layer (the number of possible classes).\n",
    "\n",
    "We set up arrays of 1s as placeholders for the unit activations and arrays of 0s as placeholders for the layer changes.\n",
    "\n",
    "We initialize all the weights to random numbers (this point is important: if all the weights were the same, every hidden unit would compute the same thing and the network could never be tuned).\n",
    "\n",
    "Next comes the prediction step: we feed all the data through the network using the random weights and generate some (not very accurate) predictions. Every time a prediction is made, we compute how wrong it was and the direction in which the weights need to change to make the prediction better (i.e., the error).\n",
    "\n",
    "Because this happens every time the weights are updated, we create a feedForward function that gets called again and again."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def feedForward(self, inputs):\n",
    "    if len(inputs) != self.input - 1:\n",
    "        raise ValueError('Wrong number of inputs')\n",
    "    # input activations\n",
    "    for i in range(self.input - 1):\n",
    "        self.ai[i] = inputs[i]\n",
    "    # hidden activations\n",
    "    for j in range(self.hidden):\n",
    "        total = 0.0\n",
    "        for i in range(self.input):\n",
    "            total += self.ai[i] * self.wi[i][j]\n",
    "        self.ah[j] = sigmoid(total)\n",
    "    # output activations\n",
    "    for k in range(self.output):\n",
    "        total = 0.0\n",
    "        for j in range(self.hidden):\n",
    "            total += self.ah[j] * self.wo[j][k]\n",
    "        self.ao[k] = sigmoid(total)\n",
    "    return self.ao[:]"
   ]
  },
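  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this notebook the methods are written as standalone functions, so to try feedForward on its own we can attach it to the class first (an illustrative step; the layer sizes and input values below are arbitrary)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# attach the standalone function as a method, then run one forward pass\n",
    "MLP_NeuralNetwork.feedForward = feedForward\n",
    "nn = MLP_NeuralNetwork(2, 3, 1)\n",
    "out = nn.feedForward([0.5, -0.2])\n",
    "print(out)  # one activation per output unit, each strictly between 0 and 1"
   ]
  },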
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The input activations are simply the input features. For every later layer, though, the activation becomes the sum of the previous layer's activations multiplied by their corresponding weights, fed through the sigmoid.\n",
    "\n",
    "After the first pass, the error of our predictions is quite large, so we use gradient descent.\n",
    "\n",
    "Our backpropagation algorithm starts by computing the error between our predicted output and the true output. It then takes the derivative of the sigmoid at the output activation (the predicted value) to get the direction (slope) of the gradient, and multiplies that value by the error. This gives us the magnitude of the error and the direction in which the hidden weights need to change to correct it. We then move back to the hidden layer and, from the magnitude and error just computed, calculate the error for the hidden-layer weights.\n",
    "\n",
    "Using that error and the sigmoid derivative of the hidden-layer activations, we compute how much the input-layer weights need to change, and in which direction.\n",
    "\n",
    "Now that we have, for the whole network, how much we want to change the weights and in which direction, we actually do it: we update the weights connecting each layer. We do this by multiplying each weight change by a learning-rate constant (plus the previous change, which acts as a momentum term). Just as in linear models, we use the learning-rate constant to make small changes at each step, so that we have a better chance of finding the weight values that truly minimize the cost function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "def backPropagate(self, targets, N):\n",
    "    \"\"\"\n",
    "    :param targets: y values\n",
    "    :param N: learning rate\n",
    "    :return: the current prediction error\n",
    "    \"\"\"\n",
    "    if len(targets) != self.output:\n",
    "        raise ValueError('Wrong number of targets you silly goose!')\n",
    "    \n",
    "    # calculate error terms for the output layer\n",
    "    # the delta tells us in which direction to change the weights\n",
    "    output_deltas = [0.0] * self.output\n",
    "    for k in range(self.output):\n",
    "        error = -(targets[k] - self.ao[k])\n",
    "        output_deltas[k] = dsigmoid(self.ao[k]) * error\n",
    "        \n",
    "    # calculate error terms for the hidden layer\n",
    "    # the delta tells us in which direction to change the weights\n",
    "    hidden_deltas = [0.0] * self.hidden\n",
    "    for j in range(self.hidden):\n",
    "        error = 0.0\n",
    "        for k in range(self.output):\n",
    "            error += output_deltas[k] * self.wo[j][k]\n",
    "        hidden_deltas[j] = dsigmoid(self.ah[j]) * error\n",
    "        \n",
    "    # update the weights connecting the hidden layer to the output layer\n",
    "    for j in range(self.hidden):\n",
    "        for k in range(self.output):\n",
    "            change = output_deltas[k] * self.ah[j]\n",
    "            self.wo[j][k] -= N * change + self.co[j][k]\n",
    "            self.co[j][k] = change  # store the change as a momentum term\n",
    "        \n",
    "    # update the weights connecting the input layer to the hidden layer\n",
    "    for i in range(self.input):\n",
    "        for j in range(self.hidden):\n",
    "            change = hidden_deltas[j] * self.ai[i]\n",
    "            self.wi[i][j] -= N * change + self.ci[i][j]\n",
    "            self.ci[i][j] = change  # store the change as a momentum term\n",
    "        \n",
    "    # calculate the prediction error\n",
    "    error = 0.0\n",
    "    for k in range(len(targets)):\n",
    "        error += 0.5 * (targets[k] - self.ao[k]) ** 2\n",
    "    return error"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Chaining it all together, we create the train and predict functions. The steps for training the network are quite direct and intuitive.\n",
    "\n",
    "We first call the **feedForward** function, which gives us the output for our randomly initialized weights. We then call the backpropagation algorithm to tune and update the weights so that they make better predictions.\n",
    "\n",
    "Then feedForward is called again, but this time it uses the updated weights, and the predictions are slightly better.\n",
    "\n",
    "We keep this loop going for a predetermined number of iterations, during which we should see the error drop close to 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train(self, patterns, iterations = 3000, N = 0.0002):\n",
    "    # N: learning rate\n",
    "    for i in range(iterations):\n",
    "        error = 0.0\n",
    "        for p in patterns:\n",
    "            inputs = p[0]\n",
    "            targets = p[1]\n",
    "            self.feedForward(inputs)\n",
    "            error += self.backPropagate(targets, N)  # accumulate the error over all patterns\n",
    "        if i % 500 == 0:\n",
    "            print('error %-.5f' % error)"
   ]
  },
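  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hypothetical end-to-end run on the classic XOR toy problem (illustrative only: the patterns, layer sizes, iteration count, and learning rate below are assumptions, not part of the original text). As before, we attach the standalone functions to the class first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# attach the standalone functions as methods\n",
    "MLP_NeuralNetwork.feedForward = feedForward\n",
    "MLP_NeuralNetwork.backPropagate = backPropagate\n",
    "MLP_NeuralNetwork.train = train\n",
    "\n",
    "# each pattern is [inputs, targets]\n",
    "xor_patterns = [\n",
    "    [[0, 0], [0]],\n",
    "    [[0, 1], [1]],\n",
    "    [[1, 0], [1]],\n",
    "    [[1, 1], [0]],\n",
    "]\n",
    "\n",
    "nn = MLP_NeuralNetwork(2, 4, 1)\n",
    "nn.train(xor_patterns, iterations = 1000, N = 0.1)\n",
    "for p in xor_patterns:\n",
    "    print(p[0], nn.feedForward(p[0]))"
   ]
  },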
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, the prediction step. We simply call the feedForward function, which returns the activations of the output layer. Remember, each layer's activation is the sigmoid applied to a linear combination of the previous layer's outputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def predict(self, X):\n",
    "    \"\"\"\n",
    "    return the list of predictions after training the model\n",
    "    \"\"\"\n",
    "    predictions = []\n",
    "    for p in X:\n",
    "        predictions.append(self.feedForward(p))\n",
    "    return predictions"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
