{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Optimization Algorithms\n",
    "\n",
    "An optimization algorithm minimizes (or maximizes) an objective function in order to compute and update the parameters that determine how well a model trains and represents the data, driving those parameters toward (or as close as possible to) their optimal values, thereby improving training and the model's ability to learn.\n",
    "\n",
    "## Underfitting and Overfitting\n",
    "\n",
    "In deep learning, the quality of a learning algorithm is judged mainly from two angles:\n",
    "\n",
    "- Reduce the error on the training set, i.e. the training error.\n",
    "\n",
    "- Reduce the gap between the training error and the error on the test set.\n",
    "\n",
    "These two angles reflect the two central challenges of machine learning: underfitting and overfitting.\n",
    "\n",
    "Underfitting means the model cannot reach a sufficiently low error on the training set, i.e. its training error remains above human-level error. Such a model still has room to improve: increasing its depth, training for more iterations, or choosing a better optimization algorithm can all raise its performance.\n",
    "\n",
    "Overfitting, by contrast, means the chosen model has so many parameters that it predicts the known data very well but the unknown data poorly. We say such a model generalizes badly; it can be mitigated by enlarging the dataset, adding regularization, or adjusting the hyperparameters.\n",
    "\n",
    "Focusing on overfitting, we first give a brief introduction to Dropout and Batch normalization, and then use each of them to optimize the CNN digit-recognition example from Chapter 6."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Dropout\n",
    "\n",
    "Dropout works by modifying the structure of the model itself; it is cheap to compute yet highly effective. Consider the three-layer neural network shown below:\n",
    "\n",
    "<img src=\"image/dropout1.png\" style=\"width:250px;height:250px;\">\n",
    "\n",
    "For the network above, at the start of training we randomly select some hidden-layer neurons with a fixed probability and delete them, i.e. treat them as if they did not exist, which yields the network below:\n",
    "\n",
    "<img src=\"image/dropout2.png\" style=\"width:250px;height:250px;\">\n",
    "\n",
    "Gradients are computed and parameters updated on this thinned network (deleted neurons are not updated). In the next iteration another random set of neurons is selected, and the procedure repeats until training ends.\n",
    "Dropout can also be viewed as an ensemble (bagging) method: every iteration trains a different model, and the models are averaged with certain weights at the end. Parameter updates thus no longer depend on the joint behavior of particular hidden units, which effectively prevents overfitting."
   ]
  },
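  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the mechanism concrete, here is a minimal NumPy sketch of (inverted) dropout on a single activation matrix. It is an illustration only, not PaddlePaddle's implementation, and the names dropout_forward and drop_rate are ours: each unit is kept with probability 1 - drop_rate, and the survivors are scaled by 1 / (1 - drop_rate) so the expected activation matches test time, where the full network is used unchanged."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def dropout_forward(a, drop_rate=0.5, train=True):\n",
    "    # at test time the full network is used unchanged\n",
    "    if not train or drop_rate == 0.0:\n",
    "        return a\n",
    "    # keep each unit with probability (1 - drop_rate)\n",
    "    mask = np.random.rand(*a.shape) >= drop_rate\n",
    "    # scale the survivors so the expected activation is unchanged\n",
    "    return a * mask / (1.0 - drop_rate)"
   ]
  },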
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Batch normalization\n",
    "\n",
    "One assumption of machine learning is that the data are independent and identically distributed. In a deep model, however, data that were preprocessed to share one distribution drift away from it as they pass through successive layers of forward computation, and the deeper the network, the more this drift is amplified.\n",
    "\n",
    "The goal of Batch normalization is to process the input of every layer so that it satisfies the identically-distributed assumption as closely as possible.\n",
    "\n",
    "One can standardize each layer's input to zero mean and unit variance:\n",
    "\n",
    "$$\\hat{x}^{(k)}=\\frac{x^{(k)} - E[x^{(k)}]}{\\sqrt{Var[x^{(k)}]}}$$\n",
    "\n",
    "But simply whitening each layer like this reduces its expressive power. As the figure below illustrates for the sigmoid activation, restricting the data to zero mean and unit variance confines it to the roughly linear part of the activation, which clearly weakens the model; Batch normalization therefore follows the standardization with a learnable scale and shift, $y^{(k)}=\\gamma^{(k)}\\hat{x}^{(k)}+\\beta^{(k)}$, so each layer can recover its expressiveness.\n",
    "<img src=\"image/batch_normalization.png\" style=\"width:300px;height:200px;\">"
   ]
  },
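  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The full Batch normalization transform can be sketched in a few lines of NumPy. This is an illustration of the idea, not PaddlePaddle's implementation, and the names batch_norm_forward, gamma and beta are ours: each feature is standardized over the mini-batch as in the formula above, and a learnable scale gamma and shift beta are then applied, so the layer can undo the whitening whenever that preserves more expressive power."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def batch_norm_forward(x, gamma, beta, eps=1e-5):\n",
    "    # per-feature mean E[x^(k)] and variance Var[x^(k)] over the mini-batch\n",
    "    mean = x.mean(axis=0)\n",
    "    var = x.var(axis=0)\n",
    "    # standardize to zero mean and unit variance (eps avoids division by zero)\n",
    "    x_hat = (x - mean) / np.sqrt(var + eps)\n",
    "    # learnable scale and shift restore the layer's expressive power\n",
    "    return gamma * x_hat + beta"
   ]
  },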
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## 1 - Import Libraries\n",
    "\n",
    "First, load the libraries we will use:\n",
    "- numpy: the fundamental Python package for scientific computing\n",
    "- matplotlib.pyplot: used to draw figures when validating model accuracy and showing how the cost evolves\n",
    "- PIL: used at the end to test the trained model on our own image\n",
    "- paddle.v2: the PaddlePaddle deep learning framework\n",
    "- paddle.v2.plot: PaddlePaddle's plotting utility"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import matplotlib\n",
    "import os\n",
    "from PIL import Image\n",
    "import numpy as np\n",
    "import paddle.v2 as paddle\n",
    "from paddle.v2.plot import Ploter"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Add some plot labels; they will be used when drawing the learning curves."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "with_gpu = os.getenv('WITH_GPU', '0') != '0'\n",
    "\n",
    "step = 0\n",
    "\n",
    "# plot labels\n",
    "train_title_cost = \"Train cost\"\n",
    "test_title_cost = \"Test cost\"\n",
    "\n",
    "train_title_error = \"Train error rate\"\n",
    "test_title_error = \"Test error rate\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2 - Define the CNN Classifiers\n",
    "\n",
    "**A plain convolutional neural network classifier**\n",
    "\n",
    "First we define a plain CNN classifier, convolutional_neural_network(). Its structure is convolution - pooling - convolution - pooling - fully connected. PaddlePaddle treats one convolution layer plus one pooling layer as a single unit, defined with paddle.networks.simple_img_conv_pool(), whose parameters are:\n",
    "\n",
    "- input: the input data\n",
    "- filter_size: size of the convolution kernel\n",
    "- num_filters: number of convolution kernels\n",
    "- num_channel: number of input channels\n",
    "- pool_size: pooling window size\n",
    "- pool_stride: pooling stride\n",
    "- act: the activation function; here we use Relu()\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def convolutional_neural_network(img):\n",
    "    \"\"\"\n",
    "    Define the CNN classifier:\n",
    "        the input 2-D image passes through two conv-pool units, then through a fully connected output layer with softmax activation\n",
    "    Args:\n",
    "        img -- the raw input image data\n",
    "    Return:\n",
    "        predict -- the classification result\n",
    "    \"\"\"\n",
    "    # first conv-pool unit\n",
    "    conv_pool_1 = paddle.networks.simple_img_conv_pool(\n",
    "        input=img,\n",
    "        filter_size=5,\n",
    "        num_filters=20,\n",
    "        num_channel=1,\n",
    "        pool_size=2,\n",
    "        pool_stride=2,\n",
    "        act=paddle.activation.Relu())\n",
    "\n",
    "    # second conv-pool unit\n",
    "    conv_pool_2 = paddle.networks.simple_img_conv_pool(\n",
    "        input=conv_pool_1,\n",
    "        filter_size=5,\n",
    "        num_filters=50,\n",
    "        num_channel=20,\n",
    "        pool_size=2,\n",
    "        pool_stride=2,\n",
    "        act=paddle.activation.Relu())\n",
    "    # fully connected layer\n",
    "    predict = paddle.layer.fc(\n",
    "        input=conv_pool_2, size=10, act=paddle.activation.Softmax())\n",
    "    return predict\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**A CNN classifier optimized with Dropout**\n",
    "\n",
    "Next we define convolutional_neural_network_with_dropout(). It has the same structure as the plain classifier, but dropout is added to each conv-pool unit: in PaddlePaddle, passing conv_layer_attr=paddle.attr.ExtraLayerAttribute(drop_rate=0.5) applies dropout to the unit with drop_rate=0.5."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def convolutional_neural_network_with_dropout(img):\n",
    "    \"\"\"\n",
    "    Define the CNN classifier:\n",
    "        the input 2-D image passes through two conv-pool units, then through a fully connected output layer with softmax activation\n",
    "    Args:\n",
    "        img -- the raw input image data\n",
    "    Return:\n",
    "        predict -- the classification result\n",
    "    Difference from the plain version:\n",
    "        dropout is added to both conv-pool units\n",
    "    \"\"\"\n",
    "    # first conv-pool unit\n",
    "    conv_pool_1 = paddle.networks.simple_img_conv_pool(\n",
    "        input=img,\n",
    "        filter_size=5,\n",
    "        num_filters=20,\n",
    "        num_channel=1,\n",
    "        pool_size=2,\n",
    "        pool_stride=2,\n",
    "        act=paddle.activation.Relu(),\n",
    "        conv_layer_attr=paddle.attr.ExtraLayerAttribute(drop_rate=0.5))\n",
    "\n",
    "    # second conv-pool unit\n",
    "    conv_pool_2 = paddle.networks.simple_img_conv_pool(\n",
    "        input=conv_pool_1,\n",
    "        filter_size=5,\n",
    "        num_filters=50,\n",
    "        num_channel=20,\n",
    "        pool_size=2,\n",
    "        pool_stride=2,\n",
    "        act=paddle.activation.Relu(),\n",
    "        conv_layer_attr=paddle.attr.ExtraLayerAttribute(drop_rate=0.5))\n",
    "    # fully connected layer\n",
    "    predict = paddle.layer.fc(\n",
    "        input=conv_pool_2, size=10, act=paddle.activation.Softmax())\n",
    "    return predict\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**A CNN classifier optimized with Batch normalization**\n",
    "\n",
    "Finally we define convolutional_neural_network_with_batch_norm(). It has the same structure as the plain classifier, but a Batch normalization operation is inserted after each conv-pool unit, using paddle.layer.batch_norm(input=conv_pool_1, act=paddle.activation.Relu())."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def convolutional_neural_network_with_batch_norm(img):\n",
    "    \"\"\"\n",
    "    Define the CNN classifier:\n",
    "        the input 2-D image passes through two conv-pool units, then through a fully connected output layer with softmax activation\n",
    "    Args:\n",
    "        img -- the raw input image data\n",
    "    Return:\n",
    "        predict -- the classification result\n",
    "    Difference from the Chapter 6 code:\n",
    "        batch normalization layers norm1 and norm2 are added after the two conv-pool units\n",
    "    \"\"\"\n",
    "    # first conv-pool unit\n",
    "    conv_pool_1 = paddle.networks.simple_img_conv_pool(\n",
    "        input=img,\n",
    "        filter_size=5,\n",
    "        num_filters=20,\n",
    "        num_channel=1,\n",
    "        pool_size=2,\n",
    "        pool_stride=2,\n",
    "        act=paddle.activation.Relu())\n",
    "\n",
    "    norm1 = paddle.layer.batch_norm(input=conv_pool_1, act=paddle.activation.Relu())\n",
    "    \n",
    "    # second conv-pool unit (takes the batch-normalized output norm1 as input)\n",
    "    conv_pool_2 = paddle.networks.simple_img_conv_pool(\n",
    "        input=norm1,\n",
    "        filter_size=5,\n",
    "        num_filters=50,\n",
    "        num_channel=20,\n",
    "        pool_size=2,\n",
    "        pool_stride=2,\n",
    "        act=paddle.activation.Relu())\n",
    "\n",
    "    norm2 = paddle.layer.batch_norm(input=conv_pool_2, act=paddle.activation.Relu())\n",
    "        \n",
    "    # fully connected layer\n",
    "    predict = paddle.layer.fc(\n",
    "        input=norm2, size=10, act=paddle.activation.Softmax())\n",
    "    return predict"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3 - Configure the Network\n",
    "\n",
    "Call one of the classifiers (three are provided; try training with each of them) to obtain the predictions. During training, a loss is computed on these predictions; for classification problems the cross-entropy loss is the usual choice.\n",
    "\n",
    "Then specify the training-related settings:\n",
    "- Optimizer (optimizer): the weights are updated with the `Momentum` optimizer, whose momentum parameter gives the fraction of the previous velocity kept at each step (0.95 in the code below).\n",
    "- Learning rate (learning_rate): the step size of each iteration, which affects how quickly the network converges.\n",
    "- Regularization (regularization): a way to keep the network from overfitting; here we use L2 regularization."
   ]
  },
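  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The momentum update rule itself can be sketched in a few lines of NumPy (an illustration of the rule, not PaddlePaddle's optimizer; the names momentum_step and velocity are ours): the velocity retains a fraction momentum of its previous value, accumulates the negative gradient scaled by the learning rate, and the weights then move along that velocity."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def momentum_step(w, grad, velocity, learning_rate=0.1 / 128.0, momentum=0.95):\n",
    "    # keep a fraction `momentum` of the previous velocity, then take a gradient step\n",
    "    velocity = momentum * velocity - learning_rate * grad\n",
    "    # move the weights along the accumulated velocity\n",
    "    return w + velocity, velocity"
   ]
  },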
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def netconfig():\n",
    "    \"\"\"\n",
    "    Configure the network structure\n",
    "    Args:\n",
    "    Return:\n",
    "        images -- input layer\n",
    "        label -- label data\n",
    "        predict -- output layer\n",
    "        cost -- loss function\n",
    "        parameters -- model parameters\n",
    "        optimizer -- optimizer\n",
    "    \"\"\"\n",
    "    \n",
    "    \"\"\"\n",
    "    Input layer:\n",
    "        paddle.layer.data defines a data layer,\n",
    "        name='pixel': named pixel, corresponding to the input image features\n",
    "        type=paddle.data_type.dense_vector(784): a 784-dimensional dense vector (the input image is 28*28)\n",
    "    \"\"\"\n",
    "    images = paddle.layer.data(\n",
    "        name='pixel', type=paddle.data_type.dense_vector(784))\n",
    "        \n",
    "    \"\"\"\n",
    "    Label layer:\n",
    "        paddle.layer.data defines a data layer,\n",
    "        name='label': named label, corresponding to the class label of the input image\n",
    "        type=paddle.data_type.integer_value(10): an integer class label taking one of 10 values (the digits 0-9)\n",
    "    \"\"\"\n",
    "    label = paddle.layer.data(\n",
    "        name='label', type=paddle.data_type.integer_value(10))\n",
    "    \n",
    "    # use the plain CNN\n",
    "    predict = convolutional_neural_network(images)\n",
    "    \n",
    "    # use the dropout-optimized CNN\n",
    "#     predict = convolutional_neural_network_with_dropout(images)\n",
    "    \n",
    "    # use the batch_norm-optimized CNN\n",
    "#     predict = convolutional_neural_network_with_batch_norm(images)\n",
    "\n",
    "    # define the cost function; paddle.layer.classification_cost() uses the cross-entropy loss internally\n",
    "    cost = paddle.layer.classification_cost(input=predict, label=label)\n",
    "\n",
    "    # create the parameters from cost\n",
    "    parameters = paddle.parameters.create(cost)\n",
    "      \n",
    "    # create the optimizer; two common choices are listed below, pick either one\n",
    "    # Momentum optimizer with a learning rate (learning_rate), momentum (momentum) and a regularization term (regularization)\n",
    "    \"\"\"\n",
    "    Difference from the Chapter 6 code:\n",
    "        the values of learning_rate and momentum differ.\n",
    "            On one hand, changing a single value while keeping everything else fixed lets you compare against the Chapter 6 results to see that parameter's effect;\n",
    "            on the other hand, setting learning_rate=0.1 / 128.0 and momentum=0.95 deliberately weakens the baseline relative to Chapter 6 (slower or poorer convergence),\n",
    "                so that adding a new module or setting (such as dropout) visibly improves the model, confirming its usefulness.\n",
    "    \"\"\"\n",
    "    optimizer = paddle.optimizer.Momentum(\n",
    "        learning_rate=0.1 / 128.0,\n",
    "        momentum=0.95,\n",
    "        regularization=paddle.optimizer.L2Regularization(rate=0.0005 * 128))\n",
    "    \n",
    "    # Adam optimizer with parameters beta1, beta2 and epsilon\n",
    "    # optimizer = paddle.optimizer.Adam(beta1=0.9, beta2=0.99, epsilon=1e-06)\n",
    "    \n",
    "    config_data = [images, label, predict, cost, parameters, optimizer]\n",
    "    \n",
    "    return config_data\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4 - Train the Model\n",
    "\n",
    "Now we train the model. We first define the helper functions plot_init(), load_image() and infer(), which draw the learning curves, load an image, and run prediction respectively."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def plot_init():\n",
    "    \"\"\"\n",
    "    Plot initialization:\n",
    "        initialize the plotting variables\n",
    "    Args:\n",
    "    Return:\n",
    "        cost_ploter -- the variable used to draw the cost curve\n",
    "        error_ploter -- the variable used to draw the error_rate curve\n",
    "    \"\"\"\n",
    "    # initialization for the cost curve\n",
    "    cost_ploter = Ploter(train_title_cost, test_title_cost)\n",
    "    \n",
    "    # initialization for the error_rate curve\n",
    "    error_ploter = Ploter(train_title_error, test_title_error)\n",
    "    \n",
    "    ploter = [cost_ploter, error_ploter]\n",
    "    \n",
    "    return ploter\n",
    "\n",
    "    \n",
    "def load_image(file):\n",
    "    \"\"\"\n",
    "    Read an input image:\n",
    "        read the image at the given path and convert it into the form the classifier expects (dimensions, scaling, etc.)\n",
    "    Args:\n",
    "        file -- path of the input image\n",
    "    Return:\n",
    "        im -- the image data in the form the classifier expects\n",
    "    \"\"\"\n",
    "    im = Image.open(file).convert('L')\n",
    "    im = im.resize((28, 28), Image.ANTIALIAS)\n",
    "    im = np.array(im).astype(np.float32).flatten()\n",
    "    im = im / 255.0\n",
    "    return im\n",
    "\n",
    "\n",
    "def infer(predict, parameters, file):\n",
    "    \"\"\"\n",
    "    Predict the class of an input image:\n",
    "        read and preprocess the image at the given path, then classify it with the trained model\n",
    "    Args:\n",
    "        predict -- output layer\n",
    "        parameters -- model parameters\n",
    "        file -- path of the input image\n",
    "    Return:\n",
    "    \"\"\"\n",
    "    # read and preprocess the image to classify; resolve the path relative to the working directory\n",
    "    test_data = []\n",
    "    im_path = os.path.join(os.getcwd(), file.lstrip('/'))\n",
    "    test_data.append((load_image(im_path),))\n",
    "    \n",
    "    # classify the image with the trained model\n",
    "    probs = paddle.infer(\n",
    "        output_layer=predict, parameters=parameters, input=test_data)\n",
    "    lab = np.argsort(-probs)\n",
    "    print \"Label of %s is: %d\" % (file, lab[0][0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Start training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[INFO 2017-12-27 04:31:42,058 layers.py:2707] output for __conv_pool_0___conv: c = 20, h = 24, w = 24, size = 11520\n",
      "[INFO 2017-12-27 04:31:42,062 layers.py:2849] output for __conv_pool_0___pool: c = 20, h = 12, w = 12, size = 2880\n",
      "[INFO 2017-12-27 04:31:42,068 layers.py:2707] output for __conv_pool_1___conv: c = 50, h = 8, w = 8, size = 3200\n",
      "[INFO 2017-12-27 04:31:42,075 layers.py:2849] output for __conv_pool_1___pool: c = 50, h = 4, w = 4, size = 800\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Pass 0, Batch 0, Cost 3.024604, {'classification_error_evaluator': 0.9140625}\n",
      "Pass 0, Batch 100, Cost 2.308956, {'classification_error_evaluator': 0.9453125}\n",
      "Pass 0, Batch 200, Cost 2.307811, {'classification_error_evaluator': 0.9140625}\n",
      "Pass 0, Batch 300, Cost 2.302251, {'classification_error_evaluator': 0.9140625}\n",
      "Pass 0, Batch 400, Cost 2.308545, {'classification_error_evaluator': 0.90625}\n",
      "Test with Pass 0, Cost 2.302830, {'classification_error_evaluator': 0.8865000009536743}\n",
      "\n",
      "Pass 1, Batch 0, Cost 2.304678, {'classification_error_evaluator': 0.8828125}\n",
      "Pass 1, Batch 100, Cost 2.300503, {'classification_error_evaluator': 0.8828125}\n",
      "Pass 1, Batch 200, Cost 2.305115, {'classification_error_evaluator': 0.921875}\n",
      "Pass 1, Batch 300, Cost 2.296468, {'classification_error_evaluator': 0.890625}\n",
      "Pass 1, Batch 400, Cost 2.307123, {'classification_error_evaluator': 0.9140625}\n",
      "Test with Pass 1, Cost 2.302628, {'classification_error_evaluator': 0.8865000009536743}\n",
      "\n",
      "Pass 2, Batch 0, Cost 2.306701, {'classification_error_evaluator': 0.8671875}\n",
      "Pass 2, Batch 100, Cost 2.308698, {'classification_error_evaluator': 0.9375}\n",
      "Pass 2, Batch 200, Cost 2.306354, {'classification_error_evaluator': 0.9375}\n",
      "Pass 2, Batch 300, Cost 2.288088, {'classification_error_evaluator': 0.8515625}\n",
      "Pass 2, Batch 400, Cost 2.308207, {'classification_error_evaluator': 0.9296875}\n",
      "Test with Pass 2, Cost 2.304044, {'classification_error_evaluator': 0.8971999883651733}\n",
      "\n",
      "Pass 3, Batch 0, Cost 2.285878, {'classification_error_evaluator': 0.890625}\n",
      "Pass 3, Batch 100, Cost 2.311186, {'classification_error_evaluator': 0.8828125}\n",
      "Pass 3, Batch 200, Cost 2.289637, {'classification_error_evaluator': 0.8515625}\n",
      "Pass 3, Batch 300, Cost 2.299899, {'classification_error_evaluator': 0.9296875}\n",
      "Pass 3, Batch 400, Cost 2.318275, {'classification_error_evaluator': 0.9140625}\n",
      "Test with Pass 3, Cost 2.303858, {'classification_error_evaluator': 0.8967999815940857}\n",
      "\n",
      "Pass 4, Batch 0, Cost 2.310157, {'classification_error_evaluator': 0.90625}\n",
      "Pass 4, Batch 100, Cost 2.316852, {'classification_error_evaluator': 0.9296875}\n",
      "Pass 4, Batch 200, Cost 2.307673, {'classification_error_evaluator': 0.8984375}\n",
      "Pass 4, Batch 300, Cost 2.307539, {'classification_error_evaluator': 0.890625}\n",
      "Pass 4, Batch 400, Cost 2.303037, {'classification_error_evaluator': 0.8984375}\n",
      "Test with Pass 4, Cost 2.303666, {'classification_error_evaluator': 0.8989999890327454}\n",
      "\n",
      "Pass 5, Batch 0, Cost 2.293892, {'classification_error_evaluator': 0.90625}\n",
      "Pass 5, Batch 100, Cost 2.296618, {'classification_error_evaluator': 0.890625}\n",
      "Pass 5, Batch 200, Cost 2.306005, {'classification_error_evaluator': 0.9375}\n",
      "Pass 5, Batch 300, Cost 2.314471, {'classification_error_evaluator': 0.90625}\n",
      "Pass 5, Batch 400, Cost 2.309715, {'classification_error_evaluator': 0.9140625}\n",
      "Test with Pass 5, Cost 2.303693, {'classification_error_evaluator': 0.8989999890327454}\n",
      "\n",
      "Pass 6, Batch 0, Cost 2.320574, {'classification_error_evaluator': 0.890625}\n",
      "Pass 6, Batch 100, Cost 2.292876, {'classification_error_evaluator': 0.8671875}\n",
      "Pass 6, Batch 200, Cost 2.309208, {'classification_error_evaluator': 0.90625}\n",
      "Pass 6, Batch 300, Cost 2.310432, {'classification_error_evaluator': 0.8671875}\n",
      "Pass 6, Batch 400, Cost 2.314997, {'classification_error_evaluator': 0.8828125}\n",
      "Test with Pass 6, Cost 2.305989, {'classification_error_evaluator': 0.8989999890327454}\n",
      "\n",
      "Pass 7, Batch 0, Cost 2.301244, {'classification_error_evaluator': 0.8828125}\n",
      "Pass 7, Batch 100, Cost 2.303125, {'classification_error_evaluator': 0.890625}\n",
      "Pass 7, Batch 200, Cost 2.315551, {'classification_error_evaluator': 0.890625}\n",
      "Pass 7, Batch 300, Cost 2.302401, {'classification_error_evaluator': 0.875}\n",
      "Pass 7, Batch 400, Cost 2.311713, {'classification_error_evaluator': 0.8984375}\n",
      "Test with Pass 7, Cost 2.302540, {'classification_error_evaluator': 0.8967999815940857}\n",
      "\n",
      "Pass 8, Batch 0, Cost 2.300676, {'classification_error_evaluator': 0.8828125}\n",
      "Pass 8, Batch 100, Cost 2.295153, {'classification_error_evaluator': 0.875}\n",
      "Pass 8, Batch 200, Cost 2.317982, {'classification_error_evaluator': 0.921875}\n",
      "Pass 8, Batch 300, Cost 2.301410, {'classification_error_evaluator': 0.9375}\n",
      "Pass 8, Batch 400, Cost 2.302398, {'classification_error_evaluator': 0.8984375}\n",
      "Test with Pass 8, Cost 2.303483, {'classification_error_evaluator': 0.8865000009536743}\n",
      "\n",
      "Pass 9, Batch 0, Cost 2.309558, {'classification_error_evaluator': 0.8828125}\n",
      "Pass 9, Batch 100, Cost 2.308438, {'classification_error_evaluator': 0.8828125}\n",
      "Pass 9, Batch 200, Cost 2.305791, {'classification_error_evaluator': 0.8984375}\n",
      "Pass 9, Batch 300, Cost 2.295423, {'classification_error_evaluator': 0.8359375}\n",
      "Pass 9, Batch 400, Cost 2.320831, {'classification_error_evaluator': 0.9453125}\n",
      "Test with Pass 9, Cost 2.304180, {'classification_error_evaluator': 0.8989999890327454}\n",
      "\n",
      "Best pass is 7, testing Avgcost is 2.30253956718\n",
      "The classification accuracy is 10.32%\n"
     ]
    }
   ],
   "source": [
    "# initialize PaddlePaddle: set whether to use the GPU and the number of trainers\n",
    "paddle.init(use_gpu=with_gpu, trainer_count=1)\n",
    "    \n",
    "# define the network structure\n",
    "images, label, predict, cost, parameters, optimizer = netconfig()\n",
    "\n",
    "# build the trainer with three arguments: cost, parameters and update_equation (the cost function, the parameters and the update rule)\n",
    "trainer = paddle.trainer.SGD(\n",
    "    cost=cost, parameters=parameters, update_equation=optimizer)\n",
    "    \n",
    "# initialize the plotting variables\n",
    "cost_ploter, error_ploter = plot_init()\n",
    "    \n",
    "# lists stores intermediate training results (cost and error_rate information); start empty\n",
    "lists = []\n",
    "\n",
    "def event_handler_plot(event):\n",
    "    \"\"\"\n",
    "    Define the event_handler_plot event handler:\n",
    "        reacts to information from the training process, updating the plots and printing progress\n",
    "    Args:\n",
    "        event -- the event object, carrying event.pass_id, event.batch_id, event.cost, etc.\n",
    "    Return:\n",
    "    \"\"\"\n",
    "    global step\n",
    "    if isinstance(event, paddle.event.EndIteration):\n",
    "        # add a plot point every 100 training steps (i.e. every 100 batches)\n",
    "        if step % 100 == 0:\n",
    "            cost_ploter.append(train_title_cost, step, event.cost)\n",
    "            # draw the cost curve and save it as 'train_test_cost.png'\n",
    "            cost_ploter.plot('./train_test_cost')\n",
    "            error_ploter.append(\n",
    "                train_title_error, step, event.metrics['classification_error_evaluator'])\n",
    "            # draw the error_rate curve and save it as 'train_test_error_rate.png'\n",
    "            error_ploter.plot('./train_test_error_rate')\n",
    "        step += 1\n",
    "        # print the training progress every 100 batches\n",
    "        if event.batch_id % 100 == 0:\n",
    "            print \"Pass %d, Batch %d, Cost %f, %s\" % (\n",
    "                event.pass_id, event.batch_id, event.cost, event.metrics)\n",
    "    if isinstance(event, paddle.event.EndPass):\n",
    "        # save the parameters to a file\n",
    "        with open('params_pass_%d.tar' % event.pass_id, 'w') as f:\n",
    "            trainer.save_parameter_to_tar(f)\n",
    "        # evaluate on the test data\n",
    "        result = trainer.test(reader=paddle.batch(\n",
    "            paddle.dataset.mnist.test(), batch_size=128))\n",
    "        print \"Test with Pass %d, Cost %f, %s\\n\" % (\n",
    "            event.pass_id, result.cost, result.metrics)\n",
    "        # add the test cost and error_rate to the plot data\n",
    "        cost_ploter.append(test_title_cost, step, result.cost)\n",
    "        error_ploter.append(\n",
    "            test_title_error, step, result.metrics['classification_error_evaluator'])\n",
    "        # record the test cost and error_rate\n",
    "        lists.append((\n",
    "            event.pass_id, result.cost, result.metrics['classification_error_evaluator']))\n",
    "                \n",
    "trainer.train(\n",
    "    reader=paddle.batch(\n",
    "        paddle.reader.shuffle(paddle.dataset.mnist.train(), buf_size=8192),\n",
    "        batch_size=128),\n",
    "    event_handler=event_handler_plot,\n",
    "    num_passes=10)\n",
    "\n",
    "# among all passes, find the parameters that perform best on the test data and report them\n",
    "best = sorted(lists, key=lambda list: float(list[1]))[0]\n",
    "print 'Best pass is %s, testing Avgcost is %s' % (best[0], best[1])\n",
    "print 'The classification accuracy is %.2f%%' % (100 - float(best[2]) * 100)\n",
    "    \n",
    "# predict the class of the input image\n",
    "infer(predict, parameters, 'image/infer_3.png')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 总结"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Try training the model with each of the three CNNs; their results look like this:\n",
    "\n",
    "**The plain CNN**\n",
    "\n",
    "<img src=\"image/default.png\" style=\"width:500px;height:50px\">\n",
    "\n",
    "**The dropout-optimized CNN**\n",
    "\n",
    "<img src=\"image/dropout.png\" style=\"width:500px;height:50px\">\n",
    "\n",
    "**The Batch-normalization-optimized CNN**\n",
    "\n",
    "<img src=\"image/norm.png\" style=\"width:500px;height:50px\">\n",
    "\n",
    "Do not be surprised that the plain CNN performs very poorly: we adjusted momentum and learning_rate precisely to lower this baseline's learning efficiency, so that the improvements brought by Dropout and Batch normalization, better training results and less overfitting, stand out clearly. Conversely, the fact that changing these two hyperparameters made the model worse also shows, indirectly, that tuning them can make the model better."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "python2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
