{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "FLAGS = None"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Load the Data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Read in the data using the MNIST input helper that ships with TensorFlow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting C:/Users/H/Install/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist\\train-images-idx3-ubyte.gz\n",
      "Extracting C:/Users/H/Install/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist\\train-labels-idx1-ubyte.gz\n",
      "Extracting C:/Users/H/Install/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist\\t10k-images-idx3-ubyte.gz\n",
      "Extracting C:/Users/H/Install/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist\\t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
    "data_dir = 'C:/Users/H/Install/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Build the Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1 Define the Network Structure and Initialization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When building the network structure, the main design choices are:\n",
    "\n",
    "- Convolutional layers: digits are relatively simple and probably have few distinctive features, so only two convolutional layers are used, with 32 and 64 kernels respectively. Since very fine details are unlikely to matter for digit recognition, the first layer uses 5×5 kernels directly; the second layer's output features should correspond to a receptive field on the original image large enough to capture the overall shape of a digit. Intuitively, both 5×5 and 7×7 kernels satisfy this requirement, and experiments showed that 7×7 kernels work slightly better.\n",
    "- Pooling layers: after each convolution, max pooling is applied over non-overlapping 2×2 regions to improve robustness to image variations and reduce the data size.\n",
    "- Weight initialization: weights are drawn from a Gaussian distribution with mean 0 and standard deviation 0.1, truncated at two standard deviations to avoid overly large values; biases are initialized to 0.\n",
    "- Activation function: ReLU.\n",
    "- Dropout: a dropout layer is added before the output layer; during training, each neuron's output is kept with a fixed probability, set to 0.5 as a rule of thumb."
   ]
  },
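  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of the truncated initialization described above, the sketch below resamples any value falling outside two standard deviations (a plain NumPy rewrite added for illustration, not TensorFlow's own `truncated_normal_initializer`; the helper name is made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def draw_truncated_normal(shape, stddev=0.1, rng=None):\n",
    "    # Sample from N(0, stddev^2), resampling anything beyond 2*stddev\n",
    "    rng = np.random.default_rng() if rng is None else rng\n",
    "    w = rng.normal(0.0, stddev, size=shape)\n",
    "    mask = np.abs(w) > 2 * stddev\n",
    "    while mask.any():\n",
    "        w[mask] = rng.normal(0.0, stddev, size=mask.sum())\n",
    "        mask = np.abs(w) > 2 * stddev\n",
    "    return w\n",
    "\n",
    "w0 = draw_truncated_normal((5, 5, 1, 32))\n",
    "print(w0.shape, float(np.abs(w0).max()) <= 0.2)   # all values within 2 stddev"
   ]
  },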
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Create the model\n",
    "# Input layer\n",
    "x = tf.placeholder(tf.float32, [None, 784])\n",
    "x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "# Convolutional layer 1 (32 kernels of size 5×5)\n",
    "h_conv1 = tf.contrib.slim.conv2d(x_image, 32, [5,5],\n",
    "                                 padding='SAME',\n",
    "                                 activation_fn=tf.nn.relu,\n",
    "                                 weights_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                                 biases_initializer=tf.zeros_initializer(),\n",
    "                                 variables_collections=[tf.GraphKeys.GLOBAL_VARIABLES,'WEIGHTS'])\n",
    "\n",
    "# Pooling layer 1 (non-overlapping 2×2 regions)\n",
    "h_pool1 = tf.contrib.slim.max_pool2d(h_conv1, [2,2],\n",
    "                                     stride=2,\n",
    "                                     padding='VALID')\n",
    "\n",
    "# Convolutional layer 2 (64 kernels of size 7×7)\n",
    "h_conv2 = tf.contrib.slim.conv2d(h_pool1, 64, [7,7],\n",
    "                                 padding='SAME',\n",
    "                                 activation_fn=tf.nn.relu,\n",
    "                                 weights_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                                 biases_initializer=tf.zeros_initializer(),\n",
    "                                 variables_collections=[tf.GraphKeys.GLOBAL_VARIABLES,'WEIGHTS'])\n",
    "\n",
    "# Pooling layer 2 (non-overlapping 2×2 regions)\n",
    "h_pool2 = tf.contrib.slim.max_pool2d(h_conv2, [2,2],\n",
    "                                     stride=[2, 2],\n",
    "                                     padding='VALID')\n",
    "\n",
    "# Fully connected layer (global average pooling followed by a 1×1 convolution)\n",
    "h_pool2_flat = tf.contrib.slim.avg_pool2d(h_pool2, h_pool2.shape[1:3],\n",
    "                        stride=[1, 1], padding='VALID')\n",
    "h_fc1 = tf.contrib.slim.conv2d(h_pool2_flat, 1024, [1,1], activation_fn=tf.nn.relu,\n",
    "                               weights_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                               biases_initializer=tf.zeros_initializer(),\n",
    "                               variables_collections=[tf.GraphKeys.GLOBAL_VARIABLES,'WEIGHTS'])\n",
    "\n",
    "# Dropout\n",
    "keep_prob = tf.placeholder(tf.float32)\n",
    "h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "prob = 0.5        # retention probability fed to keep_prob during training\n",
    "\n",
    "# Output layer (note: no activation function; raw logits)\n",
    "y = tf.squeeze(tf.contrib.slim.conv2d(h_fc1_drop, 10, [1,1], activation_fn=None,\n",
    "                                      weights_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                                      biases_initializer=tf.zeros_initializer(),\n",
    "                                      variables_collections=[tf.GraphKeys.GLOBAL_VARIABLES,'WEIGHTS']))"
   ]
  },
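  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the architecture, the small calculation below traces the spatial size of the feature maps: SAME-padded convolutions keep the size, each VALID 2×2/stride-2 max pooling halves it, and the global average pooling collapses it to 1×1 (plain Python arithmetic, added for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "size = 28         # input image: 28×28×1\n",
    "size = size       # conv1, padding='SAME': 28×28×32\n",
    "size = size // 2  # pool1, 2×2 stride 2:   14×14×32\n",
    "size = size       # conv2, padding='SAME': 14×14×64\n",
    "size = size // 2  # pool2, 2×2 stride 2:   7×7×64\n",
    "print(size)       # 7; global average pooling then reduces 7×7 to 1×1"
   ]
  },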
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define a placeholder for the ground-truth labels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "y_ = tf.placeholder(tf.float32, [None, 10])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 Define the Loss Function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The loss function consists of a cross-entropy term plus an L2 regularization term. Based on experiments, the regularization coefficient is set to 5e-10."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Define loss and optimizer\n",
    "\n",
    "# The raw formulation of cross-entropy,\n",
    "#\n",
    "#   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),\n",
    "#                                 reduction_indices=[1]))\n",
    "#\n",
    "# can be numerically unstable.\n",
    "#\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "\n",
    "# Set the regularization coefficient\n",
    "Lambda2 = tf.constant(5e-10)\n",
    "# Lambda1 = tf.constant(0.000001)\n",
    "\n",
    "# Compute the loss (cross-entropy + regularization term)\n",
    "loss = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=y) + Lambda2 * (\n",
    "        tf.add_n([tf.nn.l2_loss(w) for w in tf.get_collection('WEIGHTS')])))\n",
    "\n",
    "\n",
    "# Explicit L2 penalty: Lambda2/2 * tf.add_n([tf.reduce_sum(tf.square(w)) for w in tf.get_collection('WEIGHTS')])\n",
    "# Explicit L1 penalty: Lambda1/2 * tf.add_n([tf.reduce_sum(tf.abs(w)) for w in tf.get_collection('WEIGHTS')])"
   ]
  },
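  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The numerical-instability warning in the comments above can be reproduced with plain NumPy: with extreme logits, the naive log(softmax(z)) produces nan, while the log-sum-exp form stays finite (an illustrative sketch, not TensorFlow's actual implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "z = np.array([1000.0, 0.0, -1000.0])   # extreme logits\n",
    "labels = np.array([1.0, 0.0, 0.0])\n",
    "\n",
    "# Naive: exp(1000) overflows, so softmax and its log break down\n",
    "with np.errstate(over='ignore', invalid='ignore'):\n",
    "    p = np.exp(z) / np.sum(np.exp(z))\n",
    "    naive = -np.sum(labels * np.log(p))\n",
    "\n",
    "# Stable: log-softmax via the log-sum-exp trick\n",
    "m = z.max()\n",
    "log_softmax = z - (m + np.log(np.sum(np.exp(z - m))))\n",
    "stable = np.sum(labels * -log_softmax)\n",
    "\n",
    "print(naive, stable)   # the naive version is nan; the stable one is 0.0"
   ]
  },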
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Train the Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a training step. The learning rate is gradually decayed: it starts from a relatively large initial value of 0.3 and is updated once per epoch according to η_t = η₀/sqrt(epoch+1)."
   ]
  },
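  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With η₀ = 0.3, the decay rule gives the following per-epoch learning rates (plain Python, mirroring the expression passed to the optimizer):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "eta0 = 0.3\n",
    "for epoch in range(10):\n",
    "    # epoch 0 -> 0.3, epoch 1 -> 0.2121, ..., epoch 9 -> 0.0949\n",
    "    print(epoch, round(eta0 / math.sqrt(epoch + 1), 4))"
   ]
  },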
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Create the training step\n",
    "epoch = tf.placeholder(tf.float32)                \n",
    "train_step = tf.train.GradientDescentOptimizer(0.3/tf.sqrt(epoch+1)).minimize(loss)     # learning rate decays once per epoch\n",
    "\n",
    "# Create the session\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the evaluation metric for the model (here, prediction accuracy)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))"
   ]
  },
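  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The accuracy computation above can be mirrored in plain NumPy, comparing the argmax of the logits against the argmax of the one-hot labels (a toy two-sample example for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "logits = np.array([[2.0, 0.1, -1.0], [0.3, 0.9, 0.2]])\n",
    "one_hot = np.array([[1, 0, 0], [0, 0, 1]])\n",
    "correct = np.argmax(logits, 1) == np.argmax(one_hot, 1)\n",
    "print(correct.mean())   # 0.5: first sample correct, second not"
   ]
  },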
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Use the provided reader to fetch batches of 50 samples each, then run 12k steps (10 epochs) of weight updates."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 600, train loss: 0.192467, train accuracy: 0.960000\n",
      "step 1200, train loss: 0.106852, train accuracy: 0.980000\n",
      "step 1800, train loss: 0.023503, train accuracy: 0.980000\n",
      "step 2400, train loss: 0.122873, train accuracy: 0.940000\n",
      "step 3000, train loss: 0.167995, train accuracy: 0.980000\n",
      "step 3600, train loss: 0.079535, train accuracy: 0.980000\n",
      "step 4200, train loss: 0.021470, train accuracy: 1.000000\n",
      "step 4800, train loss: 0.019596, train accuracy: 1.000000\n",
      "step 5400, train loss: 0.037137, train accuracy: 0.980000\n",
      "step 6000, train loss: 0.024907, train accuracy: 1.000000\n",
      "step 6600, train loss: 0.105709, train accuracy: 0.980000\n",
      "step 7200, train loss: 0.006549, train accuracy: 1.000000\n",
      "step 7800, train loss: 0.024336, train accuracy: 0.980000\n",
      "step 8400, train loss: 0.134059, train accuracy: 0.960000\n",
      "step 9000, train loss: 0.021777, train accuracy: 1.000000\n",
      "step 9600, train loss: 0.010807, train accuracy: 1.000000\n",
      "step 10200, train loss: 0.024602, train accuracy: 1.000000\n",
      "step 10800, train loss: 0.011574, train accuracy: 1.000000\n",
      "step 11400, train loss: 0.046223, train accuracy: 0.980000\n",
      "step 12000, train loss: 0.006186, train accuracy: 1.000000\n"
     ]
    }
   ],
   "source": [
    "# Train\n",
    "for step in range(12000):\n",
    "    batch_xs, batch_ys = mnist.train.next_batch(50)\n",
    "    _, train_loss, train_accuracy = sess.run(\n",
    "        [train_step, loss, accuracy], feed_dict={x: batch_xs, y_: batch_ys, epoch: step//1200, keep_prob: prob})\n",
    "    if (step+1) % 600 == 0:\n",
    "        # Print the loss and accuracy on the current batch every 600 steps\n",
    "        print('step %d, train loss: %f, train accuracy: %f' %\n",
    "              (step+1, train_loss, train_accuracy))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Test the Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Evaluate the model's accuracy on the test data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test accuracy:  0.9924\n"
     ]
    }
   ],
   "source": [
    "# Test trained model\n",
    "print('test accuracy: ',sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Parameter summary:\n",
    "\n",
    "- Convolutional layers: two in total; the first has 32 kernels of size 5×5, the second has 64 kernels of size 7×7\n",
    "- Pooling layers: after each convolution, max pooling over non-overlapping 2×2 regions\n",
    "- Weight initialization: weights drawn from a truncated Gaussian with mean 0 and standard deviation 0.1; biases initialized to 0\n",
    "- Activation function: ReLU in the convolutional layers, softmax at the output\n",
    "- Dropout: retention probability of 0.5 for neuron outputs\n",
    "- Regularization: L2, with coefficient 5e-10\n",
    "- Learning rate: gradually decayed, starting from a relatively large 0.3 and updated once per epoch according to η_t = η₀/sqrt(epoch+1)"
   ]
  }
 ],
 "metadata": {
  "celltoolbar": "Raw Cell Format",
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
