{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Assignment Requirements"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Using TensorFlow, build and train a neural network that reaches over 98% accuracy on the test set.  \n",
    "Background knowledge required:  \n",
    "  deep neural networks  \n",
    "  activation functions  \n",
    "  regularization  \n",
    "  weight initialization  \n",
    "  convolution  \n",
    "  pooling  \n",
    "\n",
    "Hyperparameters to explore:  \n",
    "  convolution kernel size  \n",
    "  number of convolution kernels  \n",
    "  learning rate  \n",
    "  regularization factor  \n",
    "  parameters of the weight-initialization distribution"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
      "  from ._conv import register_converters as _register_converters\n"
     ]
    }
   ],
   "source": [
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting /tmp/tensorflow/mnist/input_data\\train-images-idx3-ubyte.gz\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\train-labels-idx1-ubyte.gz\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\t10k-images-idx3-ubyte.gz\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
    "data_dir = '/tmp/tensorflow/mnist/input_data'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)\n",
    "# Load the data with one-hot labels; MNIST is a 10-class problem"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define placeholders that will be fed with data later\n",
    "x = tf.placeholder(tf.float32, [None, 784])\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "learning_rate = tf.placeholder(tf.float32)    # learning rate"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building the convolutional network with tf.layers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "tf.layers.conv2d() takes many parameters, but usually only a few key ones need to be set: inputs, filters, kernel_size, padding, and activation. Leaving the rest at their defaults already trains reasonably well. Based on the previous assignment (a fully connected network), however, some parameters are worth setting explicitly, in particular the weight initialization: there, a poor starting point (e.g. all-zero weights) failed for networks with hidden layers, while a Gaussian initialization worked well. (Note that tf.layers does have a sensible default kernel initializer, Glorot uniform; an explicit truncated normal is used here for consistency with the earlier experiments.) The parameter settings are:   \n",
    "\n",
    "inputs: a [28,28,1] image, 28 pixels high and wide, with a single channel   \n",
    "filters: start with 32 kernels in the first layer   \n",
    "kernel_size: start with a [5,5] kernel    \n",
    "strides=(1, 1): the default stride of 1     \n",
    "padding='SAME': zero padding; the default is 'valid' (no padding), but 'SAME' is the more common choice because it keeps the convolution from changing the spatial size, leaving all downsampling to the pooling layers    \n",
    "activation=tf.nn.relu: ReLU as the activation; it performed best among the activations compared in the previous assignment and is the usual choice in convolutional networks  \n",
    "kernel_initializer=tf.truncated_normal_initializer(stddev=0.1): initialize the kernel weights from a truncated Gaussian  \n",
    "bias_initializer=tf.constant_initializer(0.1): biases default to zero initialization; here the constant 0.1 is used instead  \n",
    "\n",
    "All other parameters keep their default values for now."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The network is structured as follows:  \n",
    "28x28x1 --(conv 5x5, 32 kernels)--> 28x28x32 --(pool 2x2)--> 14x14x32 --(conv 5x5, 64 kernels)--> 14x14x64 --(pool 2x2)--> 7x7x64 (flattened to [3136]) --(FC)--> [1024] --(FC)--> [10]"
   ]
  },
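  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, the shape bookkeeping above can be verified with plain integer arithmetic (an illustrative sketch, independent of TensorFlow: 'SAME' convolutions preserve height/width, and each 2x2, stride-2 pooling halves them):\n",
    "\n",
    "```python\n",
    "def same_conv(hw):\n",
    "    # 'SAME' padding with stride 1 keeps the spatial size unchanged\n",
    "    return hw\n",
    "\n",
    "def pool2x2(hw):\n",
    "    # 2x2 max pooling with stride 2 halves each spatial dimension\n",
    "    return hw // 2\n",
    "\n",
    "hw = 28\n",
    "hw = pool2x2(same_conv(hw))  # conv1 + pool1: 14x14, depth 32\n",
    "hw = pool2x2(same_conv(hw))  # conv2 + pool2: 7x7, depth 64\n",
    "flat = hw * hw * 64          # size of the flattened vector fed to fc1\n",
    "print(hw, flat)              # 7 3136\n",
    "```"
   ]
  },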
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-4-4d8351cc2f40>:47: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "\n",
      "Future major versions of TensorFlow will allow gradients to flow\n",
      "into the labels input on backprop by default.\n",
      "\n",
      "See tf.nn.softmax_cross_entropy_with_logits_v2.\n",
      "\n",
      "step 100, entropy loss: 0.712071, l2_loss: 985.892944, total loss: 0.781084\n",
      "0.79\n",
      "step 200, entropy loss: 0.404489, l2_loss: 986.878662, total loss: 0.473571\n",
      "0.85\n",
      "step 300, entropy loss: 0.382436, l2_loss: 987.444214, total loss: 0.451557\n",
      "0.91\n",
      "step 400, entropy loss: 0.286373, l2_loss: 987.867310, total loss: 0.355524\n",
      "0.93\n",
      "step 500, entropy loss: 0.268985, l2_loss: 988.232178, total loss: 0.338161\n",
      "0.94\n",
      "step 600, entropy loss: 0.189156, l2_loss: 988.487671, total loss: 0.258350\n",
      "0.97\n",
      "step 700, entropy loss: 0.315328, l2_loss: 988.713989, total loss: 0.384538\n",
      "0.94\n",
      "step 800, entropy loss: 0.147726, l2_loss: 988.917847, total loss: 0.216950\n",
      "0.98\n",
      "step 900, entropy loss: 0.259858, l2_loss: 989.116638, total loss: 0.329096\n",
      "0.96\n",
      "step 1000, entropy loss: 0.135578, l2_loss: 989.268005, total loss: 0.204827\n",
      "0.95\n",
      "0.9601\n",
      "step 1100, entropy loss: 0.195829, l2_loss: 989.409363, total loss: 0.265087\n",
      "0.95\n",
      "step 1200, entropy loss: 0.207417, l2_loss: 989.558960, total loss: 0.276686\n",
      "0.94\n",
      "step 1300, entropy loss: 0.150434, l2_loss: 989.683594, total loss: 0.219711\n",
      "0.94\n",
      "step 1400, entropy loss: 0.205895, l2_loss: 989.800598, total loss: 0.275181\n",
      "0.94\n",
      "step 1500, entropy loss: 0.092533, l2_loss: 989.873352, total loss: 0.161824\n",
      "0.98\n",
      "step 1600, entropy loss: 0.108833, l2_loss: 989.977295, total loss: 0.178132\n",
      "0.97\n",
      "step 1700, entropy loss: 0.277553, l2_loss: 990.058960, total loss: 0.346857\n",
      "0.95\n",
      "step 1800, entropy loss: 0.165195, l2_loss: 990.130798, total loss: 0.234504\n",
      "0.98\n",
      "step 1900, entropy loss: 0.128126, l2_loss: 990.224243, total loss: 0.197441\n",
      "0.97\n",
      "step 2000, entropy loss: 0.092837, l2_loss: 990.274292, total loss: 0.162156\n",
      "0.96\n",
      "0.9739\n",
      "step 2100, entropy loss: 0.110917, l2_loss: 990.330017, total loss: 0.180240\n",
      "0.98\n",
      "step 2200, entropy loss: 0.110596, l2_loss: 990.392944, total loss: 0.179924\n",
      "0.96\n",
      "step 2300, entropy loss: 0.063500, l2_loss: 990.425781, total loss: 0.132830\n",
      "0.99\n",
      "step 2400, entropy loss: 0.066002, l2_loss: 990.469482, total loss: 0.135335\n",
      "1.0\n",
      "step 2500, entropy loss: 0.151820, l2_loss: 990.493408, total loss: 0.221154\n",
      "0.98\n",
      "step 2600, entropy loss: 0.100589, l2_loss: 990.532471, total loss: 0.169926\n",
      "0.98\n",
      "step 2700, entropy loss: 0.106739, l2_loss: 990.554321, total loss: 0.176077\n",
      "0.97\n",
      "step 2800, entropy loss: 0.059185, l2_loss: 990.606567, total loss: 0.128527\n",
      "0.99\n",
      "step 2900, entropy loss: 0.090291, l2_loss: 990.641602, total loss: 0.159636\n",
      "0.98\n",
      "step 3000, entropy loss: 0.246129, l2_loss: 990.648193, total loss: 0.315475\n",
      "0.96\n",
      "0.9787\n"
     ]
    }
   ],
   "source": [
    "# Reshape the flat image data back to 28x28x1: 28-pixel height and width, 1 channel\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "# First convolutional layer\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [5,5], padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Pooling layer - downsamples by 2X.\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Second convolutional layer -- maps 32 feature maps to 64.\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [5,5],padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Second pooling layer.\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n",
    "# is down to 7x7x64 feature maps -- maps this to 1024 features.\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "\n",
    "# Dropout - controls the complexity of the model, prevents co-adaptation of\n",
    "# features.\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "\n",
    "# Map the 1024 features to 10 classes, one for each digit\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Accuracy on the current training batch; dropout is disabled (keep_prob 1.0)\n",
    "    # for evaluation. (Ideally these ops would be built once, before the loop.)\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 1.0}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:1.0}))\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Summary:  \n",
    "1. During training, neurons are randomly dropped (dropout) at a keep probability of about 0.5 to prevent or reduce overfitting; at test time no units are dropped, so keep_prob stays at 1.0.  \n",
    "2. With the explicit weight initialization, accuracy early in training already reaches 0.79 -> 0.85 -> 0.91 -> 0.93, compared with 0.58 -> 0.81 -> 0.86 -> 0.87 in Teacher Zhi's run without weight initialization: the network converges much faster from the start.  \n",
    "3. The pooling layers downsample the spatial dimensions of the input; a 2x2 window with stride 2 is typical and discards 75% of the activations. A larger window is usually too aggressive: too much information is lost and performance degrades.  \n",
    "4. L2 regularization is used because it outperformed L1 when tuning the fully connected network in the previous assignment.  \n",
    "  \n",
    "The parameter values above come from experience and the reference code, with some tuning. The following cells reuse this code and vary individual parameters to see how each affects training.  \n",
    "\n",
    "Max accuracy on a training batch: 1.0  \n",
    "Max accuracy on the test set: 0.9787  "
   ]
  },
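  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Regarding point 1, the scaling behind keep_prob can be illustrated with a small NumPy sketch of inverted dropout (the scheme tf.nn.dropout uses: survivors are scaled by 1/keep_prob so the expected activation is the same at training and test time):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def inverted_dropout(x, keep_prob, rng):\n",
    "    # drop units with probability (1 - keep_prob), scale survivors by 1/keep_prob\n",
    "    mask = rng.rand(*x.shape) < keep_prob\n",
    "    return np.where(mask, x / keep_prob, 0.0)\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "x = np.ones((1, 10000))\n",
    "train_out = inverted_dropout(x, 0.5, rng)  # training: keep_prob = 0.5\n",
    "test_out = inverted_dropout(x, 1.0, rng)   # testing: keep_prob = 1.0\n",
    "\n",
    "print(abs(train_out.mean() - 1.0) < 0.05)  # expected value preserved\n",
    "print((test_out == x).all())               # nothing dropped at test time\n",
    "```"
   ]
  },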
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Increasing the regularization factor to 7e-3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.803969, l2_loss: 1941.744141, total loss: 14.396178\n",
      "0.83\n",
      "step 200, entropy loss: 0.512000, l2_loss: 1915.971802, total loss: 13.923803\n",
      "0.91\n",
      "step 300, entropy loss: 0.313385, l2_loss: 1890.097900, total loss: 13.544071\n",
      "0.93\n",
      "step 400, entropy loss: 0.189875, l2_loss: 1864.437988, total loss: 13.240941\n",
      "0.95\n",
      "step 500, entropy loss: 0.306793, l2_loss: 1839.048584, total loss: 13.180133\n",
      "0.89\n",
      "step 600, entropy loss: 0.242895, l2_loss: 1813.929932, total loss: 12.940405\n",
      "0.94\n",
      "step 700, entropy loss: 0.358895, l2_loss: 1789.142578, total loss: 12.882894\n",
      "0.88\n",
      "step 800, entropy loss: 0.328615, l2_loss: 1764.688965, total loss: 12.681438\n",
      "0.93\n",
      "step 900, entropy loss: 0.202620, l2_loss: 1740.521484, total loss: 12.386271\n",
      "0.95\n",
      "step 1000, entropy loss: 0.167415, l2_loss: 1716.698730, total loss: 12.184306\n",
      "0.96\n",
      "0.9583\n",
      "step 1100, entropy loss: 0.087045, l2_loss: 1693.206055, total loss: 11.939488\n",
      "0.99\n",
      "step 1200, entropy loss: 0.102258, l2_loss: 1670.014404, total loss: 11.792360\n",
      "0.99\n",
      "step 1300, entropy loss: 0.196855, l2_loss: 1647.106445, total loss: 11.726600\n",
      "0.95\n",
      "step 1400, entropy loss: 0.146668, l2_loss: 1624.514160, total loss: 11.518268\n",
      "0.97\n",
      "step 1500, entropy loss: 0.137354, l2_loss: 1602.266846, total loss: 11.353222\n",
      "0.96\n",
      "step 1600, entropy loss: 0.168581, l2_loss: 1580.323486, total loss: 11.230845\n",
      "0.97\n",
      "step 1700, entropy loss: 0.198130, l2_loss: 1558.651611, total loss: 11.108692\n",
      "0.97\n",
      "step 1800, entropy loss: 0.141961, l2_loss: 1537.291870, total loss: 10.903005\n",
      "0.99\n",
      "step 1900, entropy loss: 0.107050, l2_loss: 1516.205566, total loss: 10.720490\n",
      "0.98\n",
      "step 2000, entropy loss: 0.265429, l2_loss: 1495.420410, total loss: 10.733372\n",
      "0.93\n",
      "0.9714\n",
      "step 2100, entropy loss: 0.085667, l2_loss: 1474.919434, total loss: 10.410104\n",
      "0.98\n",
      "step 2200, entropy loss: 0.129435, l2_loss: 1454.706299, total loss: 10.312380\n",
      "0.96\n",
      "step 2300, entropy loss: 0.188147, l2_loss: 1434.769531, total loss: 10.231535\n",
      "0.95\n",
      "step 2400, entropy loss: 0.316815, l2_loss: 1415.088623, total loss: 10.222436\n",
      "0.92\n",
      "step 2500, entropy loss: 0.079127, l2_loss: 1395.690552, total loss: 9.848961\n",
      "0.99\n",
      "step 2600, entropy loss: 0.127514, l2_loss: 1376.565674, total loss: 9.763474\n",
      "0.95\n",
      "step 2700, entropy loss: 0.096453, l2_loss: 1357.694824, total loss: 9.600317\n",
      "0.98\n",
      "step 2800, entropy loss: 0.120154, l2_loss: 1339.058838, total loss: 9.493567\n",
      "0.98\n",
      "step 2900, entropy loss: 0.109715, l2_loss: 1320.707153, total loss: 9.354665\n",
      "0.98\n",
      "step 3000, entropy loss: 0.067166, l2_loss: 1302.613037, total loss: 9.185457\n",
      "0.97\n",
      "0.9771\n"
     ]
    }
   ],
   "source": [
    "# Reshape the flat image data back to 28x28x1: 28-pixel height and width, 1 channel\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "# First convolutional layer\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [5,5], padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Pooling layer - downsamples by 2X.\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Second convolutional layer -- maps 32 feature maps to 64.\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [5,5],padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Second pooling layer.\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n",
    "# is down to 7x7x64 feature maps -- maps this to 1024 features.\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "\n",
    "# Dropout - controls the complexity of the model, prevents co-adaptation of\n",
    "# features.\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "\n",
    "# Map the 1024 features to 10 classes, one for each digit\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-3*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Accuracy on the current training batch; dropout is disabled (keep_prob 1.0)\n",
    "    # for evaluation. (Ideally these ops would be built once, before the loop.)\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 1.0}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:1.0}))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A larger regularization factor suppresses the weights and model complexity more strongly, reducing overfitting. The results confirm this: the gap between training-batch accuracy and test accuracy shrinks. However, this factor appears too large, since overall training accuracy is pulled down as well.\n",
    "\n",
    "Max accuracy on a training batch: 0.99  \n",
    "Max accuracy on the test set: 0.9771  "
   ]
  },
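  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The step-100 values logged by the two runs make the imbalance concrete: with the factor at 7e-3 the penalty term dominates the total loss, whereas at 7e-5 it was a small correction:\n",
    "\n",
    "```python\n",
    "# (entropy loss, l2_loss) at step 100, taken from the logs above\n",
    "runs = {7e-5: (0.712071, 985.892944), 7e-3: (0.803969, 1941.744141)}\n",
    "for factor, (entropy, l2) in runs.items():\n",
    "    penalty = factor * l2\n",
    "    share = penalty / (entropy + penalty)  # penalty's share of the total loss\n",
    "    print(factor, round(penalty, 3), round(share, 3))\n",
    "```\n",
    "\n",
    "This prints a share of about 0.09 for 7e-5 but about 0.94 for 7e-3, so the optimizer spends most of its effort shrinking weights rather than fitting the data."
   ]
  },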
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Increasing the learning rate to 0.5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 2.301675, l2_loss: 401911094181888.000000, total loss: 28133777408.000000\n",
      "0.15\n",
      "step 200, entropy loss: 2.293039, l2_loss: 399107420061696.000000, total loss: 27937519616.000000\n",
      "0.15\n",
      "step 300, entropy loss: 2.298238, l2_loss: 396323475947520.000000, total loss: 27742644224.000000\n",
      "0.12\n",
      "step 400, entropy loss: 2.304266, l2_loss: 393559127621632.000000, total loss: 27549138944.000000\n",
      "0.12\n",
      "step 500, entropy loss: 2.313757, l2_loss: 390813569777664.000000, total loss: 27356950528.000000\n",
      "0.12\n",
      "step 600, entropy loss: 2.305824, l2_loss: 388087003742208.000000, total loss: 27166091264.000000\n",
      "0.11\n",
      "step 700, entropy loss: 2.302546, l2_loss: 385380134158336.000000, total loss: 26976610304.000000\n",
      "0.16\n",
      "step 800, entropy loss: 2.291070, l2_loss: 382691853729792.000000, total loss: 26788429824.000000\n",
      "0.17\n",
      "step 900, entropy loss: 2.303253, l2_loss: 380022430892032.000000, total loss: 26601570304.000000\n",
      "0.05\n",
      "step 1000, entropy loss: 2.291650, l2_loss: 377371597209600.000000, total loss: 26416013312.000000\n",
      "0.16\n",
      "0.1135\n",
      "step 1100, entropy loss: 2.298325, l2_loss: 374738983583744.000000, total loss: 26231730176.000000\n",
      "0.11\n",
      "step 1200, entropy loss: 2.303655, l2_loss: 372125026222080.000000, total loss: 26048751616.000000\n",
      "0.12\n",
      "step 1300, entropy loss: 2.305355, l2_loss: 369528919818240.000000, total loss: 25867024384.000000\n",
      "0.1\n",
      "step 1400, entropy loss: 2.301952, l2_loss: 366950966362112.000000, total loss: 25686568960.000000\n",
      "0.16\n",
      "step 1500, entropy loss: 2.302398, l2_loss: 364391467843584.000000, total loss: 25507403776.000000\n",
      "0.13\n",
      "step 1600, entropy loss: 2.302498, l2_loss: 361849417629696.000000, total loss: 25329459200.000000\n",
      "0.12\n",
      "step 1700, entropy loss: 2.299246, l2_loss: 359325352591360.000000, total loss: 25152776192.000000\n",
      "0.15\n",
      "step 1800, entropy loss: 2.296698, l2_loss: 356819037847552.000000, total loss: 24977334272.000000\n",
      "0.14\n",
      "step 1900, entropy loss: 2.301650, l2_loss: 354330003636224.000000, total loss: 24803100672.000000\n",
      "0.08\n",
      "step 2000, entropy loss: 2.303359, l2_loss: 351858082185216.000000, total loss: 24630067200.000000\n",
      "0.13\n",
      "0.1135\n",
      "step 2100, entropy loss: 2.304802, l2_loss: 349403609038848.000000, total loss: 24458252288.000000\n",
      "0.13\n",
      "step 2200, entropy loss: 2.299276, l2_loss: 346966382870528.000000, total loss: 24287647744.000000\n",
      "0.12\n",
      "step 2300, entropy loss: 2.289109, l2_loss: 344545900363776.000000, total loss: 24118214656.000000\n",
      "0.14\n",
      "step 2400, entropy loss: 2.292705, l2_loss: 342142731943936.000000, total loss: 23949991936.000000\n",
      "0.1\n",
      "step 2500, entropy loss: 2.310197, l2_loss: 339756105859072.000000, total loss: 23782928384.000000\n",
      "0.11\n",
      "step 2600, entropy loss: 2.300870, l2_loss: 337386089218048.000000, total loss: 23617026048.000000\n",
      "0.16\n",
      "step 2700, entropy loss: 2.300649, l2_loss: 335032715575296.000000, total loss: 23452291072.000000\n",
      "0.08\n",
      "step 2800, entropy loss: 2.298501, l2_loss: 332695481614336.000000, total loss: 23288684544.000000\n",
      "0.11\n",
      "step 2900, entropy loss: 2.305425, l2_loss: 330374957760512.000000, total loss: 23126247424.000000\n",
      "0.11\n",
      "step 3000, entropy loss: 2.308032, l2_loss: 328070170935296.000000, total loss: 22964912128.000000\n",
      "0.09\n",
      "0.1135\n"
     ]
    }
   ],
   "source": [
    "# Reshape the flat image data back to 28x28x1: 28-pixel height and width, 1 channel\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "# First convolutional layer\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [5,5], padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Pooling layer - downsamples by 2X.\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Second convolutional layer -- maps 32 feature maps to 64.\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [5,5],padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Second pooling layer.\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n",
    "# is down to 7x7x64 feature maps -- maps this to 1024 features.\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "\n",
    "# Dropout - controls the complexity of the model, prevents co-adaptation of\n",
    "# features.\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "\n",
    "# Map the 1024 features to 10 classes, one for each digit\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.5\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Accuracy on the current training batch; dropout is disabled (keep_prob 1.0)\n",
    "    # for evaluation. (Ideally these ops would be built once, before the loop.)\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 1.0}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:1.0}))\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Raising the learning rate from 0.01 to 0.5 makes the results disastrous: the model fails to converge at all, and accuracy is barely above the 0.1 expected from random guessing. The learning rate is far too large; the enormous l2_loss (around 4e14) also indicates that the weights blew up early in training.\n",
    "\n",
    "Max accuracy on a training batch: 0.17  \n",
    "Max accuracy on the test set: 0.1135  "
   ]
  },
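  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The divergence can be reproduced on a toy problem. For gradient descent on the quadratic f(w) = a*w^2, each update multiplies w by (1 - 2*a*lr), which diverges once lr exceeds 1/a; the jump from lr=0.01 to lr=0.5 crosses the analogous stability threshold for this network (an illustrative sketch, with a = 3.0 chosen arbitrarily):\n",
    "\n",
    "```python\n",
    "def final_weight(lr, a=3.0, steps=50, w=1.0):\n",
    "    # gradient descent on f(w) = a * w**2, whose gradient is 2*a*w\n",
    "    for _ in range(steps):\n",
    "        w -= lr * 2 * a * w\n",
    "    return abs(w)\n",
    "\n",
    "print(final_weight(0.01))  # small lr: |w| decays toward the minimum at 0\n",
    "print(final_weight(0.5))   # large lr: |w| explodes\n",
    "```"
   ]
  },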
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Learning rate 0.05, 5000 iterations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.311653, l2_loss: 9288.714844, total loss: 0.961863\n",
      "0.94\n",
      "step 200, entropy loss: 0.292706, l2_loss: 9284.415039, total loss: 0.942615\n",
      "0.95\n",
      "step 300, entropy loss: 0.234343, l2_loss: 9279.426758, total loss: 0.883903\n",
      "0.97\n",
      "step 400, entropy loss: 0.180753, l2_loss: 9274.128906, total loss: 0.829942\n",
      "0.97\n",
      "step 500, entropy loss: 0.065057, l2_loss: 9268.803711, total loss: 0.713874\n",
      "0.99\n",
      "step 600, entropy loss: 0.147728, l2_loss: 9263.299805, total loss: 0.796159\n",
      "0.96\n",
      "step 700, entropy loss: 0.105183, l2_loss: 9257.693359, total loss: 0.753222\n",
      "0.99\n",
      "step 800, entropy loss: 0.068860, l2_loss: 9252.121094, total loss: 0.716509\n",
      "0.97\n",
      "step 900, entropy loss: 0.077154, l2_loss: 9246.436523, total loss: 0.724405\n",
      "1.0\n",
      "step 1000, entropy loss: 0.031039, l2_loss: 9240.721680, total loss: 0.677889\n",
      "0.99\n",
      "0.9773\n",
      "step 1100, entropy loss: 0.031446, l2_loss: 9234.909180, total loss: 0.677890\n",
      "0.97\n",
      "step 1200, entropy loss: 0.100049, l2_loss: 9229.095703, total loss: 0.746086\n",
      "0.98\n",
      "step 1300, entropy loss: 0.081730, l2_loss: 9223.206055, total loss: 0.727355\n",
      "0.99\n",
      "step 1400, entropy loss: 0.112781, l2_loss: 9217.373047, total loss: 0.757997\n",
      "0.99\n",
      "step 1500, entropy loss: 0.077599, l2_loss: 9211.515625, total loss: 0.722405\n",
      "0.99\n",
      "step 1600, entropy loss: 0.090971, l2_loss: 9205.633789, total loss: 0.735365\n",
      "0.98\n",
      "step 1700, entropy loss: 0.197394, l2_loss: 9199.665039, total loss: 0.841370\n",
      "0.96\n",
      "step 1800, entropy loss: 0.043329, l2_loss: 9193.734375, total loss: 0.686891\n",
      "0.98\n",
      "step 1900, entropy loss: 0.046628, l2_loss: 9187.786133, total loss: 0.689773\n",
      "0.98\n",
      "step 2000, entropy loss: 0.099995, l2_loss: 9181.882812, total loss: 0.742726\n",
      "0.96\n",
      "0.9881\n",
      "step 2100, entropy loss: 0.064272, l2_loss: 9175.979492, total loss: 0.706591\n",
      "0.97\n",
      "step 2200, entropy loss: 0.025081, l2_loss: 9169.951172, total loss: 0.666978\n",
      "0.99\n",
      "step 2300, entropy loss: 0.020609, l2_loss: 9164.025391, total loss: 0.662091\n",
      "1.0\n",
      "step 2400, entropy loss: 0.039692, l2_loss: 9158.034180, total loss: 0.680754\n",
      "0.99\n",
      "step 2500, entropy loss: 0.159724, l2_loss: 9152.030273, total loss: 0.800366\n",
      "0.98\n",
      "step 2600, entropy loss: 0.026872, l2_loss: 9146.064453, total loss: 0.667097\n",
      "1.0\n",
      "step 2700, entropy loss: 0.095718, l2_loss: 9140.027344, total loss: 0.735520\n",
      "0.99\n",
      "step 2800, entropy loss: 0.037812, l2_loss: 9134.089844, total loss: 0.677198\n",
      "0.99\n",
      "step 2900, entropy loss: 0.010193, l2_loss: 9128.066406, total loss: 0.649158\n",
      "1.0\n",
      "step 3000, entropy loss: 0.018305, l2_loss: 9122.043945, total loss: 0.656848\n",
      "1.0\n",
      "0.989\n",
      "step 3100, entropy loss: 0.057842, l2_loss: 9116.077148, total loss: 0.695967\n",
      "0.99\n",
      "step 3200, entropy loss: 0.047354, l2_loss: 9110.063477, total loss: 0.685059\n",
      "0.99\n",
      "step 3300, entropy loss: 0.021238, l2_loss: 9104.053711, total loss: 0.658522\n",
      "0.99\n",
      "step 3400, entropy loss: 0.055733, l2_loss: 9098.067383, total loss: 0.692598\n",
      "0.99\n",
      "step 3500, entropy loss: 0.018454, l2_loss: 9092.091797, total loss: 0.654900\n",
      "1.0\n",
      "step 3600, entropy loss: 0.072085, l2_loss: 9086.043945, total loss: 0.708108\n",
      "0.99\n",
      "step 3700, entropy loss: 0.024172, l2_loss: 9080.053711, total loss: 0.659776\n",
      "1.0\n",
      "step 3800, entropy loss: 0.025740, l2_loss: 9074.071289, total loss: 0.660925\n",
      "0.98\n",
      "step 3900, entropy loss: 0.050307, l2_loss: 9068.070312, total loss: 0.685072\n",
      "0.99\n",
      "step 4000, entropy loss: 0.019482, l2_loss: 9062.072266, total loss: 0.653827\n",
      "1.0\n",
      "0.9905\n",
      "step 4100, entropy loss: 0.013722, l2_loss: 9056.037109, total loss: 0.647644\n",
      "1.0\n",
      "step 4200, entropy loss: 0.046292, l2_loss: 9050.019531, total loss: 0.679793\n",
      "1.0\n",
      "step 4300, entropy loss: 0.039978, l2_loss: 9044.085938, total loss: 0.673064\n",
      "0.99\n",
      "step 4400, entropy loss: 0.022664, l2_loss: 9038.099609, total loss: 0.655331\n",
      "1.0\n",
      "step 4500, entropy loss: 0.019320, l2_loss: 9032.102539, total loss: 0.651568\n",
      "1.0\n",
      "step 4600, entropy loss: 0.039888, l2_loss: 9026.107422, total loss: 0.671716\n",
      "1.0\n",
      "step 4700, entropy loss: 0.024054, l2_loss: 9020.083008, total loss: 0.655460\n",
      "0.98\n",
      "step 4800, entropy loss: 0.075797, l2_loss: 9014.113281, total loss: 0.706785\n",
      "0.99\n",
      "step 4900, entropy loss: 0.014166, l2_loss: 9008.111328, total loss: 0.644734\n",
      "0.99\n",
      "step 5000, entropy loss: 0.003778, l2_loss: 9002.122070, total loss: 0.633927\n",
      "1.0\n",
      "0.9915\n"
     ]
    }
   ],
   "source": [
    "# 将图像数据还原为28*28*1的格式，作为输入，高和宽为28像素，通道数为1\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "#定义第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [5,5], padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Pooling layer - downsamples by 2X.\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Second convolutional layer -- maps 32 feature maps to 64.\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [5,5],padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Second pooling layer.\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n",
    "# is down to 7x7x64 feature maps -- maps this to 1024 features.\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "\n",
    "# Dropout - controls the complexity of the model, prevents co-adaptation of\n",
    "# features.\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "\n",
    "# Map the 1024 features to 10 classes, one for each digit\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(5000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.05\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:1.0}))\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "学习率调整为0.05，在这几次试验中收敛最快的一组参数了，第一次个batch迭代就达到了0.94的准确率，说明这个学习率选择的相当不错，既不会太小（训练慢，过拟合），也不会太大（无法收敛），而且照这个趋势，继续训练迭代貌似可以进一步提升准确率\n",
    "\n",
    "测试集上最大正确率：1.0  \n",
    "训练集上最大正确率:0.9915  \n"
   ]
  },
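  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The trade-off can be illustrated on a toy quadratic loss (a standalone sketch, unrelated to the MNIST model): gradient descent on f(x) = x^2 converges for a moderate step size and diverges once the step is too large.\n",
    "\n",
    "```python\n",
    "# Gradient descent on f(x) = x^2, whose gradient is 2x, starting from x = 1.0\n",
    "def run_gd(lr, steps=50):\n",
    "    x = 1.0\n",
    "    for _ in range(steps):\n",
    "        x -= lr * 2 * x\n",
    "    return x\n",
    "\n",
    "small = run_gd(0.05)  # update factor 0.9 per step: decays toward 0\n",
    "large = run_gd(1.5)   # update factor -2 per step: iterates blow up\n",
    "print(small, large)\n",
    "```\n"
   ]
  },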
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. kernel size调整为 9*9（深度保持不变）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.486888, l2_loss: 4383.535645, total loss: 0.793735\n",
      "0.84\n",
      "step 200, entropy loss: 0.451044, l2_loss: 4383.639648, total loss: 0.757899\n",
      "0.87\n",
      "step 300, entropy loss: 0.372829, l2_loss: 4383.531738, total loss: 0.679676\n",
      "0.91\n",
      "step 400, entropy loss: 0.235089, l2_loss: 4383.339355, total loss: 0.541923\n",
      "0.97\n",
      "step 500, entropy loss: 0.215314, l2_loss: 4383.067871, total loss: 0.522129\n",
      "0.98\n",
      "step 600, entropy loss: 0.243519, l2_loss: 4382.764648, total loss: 0.550312\n",
      "0.96\n",
      "step 700, entropy loss: 0.125505, l2_loss: 4382.443359, total loss: 0.432276\n",
      "0.97\n",
      "step 800, entropy loss: 0.254385, l2_loss: 4382.099609, total loss: 0.561132\n",
      "0.94\n",
      "step 900, entropy loss: 0.115422, l2_loss: 4381.747559, total loss: 0.422145\n",
      "0.97\n",
      "step 1000, entropy loss: 0.147517, l2_loss: 4381.381348, total loss: 0.454214\n",
      "0.97\n",
      "0.9656\n",
      "step 1100, entropy loss: 0.108037, l2_loss: 4380.986816, total loss: 0.414706\n",
      "0.96\n",
      "step 1200, entropy loss: 0.084366, l2_loss: 4380.604980, total loss: 0.391009\n",
      "1.0\n",
      "step 1300, entropy loss: 0.095509, l2_loss: 4380.209961, total loss: 0.402124\n",
      "0.99\n",
      "step 1400, entropy loss: 0.302520, l2_loss: 4379.792480, total loss: 0.609106\n",
      "0.92\n",
      "step 1500, entropy loss: 0.185266, l2_loss: 4379.349121, total loss: 0.491820\n",
      "0.97\n",
      "step 1600, entropy loss: 0.106318, l2_loss: 4378.942383, total loss: 0.412844\n",
      "0.98\n",
      "step 1700, entropy loss: 0.138737, l2_loss: 4378.528809, total loss: 0.445234\n",
      "0.94\n",
      "step 1800, entropy loss: 0.200590, l2_loss: 4378.071289, total loss: 0.507055\n",
      "0.94\n",
      "step 1900, entropy loss: 0.060629, l2_loss: 4377.625000, total loss: 0.367063\n",
      "0.97\n",
      "step 2000, entropy loss: 0.104710, l2_loss: 4377.173828, total loss: 0.411112\n",
      "0.98\n",
      "0.9783\n",
      "step 2100, entropy loss: 0.104531, l2_loss: 4376.709473, total loss: 0.410901\n",
      "0.98\n",
      "step 2200, entropy loss: 0.061556, l2_loss: 4376.250488, total loss: 0.367894\n",
      "0.99\n",
      "step 2300, entropy loss: 0.138107, l2_loss: 4375.801758, total loss: 0.444413\n",
      "0.97\n",
      "step 2400, entropy loss: 0.048662, l2_loss: 4375.337402, total loss: 0.354936\n",
      "0.99\n",
      "step 2500, entropy loss: 0.067540, l2_loss: 4374.876953, total loss: 0.373781\n",
      "0.98\n",
      "step 2600, entropy loss: 0.056280, l2_loss: 4374.402344, total loss: 0.362488\n",
      "0.99\n",
      "step 2700, entropy loss: 0.032940, l2_loss: 4373.942383, total loss: 0.339116\n",
      "1.0\n",
      "step 2800, entropy loss: 0.027011, l2_loss: 4373.483398, total loss: 0.333155\n",
      "1.0\n",
      "step 2900, entropy loss: 0.076293, l2_loss: 4372.976562, total loss: 0.382402\n",
      "0.99\n",
      "step 3000, entropy loss: 0.058771, l2_loss: 4372.491211, total loss: 0.364846\n",
      "0.98\n",
      "0.982\n"
     ]
    }
   ],
   "source": [
    "# 将图像数据还原为28*28*1的格式，作为输入，高和宽为28像素，通道数为1\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "#定义第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [9,9], padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Pooling layer - downsamples by 2X.\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Second convolutional layer -- maps 32 feature maps to 64.\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [9,9],padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Second pooling layer.\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n",
    "# is down to 7x7x64 feature maps -- maps this to 1024 features.\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "\n",
    "# Dropout - controls the complexity of the model, prevents co-adaptation of\n",
    "# features.\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "\n",
    "# Map the 1024 features to 10 classes, one for each digit\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:1.0}))\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "调整kernel size为9x9（深度不变），采用更大的感受野，卷积时重复的地方也多，感觉应该可以更细致的学到特征，相对于5x5的卷积核，它确实学到了更多特征，训练集正确率有所提升，但缺点是造成神经元个数增加了很多，从结果来看，貌似有点过拟合\n",
    "\n",
    "测试集上最大正确率：1.0  \n",
    "训练集上最大正确率：0.982  \n"
   ]
  },
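  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The parameter growth from 5x5 to 9x9 kernels can be quantified with a quick count (a back-of-the-envelope sketch; counts include biases and follow the layer shapes used above):\n",
    "\n",
    "```python\n",
    "def conv_params(k, c_in, c_out):\n",
    "    # k x k kernel over c_in input channels, c_out filters, one bias per filter\n",
    "    return k * k * c_in * c_out + c_out\n",
    "\n",
    "for k in (5, 9):\n",
    "    total = conv_params(k, 1, 32) + conv_params(k, 32, 64)\n",
    "    print(k, total)  # 5 -> 52096, 9 -> 168576: roughly 3.2x more weights\n",
    "```\n"
   ]
  },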
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. kernel 数量由 32和64 变为 16和32，size不变"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 1.006719, l2_loss: 5064.546875, total loss: 1.361237\n",
      "0.73\n",
      "step 200, entropy loss: 0.510245, l2_loss: 5065.219238, total loss: 0.864810\n",
      "0.8\n",
      "step 300, entropy loss: 0.449359, l2_loss: 5065.266113, total loss: 0.803928\n",
      "0.89\n",
      "step 400, entropy loss: 0.243581, l2_loss: 5065.145020, total loss: 0.598141\n",
      "0.96\n",
      "step 500, entropy loss: 0.333537, l2_loss: 5064.938965, total loss: 0.688083\n",
      "0.94\n",
      "step 600, entropy loss: 0.225112, l2_loss: 5064.624512, total loss: 0.579636\n",
      "0.95\n",
      "step 700, entropy loss: 0.379347, l2_loss: 5064.306641, total loss: 0.733849\n",
      "0.92\n",
      "step 800, entropy loss: 0.231903, l2_loss: 5063.961914, total loss: 0.586380\n",
      "0.9\n",
      "step 900, entropy loss: 0.182925, l2_loss: 5063.563477, total loss: 0.537375\n",
      "0.94\n",
      "step 1000, entropy loss: 0.249117, l2_loss: 5063.143555, total loss: 0.603537\n",
      "0.97\n",
      "0.9548\n",
      "step 1100, entropy loss: 0.170168, l2_loss: 5062.730469, total loss: 0.524559\n",
      "0.95\n",
      "step 1200, entropy loss: 0.156117, l2_loss: 5062.301758, total loss: 0.510478\n",
      "0.95\n",
      "step 1300, entropy loss: 0.179710, l2_loss: 5061.843262, total loss: 0.534039\n",
      "0.98\n",
      "step 1400, entropy loss: 0.213259, l2_loss: 5061.387695, total loss: 0.567556\n",
      "0.94\n",
      "step 1500, entropy loss: 0.254958, l2_loss: 5060.921387, total loss: 0.609223\n",
      "0.94\n",
      "step 1600, entropy loss: 0.121732, l2_loss: 5060.428223, total loss: 0.475962\n",
      "0.99\n",
      "step 1700, entropy loss: 0.296597, l2_loss: 5059.925293, total loss: 0.650792\n",
      "0.95\n",
      "step 1800, entropy loss: 0.145129, l2_loss: 5059.416016, total loss: 0.499289\n",
      "0.97\n",
      "step 1900, entropy loss: 0.101005, l2_loss: 5058.926270, total loss: 0.455129\n",
      "0.96\n",
      "step 2000, entropy loss: 0.118767, l2_loss: 5058.379883, total loss: 0.472854\n",
      "0.97\n",
      "0.968\n",
      "step 2100, entropy loss: 0.189725, l2_loss: 5057.853027, total loss: 0.543775\n",
      "0.95\n",
      "step 2200, entropy loss: 0.102049, l2_loss: 5057.318848, total loss: 0.456062\n",
      "0.99\n",
      "step 2300, entropy loss: 0.152030, l2_loss: 5056.793945, total loss: 0.506005\n",
      "0.93\n",
      "step 2400, entropy loss: 0.069774, l2_loss: 5056.240234, total loss: 0.423711\n",
      "0.98\n",
      "step 2500, entropy loss: 0.093574, l2_loss: 5055.714844, total loss: 0.447474\n",
      "0.98\n",
      "step 2600, entropy loss: 0.115760, l2_loss: 5055.168457, total loss: 0.469622\n",
      "0.98\n",
      "step 2700, entropy loss: 0.147168, l2_loss: 5054.645508, total loss: 0.500993\n",
      "0.98\n",
      "step 2800, entropy loss: 0.062450, l2_loss: 5054.086426, total loss: 0.416237\n",
      "0.98\n",
      "step 2900, entropy loss: 0.044261, l2_loss: 5053.548828, total loss: 0.398009\n",
      "0.98\n",
      "step 3000, entropy loss: 0.044737, l2_loss: 5052.991699, total loss: 0.398446\n",
      "0.99\n",
      "0.9775\n"
     ]
    }
   ],
   "source": [
    "# 将图像数据还原为28*28*1的格式，作为输入，高和宽为28像素，通道数为1\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "#定义第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 16, [5,5], padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Pooling layer - downsamples by 2X.\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Second convolutional layer -- maps 32 feature maps to 64.\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 32, [5,5],padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Second pooling layer.\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n",
    "# is down to 7x7x64 feature maps -- maps this to 1024 features.\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "\n",
    "# Dropout - controls the complexity of the model, prevents co-adaptation of\n",
    "# features.\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "\n",
    "# Map the 1024 features to 10 classes, one for each digit\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:1.0}))\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "调整kernel的数量，由原本的32和64变为16和32，size不变，发现不管是在训练集还是测试集，性能都有所下降，说明更少的kernel数量造成了网络学习到的特征不够，有点欠拟合\n",
    "\n",
    "测试集上最大正确率：0.99  \n",
    "训练集上最大正确率：0.9775  \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. 使用单个卷积层"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.696585, l2_loss: 6033.401855, total loss: 1.118923\n",
      "0.89\n",
      "step 200, entropy loss: 0.492299, l2_loss: 6033.671387, total loss: 0.914656\n",
      "0.85\n",
      "step 300, entropy loss: 0.325712, l2_loss: 6033.562012, total loss: 0.748061\n",
      "0.93\n",
      "step 400, entropy loss: 0.356928, l2_loss: 6033.257812, total loss: 0.779256\n",
      "0.93\n",
      "step 500, entropy loss: 0.359615, l2_loss: 6032.893066, total loss: 0.781918\n",
      "0.88\n",
      "step 600, entropy loss: 0.350746, l2_loss: 6032.481445, total loss: 0.773020\n",
      "0.88\n",
      "step 700, entropy loss: 0.338226, l2_loss: 6032.006836, total loss: 0.760467\n",
      "0.91\n",
      "step 800, entropy loss: 0.414592, l2_loss: 6031.524414, total loss: 0.836798\n",
      "0.9\n",
      "step 900, entropy loss: 0.315013, l2_loss: 6031.010254, total loss: 0.737184\n",
      "0.9\n",
      "step 1000, entropy loss: 0.189375, l2_loss: 6030.511719, total loss: 0.611511\n",
      "0.97\n",
      "0.9374\n",
      "step 1100, entropy loss: 0.241904, l2_loss: 6029.981445, total loss: 0.664003\n",
      "0.95\n",
      "step 1200, entropy loss: 0.284403, l2_loss: 6029.437500, total loss: 0.706464\n",
      "0.94\n",
      "step 1300, entropy loss: 0.183226, l2_loss: 6028.869629, total loss: 0.605246\n",
      "0.93\n",
      "step 1400, entropy loss: 0.179234, l2_loss: 6028.329102, total loss: 0.601217\n",
      "0.98\n",
      "step 1500, entropy loss: 0.114180, l2_loss: 6027.780273, total loss: 0.536125\n",
      "0.98\n",
      "step 1600, entropy loss: 0.179279, l2_loss: 6027.208984, total loss: 0.601184\n",
      "0.94\n",
      "step 1700, entropy loss: 0.206065, l2_loss: 6026.654297, total loss: 0.627931\n",
      "0.95\n",
      "step 1800, entropy loss: 0.229661, l2_loss: 6026.074219, total loss: 0.651486\n",
      "0.94\n",
      "step 1900, entropy loss: 0.199733, l2_loss: 6025.471680, total loss: 0.621516\n",
      "0.97\n",
      "step 2000, entropy loss: 0.222825, l2_loss: 6024.897461, total loss: 0.644567\n",
      "0.92\n",
      "0.9598\n",
      "step 2100, entropy loss: 0.113811, l2_loss: 6024.311523, total loss: 0.535512\n",
      "0.98\n",
      "step 2200, entropy loss: 0.128920, l2_loss: 6023.708496, total loss: 0.550579\n",
      "0.96\n",
      "step 2300, entropy loss: 0.253885, l2_loss: 6023.105957, total loss: 0.675502\n",
      "0.95\n",
      "step 2400, entropy loss: 0.048213, l2_loss: 6022.521973, total loss: 0.469790\n",
      "1.0\n",
      "step 2500, entropy loss: 0.125603, l2_loss: 6021.904297, total loss: 0.547137\n",
      "0.94\n",
      "step 2600, entropy loss: 0.142138, l2_loss: 6021.277832, total loss: 0.563628\n",
      "0.95\n",
      "step 2700, entropy loss: 0.112270, l2_loss: 6020.678223, total loss: 0.533718\n",
      "0.96\n",
      "step 2800, entropy loss: 0.153447, l2_loss: 6020.057617, total loss: 0.574851\n",
      "0.96\n",
      "step 2900, entropy loss: 0.088231, l2_loss: 6019.439941, total loss: 0.509591\n",
      "0.98\n",
      "step 3000, entropy loss: 0.122792, l2_loss: 6018.813477, total loss: 0.544109\n",
      "0.97\n",
      "0.9669\n"
     ]
    }
   ],
   "source": [
    "# 将图像数据还原为28*28*1的格式，作为输入，高和宽为28像素，通道数为1\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "#定义第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 64, [7,7], padding='SAME', activation=tf.nn.relu, \n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1), \n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "\n",
    "# Pooling layer - downsamples by 2X.\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2], strides=[2, 2], padding='VALID')\n",
    "\n",
    "\n",
    "\n",
    "# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image\n",
    "# is down to 7x7x64 feature maps -- maps this to 1024 features.\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool1_flat = tf.layers.flatten(h_pool1)\n",
    "  h_fc1 = tf.layers.dense(h_pool1_flat, 1024, activation=tf.nn.relu)\n",
    "\n",
    "# Dropout - controls the complexity of the model, prevents co-adaptation of\n",
    "# features.\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "\n",
    "# Map the 1024 features to 10 classes, one for each digit\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:1.0}))\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "以上的方案都是使用两个卷积层，分别用32个 5x5x1 的卷积核和64个 5x5x32 的卷积核，第一个卷积层对原图像的感受野为5x5，第二个卷积层对上一层数据的感受野为5x5，换算为对原图像的感受野为7x7，故尝试直接使用7x7x64的单个卷积层训练  \n",
    "\n",
    "训练前期收敛的非常快，最初就达到0.89准确率，但后期乏力，最终的训练结果也没有两个小卷积核训练的结果好  \n",
    "测试集上最大正确率：0.98  \n",
    "训练集上最大正确率：0.9669  \n"
   ]
  },
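  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The effective receptive field of stacked layers can be checked with the standard recurrence rf += (k - 1) * jump; jump *= stride (a sketch; the layer lists mirror the networks in this notebook):\n",
    "\n",
    "```python\n",
    "def receptive_field(layers):\n",
    "    # layers: (kernel_size, stride) pairs, listed from input to output\n",
    "    rf, jump = 1, 1\n",
    "    for k, s in layers:\n",
    "        rf += (k - 1) * jump\n",
    "        jump *= s\n",
    "    return rf\n",
    "\n",
    "# Two 5x5 convs stacked directly: a 9x9 patch of the input\n",
    "print(receptive_field([(5, 1), (5, 1)]))\n",
    "# The actual conv1 -> 2x2 pool -> conv2 stack sees even more: 14x14\n",
    "print(receptive_field([(5, 1), (2, 2), (5, 1)]))\n",
    "```\n"
   ]
  },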
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "此卷积网络构建方式不好，原因如下：  \n",
    "1、参数太多  \n",
    "两个小卷积的参数：(5x5x32+32)+(5x5x64+64)=2496个参数（含bias）  \n",
    "一个大卷积的参数：7x7x64+64=3200个参数（含bias）  \n",
    "如果层数变多后，参数的个数差距更加明显  \n",
    "2、特征提取能力略弱  \n",
    "多个卷积层与非线性的激活层交替的结构，比单一卷积层的结构更能提取出深层的更好的特征，可以表达出输入数据中更多个强力特征  "
   ]
  },
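  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a cross-check, the layer-by-layer totals can be computed directly (a sketch; shapes follow the code cells above, and the first fully connected layer dominates both totals):\n",
    "\n",
    "```python\n",
    "def conv_params(k, c_in, c_out):\n",
    "    # k x k kernel over c_in input channels, c_out filters, plus biases\n",
    "    return k * k * c_in * c_out + c_out\n",
    "\n",
    "def dense_params(n_in, n_out):\n",
    "    # fully connected layer: weight matrix plus biases\n",
    "    return n_in * n_out + n_out\n",
    "\n",
    "# Two-conv net: 28x28 -> pool -> 14x14 -> pool -> 7x7, ending with 64 channels\n",
    "two_conv = (conv_params(5, 1, 32) + conv_params(5, 32, 64)\n",
    "            + dense_params(7 * 7 * 64, 1024))\n",
    "# Single-conv net: 28x28 -> pool -> 14x14, 64 channels\n",
    "one_conv = conv_params(7, 1, 64) + dense_params(14 * 14 * 64, 1024)\n",
    "print(two_conv, one_conv)  # the single-conv net has ~4x the parameters\n",
    "```\n"
   ]
  },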
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 总结"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "所有训练结果中：  \n",
    "测试集上最大正确率：1.0  \n",
    "训练集上最大正确率:0.9915  \n",
    "\n",
    "Ps.训练神经网络挺麻烦的，在机器学习的时候参数调优可以使用GridSearchCV，训练时间也还好，神经网络的训练基本每改一次参数就要重新训练一次网络，时间好长。但确实深度学习更有意思一点。  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
