{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 第七周作业"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "现有的模型： "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "FLAGS = None\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们在这里调用系统提供的Mnist数据函数为我们读入数据，如果没有下载的话则进行下载。\n",
    "\n",
    "<font color=#ff0000>**这里将data_dir改为适合你的运行环境的目录**</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting /data\\train-images-idx3-ubyte.gz\n",
      "Extracting /data\\train-labels-idx1-ubyte.gz\n",
      "Extracting /data\\t10k-images-idx3-ubyte.gz\n",
      "Extracting /data\\t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
    "data_dir = '/data'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "一个非常非常简陋的模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model\n",
    "x = tf.placeholder(tf.float32, [None, 784])\n",
    "W = tf.Variable(tf.zeros([784, 10]))\n",
    "b = tf.Variable(tf.zeros([10]))\n",
    "y = tf.matmul(x, W) + b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "定义我们的ground truth 占位符"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来我们计算交叉熵，注意这里不要使用注释中的手动计算方式，而是使用系统函数。\n",
    "另一个注意点就是，softmax_cross_entropy_with_logits的logits参数是**未经激活的wx+b**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# The raw formulation of cross-entropy,\n",
    "#\n",
    "#   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),\n",
    "#                                 reduction_indices=[1]))\n",
    "#\n",
    "# can be numerically unstable.\n",
    "#\n",
    "# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "# outputs of 'y', and then average across the batch.\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))"
   ]
  },
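  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see concretely why the naive formulation is unstable, here is a small pure-Python sketch (hypothetical values, not part of the model) comparing the naive log-softmax with the log-sum-exp form that softmax_cross_entropy_with_logits uses internally:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def naive_log_softmax(logits, k):\n",
    "    # exp() overflows for large logits, so this version breaks down\n",
    "    exps = [math.exp(z) for z in logits]\n",
    "    return math.log(exps[k] / sum(exps))\n",
    "\n",
    "def stable_log_softmax(logits, k):\n",
    "    # log softmax(z)_k = z_k - (m + log(sum(exp(z_i - m)))), m = max(z)\n",
    "    m = max(logits)\n",
    "    return logits[k] - (m + math.log(sum(math.exp(z - m) for z in logits)))\n",
    "\n",
    "logits = [1000.0, 0.0, -1000.0]  # extreme but finite logits\n",
    "# naive_log_softmax(logits, 0) raises OverflowError from exp(1000)\n",
    "print(stable_log_softmax(logits, 0))  # ~0.0: class 0 has probability ~1\n",
    "```"
   ]
  },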
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "生成一个训练step"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在这里我们仍然调用系统提供的读取数据，为我们取得一个batch。\n",
    "然后我们运行3k个step(5 epochs)，对权重进行优化。"
   ]
  },
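  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the epoch count: 3000 steps at batch size 100 cover 300,000 samples, and the MNIST training split holds 55,000 images, so this is roughly 5.5 passes over the data. A one-line helper (illustrative only):\n",
    "\n",
    "```python\n",
    "def epochs_covered(steps, batch_size, train_size=55000):\n",
    "    # one epoch = one full pass over the training set\n",
    "    return steps * batch_size / train_size\n",
    "\n",
    "print(epochs_covered(3000, 100))  # about 5.45\n",
    "```"
   ]
  },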
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "验证我们模型在测试数据上的准确率"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9227\n"
     ]
    }
   ],
   "source": [
    "  # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
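  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The accuracy computation above is simply 'the argmax of the prediction equals the argmax of the one-hot label, averaged over the batch'. A plain-Python equivalent on made-up values:\n",
    "\n",
    "```python\n",
    "def argmax(v):\n",
    "    return max(range(len(v)), key=lambda i: v[i])\n",
    "\n",
    "def batch_accuracy(preds, onehot_labels):\n",
    "    # fraction of rows whose predicted class matches the labeled class\n",
    "    hits = sum(argmax(p) == argmax(t) for p, t in zip(preds, onehot_labels))\n",
    "    return hits / len(preds)\n",
    "\n",
    "preds = [[0.1, 2.0, -1.0], [3.0, 0.0, 0.5], [0.0, 0.2, 0.1]]\n",
    "labels = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]\n",
    "print(batch_accuracy(preds, labels))  # 2 of 3 correct\n",
    "```"
   ]
  },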
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "毫无疑问，这个模型是一个非常简陋，性能也不理想的模型。目前只能达到92%左右的准确率。\n",
    "接下来，希望大家利用现有的知识，将这个模型优化至98%以上的准确率。\n",
    "Hint：\n",
    "- 卷积\n",
    "- 池化\n",
    "- 激活函数\n",
    "- 正则化\n",
    "- 初始化\n",
    "- 摸索一下各个超参数\n",
    "  - 卷积kernel size\n",
    "  - 卷积kernel 数量\n",
    "  - 学习率\n",
    "  - 正则化惩罚因子\n",
    "  - 最好每隔几个step就对loss、accuracy等等进行一次输出，这样才能有根据地进行调整"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. 卷积/池化/激活函数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "首先做一个简单的卷积神经网络（即老师示例代码中layers写法）。把我们的图先还原成28x28的（黑白图所以1通道）。然后用5x5的卷积核做卷积（padding模式为same），取32个这样的核，卷积之后池化(最大值池化，2x2)。再去5x5的核64个，卷积后池化。加两个全连接层，两层中间随机dropout，最后输出10类。另外注意一下激活函数，选的是relu。在上次的作业中，已经验证过relu效果比sigmoid好"
   ]
  },
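  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sketch of how the weight counts of the layers just described add up ('SAME' padding keeps the spatial size, each 2x2 pool halves it; biases included; illustrative arithmetic only):\n",
    "\n",
    "```python\n",
    "def conv_params(k, in_ch, out_ch):\n",
    "    # a k-by-k kernel over in_ch channels, out_ch filters, plus out_ch biases\n",
    "    return k * k * in_ch * out_ch + out_ch\n",
    "\n",
    "# 28x28x1 -> conv 5x5 (32) -> pool -> 14x14x32 -> conv 5x5 (64) -> pool -> 7x7x64\n",
    "print(conv_params(5, 1, 32))   # 832\n",
    "print(conv_params(5, 32, 64))  # 51264\n",
    "flat = 7 * 7 * 64              # 3136 inputs into fc1\n",
    "print(flat * 1024 + 1024)      # 3212288 parameters in fc1\n",
    "print(1024 * 10 + 10)          # 10250 parameters in fc2\n",
    "```"
   ]
  },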
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "\n",
    "learning_rate = tf.placeholder(tf.float32)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu)\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu)\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n",
    "\n"
   ]
  },
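  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One detail worth knowing: tf.nn.dropout keeps each unit with probability keep_prob and scales the survivors by 1/keep_prob, so the expected activation is unchanged between training and inference. A small pure-Python simulation of that rule (fixed seed, illustrative only):\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def dropout(values, keep_prob, rng):\n",
    "    # keep each value with probability keep_prob; scale kept values by 1/keep_prob\n",
    "    return [v / keep_prob if rng.random() < keep_prob else 0.0 for v in values]\n",
    "\n",
    "rng = random.Random(0)\n",
    "xs = [1.0] * 100000\n",
    "dropped = dropout(xs, 0.5, rng)\n",
    "print(sum(dropped) / len(dropped))  # close to 1.0, the pre-dropout mean\n",
    "```"
   ]
  },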
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 2.131857, l2_loss: 2412.433350, total loss: 2.300728\n",
      "0.42\n",
      "step 200, entropy loss: 1.055425, l2_loss: 2413.691162, total loss: 1.224383\n",
      "0.79\n",
      "step 300, entropy loss: 0.510460, l2_loss: 2414.790283, total loss: 0.679496\n",
      "0.83\n",
      "step 400, entropy loss: 0.432618, l2_loss: 2415.326660, total loss: 0.601691\n",
      "0.94\n",
      "step 500, entropy loss: 0.441917, l2_loss: 2415.626221, total loss: 0.611011\n",
      "0.87\n",
      "step 600, entropy loss: 0.361923, l2_loss: 2415.790283, total loss: 0.531028\n",
      "0.88\n",
      "step 700, entropy loss: 0.302014, l2_loss: 2415.904541, total loss: 0.471127\n",
      "0.91\n",
      "step 800, entropy loss: 0.194853, l2_loss: 2415.971680, total loss: 0.363971\n",
      "0.98\n",
      "step 900, entropy loss: 0.199193, l2_loss: 2416.003418, total loss: 0.368313\n",
      "0.94\n",
      "step 1000, entropy loss: 0.226655, l2_loss: 2415.991211, total loss: 0.395774\n",
      "0.94\n",
      "0.9304\n",
      "step 1100, entropy loss: 0.227412, l2_loss: 2415.970215, total loss: 0.396530\n",
      "0.91\n",
      "step 1200, entropy loss: 0.192473, l2_loss: 2415.945801, total loss: 0.361589\n",
      "0.92\n",
      "step 1300, entropy loss: 0.270592, l2_loss: 2415.907959, total loss: 0.439706\n",
      "0.92\n",
      "step 1400, entropy loss: 0.112761, l2_loss: 2415.861572, total loss: 0.281871\n",
      "0.95\n",
      "step 1500, entropy loss: 0.115784, l2_loss: 2415.790283, total loss: 0.284889\n",
      "0.96\n",
      "step 1600, entropy loss: 0.127203, l2_loss: 2415.702148, total loss: 0.296302\n",
      "0.97\n",
      "step 1700, entropy loss: 0.141055, l2_loss: 2415.602539, total loss: 0.310148\n",
      "0.95\n",
      "step 1800, entropy loss: 0.072835, l2_loss: 2415.507568, total loss: 0.241921\n",
      "0.98\n",
      "step 1900, entropy loss: 0.199899, l2_loss: 2415.381836, total loss: 0.368976\n",
      "0.95\n",
      "step 2000, entropy loss: 0.113761, l2_loss: 2415.275635, total loss: 0.282830\n",
      "0.96\n",
      "0.9579\n",
      "step 2100, entropy loss: 0.186110, l2_loss: 2415.145996, total loss: 0.355170\n",
      "0.92\n",
      "step 2200, entropy loss: 0.135386, l2_loss: 2415.018066, total loss: 0.304438\n",
      "0.97\n",
      "step 2300, entropy loss: 0.080786, l2_loss: 2414.865479, total loss: 0.249826\n",
      "0.99\n",
      "step 2400, entropy loss: 0.102167, l2_loss: 2414.744141, total loss: 0.271199\n",
      "0.97\n",
      "step 2500, entropy loss: 0.115108, l2_loss: 2414.598145, total loss: 0.284130\n",
      "0.96\n",
      "step 2600, entropy loss: 0.144593, l2_loss: 2414.447021, total loss: 0.313604\n",
      "0.96\n",
      "step 2700, entropy loss: 0.143039, l2_loss: 2414.276855, total loss: 0.312038\n",
      "0.96\n",
      "step 2800, entropy loss: 0.141856, l2_loss: 2414.125000, total loss: 0.310844\n",
      "0.98\n",
      "step 2900, entropy loss: 0.167036, l2_loss: 2413.979492, total loss: 0.336015\n",
      "0.95\n",
      "step 3000, entropy loss: 0.127172, l2_loss: 2413.782715, total loss: 0.296136\n",
      "0.98\n",
      "0.9666\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
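  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The printed numbers above can be cross-checked against each other: the total loss should equal the entropy loss plus 7e-5 times the L2 term. Taking the step-100 line as an example:\n",
    "\n",
    "```python\n",
    "weight = 7e-5\n",
    "entropy, l2 = 2.131857, 2412.433350  # values from the step-100 log line\n",
    "total = entropy + weight * l2\n",
    "print(round(total, 6))  # 2.300727, matching the logged 2.300728 up to float rounding\n",
    "```"
   ]
  },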
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "卷积神经网络的确很适合处理图像。上一次用普通的神经网络，需费很大功夫我们才将正确率提升到0.96。但用这个卷积网络，第一次就有这样的结果。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. 初始化"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "根据上次的经验，我们需要注意权重的初始化。在上面的网络的基础上，我们首先去改初始化。"
   ]
  },
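  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, tf.truncated_normal_initializer draws from a normal distribution and redraws any sample more than two standard deviations from the mean, which avoids the occasional large initial weight that can saturate or kill a unit. A hypothetical pure-Python version of that sampling rule:\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def truncated_normal(stddev, rng):\n",
    "    # redraw until the sample lies within two standard deviations of the mean\n",
    "    while True:\n",
    "        v = rng.gauss(0.0, stddev)\n",
    "        if abs(v) <= 2 * stddev:\n",
    "            return v\n",
    "\n",
    "rng = random.Random(0)\n",
    "samples = [truncated_normal(0.1, rng) for _ in range(10000)]\n",
    "print(max(abs(s) for s in samples))  # never exceeds 0.2\n",
    "```"
   ]
  },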
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层，增加对kernel的初始化\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层，增加对kernel的初始化\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.776397, l2_loss: 983.135437, total loss: 0.845216\n",
      "0.82\n",
      "step 200, entropy loss: 0.298851, l2_loss: 984.216736, total loss: 0.367746\n",
      "0.88\n",
      "step 300, entropy loss: 0.373850, l2_loss: 984.816345, total loss: 0.442787\n",
      "0.93\n",
      "step 400, entropy loss: 0.491511, l2_loss: 985.272400, total loss: 0.560480\n",
      "0.87\n",
      "step 500, entropy loss: 0.312781, l2_loss: 985.599243, total loss: 0.381773\n",
      "0.91\n",
      "step 600, entropy loss: 0.215917, l2_loss: 985.892517, total loss: 0.284930\n",
      "0.9\n",
      "step 700, entropy loss: 0.248533, l2_loss: 986.140503, total loss: 0.317563\n",
      "0.93\n",
      "step 800, entropy loss: 0.253736, l2_loss: 986.354553, total loss: 0.322781\n",
      "0.94\n",
      "step 900, entropy loss: 0.114789, l2_loss: 986.519043, total loss: 0.183845\n",
      "0.95\n",
      "step 1000, entropy loss: 0.233903, l2_loss: 986.680298, total loss: 0.302970\n",
      "0.93\n",
      "0.9468\n",
      "step 1100, entropy loss: 0.193779, l2_loss: 986.835266, total loss: 0.262858\n",
      "0.93\n",
      "step 1200, entropy loss: 0.143084, l2_loss: 986.986633, total loss: 0.212173\n",
      "0.97\n",
      "step 1300, entropy loss: 0.182371, l2_loss: 987.100342, total loss: 0.251468\n",
      "0.92\n",
      "step 1400, entropy loss: 0.105908, l2_loss: 987.224915, total loss: 0.175013\n",
      "0.98\n",
      "step 1500, entropy loss: 0.164307, l2_loss: 987.348816, total loss: 0.233422\n",
      "0.97\n",
      "step 1600, entropy loss: 0.186708, l2_loss: 987.442627, total loss: 0.255829\n",
      "0.94\n",
      "step 1700, entropy loss: 0.162056, l2_loss: 987.523865, total loss: 0.231183\n",
      "0.97\n",
      "step 1800, entropy loss: 0.121040, l2_loss: 987.621277, total loss: 0.190173\n",
      "0.98\n",
      "step 1900, entropy loss: 0.110867, l2_loss: 987.717163, total loss: 0.180007\n",
      "0.96\n",
      "step 2000, entropy loss: 0.122180, l2_loss: 987.793945, total loss: 0.191325\n",
      "0.99\n",
      "0.9662\n",
      "step 2100, entropy loss: 0.163962, l2_loss: 987.848999, total loss: 0.233111\n",
      "0.97\n",
      "step 2200, entropy loss: 0.149364, l2_loss: 987.912781, total loss: 0.218518\n",
      "0.98\n",
      "step 2300, entropy loss: 0.181880, l2_loss: 987.971069, total loss: 0.251038\n",
      "0.96\n",
      "step 2400, entropy loss: 0.167491, l2_loss: 988.007996, total loss: 0.236652\n",
      "0.94\n",
      "step 2500, entropy loss: 0.154625, l2_loss: 988.040710, total loss: 0.223788\n",
      "0.97\n",
      "step 2600, entropy loss: 0.081141, l2_loss: 988.105042, total loss: 0.150308\n",
      "0.99\n",
      "step 2700, entropy loss: 0.113984, l2_loss: 988.132629, total loss: 0.183153\n",
      "0.98\n",
      "step 2800, entropy loss: 0.068677, l2_loss: 988.169617, total loss: 0.137849\n",
      "0.98\n",
      "step 2900, entropy loss: 0.081292, l2_loss: 988.192688, total loss: 0.150465\n",
      "0.99\n",
      "step 3000, entropy loss: 0.033245, l2_loss: 988.226013, total loss: 0.102421\n",
      "1.0\n",
      "0.9712\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "更改初始化之后，我们只需100步就可以在训练集上取得正确率0.82的效果，比没有初始化的网络好很多。而且3000次迭代后，在测试集上的效果也有所提高。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.改Kernel Size"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1) 增加Kernel Size：感受野大了"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层，感受野变大\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层，感受野变大\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.507280, l2_loss: 2568.161377, total loss: 0.687051\n",
      "0.88\n",
      "step 200, entropy loss: 0.326115, l2_loss: 2568.483154, total loss: 0.505908\n",
      "0.91\n",
      "step 300, entropy loss: 0.449885, l2_loss: 2568.622070, total loss: 0.629689\n",
      "0.92\n",
      "step 400, entropy loss: 0.197154, l2_loss: 2568.638428, total loss: 0.376959\n",
      "0.93\n",
      "step 500, entropy loss: 0.308423, l2_loss: 2568.620605, total loss: 0.488226\n",
      "0.93\n",
      "step 600, entropy loss: 0.295841, l2_loss: 2568.585449, total loss: 0.475642\n",
      "0.94\n",
      "step 700, entropy loss: 0.302819, l2_loss: 2568.505371, total loss: 0.482614\n",
      "0.94\n",
      "step 800, entropy loss: 0.157409, l2_loss: 2568.418945, total loss: 0.337198\n",
      "0.96\n",
      "step 900, entropy loss: 0.083037, l2_loss: 2568.289795, total loss: 0.262817\n",
      "0.99\n",
      "step 1000, entropy loss: 0.174491, l2_loss: 2568.136963, total loss: 0.354261\n",
      "0.96\n",
      "0.9559\n",
      "step 1100, entropy loss: 0.118642, l2_loss: 2568.012939, total loss: 0.298403\n",
      "0.97\n",
      "step 1200, entropy loss: 0.113038, l2_loss: 2567.874756, total loss: 0.292789\n",
      "0.98\n",
      "step 1300, entropy loss: 0.098264, l2_loss: 2567.745850, total loss: 0.278006\n",
      "0.98\n",
      "step 1400, entropy loss: 0.176877, l2_loss: 2567.580078, total loss: 0.356608\n",
      "0.96\n",
      "step 1500, entropy loss: 0.134288, l2_loss: 2567.409668, total loss: 0.314007\n",
      "0.99\n",
      "step 1600, entropy loss: 0.156474, l2_loss: 2567.253418, total loss: 0.336181\n",
      "0.95\n",
      "step 1700, entropy loss: 0.190002, l2_loss: 2567.071289, total loss: 0.369697\n",
      "0.93\n",
      "step 1800, entropy loss: 0.052236, l2_loss: 2566.889404, total loss: 0.231918\n",
      "0.99\n",
      "step 1900, entropy loss: 0.101720, l2_loss: 2566.695068, total loss: 0.281389\n",
      "0.97\n",
      "step 2000, entropy loss: 0.106514, l2_loss: 2566.514648, total loss: 0.286170\n",
      "0.99\n",
      "0.9694\n",
      "step 2100, entropy loss: 0.139273, l2_loss: 2566.327148, total loss: 0.318916\n",
      "0.95\n",
      "step 2200, entropy loss: 0.065999, l2_loss: 2566.126221, total loss: 0.245627\n",
      "0.98\n",
      "step 2300, entropy loss: 0.131850, l2_loss: 2565.924072, total loss: 0.311465\n",
      "0.98\n",
      "step 2400, entropy loss: 0.089435, l2_loss: 2565.712402, total loss: 0.269035\n",
      "0.98\n",
      "step 2500, entropy loss: 0.123210, l2_loss: 2565.495850, total loss: 0.302795\n",
      "0.93\n",
      "step 2600, entropy loss: 0.096008, l2_loss: 2565.274902, total loss: 0.275578\n",
      "0.97\n",
      "step 2700, entropy loss: 0.130751, l2_loss: 2565.088135, total loss: 0.310307\n",
      "0.93\n",
      "step 2800, entropy loss: 0.096405, l2_loss: 2564.864746, total loss: 0.275946\n",
      "0.96\n",
      "step 2900, entropy loss: 0.064535, l2_loss: 2564.649902, total loss: 0.244061\n",
      "1.0\n",
      "step 3000, entropy loss: 0.094257, l2_loss: 2564.435059, total loss: 0.273768\n",
      "0.99\n",
      "0.9746\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "最直观的感受是，kernel size变大，跑的时间也跟着长好多。毕竟需要计算的参数变多了。虽然效果上，有所提升，但提升的也不太多。可能是图片原本就挺小的，kernel这么大，都快把半张图囊括进去了。（第二层卷积层，都已经把多半张图包括进去了。）"
   ]
  },
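  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The receptive-field intuition can be made precise with the standard recurrence r = r + (k - 1) * j, where j is the cumulative stride. A sketch for the conv-pool-conv-pool stack (kernel sizes and strides as used above):\n",
    "\n",
    "```python\n",
    "def receptive_field(layers):\n",
    "    # layers: list of (kernel_size, stride); returns the field in input pixels\n",
    "    r, j = 1, 1\n",
    "    for k, s in layers:\n",
    "        r += (k - 1) * j\n",
    "        j *= s\n",
    "    return r\n",
    "\n",
    "# conv 10x10/1 -> pool 2x2/2 -> conv 10x10/1 -> pool 2x2/2\n",
    "print(receptive_field([(10, 1), (2, 2), (10, 1), (2, 2)]))  # 31: wider than the 28-pixel image\n",
    "# the original 5x5 stack for comparison\n",
    "print(receptive_field([(5, 1), (2, 2), (5, 1), (2, 2)]))    # 16\n",
    "```"
   ]
  },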
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2）减小kernel size："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层，感受野变小\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [3,3],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层，感受野变小\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [3,3],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 1.694891, l2_loss: 3424.918945, total loss: 1.934635\n",
      "0.62\n",
      "step 200, entropy loss: 0.648232, l2_loss: 3426.431152, total loss: 0.888083\n",
      "0.85\n",
      "step 300, entropy loss: 0.418846, l2_loss: 3427.048584, total loss: 0.658740\n",
      "0.85\n",
      "step 400, entropy loss: 0.411290, l2_loss: 3427.302734, total loss: 0.651201\n",
      "0.91\n",
      "step 500, entropy loss: 0.270570, l2_loss: 3427.416992, total loss: 0.510489\n",
      "0.93\n",
      "step 600, entropy loss: 0.260549, l2_loss: 3427.431885, total loss: 0.500469\n",
      "0.93\n",
      "step 700, entropy loss: 0.266498, l2_loss: 3427.375488, total loss: 0.506414\n",
      "0.94\n",
      "step 800, entropy loss: 0.183277, l2_loss: 3427.299805, total loss: 0.423188\n",
      "0.96\n",
      "step 900, entropy loss: 0.196552, l2_loss: 3427.185791, total loss: 0.436455\n",
      "0.94\n",
      "step 1000, entropy loss: 0.296508, l2_loss: 3427.038574, total loss: 0.536401\n",
      "0.92\n",
      "0.9323\n",
      "step 1100, entropy loss: 0.261689, l2_loss: 3426.865234, total loss: 0.501570\n",
      "0.94\n",
      "step 1200, entropy loss: 0.186772, l2_loss: 3426.699707, total loss: 0.426641\n",
      "0.94\n",
      "step 1300, entropy loss: 0.162487, l2_loss: 3426.528809, total loss: 0.402344\n",
      "0.96\n",
      "step 1400, entropy loss: 0.143372, l2_loss: 3426.352539, total loss: 0.383217\n",
      "0.94\n",
      "step 1500, entropy loss: 0.160672, l2_loss: 3426.138428, total loss: 0.400502\n",
      "0.95\n",
      "step 1600, entropy loss: 0.183889, l2_loss: 3425.943604, total loss: 0.423705\n",
      "0.97\n",
      "step 1700, entropy loss: 0.172372, l2_loss: 3425.726318, total loss: 0.412173\n",
      "0.95\n",
      "step 1800, entropy loss: 0.189541, l2_loss: 3425.486572, total loss: 0.429325\n",
      "0.96\n",
      "step 1900, entropy loss: 0.180309, l2_loss: 3425.261475, total loss: 0.420078\n",
      "0.97\n",
      "step 2000, entropy loss: 0.082909, l2_loss: 3425.010254, total loss: 0.322660\n",
      "0.99\n",
      "0.958\n",
      "step 2100, entropy loss: 0.111972, l2_loss: 3424.777832, total loss: 0.351706\n",
      "1.0\n",
      "step 2200, entropy loss: 0.102078, l2_loss: 3424.527588, total loss: 0.341795\n",
      "0.98\n",
      "step 2300, entropy loss: 0.142143, l2_loss: 3424.290039, total loss: 0.381844\n",
      "0.98\n",
      "step 2400, entropy loss: 0.159693, l2_loss: 3424.010254, total loss: 0.399374\n",
      "0.97\n",
      "step 2500, entropy loss: 0.371152, l2_loss: 3423.747070, total loss: 0.610814\n",
      "0.95\n",
      "step 2600, entropy loss: 0.094588, l2_loss: 3423.472168, total loss: 0.334231\n",
      "0.99\n",
      "step 2700, entropy loss: 0.098857, l2_loss: 3423.196533, total loss: 0.338481\n",
      "0.98\n",
      "step 2800, entropy loss: 0.185453, l2_loss: 3422.925293, total loss: 0.425058\n",
      "0.92\n",
      "step 2900, entropy loss: 0.164575, l2_loss: 3422.650391, total loss: 0.404161\n",
      "0.95\n",
      "step 3000, entropy loss: 0.095456, l2_loss: 3422.352539, total loss: 0.335021\n",
      "0.99\n",
      "0.9662\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "看到感受野减小，所得正确率偏低。可能是因为我们的感受野过小，学习不到位。考虑一下加大前期的kernel size，但不变后期的kernel size.（希望这样既保持好一点的正确率又能快点搞定作业）\n"
   ]
  },
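  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The receptive-field intuition above can be sanity-checked with a quick sketch. The `receptive_field` helper below is illustrative only (not part of the model); it uses the standard recurrence where each layer widens the field by (kernel - 1) times the accumulated stride.\n",
    "\n",
    "```python\n",
    "def receptive_field(layers):\n",
    "    # layers: list of (kernel_size, stride) pairs, input to output\n",
    "    rf, jump = 1, 1\n",
    "    for k, s in layers:\n",
    "        rf += (k - 1) * jump   # widen by (k-1) * accumulated stride\n",
    "        jump *= s\n",
    "    return rf\n",
    "\n",
    "# conv 10x10/1 -> pool 2x2/2 -> conv 5x5/1 -> pool 2x2/2\n",
    "print(receptive_field([(10, 1), (2, 2), (5, 1), (2, 2)]))  # 21 of the 28 input pixels\n",
    "```"
   ]
  },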
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层，感受野变大\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 32, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 64, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.527442, l2_loss: 4414.650391, total loss: 0.836467\n",
      "0.87\n",
      "step 200, entropy loss: 0.426888, l2_loss: 4414.943359, total loss: 0.735934\n",
      "0.88\n",
      "step 300, entropy loss: 0.402408, l2_loss: 4414.928223, total loss: 0.711453\n",
      "0.89\n",
      "step 400, entropy loss: 0.430429, l2_loss: 4414.837891, total loss: 0.739467\n",
      "0.89\n",
      "step 500, entropy loss: 0.268466, l2_loss: 4414.642090, total loss: 0.577491\n",
      "0.93\n",
      "step 600, entropy loss: 0.207080, l2_loss: 4414.371094, total loss: 0.516086\n",
      "0.97\n",
      "step 700, entropy loss: 0.192228, l2_loss: 4414.091797, total loss: 0.501215\n",
      "0.94\n",
      "step 800, entropy loss: 0.152725, l2_loss: 4413.795410, total loss: 0.461691\n",
      "0.93\n",
      "step 900, entropy loss: 0.197388, l2_loss: 4413.472168, total loss: 0.506331\n",
      "0.94\n",
      "step 1000, entropy loss: 0.145813, l2_loss: 4413.141602, total loss: 0.454733\n",
      "0.96\n",
      "0.9533\n",
      "step 1100, entropy loss: 0.188722, l2_loss: 4412.758789, total loss: 0.497615\n",
      "0.98\n",
      "step 1200, entropy loss: 0.123530, l2_loss: 4412.399902, total loss: 0.432398\n",
      "0.97\n",
      "step 1300, entropy loss: 0.090088, l2_loss: 4412.011719, total loss: 0.398928\n",
      "0.99\n",
      "step 1400, entropy loss: 0.168458, l2_loss: 4411.609375, total loss: 0.477271\n",
      "0.97\n",
      "step 1500, entropy loss: 0.147347, l2_loss: 4411.227539, total loss: 0.456133\n",
      "0.95\n",
      "step 1600, entropy loss: 0.155817, l2_loss: 4410.802246, total loss: 0.464574\n",
      "0.95\n",
      "step 1700, entropy loss: 0.054162, l2_loss: 4410.407227, total loss: 0.362891\n",
      "0.97\n",
      "step 1800, entropy loss: 0.072346, l2_loss: 4409.973145, total loss: 0.381044\n",
      "0.99\n",
      "step 1900, entropy loss: 0.149189, l2_loss: 4409.549316, total loss: 0.457858\n",
      "0.97\n",
      "step 2000, entropy loss: 0.140768, l2_loss: 4409.128418, total loss: 0.449407\n",
      "0.98\n",
      "0.9688\n",
      "step 2100, entropy loss: 0.066571, l2_loss: 4408.688965, total loss: 0.375179\n",
      "0.98\n",
      "step 2200, entropy loss: 0.115028, l2_loss: 4408.250977, total loss: 0.423605\n",
      "0.97\n",
      "step 2300, entropy loss: 0.074515, l2_loss: 4407.783691, total loss: 0.383059\n",
      "0.98\n",
      "step 2400, entropy loss: 0.124304, l2_loss: 4407.342773, total loss: 0.432818\n",
      "0.97\n",
      "step 2500, entropy loss: 0.054638, l2_loss: 4406.878418, total loss: 0.363119\n",
      "0.98\n",
      "step 2600, entropy loss: 0.105940, l2_loss: 4406.434570, total loss: 0.414390\n",
      "0.98\n",
      "step 2700, entropy loss: 0.115868, l2_loss: 4405.967285, total loss: 0.424285\n",
      "0.97\n",
      "step 2800, entropy loss: 0.058139, l2_loss: 4405.479980, total loss: 0.366523\n",
      "0.98\n",
      "step 2900, entropy loss: 0.054095, l2_loss: 4405.020996, total loss: 0.362446\n",
      "1.0\n",
      "step 3000, entropy loss: 0.096152, l2_loss: 4404.542480, total loss: 0.404470\n",
      "0.96\n",
      "0.9749\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.改Kernel数 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在上面的基础上，我们取最好结果的kernel size,然后去调一下kernel的数量。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1）首先考虑减小kernel数。通过对神经网络可视化一节的学习，第一层卷积主要学习的是边缘。我感觉mnist数据集的边缘比较简单。所以考虑减小kernel数，这样可能更快搞定。"
   ]
  },
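  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Kernel count translates directly into parameter count: a conv layer has k*k*c_in weights per output channel plus one bias each. A quick illustrative sketch (the `conv_params` helper is not part of the model):\n",
    "\n",
    "```python\n",
    "def conv_params(k, c_in, c_out):\n",
    "    # k*k*c_in weights per output channel, plus one bias per channel\n",
    "    return k * k * c_in * c_out + c_out\n",
    "\n",
    "# first conv layer (10x10 kernels, 1 input channel) at several widths\n",
    "for c_out in (16, 32, 64):\n",
    "    print(c_out, conv_params(10, 1, c_out))  # 1616, 3232, 6464\n",
    "```\n",
    "\n",
    "So halving the kernel count roughly halves each conv layer's parameters (and fc1 shrinks in step, since its input is the flattened pool2 feature map)."
   ]
  },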
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层，改成16个核\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 16, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层，改成32个核\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 32, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.908640, l2_loss: 5100.686035, total loss: 1.265688\n",
      "0.84\n",
      "step 200, entropy loss: 0.538411, l2_loss: 5101.290039, total loss: 0.895501\n",
      "0.84\n",
      "step 300, entropy loss: 0.448682, l2_loss: 5101.395508, total loss: 0.805780\n",
      "0.88\n",
      "step 400, entropy loss: 0.469161, l2_loss: 5101.230469, total loss: 0.826247\n",
      "0.89\n",
      "step 500, entropy loss: 0.400803, l2_loss: 5101.026855, total loss: 0.757875\n",
      "0.88\n",
      "step 600, entropy loss: 0.255624, l2_loss: 5100.755371, total loss: 0.612677\n",
      "0.94\n",
      "step 700, entropy loss: 0.230114, l2_loss: 5100.410645, total loss: 0.587143\n",
      "0.95\n",
      "step 800, entropy loss: 0.237923, l2_loss: 5100.021973, total loss: 0.594924\n",
      "0.96\n",
      "step 900, entropy loss: 0.208959, l2_loss: 5099.645508, total loss: 0.565934\n",
      "0.96\n",
      "step 1000, entropy loss: 0.162670, l2_loss: 5099.234375, total loss: 0.519616\n",
      "0.98\n",
      "0.9443\n",
      "step 1100, entropy loss: 0.305237, l2_loss: 5098.781250, total loss: 0.662151\n",
      "0.93\n",
      "step 1200, entropy loss: 0.162168, l2_loss: 5098.350586, total loss: 0.519053\n",
      "0.95\n",
      "step 1300, entropy loss: 0.326315, l2_loss: 5097.905762, total loss: 0.683169\n",
      "0.94\n",
      "step 1400, entropy loss: 0.170236, l2_loss: 5097.424316, total loss: 0.527056\n",
      "0.97\n",
      "step 1500, entropy loss: 0.145264, l2_loss: 5096.952637, total loss: 0.502050\n",
      "0.98\n",
      "step 1600, entropy loss: 0.053116, l2_loss: 5096.453613, total loss: 0.409867\n",
      "0.99\n",
      "step 1700, entropy loss: 0.152347, l2_loss: 5095.946289, total loss: 0.509063\n",
      "0.96\n",
      "step 1800, entropy loss: 0.103204, l2_loss: 5095.434082, total loss: 0.459884\n",
      "0.97\n",
      "step 1900, entropy loss: 0.189725, l2_loss: 5094.934082, total loss: 0.546370\n",
      "0.96\n",
      "step 2000, entropy loss: 0.077204, l2_loss: 5094.399902, total loss: 0.433812\n",
      "0.99\n",
      "0.9595\n",
      "step 2100, entropy loss: 0.145338, l2_loss: 5093.891602, total loss: 0.501910\n",
      "0.96\n",
      "step 2200, entropy loss: 0.049978, l2_loss: 5093.362305, total loss: 0.406514\n",
      "0.99\n",
      "step 2300, entropy loss: 0.138174, l2_loss: 5092.834473, total loss: 0.494673\n",
      "0.95\n",
      "step 2400, entropy loss: 0.059934, l2_loss: 5092.267578, total loss: 0.416393\n",
      "1.0\n",
      "step 2500, entropy loss: 0.113518, l2_loss: 5091.738281, total loss: 0.469939\n",
      "0.97\n",
      "step 2600, entropy loss: 0.134636, l2_loss: 5091.204590, total loss: 0.491020\n",
      "0.96\n",
      "step 2700, entropy loss: 0.181131, l2_loss: 5090.633301, total loss: 0.537475\n",
      "0.96\n",
      "step 2800, entropy loss: 0.128528, l2_loss: 5090.083984, total loss: 0.484833\n",
      "0.98\n",
      "step 2900, entropy loss: 0.081981, l2_loss: 5089.536133, total loss: 0.438248\n",
      "0.96\n",
      "step 3000, entropy loss: 0.070619, l2_loss: 5088.979980, total loss: 0.426848\n",
      "0.96\n",
      "0.9736\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "结果不太好，在训练集上的结果都不尽如人意，应该是核太少了，欠拟合。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2）增大kernel数。相当于对特征的学习更多样化。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层，改成64个核\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 64, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层，改成128个核\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 128, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.387792, l2_loss: 6804.627930, total loss: 0.864116\n",
      "0.91\n",
      "step 200, entropy loss: 0.405900, l2_loss: 6804.462891, total loss: 0.882213\n",
      "0.9\n",
      "step 300, entropy loss: 0.250836, l2_loss: 6804.066406, total loss: 0.727121\n",
      "0.94\n",
      "step 400, entropy loss: 0.244408, l2_loss: 6803.552246, total loss: 0.720657\n",
      "0.96\n",
      "step 500, entropy loss: 0.217147, l2_loss: 6802.984863, total loss: 0.693356\n",
      "0.96\n",
      "step 600, entropy loss: 0.250365, l2_loss: 6802.401855, total loss: 0.726533\n",
      "0.92\n",
      "step 700, entropy loss: 0.118123, l2_loss: 6801.774414, total loss: 0.594248\n",
      "0.98\n",
      "step 800, entropy loss: 0.105218, l2_loss: 6801.119629, total loss: 0.581296\n",
      "0.99\n",
      "step 900, entropy loss: 0.169835, l2_loss: 6800.464355, total loss: 0.645868\n",
      "0.97\n",
      "step 1000, entropy loss: 0.095333, l2_loss: 6799.771484, total loss: 0.571317\n",
      "0.98\n",
      "0.9602\n",
      "step 1100, entropy loss: 0.093210, l2_loss: 6799.089355, total loss: 0.569146\n",
      "0.97\n",
      "step 1200, entropy loss: 0.227046, l2_loss: 6798.386719, total loss: 0.702934\n",
      "0.95\n",
      "step 1300, entropy loss: 0.172736, l2_loss: 6797.652344, total loss: 0.648572\n",
      "0.95\n",
      "step 1400, entropy loss: 0.158506, l2_loss: 6796.926270, total loss: 0.634290\n",
      "0.99\n",
      "step 1500, entropy loss: 0.123214, l2_loss: 6796.196289, total loss: 0.598948\n",
      "0.96\n",
      "step 1600, entropy loss: 0.056557, l2_loss: 6795.453125, total loss: 0.532239\n",
      "0.97\n",
      "step 1700, entropy loss: 0.085650, l2_loss: 6794.698730, total loss: 0.561278\n",
      "0.94\n",
      "step 1800, entropy loss: 0.088201, l2_loss: 6793.938965, total loss: 0.563777\n",
      "0.98\n",
      "step 1900, entropy loss: 0.070890, l2_loss: 6793.187988, total loss: 0.546414\n",
      "0.98\n",
      "step 2000, entropy loss: 0.087637, l2_loss: 6792.422363, total loss: 0.563107\n",
      "0.96\n",
      "0.972\n",
      "step 2100, entropy loss: 0.068112, l2_loss: 6791.660156, total loss: 0.543529\n",
      "0.97\n",
      "step 2200, entropy loss: 0.038219, l2_loss: 6790.891602, total loss: 0.513582\n",
      "1.0\n",
      "step 2300, entropy loss: 0.146743, l2_loss: 6790.115234, total loss: 0.622051\n",
      "0.98\n",
      "step 2400, entropy loss: 0.040048, l2_loss: 6789.318359, total loss: 0.515300\n",
      "0.99\n",
      "step 2500, entropy loss: 0.116732, l2_loss: 6788.518555, total loss: 0.591928\n",
      "0.97\n",
      "step 2600, entropy loss: 0.200193, l2_loss: 6787.723633, total loss: 0.675333\n",
      "0.96\n",
      "step 2700, entropy loss: 0.037951, l2_loss: 6786.952148, total loss: 0.513037\n",
      "0.99\n",
      "step 2800, entropy loss: 0.050786, l2_loss: 6786.151367, total loss: 0.525817\n",
      "0.99\n",
      "step 2900, entropy loss: 0.063622, l2_loss: 6785.363281, total loss: 0.538597\n",
      "1.0\n",
      "step 3000, entropy loss: 0.029274, l2_loss: 6784.557129, total loss: 0.504193\n",
      "1.0\n",
      "0.9756\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这个可能是有点过拟合。在训练集上效果超好，但是测试集上，效果一般。。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. 正则化"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们可以通过正则化来抑制过拟合。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "另外注意到，初始化的时候，还有偏置项没有更改过，在这块加一下偏执项的初始化"
   ]
  },
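  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the coefficient raised from 7e-5 to 7e-4, the L2 penalty dominates the total loss. The arithmetic checks out against the step-100 values logged in the next run (entropy ~0.374, raw l2_loss ~11401.6):\n",
    "\n",
    "```python\n",
    "# total_loss = cross_entropy + coeff * l2_loss, using logged step-100 values\n",
    "entropy, l2 = 0.373884, 11401.623047\n",
    "coeff = 7e-4\n",
    "penalty = coeff * l2                # ~7.98, dwarfing the entropy term\n",
    "print(round(entropy + penalty, 3))  # ~8.355, matching the logged total loss\n",
    "```"
   ]
  },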
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层，加了一下bias的初始化\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 64, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                             bias_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层，加了一下bias的初始化\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 128, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                            bias_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.373884, l2_loss: 11401.623047, total loss: 8.355021\n",
      "0.91\n",
      "step 200, entropy loss: 0.264061, l2_loss: 11386.431641, total loss: 8.234563\n",
      "0.89\n",
      "step 300, entropy loss: 0.249413, l2_loss: 11371.066406, total loss: 8.209160\n",
      "0.97\n",
      "step 400, entropy loss: 0.199625, l2_loss: 11355.615234, total loss: 8.148556\n",
      "0.97\n",
      "step 500, entropy loss: 0.391170, l2_loss: 11340.109375, total loss: 8.329247\n",
      "0.94\n",
      "step 600, entropy loss: 0.140535, l2_loss: 11324.624023, total loss: 8.067772\n",
      "0.96\n",
      "step 700, entropy loss: 0.184961, l2_loss: 11309.108398, total loss: 8.101337\n",
      "0.96\n",
      "step 800, entropy loss: 0.089154, l2_loss: 11293.606445, total loss: 7.994678\n",
      "0.93\n",
      "step 900, entropy loss: 0.205197, l2_loss: 11278.093750, total loss: 8.099862\n",
      "0.95\n",
      "step 1000, entropy loss: 0.168704, l2_loss: 11262.583984, total loss: 8.052513\n",
      "0.97\n",
      "0.9596\n",
      "step 1100, entropy loss: 0.120096, l2_loss: 11247.100586, total loss: 7.993066\n",
      "0.98\n",
      "step 1200, entropy loss: 0.272123, l2_loss: 11231.624023, total loss: 8.134259\n",
      "0.96\n",
      "step 1300, entropy loss: 0.066891, l2_loss: 11216.151367, total loss: 7.918197\n",
      "0.98\n",
      "step 1400, entropy loss: 0.105502, l2_loss: 11200.679688, total loss: 7.945977\n",
      "0.98\n",
      "step 1500, entropy loss: 0.075673, l2_loss: 11185.238281, total loss: 7.905339\n",
      "0.99\n",
      "step 1600, entropy loss: 0.105232, l2_loss: 11169.839844, total loss: 7.924120\n",
      "0.98\n",
      "step 1700, entropy loss: 0.118338, l2_loss: 11154.416016, total loss: 7.926430\n",
      "0.96\n",
      "step 1800, entropy loss: 0.041339, l2_loss: 11139.034180, total loss: 7.838663\n",
      "1.0\n",
      "step 1900, entropy loss: 0.095463, l2_loss: 11123.659180, total loss: 7.882024\n",
      "0.99\n",
      "step 2000, entropy loss: 0.087244, l2_loss: 11108.274414, total loss: 7.863036\n",
      "0.99\n",
      "0.9695\n",
      "step 2100, entropy loss: 0.097816, l2_loss: 11092.925781, total loss: 7.862863\n",
      "0.98\n",
      "step 2200, entropy loss: 0.164794, l2_loss: 11077.581055, total loss: 7.919100\n",
      "0.97\n",
      "step 2300, entropy loss: 0.076767, l2_loss: 11062.276367, total loss: 7.820360\n",
      "0.98\n",
      "step 2400, entropy loss: 0.066970, l2_loss: 11046.972656, total loss: 7.799850\n",
      "0.98\n",
      "step 2500, entropy loss: 0.069813, l2_loss: 11031.720703, total loss: 7.792017\n",
      "0.99\n",
      "step 2600, entropy loss: 0.099163, l2_loss: 11016.434570, total loss: 7.810667\n",
      "0.96\n",
      "step 2700, entropy loss: 0.137720, l2_loss: 11001.208008, total loss: 7.838565\n",
      "0.96\n",
      "step 2800, entropy loss: 0.034664, l2_loss: 10985.974609, total loss: 7.724845\n",
      "0.98\n",
      "step 2900, entropy loss: 0.094601, l2_loss: 10970.751953, total loss: 7.774127\n",
      "0.99\n",
      "step 3000, entropy loss: 0.034595, l2_loss: 10955.575195, total loss: 7.703497\n",
      "0.99\n",
      "0.9773\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "#正则系数稍微高一点\n",
    "total_loss = cross_entropy + 7e-4*l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.01\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "结果是略有提高。可以看出正则对训练数据的惩罚还是很可观的。一度以为，这个正则要失败。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. 学习率 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在目前最好的模型的基础上，根据上次的经验，减小学习率"
   ]
  },
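  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since `learning_rate` is a fed placeholder, the schedule lives on the Python side. A hypothetical step-decay sketch (illustrative only; the runs above feed a constant lr = 0.01):\n",
    "\n",
    "```python\n",
    "def step_decay(base_lr, step, drop=0.5, every=1000):\n",
    "    # halve the learning rate every `every` steps\n",
    "    return base_lr * drop ** (step // every)\n",
    "\n",
    "for step in (0, 999, 1000, 2500):\n",
    "    print(step, step_decay(0.01, step))\n",
    "```"
   ]
  },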
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 64, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                             bias_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 128, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                            bias_initializer=tf.truncated_normal_initializer(stddev=0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.580224, l2_loss: 13123.667969, total loss: 9.766791\n",
      "0.85\n",
      "step 200, entropy loss: 0.407675, l2_loss: 13115.013672, total loss: 9.588184\n",
      "0.88\n",
      "step 300, entropy loss: 0.376717, l2_loss: 13106.220703, total loss: 9.551072\n",
      "0.88\n",
      "step 400, entropy loss: 0.357013, l2_loss: 13097.351562, total loss: 9.525159\n",
      "0.96\n",
      "step 500, entropy loss: 0.357205, l2_loss: 13088.464844, total loss: 9.519131\n",
      "0.93\n",
      "step 600, entropy loss: 0.280279, l2_loss: 13079.550781, total loss: 9.435965\n",
      "0.95\n",
      "step 700, entropy loss: 0.395372, l2_loss: 13070.625000, total loss: 9.544809\n",
      "0.9\n",
      "step 800, entropy loss: 0.175492, l2_loss: 13061.675781, total loss: 9.318665\n",
      "0.97\n",
      "step 900, entropy loss: 0.288133, l2_loss: 13052.734375, total loss: 9.425047\n",
      "0.95\n",
      "step 1000, entropy loss: 0.129253, l2_loss: 13043.778320, total loss: 9.259898\n",
      "0.96\n",
      "0.943\n",
      "step 1100, entropy loss: 0.178820, l2_loss: 13034.822266, total loss: 9.303196\n",
      "0.94\n",
      "step 1200, entropy loss: 0.144364, l2_loss: 13025.865234, total loss: 9.262469\n",
      "0.96\n",
      "step 1300, entropy loss: 0.139873, l2_loss: 13016.918945, total loss: 9.251717\n",
      "0.96\n",
      "step 1400, entropy loss: 0.150584, l2_loss: 13007.953125, total loss: 9.256150\n",
      "0.97\n",
      "step 1500, entropy loss: 0.220573, l2_loss: 12998.990234, total loss: 9.319865\n",
      "0.95\n",
      "step 1600, entropy loss: 0.194397, l2_loss: 12990.045898, total loss: 9.287429\n",
      "0.95\n",
      "step 1700, entropy loss: 0.163535, l2_loss: 12981.086914, total loss: 9.250296\n",
      "0.94\n",
      "step 1800, entropy loss: 0.211313, l2_loss: 12972.153320, total loss: 9.291821\n",
      "0.9\n",
      "step 1900, entropy loss: 0.207178, l2_loss: 12963.203125, total loss: 9.281420\n",
      "0.93\n",
      "step 2000, entropy loss: 0.160696, l2_loss: 12954.272461, total loss: 9.228686\n",
      "0.97\n",
      "0.9606\n",
      "step 2100, entropy loss: 0.169621, l2_loss: 12945.339844, total loss: 9.231359\n",
      "0.97\n",
      "step 2200, entropy loss: 0.194232, l2_loss: 12936.397461, total loss: 9.249710\n",
      "0.98\n",
      "step 2300, entropy loss: 0.121292, l2_loss: 12927.473633, total loss: 9.170524\n",
      "0.95\n",
      "step 2400, entropy loss: 0.115537, l2_loss: 12918.550781, total loss: 9.158523\n",
      "0.97\n",
      "step 2500, entropy loss: 0.136105, l2_loss: 12909.617188, total loss: 9.172836\n",
      "0.96\n",
      "step 2600, entropy loss: 0.116557, l2_loss: 12900.705078, total loss: 9.147050\n",
      "0.98\n",
      "step 2700, entropy loss: 0.053192, l2_loss: 12891.791992, total loss: 9.077446\n",
      "0.98\n",
      "step 2800, entropy loss: 0.090361, l2_loss: 12882.882812, total loss: 9.108379\n",
      "0.98\n",
      "step 2900, entropy loss: 0.204993, l2_loss: 12873.972656, total loss: 9.216774\n",
      "0.95\n",
      "step 3000, entropy loss: 0.061225, l2_loss: 12865.079102, total loss: 9.066780\n",
      "1.0\n",
      "0.9688\n",
      "step 3100, entropy loss: 0.090077, l2_loss: 12856.182617, total loss: 9.089404\n",
      "0.99\n",
      "step 3200, entropy loss: 0.062088, l2_loss: 12847.295898, total loss: 9.055195\n",
      "0.99\n",
      "step 3300, entropy loss: 0.099421, l2_loss: 12838.413086, total loss: 9.086309\n",
      "0.99\n",
      "step 3400, entropy loss: 0.067313, l2_loss: 12829.545898, total loss: 9.047995\n",
      "0.99\n",
      "step 3500, entropy loss: 0.061520, l2_loss: 12820.651367, total loss: 9.035975\n",
      "0.98\n",
      "step 3600, entropy loss: 0.107142, l2_loss: 12811.788086, total loss: 9.075393\n",
      "0.97\n",
      "step 3700, entropy loss: 0.110963, l2_loss: 12802.911133, total loss: 9.073000\n",
      "0.95\n",
      "step 3800, entropy loss: 0.063140, l2_loss: 12794.055664, total loss: 9.018978\n",
      "0.97\n",
      "step 3900, entropy loss: 0.075384, l2_loss: 12785.197266, total loss: 9.025022\n",
      "0.98\n",
      "step 4000, entropy loss: 0.231193, l2_loss: 12776.362305, total loss: 9.174645\n",
      "0.96\n",
      "0.9739\n",
      "step 4100, entropy loss: 0.069842, l2_loss: 12767.513672, total loss: 9.007101\n",
      "0.99\n",
      "step 4200, entropy loss: 0.075508, l2_loss: 12758.665039, total loss: 9.006574\n",
      "0.99\n",
      "step 4300, entropy loss: 0.043424, l2_loss: 12749.825195, total loss: 8.968301\n",
      "1.0\n",
      "step 4400, entropy loss: 0.092623, l2_loss: 12740.986328, total loss: 9.011312\n",
      "0.97\n",
      "step 4500, entropy loss: 0.068846, l2_loss: 12732.172852, total loss: 8.981366\n",
      "0.97\n",
      "step 4600, entropy loss: 0.022093, l2_loss: 12723.357422, total loss: 8.928443\n",
      "1.0\n",
      "step 4700, entropy loss: 0.076382, l2_loss: 12714.550781, total loss: 8.976567\n",
      "0.98\n",
      "step 4800, entropy loss: 0.128279, l2_loss: 12705.728516, total loss: 9.022289\n",
      "1.0\n",
      "step 4900, entropy loss: 0.094515, l2_loss: 12696.913086, total loss: 8.982354\n",
      "0.94\n",
      "step 5000, entropy loss: 0.046766, l2_loss: 12688.122070, total loss: 8.928452\n",
      "1.0\n",
      "0.9744\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-4*l2_loss \n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train 学习率减小成原先的一半，增加迭代次数。 \n",
    "for step in range(5000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.005\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
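  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on the log above, the total loss should equal the cross-entropy plus 7e-4 times the L2 term. A minimal, self-contained check against the step-100 line (values copied from the output):\n",
    "\n",
    "```python\n",
    "# Values taken from the step-100 log line above.\n",
    "entropy_loss = 0.580224\n",
    "l2_loss_value = 13123.667969\n",
    "weight = 7e-4  # the L2 weight used in this cell\n",
    "\n",
    "total = entropy_loss + weight * l2_loss_value\n",
    "print(total)  # ~9.766792, matching the logged 9.766791 up to float32 rounding\n",
    "```\n",
    "\n",
    "The penalty term contributes about 9.19 of the 9.77 total, so the regularizer dominates the reported loss even while the cross-entropy keeps falling."
   ]
  },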
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "这个结果还是挺失望的。我感觉调小学习率和加大迭代次数是没啥错的。下一步试一下改一下初始化。根据上次的经验，想要大幅提高的话，还是要靠初始化。博文上说，因为kernel初始化已经打破了不平衡，所以bias可以全零初始化，或者加一个很小的常数进行初始化（有可能有利于所有神经元的激活）。所以打算试一下常数初始化"
   ]
  },
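  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the idea: tf.truncated_normal_initializer draws from a normal distribution and redraws any sample more than two standard deviations from the mean. The sketch below (pure Python, with a helper name of my own choosing) mimics that behaviour next to the constant bias initialization tried here:\n",
    "\n",
    "```python\n",
    "import random\n",
    "random.seed(0)\n",
    "\n",
    "# Mimic truncated-normal initialization: redraw samples that fall more\n",
    "# than two standard deviations from the mean.\n",
    "def truncated_normal(stddev, n):\n",
    "    values = []\n",
    "    while len(values) < n:\n",
    "        v = random.gauss(0.0, stddev)\n",
    "        if abs(v) <= 2 * stddev:\n",
    "            values.append(v)\n",
    "    return values\n",
    "\n",
    "weights = truncated_normal(0.1, 1000)\n",
    "biases = [0.1] * 1000  # constant bias initialization, as below\n",
    "print(max(abs(w) for w in weights) <= 0.2)  # True by construction\n",
    "```\n",
    "\n",
    "The random kernels already make every neuron compute something different, so a small constant bias merely shifts the ReLU inputs toward the active region."
   ]
  },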
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 64, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 128, [5,5],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                            bias_initializer=tf.constant_initializer(0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.520860, l2_loss: 1711.179565, total loss: 0.640642\n",
      "0.91\n",
      "step 200, entropy loss: 0.511958, l2_loss: 1711.559814, total loss: 0.631768\n",
      "0.9\n",
      "step 300, entropy loss: 0.278371, l2_loss: 1711.806519, total loss: 0.398197\n",
      "0.91\n",
      "step 400, entropy loss: 0.264579, l2_loss: 1711.990601, total loss: 0.384419\n",
      "0.94\n",
      "step 500, entropy loss: 0.312007, l2_loss: 1712.118530, total loss: 0.431855\n",
      "0.95\n",
      "step 600, entropy loss: 0.273405, l2_loss: 1712.253418, total loss: 0.393262\n",
      "0.94\n",
      "step 700, entropy loss: 0.193074, l2_loss: 1712.342651, total loss: 0.312938\n",
      "0.98\n",
      "step 800, entropy loss: 0.217225, l2_loss: 1712.435059, total loss: 0.337096\n",
      "0.9\n",
      "step 900, entropy loss: 0.161677, l2_loss: 1712.489136, total loss: 0.281551\n",
      "0.95\n",
      "step 1000, entropy loss: 0.279676, l2_loss: 1712.547485, total loss: 0.399554\n",
      "0.94\n",
      "0.9468\n",
      "step 1100, entropy loss: 0.227849, l2_loss: 1712.599487, total loss: 0.347731\n",
      "0.94\n",
      "step 1200, entropy loss: 0.161541, l2_loss: 1712.638550, total loss: 0.281425\n",
      "0.93\n",
      "step 1300, entropy loss: 0.170990, l2_loss: 1712.675537, total loss: 0.290878\n",
      "0.96\n",
      "step 1400, entropy loss: 0.139839, l2_loss: 1712.710205, total loss: 0.259728\n",
      "0.93\n",
      "step 1500, entropy loss: 0.110193, l2_loss: 1712.733765, total loss: 0.230085\n",
      "0.99\n",
      "step 1600, entropy loss: 0.185991, l2_loss: 1712.762573, total loss: 0.305884\n",
      "0.94\n",
      "step 1700, entropy loss: 0.095098, l2_loss: 1712.772217, total loss: 0.214992\n",
      "0.96\n",
      "step 1800, entropy loss: 0.105930, l2_loss: 1712.785034, total loss: 0.225825\n",
      "0.98\n",
      "step 1900, entropy loss: 0.194069, l2_loss: 1712.803101, total loss: 0.313965\n",
      "0.93\n",
      "step 2000, entropy loss: 0.123728, l2_loss: 1712.807373, total loss: 0.243625\n",
      "0.98\n",
      "0.9612\n",
      "step 2100, entropy loss: 0.058348, l2_loss: 1712.802124, total loss: 0.178244\n",
      "0.99\n",
      "step 2200, entropy loss: 0.105467, l2_loss: 1712.806885, total loss: 0.225363\n",
      "0.97\n",
      "step 2300, entropy loss: 0.140623, l2_loss: 1712.807739, total loss: 0.260519\n",
      "0.97\n",
      "step 2400, entropy loss: 0.042120, l2_loss: 1712.799561, total loss: 0.162016\n",
      "1.0\n",
      "step 2500, entropy loss: 0.154854, l2_loss: 1712.789062, total loss: 0.274749\n",
      "0.98\n",
      "step 2600, entropy loss: 0.093783, l2_loss: 1712.781006, total loss: 0.213678\n",
      "0.99\n",
      "step 2700, entropy loss: 0.113765, l2_loss: 1712.781494, total loss: 0.233659\n",
      "0.98\n",
      "step 2800, entropy loss: 0.147291, l2_loss: 1712.757324, total loss: 0.267184\n",
      "0.97\n",
      "step 2900, entropy loss: 0.087487, l2_loss: 1712.748169, total loss: 0.207379\n",
      "0.98\n",
      "step 3000, entropy loss: 0.168696, l2_loss: 1712.735229, total loss: 0.288587\n",
      "0.95\n",
      "0.9688\n",
      "step 3100, entropy loss: 0.091291, l2_loss: 1712.709229, total loss: 0.211180\n",
      "0.99\n",
      "step 3200, entropy loss: 0.124842, l2_loss: 1712.691040, total loss: 0.244730\n",
      "0.94\n",
      "step 3300, entropy loss: 0.094325, l2_loss: 1712.662720, total loss: 0.214211\n",
      "0.99\n",
      "step 3400, entropy loss: 0.095628, l2_loss: 1712.641968, total loss: 0.215513\n",
      "0.99\n",
      "step 3500, entropy loss: 0.122342, l2_loss: 1712.615845, total loss: 0.242226\n",
      "0.95\n",
      "step 3600, entropy loss: 0.132113, l2_loss: 1712.595581, total loss: 0.251995\n",
      "0.97\n",
      "step 3700, entropy loss: 0.083543, l2_loss: 1712.556885, total loss: 0.203422\n",
      "0.98\n",
      "step 3800, entropy loss: 0.076011, l2_loss: 1712.524902, total loss: 0.195888\n",
      "0.98\n",
      "step 3900, entropy loss: 0.098402, l2_loss: 1712.495972, total loss: 0.218276\n",
      "0.98\n",
      "step 4000, entropy loss: 0.047881, l2_loss: 1712.458374, total loss: 0.167753\n",
      "0.98\n",
      "0.9725\n",
      "step 4100, entropy loss: 0.060237, l2_loss: 1712.424805, total loss: 0.180106\n",
      "0.98\n",
      "step 4200, entropy loss: 0.079702, l2_loss: 1712.393799, total loss: 0.199570\n",
      "0.98\n",
      "step 4300, entropy loss: 0.181644, l2_loss: 1712.345093, total loss: 0.301508\n",
      "0.98\n",
      "step 4400, entropy loss: 0.060468, l2_loss: 1712.318726, total loss: 0.180330\n",
      "0.98\n",
      "step 4500, entropy loss: 0.036697, l2_loss: 1712.285767, total loss: 0.156558\n",
      "0.99\n",
      "step 4600, entropy loss: 0.034743, l2_loss: 1712.249268, total loss: 0.154600\n",
      "1.0\n",
      "step 4700, entropy loss: 0.062593, l2_loss: 1712.211670, total loss: 0.182447\n",
      "0.99\n",
      "step 4800, entropy loss: 0.296269, l2_loss: 1712.170288, total loss: 0.416120\n",
      "0.94\n",
      "step 4900, entropy loss: 0.089480, l2_loss: 1712.131348, total loss: 0.209329\n",
      "0.95\n",
      "step 5000, entropy loss: 0.077188, l2_loss: 1712.082642, total loss: 0.197034\n",
      "0.99\n",
      "0.9741\n",
      "step 5100, entropy loss: 0.074888, l2_loss: 1712.041016, total loss: 0.194731\n",
      "0.98\n",
      "step 5200, entropy loss: 0.042501, l2_loss: 1711.992310, total loss: 0.162341\n",
      "0.98\n",
      "step 5300, entropy loss: 0.062501, l2_loss: 1711.955200, total loss: 0.182338\n",
      "0.98\n",
      "step 5400, entropy loss: 0.077289, l2_loss: 1711.903320, total loss: 0.197122\n",
      "0.97\n",
      "step 5500, entropy loss: 0.064149, l2_loss: 1711.863770, total loss: 0.183979\n",
      "0.95\n",
      "step 5600, entropy loss: 0.052192, l2_loss: 1711.819214, total loss: 0.172019\n",
      "0.99\n",
      "step 5700, entropy loss: 0.062638, l2_loss: 1711.776245, total loss: 0.182462\n",
      "0.99\n",
      "step 5800, entropy loss: 0.070782, l2_loss: 1711.728638, total loss: 0.190603\n",
      "0.97\n",
      "step 5900, entropy loss: 0.043798, l2_loss: 1711.676025, total loss: 0.163615\n",
      "0.97\n",
      "step 6000, entropy loss: 0.119995, l2_loss: 1711.619263, total loss: 0.239808\n",
      "0.99\n",
      "0.9779\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss \n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train \n",
    "for step in range(6000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.005\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "这个结果也不好。下一步试一下增大第二卷积层的kernel size。因为看训练集上的效果也不太好，有可能是训练的不够。"
   ]
  },
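  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, growing conv2 from 5x5 to 8x8 kernels adds a substantial number of parameters. A small helper (the name is my own) to count a conv layer's parameters:\n",
    "\n",
    "```python\n",
    "def conv_param_count(kernel_size, in_channels, out_channels):\n",
    "    # k*k*in*out weights plus one bias per output channel\n",
    "    return kernel_size * kernel_size * in_channels * out_channels + out_channels\n",
    "\n",
    "print(conv_param_count(5, 64, 128))  # 204928 parameters for the 5x5 conv2\n",
    "print(conv_param_count(8, 64, 128))  # 524416 parameters for the 8x8 conv2\n",
    "```\n",
    "\n",
    "More than doubling the capacity of that layer also means it may need more training steps to converge."
   ]
  },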
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 64, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 128, [8,8],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                            bias_initializer=tf.constant_initializer(0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false,
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.470774, l2_loss: 4650.697754, total loss: 0.796322\n",
      "0.86\n",
      "step 200, entropy loss: 0.416571, l2_loss: 4650.791992, total loss: 0.742127\n",
      "0.88\n",
      "step 300, entropy loss: 0.387145, l2_loss: 4650.786133, total loss: 0.712700\n",
      "0.95\n",
      "step 400, entropy loss: 0.227541, l2_loss: 4650.698242, total loss: 0.553090\n",
      "0.94\n",
      "step 500, entropy loss: 0.295318, l2_loss: 4650.603516, total loss: 0.620860\n",
      "0.95\n",
      "step 600, entropy loss: 0.152128, l2_loss: 4650.476074, total loss: 0.477661\n",
      "0.96\n",
      "step 700, entropy loss: 0.235863, l2_loss: 4650.333008, total loss: 0.561386\n",
      "0.91\n",
      "step 800, entropy loss: 0.144146, l2_loss: 4650.172852, total loss: 0.469658\n",
      "0.98\n",
      "step 900, entropy loss: 0.179686, l2_loss: 4650.010742, total loss: 0.505187\n",
      "0.98\n",
      "step 1000, entropy loss: 0.165506, l2_loss: 4649.824219, total loss: 0.490994\n",
      "0.93\n",
      "0.9476\n",
      "step 1100, entropy loss: 0.229824, l2_loss: 4649.646484, total loss: 0.555300\n",
      "0.92\n",
      "step 1200, entropy loss: 0.174262, l2_loss: 4649.469238, total loss: 0.499725\n",
      "0.97\n",
      "step 1300, entropy loss: 0.119441, l2_loss: 4649.284180, total loss: 0.444891\n",
      "0.97\n",
      "step 1400, entropy loss: 0.177493, l2_loss: 4649.083984, total loss: 0.502929\n",
      "0.95\n",
      "step 1500, entropy loss: 0.197526, l2_loss: 4648.896973, total loss: 0.522949\n",
      "0.95\n",
      "step 1600, entropy loss: 0.189612, l2_loss: 4648.685547, total loss: 0.515020\n",
      "0.97\n",
      "step 1700, entropy loss: 0.052559, l2_loss: 4648.470703, total loss: 0.377952\n",
      "0.97\n",
      "step 1800, entropy loss: 0.094538, l2_loss: 4648.262695, total loss: 0.419916\n",
      "0.98\n",
      "step 1900, entropy loss: 0.052467, l2_loss: 4648.055176, total loss: 0.377831\n",
      "1.0\n",
      "step 2000, entropy loss: 0.034931, l2_loss: 4647.841309, total loss: 0.360280\n",
      "0.98\n",
      "0.9622\n",
      "step 2100, entropy loss: 0.056715, l2_loss: 4647.620605, total loss: 0.382049\n",
      "0.99\n",
      "step 2200, entropy loss: 0.076564, l2_loss: 4647.406250, total loss: 0.401882\n",
      "0.98\n",
      "step 2300, entropy loss: 0.092221, l2_loss: 4647.183594, total loss: 0.417524\n",
      "0.96\n",
      "step 2400, entropy loss: 0.076508, l2_loss: 4646.974121, total loss: 0.401797\n",
      "0.99\n",
      "step 2500, entropy loss: 0.114598, l2_loss: 4646.746094, total loss: 0.439870\n",
      "0.97\n",
      "step 2600, entropy loss: 0.058912, l2_loss: 4646.515137, total loss: 0.384168\n",
      "0.98\n",
      "step 2700, entropy loss: 0.055487, l2_loss: 4646.285156, total loss: 0.380727\n",
      "0.99\n",
      "step 2800, entropy loss: 0.108981, l2_loss: 4646.060059, total loss: 0.434205\n",
      "0.96\n",
      "step 2900, entropy loss: 0.156623, l2_loss: 4645.829102, total loss: 0.481831\n",
      "0.97\n",
      "step 3000, entropy loss: 0.118448, l2_loss: 4645.603027, total loss: 0.443640\n",
      "0.97\n",
      "0.9687\n",
      "step 3100, entropy loss: 0.112249, l2_loss: 4645.359375, total loss: 0.437424\n",
      "0.96\n",
      "step 3200, entropy loss: 0.102813, l2_loss: 4645.110352, total loss: 0.427971\n",
      "0.98\n",
      "step 3300, entropy loss: 0.033287, l2_loss: 4644.878418, total loss: 0.358428\n",
      "0.99\n",
      "step 3400, entropy loss: 0.045297, l2_loss: 4644.632812, total loss: 0.370421\n",
      "1.0\n",
      "step 3500, entropy loss: 0.142418, l2_loss: 4644.396484, total loss: 0.467526\n",
      "0.99\n",
      "step 3600, entropy loss: 0.068724, l2_loss: 4644.151855, total loss: 0.393815\n",
      "0.99\n",
      "step 3700, entropy loss: 0.058308, l2_loss: 4643.908691, total loss: 0.383381\n",
      "0.98\n",
      "step 3800, entropy loss: 0.057891, l2_loss: 4643.662109, total loss: 0.382947\n",
      "0.99\n",
      "step 3900, entropy loss: 0.122202, l2_loss: 4643.433105, total loss: 0.447243\n",
      "0.95\n",
      "step 4000, entropy loss: 0.053191, l2_loss: 4643.180664, total loss: 0.378213\n",
      "0.99\n",
      "0.9739\n",
      "step 4100, entropy loss: 0.061358, l2_loss: 4642.937500, total loss: 0.386364\n",
      "0.98\n",
      "step 4200, entropy loss: 0.063662, l2_loss: 4642.695312, total loss: 0.388651\n",
      "1.0\n",
      "step 4300, entropy loss: 0.105240, l2_loss: 4642.444824, total loss: 0.430211\n",
      "0.99\n",
      "step 4400, entropy loss: 0.057944, l2_loss: 4642.191895, total loss: 0.382898\n",
      "0.98\n",
      "step 4500, entropy loss: 0.072841, l2_loss: 4641.947754, total loss: 0.397777\n",
      "0.98\n",
      "step 4600, entropy loss: 0.055165, l2_loss: 4641.695312, total loss: 0.380084\n",
      "1.0\n",
      "step 4700, entropy loss: 0.083659, l2_loss: 4641.453125, total loss: 0.408561\n",
      "0.97\n",
      "step 4800, entropy loss: 0.035277, l2_loss: 4641.200195, total loss: 0.360161\n",
      "0.99\n",
      "step 4900, entropy loss: 0.100253, l2_loss: 4640.941895, total loss: 0.425119\n",
      "0.98\n",
      "step 5000, entropy loss: 0.037966, l2_loss: 4640.684570, total loss: 0.362813\n",
      "0.98\n",
      "0.9767\n",
      "step 5100, entropy loss: 0.050711, l2_loss: 4640.438477, total loss: 0.375542\n",
      "0.99\n",
      "step 5200, entropy loss: 0.087496, l2_loss: 4640.188477, total loss: 0.412309\n",
      "0.98\n",
      "step 5300, entropy loss: 0.093924, l2_loss: 4639.926270, total loss: 0.418719\n",
      "0.98\n",
      "step 5400, entropy loss: 0.042147, l2_loss: 4639.666504, total loss: 0.366923\n",
      "1.0\n",
      "step 5500, entropy loss: 0.105130, l2_loss: 4639.404785, total loss: 0.429888\n",
      "0.95\n",
      "step 5600, entropy loss: 0.063562, l2_loss: 4639.149414, total loss: 0.388302\n",
      "0.97\n",
      "step 5700, entropy loss: 0.093693, l2_loss: 4638.903320, total loss: 0.418417\n",
      "0.98\n",
      "step 5800, entropy loss: 0.058022, l2_loss: 4638.642090, total loss: 0.382727\n",
      "0.98\n",
      "step 5900, entropy loss: 0.139473, l2_loss: 4638.385254, total loss: 0.464160\n",
      "0.91\n",
      "step 6000, entropy loss: 0.106427, l2_loss: 4638.128418, total loss: 0.431096\n",
      "0.96\n",
      "0.9794\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 7e-5*l2_loss \n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train \n",
    "for step in range(6000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.005\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "至少看到了些希望。。。。后来发现忘记算全连接层的神经元个数"
   ]
  },
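  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The missing number can be derived directly from the architecture: every SAME convolution preserves the 28x28 spatial size, and each 2x2 stride-2 VALID pooling halves it, so the flattened conv output has 7*7*128 = 6272 features, which is the width given to fc1 in the cell below. A quick check:\n",
    "\n",
    "```python\n",
    "# Spatial size through the network: SAME convs keep it, each 2x2/2 pool halves it.\n",
    "size = 28\n",
    "for _ in range(2):  # two conv + pool stages\n",
    "    size //= 2      # 28 -> 14 -> 7\n",
    "flat_features = size * size * 128  # 128 channels after conv2\n",
    "print(flat_features)  # 6272\n",
    "```\n"
   ]
  },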
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#将图像做成28*28的\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "#第一层卷积层\n",
    "with tf.name_scope('conv1'):\n",
    "  h_conv1 = tf.layers.conv2d(x_image, 64, [10,10],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                             bias_initializer=tf.constant_initializer(0.1))\n",
    "#第一层池化\n",
    "with tf.name_scope('pool1'):\n",
    "  h_pool1 = tf.layers.max_pooling2d(h_conv1, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第二层卷积层\n",
    "with tf.name_scope('conv2'):\n",
    "  h_conv2 = tf.layers.conv2d(h_pool1, 128, [8,8],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,\n",
    "                             kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),\n",
    "                            bias_initializer=tf.constant_initializer(0.1))\n",
    "# 第二程池化\n",
    "with tf.name_scope('pool2'):\n",
    "  h_pool2 = tf.layers.max_pooling2d(h_conv2, pool_size=[2,2],\n",
    "                        strides=[2, 2], padding='VALID')\n",
    "#第一层全连接，仔细检查了一下，发现忘记改全连接层的神经元个数。。\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = tf.layers.flatten(h_pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 6272, activation=tf.nn.relu)\n",
    "#dropout防止过拟合\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "#第二层全连接输出十类\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.405876, l2_loss: 9844.585938, total loss: 1.094997\n",
      "0.91\n",
      "step 200, entropy loss: 0.216914, l2_loss: 9844.238281, total loss: 0.906011\n",
      "0.94\n",
      "step 300, entropy loss: 0.169266, l2_loss: 9843.818359, total loss: 0.858333\n",
      "0.97\n",
      "step 400, entropy loss: 0.135625, l2_loss: 9843.330078, total loss: 0.824658\n",
      "0.94\n",
      "step 500, entropy loss: 0.238031, l2_loss: 9842.832031, total loss: 0.927029\n",
      "0.94\n",
      "step 600, entropy loss: 0.093095, l2_loss: 9842.322266, total loss: 0.782058\n",
      "0.95\n",
      "step 700, entropy loss: 0.146281, l2_loss: 9841.791016, total loss: 0.835206\n",
      "0.96\n",
      "step 800, entropy loss: 0.100720, l2_loss: 9841.258789, total loss: 0.789608\n",
      "0.99\n",
      "step 900, entropy loss: 0.114472, l2_loss: 9840.704102, total loss: 0.803321\n",
      "0.98\n",
      "step 1000, entropy loss: 0.164626, l2_loss: 9840.164062, total loss: 0.853438\n",
      "0.97\n",
      "0.9611\n",
      "step 1100, entropy loss: 0.136324, l2_loss: 9839.619141, total loss: 0.825097\n",
      "0.97\n",
      "step 1200, entropy loss: 0.086681, l2_loss: 9839.080078, total loss: 0.775417\n",
      "1.0\n",
      "step 1300, entropy loss: 0.164620, l2_loss: 9838.513672, total loss: 0.853316\n",
      "0.96\n",
      "step 1400, entropy loss: 0.172223, l2_loss: 9837.953125, total loss: 0.860880\n",
      "0.98\n",
      "step 1500, entropy loss: 0.084693, l2_loss: 9837.390625, total loss: 0.773310\n",
      "0.97\n",
      "step 1600, entropy loss: 0.156768, l2_loss: 9836.812500, total loss: 0.845345\n",
      "0.98\n",
      "step 1700, entropy loss: 0.105717, l2_loss: 9836.232422, total loss: 0.794254\n",
      "0.98\n",
      "step 1800, entropy loss: 0.061543, l2_loss: 9835.672852, total loss: 0.750040\n",
      "0.99\n",
      "step 1900, entropy loss: 0.072038, l2_loss: 9835.087891, total loss: 0.760494\n",
      "0.98\n",
      "step 2000, entropy loss: 0.113482, l2_loss: 9834.494141, total loss: 0.801896\n",
      "0.98\n",
      "0.9712\n",
      "step 2100, entropy loss: 0.048387, l2_loss: 9833.916016, total loss: 0.736761\n",
      "0.99\n",
      "step 2200, entropy loss: 0.068634, l2_loss: 9833.326172, total loss: 0.756967\n",
      "0.99\n",
      "step 2300, entropy loss: 0.084705, l2_loss: 9832.739258, total loss: 0.772996\n",
      "1.0\n",
      "step 2400, entropy loss: 0.053509, l2_loss: 9832.158203, total loss: 0.741760\n",
      "0.99\n",
      "step 2500, entropy loss: 0.041913, l2_loss: 9831.559570, total loss: 0.730122\n",
      "1.0\n",
      "step 2600, entropy loss: 0.140569, l2_loss: 9830.958984, total loss: 0.828736\n",
      "0.98\n",
      "step 2700, entropy loss: 0.062287, l2_loss: 9830.359375, total loss: 0.750413\n",
      "0.98\n",
      "step 2800, entropy loss: 0.074268, l2_loss: 9829.759766, total loss: 0.762351\n",
      "0.99\n",
      "step 2900, entropy loss: 0.032633, l2_loss: 9829.150391, total loss: 0.720674\n",
      "1.0\n",
      "step 3000, entropy loss: 0.067211, l2_loss: 9828.568359, total loss: 0.755211\n",
      "0.98\n",
      "0.9771\n",
      "step 3100, entropy loss: 0.035250, l2_loss: 9827.965820, total loss: 0.723207\n",
      "1.0\n",
      "step 3200, entropy loss: 0.065344, l2_loss: 9827.365234, total loss: 0.753260\n",
      "1.0\n",
      "step 3300, entropy loss: 0.055652, l2_loss: 9826.763672, total loss: 0.743526\n",
      "0.98\n",
      "step 3400, entropy loss: 0.059680, l2_loss: 9826.160156, total loss: 0.747511\n",
      "0.99\n",
      "step 3500, entropy loss: 0.062347, l2_loss: 9825.558594, total loss: 0.750136\n",
      "1.0\n",
      "step 3600, entropy loss: 0.044389, l2_loss: 9824.963867, total loss: 0.732137\n",
      "0.99\n",
      "step 3700, entropy loss: 0.054038, l2_loss: 9824.345703, total loss: 0.741742\n",
      "0.99\n",
      "step 3800, entropy loss: 0.059483, l2_loss: 9823.733398, total loss: 0.747144\n",
      "0.98\n",
      "step 3900, entropy loss: 0.081454, l2_loss: 9823.125977, total loss: 0.769073\n",
      "0.98\n",
      "step 4000, entropy loss: 0.046990, l2_loss: 9822.520508, total loss: 0.734566\n",
      "0.98\n",
      "0.9796\n",
      "step 4100, entropy loss: 0.026339, l2_loss: 9821.913086, total loss: 0.713872\n",
      "0.97\n",
      "step 4200, entropy loss: 0.069889, l2_loss: 9821.292969, total loss: 0.757379\n",
      "0.98\n",
      "step 4300, entropy loss: 0.077711, l2_loss: 9820.685547, total loss: 0.765159\n",
      "0.97\n",
      "step 4400, entropy loss: 0.014977, l2_loss: 9820.073242, total loss: 0.702382\n",
      "0.99\n",
      "step 4500, entropy loss: 0.057962, l2_loss: 9819.444336, total loss: 0.745323\n",
      "0.99\n",
      "step 4600, entropy loss: 0.020119, l2_loss: 9818.838867, total loss: 0.707438\n",
      "0.99\n",
      "step 4700, entropy loss: 0.083836, l2_loss: 9818.214844, total loss: 0.771111\n",
      "0.99\n",
      "step 4800, entropy loss: 0.055063, l2_loss: 9817.599609, total loss: 0.742295\n",
      "0.98\n",
      "step 4900, entropy loss: 0.055399, l2_loss: 9816.978516, total loss: 0.742588\n",
      "1.0\n",
      "step 5000, entropy loss: 0.073519, l2_loss: 9816.367188, total loss: 0.760665\n",
      "0.99\n",
      "0.983\n",
      "step 5100, entropy loss: 0.043974, l2_loss: 9815.742188, total loss: 0.731076\n",
      "0.96\n",
      "step 5200, entropy loss: 0.088186, l2_loss: 9815.134766, total loss: 0.775245\n",
      "0.98\n",
      "step 5300, entropy loss: 0.025985, l2_loss: 9814.509766, total loss: 0.713000\n",
      "0.99\n",
      "step 5400, entropy loss: 0.031037, l2_loss: 9813.889648, total loss: 0.718009\n",
      "1.0\n",
      "step 5500, entropy loss: 0.048658, l2_loss: 9813.257812, total loss: 0.735586\n",
      "1.0\n",
      "step 5600, entropy loss: 0.041616, l2_loss: 9812.631836, total loss: 0.728501\n",
      "1.0\n",
      "step 5700, entropy loss: 0.036364, l2_loss: 9812.015625, total loss: 0.723205\n",
      "1.0\n",
      "step 5800, entropy loss: 0.046569, l2_loss: 9811.386719, total loss: 0.733366\n",
      "0.99\n",
      "step 5900, entropy loss: 0.005606, l2_loss: 9810.763672, total loss: 0.692359\n",
      "1.0\n",
      "step 6000, entropy loss: 0.071302, l2_loss: 9810.142578, total loss: 0.758012\n",
      "0.98\n",
      "0.984\n",
      "step 6100, entropy loss: 0.057722, l2_loss: 9809.516602, total loss: 0.744389\n",
      "0.99\n",
      "step 6200, entropy loss: 0.086603, l2_loss: 9808.894531, total loss: 0.773226\n",
      "0.99\n",
      "step 6300, entropy loss: 0.077756, l2_loss: 9808.267578, total loss: 0.764335\n",
      "0.98\n",
      "step 6400, entropy loss: 0.064699, l2_loss: 9807.636719, total loss: 0.751234\n",
      "0.99\n",
      "step 6500, entropy loss: 0.028661, l2_loss: 9807.004883, total loss: 0.715152\n",
      "1.0\n",
      "step 6600, entropy loss: 0.040936, l2_loss: 9806.386719, total loss: 0.727384\n",
      "1.0\n",
      "step 6700, entropy loss: 0.065273, l2_loss: 9805.753906, total loss: 0.751676\n",
      "0.99\n",
      "step 6800, entropy loss: 0.049547, l2_loss: 9805.127930, total loss: 0.735906\n",
      "0.98\n",
      "step 6900, entropy loss: 0.042937, l2_loss: 9804.506836, total loss: 0.729253\n",
      "0.98\n",
      "step 7000, entropy loss: 0.039637, l2_loss: 9803.885742, total loss: 0.725909\n",
      "0.99\n",
      "0.9826\n",
      "step 7100, entropy loss: 0.017571, l2_loss: 9803.243164, total loss: 0.703798\n",
      "1.0\n",
      "step 7200, entropy loss: 0.020943, l2_loss: 9802.607422, total loss: 0.707125\n",
      "1.0\n",
      "step 7300, entropy loss: 0.014113, l2_loss: 9801.986328, total loss: 0.700252\n",
      "1.0\n",
      "step 7400, entropy loss: 0.039650, l2_loss: 9801.371094, total loss: 0.725746\n",
      "0.99\n",
      "step 7500, entropy loss: 0.038014, l2_loss: 9800.734375, total loss: 0.724066\n",
      "1.0\n",
      "step 7600, entropy loss: 0.034692, l2_loss: 9800.107422, total loss: 0.720699\n",
      "1.0\n",
      "step 7700, entropy loss: 0.011240, l2_loss: 9799.474609, total loss: 0.697203\n",
      "1.0\n",
      "step 7800, entropy loss: 0.083991, l2_loss: 9798.837891, total loss: 0.769910\n",
      "0.99\n",
      "step 7900, entropy loss: 0.009112, l2_loss: 9798.216797, total loss: 0.694988\n",
      "1.0\n",
      "step 8000, entropy loss: 0.013625, l2_loss: 9797.585938, total loss: 0.699456\n",
      "1.0\n",
      "0.9842\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "\n",
    "# L2-regularize every trainable variable; tf.nn.l2_loss(w) is sum(w**2) / 2.\n",
    "l2_loss = tf.add_n([tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)])\n",
    "total_loss = cross_entropy + 7e-5 * l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "# Build the evaluation ops once, outside the loop, so the graph does not grow.\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(8000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  lr = 0.005\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss],\n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate: lr, keep_prob: 0.5})\n",
    "\n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' %\n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model; dropout must be disabled (keep_prob=1.0) at test time.\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 1.0}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                        y_: mnist.test.labels, keep_prob: 1.0}))"
   ]
  },
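  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `total_loss` above adds a small L2 penalty over all trainable variables to the cross entropy. A minimal NumPy sketch of how that penalty is assembled (the weight values here are made up for illustration; `tf.nn.l2_loss(w)` computes `sum(w**2) / 2`, and `tf.add_n` sums the per-variable terms):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Made-up small tensors standing in for the model's trainable variables.\n",
    "weights = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([2.0, 2.0])]\n",
    "\n",
    "# Per-variable penalty sum(w**2) / 2, summed over all variables.\n",
    "l2_demo = sum((w ** 2).sum() / 2 for w in weights)  # 15.0 + 4.0 = 19.0\n",
    "total_demo = 0.4 + 7e-5 * l2_demo  # 0.4 stands in for the cross entropy"
   ]
  },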
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This model is currently both slow to train and mediocre in accuracy; it is just good enough to satisfy the grading rubric. If you wanted to push further, a larger learning rate would be worth trying, since training converges very slowly. Other options: add more convolutional layers, tune the dropout rate, or replace max pooling with average pooling. But I simply don't have the capacity to keep running experiments; this assignment has already taken too much time."
   ]
  }
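  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One of the ideas above, replacing max pooling with average pooling, can be sketched in NumPy (a toy 2x2, stride-2 pool on a single-channel array, not the actual TF op):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def avg_pool_2x2(x):\n",
    "  # 2x2 average pooling with stride 2; assumes both dimensions are even.\n",
    "  h, w = x.shape\n",
    "  return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))\n",
    "\n",
    "a_demo = np.arange(16, dtype=float).reshape(4, 4)\n",
    "p_demo = avg_pool_2x2(a_demo)  # each output is the mean of one 2x2 window"
   ]
  }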
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
