{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Week 7 assignment:\n",
    "This assignment is completed with Keras (TensorFlow backend).\n",
    "\n",
    "Hyperparameters explored:  \n",
    "convolution kernel size  \n",
    "number of convolution kernels  \n",
    "learning rate  \n",
    "regularization factor  \n",
    "weight-initialization distribution parameters  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'channels_last'"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "from keras.layers.core import Dense, Flatten\n",
    "from keras.layers.convolutional import Conv2D\n",
    "from keras.layers.pooling import MaxPooling2D\n",
    "\n",
    "from keras import backend as K\n",
    "\n",
    "K.image_data_format()  # check the backend's image data format ('channels_last')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./input_data/train-images-idx3-ubyte.gz\n",
      "Extracting ./input_data/train-labels-idx1-ubyte.gz\n",
      "Extracting ./input_data/t10k-images-idx3-ubyte.gz\n",
      "Extracting ./input_data/t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
    "data_dir = './input_data/'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)\n",
    "# Define placeholders for the inputs, labels, and learning rate\n",
    "x = tf.placeholder(tf.float32, [None, 784])\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "learning_rate = tf.placeholder(tf.float32)\n",
    "\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Tuning the number and size of the convolution kernels  \n",
    "##### Results with 5×5 kernels: overall, convergence is fast and accuracy is good  \n",
    "step 100, entropy loss: 1.972595, l2_loss: 790.710449, total loss: 2.027945，train accuracy: 0.66，test accuracy: 0.637  \n",
    "step 200, entropy loss: 0.693521, l2_loss: 792.764099, total loss: 0.749014，train accuracy: 0.94，test accuracy: 0.852  \n",
    "step 300, entropy loss: 0.559920, l2_loss: 793.880188, total loss: 0.615491，train accuracy: 0.9，test accuracy: 0.909  \n",
    "step 400, entropy loss: 0.352122, l2_loss: 794.489380, total loss: 0.407736，train accuracy: 0.96，test accuracy: 0.851  \n",
    "step 500, entropy loss: 0.296867, l2_loss: 794.874817, total loss: 0.352508，train accuracy: 0.96，test accuracy: 0.922  \n",
    "step 600, entropy loss: 0.206986, l2_loss: 795.275146, total loss: 0.262655，train accuracy: 0.98，test accuracy: 0.934  \n",
    "step 700, entropy loss: 0.251357, l2_loss: 795.531860, total loss: 0.307044，train accuracy: 0.96，test accuracy: 0.936  \n",
    "step 800, entropy loss: 0.457139, l2_loss: 795.739868, total loss: 0.512841，train accuracy: 0.92，test accuracy: 0.952  \n",
    "step 900, entropy loss: 0.211582, l2_loss: 795.968384, total loss: 0.267300，train accuracy: 0.98，test accuracy: 0.958  \n",
    "step 1000, entropy loss: 0.133377, l2_loss: 796.170410, total loss: 0.189109，train accuracy: 0.98，test accuracy: 0.955  \n",
    "##### Results with 3×3 kernels: convergence is fast and runtime is shorter than with 5×5, but accuracy is worse  \n",
    "step 100, entropy loss: 1.905364, l2_loss: 1581.038574, total loss: 2.016037，train accuracy: 0.8，test accuracy: 0.695  \n",
    "step 200, entropy loss: 0.786170, l2_loss: 1583.091064, total loss: 0.896987，train accuracy: 0.88，test accuracy: 0.799  \n",
    "step 300, entropy loss: 0.282845, l2_loss: 1584.192383, total loss: 0.393738，train accuracy: 0.98，test accuracy: 0.883  \n",
    "step 400, entropy loss: 0.414198, l2_loss: 1584.744873, total loss: 0.525130，train accuracy: 0.94，test accuracy: 0.897  \n",
    "step 500, entropy loss: 0.201517, l2_loss: 1585.066895, total loss: 0.312472，train accuracy: 0.96，test accuracy: 0.914  \n",
    "step 600, entropy loss: 0.319972, l2_loss: 1585.244629, total loss: 0.430939，train accuracy: 0.86，test accuracy: 0.92  \n",
    "step 700, entropy loss: 0.407061, l2_loss: 1585.441772, total loss: 0.518042，train accuracy: 0.92，test accuracy: 0.929  \n",
    "step 800, entropy loss: 0.217380, l2_loss: 1585.517334, total loss: 0.328366，train accuracy: 0.98，test accuracy: 0.924  \n",
    "step 900, entropy loss: 0.370214, l2_loss: 1585.678345, total loss: 0.481212，train accuracy: 0.92，test accuracy: 0.931  \n",
    "step 1000, entropy loss: 0.296941, l2_loss: 1585.740723, total loss: 0.407943，train accuracy: 0.9，test accuracy: 0.937  \n",
    "#### <font color=#00ffff size=5> Conclusion: 5×5 kernels are the better choice; they converge faster and perform better</font>\n",
    "##### Kernel counts 32 (first layer) and 64 (second layer): converges faster than (18, 36) and is stable.\n",
    "step 100, entropy loss: 1.972595, l2_loss: 790.710449, total loss: 2.027945，train accuracy: 0.66，test accuracy: 0.637  \n",
    "step 200, entropy loss: 0.693521, l2_loss: 792.764099, total loss: 0.749014，train accuracy: 0.94，test accuracy: 0.852  \n",
    "step 300, entropy loss: 0.559920, l2_loss: 793.880188, total loss: 0.615491，train accuracy: 0.9，test accuracy: 0.909  \n",
    "step 400, entropy loss: 0.352122, l2_loss: 794.489380, total loss: 0.407736，train accuracy: 0.96，test accuracy: 0.851  \n",
    "step 500, entropy loss: 0.296867, l2_loss: 794.874817, total loss: 0.352508，train accuracy: 0.96，test accuracy: 0.922  \n",
    "step 600, entropy loss: 0.206986, l2_loss: 795.275146, total loss: 0.262655，train accuracy: 0.98，test accuracy: 0.934  \n",
    "step 700, entropy loss: 0.251357, l2_loss: 795.531860, total loss: 0.307044，train accuracy: 0.96，test accuracy: 0.936  \n",
    "step 800, entropy loss: 0.457139, l2_loss: 795.739868, total loss: 0.512841，train accuracy: 0.92，test accuracy: 0.952  \n",
    "step 900, entropy loss: 0.211582, l2_loss: 795.968384, total loss: 0.267300，train accuracy: 0.98，test accuracy: 0.958  \n",
    "step 1000, entropy loss: 0.133377, l2_loss: 796.170410, total loss: 0.189109，train accuracy: 0.98，test accuracy: 0.955  \n",
    "##### Kernel counts 18 (first layer) and 36 (second layer): training is unstable; discarded.\n",
    "step 100, entropy loss: 1.975561, l2_loss: 2241.770020, total loss: 2.132485，train accuracy: 0.66，test accuracy: 0.623  \n",
    "step 200, entropy loss: 0.686162, l2_loss: 2243.539551, total loss: 0.843210，train accuracy: 0.88，test accuracy: 0.841  \n",
    "step 300, entropy loss: 0.746328, l2_loss: 2244.381104, total loss: 0.903435，train accuracy: 0.8，test accuracy: 0.868  \n",
    "step 400, entropy loss: 0.338799, l2_loss: 2244.739258, total loss: 0.495931，train accuracy: 0.92，test accuracy: 0.902  \n",
    "step 500, entropy loss: 0.371145, l2_loss: 2244.997803, total loss: 0.528295，train accuracy: 0.94，test accuracy: 0.912  \n",
    "step 600, entropy loss: 0.325977, l2_loss: 2245.132812, total loss: 0.483136，train accuracy: 0.88，test accuracy: 0.911  \n",
    "step 700, entropy loss: 0.260019, l2_loss: 2245.179688, total loss: 0.417181，train accuracy: 1.0，test accuracy: 0.935  \n",
    "step 800, entropy loss: 0.209995, l2_loss: 2245.241455, total loss: 0.367162，train accuracy: 0.98，test accuracy: 0.924  \n",
    "step 900, entropy loss: 0.272064, l2_loss: 2245.229492, total loss: 0.429230，train accuracy: 0.92，test accuracy: 0.927  \n",
    "step 1000, entropy loss: 0.166010, l2_loss: 2245.236816, total loss: 0.323177，train accuracy: 0.98，test accuracy: 0.95  \n",
    "##### Kernel counts 50 (first layer) and 100 (second layer): convergence is slower and there is some overfitting; discarded.\n",
    "step 100, entropy loss: 1.830758, l2_loss: 3116.255615, total loss: 2.048896，train accuracy: 0.68，test accuracy: 0.703  \n",
    "step 200, entropy loss: 0.506070, l2_loss: 3118.018066, total loss: 0.724332，train accuracy: 0.92，test accuracy: 0.85  \n",
    "step 300, entropy loss: 0.479089, l2_loss: 3118.661621, total loss: 0.697396，train accuracy: 0.86，test accuracy: 0.88  \n",
    "step 400, entropy loss: 0.415219, l2_loss: 3118.919434, total loss: 0.633543，train accuracy: 0.92，test accuracy: 0.92  \n",
    "step 500, entropy loss: 0.502408, l2_loss: 3118.984375, total loss: 0.720736，train accuracy: 0.86，test accuracy: 0.912  \n",
    "step 600, entropy loss: 0.281466, l2_loss: 3118.973145, total loss: 0.499795，train accuracy: 0.94，test accuracy: 0.916  \n",
    "step 700, entropy loss: 0.130691, l2_loss: 3118.886475, total loss: 0.349013，train accuracy: 1.0，test accuracy: 0.933  \n",
    "step 800, entropy loss: 0.094076, l2_loss: 3118.853271, total loss: 0.312396，train accuracy: 1.0，test accuracy: 0.925  \n",
    "step 900, entropy loss: 0.256785, l2_loss: 3118.749756, total loss: 0.475097，train accuracy: 1.0，test accuracy: 0.926  \n",
    "step 1000, entropy loss: 0.186754, l2_loss: 3118.623291, total loss: 0.405058，train accuracy: 0.92，test accuracy: 0.935  \n",
    "#### <font color=#00ffff size=5> Conclusion: 32 kernels in the first layer and 64 in the second work best</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "net = Conv2D(32, kernel_size=[5, 5], strides=[1, 1], activation='relu',\n",
    "             padding='same',\n",
    "             input_shape=[28, 28, 1])(x_image)\n",
    "# First convolution; input_shape declares the input form [28,28,1]. Conv2D is\n",
    "# a class: it is instantiated first, then called on the corresponding tensor.\n",
    "net = MaxPooling2D(pool_size=[2, 2])(net)\n",
    "# First max pooling\n",
    "net = Conv2D(64, kernel_size=[5, 5], strides=[1, 1], activation='relu',\n",
    "             padding='same')(net)\n",
    "# Second convolution layer\n",
    "net = MaxPooling2D(pool_size=[2, 2])(net)\n",
    "# Second pooling layer\n",
    "net = Flatten()(net)\n",
    "# Flattens the multi-dimensional input to 1-D; commonly used in the\n",
    "# transition from convolutional layers to fully connected layers"
   ]
  },
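  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick standalone shape check (a sketch added for clarity; it does not touch the graph): with 'same' padding and stride 1 the convolutions preserve height and width, and each 2×2 max pooling halves them, so Flatten sees 7 × 7 × 64 = 3136 features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standalone shape check: 'same' convolutions keep H and W unchanged,\n",
    "# and each 2x2 max pooling halves them (integer division).\n",
    "h = w = 28\n",
    "for _ in range(2):            # two conv + pool stages\n",
    "    h, w = h // 2, w // 2     # only the pooling changes the spatial size\n",
    "flat_features = h * w * 64    # 64 kernels in the second conv layer\n",
    "print(h, w, flat_features)    # 7 7 3136"
   ]
  },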
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Tuning the weight initialization\n",
    "##### Results with the default Dense weight initialization (glorot_uniform):\n",
    "step 100, entropy loss: 1.972595, l2_loss: 790.710449, total loss: 2.027945，train accuracy: 0.66，test accuracy: 0.637  \n",
    "step 200, entropy loss: 0.693521, l2_loss: 792.764099, total loss: 0.749014，train accuracy: 0.94，test accuracy: 0.852  \n",
    "step 300, entropy loss: 0.559920, l2_loss: 793.880188, total loss: 0.615491，train accuracy: 0.9，test accuracy: 0.909  \n",
    "step 400, entropy loss: 0.352122, l2_loss: 794.489380, total loss: 0.407736，train accuracy: 0.96，test accuracy: 0.851  \n",
    "step 500, entropy loss: 0.296867, l2_loss: 794.874817, total loss: 0.352508，train accuracy: 0.96，test accuracy: 0.922  \n",
    "step 600, entropy loss: 0.206986, l2_loss: 795.275146, total loss: 0.262655，train accuracy: 0.98，test accuracy: 0.934  \n",
    "step 700, entropy loss: 0.251357, l2_loss: 795.531860, total loss: 0.307044，train accuracy: 0.96，test accuracy: 0.936  \n",
    "step 800, entropy loss: 0.457139, l2_loss: 795.739868, total loss: 0.512841，train accuracy: 0.92，test accuracy: 0.952  \n",
    "step 900, entropy loss: 0.211582, l2_loss: 795.968384, total loss: 0.267300，train accuracy: 0.98，test accuracy: 0.958  \n",
    "step 1000, entropy loss: 0.133377, l2_loss: 796.170410, total loss: 0.189109，train accuracy: 0.98，test accuracy: 0.955  \n",
    "##### Results with He initialization: convergence is fast and training accuracy is good, but the L2 loss is far too large, inflating the total loss; the next step tunes the L2 regularization factor.\n",
    "step 100, entropy loss: 0.757492, l2_loss: 3144445591552.000000, total loss: 220111200.000000，train accuracy: 0.7，test accuracy: 0.716  \n",
    "step 200, entropy loss: 0.341661, l2_loss: 3144340733952.000000, total loss: 220103856.000000，train accuracy: 1.0，test accuracy: 0.893  \n",
    "step 300, entropy loss: 0.266244, l2_loss: 3144274673664.000000, total loss: 220099232.000000，train accuracy: 0.96，test accuracy: 0.918  \n",
    "step 400, entropy loss: 0.088371, l2_loss: 3144222769152.000000, total loss: 220095600.000000，train accuracy: 1.0，test accuracy: 0.941  \n",
    "step 500, entropy loss: 0.174852, l2_loss: 3144177942528.000000, total loss: 220092464.000000，train accuracy: 0.98，test accuracy: 0.93  \n",
    "step 600, entropy loss: 0.122784, l2_loss: 3144119222272.000000, total loss: 220088352.000000，train accuracy: 0.98，test accuracy: 0.954  \n",
    "step 700, entropy loss: 0.167331, l2_loss: 3144045035520.000000, total loss: 220083152.000000，train accuracy: 1.0，test accuracy: 0.966  \n",
    "step 800, entropy loss: 0.235171, l2_loss: 3143912390656.000000, total loss: 220073872.000000，train accuracy: 0.98，test accuracy: 0.964  \n",
    "step 900, entropy loss: 0.307853, l2_loss: 3143751434240.000000, total loss: 220062608.000000，train accuracy: 0.96，test accuracy: 0.952  \n",
    "step 1000, entropy loss: 0.122105, l2_loss: 3143618265088.000000, total loss: 220053280.000000，train accuracy: 0.98，test accuracy: 0.974  \n",
    "#### <font color=#00ffff size=5> Conclusion: He initialization is the better choice; it converges faster and performs better</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "net = Dense(1000,\n",
    "            activation='relu',\n",
    "            # He initialization: stddev = sqrt(2 / fan_in). Note the flattened\n",
    "            # input here actually has 7*7*64 = 3136 features, not 64.\n",
    "            kernel_initializer=tf.truncated_normal_initializer(mean=0, stddev=np.sqrt(2/64))\n",
    "#             bias_initializer=tf.constant_initializer(0.01)\n",
    "            )(net)\n",
    "# keras.layers.core.Dense(\n",
    "# units,                               # output dimension of the layer\n",
    "# activation=None,                     # activation function; defaults to linear\n",
    "# use_bias=True,                       # whether to use a bias b\n",
    "# kernel_initializer='glorot_uniform', # initializer for the weights w, see keras/initializers.py\n",
    "# bias_initializer='zeros',            # initializer for the bias b\n",
    "# kernel_regularizer=None,             # regularizer applied to the weights w, see keras/regularizers.py\n",
    "# bias_regularizer=None,               # regularizer applied to the bias vector b\n",
    "# activity_regularizer=None,           # regularizer applied to the layer output\n",
    "# kernel_constraint=None,              # constraint applied to the weights w\n",
    "# bias_constraint=None                 # constraint applied to the bias b\n",
    "# )\n",
    "# By default Dense draws its weights from a uniform (glorot_uniform) distribution.\n",
    "\n",
    "net = Dense(10,\n",
    "            activation='softmax',\n",
    "            kernel_initializer=tf.truncated_normal_initializer(mean=0, stddev=np.sqrt(2/1000))\n",
    "#             bias_initializer=tf.constant_initializer(0.01),\n",
    "            )(net)"
   ]
  },
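  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "He initialization draws weights with stddev = sqrt(2 / fan_in). A small standalone sketch of that rule (added for clarity; note the first Dense layer above actually receives 7×7×64 = 3136 flattened features, so the strict He stddev there would be sqrt(2/3136) ≈ 0.0253 rather than the sqrt(2/64) used in the code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def he_stddev(fan_in):\n",
    "    # He et al. (2015): stddev = sqrt(2 / fan_in) keeps the variance of\n",
    "    # ReLU activations roughly constant from layer to layer.\n",
    "    return np.sqrt(2.0 / fan_in)\n",
    "\n",
    "print(he_stddev(7 * 7 * 64))  # fan-in of the first Dense layer, ~0.0253\n",
    "print(he_stddev(1000))        # fan-in of the 10-way output layer, ~0.0447"
   ]
  },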
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Tuning the regularization factor and learning rate\n",
    "##### Results with a regularization factor of 7e-5:\n",
    "step 100, entropy loss: 0.757492, l2_loss: 3144445591552.000000, total loss: 220111200.000000，train accuracy: 0.7，test accuracy: 0.716  \n",
    "step 200, entropy loss: 0.341661, l2_loss: 3144340733952.000000, total loss: 220103856.000000，train accuracy: 1.0，test accuracy: 0.893  \n",
    "step 300, entropy loss: 0.266244, l2_loss: 3144274673664.000000, total loss: 220099232.000000，train accuracy: 0.96，test accuracy: 0.918  \n",
    "step 400, entropy loss: 0.088371, l2_loss: 3144222769152.000000, total loss: 220095600.000000，train accuracy: 1.0，test accuracy: 0.941  \n",
    "step 500, entropy loss: 0.174852, l2_loss: 3144177942528.000000, total loss: 220092464.000000，train accuracy: 0.98，test accuracy: 0.93  \n",
    "step 600, entropy loss: 0.122784, l2_loss: 3144119222272.000000, total loss: 220088352.000000，train accuracy: 0.98，test accuracy: 0.954  \n",
    "step 700, entropy loss: 0.167331, l2_loss: 3144045035520.000000, total loss: 220083152.000000，train accuracy: 1.0，test accuracy: 0.966  \n",
    "step 800, entropy loss: 0.235171, l2_loss: 3143912390656.000000, total loss: 220073872.000000，train accuracy: 0.98，test accuracy: 0.964  \n",
    "step 900, entropy loss: 0.307853, l2_loss: 3143751434240.000000, total loss: 220062608.000000，train accuracy: 0.96，test accuracy: 0.952  \n",
    "step 1000, entropy loss: 0.122105, l2_loss: 3143618265088.000000, total loss: 220053280.000000，train accuracy: 0.98，test accuracy: 0.974  \n",
    "##### With the factor reduced to 1e-15, the penalty is too small, causing some overfitting and poor generalization; next, increase the factor.\n",
    "step 100, entropy loss: 0.828835, l2_loss: 3144606547968.000000, total loss: 0.831979，train accuracy: 0.92，test accuracy: 0.801  \n",
    "step 200, entropy loss: 0.507846, l2_loss: 3144606547968.000000, total loss: 0.510990，train accuracy: 0.94，test accuracy: 0.878  \n",
    "step 300, entropy loss: 0.275858, l2_loss: 3144606547968.000000, total loss: 0.279003，train accuracy: 0.96，test accuracy: 0.915  \n",
    "step 400, entropy loss: 0.283015, l2_loss: 3144606547968.000000, total loss: 0.286160，train accuracy: 0.94，test accuracy: 0.91  \n",
    "step 500, entropy loss: 0.350819, l2_loss: 3144606547968.000000, total loss: 0.353963，train accuracy: 0.98，test accuracy: 0.941  \n",
    "step 600, entropy loss: 0.213628, l2_loss: 3144606547968.000000, total loss: 0.216773，train accuracy: 1.0，test accuracy: 0.912  \n",
    "step 700, entropy loss: 0.199091, l2_loss: 3144606547968.000000, total loss: 0.202235，train accuracy: 0.98，test accuracy: 0.955  \n",
    "step 800, entropy loss: 0.066865, l2_loss: 3144606547968.000000, total loss: 0.070010，train accuracy: 1.0，test accuracy: 0.958  \n",
    "step 900, entropy loss: 0.221289, l2_loss: 3144606547968.000000, total loss: 0.224434，train accuracy: 1.0，test accuracy: 0.941  \n",
    "step 1000, entropy loss: 0.169791, l2_loss: 3144606547968.000000, total loss: 0.172935，train accuracy: 0.98，test accuracy: 0.971 \n",
    "##### With the factor increased to 1e-4, both training and test accuracy are good, reaching 98% and above. The learning rate here is 0.01; the next step tunes the learning rate.\n",
    "step 100, entropy loss: 1.242514, l2_loss: 3144390017024.000000, total loss: 314439008.000000，train accuracy: 0.72，test accuracy: 0.737  \n",
    "step 200, entropy loss: 0.664470, l2_loss: 3144280702976.000000, total loss: 314428064.000000，train accuracy: 0.84，test accuracy: 0.858  \n",
    "step 300, entropy loss: 0.435289, l2_loss: 3144207564800.000000, total loss: 314420736.000000，train accuracy: 0.92，test accuracy: 0.909  \n",
    "step 400, entropy loss: 0.213909, l2_loss: 3144131805184.000000, total loss: 314413184.000000，train accuracy: 0.96，test accuracy: 0.922  \n",
    "step 500, entropy loss: 0.264836, l2_loss: 3144016723968.000000, total loss: 314401664.000000，train accuracy: 0.96，test accuracy: 0.951  \n",
    "step 600, entropy loss: 0.125755, l2_loss: 3143792590848.000000, total loss: 314379264.000000，train accuracy: 0.98，test accuracy: 0.963  \n",
    "step 700, entropy loss: 0.145120, l2_loss: 3143594147840.000000, total loss: 314359392.000000，train accuracy: 1.0，test accuracy: 0.97  \n",
    "step 800, entropy loss: 0.212400, l2_loss: 3143213252608.000000, total loss: 314321312.000000，train accuracy: 0.98，test accuracy: 0.962  \n",
    "step 900, entropy loss: 0.285791, l2_loss: 3142644137984.000000, total loss: 314264416.000000，train accuracy: 0.96，test accuracy: 0.966  \n",
    "step 1000, entropy loss: 0.179836, l2_loss: 3142422888448.000000, total loss: 314242272.000000，train accuracy: 0.98，test accuracy: 0.955  \n",
    "step 1100, entropy loss: 0.122338, l2_loss: 3142056411136.000000, total loss: 314205632.000000，train accuracy: 1.0，test accuracy: 0.977  \n",
    "step 1200, entropy loss: 0.153638, l2_loss: 3141606572032.000000, total loss: 314160640.000000，train accuracy: 1.0，test accuracy: 0.973  \n",
    "step 1300, entropy loss: 0.192305, l2_loss: 3140741496832.000000, total loss: 314074144.000000，train accuracy: 0.96，test accuracy: 0.973  \n",
    "step 1400, entropy loss: 0.130661, l2_loss: 3138447212544.000000, total loss: 313844704.000000，train accuracy: 0.98，test accuracy: 0.967  \n",
    "step 1500, entropy loss: 0.106697, l2_loss: 3137692237824.000000, total loss: 313769216.000000，train accuracy: 1.0，test accuracy: 0.978  \n",
    "step 1600, entropy loss: 0.084769, l2_loss: 3137243971584.000000, total loss: 313724384.000000，train accuracy: 1.0，test accuracy: 0.966  \n",
    "step 1700, entropy loss: 0.036589, l2_loss: 3136890601472.000000, total loss: 313689056.000000，train accuracy: 1.0，test accuracy: 0.978  \n",
    "step 1800, entropy loss: 0.073894, l2_loss: 3136663060480.000000, total loss: 313666304.000000，train accuracy: 1.0，test accuracy: 0.982  \n",
    "step 1900, entropy loss: 0.158693, l2_loss: 3136064323584.000000, total loss: 313606432.000000，train accuracy: 0.98，test accuracy: 0.972  \n",
    "step 2000, entropy loss: 0.090512, l2_loss: 3135722225664.000000, total loss: 313572224.000000，train accuracy: 1.0，test accuracy: 0.974  \n",
    "step 2100, entropy loss: 0.119038, l2_loss: 3135406342144.000000, total loss: 313540640.000000，train accuracy: 1.0，test accuracy: 0.98  \n",
    "step 2200, entropy loss: 0.065522, l2_loss: 3134834868224.000000, total loss: 313483488.000000，train accuracy: 1.0，test accuracy: 0.972  \n",
    "step 2300, entropy loss: 0.087289, l2_loss: 3134095884288.000000, total loss: 313409568.000000，train accuracy: 1.0，test accuracy: 0.983  \n",
    "step 2400, entropy loss: 0.046724, l2_loss: 3131143356416.000000, total loss: 313114336.000000，train accuracy: 1.0，test accuracy: 0.977  \n",
    "step 2500, entropy loss: 0.019356, l2_loss: 3128483119104.000000, total loss: 312848320.000000，train accuracy: 1.0，test accuracy: 0.983  \n",
    "step 2600, entropy loss: 0.042081, l2_loss: 3127761436672.000000, total loss: 312776128.000000，train accuracy: 1.0，test accuracy: 0.981  \n",
    "step 2700, entropy loss: 0.097403, l2_loss: 3127207526400.000000, total loss: 312720736.000000，train accuracy: 1.0，test accuracy: 0.982  \n",
    "step 2800, entropy loss: 0.004176, l2_loss: 3126913925120.000000, total loss: 312691392.000000，train accuracy: 1.0，test accuracy: 0.983  \n",
    "step 2900, entropy loss: 0.064784, l2_loss: 3126538272768.000000, total loss: 312653824.000000，train accuracy: 1.0，test accuracy: 0.978  \n",
    "step 3000, entropy loss: 0.121315, l2_loss: 3125965488128.000000, total loss: 312596544.000000，train accuracy: 1.0，test accuracy: 0.983 \n",
    "#### <font color=#00ffff size=5> Conclusion: use a regularization factor of 1e-4</font>\n",
    "##### The previous learning rate of 0.01 worked well. Raising it to 0.575 gives the results below; the first three log lines already show that training fails to converge, so the rate is too large.\n",
    "step 100, entropy loss: 14.183924, l2_loss: 3111391854592.000000, total loss: 311139168.000000，train accuracy: 0.12，test accuracy: 0.109  \n",
    "step 200, entropy loss: 14.183925, l2_loss: 3073950351360.000000, total loss: 307395040.000000，train accuracy: 0.12，test accuracy: 0.095  \n",
    "step 300, entropy loss: 15.473372, l2_loss: 3040865943552.000000, total loss: 304086592.000000，train accuracy: 0.04，test accuracy: 0.117  \n",
    "##### Lowering the learning rate to 0.001 gives the results below; the first three log lines show convergence is far too slow, so the rate is too small.\n",
    "step 100, entropy loss: 2.182085, l2_loss: 3144577449984.000000, total loss: 314457728.000000，train accuracy: 0.38，test accuracy: 0.398  \n",
    "step 200, entropy loss: 1.983412, l2_loss: 3144546254848.000000, total loss: 314454624.000000，train accuracy: 0.56，test accuracy: 0.475  \n",
    "step 300, entropy loss: 1.748306, l2_loss: 3144515846144.000000, total loss: 314451584.000000，train accuracy: 0.58，test accuracy: 0.582 \n",
    "#### <font color=#00ffff size=5> Conclusion: use a learning rate of 0.01</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.789696, l2_loss: 3144390017024.000000, total loss: 314439008.000000，train accuracy: 0.84，test accuracy: 0.759\n",
      "step 200, entropy loss: 0.465611, l2_loss: 3144280965120.000000, total loss: 314428096.000000，train accuracy: 0.9，test accuracy: 0.877\n",
      "step 300, entropy loss: 0.308926, l2_loss: 3144207302656.000000, total loss: 314420736.000000，train accuracy: 0.9，test accuracy: 0.881\n",
      "step 400, entropy loss: 0.211295, l2_loss: 3144131543040.000000, total loss: 314413152.000000，train accuracy: 0.98，test accuracy: 0.927\n",
      "step 500, entropy loss: 0.133692, l2_loss: 3144016723968.000000, total loss: 314401664.000000，train accuracy: 1.0，test accuracy: 0.948\n",
      "step 600, entropy loss: 0.250196, l2_loss: 3143792852992.000000, total loss: 314379264.000000，train accuracy: 0.98，test accuracy: 0.936\n",
      "step 700, entropy loss: 0.137696, l2_loss: 3143594147840.000000, total loss: 314359392.000000，train accuracy: 0.98，test accuracy: 0.946\n",
      "step 800, entropy loss: 0.090897, l2_loss: 3143215087616.000000, total loss: 314321504.000000，train accuracy: 1.0，test accuracy: 0.97\n",
      "step 900, entropy loss: 0.166236, l2_loss: 3142644662272.000000, total loss: 314264448.000000，train accuracy: 0.98，test accuracy: 0.958\n",
      "step 1000, entropy loss: 0.074754, l2_loss: 3142422364160.000000, total loss: 314242240.000000，train accuracy: 1.0，test accuracy: 0.971\n",
      "step 1100, entropy loss: 0.027120, l2_loss: 3142055362560.000000, total loss: 314205536.000000，train accuracy: 1.0，test accuracy: 0.966\n",
      "step 1200, entropy loss: 0.072592, l2_loss: 3141605523456.000000, total loss: 314160544.000000，train accuracy: 1.0，test accuracy: 0.969\n",
      "step 1300, entropy loss: 0.072636, l2_loss: 3140739399680.000000, total loss: 314073920.000000，train accuracy: 1.0，test accuracy: 0.973\n",
      "step 1400, entropy loss: 0.097192, l2_loss: 3138445115392.000000, total loss: 313844512.000000，train accuracy: 1.0，test accuracy: 0.968\n",
      "step 1500, entropy loss: 0.057934, l2_loss: 3137692762112.000000, total loss: 313769280.000000，train accuracy: 1.0，test accuracy: 0.971\n",
      "step 1600, entropy loss: 0.056100, l2_loss: 3137244495872.000000, total loss: 313724448.000000，train accuracy: 1.0，test accuracy: 0.98\n",
      "step 1700, entropy loss: 0.111514, l2_loss: 3136890601472.000000, total loss: 313689056.000000，train accuracy: 1.0，test accuracy: 0.977\n",
      "step 1800, entropy loss: 0.212752, l2_loss: 3136663584768.000000, total loss: 313666336.000000，train accuracy: 0.98，test accuracy: 0.975\n",
      "step 1900, entropy loss: 0.079855, l2_loss: 3136064847872.000000, total loss: 313606464.000000，train accuracy: 1.0，test accuracy: 0.976\n",
      "step 2000, entropy loss: 0.038614, l2_loss: 3135721701376.000000, total loss: 313572160.000000，train accuracy: 1.0，test accuracy: 0.968\n",
      "step 2100, entropy loss: 0.052928, l2_loss: 3135407915008.000000, total loss: 313540768.000000，train accuracy: 1.0，test accuracy: 0.987\n",
      "step 2200, entropy loss: 0.016959, l2_loss: 3134836178944.000000, total loss: 313483616.000000，train accuracy: 1.0，test accuracy: 0.979\n",
      "step 2300, entropy loss: 0.082180, l2_loss: 3134096408576.000000, total loss: 313409632.000000，train accuracy: 1.0，test accuracy: 0.986\n",
      "step 2400, entropy loss: 0.018657, l2_loss: 3131144142848.000000, total loss: 313114400.000000，train accuracy: 1.0，test accuracy: 0.972\n",
      "step 2500, entropy loss: 0.013499, l2_loss: 3128482070528.000000, total loss: 312848192.000000，train accuracy: 1.0，test accuracy: 0.985\n",
      "step 2600, entropy loss: 0.019327, l2_loss: 3127760912384.000000, total loss: 312776096.000000，train accuracy: 1.0，test accuracy: 0.982\n",
      "step 2700, entropy loss: 0.225655, l2_loss: 3127208050688.000000, total loss: 312720800.000000，train accuracy: 1.0，test accuracy: 0.963\n",
      "step 2800, entropy loss: 0.046586, l2_loss: 3126914449408.000000, total loss: 312691424.000000，train accuracy: 1.0，test accuracy: 0.981\n",
      "step 2900, entropy loss: 0.114158, l2_loss: 3126540107776.000000, total loss: 312654016.000000，train accuracy: 1.0，test accuracy: 0.985\n",
      "step 3000, entropy loss: 0.134276, l2_loss: 3125966274560.000000, total loss: 312596608.000000，train accuracy: 1.0，test accuracy: 0.979\n"
     ]
    }
   ],
   "source": [
    "from keras.objectives import categorical_crossentropy\n",
    "cross_entropy = tf.reduce_mean(categorical_crossentropy(y_, net))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 1e-4*l2_loss  # chosen regularization factor\n",
    "\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "\n",
    "K.set_session(sess)\n",
    "\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(50)  # limited hardware: batches of 50 for 3000 steps, about 2.7 epochs (50*3000/55000)\n",
    "  lr = 0.01    # chosen learning rate; 0.01, 0.575, and 0.001 were compared\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f，' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value),end=\"\")\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(net, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(\"train accuracy:\",sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys}),end=\"，\")\n",
    "    test_x, test_y = mnist.test.next_batch(1000)  # limited hardware: evaluate on a random batch of 1000 test images\n",
    "    print(\"test accuracy:\",sess.run(accuracy, feed_dict={x: test_x,\n",
    "                                    y_: test_y}))"
   ]
  },
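  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The learning-rate comparison above was run by editing lr by hand; a short loop could automate it (a sketch reusing sess, train_step, net, mnist, and the placeholders defined above; the 300-step budget is illustrative, not the value used for the reported results):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: compare candidate learning rates by re-initializing the weights\n",
    "# and training briefly with each one (reuses objects defined above).\n",
    "correct = tf.equal(tf.argmax(net, 1), tf.argmax(y_, 1))\n",
    "acc = tf.reduce_mean(tf.cast(correct, tf.float32))\n",
    "for lr in [0.575, 0.01, 0.001]:\n",
    "    sess.run(tf.global_variables_initializer())  # fresh weights per candidate\n",
    "    for step in range(300):\n",
    "        batch_xs, batch_ys = mnist.train.next_batch(50)\n",
    "        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys, learning_rate: lr})\n",
    "    test_x, test_y = mnist.test.next_batch(1000)\n",
    "    print('lr=%g, test accuracy after 300 steps: %.3f'\n",
    "          % (lr, sess.run(acc, feed_dict={x: test_x, y_: test_y})))"
   ]
  },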
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "After this series of tuning experiments, the chosen hyperparameters are:  \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "|Kernel counts|Kernel size|Weight initialization|Regularization factor|Learning rate|\n",
    "|--|--|--|--|--|\n",
    "|32, 64|5×5|He|1e-4|0.01|"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
