{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Assignment\n",
    "Using TensorFlow, build and train a neural network that achieves over 98% accuracy on the test set.\n",
    "Along the way you need to combine the fundamentals covered so far:\n",
    "- deep neural networks\n",
    "- activation functions\n",
    "- regularization\n",
    "- initialization\n",
    "- convolution\n",
    "- pooling\n",
    "\n",
    "\n",
    "and explore the following hyperparameter settings:\n",
    "- convolution kernel size\n",
    "- number of convolution kernels\n",
    "- learning rate\n",
    "- regularization factor\n",
    "- weight-initialization distribution parameters\n",
    "\n",
    "\n",
    "## Dataset\n",
    "```\n",
    "Download from:\n",
    "http://yann.lecun.com/exdb/mnist/\n",
    "or\n",
    "https://storage.googleapis.com/cvdf-datasets/mnist/train-images-idx3-ubyte.gz\n",
    "https://storage.googleapis.com/cvdf-datasets/mnist/train-labels-idx1-ubyte.gz\n",
    "https://storage.googleapis.com/cvdf-datasets/mnist/t10k-images-idx3-ubyte.gz\n",
    "https://storage.googleapis.com/cvdf-datasets/mnist/t10k-labels-idx1-ubyte.gz\n",
    "\n",
    "```\n",
    "The MNIST dataset was assembled by Yann LeCun from source data collected by the National Institute of Standards and Technology (NIST). The training set consists of digits handwritten by 250 different people: 50% high-school students and 50% employees of the Census Bureau. The test set contains handwritten digits in the same proportions.\n",
    "The full dataset comprises 60,000 training images and 10,000 test images. Each image is a 28x28 grayscale image, and each pixel is a uint8 value ranging from 0 (background) to 255 (foreground).\n",
    "\n",
    "## Grading criteria\n",
    "\n",
    "- Accuracy of 98% or above: 60 points. This is the passing bar; submissions below it fail and receive no score.\n",
    "- Using a regularization factor, described in the write-up: 10 points.\n",
    "- Manually initializing parameters, described in the write-up: 10 points. Relying only on the default initialization counts as not having considered initialization and earns no points.\n",
    "- Tuning the learning rate: 10 points; must be described in the write-up.\n",
    "- Tuning the convolution kernel size and count: 10 points; must be described in the write-up."
   ]
  },
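  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of the pixel encoding described above (a sketch with a synthetic image, not real MNIST data), raw uint8 values map to the [0, 1] floats that input_data.read_data_sets feeds the model like this:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# synthetic 28x28 uint8 image standing in for one MNIST sample\n",
    "raw = np.zeros((28, 28), dtype=np.uint8)\n",
    "raw[10:18, 10:18] = 255  # a bright foreground square\n",
    "\n",
    "# scale 0..255 to 0.0..1.0, as the TensorFlow MNIST reader does\n",
    "img = raw.astype(np.float32) / 255.0\n",
    "print(img.shape, img.min(), img.max())\n",
    "```"
   ]
  },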
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    }
   ],
   "source": [
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "import tensorflow as tf\n",
    "from keras.layers.core import Dense, Flatten\n",
    "from keras.layers.convolutional import Conv2D\n",
    "from keras.layers.pooling import MaxPooling2D\n",
    "from keras.layers import Dropout\n",
    "from keras import backend as K\n",
    "\n",
    "\n",
    "K.image_data_format()  # 'channels_last' by default; the model below assumes NHWC\n",
    "FLAGS = None\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here we call the bundled MNIST helper to read the data for us, downloading it first if it is not already present.\n",
    "\n",
    "<font color=#ff0000>**Change data_dir to a directory that suits your environment**</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./MNIST_data/train-images-idx3-ubyte.gz\n",
      "Extracting ./MNIST_data/train-labels-idx1-ubyte.gz\n",
      "Extracting ./MNIST_data/t10k-images-idx3-ubyte.gz\n",
      "Extracting ./MNIST_data/t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
    "data_dir = './MNIST_data/'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A very bare-bones starting point: only the input placeholder is defined here; the original linear classifier is left commented out."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create the model\n",
    "x = tf.placeholder(tf.float32, [None, 784])\n",
    "#W = tf.Variable(tf.zeros([784, 10]))\n",
    "#b = tf.Variable(tf.zeros([10]))\n",
    "#y = tf.matmul(x, W) + b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the placeholder for our ground-truth labels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "\n",
    "# learning rate, fed through a placeholder so it can be tuned\n",
    "learning_rate = tf.placeholder(tf.float32)\n",
    "lr = 0.05  # adjust the learning rate via this parameter; 0.5, 0.01 and 0.05 were tried\n",
    "# of the activations tried, relu worked better than sigmoid\n",
    "#actFunc = 'sigmoid'\n",
    "actFunc = 'relu'\n",
    "padding = 'same'\n",
    "#kernel_size = [5, 5]\n",
    "kernel_size = [10, 10]  # enlarging the kernel to [10, 10] converged faster\n",
    "strides = [1, 1]\n",
    "# dropout rate: despite the name, Keras Dropout(rate) DROPS this fraction\n",
    "# of units rather than keeping it\n",
    "keep_prob = 0.25\n",
    "pool_size = [2, 2]\n",
    "regularization_parameter = 7e-5  # L2 regularization factor\n",
    "\n",
    "batch_size=64\n",
    "batch_cycle=10000\n",
    "\n",
    "with tf.name_scope('reshape'):\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "# first convolutional layer\n",
    "net = Conv2D(32, kernel_size=kernel_size, strides=strides, activation=actFunc,\n",
    "             padding=padding,\n",
    "             input_shape=[28, 28, 1])(x_image)\n",
    "\n",
    "# first pooling layer\n",
    "net = MaxPooling2D(pool_size=pool_size)(net)\n",
    "\n",
    "# dropout: zero a fraction of activations to curb overfitting\n",
    "net = Dropout(keep_prob)(net)\n",
    "\n",
    "# second convolution\n",
    "net = Conv2D(64, kernel_size=kernel_size, strides=strides, activation=actFunc,\n",
    "             padding=padding)(net)\n",
    "\n",
    "# second pooling layer\n",
    "net = MaxPooling2D(pool_size=pool_size)(net)\n",
    "\n",
    "net = Dropout(keep_prob)(net)\n",
    "\n",
    "# a third convolution + pooling stage\n",
    "net = Conv2D(128, kernel_size=kernel_size, strides=strides, activation=actFunc,\n",
    "             padding=padding)(net)\n",
    "\n",
    "# third pooling layer\n",
    "net = MaxPooling2D(pool_size=pool_size)(net)\n",
    "\n",
    "# dropout: zero a fraction of activations to curb overfitting\n",
    "net = Dropout(keep_prob)(net)\n",
    "\n",
    "# flatten feature maps to a vector\n",
    "net = Flatten()(net)\n",
    "\n",
    "# fully connected hidden layer\n",
    "net = Dense(1000, activation=actFunc)(net)\n",
    "\n",
    "# fully connected output layer (softmax over the 10 classes)\n",
    "net = Dense(10, activation='softmax')(net)"
   ]
  },
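  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With padding='same' and stride 1, each convolution above preserves the 28x28 spatial size, while each 2x2 max-pooling (Keras default: stride equal to the pool size, 'valid' padding) shrinks it by a floor division. A small sketch of the resulting feature-map sizes:\n",
    "\n",
    "```python\n",
    "# conv with 'same' padding and stride 1 keeps the spatial size;\n",
    "# each 2x2 max-pooling floor-divides it by 2\n",
    "size = 28\n",
    "for filters in (32, 64, 128):  # the three conv/pool stages above\n",
    "    size = size // 2\n",
    "    print(filters, 'filters ->', size, 'x', size)\n",
    "\n",
    "flat = size * size * 128  # length of the flattened vector fed to Dense\n",
    "print('flattened length:', flat)\n",
    "```"
   ]
  },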
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we compute the cross-entropy. Note that you should not use a hand-rolled computation here; use the library function.\n",
    "Another caveat: tf.nn.softmax_cross_entropy_with_logits expects its logits argument to be the **pre-activation wx+b**. Since our network already ends in a softmax, the next cell instead uses Keras's categorical_crossentropy, which takes probabilities."
   ]
  },
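  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The caveat above can be made concrete with a small numpy sketch (hypothetical scores): applying the logits-based cross-entropy math to values that have already been through softmax applies softmax twice and yields a different, wrong loss:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(v):\n",
    "    e = np.exp(v - v.max())  # shift by the max for numerical stability\n",
    "    return e / e.sum()\n",
    "\n",
    "z = np.array([2.0, 1.0, 0.1] + [0.0] * 7)  # hypothetical logits, 10 classes\n",
    "y = np.zeros(10)\n",
    "y[0] = 1.0  # one-hot ground truth\n",
    "\n",
    "p = softmax(z)\n",
    "ce_correct = -np.sum(y * np.log(p))          # softmax once, then cross-entropy\n",
    "ce_double = -np.sum(y * np.log(softmax(p)))  # softmax applied twice: wrong\n",
    "print(round(ce_correct, 3), round(ce_double, 3))\n",
    "```"
   ]
  },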
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from keras.objectives import categorical_crossentropy\n",
    "cross_entropy = tf.reduce_mean(categorical_crossentropy(y_, net))\n",
    "\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + regularization_parameter*l2_loss\n",
    "\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "\n",
    "\n"
   ]
  },
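  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, tf.nn.l2_loss(w) computes sum(w**2) / 2 for a single tensor; the cell above adds this over every trainable variable and scales the result by the regularization factor. A tiny sketch with made-up weights:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def l2_loss(w):\n",
    "    # tf.nn.l2_loss semantics: half the sum of squared entries\n",
    "    return np.sum(np.square(w)) / 2.0\n",
    "\n",
    "weights = [np.array([1.0, 2.0]), np.array([[3.0], [4.0]])]  # made-up tensors\n",
    "penalty = sum(l2_loss(w) for w in weights)\n",
    "regularization_parameter = 7e-5  # the same factor as in the cell above\n",
    "total = regularization_parameter * penalty\n",
    "print(penalty, total)\n",
    "```"
   ]
  },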
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Register our TensorFlow session with Keras so its layers run in the same session as our training step"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "K.set_session(sess)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here we again use the provided reader to fetch one batch at a time.\n",
    "We then run 10,000 steps (roughly 10-11 epochs at batch size 64) to optimize the weights."
   ]
  },
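  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The configured step count (batch_cycle = 10000 with batch_size = 64, set in an earlier cell) translates into epochs over the 60,000 training images as follows:\n",
    "\n",
    "```python\n",
    "train_images = 60000\n",
    "batch_size = 64\n",
    "batch_cycle = 10000  # training steps, as configured above\n",
    "\n",
    "steps_per_epoch = train_images / batch_size\n",
    "epochs = batch_cycle / steps_per_epoch\n",
    "print(steps_per_epoch, round(epochs, 1))\n",
    "```"
   ]
  },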
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train:\n",
      "step 100, entropy loss: 0.577356, l2_loss: 615.062256, total loss: 0.620410\n",
      "0.890625\n",
      "Train:\n",
      "step 200, entropy loss: 0.143775, l2_loss: 617.774048, total loss: 0.187019\n",
      "0.984375\n",
      "Train:\n",
      "step 300, entropy loss: 0.296238, l2_loss: 619.115845, total loss: 0.339577\n",
      "0.96875\n",
      "Train:\n",
      "step 400, entropy loss: 0.186276, l2_loss: 620.107178, total loss: 0.229684\n",
      "0.984375\n",
      "Train:\n",
      "step 500, entropy loss: 0.074523, l2_loss: 620.720093, total loss: 0.117973\n",
      "1.0\n",
      "Train:\n",
      "step 600, entropy loss: 0.057568, l2_loss: 621.307129, total loss: 0.101059\n",
      "1.0\n",
      "Train:\n",
      "step 700, entropy loss: 0.034184, l2_loss: 621.824463, total loss: 0.077712\n",
      "1.0\n",
      "Train:\n",
      "step 800, entropy loss: 0.080785, l2_loss: 622.162292, total loss: 0.124336\n",
      "1.0\n",
      "Train:\n",
      "step 900, entropy loss: 0.013119, l2_loss: 622.431213, total loss: 0.056689\n",
      "1.0\n",
      "Train:\n",
      "step 1000, entropy loss: 0.036810, l2_loss: 622.569153, total loss: 0.080390\n",
      "1.0\n",
      "Test:\n",
      "0.9757\n",
      "Train:\n",
      "step 1100, entropy loss: 0.110768, l2_loss: 622.678772, total loss: 0.154355\n",
      "0.984375\n",
      "Train:\n",
      "step 1200, entropy loss: 0.065430, l2_loss: 623.023315, total loss: 0.109042\n",
      "1.0\n",
      "Train:\n",
      "step 1300, entropy loss: 0.165809, l2_loss: 623.089294, total loss: 0.209426\n",
      "0.96875\n",
      "Train:\n",
      "step 1400, entropy loss: 0.054678, l2_loss: 623.300049, total loss: 0.098309\n",
      "1.0\n",
      "Train:\n",
      "step 1500, entropy loss: 0.045798, l2_loss: 623.389099, total loss: 0.089435\n",
      "1.0\n",
      "Train:\n",
      "step 1600, entropy loss: 0.010290, l2_loss: 623.497498, total loss: 0.053934\n",
      "1.0\n",
      "Train:\n",
      "step 1700, entropy loss: 0.047730, l2_loss: 623.512146, total loss: 0.091376\n",
      "1.0\n",
      "Train:\n",
      "step 1800, entropy loss: 0.084143, l2_loss: 623.555603, total loss: 0.127792\n",
      "0.984375\n",
      "Train:\n",
      "step 1900, entropy loss: 0.030062, l2_loss: 623.573914, total loss: 0.073712\n",
      "1.0\n",
      "Train:\n",
      "step 2000, entropy loss: 0.004401, l2_loss: 623.670959, total loss: 0.048058\n",
      "1.0\n",
      "Test:\n",
      "0.9871\n",
      "Train:\n",
      "step 2100, entropy loss: 0.160453, l2_loss: 623.741516, total loss: 0.204115\n",
      "0.984375\n",
      "Train:\n",
      "step 2200, entropy loss: 0.049977, l2_loss: 623.784729, total loss: 0.093642\n",
      "1.0\n",
      "Train:\n",
      "step 2300, entropy loss: 0.019952, l2_loss: 623.714539, total loss: 0.063612\n",
      "1.0\n",
      "Train:\n",
      "step 2400, entropy loss: 0.007764, l2_loss: 623.632996, total loss: 0.051418\n",
      "1.0\n",
      "Train:\n",
      "step 2500, entropy loss: 0.007073, l2_loss: 623.540955, total loss: 0.050721\n",
      "1.0\n",
      "Train:\n",
      "step 2600, entropy loss: 0.017274, l2_loss: 623.493713, total loss: 0.060919\n",
      "1.0\n",
      "Train:\n",
      "step 2700, entropy loss: 0.089793, l2_loss: 623.429932, total loss: 0.133433\n",
      "1.0\n",
      "Train:\n",
      "step 2800, entropy loss: 0.017433, l2_loss: 623.398315, total loss: 0.061070\n",
      "1.0\n",
      "Train:\n",
      "step 2900, entropy loss: 0.008302, l2_loss: 623.203247, total loss: 0.051926\n",
      "1.0\n",
      "Train:\n",
      "step 3000, entropy loss: 0.005221, l2_loss: 623.179871, total loss: 0.048843\n",
      "1.0\n",
      "Test:\n",
      "0.9875\n",
      "Train:\n",
      "step 3100, entropy loss: 0.009494, l2_loss: 623.133972, total loss: 0.053113\n",
      "1.0\n",
      "Train:\n",
      "step 3200, entropy loss: 0.001560, l2_loss: 623.040283, total loss: 0.045172\n",
      "1.0\n",
      "Train:\n",
      "step 3300, entropy loss: 0.044282, l2_loss: 622.821167, total loss: 0.087880\n",
      "1.0\n",
      "Train:\n",
      "step 3400, entropy loss: 0.005072, l2_loss: 622.662109, total loss: 0.048658\n",
      "1.0\n",
      "Train:\n",
      "step 3500, entropy loss: 0.005230, l2_loss: 622.699463, total loss: 0.048819\n",
      "1.0\n",
      "Train:\n",
      "step 3600, entropy loss: 0.029752, l2_loss: 622.611633, total loss: 0.073335\n",
      "1.0\n",
      "Train:\n",
      "step 3700, entropy loss: 0.026792, l2_loss: 622.489929, total loss: 0.070366\n",
      "1.0\n",
      "Train:\n",
      "step 3800, entropy loss: 0.015912, l2_loss: 622.359558, total loss: 0.059477\n",
      "1.0\n",
      "Train:\n",
      "step 3900, entropy loss: 0.019498, l2_loss: 622.148804, total loss: 0.063048\n",
      "1.0\n",
      "Train:\n",
      "step 4000, entropy loss: 0.039050, l2_loss: 622.021118, total loss: 0.082591\n",
      "1.0\n",
      "Test:\n",
      "0.9882\n",
      "Train:\n",
      "step 4100, entropy loss: 0.045596, l2_loss: 621.879761, total loss: 0.089127\n",
      "1.0\n",
      "Train:\n",
      "step 4200, entropy loss: 0.037102, l2_loss: 621.738281, total loss: 0.080623\n",
      "1.0\n",
      "Train:\n",
      "step 4300, entropy loss: 0.005048, l2_loss: 621.584045, total loss: 0.048559\n",
      "1.0\n",
      "Train:\n",
      "step 4400, entropy loss: 0.009905, l2_loss: 621.415588, total loss: 0.053404\n",
      "1.0\n",
      "Train:\n",
      "step 4500, entropy loss: 0.009639, l2_loss: 621.287354, total loss: 0.053129\n",
      "1.0\n",
      "Train:\n",
      "step 4600, entropy loss: 0.001352, l2_loss: 621.131165, total loss: 0.044831\n",
      "1.0\n",
      "Train:\n",
      "step 4700, entropy loss: 0.005206, l2_loss: 620.971619, total loss: 0.048674\n",
      "1.0\n",
      "Train:\n",
      "step 4800, entropy loss: 0.015420, l2_loss: 620.866577, total loss: 0.058881\n",
      "1.0\n",
      "Train:\n",
      "step 4900, entropy loss: 0.031399, l2_loss: 620.737183, total loss: 0.074850\n",
      "1.0\n",
      "Train:\n",
      "step 5000, entropy loss: 0.001188, l2_loss: 620.551147, total loss: 0.044627\n",
      "1.0\n",
      "Test:\n",
      "0.9914\n",
      "Train:\n",
      "step 5100, entropy loss: 0.003206, l2_loss: 620.342834, total loss: 0.046630\n",
      "1.0\n",
      "Train:\n",
      "step 5200, entropy loss: 0.018870, l2_loss: 620.201294, total loss: 0.062284\n",
      "1.0\n",
      "Train:\n",
      "step 5300, entropy loss: 0.009420, l2_loss: 620.031006, total loss: 0.052822\n",
      "1.0\n",
      "Train:\n",
      "step 5400, entropy loss: 0.001205, l2_loss: 619.866455, total loss: 0.044596\n",
      "1.0\n",
      "Train:\n",
      "step 5500, entropy loss: 0.001230, l2_loss: 619.637085, total loss: 0.044605\n",
      "1.0\n",
      "Train:\n",
      "step 5600, entropy loss: 0.055110, l2_loss: 619.494751, total loss: 0.098474\n",
      "1.0\n",
      "Train:\n",
      "step 5700, entropy loss: 0.001366, l2_loss: 619.287292, total loss: 0.044716\n",
      "1.0\n",
      "Train:\n",
      "step 5800, entropy loss: 0.007942, l2_loss: 619.106140, total loss: 0.051280\n",
      "1.0\n",
      "Train:\n",
      "step 5900, entropy loss: 0.008173, l2_loss: 618.928406, total loss: 0.051498\n",
      "1.0\n",
      "Train:\n",
      "step 6000, entropy loss: 0.015563, l2_loss: 618.768555, total loss: 0.058877\n",
      "1.0\n",
      "Test:\n",
      "0.9897\n",
      "Train:\n",
      "step 6100, entropy loss: 0.006554, l2_loss: 618.542847, total loss: 0.049852\n",
      "1.0\n",
      "Train:\n",
      "step 6200, entropy loss: 0.022524, l2_loss: 618.348389, total loss: 0.065808\n",
      "1.0\n",
      "Train:\n",
      "step 6300, entropy loss: 0.008521, l2_loss: 618.125122, total loss: 0.051790\n",
      "1.0\n",
      "Train:\n",
      "step 6400, entropy loss: 0.011361, l2_loss: 617.932739, total loss: 0.054616\n",
      "1.0\n",
      "Train:\n",
      "step 6500, entropy loss: 0.008317, l2_loss: 617.687439, total loss: 0.051555\n",
      "1.0\n",
      "Train:\n",
      "step 6600, entropy loss: 0.005841, l2_loss: 617.419006, total loss: 0.049060\n",
      "1.0\n",
      "Train:\n",
      "step 6700, entropy loss: 0.008163, l2_loss: 617.203369, total loss: 0.051367\n",
      "1.0\n",
      "Train:\n",
      "step 6800, entropy loss: 0.004545, l2_loss: 617.009583, total loss: 0.047735\n",
      "1.0\n",
      "Train:\n",
      "step 6900, entropy loss: 0.004814, l2_loss: 616.787415, total loss: 0.047989\n",
      "1.0\n",
      "Train:\n",
      "step 7000, entropy loss: 0.004007, l2_loss: 616.570557, total loss: 0.047167\n",
      "1.0\n",
      "Test:\n",
      "0.9914\n",
      "Train:\n",
      "step 7100, entropy loss: 0.047199, l2_loss: 616.334717, total loss: 0.090343\n",
      "1.0\n",
      "Train:\n",
      "step 7200, entropy loss: 0.000162, l2_loss: 616.059509, total loss: 0.043286\n",
      "1.0\n",
      "Train:\n",
      "step 7300, entropy loss: 0.004521, l2_loss: 615.864441, total loss: 0.047632\n",
      "1.0\n",
      "Train:\n",
      "step 7400, entropy loss: 0.001694, l2_loss: 615.634338, total loss: 0.044788\n",
      "1.0\n",
      "Train:\n",
      "step 7500, entropy loss: 0.045596, l2_loss: 615.380005, total loss: 0.088673\n",
      "1.0\n",
      "Train:\n",
      "step 7600, entropy loss: 0.001254, l2_loss: 615.155518, total loss: 0.044315\n",
      "1.0\n",
      "Train:\n",
      "step 7700, entropy loss: 0.000500, l2_loss: 614.832764, total loss: 0.043538\n",
      "1.0\n",
      "Train:\n",
      "step 7800, entropy loss: 0.001028, l2_loss: 614.613037, total loss: 0.044051\n",
      "1.0\n",
      "Train:\n",
      "step 7900, entropy loss: 0.003508, l2_loss: 614.370544, total loss: 0.046513\n",
      "1.0\n",
      "Train:\n",
      "step 8000, entropy loss: 0.001794, l2_loss: 614.054565, total loss: 0.044777\n",
      "1.0\n",
      "Test:\n",
      "0.9903\n",
      "Train:\n",
      "step 8100, entropy loss: 0.008580, l2_loss: 613.786438, total loss: 0.051545\n",
      "1.0\n",
      "Train:\n",
      "step 8200, entropy loss: 0.001245, l2_loss: 613.519348, total loss: 0.044192\n",
      "1.0\n",
      "Train:\n",
      "step 8300, entropy loss: 0.000533, l2_loss: 613.268677, total loss: 0.043462\n",
      "1.0\n",
      "Train:\n",
      "step 8400, entropy loss: 0.116296, l2_loss: 612.996094, total loss: 0.159206\n",
      "1.0\n",
      "Train:\n",
      "step 8500, entropy loss: 0.016072, l2_loss: 612.734558, total loss: 0.058963\n",
      "1.0\n",
      "Train:\n",
      "step 8600, entropy loss: 0.000795, l2_loss: 612.453674, total loss: 0.043667\n",
      "1.0\n",
      "Train:\n",
      "step 8700, entropy loss: 0.038145, l2_loss: 612.201965, total loss: 0.080999\n",
      "1.0\n",
      "Train:\n",
      "step 8800, entropy loss: 0.001335, l2_loss: 611.944702, total loss: 0.044171\n",
      "1.0\n",
      "Train:\n",
      "step 8900, entropy loss: 0.002262, l2_loss: 611.668274, total loss: 0.045079\n",
      "1.0\n",
      "Train:\n",
      "step 9000, entropy loss: 0.009008, l2_loss: 611.416077, total loss: 0.051807\n",
      "1.0\n",
      "Test:\n",
      "0.9917\n",
      "Train:\n",
      "step 9100, entropy loss: 0.013816, l2_loss: 611.129272, total loss: 0.056595\n",
      "1.0\n",
      "Train:\n",
      "step 9200, entropy loss: 0.009219, l2_loss: 610.814880, total loss: 0.051976\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.0\n",
      "Train:\n",
      "step 9300, entropy loss: 0.000210, l2_loss: 610.569519, total loss: 0.042950\n",
      "1.0\n",
      "Train:\n",
      "step 9400, entropy loss: 0.001483, l2_loss: 610.307861, total loss: 0.044205\n",
      "1.0\n",
      "Train:\n",
      "step 9500, entropy loss: 0.034398, l2_loss: 610.035339, total loss: 0.077101\n",
      "1.0\n",
      "Train:\n",
      "step 9600, entropy loss: 0.000308, l2_loss: 609.748474, total loss: 0.042991\n",
      "1.0\n",
      "Train:\n",
      "step 9700, entropy loss: 0.002237, l2_loss: 609.466858, total loss: 0.044900\n",
      "1.0\n",
      "Train:\n",
      "step 9800, entropy loss: 0.000521, l2_loss: 609.178467, total loss: 0.043163\n",
      "1.0\n",
      "Train:\n",
      "step 9900, entropy loss: 0.001945, l2_loss: 608.875549, total loss: 0.044567\n",
      "1.0\n",
      "Train:\n",
      "step 10000, entropy loss: 0.000486, l2_loss: 608.588501, total loss: 0.043087\n",
      "1.0\n",
      "Test:\n",
      "0.9913\n"
     ]
    }
   ],
   "source": [
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(batch_cycle):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr})\n",
    "  \n",
    "  if (step+1) % 100 == 0:\n",
    "    print(\"Train:\")\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(net, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys}))\n",
    "  if (step+1) % 1000 == 0:\n",
    "    print(\"Test:\")\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Verify our model's accuracy on the test data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test trained model (already evaluated at the end of the training loop above)\n",
    "#correct_prediction = tf.equal(tf.argmax(net, 1), tf.argmax(y_, 1))\n",
    "#accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "#print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "#                                      y_: mnist.test.labels}))\n"
   ]
  },
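  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The grading criteria reward manual weight initialization. As a sketch of the idea (the values are illustrative, not tuned), a truncated normal redraws any sample more than two standard deviations from the mean, which is the scheme behind tf.truncated_normal and Keras's TruncatedNormal initializer:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "def truncated_normal(shape, mean=0.0, stddev=0.1):\n",
    "    # redraw any sample more than 2 standard deviations from the mean\n",
    "    w = rng.normal(mean, stddev, size=shape)\n",
    "    bad = np.abs(w - mean) > 2 * stddev\n",
    "    while bad.any():\n",
    "        w[bad] = rng.normal(mean, stddev, size=int(bad.sum()))\n",
    "        bad = np.abs(w - mean) > 2 * stddev\n",
    "    return w\n",
    "\n",
    "# shape of the first conv kernel above: 10x10 window, 1 channel, 32 filters\n",
    "w = truncated_normal((10, 10, 1, 32), stddev=0.1)\n",
    "print(w.shape, bool(np.abs(w).max() <= 2 * 0.1))\n",
    "```"
   ]
  },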
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unsurprisingly, the original bare-bones model performs poorly, reaching only about 92% accuracy.\n",
    "Your task is to use what you have learned so far to optimize this model to above 98% accuracy.\n",
    "Hints:\n",
    "- convolution\n",
    "- pooling\n",
    "- activation functions\n",
    "- regularization\n",
    "- initialization\n",
    "- experiment with the hyperparameters\n",
    "  - convolution kernel size\n",
    "  - number of convolution kernels\n",
    "  - learning rate\n",
    "  - regularization penalty factor\n",
    "  - ideally, print loss, accuracy, etc. every few steps so your tuning is informed rather than blind"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
