{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Deep Learning Week 2 Assignment\n",
    "Dataset: MNIST     http://yann.lecun.com/exdb/mnist/    \n",
    "Same dataset as in week 1; this time the model is trained with a CNN.    \n",
    "Requirement: use TensorFlow to build and train a convolutional neural network that reaches over 98% accuracy on the test set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import TensorFlow\n",
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-2-c3d55fec490c>:2: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "WARNING:tensorflow:From /usr/local/miniconda3/envs/tf36/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please write your own downloading logic.\n",
      "WARNING:tensorflow:From /usr/local/miniconda3/envs/tf36/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting /tmp/data/train-images-idx3-ubyte.gz\n",
      "WARNING:tensorflow:From /usr/local/miniconda3/envs/tf36/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting /tmp/data/train-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From /usr/local/miniconda3/envs/tf36/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.one_hot on tensors.\n",
      "Extracting /tmp/data/t10k-images-idx3-ubyte.gz\n",
      "Extracting /tmp/data/t10k-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From /usr/local/miniconda3/envs/tf36/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n"
     ]
    }
   ],
   "source": [
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "mnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((55000, 784), (55000, 10), (10000, 784), (10000, 10))"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Shapes of the training and test sets\n",
    "mnist.train.images.shape, mnist.train.labels.shape, mnist.test.images.shape, mnist.test.labels.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Convolutional neural network\n",
    "Plan:\n",
    "1. Try a single conv + pooling layer, no regularization, with a small learning rate (based on last week's settings)\n",
    "2. Extend to two conv + pooling layers, no regularization, same learning rate\n",
    " - input -> filter -> relu -> pooling -> filter -> relu -> pooling -> fully connected (relu) -> dropout -> softmax -> y\n",
    "3. Two conv + pooling layers with regularization (dropout / l2), and tune the learning rate"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**tf.nn.conv2d**   \n",
    "tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=True, data_format='NHWC', \n",
    "            dilations=[1, 1, 1, 1], name=None)\n",
    " - Computes a 2-D convolution given 4-D input and filter tensors.\n",
    " - input tensor of shape [batch, in_height, in_width, in_channels] \n",
    " - a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels]\n",
    " - strides: A list of ints. 1-D tensor of length 4.\n",
    " - padding: A string from: \"SAME\", \"VALID\".\n",
    " \n",
    "**tf.nn.max_pool**    \n",
    "tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)     \n",
    " - value: A 4-D Tensor, specified by data_format.\n",
    " - ksize: A list or tuple of 4 ints. The size of the window for each dimension of the input tensor.\n",
    " - strides: A list or tuple of 4 ints. The stride of the sliding window for each dimension of the input tensor."
   ]
  },
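  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the shape bookkeeping concrete, a small sketch in plain Python (no TensorFlow; the helper names are made up for illustration). With padding='SAME' the output spatial size is ceil(input / stride), so a stride-1 convolution preserves the 28*28 size and each 2*2 pooling with stride 2 halves it: 28 -> 14 -> 7.\n",
    "```python\n",
    "import math\n",
    "\n",
    "def same_conv_size(size, stride=1):\n",
    "    # padding='SAME': output size = ceil(input size / stride)\n",
    "    return math.ceil(size / stride)\n",
    "\n",
    "def pool_2x2_size(size):\n",
    "    # 2x2 max pooling with stride 2 and SAME padding\n",
    "    return math.ceil(size / 2)\n",
    "\n",
    "h = 28\n",
    "h = pool_2x2_size(same_conv_size(h))   # after conv1 + pool1: 14\n",
    "h = pool_2x2_size(same_conv_size(h))   # after conv2 + pool2: 7\n",
    "print(h)\n",
    "```"
   ]
  },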
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Placeholders for the input x and ground truth y_; their shapes depend on the batch fed in.\n",
    "x = tf.placeholder(tf.float32, shape=[None, 784])   \n",
    "y_ = tf.placeholder(tf.float32, shape=[None, 10])   \n",
    "\n",
    "# 1. Helper functions for weights, biases, convolution and pooling\n",
    "# Weights: use a truncated normal (values within two standard deviations of the mean)\n",
    "# to break symmetry and avoid starting from all-zero gradients\n",
    "def weight_variable(shape):\n",
    "    initial = tf.truncated_normal(shape, stddev=0.1)\n",
    "    return tf.Variable(initial)\n",
    "\n",
    "# Biases: initialized to a small positive value to avoid dead neurons with relu activation\n",
    "def bias_variable(shape):\n",
    "    initial = tf.constant(0.1, shape=shape)\n",
    "    return tf.Variable(initial)\n",
    "\n",
    "# Convolution: stride 1 in both directions; with padding='SAME' the output has the same size as the input\n",
    "def conv2d(x, W):\n",
    "    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')\n",
    "\n",
    "# Pooling: 2x2 max pooling\n",
    "def max_pool_2x2(x):\n",
    "    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Add convolution + pooling\n",
    "Try a basic single conv + pooling layer without regularization and observe the result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 0, train accuracy 0.08, test accuracy 0.1046\n",
      "step 1000, train accuracy 0.66, test accuracy 0.8107\n",
      "step 2000, train accuracy 0.86, test accuracy 0.8989\n",
      "step 3000, train accuracy 0.94, test accuracy 0.9137\n"
     ]
    }
   ],
   "source": [
    "# 2. Define the network\n",
    "# First layer: kernel = 2*2, 1 input channel, 10 output channels\n",
    "kernel_conv1 = weight_variable([2, 2, 1, 10])\n",
    "b_conv1 = bias_variable([10])  # one bias per output channel\n",
    "\n",
    "# Reshape x to [batch, in_height, in_width, in_channels] as conv2d expects;\n",
    "# -1 acts like None (the batch size is unknown), images are 28*28 with 1 input channel\n",
    "x_input = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "# Convolve the input, add the bias, apply relu, then pool\n",
    "output_conv1 = tf.nn.relu(conv2d(x_input, kernel_conv1) + b_conv1)\n",
    "output_pool1 = max_pool_2x2(output_conv1)\n",
    "\n",
    "\n",
    "# Fully connected layer: the feature map is now 14*14*10 (channels); with 128 neurons, w has shape [14*14*10, 128]\n",
    "w_fc1 = weight_variable([14 * 14 * 10, 128])\n",
    "b_fc1 = bias_variable([128])\n",
    "# Flatten the pooled output into vectors\n",
    "output_pool1_flat = tf.reshape(output_pool1, [-1, 14*14*10])\n",
    "# Fully connected output: pooled features * w + bias, then relu\n",
    "output_fc1 = tf.nn.relu(tf.matmul(output_pool1_flat, w_fc1) + b_fc1)\n",
    "\n",
    "# Output layer weights and bias; the labels are the digits 0-9, so the output size is 10\n",
    "w_fc2 = weight_variable([128, 10])\n",
    "b_fc2 = bias_variable([10])\n",
    "\n",
    "# Output layer: keep the raw logits for the loss; y_conv holds the softmax probabilities\n",
    "logits = tf.matmul(output_fc1, w_fc2) + b_fc2\n",
    "y_conv = tf.nn.softmax(logits)\n",
    "\n",
    "# Cross-entropy loss (softmax_cross_entropy_with_logits_v2 expects unnormalized logits, not softmax output)\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))\n",
    "train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)    # Adam optimizer, learning rate 0.0001\n",
    "correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))  # whether prediction matches ground truth (boolean)\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # accuracy = correct predictions / all predictions\n",
    "\n",
    "# Create the session and initialize all variables\n",
    "sess = tf.InteractiveSession()\n",
    "sess.run(tf.global_variables_initializer())\n",
    "\n",
    "# Training\n",
    "for i in range(3000+1):\n",
    "    batch = mnist.train.next_batch(50)\n",
    "    train_step.run(feed_dict={x: batch[0], y_: batch[1]})\n",
    "    if i%1000 == 0:\n",
    "        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})\n",
    "        print(\"step %d, train accuracy %g, test accuracy %g\"%\n",
    "              (i, train_accuracy, accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels})))"
   ]
  },
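  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A side note on the loss above: tf.nn.softmax_cross_entropy_with_logits_v2 applies softmax internally, so it should be fed the raw pre-softmax scores. A plain Python sketch of the computation, with made-up logits for illustration:\n",
    "```python\n",
    "import math\n",
    "\n",
    "def softmax(logits):\n",
    "    # subtract the max for numerical stability before exponentiating\n",
    "    m = max(logits)\n",
    "    exps = [math.exp(z - m) for z in logits]\n",
    "    total = sum(exps)\n",
    "    return [e / total for e in exps]\n",
    "\n",
    "def cross_entropy(one_hot_label, logits):\n",
    "    # -sum(label * log(prob)); softmax is applied to the logits internally\n",
    "    probs = softmax(logits)\n",
    "    return -sum(y * math.log(p) for y, p in zip(one_hot_label, probs))\n",
    "\n",
    "logits = [2.0, 0.5, 0.1]   # made-up raw scores for 3 classes\n",
    "label = [1, 0, 0]          # one-hot ground truth\n",
    "print(cross_entropy(label, logits))\n",
    "```"
   ]
  },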
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One conv + pooling layer, no regularization\n",
    "1. kernel1 = [2,2,1,10], max_pool1 = 2 * 2, fully connected = 128, accuracy 0.913\n",
    "2. kernel1 = [2,2,1,10], max_pool1 = 2 * 2, fully connected = 512, accuracy 0.926   \n",
    "Increasing the number of fully connected neurons improved the accuracy."
   ]
  },
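  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough check on the comparison above, the parameter counts can be tallied with a short plain-Python sketch (the formulas are the standard ones; the layer sizes are the two configurations above). Nearly all parameters sit in the first fully connected layer, which is why widening it has a visible effect:\n",
    "```python\n",
    "def conv_params(kh, kw, c_in, c_out):\n",
    "    # kernel weights plus one bias per output channel\n",
    "    return kh * kw * c_in * c_out + c_out\n",
    "\n",
    "def fc_params(n_in, n_out):\n",
    "    # weight matrix plus one bias per output neuron\n",
    "    return n_in * n_out + n_out\n",
    "\n",
    "# one conv layer [2,2,1,10], fully connected layer on 14*14*10 features, output layer of 10\n",
    "for fc in (128, 512):\n",
    "    total = conv_params(2, 2, 1, 10) + fc_params(14 * 14 * 10, fc) + fc_params(fc, 10)\n",
    "    print(fc, total)\n",
    "```"
   ]
  },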
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 0, train accuracy 0.04, test accuracy 0.0958\n",
      "step 1000, train accuracy 0.88, test accuracy 0.8746\n",
      "step 2000, train accuracy 0.92, test accuracy 0.9212\n",
      "step 3000, train accuracy 1, test accuracy 0.9356\n"
     ]
    }
   ],
   "source": [
    "# 2. Define the network\n",
    "# First layer: kernel = 2*2, 1 input channel, 10 output channels\n",
    "kernel_conv1 = weight_variable([2, 2, 1, 10])\n",
    "b_conv1 = bias_variable([10])  # one bias per output channel\n",
    "\n",
    "# Reshape x to [batch, in_height, in_width, in_channels] as conv2d expects;\n",
    "# -1 acts like None (the batch size is unknown), images are 28*28 with 1 input channel\n",
    "x_input = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "# Convolve the input, add the bias, apply relu, then pool\n",
    "output_conv1 = tf.nn.relu(conv2d(x_input, kernel_conv1) + b_conv1)\n",
    "output_pool1 = max_pool_2x2(output_conv1)\n",
    "\n",
    "# Second layer: kernel = 2*2, 10 input channels, 32 output channels; bias size = number of output channels\n",
    "kernel_conv2 = weight_variable([2, 2, 10, 32])\n",
    "b_conv2 = bias_variable([32])\n",
    "\n",
    "# Convolution + pooling with relu activation, same as the first layer\n",
    "output_conv2 = tf.nn.relu(conv2d(output_pool1, kernel_conv2) + b_conv2)\n",
    "output_pool2 = max_pool_2x2(output_conv2)\n",
    "\n",
    "# Fully connected layer: the feature map is now 7*7*32 (channels); with 512 neurons, w has shape [7*7*32, 512]\n",
    "w_fc1 = weight_variable([7 * 7 * 32, 512])\n",
    "b_fc1 = bias_variable([512])\n",
    "# Flatten the second pooled output into vectors\n",
    "output_pool2_flat = tf.reshape(output_pool2, [-1, 7*7*32])\n",
    "# Fully connected output: pooled features * w + bias, then relu\n",
    "output_fc1 = tf.nn.relu(tf.matmul(output_pool2_flat, w_fc1) + b_fc1)\n",
    "\n",
    "# Output layer weights and bias; the labels are the digits 0-9, so the output size is 10\n",
    "w_fc2 = weight_variable([512, 10])\n",
    "b_fc2 = bias_variable([10])\n",
    "\n",
    "# Apply dropout to the fully connected output\n",
    "keep_prob = tf.placeholder(tf.float32)   # keep probability for dropout\n",
    "output_fc1_drop = tf.nn.dropout(output_fc1, keep_prob)\n",
    "\n",
    "# Output layer: keep the raw logits for the loss; y_conv holds the softmax probabilities\n",
    "logits = tf.matmul(output_fc1_drop, w_fc2) + b_fc2\n",
    "y_conv = tf.nn.softmax(logits)\n",
    "\n",
    "# Cross-entropy loss (computed from the unnormalized logits)\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))\n",
    "train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)    # Adam optimizer, learning rate 0.0001\n",
    "correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))  # whether prediction matches ground truth (boolean)\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # accuracy = correct predictions / all predictions\n",
    "\n",
    "# Create the session and initialize all variables\n",
    "sess = tf.InteractiveSession()\n",
    "sess.run(tf.global_variables_initializer())\n",
    "\n",
    "# Training: dropout with keep_prob 0.5 during training, 1.0 for evaluation\n",
    "for i in range(3000+1):\n",
    "    batch = mnist.train.next_batch(50)\n",
    "    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n",
    "    if i%1000 == 0:\n",
    "        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})\n",
    "        print(\"step %d, train accuracy %g, test accuracy %g\"%\n",
    "              (i, train_accuracy, accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Two conv + pooling layers, no regularization\n",
    "1. kernel1 = [2,2,1,10], max_pool1 = 2*2, kernel2 = [2,2,10,64], max_pool2 = 2 * 2, fully connected = 1024, accuracy 0.947 \n",
    "2. kernel1 = [2,2,1,10], max_pool1 = 2*2, kernel2 = [2,2,10,32], max_pool2 = 2 * 2, fully connected = 512, accuracy 0.935   \n",
    "With fewer kernels the accuracy dropped too. So do more kernels and more fully connected neurons improve accuracy?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Add more convolution and pooling layers\n",
    "Extend to two conv + pooling layers and add regularization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.1 Define the network"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2. Define the network\n",
    "# First layer: kernel = 5*5, 1 input channel, 32 output channels\n",
    "kernel_conv1 = weight_variable([5, 5, 1, 32])\n",
    "b_conv1 = bias_variable([32])  # one bias per output channel\n",
    "\n",
    "# Reshape x to [batch, in_height, in_width, in_channels] as conv2d expects;\n",
    "# -1 acts like None (the batch size is unknown), images are 28*28 with 1 input channel\n",
    "x_input = tf.reshape(x, [-1, 28, 28, 1])\n",
    "\n",
    "# Convolve the input, add the bias, apply relu, then pool\n",
    "output_conv1 = tf.nn.relu(conv2d(x_input, kernel_conv1) + b_conv1)\n",
    "output_pool1 = max_pool_2x2(output_conv1)\n",
    "\n",
    "# Second layer: kernel = 5*5, 32 input channels, 64 output channels; bias size = number of output channels\n",
    "kernel_conv2 = weight_variable([5, 5, 32, 64])\n",
    "b_conv2 = bias_variable([64])\n",
    "\n",
    "# Convolution + pooling with relu activation, same as the first layer\n",
    "output_conv2 = tf.nn.relu(conv2d(output_pool1, kernel_conv2) + b_conv2)\n",
    "output_pool2 = max_pool_2x2(output_conv2)\n",
    "\n",
    "# Fully connected layer: the feature map is now 7*7*64 (channels); with 1024 neurons, w has shape [7*7*64, 1024]\n",
    "w_fc1 = weight_variable([7 * 7 * 64, 1024])\n",
    "b_fc1 = bias_variable([1024])\n",
    "# Flatten the second pooled output into vectors\n",
    "output_pool2_flat = tf.reshape(output_pool2, [-1, 7*7*64])\n",
    "# Fully connected output: pooled features * w + bias, then relu\n",
    "output_fc1 = tf.nn.relu(tf.matmul(output_pool2_flat, w_fc1) + b_fc1)\n",
    "\n",
    "# Output layer weights and bias; the labels are the digits 0-9, so the output size is 10\n",
    "w_fc2 = weight_variable([1024, 10])\n",
    "b_fc2 = bias_variable([10])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.2 Train and evaluate the network\n",
    "Two conv + pooling layers, no regularization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 0, train accuracy 0.22, test accuracy 0.1006\n",
      "step 1000, train accuracy 0.8, test accuracy 0.7597\n",
      "step 2000, train accuracy 1, test accuracy 0.964\n",
      "step 3000, train accuracy 0.98, test accuracy 0.9808\n"
     ]
    }
   ],
   "source": [
    "# Output layer: keep the raw logits for the loss; y_conv holds the softmax probabilities\n",
    "logits = tf.matmul(output_fc1, w_fc2) + b_fc2\n",
    "y_conv = tf.nn.softmax(logits)\n",
    "\n",
    "# Cross-entropy loss (computed from the unnormalized logits)\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))\n",
    "train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)    # Adam optimizer, learning rate 0.0001\n",
    "correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))  # whether prediction matches ground truth (boolean)\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # accuracy = correct predictions / all predictions\n",
    "\n",
    "# Create the session and initialize all variables\n",
    "sess = tf.InteractiveSession()\n",
    "sess.run(tf.global_variables_initializer())\n",
    "\n",
    "# Training (keep_prob is a placeholder left over from an earlier cell; this graph does not use it, but it must still be fed)\n",
    "for i in range(3000+1):\n",
    "    batch = mnist.train.next_batch(50)\n",
    "    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n",
    "    if i%1000 == 0:\n",
    "        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})\n",
    "        print(\"step %d, train accuracy %g, test accuracy %g\"%\n",
    "              (i, train_accuracy, accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Two conv + pooling layers, no regularization: 98% accuracy after 3000 training steps"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.3 Tune the regularization and the learning rate\n",
    "1) dropout  &emsp;2) l2 regularization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 0, train accuracy 0.04, test accuracy 0.0586\n",
      "step 1000, train accuracy 0.94, test accuracy 0.955\n",
      "step 2000, train accuracy 0.96, test accuracy 0.9703\n",
      "step 3000, train accuracy 0.98, test accuracy 0.9773\n",
      "step 4000, train accuracy 1, test accuracy 0.9805\n"
     ]
    }
   ],
   "source": [
    "# Apply dropout to the fully connected output\n",
    "keep_prob = tf.placeholder(tf.float32)   # keep probability for dropout\n",
    "output_fc1_drop = tf.nn.dropout(output_fc1, keep_prob)\n",
    "\n",
    "# Output layer: keep the raw logits for the loss; y_conv holds the softmax probabilities\n",
    "logits = tf.matmul(output_fc1_drop, w_fc2) + b_fc2\n",
    "y_conv = tf.nn.softmax(logits)\n",
    "\n",
    "# Cross-entropy loss (computed from the unnormalized logits)\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))\n",
    "train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)    # Adam optimizer, learning rate 0.0001\n",
    "correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))  # whether prediction matches ground truth (boolean)\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # accuracy = correct predictions / all predictions\n",
    "\n",
    "# Create the session and initialize all variables\n",
    "sess = tf.InteractiveSession()\n",
    "sess.run(tf.global_variables_initializer())\n",
    "\n",
    "# Training: dropout with keep_prob 0.5 during training, 1.0 for evaluation\n",
    "for i in range(4000+1):\n",
    "    batch = mnist.train.next_batch(50)\n",
    "    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n",
    "    if i%1000 == 0:\n",
    "        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})\n",
    "        print(\"step %d, train accuracy %g, test accuracy %g\"%\n",
    "              (i, train_accuracy, accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. With dropout as the only regularization, 4000 training steps give about 98% accuracy on the test set\n",
    "2. With the commonly used learning rates 0.001 and 0.01, accuracy dropped. Perhaps the steps are too large and overshoot the minimum?"
   ]
  },
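  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of what tf.nn.dropout does with keep_prob (plain Python, with a fixed mask instead of a random one for illustration): each unit is kept with probability keep_prob, and kept units are scaled by 1/keep_prob so the expected activation is unchanged; at evaluation time keep_prob=1.0 makes it a no-op.\n",
    "```python\n",
    "def dropout(values, mask, keep_prob):\n",
    "    # mask[i] is 1 if unit i is kept, 0 if dropped (normally drawn at random\n",
    "    # with P(keep) = keep_prob); kept units are scaled by 1 / keep_prob\n",
    "    return [v * m / keep_prob for v, m in zip(values, mask)]\n",
    "\n",
    "activations = [0.5, 1.0, 2.0, 4.0]\n",
    "print(dropout(activations, [1, 0, 1, 0], 0.5))   # kept units are doubled\n",
    "print(dropout(activations, [1, 1, 1, 1], 1.0))   # keep_prob=1.0: unchanged\n",
    "```"
   ]
  },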
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 0, train accuracy 0.16, test accuracy 0.0982\n",
      "step 1000, train accuracy 0.86, test accuracy 0.8701\n",
      "step 2000, train accuracy 0.96, test accuracy 0.9719\n",
      "step 3000, train accuracy 0.98, test accuracy 0.9794\n",
      "step 4000, train accuracy 0.98, test accuracy 0.9822\n"
     ]
    }
   ],
   "source": [
    "# Cross-entropy loss + l2 regularization on the fully connected weights\n",
    "# (recompute the logits so the loss is taken on unnormalized values, not the softmax output)\n",
    "logits = tf.matmul(output_fc1_drop, w_fc2) + b_fc2\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))\n",
    "regularizers = tf.nn.l2_loss(w_fc1) + tf.nn.l2_loss(w_fc2)\n",
    "loss = cross_entropy + 1e-5 * regularizers\n",
    "# Adam optimizer, learning rate 0.0001; minimize the regularized loss, not the bare cross-entropy\n",
    "train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)\n",
    "\n",
    "# Prediction and evaluation\n",
    "correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))  # whether prediction matches ground truth (boolean)\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # accuracy = correct predictions / all predictions\n",
    "\n",
    "# Create the session and initialize all variables\n",
    "sess = tf.InteractiveSession()\n",
    "sess.run(tf.global_variables_initializer())\n",
    "\n",
    "# Training\n",
    "for i in range(4000+1):\n",
    "    batch = mnist.train.next_batch(50)\n",
    "    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n",
    "    if i%1000 == 0:\n",
    "        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})\n",
    "        print(\"step %d, train accuracy %g, test accuracy %g\"%\n",
    "              (i, train_accuracy, accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})))"
   ]
  },
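  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, tf.nn.l2_loss(w) computes sum(w**2) / 2 (no square root). A plain Python sketch of the regularized loss used above, with made-up numbers for illustration:\n",
    "```python\n",
    "def l2_loss(weights):\n",
    "    # tf.nn.l2_loss: half the sum of squared entries (no square root)\n",
    "    return sum(w * w for w in weights) / 2.0\n",
    "\n",
    "cross_entropy = 0.25       # made-up value of the data loss\n",
    "w_fc1 = [0.1, -0.2, 0.3]   # made-up weights\n",
    "w_fc2 = [0.4, -0.5]\n",
    "regularizers = l2_loss(w_fc1) + l2_loss(w_fc2)\n",
    "loss = cross_entropy + 1e-5 * regularizers\n",
    "print(loss)\n",
    "```"
   ]
  },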
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Applying l2 regularization to the fully connected and output layers gives results comparable to dropout, so either dropout or l2 regularization seems workable here."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
