{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "本小节对卷积核的深度进行调整-这里将深度调整为之前的一倍大小"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting ./MNIST/train-images-idx3-ubyte.gz\n",
      "Extracting ./MNIST/train-labels-idx1-ubyte.gz\n",
      "Extracting ./MNIST/t10k-images-idx3-ubyte.gz\n",
      "Extracting ./MNIST/t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# 导入数据\n",
    "data_dir = './MNIST'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "#定义数据\n",
    "x = tf.placeholder(tf.float32, [None, 784])   # 输入图片的大小，28x28=784\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])   # 输出0-9共10个数字\n",
    "learning_rate = tf.placeholder(tf.float32)    # 用于接收dropout操作的值，dropout为了防止过拟合\n",
    "\n",
    "with tf.name_scope('reshape'):\n",
    "#-1代表先不考虑输入的图片例子多少这个维度，后面的1是channel的数量，因为我们输入的图片是黑白的，因此channel是1，例如如果是RGB图像，那么channel就是3\n",
    "  x_image = tf.reshape(x, [-1, 28, 28, 1])    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "#from keras.layers.initializers import he_normal\n",
    "# 卷积层定义\n",
    "#函数参数中的filter_size是指卷积核的大小,step表示布长\n",
    "#这里使用函数tf.contrib.layers.variance_scaling_initializer来对权重参数进行He/MRSA初始化，更改参数可以实现Xavier初始化\n",
    "def conv_op(input_op, filter_size, channel_out, name):\n",
    "    h_conv1 = tf.layers.conv2d(input_op, channel_out, [filter_size,filter_size],\n",
    "                             padding='SAME',\n",
    "                             activation=tf.nn.relu,name=name,kernel_initializer=tf.contrib.layers.variance_scaling_initializer())    \n",
    "    return h_conv1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 最大池化层\n",
    "def maxPool_op(input_op, filter_size, step, name):\n",
    "    h_pool1 = tf.layers.max_pooling2d(input_op, pool_size=[filter_size,filter_size],\n",
    "                        strides=[step, step], padding='VALID',name=name)\n",
    "    return h_pool1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"\\ndef full_connection(input_op, channel_out, name):\\n    channel_in = input_op.get_shape()[-1].value\\n    with tf.name_scope(name) as scope:\\n        weight = tf.Variable(tf.truncated_normal([channel_in, channel_out],mean=0,\\n                                                  dtype=tf.float32, stddev=0.1),\\n                                                  collections=[tf.GraphKeys.GLOBAL_VARIABLES,'WEIGHTS'])\\n        #weight = tf.get_variable(shape=[channel_in, channel_out], dtype=tf.float32,\\n        #                         initializer=xavier_initializer_conv2d(), name=scope + 'weight')\\n        bias = tf.Variable(tf.constant(value=0.0, shape=[channel_out], dtype=tf.float32), name='bias')\\n        input_op_reshape = tf.reshape(input_op, [-1, 7 * 7 * 64])\\n        fc = tf.nn.relu(tf.matmul(input_op_reshape, weight) + bias)\\n        return fc\\n\""
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 全连接层\n",
    "'''\n",
    "def full_connection(input_op, channel_out, name):\n",
    "    channel_in = input_op.get_shape()[-1].value\n",
    "    with tf.name_scope(name) as scope:\n",
    "        weight = tf.Variable(tf.truncated_normal([channel_in, channel_out],mean=0,\n",
    "                                                  dtype=tf.float32, stddev=0.1),\n",
    "                                                  collections=[tf.GraphKeys.GLOBAL_VARIABLES,'WEIGHTS'])\n",
    "        #weight = tf.get_variable(shape=[channel_in, channel_out], dtype=tf.float32,\n",
    "        #                         initializer=xavier_initializer_conv2d(), name=scope + 'weight')\n",
    "        bias = tf.Variable(tf.constant(value=0.0, shape=[channel_out], dtype=tf.float32), name='bias')\n",
    "        input_op_reshape = tf.reshape(input_op, [-1, 7 * 7 * 64])\n",
    "        fc = tf.nn.relu(tf.matmul(input_op_reshape, weight) + bias)\n",
    "        return fc\n",
    "'''"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "#第一层卷积层，卷积核为7*7，深度为32，步长为1，输出为28*28*64\n",
    "conv1=conv_op(x_image,filter_size=7,channel_out=64,name='conv1')\n",
    "#第一个池化层，输出14*14*64\n",
    "pool1=maxPool_op(conv1,filter_size=2,step=2,name='pool1')\n",
    "#第二层卷积层，卷积核为3*3，深度为64，步长为1，输出为28*28*128\n",
    "conv2=conv_op(pool1,filter_size=7,channel_out=128,name='conv2')\n",
    "#第二个池化层，输出7*7*128\n",
    "pool2=maxPool_op(conv2,filter_size=2,step=2,name='pool2')"
   ]
  },
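  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check on the output sizes stated in the comments above (plain Python; the helper names are hypothetical, not part of the model): with 'SAME' padding and stride 1 a convolution preserves the spatial size, while a 'VALID' 2x2 pool with stride 2 halves it.\n",
    "\n",
    "```python\n",
    "def same_conv_out(size, stride=1):\n",
    "    # 'SAME' padding: ceil(size / stride)\n",
    "    return -(-size // stride)\n",
    "\n",
    "def valid_pool_out(size, pool, stride):\n",
    "    # 'VALID' padding: floor((size - pool) / stride) + 1\n",
    "    return (size - pool) // stride + 1\n",
    "\n",
    "s = same_conv_out(28)         # conv1 -> 28 (28x28x64)\n",
    "s = valid_pool_out(s, 2, 2)   # pool1 -> 14 (14x14x64)\n",
    "s = same_conv_out(s)          # conv2 -> 14 (14x14x128)\n",
    "s = valid_pool_out(s, 2, 2)   # pool2 -> 7  (7x7x128)\n",
    "print(s)  # 7\n",
    "```"
   ]
  },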
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow.contrib.layers import flatten\n",
    "#全连接层，映射7*7*64特征图，映射为1024个特征\n",
    "with tf.name_scope('fc1'):\n",
    "  h_pool2_flat = flatten(pool2)\n",
    "  h_fc1 = tf.layers.dense(h_pool2_flat, 1024, activation=tf.nn.relu)\n",
    "\n",
    "# Dropout - controls the complexity of the model, prevents co-adaptation of\n",
    "# features.\n",
    "with tf.name_scope('dropout'):\n",
    "  keep_prob = tf.placeholder(tf.float32)\n",
    "  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n",
    "\n",
    "# Map the 1024 features to 10 classes, one for each digit\n",
    "#这里同上，需要注意的是，最后暂不需要使用激活函数\n",
    "with tf.name_scope('fc2'):\n",
    "  y = tf.layers.dense(h_fc1_drop, 10, activation=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 设置正则化方法\n",
    "REGULARIZATION_RATE = 0.0001 # 比较合适的参数\n",
    "#REGULARIZATION_RATE = 0.001 # 比较合适的参数\n",
    "regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)  # 定义L2正则化损失函数\n",
    "#regularization = regularizer(weights1) + regularizer(weights2)  # 计算模型的正则化损失"
   ]
  },
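  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, tf.contrib.layers.l2_regularizer(scale) applied to a weight tensor w computes scale * sum(w^2) / 2 (i.e. scale * tf.nn.l2_loss(w)). A minimal NumPy sketch of the same penalty (hypothetical helper, not used by the model):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "RATE = 0.0001\n",
    "\n",
    "def l2_penalty(w, scale=RATE):\n",
    "    # scale * (sum of squared weights) / 2\n",
    "    return scale * np.sum(np.square(w)) / 2.0\n",
    "\n",
    "w = np.array([[1.0, -2.0], [3.0, 0.5]])\n",
    "print(l2_penalty(w))  # 0.0007125\n",
    "```"
   ]
  },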
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 50, entropy loss: 0.250030, l2_loss: 0.121494, total loss: 0.371524\n",
      "0.92\n",
      "step 100, entropy loss: 0.139898, l2_loss: 0.121725, total loss: 0.261623\n",
      "0.99\n",
      "step 150, entropy loss: 0.181177, l2_loss: 0.121758, total loss: 0.302935\n",
      "0.98\n",
      "step 200, entropy loss: 0.089727, l2_loss: 0.121710, total loss: 0.211437\n",
      "0.96\n",
      "step 250, entropy loss: 0.111228, l2_loss: 0.121624, total loss: 0.232852\n",
      "0.98\n",
      "step 300, entropy loss: 0.113803, l2_loss: 0.121582, total loss: 0.235385\n",
      "0.99\n",
      "step 350, entropy loss: 0.049701, l2_loss: 0.121479, total loss: 0.171179\n",
      "0.99\n",
      "step 400, entropy loss: 0.070000, l2_loss: 0.121365, total loss: 0.191364\n",
      "1.0\n",
      "step 450, entropy loss: 0.037058, l2_loss: 0.121254, total loss: 0.158312\n",
      "1.0\n",
      "step 500, entropy loss: 0.040252, l2_loss: 0.121104, total loss: 0.161356\n",
      "1.0\n",
      "0.9836\n",
      "step 550, entropy loss: 0.023562, l2_loss: 0.120980, total loss: 0.144542\n",
      "1.0\n",
      "step 600, entropy loss: 0.030737, l2_loss: 0.120863, total loss: 0.151600\n",
      "1.0\n",
      "step 650, entropy loss: 0.025749, l2_loss: 0.120719, total loss: 0.146468\n",
      "1.0\n",
      "step 700, entropy loss: 0.023180, l2_loss: 0.120574, total loss: 0.143755\n",
      "1.0\n",
      "step 750, entropy loss: 0.043189, l2_loss: 0.120439, total loss: 0.163628\n",
      "1.0\n",
      "step 800, entropy loss: 0.036021, l2_loss: 0.120289, total loss: 0.156310\n",
      "1.0\n",
      "step 850, entropy loss: 0.046107, l2_loss: 0.120147, total loss: 0.166254\n",
      "1.0\n",
      "step 900, entropy loss: 0.016459, l2_loss: 0.119998, total loss: 0.136457\n",
      "1.0\n",
      "step 950, entropy loss: 0.037150, l2_loss: 0.119852, total loss: 0.157001\n",
      "1.0\n",
      "step 1000, entropy loss: 0.026581, l2_loss: 0.119696, total loss: 0.146277\n",
      "1.0\n",
      "0.9884\n",
      "step 1050, entropy loss: 0.184612, l2_loss: 0.119533, total loss: 0.304145\n",
      "0.99\n",
      "step 1100, entropy loss: 0.010344, l2_loss: 0.119394, total loss: 0.129738\n",
      "1.0\n",
      "step 1150, entropy loss: 0.046921, l2_loss: 0.119241, total loss: 0.166162\n",
      "0.99\n",
      "step 1200, entropy loss: 0.041485, l2_loss: 0.119090, total loss: 0.160575\n",
      "1.0\n",
      "step 1250, entropy loss: 0.033401, l2_loss: 0.118935, total loss: 0.152336\n",
      "1.0\n",
      "step 1300, entropy loss: 0.044435, l2_loss: 0.118781, total loss: 0.163216\n",
      "1.0\n",
      "step 1350, entropy loss: 0.003758, l2_loss: 0.118616, total loss: 0.122374\n",
      "1.0\n",
      "step 1400, entropy loss: 0.015504, l2_loss: 0.118441, total loss: 0.133946\n",
      "1.0\n",
      "step 1450, entropy loss: 0.022747, l2_loss: 0.118281, total loss: 0.141028\n",
      "1.0\n",
      "step 1500, entropy loss: 0.017098, l2_loss: 0.118110, total loss: 0.135208\n",
      "1.0\n",
      "0.9897\n",
      "step 1550, entropy loss: 0.008418, l2_loss: 0.117937, total loss: 0.126355\n",
      "1.0\n",
      "step 1600, entropy loss: 0.019578, l2_loss: 0.117768, total loss: 0.137346\n",
      "1.0\n",
      "step 1650, entropy loss: 0.014743, l2_loss: 0.117592, total loss: 0.132335\n",
      "1.0\n",
      "step 1700, entropy loss: 0.015234, l2_loss: 0.117440, total loss: 0.132675\n",
      "1.0\n",
      "step 1750, entropy loss: 0.025730, l2_loss: 0.117270, total loss: 0.143000\n",
      "1.0\n",
      "step 1800, entropy loss: 0.014318, l2_loss: 0.117096, total loss: 0.131414\n",
      "1.0\n",
      "step 1850, entropy loss: 0.005709, l2_loss: 0.116918, total loss: 0.122627\n",
      "1.0\n",
      "step 1900, entropy loss: 0.014127, l2_loss: 0.116751, total loss: 0.130879\n",
      "1.0\n",
      "step 1950, entropy loss: 0.002003, l2_loss: 0.116591, total loss: 0.118593\n",
      "1.0\n",
      "step 2000, entropy loss: 0.001333, l2_loss: 0.116418, total loss: 0.117751\n",
      "1.0\n",
      "0.9899\n"
     ]
    }
   ],
   "source": [
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularization=0.0\n",
    "for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES):\n",
    "    regularization=regularization+regularizer(w)\n",
    "l2_loss=regularization\n",
    "#l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection('WEIGHTS')] )\n",
    "#total_loss = cross_entropy + 7e-5*l2_loss\n",
    "total_loss = cross_entropy + l2_loss\n",
    "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for step in range(2000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  #lr = 0.01\n",
    "  lr = 0.2    #比较合适的学习率\n",
    "    \n",
    "    \n",
    "\n",
    "  _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss], \n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate:lr, keep_prob:0.5})\n",
    "  \n",
    "  if (step+1) % 50 == 0:\n",
    "    print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % \n",
    "            (step+1, loss, l2_loss_value, total_loss_value))\n",
    "    # Test trained model\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "    print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys, keep_prob:0.5}))\n",
    "  if (step+1) % 500 == 0:\n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                    y_: mnist.test.labels, keep_prob:0.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "心得与小结:\n",
    "这里可以看到正确率很快就达到了接近99%，在训练集上的准确率一直都是100%，这里受限制与电脑的性能，不做太多的参数搜索。实际使用中需要根据实际情况进行参数搜索。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "总结:\n",
    "本周学习到如下:\n",
    "卷积神经网络的层级结构主要包含:\n",
    "• 数据输入层/ Input layer\n",
    "• 卷积计算层/ CONV layer\n",
    "• ReLU激励层 / ReLU layer\n",
    "• 池化层 / Pooling layer\n",
    "• 全连接层 / FC layer\n",
    "卷积层应该可以这么理解：在卷积层中每个神经元连接数据窗的权重是固定的，每个神经元只关注一个特性。神经元就是图像处理中的滤波器，比如边缘检测专用的Sobel滤波器，即卷积层的每个滤波器都会有自己所关注一个图像特征，比如垂直边缘，水平边缘，颜色，纹理等等，这些所有神经元加起来就好比就是整张图像的特征提取器集合。\n",
    "池化层夹在连续的卷积层中间， 用于压缩数据和参数的量，减小过拟合。\n",
    "简而言之，如果输入是图像的话，那么池化层的最主要作用就是压缩图像。因为池化层具有的特征不变性，很适合用于压缩图像。也可以在一定程度上防止过拟合的情况。\n",
    "一般来说深度神经网络的权重初始化参数的初始化方式有很多种，这里根据我们前面的代码，我们知道选择合适的初始化方式有助于我们加速进行收敛。在某些情况下增加卷积核的大小和数量有助于提高网络的收敛速度和准确率。\n",
    "对于深度学习这种包含很多隐层的网络结构，在训练过程中，因为各层参数不停在变化，所以每个隐层都会面临covariate shift的问题，也就是在训练过程中，隐层的输入分布老是变来变去，这就是所谓的“Internal Covariate Shift”，Internal指的是深层网络的隐层，是发生在网络内部的事情，而不是covariate shift问题只发生在输入层。因此BatchNorm就可以用来解决这个问题。\n",
    "BN的基本思想其实相当直观：因为深层神经网络在做非线性变换前的激活输入值（就是那个x=WU+B，U是输入）随着网络深度加深或者在训练过程中，其分布逐渐发生偏移或者变动，之所以训练收敛慢，一般是整体分布逐渐往非线性函数的取值区间的上下限两端靠近（对于Sigmoid函数来说，意味着激活输入值WU+B是大的负值或正值），所以这导致反向传播时低层神经网络的梯度消失，这是训练深层神经网络收敛越来越慢的本质原因，而BN就是通过一定的规范化手段，把每层神经网络任意神经元这个输入值的分布强行拉回到均值为0方差为1的标准正态分布，其实就是把越来越偏的分布强制拉回比较标准的分布，这样使得激活输入值落在非线性函数对输入比较敏感的区域，这样输入的小变化就会导致损失函数较大的变化，意思是这样让梯度变大，避免梯度消失问题产生，而且梯度变大意味着学习收敛速度快，能大大加快训练速度。这里可以有时间做个测试。这里没有使用BatchNorm。\n",
    "同样使用Dropout技术也有助于实现系统正则化，在一定程度上防止系统过拟合。"
   ]
  },
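  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The normalization step described above can be sketched in a few lines of NumPy (a minimal illustration of the idea only; the helper name is hypothetical and this omits the running statistics used at inference time): each feature is normalized over the batch to zero mean and unit variance, then a learnable scale (gamma) and shift (beta) are applied.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):\n",
    "    # Normalize each feature (column) over the batch dimension to\n",
    "    # zero mean and unit variance, then scale and shift.\n",
    "    mu = x.mean(axis=0)\n",
    "    var = x.var(axis=0)\n",
    "    x_hat = (x - mu) / np.sqrt(var + eps)\n",
    "    return gamma * x_hat + beta\n",
    "\n",
    "x = np.array([[1.0, 50.0], [3.0, 70.0], [5.0, 90.0]])\n",
    "y = batch_norm(x)\n",
    "print(y.mean(axis=0))  # approximately [0, 0]\n",
    "```"
   ]
  },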
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.15"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
