{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "ccxssRWbFleF"
   },
   "source": [
    "## **第七周作业  卷积神经网络实现  实现 MNIST 手写数字识别**\n",
    "\n",
    "采用框架   TensorFlow Slim\n",
    "\n",
    "评价标准\n",
    "\n",
    "    准确度达到98%或者以上60分，作为及格标准，未达到者本作业不及格，不予打分。\n",
    "    \n",
    "    使用了正则化因子或文档中给出描述：10分。\n",
    "    \n",
    "    手动初始化参数或文档中给出描述：10分，不设置初始化参数的，只使用默认初始化认为学员没考虑到初始化问题，不给分。\n",
    "    \n",
    "    学习率调整：10分，需要文档中给出描述。\n",
    "    \n",
    "    卷积kernel size和数量调整：10分，需要文档中给出描述。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "syqoh-8RGmlz"
   },
   "source": [
    "## 1. 导入数据包\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "autoexec": {
      "startup": false,
      "wait_interval": 0
     }
    },
    "colab_type": "code",
    "id": "sxIQUiYeVGqj"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use the retry module or similar alternatives.\n"
     ]
    }
   ],
   "source": [
    "import argparse\n",
    "import sys\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf\n",
    "import tensorflow.contrib.slim as slim\n",
    "FLAGS = None"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "TqRnS3yrHFoo"
   },
   "source": [
    "## 2. 读取数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {
     "autoexec": {
      "startup": false,
      "wait_interval": 0
     },
     "base_uri": "https://localhost:8080/",
     "height": 437
    },
    "colab_type": "code",
    "executionInfo": {
     "elapsed": 1343,
     "status": "ok",
     "timestamp": 1531762880665,
     "user": {
      "displayName": "Fan Henry",
      "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
      "userId": "100574945560652196640"
     },
     "user_tz": -480
    },
    "id": "QH16WvKEWJAp",
    "outputId": "9a15b9b6-892c-41c7-f378-48bfd3997b01"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-2-698ada706af1>:3: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please write your own downloading logic.\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\train-images-idx3-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\train-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.one_hot on tensors.\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\t10k-images-idx3-ubyte.gz\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\t10k-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
    "data_dir = '/tmp/tensorflow/mnist/input_data'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "0OII3bY9HOXe"
   },
   "source": [
    "## 3. 建立模型\n",
    "        神经网络架构采用 四层模型\n",
    "        输入层为  n*28*28*1\n",
    "        第一层：1. 卷积层  5*5 的核 通道（数量） 32个   2.池化层 最大池化核 2*2 不改变通道数 \n",
    "        第二层： 1. 卷积层  5*5 的核 通道 （数量）64个   2.池化层 最大池化核 2*2 不改变通道数\n",
    "        第三层： 全连接层  包括1024个节点 \n",
    "        第四层： 输出层     包括10个节点\n",
    "        \n",
    "        初始化参数：\n",
    "        \n",
    "        各层权重初始化参数 truncated_normal_initializer 截断的正态分布中输出随机值  标准偏差 0.01\n",
    "        各层偏置初始化参数 constant_initializer 常数初始化 值为 0.012\n",
    "        正则化参数 ：经过测试 l2 正则比 l1 或 l1 加 l2 效果好，故采取 l2 值为 0.075\n",
    "        激活函数：  自定义 swish 激活函数\n",
    "        另使用了 batch normalization 技术 和 dropout 技术\n",
    "        为了防止过拟合严重。 dropout 的参数设定为 0.3 效果较好\n",
    "        \n",
    "                               "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "colab": {
     "autoexec": {
      "startup": false,
      "wait_interval": 0
     }
    },
    "colab_type": "code",
    "id": "PXd1pHtI5yy_"
   },
   "outputs": [],
   "source": [
    "def swish(x):\n",
    "    return tf.nn.sigmoid(x)*x"
   ]
  },
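  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on the activation above, swish(x) = x * sigmoid(x) can be evaluated with plain Python, independent of TensorFlow (a standalone sketch; swish_scalar is a hypothetical helper name): swish(0) = 0, swish is nearly linear for large positive x, and it approaches 0 for large negative x.\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def swish_scalar(x):\n",
    "    # x * sigmoid(x), the same formula as the TensorFlow version above\n",
    "    return x / (1.0 + math.exp(-x))\n",
    "\n",
    "print(swish_scalar(0.0))    # 0.0\n",
    "print(swish_scalar(10.0))   # close to 10 (near-linear for large positive x)\n",
    "print(swish_scalar(-10.0))  # close to 0\n",
    "```"
   ]
  },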
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {
     "autoexec": {
      "startup": false,
      "wait_interval": 0
     }
    },
    "colab_type": "code",
    "id": "RUtuTKIXzK06"
   },
   "outputs": [],
   "source": [
    "def convolution(inputs):  \n",
    "    batch_norm_params = {'is_training': True, 'decay': 0.91, 'updates_collections': None}\n",
    "    with slim.arg_scope([slim.conv2d, slim.fully_connected],\n",
    "                        normalizer_fn=slim.batch_norm,\n",
    "                        activation_fn = swish,\n",
    "                        normalizer_params=batch_norm_params,\n",
    "                        \n",
    "                        weights_initializer=tf.truncated_normal_initializer(stddev=0.01), \n",
    "                        biases_initializer= tf.constant_initializer(0.012),      \n",
    "                        weights_regularizer = slim.l2_regularizer(0.075)):\n",
    "        x = tf.reshape(inputs, [-1, 28, 28, 1])\n",
    " \n",
    "        net = slim.conv2d(x, 32, [5, 5], scope='conv1')\n",
    "        net = slim.max_pool2d(net, [2, 2], scope='pool1')\n",
    "        net = slim.conv2d(net, 64, [5, 5], scope='conv2')\n",
    "        net = slim.max_pool2d(net, [2, 2], scope='pool2')\n",
    "        net = slim.flatten(net, scope='flatten3')\n",
    "\n",
    "        net = slim.fully_connected(net, 1024, scope='fc3')\n",
    "        net = slim.dropout(net, is_training=True, scope='dropout4',keep_prob=0.3)  # 0.5 by default\n",
    "        outputs = slim.fully_connected(net, 10, activation_fn=None, normalizer_fn=None, scope='fco')\n",
    "    return outputs"
   ]
  },
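  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The spatial sizes flowing through convolution above can be checked with simple arithmetic (a standalone sketch; it assumes slim's defaults of SAME padding for conv2d, which preserves spatial size, and stride 2 for max_pool2d, which halves it):\n",
    "\n",
    "```python\n",
    "# 5x5 SAME convolutions keep the 28x28 input size; each 2x2 max pool\n",
    "# with stride 2 halves it: 28 -> 14 -> 7.\n",
    "size = 28\n",
    "size = size // 2           # after pool1: 14\n",
    "size = size // 2           # after pool2: 7\n",
    "flat = size * size * 64    # flatten3: units fed into fc3\n",
    "print(size, flat)          # 7 3136\n",
    "```"
   ]
  },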
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "autoexec": {
      "startup": false,
      "wait_interval": 0
     }
    },
    "colab_type": "code",
    "id": "0p0SwD61zWFC"
   },
   "outputs": [],
   "source": [
    "x_ = tf.placeholder(tf.float32, [None, 784])\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "y = convolution(x_)    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "3eVWKt77KeJc"
   },
   "source": [
    "## 4. 计算梯度、准确率\n",
    "         使用 python 中的 with 语句进行上下文管理，进行代码简化\n",
    "         1. 使用 sparse_softmax_cross_entropy 加快计算交叉熵的速度，当问题只有一个正确答案时。可运用这个函数\n",
    "         2. 定义学习率，并且调用 exponential_decay 设定指数衰减的学习率 。初始值设定为 10的 -3 次,设定每 1/2 的 epoch 衰减一次.衰减率为 0.92,staircase:如果为true，即楼梯为真，说明学习率要向楼梯一样下降               \n",
    "         3. 梯度下降优化器仍然采用Adam 收敛速度较快\n",
    "        \n",
    "         "
   ]
  },
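  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The staircase decay schedule described above can be sketched in plain Python (a standalone illustration; decayed_lr is a hypothetical helper, and decay_steps = 27500 assumes the 55000-image MNIST training split, halved):\n",
    "\n",
    "```python\n",
    "def decayed_lr(step, batch_size=300, decay_steps=27500):\n",
    "    # Staircase decay: lr = 1e-3 * 0.92 ** floor(examples_seen / decay_steps)\n",
    "    examples_seen = step * batch_size\n",
    "    return 1e-3 * 0.92 ** (examples_seen // decay_steps)\n",
    "\n",
    "print(decayed_lr(0))      # 0.001\n",
    "print(decayed_lr(91))     # 0.001 (27300 examples, still below the first stair)\n",
    "print(decayed_lr(92))     # first decay: 27600 examples seen, rate drops by 0.92\n",
    "print(decayed_lr(30000))  # rate after the full 30000-step run\n",
    "```"
   ]
  },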
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "autoexec": {
      "startup": false,
      "wait_interval": 0
     },
     "base_uri": "https://localhost:8080/",
     "height": 201
    },
    "colab_type": "code",
    "executionInfo": {
     "elapsed": 922,
     "status": "ok",
     "timestamp": 1531762885819,
     "user": {
      "displayName": "Fan Henry",
      "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
      "userId": "100574945560652196640"
     },
     "user_tz": -480
    },
    "id": "XQmLoOvqWTOq",
    "outputId": "78def7d3-ba94-44a5-b15e-f9d365472168"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-6-b4d755e91b70>:2: sparse_softmax_cross_entropy (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
      "Instructions for updating:\n",
      "Use tf.losses.sparse_softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.\n",
      "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:434: compute_weighted_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
      "Instructions for updating:\n",
      "Use tf.losses.compute_weighted_loss instead.\n",
      "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:147: add_arg_scope.<locals>.func_with_args (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
      "Instructions for updating:\n",
      "Use tf.losses.add_loss instead.\n"
     ]
    }
   ],
   "source": [
    "#交叉熵\n",
    "with tf.name_scope(\"crossEntro\"):\n",
    "    loss = slim.losses.sparse_softmax_cross_entropy(labels=tf.argmax(y_,1),logits=y)\n",
    "\n",
    "#梯度优化器\n",
    "with tf.name_scope(\"adamOptimizer\"):\n",
    "    batch = tf.Variable(0)\n",
    "    learning_rate = tf.train.exponential_decay(\n",
    "          1e-3,  \n",
    "          batch * 300,   \n",
    "          mnist.train.images.shape[0]//2,  \n",
    "          0.92,  \n",
    "          staircase=True)\n",
    "    train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss,global_step=batch)    \n",
    "\n",
    "#预测结果评估\n",
    "with tf.name_scope(\"accuracy\"):\n",
    "    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Uz1fnB-HPiB8"
   },
   "source": [
    "## 5.创建会话，训练模型并且预测\n",
    "\n",
    "采用了 30000次迭代，每次迭代采用 300个数据作为一个batch 进行训练。\n",
    "可以看出  测试集准确率在 99.37左右。 很多时候上了 99.4。 最高达到了 99.45 \n",
    "最后在 99.36。没有上 99.5  遗憾。。。。。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "autoexec": {
      "startup": false,
      "wait_interval": 0
     },
     "base_uri": "https://localhost:8080/",
     "height": 562
    },
    "colab_type": "code",
    "executionInfo": {
     "elapsed": 1015696,
     "status": "ok",
     "timestamp": 1531763901619,
     "user": {
      "displayName": "Fan Henry",
      "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128",
      "userId": "100574945560652196640"
     },
     "user_tz": -480
    },
    "id": "Os_N_wxnWci5",
    "outputId": "65a3f2db-443a-40cf-ae66-e8f37d212e5c"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 1000, training accuracy 0.998218 --- testing accuracy 0.9916\n",
      "step 2000, training accuracy 0.999873 --- testing accuracy 0.9929\n",
      "step 3000, training accuracy 0.999945 --- testing accuracy 0.9937\n",
      "step 4000, training accuracy 1 --- testing accuracy 0.993\n",
      "step 5000, training accuracy 1 --- testing accuracy 0.9932\n",
      "step 6000, training accuracy 1 --- testing accuracy 0.9941\n",
      "step 7000, training accuracy 1 --- testing accuracy 0.994\n",
      "step 8000, training accuracy 1 --- testing accuracy 0.9944\n",
      "step 9000, training accuracy 1 --- testing accuracy 0.9934\n",
      "step 10000, training accuracy 1 --- testing accuracy 0.9937\n",
      "step 11000, training accuracy 1 --- testing accuracy 0.9931\n",
      "step 12000, training accuracy 1 --- testing accuracy 0.9942\n",
      "step 13000, training accuracy 1 --- testing accuracy 0.9944\n",
      "step 14000, training accuracy 1 --- testing accuracy 0.994\n",
      "step 15000, training accuracy 0.999982 --- testing accuracy 0.9938\n",
      "step 16000, training accuracy 1 --- testing accuracy 0.9937\n",
      "step 17000, training accuracy 1 --- testing accuracy 0.9931\n",
      "step 18000, training accuracy 1 --- testing accuracy 0.9934\n",
      "step 19000, training accuracy 1 --- testing accuracy 0.9938\n",
      "step 20000, training accuracy 0.999982 --- testing accuracy 0.9936\n",
      "step 21000, training accuracy 1 --- testing accuracy 0.9944\n",
      "step 22000, training accuracy 1 --- testing accuracy 0.9939\n",
      "step 23000, training accuracy 0.999982 --- testing accuracy 0.9942\n",
      "step 24000, training accuracy 0.999982 --- testing accuracy 0.9939\n",
      "step 25000, training accuracy 1 --- testing accuracy 0.9939\n",
      "step 26000, training accuracy 1 --- testing accuracy 0.9941\n",
      "step 27000, training accuracy 1 --- testing accuracy 0.9941\n",
      "step 28000, training accuracy 0.999982 --- testing accuracy 0.9945\n",
      "step 29000, training accuracy 1 --- testing accuracy 0.9939\n",
      "step 30000, training accuracy 1 --- testing accuracy 0.9936\n"
     ]
    }
   ],
   "source": [
    "with tf.Session() as sess:\n",
    "    tf.global_variables_initializer().run()\n",
    "    for i in range(30000):\n",
    "        X_batch, y_batch = mnist.train.next_batch(batch_size=300)\n",
    "        sess.run(train_step,feed_dict={x_: X_batch, y_: y_batch})\n",
    "        if (i+1) % 1000 == 0:\n",
    "            train_accuracy = sess.run(accuracy,feed_dict={x_: mnist.train.images, y_: mnist.train.labels})\n",
    "            test_accuracy = sess.run(accuracy,feed_dict={x_: mnist.test.images, y_: mnist.test.labels})\n",
    "            print (\"step %d, training accuracy %g --- testing accuracy %g\" % (i+1, train_accuracy,test_accuracy))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "rFQBAEIrN_Og"
   },
   "source": [
    "## slim 框架 学习总结：\n",
    "\n",
    "\n",
    "slim被放在tensorflow.contrib这个库下面，导入的方法如下：\n",
    " \n",
    "import tensorflow.contrib.slim as slim\n",
    "\n",
    "\n",
    "**slim 包含的各种模块**：\n",
    "\n",
    "arg_scope、data、evaluation、layers、learning、losses、metrics、nets、queues、regularizers \n",
    "\n",
    "\n",
    "**slim.arg_scope** 意义及用法：\n",
    "\n",
    "意义： 为给定的 list_ops_or_scope 存储默认的参数\n",
    "\n",
    "用法 一般和with 一起用， 就类似于定义了一个范围 ，并且提供一些通用的参数\n",
    "在这个范围内的一些操作就不用自己定义参数了，使用这些通用参数就行。如果有特殊参数，这些操作可以另外定义。 非常方便\n",
    "\n",
    "\n",
    "**slim中 的 repeat** 操作 \n",
    "如果有多个重复性的 操作如卷积层操作可以采用 repeat 简化代码：\n",
    "\n",
    "net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')\n",
    "net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')\n",
    "net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')\n",
    "\n",
    "net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')\n",
    "\n",
    "**stack 操作**则可以处理 卷积核或者输出不一样的操作 从而简化代码：\n",
    "\n",
    "\n",
    "x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')\n",
    "x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')\n",
    "x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')\n",
    "x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')\n",
    "\n",
    "slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')\n",
    "\n",
    "\n",
    "\n",
    "### 指数衰减\n",
    "动态调节学习率，最本质的作用当优化到了一定的瓶颈后，出现当前的学习率已不适用于优化，相对而言，学习率偏大，迈的步子较大，到不了底部；即需要降低学习速率。\n",
    "\n",
    "\n",
    "### dropout \n",
    "dropout是指在深度学习网络的训练过程中，对于神经网络单元，按照一定的概率将其暂时从网络中丢弃。用于防止过拟合\n",
    "\n",
    "### batch normalization\n",
    "可以使训练更深的网络变容易，加速收敛，还有一定正则化的效果"
   ]
  }
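  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dropout description above can be made concrete with a small sketch (standalone, using only the random module, not the TensorFlow implementation): during training each unit is kept with probability keep_prob and scaled by 1/keep_prob, so the expected activation is unchanged and no rescaling is needed at test time.\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def dropout(activations, keep_prob):\n",
    "    # Keep each unit with probability keep_prob; scale survivors by 1/keep_prob\n",
    "    # so the expected value of every unit stays the same.\n",
    "    return [a / keep_prob if random.random() < keep_prob else 0.0\n",
    "            for a in activations]\n",
    "\n",
    "random.seed(0)\n",
    "layer = [1.0] * 10\n",
    "print(dropout(layer, keep_prob=0.3))  # roughly 3 of 10 survive, scaled to 1/0.3\n",
    "```"
   ]
  }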
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "collapsed_sections": [],
   "default_view": {},
   "name": "CNN 卷积实现MNIST 七周.ipynb",
   "provenance": [],
   "version": "0.3.2",
   "views": {}
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
