{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Week 6 Assignment: Building a Neural Network to Train on the MNIST Dataset\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "MNIST consists of digits handwritten by 250 different people: 50% are high-school students and 50% are employees of the Census Bureau. The test set contains handwritten digits in the same proportions.\n",
    "\n",
    "Problem statement:\n",
    "Using TensorFlow, build and train a neural network that reaches over 98% accuracy on the test set.\n",
    "\n",
    "Hints:\n",
    "Solving this requires combining the fundamentals covered so far:\n",
    "- deep neural networks\n",
    "- activation functions\n",
    "- regularization\n",
    "- initialization\n",
    "\n",
    "It also involves exploring the following hyperparameters:\n",
    "- number of hidden layers\n",
    "- number of neurons in each hidden layer\n",
    "- learning rate\n",
    "- regularization factor\n",
    "- parameters of the weight-initialization distribution"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Import Required Packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use the retry module or similar alternatives.\n"
     ]
    }
   ],
   "source": [
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Load the Data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The MNIST data is fetched with the helper methods bundled with TensorFlow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-2-698ada706af1>:3: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please write your own downloading logic.\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\train-images-idx3-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\train-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.one_hot on tensors.\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\t10k-images-idx3-ubyte.gz\n",
      "Extracting /tmp/tensorflow/mnist/input_data\\t10k-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Anaconda\\envs\\python3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
    "data_dir = '/tmp/tensorflow/mnist/input_data'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)"
   ]
  },
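  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`one_hot=True` above turns each digit label into a length-10 indicator vector. A minimal NumPy sketch of that encoding (the helper name `one_hot` is illustrative, not part of the dataset API):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def one_hot(labels, depth=10):\n",
    "    # Row i gets a 1 in column labels[i], zeros elsewhere\n",
    "    out = np.zeros((len(labels), depth), dtype=np.float32)\n",
    "    out[np.arange(len(labels)), labels] = 1.0\n",
    "    return out\n",
    "\n",
    "print(one_hot([5, 0]))  # row 0 is hot at column 5, row 1 at column 0\n",
    "```"
   ]
  },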
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This design uses a three-layer network with two hidden layers.\n",
    "The first hidden layer has 2056 neurons and the second has 512.\n",
    "With this many neurons, test accuracy stayed below 98% until an L2 regularization term was added.\n",
    "The hidden layers use the ReLU activation function; the output layer uses softmax (applied inside the loss here).\n",
    "\n",
    "In experiments, drawing the initial weights from a truncated normal distribution worked best; the biases are initialized to 0.3."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Build the Network Architecture"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "LAYER1_SHAPE_X = 784\n",
    "LAYER1_SHAPE_Y = 2056\n",
    "LAYER2_SHAPE_X = 2056\n",
    "LAYER2_SHAPE_Y = 512\n",
    "LAYER3_SHAPE_X = 512\n",
    "LAYER3_SHAPE_Y = 10\n",
    "REGULATION_FACTOR = 0.01\n",
    "LEARNING_RATE = 0.001"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the input X. Each image is 28*28 = 784 pixels, so there are 784 features.\n",
    "X_ = tf.placeholder(tf.float32, [None, LAYER1_SHAPE_X])\n",
    "# Define the placeholder for the ground-truth labels y\n",
    "y_ = tf.placeholder(tf.float32, [None, LAYER3_SHAPE_Y])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# First hidden layer\n",
    "hidden_W1 = tf.Variable(tf.truncated_normal([LAYER1_SHAPE_X,LAYER1_SHAPE_Y],stddev=0.05))\n",
    "hidden_B1 = tf.Variable(tf.constant(0.3,shape=[LAYER1_SHAPE_Y]))\n",
    "hidden_H1 = tf.nn.relu(tf.matmul(X_, hidden_W1) + hidden_B1)\n",
    "\n",
    "# Second hidden layer\n",
    "hidden_W2 = tf.Variable(tf.truncated_normal([LAYER2_SHAPE_X,LAYER2_SHAPE_Y],stddev=0.05))\n",
    "hidden_B2 = tf.Variable(tf.constant(0.3,shape=[LAYER2_SHAPE_Y]))\n",
    "hidden_H2 = tf.nn.relu(tf.matmul(hidden_H1, hidden_W2) + hidden_B2)\n",
    "\n",
    "# Output layer: raw logits; softmax is applied inside the loss function\n",
    "OUPTUT_W3 = tf.Variable(tf.truncated_normal([LAYER3_SHAPE_X,LAYER3_SHAPE_Y],stddev=0.05))\n",
    "OUPTUT_B3 = tf.Variable(tf.constant(0.3,shape=[LAYER3_SHAPE_Y]))\n",
    "y_pre = tf.matmul(hidden_H2, OUPTUT_W3) + OUPTUT_B3"
   ]
  },
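  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The forward pass built above can be sketched in plain NumPy to check the shapes (the weights here are random stand-ins with the same shapes as the graph variables, not trained values):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "relu = lambda x: np.maximum(x, 0.0)\n",
    "\n",
    "# Random stand-ins matching the variable shapes above\n",
    "W1, b1 = rng.normal(0, 0.05, (784, 2056)), np.full(2056, 0.3)\n",
    "W2, b2 = rng.normal(0, 0.05, (2056, 512)), np.full(512, 0.3)\n",
    "W3, b3 = rng.normal(0, 0.05, (512, 10)), np.full(10, 0.3)\n",
    "\n",
    "x = rng.random((32, 784))   # a batch of 32 flattened 28*28 images\n",
    "h1 = relu(x @ W1 + b1)      # first hidden layer\n",
    "h2 = relu(h1 @ W2 + b2)     # second hidden layer\n",
    "logits = h2 @ W3 + b3       # output layer (logits)\n",
    "print(logits.shape)         # (32, 10)\n",
    "```"
   ]
  },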
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Create the Regularization Term"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create an L2 regularizer and compute the regularization penalty\n",
    "regularizer = tf.contrib.layers.l2_regularizer(REGULATION_FACTOR)  # L2 regularization function\n",
    "reg_val = regularizer(OUPTUT_W3)  # note: only the output-layer weights are regularized here"
   ]
  },
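  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the penalty concrete: `l2_regularizer(scale)` returns a function that computes `scale * sum(w**2) / 2`, i.e. `scale` times `tf.nn.l2_loss` of the weights. A NumPy sketch of the same formula (`l2_penalty` is an illustrative name, not a TensorFlow API):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def l2_penalty(w, scale):\n",
    "    # Same formula as tf.contrib.layers.l2_regularizer: scale * sum(w^2) / 2\n",
    "    return scale * np.sum(np.square(w)) / 2.0\n",
    "\n",
    "w = np.array([[1.0, -2.0], [3.0, 0.5]])\n",
    "print(l2_penalty(w, 0.01))  # 0.01 * (1 + 4 + 9 + 0.25) / 2 = 0.07125\n",
    "```"
   ]
  },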
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Compute the Cross-Entropy Loss, Gradient Updates, and Accuracy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-7-291da9a8ac92>:2: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "\n",
      "Future major versions of TensorFlow will allow gradients to flow\n",
      "into the labels input on backprop by default.\n",
      "\n",
      "See tf.nn.softmax_cross_entropy_with_logits_v2.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Compute the cross-entropy loss and add the regularization penalty to form the objective\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_pre))\n",
    "cross_entropy_loss = cross_entropy + reg_val\n",
    "# Compute and apply the gradient updates with Adam\n",
    "train_step = tf.train.AdamOptimizer(LEARNING_RATE).minimize(cross_entropy_loss)\n",
    "\n",
    "# Evaluate predictions\n",
    "correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))"
   ]
  },
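  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Per example, `softmax_cross_entropy_with_logits` computes `-sum(labels * log_softmax(logits))`. A NumPy sketch of that formula using the numerically stable log-softmax (`softmax_cross_entropy` is an illustrative name):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax_cross_entropy(labels, logits):\n",
    "    # Stable log-softmax: shift by the row max before exponentiating\n",
    "    shifted = logits - logits.max(axis=1, keepdims=True)\n",
    "    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))\n",
    "    # Cross-entropy against the (one-hot) labels, one value per row\n",
    "    return -(labels * log_softmax).sum(axis=1)\n",
    "\n",
    "labels = np.array([[0.0, 1.0, 0.0]])          # one-hot label: class 1\n",
    "logits = np.array([[2.0, 2.0, 2.0]])          # uniform logits\n",
    "print(softmax_cross_entropy(labels, logits))  # [log(3)] = [1.0986...]\n",
    "```"
   ]
  },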
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. Create a Session, Initialize Variables, and Train Iteratively"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a Session\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "# Run the initialization op, which also allocates memory for the variables\n",
    "sess.run(init_op)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, training accuracy 0.941655 --- testing accuracy 0.9389\n",
      "step 200, training accuracy 0.958345 --- testing accuracy 0.956\n",
      "step 300, training accuracy 0.963673 --- testing accuracy 0.9569\n",
      "step 400, training accuracy 0.9716 --- testing accuracy 0.9669\n",
      "step 500, training accuracy 0.975873 --- testing accuracy 0.969\n",
      "step 600, training accuracy 0.979127 --- testing accuracy 0.9727\n",
      "step 700, training accuracy 0.978182 --- testing accuracy 0.9667\n",
      "step 800, training accuracy 0.982073 --- testing accuracy 0.972\n",
      "step 900, training accuracy 0.983218 --- testing accuracy 0.9767\n",
      "step 1000, training accuracy 0.984164 --- testing accuracy 0.975\n",
      "step 1100, training accuracy 0.982636 --- testing accuracy 0.9728\n",
      "step 1200, training accuracy 0.989455 --- testing accuracy 0.9793\n",
      "step 1300, training accuracy 0.989727 --- testing accuracy 0.9784\n",
      "step 1400, training accuracy 0.989309 --- testing accuracy 0.9769\n",
      "step 1500, training accuracy 0.9874 --- testing accuracy 0.9749\n",
      "step 1600, training accuracy 0.988436 --- testing accuracy 0.9753\n",
      "step 1700, training accuracy 0.993073 --- testing accuracy 0.9815\n",
      "step 1800, training accuracy 0.992091 --- testing accuracy 0.9785\n",
      "step 1900, training accuracy 0.991364 --- testing accuracy 0.9769\n",
      "step 2000, training accuracy 0.9902 --- testing accuracy 0.978\n",
      "step 2100, training accuracy 0.992582 --- testing accuracy 0.9779\n",
      "step 2200, training accuracy 0.992345 --- testing accuracy 0.9783\n",
      "step 2300, training accuracy 0.994655 --- testing accuracy 0.9819\n",
      "step 2400, training accuracy 0.993745 --- testing accuracy 0.9792\n",
      "step 2500, training accuracy 0.995309 --- testing accuracy 0.9814\n",
      "step 2600, training accuracy 0.995364 --- testing accuracy 0.9822\n",
      "step 2700, training accuracy 0.993036 --- testing accuracy 0.9803\n",
      "step 2800, training accuracy 0.994945 --- testing accuracy 0.9801\n",
      "step 2900, training accuracy 0.996927 --- testing accuracy 0.9837\n",
      "step 3000, training accuracy 0.995091 --- testing accuracy 0.9805\n",
      "step 3100, training accuracy 0.994091 --- testing accuracy 0.9806\n",
      "step 3200, training accuracy 0.994655 --- testing accuracy 0.9808\n",
      "step 3300, training accuracy 0.995873 --- testing accuracy 0.9812\n",
      "step 3400, training accuracy 0.997182 --- testing accuracy 0.9806\n",
      "step 3500, training accuracy 0.996036 --- testing accuracy 0.9827\n",
      "step 3600, training accuracy 0.995727 --- testing accuracy 0.9813\n",
      "step 3700, training accuracy 0.991909 --- testing accuracy 0.9754\n",
      "step 3800, training accuracy 0.994582 --- testing accuracy 0.9807\n",
      "step 3900, training accuracy 0.997055 --- testing accuracy 0.9814\n",
      "step 4000, training accuracy 0.9962 --- testing accuracy 0.9804\n",
      "step 4100, training accuracy 0.995836 --- testing accuracy 0.9803\n",
      "step 4200, training accuracy 0.996836 --- testing accuracy 0.9805\n",
      "step 4300, training accuracy 0.993964 --- testing accuracy 0.9782\n",
      "step 4400, training accuracy 0.990364 --- testing accuracy 0.9772\n",
      "step 4500, training accuracy 0.995073 --- testing accuracy 0.9806\n",
      "step 4600, training accuracy 0.994909 --- testing accuracy 0.9814\n",
      "step 4700, training accuracy 0.997745 --- testing accuracy 0.9826\n",
      "step 4800, training accuracy 0.994255 --- testing accuracy 0.9788\n",
      "step 4900, training accuracy 0.9956 --- testing accuracy 0.9802\n",
      "step 5000, training accuracy 0.997273 --- testing accuracy 0.9826\n",
      "step 5100, training accuracy 0.997782 --- testing accuracy 0.983\n",
      "step 5200, training accuracy 0.997764 --- testing accuracy 0.9828\n",
      "step 5300, training accuracy 0.996055 --- testing accuracy 0.9815\n",
      "step 5400, training accuracy 0.9982 --- testing accuracy 0.9842\n",
      "step 5500, training accuracy 0.997345 --- testing accuracy 0.9807\n",
      "step 5600, training accuracy 0.998273 --- testing accuracy 0.9831\n",
      "step 5700, training accuracy 0.9988 --- testing accuracy 0.9833\n",
      "step 5800, training accuracy 0.998091 --- testing accuracy 0.9841\n",
      "step 5900, training accuracy 0.998964 --- testing accuracy 0.9852\n",
      "step 6000, training accuracy 0.997255 --- testing accuracy 0.9818\n"
     ]
    }
   ],
   "source": [
    "# 6000 iterations with batch size 100 cover 600000 samples, about 11 epochs of the 55000-image training set.\n",
    "# Test accuracy first exceeded 98% after roughly 230000 samples (a little over 4 epochs); the final test accuracy was 98.15%.\n",
    "for i in range(6000):\n",
    "    X_batch, y_batch = mnist.train.next_batch(batch_size=100)\n",
    "    sess.run(train_step,feed_dict={X_: X_batch, y_: y_batch})\n",
    "    if (i+1) % 100 == 0:\n",
    "        train_accuracy = sess.run(accuracy,feed_dict={X_: mnist.train.images, y_: mnist.train.labels})\n",
    "        test_accuracy = sess.run(accuracy,feed_dict={X_: mnist.test.images, y_: mnist.test.labels})\n",
    "        print (\"step %d, training accuracy %g --- testing accuracy %g\" % (i+1, train_accuracy,test_accuracy))"
   ]
  },
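  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The epoch count in the comment above checks out with a little arithmetic:\n",
    "\n",
    "```python\n",
    "batch_size = 100\n",
    "iterations = 6000\n",
    "train_size = 55000  # size of the MNIST training split used here\n",
    "\n",
    "samples_seen = batch_size * iterations\n",
    "epochs = samples_seen / train_size\n",
    "print(samples_seen, round(epochs, 2))  # 600000 10.91\n",
    "```"
   ]
  },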
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Summary of TensorFlow fundamentals:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TensorFlow\n",
    "\n",
    "TensorFlow's computation model: the computation graph.\n",
    "The graph contains nodes, operations (Ops), variables, and so on. The graph only describes the computation; nothing is evaluated until a Session is created and run.\n",
    "An Op carries the information needed to perform a particular computation.\n",
    "\n",
    "TensorFlow's data model: tensors.\n",
    "\n",
    "TensorFlow's execution model: sessions.\n",
    "\n",
    "\n",
    "TensorFlow provides several random-number functions, each producing a random tensor of the given shape:\n",
    "\n",
    "1. tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32)\n",
    "\n",
    "   random_normal: normally distributed random values with mean mean and standard deviation stddev. With this initialization the accuracy was 97.8%.\n",
    "\n",
    "2. tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32)\n",
    "\n",
    "   truncated_normal: truncated normally distributed random values with mean mean and standard deviation stddev, keeping only values in [mean-2*stddev, mean+2*stddev]. With this initialization the accuracy was 98.15%.\n",
    "\n",
    "3. tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32)\n",
    "\n",
    "   random_uniform: uniformly distributed random values in [minval, maxval). This initialization performed worst: only about 87% accuracy after training.\n",
    "\n",
    "\n",
    "\n",
    "TensorFlow regularizers\n",
    "Adding a regularization term to the loss function is an important way to prevent overfitting. Applying a regularizer to parameters in TensorFlow takes two steps:\n",
    "1. Create a regularization method (a function/object).\n",
    "2. Apply that method to the parameters.\n",
    "\n",
    "L1 regularization: tf.contrib.layers.l1_regularizer(scale, scope=None)\n",
    "\n",
    "L2 regularization: tf.contrib.layers.l2_regularizer(scale, scope=None)\n",
    "\n",
    "\n",
    "\n",
    "AdamOptimizer vs. GradientDescentOptimizer\n",
    "\n",
    "Adam dynamically adapts a per-parameter learning rate using first- and second-moment estimates of each parameter's gradient; TensorFlow provides this via tf.train.AdamOptimizer. Adam is still a gradient-based method, but each update step stays within a bounded range, so a very large gradient does not produce a very large step and the parameters evolve more stably.\n",
    "\n",
    "In these experiments, AdamOptimizer appeared to converge faster than GradientDescentOptimizer."
   ]
  },
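  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The truncation behaviour described in item 2 can be sketched in NumPy by rejection sampling, redrawing out-of-range values, which is how tf.truncated_normal is documented to behave (`truncated_normal` below is an illustrative stand-in, not the TensorFlow implementation):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def truncated_normal(shape, mean=0.0, stddev=1.0, rng=None):\n",
    "    # Redraw any sample farther than two standard deviations from the mean\n",
    "    rng = np.random.default_rng() if rng is None else rng\n",
    "    out = rng.normal(mean, stddev, size=shape)\n",
    "    bad = np.abs(out - mean) > 2 * stddev\n",
    "    while bad.any():\n",
    "        out[bad] = rng.normal(mean, stddev, size=int(bad.sum()))\n",
    "        bad = np.abs(out - mean) > 2 * stddev\n",
    "    return out\n",
    "\n",
    "w = truncated_normal((784, 100), stddev=0.05)\n",
    "print(bool(np.abs(w).max() <= 0.1))  # True: everything within 2 * stddev\n",
    "```"
   ]
  },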
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
