{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 第六周作业"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "根据现有的模型（一个非常简陋，性能也不理想的模型。目前只能达到92%左右的准确率。）\n",
    "将这个模型优化至98%以上的准确率。\n",
    "Hint：\n",
    "- 多隐层\n",
    "- 激活函数\n",
    "- 正则化\n",
    "- 初始化\n",
    "- 摸索一下各个超参数\n",
    "  - 隐层神经元数量\n",
    "  - 学习率\n",
    "  - 正则化惩罚因子\n",
    "  - 最好每隔几个step就对loss、accuracy等等进行一次输出，这样才能有根据地进行调整"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "跑一下现有模型，为比较后面的模型做基准："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "FLAGS = None\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting /mnistData\\train-images-idx3-ubyte.gz\n",
      "Extracting /mnistData\\train-labels-idx1-ubyte.gz\n",
      "Extracting /mnistData\\t10k-images-idx3-ubyte.gz\n",
      "Extracting /mnistData\\t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
    "#data_dir = '/tmp/tensorflow/mnist/input_data'\n",
    "data_dir = '/mnistData'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model\n",
    "x = tf.placeholder(tf.float32, [None, 784])\n",
    "W = tf.Variable(tf.zeros([784, 10]))\n",
    "b = tf.Variable(tf.zeros([10]))\n",
    "y = tf.matmul(x, W) + b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9197\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "  # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "根据提示，尝试多隐层"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model1\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.zeros([784, 784])) #增加隐层\n",
    "b1 = tf.Variable(tf.zeros([784]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.sigmoid(logist1)    #第一层的输出 等于第二层的输入\n",
    "\n",
    "W2 = tf.Variable(tf.zeros([784, 10]))\n",
    "b2 = tf.Variable(tf.zeros([10]))\n",
    "y = tf.matmul(y1, W2) + b2  #新的输出为第二层神经元的输出\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.3237\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "两层的网络比只有一层的网络还差，猜测可能是发生了过拟合。 尝试减少每层的神经元数目。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model2\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.zeros([784, 392])) #增加隐层\n",
    "b1 = tf.Variable(tf.zeros([392]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.sigmoid(logist1)    #第一层的输出 等于第二层的输入\n",
    "\n",
    "W2 = tf.Variable(tf.zeros([392, 10]))\n",
    "b2 = tf.Variable(tf.zeros([10]))\n",
    "y = tf.matmul(y1, W2) + b2  #新的输出为第二层神经元的输出"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.3536\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "结果稍微好了一点，也是比单层的差好多。看了一下网上的博文，发现问题出在了初始化上面，因为全零初始化，或者所有神经元都是同一个数值，相当于前向传和反向传每个神经元的影响都一致。所以接下来改变初始化。"
   ]
  },
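  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick standalone numpy sketch of the symmetry argument (illustrative only, separate from the TensorFlow model above): with all-zeros initialization every hidden unit outputs the same value, every row of the output-layer weight gradient is identical, and the hidden-layer weights receive exactly zero gradient, so the hidden units can never differentiate.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "x = rng.rand(5, 4)                        # 5 samples, 4 features\n",
    "t = np.eye(3)[rng.randint(0, 3, 5)]       # one-hot targets, 3 classes\n",
    "W1 = np.zeros((4, 6)); b1 = np.zeros(6)   # all-zeros init, 6 hidden units\n",
    "W2 = np.zeros((6, 3)); b2 = np.zeros(3)\n",
    "\n",
    "h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))  # sigmoid hidden layer -> all 0.5\n",
    "logits = h @ W2 + b2\n",
    "p = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)  # softmax\n",
    "\n",
    "# backprop of the mean cross-entropy loss\n",
    "d_logits = (p - t) / len(x)\n",
    "d_W2 = h.T @ d_logits                            # every row is identical\n",
    "d_W1 = x.T @ ((d_logits @ W2.T) * h * (1 - h))   # exactly zero\n",
    "\n",
    "print(np.allclose(d_W2, d_W2[:1, :]))  # True: all hidden units move alike\n",
    "print(np.allclose(d_W1, 0))            # True: the first layer never learns\n",
    "```"
   ]
  },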
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "先试一下random_normal初始化"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model3\n",
    "x = tf.placeholder(tf.float32, [None, 784])\n",
    "W = tf.Variable(tf.random_normal([784, 10]))\n",
    "b = tf.Variable(tf.random_normal([10]))# 用random_normal初始化\n",
    "y = tf.matmul(x, W) + b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.8971\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "  # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这个结果也比全零赋值的差了。有可能是一层的网络中，对初始化的要求不是很敏感。试一下二层网络。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Create the model4\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 392])) #增加隐层\n",
    "b1 = tf.Variable(tf.random_normal([392]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.sigmoid(logist1)    #第一层的输出 等于第二层的输入\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([392, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  #新的输出为第二层神经元的输出"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9269\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "二层网络改了初始化后，结果稍微提升一点。二层网络效果还可以的话，说明最初分析过拟合的结论，禁不起推敲。所以试一下更多神经元。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model5\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 784])) #增加隐层\n",
    "b1 = tf.Variable(tf.random_normal([784]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.sigmoid(logist1)    #第一层的输出 等于第二层的输入\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([784, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  #新的输出为第二层神经元的输出"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9346\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "效果果然好了一些。在尝试更多神经元和隐层。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Create the model6\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 784])) #增加隐层\n",
    "b1 = tf.Variable(tf.random_normal([784]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.sigmoid(logist1)    #第一层的输出 等于第二层的输入\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([784, 784])) #增加隐层\n",
    "b2 = tf.Variable(tf.random_normal([784]))\n",
    "logist2 = tf.matmul(y1, W2) + b2    \n",
    "y2 = tf.nn.sigmoid(logist2)  \n",
    "\n",
    "W3 = tf.Variable(tf.random_normal([784, 10]))\n",
    "b3 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y2, W3) + b3  #新的输出为第三层神经元的输出"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9355\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "更多隐层在这一步有所提升，但是效果没那么明显。算起来的时间还是有点长的。可以试一下两层加更多神经元，以及考虑激活函数和正则了。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model7\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568])) #增加隐层\n",
    "b1 = tf.Variable(tf.random_normal([1568]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.sigmoid(logist1)    #第一层的输出 等于第二层的输入\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  #新的输出为第二层神经元的输出"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9372\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "两层和三层的效果差不多，而且更好一点。所以下一步先试一下激活函数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568])) #\n",
    "b1 = tf.Variable(tf.random_normal([1568]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    #激活函数从sigmoid变relu\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9536\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "激活函数还是有很大作用的！同样的模型试一下正则。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568])) \n",
    "b1 = tf.Variable(tf.random_normal([1568]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    #激活函数从sigmoid变relu\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.65\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularizers=tf.nn.l2_loss(W1)+tf.nn.l2_loss(W2) #先用L2正则\n",
    "beta=0.1 #正则项系数\n",
    "loss=cross_entropy+beta*regularizers\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)#minimize total loss instead of just cross_entropy\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "加正则反而效果更差了。调低一点正则的力度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568])) \n",
    "b1 = tf.Variable(tf.random_normal([1568]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    #激活函数从sigmoid变relu\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.927\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularizers=tf.nn.l2_loss(W1)+tf.nn.l2_loss(W2) #先用L2正则\n",
    "beta=0.01 #正则项系数\n",
    "loss=cross_entropy+beta*regularizers\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)#minimize total loss instead of just cross_entropy\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "如果正则力度更低还不行。那就不用加正则了..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568])) \n",
    "b1 = tf.Variable(tf.random_normal([1568]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    #激活函数从sigmoid变relu\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9564\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularizers=tf.nn.l2_loss(W1)+tf.nn.l2_loss(W2) #先用L2正则\n",
    "beta=0.0001 #正则项系数\n",
    "loss=cross_entropy+beta*regularizers\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)#minimize total loss instead of just cross_entropy\n",
    "\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "看来在正则系数极低的情况下，有正则能稍微提高一点正确率"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "目前模型和损失函数上，看起来没啥好做了。试一下学习率，batch size和循环数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568])) \n",
    "b1 = tf.Variable(tf.random_normal([1568]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    #激活函数从sigmoid变relu\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9584\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularizers=tf.nn.l2_loss(W1)+tf.nn.l2_loss(W2) \n",
    "beta=0.0001 \n",
    "loss=cross_entropy+beta*regularizers\n",
    "train_step = tf.train.GradientDescentOptimizer(0.25).minimize(loss)#改学习率，降低学习率可能收敛速度变慢，但是在最优解徘徊的步子也能小一点\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "降低学习率的策略是比较成功的。但效果也是提升的不大。接下来改一下batch size。应该是增大batch size这样每次用的数据多一些，能带来的信息就更多一点。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568])) \n",
    "b1 = tf.Variable(tf.random_normal([1568]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    #激活函数从sigmoid变relu\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9573\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularizers=tf.nn.l2_loss(W1)+tf.nn.l2_loss(W2) \n",
    "beta=0.0001 \n",
    "loss=cross_entropy+beta*regularizers\n",
    "train_step = tf.train.GradientDescentOptimizer(0.25).minimize(loss)\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(3000):\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(200)#batch size改了\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这个跟预想的不太一样。再改回来，试下改循环次数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568])) \n",
    "b1 = tf.Variable(tf.random_normal([1568]))\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    #激活函数从sigmoid变relu\n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10]))\n",
    "b2 = tf.Variable(tf.random_normal([10]))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9598\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularizers=tf.nn.l2_loss(W1)+tf.nn.l2_loss(W2) \n",
    "beta=0.0001 \n",
    "loss=cross_entropy+beta*regularizers\n",
    "train_step = tf.train.GradientDescentOptimizer(0.25).minimize(loss)\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(6000):#循环次数改了\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这个循环次数的结果跟预想一样。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "目前各种参数我们貌似都尝试了一把。但是效果仍然不能达到98%。回到最初提升效果最好的就是初始化函数。所以在初始化这步上，在挖掘一下，看一下有没有提升的可能。尝试一下其他初始化，和不同std的值。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.random_normal([784, 1568],stddev=0.1)) \n",
    "b1 = tf.Variable(tf.random_normal([1568],stddev=0.1))#改stddev值,默认值是1\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    \n",
    "\n",
    "W2 = tf.Variable(tf.random_normal([1568, 10],stddev=0.1))\n",
    "b2 = tf.Variable(tf.random_normal([10],stddev=0.1))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9808\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularizers=tf.nn.l2_loss(W1)+tf.nn.l2_loss(W2) \n",
    "beta=0.0001 \n",
    "loss=cross_entropy+beta*regularizers\n",
    "train_step = tf.train.GradientDescentOptimizer(0.25).minimize(loss)\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(6000):#循环次数改了\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "至少作业终于可以交差了..."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "试一下截断高斯分布。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create the model8\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "W1 = tf.Variable(tf.truncated_normal([784, 1568],stddev=0.1)) \n",
    "b1 = tf.Variable(tf.truncated_normal([1568],stddev=0.1))#改stddev值,默认值是1\n",
    "logist1 = tf.matmul(x, W1) + b1    \n",
    "y1 = tf.nn.relu(logist1)    \n",
    "\n",
    "W2 = tf.Variable(tf.truncated_normal([1568, 10],stddev=0.1))\n",
    "b2 = tf.Variable(tf.truncated_normal([10],stddev=0.1))\n",
    "y = tf.matmul(y1, W2) + b2  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9812\n"
     ]
    }
   ],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "cross_entropy = tf.reduce_mean(\n",
    "    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "regularizers=tf.nn.l2_loss(W1)+tf.nn.l2_loss(W2) \n",
    "beta=0.0001 \n",
    "loss=cross_entropy+beta*regularizers\n",
    "train_step = tf.train.GradientDescentOptimizer(0.25).minimize(loss)\n",
    "sess = tf.Session()\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)\n",
    "# Train\n",
    "for _ in range(6000):#循环次数改了\n",
    "  batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "      # Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这次最直观的感受就是初始化对神经网络的影响。我也试过以同样参数跑多次，所得结果差不多，略有些偏差。如果要是再做的细致点，可以做一下联合调参。"
   ]
  },
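  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of what that joint search could look like (hypothetical: `build_and_train` is a stand-in that would rebuild the two-layer model above with the given settings, run the training loop, and return test accuracy; here it returns a placeholder score so the loop itself runs):\n",
    "\n",
    "```python\n",
    "import itertools\n",
    "\n",
    "def build_and_train(stddev, lr, beta, steps=6000):\n",
    "    # stand-in for rebuilding and training the model above\n",
    "    return 0.0  # placeholder score\n",
    "\n",
    "grid = {\n",
    "    'stddev': [0.05, 0.1, 0.2],   # init scale\n",
    "    'lr':     [0.1, 0.25, 0.5],   # learning rate\n",
    "    'beta':   [0.0, 1e-4, 1e-3],  # L2 penalty\n",
    "}\n",
    "results = []\n",
    "for stddev, lr, beta in itertools.product(*grid.values()):\n",
    "    acc = build_and_train(stddev, lr, beta)\n",
    "    results.append(((stddev, lr, beta), acc))\n",
    "best = max(results, key=lambda r: r[1])\n",
    "print(len(results), 'configurations tried; best:', best[0])\n",
    "```"
   ]
  },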
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:Anaconda3]",
   "language": "python",
   "name": "conda-env-Anaconda3-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
