{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2018-02-07T15:22:53.211497+08:00",
     "start_time": "2018-02-07T15:22:53.206496Z"
    }
   },
   "source": [
    "# W6_冯炳驹_124298228"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2018-02-07T15:39:16.870730+08:00",
     "start_time": "2018-02-07T15:39:11.525485Z"
    },
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "FLAGS = None\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Here we call the MNIST input helper provided with TensorFlow to read the data for us, downloading it first if it is not already present.\n",
     "\n",
     "<font color=#ff0000>**Change `data_dir` to a directory that suits your environment**</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2018-02-07T15:39:17.431207+08:00",
     "start_time": "2018-02-07T15:39:16.871695Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting MNIST_data/train-images-idx3-ubyte.gz\n",
      "Extracting MNIST_data/train-labels-idx1-ubyte.gz\n",
      "Extracting MNIST_data/t10k-images-idx3-ubyte.gz\n",
      "Extracting MNIST_data/t10k-labels-idx1-ubyte.gz\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "550.0"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Import data\n",
    "data_dir = 'MNIST_data/'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)\n",
    "\n",
     "n_batch = mnist.train.num_examples  # number of training examples (55000)\n",
     "n_batch/100  # number of batches per epoch with a batch size of 100"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Parameter initialization, multiple hidden layers, activation functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2018-02-07T15:39:17.508388+08:00",
     "start_time": "2018-02-07T15:39:17.433198Z"
    }
   },
   "outputs": [],
   "source": [
     "# Create the model\n",
     "x_units = 784\n",
     "x = tf.placeholder(tf.float32, [None, x_units])\n",
     "\n",
     "# Hidden layer parameters\n",
     "\n",
     "# Number of output units in hidden layer 1\n",
     "h1_units = 500\n",
     "W1 = tf.Variable(tf.truncated_normal([x_units, h1_units], stddev=0.05))  # weight initialization\n",
     "b1 = tf.Variable(tf.zeros([h1_units]))\n",
     "logit1 = tf.matmul(x, W1) + b1\n",
     "#h1 = tf.nn.sigmoid(logit1)\n",
     "h1 = tf.nn.relu(logit1)  # activation function\n",
     "#h1 = tf.nn.tanh(logit1)\n",
     "\n",
     "# Number of output units in hidden layer 2\n",
     "h2_units = 300\n",
     "W2 = tf.Variable(tf.truncated_normal([h1_units, h2_units], stddev=0.05))  # weight initialization\n",
     "b2 = tf.Variable(tf.zeros([h2_units]))\n",
     "logit2 = tf.matmul(h1, W2) + b2\n",
     "h2 = tf.nn.tanh(logit2)  # activation function\n",
     "\n",
     "# # Number of output units in (optional) hidden layer 3\n",
     "# h3_units = 100\n",
     "# W3 = tf.Variable(tf.truncated_normal([h2_units, h3_units], stddev=0.05))  # weight initialization\n",
     "# b3 = tf.Variable(tf.zeros([h3_units]))\n",
     "# logit3 = tf.matmul(h2, W3) + b3\n",
     "# h3 = tf.nn.sigmoid(logit3)\n",
     "\n",
     "# Output layer (layer L) parameters\n",
     "\n",
     "# Number of output units in layer L\n",
     "y_nuits = 10\n",
     "WL = tf.Variable(tf.zeros([h2_units, y_nuits]))\n",
     "bL = tf.Variable(tf.zeros([y_nuits]))\n",
     "\n",
     "logit_y = tf.matmul(h2, WL) + bL"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Define the placeholder for our ground-truth labels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2018-02-07T15:39:17.515406+08:00",
     "start_time": "2018-02-07T15:39:17.509395Z"
    }
   },
   "outputs": [],
   "source": [
    "# Define loss and optimizer\n",
    "y_ = tf.placeholder(tf.float32, [None, y_nuits])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Next we compute the cross-entropy. Note that you should not use the manual computation shown in the comments; use the built-in function instead.\n",
     "Another point to watch: the logits argument of softmax_cross_entropy_with_logits must be the **un-activated wx + b**"
   ]
  },
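  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal NumPy-only sketch (with hypothetical 3-class values) of what the loss function does with those raw logits: it applies softmax internally before taking the log, which is why passing already-activated values would effectively apply softmax twice.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "logits = np.array([2.0, 1.0, 0.1])  # raw wx + b, not yet activated\n",
    "label = np.array([1.0, 0.0, 0.0])   # one-hot ground truth\n",
    "\n",
    "# Numerically stable softmax, as the loss function computes internally\n",
    "z = logits - logits.max()\n",
    "probs = np.exp(z) / np.exp(z).sum()\n",
    "\n",
    "# Cross-entropy H(label, probs) = -sum(label * log(probs))\n",
    "xent = -np.sum(label * np.log(probs))\n",
    "print(round(xent, 4))  # about 0.417\n",
    "```"
   ]
  },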
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Here we again call the built-in data reader to fetch one batch at a time.\n",
     "We then run 20 epochs (550 steps each, 11,000 steps in total), optimizing the weights at every step."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Learning rate, regularization penalty factor, regularization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2018-02-07T15:41:08.238742+08:00",
     "start_time": "2018-02-07T15:39:17.517413Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "lambda: 0.000002 \n",
      "epoch:0 acc:0.963900\n",
      "epoch:1 acc:0.975700\n",
      "epoch:2 acc:0.975500\n",
      "epoch:3 acc:0.978300\n",
      "epoch:4 acc:0.979100\n",
      "epoch:5 acc:0.982000\n",
      "epoch:6 acc:0.979600\n",
      "epoch:7 acc:0.979500\n",
      "epoch:8 acc:0.979800\n",
      "epoch:9 acc:0.984200\n",
      "epoch:10 acc:0.984200\n",
      "epoch:11 acc:0.985000\n",
      "epoch:12 acc:0.985800\n",
      "epoch:13 acc:0.985300\n",
      "epoch:14 acc:0.985300\n",
      "epoch:15 acc:0.985400\n",
      "epoch:16 acc:0.985400\n",
      "epoch:17 acc:0.985100\n",
      "epoch:18 acc:0.985200\n",
      "epoch:19 acc:0.985200\n"
     ]
    }
   ],
   "source": [
     "# Train\n",
     "# Learning rate\n",
     "learning_rate = 0.6\n",
     "\n",
     "# Regularization penalty factor\n",
     "lambda1 = 0.000002\n",
     "\n",
     "#lambda_val = [0.01, 0.001, 0.0001, 0.00001, 0.00004, 0.00005, 0.00006]\n",
     "#lambda_val = [0.00001, 0.00004, 0.00005, 0.00006]\n",
     "#lambda_val = [0.00003, 0.00004, 0.000045]\n",
     "#lambda_val = [0.00004, 0.000004]\n",
     "#lambda_val = [0.00005]\n",
     "#for lambda1 in lambda_val:\n",
     "print('lambda: %f ' % lambda1)\n",
     "\n",
     "#tf_beta = tf.placeholder(tf.float32)\n",
     "#cost = cross_entropy + lambda1 * (tf.nn.l2_loss(W1) + tf.nn.l2_loss(b1) + tf.nn.l2_loss(W2) + tf.nn.l2_loss(b2) + tf.nn.l2_loss(WL) + tf.nn.l2_loss(bL))\n",
     "\n",
     "#cost = cross_entropy + lambda1 * (tf.nn.l2_loss(WL))\n",
     "\n",
     "# L1 regularization over all weights and biases\n",
     "regularizer = tf.contrib.layers.l1_regularizer(lambda1)\n",
     "regularization = regularizer(WL) + regularizer(W1) + regularizer(W2) + regularizer(b1) + regularizer(b2) + regularizer(bL)\n",
     "\n",
     "# Cross-entropy on the raw logits, plus the regularization term\n",
     "loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logit_y))\n",
     "cost = loss + regularization\n",
     "\n",
     "train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n",
     "\n",
     "# Build the accuracy ops once, outside the training loop\n",
     "correct_prediction = tf.equal(tf.argmax(logit_y, 1), tf.argmax(y_, 1))\n",
     "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
     "\n",
     "sess = tf.Session()\n",
     "init_op = tf.global_variables_initializer()\n",
     "sess.run(init_op)\n",
     "\n",
     "for epoch in range(20):\n",
     "    for _ in range(550):\n",
     "        batch_xs, batch_ys = mnist.train.next_batch(100)\n",
     "        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
     "\n",
     "    # Evaluate the trained model's accuracy on the test data\n",
     "    acc = sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
     "                                        y_: mnist.test.labels})\n",
     "    print('epoch:%d acc:%f' % (epoch, acc))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2018-02-07T15:41:08.452316+08:00",
     "start_time": "2018-02-07T15:41:08.241751Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.9852\n"
     ]
    }
   ],
   "source": [
    "# Test trained model\n",
    "correct_prediction = tf.equal(tf.argmax(logit_y, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Without question, the original single-layer model is very crude and its performance is unsatisfactory: it only reaches about 92% accuracy.\n",
     "Your task is to use what you have learned so far to optimize this model to above 98% accuracy.\n",
     "Hint:\n",
     "- multiple hidden layers\n",
     "- activation functions\n",
     "- regularization\n",
     "- initialization\n",
     "- experiment with the hyperparameters\n",
     "  - number of units per hidden layer\n",
     "  - learning rate\n",
     "  - regularization penalty factor\n",
     "  - it is best to print loss, accuracy, etc. every few steps, so you can tune with evidence rather than guesswork"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  },
  "toc": {
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": "block",
   "toc_window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
