{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 深度学习第一周作业\n",
    "数据集：Mnist     http://yann.lecun.com/exdb/mnist/    \n",
    "Mnist 数据集由YannLeCun(杨立坤)建立，基础数据部分来自美国国家标准与技术研究所(National Institute of Standards and Technology，NIST)。训练集 (training set) 由来自 250 个不同人手写的数字(0~9)构成，其中 50% 是高中学生, 50% 来自人口普查局 (the Census Bureau) 的工作人员。测试集(test set) 也是同样比 例的手写数字数据。\n",
    "整个数据集包括 60000 张训练图片，10000 张测试图片。每张图片为一个 28x28 的灰度图片（以28*28=784维的向量表示）。每个像素的数据类型为 uint8，取值从 0(背景)到 255(前景)。    \n",
    "作业要求：使用 tensorflow，构造并训练一个神经网络，在测试集上达到超过 98%的准确率。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/hankaei/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
      "  from ._conv import register_converters as _register_converters\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.12.0\n"
     ]
    }
   ],
   "source": [
    "# 导入tensorflow并查看当前版本\n",
    "#import numpy as np\n",
    "import tensorflow as tf\n",
    "print(tf.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 0. 读入数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-2-cf19ccc9a6aa>:2: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "WARNING:tensorflow:From /Users/hankaei/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please write your own downloading logic.\n",
      "WARNING:tensorflow:From /Users/hankaei/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting /tmp/data/train-images-idx3-ubyte.gz\n",
      "WARNING:tensorflow:From /Users/hankaei/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting /tmp/data/train-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From /Users/hankaei/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.one_hot on tensors.\n",
      "Extracting /tmp/data/t10k-images-idx3-ubyte.gz\n",
      "Extracting /tmp/data/t10k-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From /Users/hankaei/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n"
     ]
    }
   ],
   "source": [
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "mnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((55000, 784), (55000, 10), (10000, 784), (10000, 10))"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 训练集，测试集\n",
    "mnist.train.images.shape, mnist.train.labels.shape, mnist.test.images.shape, mnist.test.labels.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(array([0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.19215688, 0.3137255 , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.21568629, 0.46274513, 0.93725497, 0.9803922 ,\n",
       "        0.9843138 , 0.9725491 , 0.81568635, 0.46274513, 0.10588236,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.21568629, 0.72156864,\n",
       "        0.9686275 , 0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.9921569 , 0.9568628 , 0.82745105, 0.34901962,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.69803923, 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.9921569 , 0.97647065, 0.28235295, 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 1.        ,\n",
       "        0.9921569 , 0.9921569 , 0.63529414, 0.14901961, 0.09803922,\n",
       "        0.27058825, 0.60784316, 0.7176471 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.92549026, 0.49803925, 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.9960785 , 0.9921569 , 0.9921569 ,\n",
       "        0.07450981, 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.03529412, 0.5176471 , 0.90196085, 0.9921569 , 0.9921569 ,\n",
       "        0.9725491 , 0.5411765 , 0.13725491, 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        1.        , 0.9921569 , 0.9921569 , 0.24705884, 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.1137255 , 0.25490198, 0.89019614, 0.9921569 , 0.9921569 ,\n",
       "        0.8470589 , 0.13725491, 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.83921576, 0.9921569 ,\n",
       "        0.9921569 , 0.9215687 , 0.2392157 , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.53333336, 0.9921569 , 0.9921569 , 0.9921569 , 0.82745105,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.09803922, 0.8313726 , 0.9921569 , 0.9921569 ,\n",
       "        0.7176471 , 0.18039216, 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.3254902 , 0.9921569 ,\n",
       "        0.9921569 , 0.9921569 , 0.97647065, 0.34509805, 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.34509805, 0.9803922 , 0.9921569 , 0.9921569 , 0.909804  ,\n",
       "        0.59607846, 0.25882354, 0.25882354, 0.25882354, 0.6392157 ,\n",
       "        0.7686275 , 0.92549026, 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.56078434, 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.24313727,\n",
       "        0.7254902 , 0.9686275 , 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 , 0.94117653,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.78823537,\n",
       "        0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.9921569 , 0.94117653, 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.11764707, 0.7490196 , 0.8117648 ,\n",
       "        0.9607844 , 0.9921569 , 0.9921569 , 0.9921569 , 0.9921569 ,\n",
       "        0.9921569 , 0.8745099 , 0.8352942 , 0.9921569 , 0.9921569 ,\n",
       "        0.9490197 , 0.08235294, 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.43137258, 0.69411767,\n",
       "        0.32941177, 0.5921569 , 0.69803923, 0.48627454, 0.10980393,\n",
       "        0.10980393, 0.9333334 , 0.9921569 , 0.9921569 , 0.45882356,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.6117647 ,\n",
       "        0.9921569 , 0.9921569 , 0.52156866, 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.6117647 , 0.9921569 , 0.9921569 ,\n",
       "        0.7019608 , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.3137255 , 0.9921569 , 0.9921569 , 0.9686275 , 0.03921569,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.04705883, 0.76470596,\n",
       "        0.9921569 , 0.9921569 , 0.48235297, 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.57254905, 0.9686275 , 0.9686275 ,\n",
       "        0.73333335, 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        , 0.        ,\n",
       "        0.        , 0.        , 0.        , 0.        ], dtype=float32),\n",
       " array([0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]))"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 观察训练集和测试集的数据形式\n",
    "mnist.train.images[0], mnist.train.labels[0]"
   ]
  },
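  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 784-dimensional vector above can be reshaped back to 28x28 to view the digit. A minimal sketch, assuming matplotlib is available in the environment:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reshape the flat 784-vector into a 28x28 image and display it (illustrative only)\n",
    "import matplotlib.pyplot as plt\n",
    "img = mnist.train.images[0].reshape(28, 28)\n",
    "plt.imshow(img, cmap='gray')\n",
    "plt.title('label: %d' % mnist.train.labels[0].argmax())\n",
    "plt.show()"
   ]
  },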
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. 简单的感知器模型\n",
    "layer_number = 2（784,10）   \n",
    "learning_rate = 0.5   \n",
    "w,b 初始化为0   \n",
    "step_number = 1000"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Step 1: Accuracy: 0.300600\n",
      "Step 100: Accuracy: 0.896400\n",
      "Step 200: Accuracy: 0.903800\n",
      "Step 300: Accuracy: 0.905600\n",
      "Step 400: Accuracy: 0.913800\n",
      "Step 500: Accuracy: 0.912200\n",
      "Step 600: Accuracy: 0.914200\n",
      "Step 700: Accuracy: 0.915300\n",
      "Step 800: Accuracy: 0.917500\n",
      "Step 900: Accuracy: 0.917100\n",
      "Step 1000: Accuracy: 0.920700\n"
     ]
    }
   ],
   "source": [
    "# 1. 定义变量\n",
    "# 输入x和真值y_的占位符，并非特定值，根据数据不同shape不同。\n",
    "x = tf.placeholder(tf.float32, shape=[None, 784])   # 784是x向量的长度，None是batch的size\n",
    "y_ = tf.placeholder(tf.float32, shape=[None, 10])   # 输出是0～9的分类\n",
    "\n",
    "# 权重w，偏置b，初始化为0\n",
    "W = tf.Variable(tf.zeros([784,10]))\n",
    "b = tf.Variable(tf.zeros([10]))\n",
    "\n",
    "# 2. 预测并计算误差\n",
    "# 计算预测值y\n",
    "y = tf.matmul(x,W) + b\n",
    "\n",
    "# 定义损失函数：平均误差\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=y))\n",
    "\n",
    "# 模型评价\n",
    "correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))    # 预测值与真值是否相等，返回boolean值\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # 预测正确的比例=true的数量/全部结果\n",
    "\n",
    "# 训练step，梯度下降更新权重。之后通过反复运行当前step训练模型\n",
    "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "# 在session可以使用变量之前，生成session并初始化所有变量\n",
    "sess = tf.InteractiveSession()\n",
    "sess.run(tf.global_variables_initializer())\n",
    "\n",
    "# 3. 训练模型\n",
    "# 每次从训练集取100个样本反复1000次训练\n",
    "for i in range(1, 1000+1):\n",
    "    batch = mnist.train.next_batch(100)\n",
    "    train_step.run(feed_dict={x: batch[0], y_: batch[1]})   # feed_dict：从样本取出x，y\n",
    "    # 显示测试集上准确率结果  \n",
    "    if i % 100 == 0 or i == 1:\n",
    "        print('Step %i: Accuracy: %f' % (i, sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))    \n",
    "        \n",
    "sess.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "没有隐层也有将近92的准确率。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "加一层隐层：用zero矩阵初始化w，30000次训练准确率0.6；而用random初始化，同样训练次数准确率上升。\n",
    "  - 如果w设定为同样的值，同一层的神经元就会有一样的变化，导致收敛慢。   "
   ]
  },
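  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The symmetry problem can be seen without TensorFlow: with zero-initialized weights, every hidden unit computes the same output and receives the same gradient. A small numpy sketch (a hypothetical toy layer, not the assignment model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "x_demo = np.random.rand(4)                      # one toy input sample\n",
    "W_zero = np.zeros((4, 3))                       # zero init: every column identical\n",
    "h_zero = 1 / (1 + np.exp(-(x_demo @ W_zero)))   # sigmoid hidden layer\n",
    "print(h_zero)                                   # all activations equal, so all columns get the same gradient\n",
    "W_rand = np.random.randn(4, 3)                  # random init breaks the symmetry\n",
    "print(1 / (1 + np.exp(-(x_demo @ W_rand))))     # activations now differ"
   ]
  },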
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. 多层神经网络"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1 初步调整参数：单隐层、修改权重偏执初始化、增加训练次数\n",
    "size = [784,30,10]   \n",
    "learning_rate = 3，无衰减    \n",
    "w 用正态高斯分布初始化     \n",
    "b 常量初始化    \n",
    "activation: sigmoid    \n",
    "损失：交叉熵损失，不加正则化     \n",
    "训练次数：增加到20000"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Step 1: Accuracy on train: 0.100527 test: 0.103800\n",
      "Step 1000: Accuracy on train: 0.914945 test: 0.910600\n",
      "Step 2000: Accuracy on train: 0.934836 test: 0.925700\n",
      "Step 3000: Accuracy on train: 0.945964 test: 0.934500\n",
      "Step 4000: Accuracy on train: 0.953618 test: 0.940100\n",
      "Step 5000: Accuracy on train: 0.957491 test: 0.942800\n",
      "Step 6000: Accuracy on train: 0.959327 test: 0.942200\n",
      "Step 7000: Accuracy on train: 0.965964 test: 0.947000\n",
      "Step 8000: Accuracy on train: 0.967655 test: 0.948100\n",
      "Step 9000: Accuracy on train: 0.972091 test: 0.949700\n",
      "Step 10000: Accuracy on train: 0.972200 test: 0.952000\n",
      "Step 11000: Accuracy on train: 0.973655 test: 0.951300\n",
      "Step 12000: Accuracy on train: 0.973618 test: 0.950300\n",
      "Step 13000: Accuracy on train: 0.975836 test: 0.951000\n",
      "Step 14000: Accuracy on train: 0.977673 test: 0.951000\n",
      "Step 15000: Accuracy on train: 0.980364 test: 0.951100\n",
      "Step 16000: Accuracy on train: 0.979600 test: 0.952000\n",
      "Step 17000: Accuracy on train: 0.979636 test: 0.947700\n",
      "Step 18000: Accuracy on train: 0.982727 test: 0.952600\n",
      "Step 19000: Accuracy on train: 0.985218 test: 0.952000\n",
      "Step 20000: Accuracy on train: 0.985309 test: 0.950200\n"
     ]
    }
   ],
   "source": [
    "# 1. 定义变量\n",
    "# 输入x和真值y_的占位符，并非特定值，根据数据不同shape不同。\n",
    "x = tf.placeholder(tf.float32, shape=[None, 784])   \n",
    "y_ = tf.placeholder(tf.float32, shape=[None, 10])   \n",
    "\n",
    "# 权重w，偏置b\n",
    "w_h1 = tf.Variable(tf.random_normal([784, 30]))  # 正态分布：mean=0.0, stddev=1.0\n",
    "w_h2 = tf.Variable(tf.random_normal([30, 10]))\n",
    "\n",
    "b_h1 = tf.Variable(tf.constant(0.001, shape = [30]))\n",
    "b_h2 = tf.Variable(tf.constant(0.001, shape = [10]))\n",
    "                \n",
    "\n",
    "# 2. 预测并计算误差\n",
    "# 计算logits,预测值y\n",
    "logits_h1 = tf.matmul(x,w_h1) + b_h1\n",
    "y_h1 = tf.nn.sigmoid(logits_h1) \n",
    "logits_h2 = tf.matmul(y_h1,w_h2) + b_h2 \n",
    "y = logits_h2\n",
    "\n",
    "# 定义损失函数：平均误差\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits_h2))\n",
    "\n",
    "# 模型评价\n",
    "correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))    # 预测值与真值是否相等，返回boolean值\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # 预测正确的比例=true的数量/全部结果\n",
    "\n",
    "# 训练step，梯度下降更新权重。之后通过反复运行当前step训练模型\n",
    "train_step = tf.train.GradientDescentOptimizer(3).minimize(cross_entropy)\n",
    "\n",
    "# 在session可以使用变量之前，生成session并初始化所有变量\n",
    "sess = tf.InteractiveSession()\n",
    "sess.run(tf.global_variables_initializer())\n",
    "\n",
    "# 3. 训练模型\n",
    "# 每次从训练集取100个样本反复1000次训练\n",
    "for i in range(1, 20000+1):\n",
    "    batch = mnist.train.next_batch(100)\n",
    "    train_step.run(feed_dict={x: batch[0], y_: batch[1]})   # feed_dict：从样本取出x，y\n",
    "    # 显示测试集上准确率结果  \n",
    "    if i % 1000 == 0 or i == 1:\n",
    "        train_accuracy = sess.run(accuracy, feed_dict={x: mnist.train.images, y_: mnist.train.labels})\n",
    "        test_accuracy = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})\n",
    "        print('Step %i: Accuracy on train: %f test: %f' %  (i, train_accuracy, test_accuracy))   \n",
    "        \n",
    "sess.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "结果：训练集上有过拟合趋势，测试集上的准确率提高。  \n",
    "思考：增加了隐层，模型更复杂；对w的权重进行随机初始化，比初始化为0收敛的更快；大的学习率时梯度下降的更快，但训练次数的增加可能会梯度爆炸？  \n",
    "下一步尝试增加正则化，更换激活函数（activation）和优化函数（optimizer）。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 进一步调整参数\n",
    "w 横断正态分布初始化   \n",
    "b 常量初始化   \n",
    "activation  sigmoid/relu/swish   \n",
    "隐层神经元数量  30～100    \n",
    "loss：corss_entropy + l2   \n",
    "step_number: 20000～60000"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/hankaei/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py:1702: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).\n",
      "  warnings.warn('An interactive session is already active. This can '\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Step 1: Accuracy on train: 0.233691 test: 0.237200\n",
      "Step 5000: Accuracy on train: 0.957127 test: 0.945100\n",
      "Step 10000: Accuracy on train: 0.978364 test: 0.964200\n",
      "Step 15000: Accuracy on train: 0.986964 test: 0.971500\n",
      "Step 20000: Accuracy on train: 0.993182 test: 0.977000\n",
      "Step 25000: Accuracy on train: 0.995127 test: 0.976800\n",
      "Step 30000: Accuracy on train: 0.996782 test: 0.979200\n",
      "Step 35000: Accuracy on train: 0.996491 test: 0.979400\n",
      "Step 40000: Accuracy on train: 0.997418 test: 0.980000\n",
      "Step 45000: Accuracy on train: 0.996836 test: 0.978700\n",
      "Step 50000: Accuracy on train: 0.996964 test: 0.979400\n",
      "Step 55000: Accuracy on train: 0.996782 test: 0.981000\n",
      "Step 60000: Accuracy on train: 0.997400 test: 0.978700\n"
     ]
    }
   ],
   "source": [
    "beta = 0.001/5\n",
    "# 1. 定义变量\n",
    "# 输入x和真值y_的占位符，并非特定值，根据数据不同shape不同。\n",
    "x = tf.placeholder(tf.float32, shape=[None, 784])   \n",
    "y_ = tf.placeholder(tf.float32, shape=[None, 10])   \n",
    "\n",
    "# 权重w，偏置b\n",
    "w_h1 = tf.Variable(tf.truncated_normal([784, 80]))   # 横断正态分布: 生成方差2倍以内的数\n",
    "w_h2 = tf.Variable(tf.truncated_normal([80, 10]))\n",
    "\n",
    "b_h1 = tf.Variable(tf.constant(0.001, shape = [80]))\n",
    "b_h2 = tf.Variable(tf.constant(0.001, shape = [10]))\n",
    "                \n",
    "\n",
    "# 2. 预测并计算误差\n",
    "# 激活函数：swish = x * sigmoid(x)\n",
    "logits_h1 = tf.matmul(x,w_h1) + b_h1\n",
    "y_h1 = logits_h1 * tf.nn.sigmoid(logits_h1) \n",
    "logits_h2 = tf.matmul(y_h1,w_h2) + b_h2 \n",
    "y = logits_h2\n",
    "\n",
    "# 定失函数 = 平均误差 + l2正则\n",
    "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits_h2))\n",
    "regularizers = tf.nn.l2_loss(w_h1) + tf.nn.l2_loss(w_h2)\n",
    "loss = tf.reduce_mean(cross_entropy + beta * regularizers)\n",
    "# 训练step / optimizer : 梯度下降更新权重。之后通过反复运行当前step训练模型\n",
    "train_step = tf.train.GradientDescentOptimizer(0.55).minimize(loss)\n",
    "\n",
    "# 模型评价\n",
    "correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))    # 预测值与真值是否相等，返回boolean值\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # 预测正确的比例=true的数量/全部结果\n",
    "\n",
    "# 在session可以使用变量之前，生成session并初始化所有变量\n",
    "sess = tf.InteractiveSession()\n",
    "sess.run(tf.global_variables_initializer())\n",
    "\n",
    "# 3. 训练模型\n",
    "# 每次从训练集取100个样本反复1000次训练\n",
    "for i in range(1, 60000+1):\n",
    "    batch = mnist.train.next_batch(100)\n",
    "    train_step.run(feed_dict={x: batch[0], y_: batch[1]})   # feed_dict：从样本取出x，y\n",
    "    # 显示测试集上准确率结果  \n",
    "    if i % 5000 == 0 or i == 1:\n",
    "        train_accuracy = sess.run(accuracy, feed_dict={x: mnist.train.images, y_: mnist.train.labels})\n",
    "        test_accuracy = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})\n",
    "        print('Step %i: Accuracy on train: %f test: %f' %  (i, train_accuracy, test_accuracy))   \n",
    "        \n",
    "sess.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. [784,30,10], activation = relu, learning_rate = 3，准确率0.1\n",
    "2. [784,30,10], activation = sigmoid, learning_rate = 0.5, beta = 0.01, 准确率0.89\n",
    "3. [784,30,10], activation = sigmoid, learing_rate = 0.1, beta = 0.001, 准确率0.94\n",
    "4. [784,50,10], activation = sigmoid, learning_rate = 0.5, beta = 0.001, 准确率0.95\n",
    "5. [784,80,10], activation = swish = x*sigmoid(x), learning_rate = 0.5, beta = 0.001, 准确率0.975\n",
    "6. [784,100,10], activation = swish = x*sigmoid(x), learning_rate = 0.55, beta = 0.001/2, 准确率0.9797\n",
    "7. [784,100,10], activation = swish = x*sigmoid(x), learning_rate = 0.55, beta = 0.001/5, 准确率0.9810"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "增加隐层的其他尝试：\n",
    "learning rate = 0.7\n",
    "1. sigmoid，四层(256,256,256,10)/(256,128,64,10)，step 30000，准确率0.94\n",
    "2. relu，四层(256,256,256,10)，step 10000，准确率稳定0.0979。（神经元死亡）\n",
    "3. sigmoid，两层(10~50,5~25)/(256,128)/，step 30000，准确率0.93\n",
    "4. sigmoid，四层（512,512,256,10），step 10000，准确率0.90    \n",
    "_层数增加，模型变复杂，并不一定会提升准确率？_"
   ]
  },
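  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dead-neuron behavior in experiment 2 can be illustrated directly: once a unit's pre-activation is negative for every input, relu outputs 0 and its gradient is 0, so the unit never recovers. A small numpy sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "z = np.array([-3.0, -1.5, -0.2])   # pre-activations that are all negative\n",
    "relu_out = np.maximum(z, 0)\n",
    "relu_grad = (z > 0).astype(float)  # derivative of relu\n",
    "print(relu_out)                    # all zeros: the unit is dead\n",
    "print(relu_grad)                   # all zeros: no gradient flows, weights stop updating"
   ]
  },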
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "问题： \n",
    "1. 虽然得到98的准确率，但是结果并不稳定，是正常的？\n",
    "2. 先调整哪个参数，有没有一般性的调整顺序？例如：先层数、神经元数量，再学习率？\n",
    "3. 还是说不管先调整哪个参数，以什么顺序调整最后都能获得好的结果？\n",
    "4. 增加神经元数量，同时增加训练次数会获得更好的结果？隐层神经元数量与训练次数有直接关联吗？"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
