{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Week 7 Assignment\n",
    "Using TensorFlow, build and train a neural network that reaches over 98% accuracy on the test set. The solution should apply the fundamentals covered so far:\n",
    "\n",
    "- Deep neural networks (DNN)\n",
    "- Activation functions\n",
    "- Regularization\n",
    "- Initialization\n",
    "- Convolution\n",
    "- Pooling\n",
    "\n",
    "and explore the following hyperparameters:\n",
    "\n",
    "- Convolution kernel size\n",
    "- Number of convolution kernels\n",
    "- Learning rate (lr)\n",
    "- Regularization factor\n",
    "- Weight-initialization distribution parameter (w0)\n",
    "\n",
    "\n",
    "##### Grading criteria\n",
    "1. Accuracy of 98% or above: 60 points. This is the passing threshold; submissions below it fail and are not graded further.\n",
    "2. Using a regularization factor, described in the write-up: 10 points.\n",
    "3. Manually initializing parameters, described in the write-up: 10 points. Relying only on the default initializer is treated as not having considered initialization and earns no points.\n",
    "4. Learning-rate tuning, described in the write-up: 10 points.\n",
    "5. Tuning the convolution kernel size and count, described in the write-up: 10 points."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Users\\zhuhaier\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
      "  from ._conv import register_converters as _register_converters\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-1-bfcd73fd3894>:3: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "WARNING:tensorflow:From C:\\Users\\zhuhaier\\Anaconda3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please write your own downloading logic.\n",
      "WARNING:tensorflow:From C:\\Users\\zhuhaier\\Anaconda3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting input_data\\train-images-idx3-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Users\\zhuhaier\\Anaconda3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting input_data\\train-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Users\\zhuhaier\\Anaconda3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.one_hot on tensors.\n",
      "Extracting input_data\\t10k-images-idx3-ubyte.gz\n",
      "Extracting input_data\\t10k-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From C:\\Users\\zhuhaier\\Anaconda3\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "55000\n",
      "10000\n"
     ]
    }
   ],
   "source": [
    "# Load the MNIST dataset\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "mnist = input_data.read_data_sets('input_data', one_hot=True)\n",
    "print(mnist.train.num_examples)\n",
    "print(mnist.test.num_examples)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "# Placeholders for the input batch and the ground-truth labels\n",
    "x = tf.placeholder(tf.float32, [None, 784]) \n",
    "y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "\n",
    "learning_rate = tf.placeholder(tf.float32)  # learning-rate placeholder, fed each training step\n",
    "\n",
    "with tf.name_scope('reshape'):\n",
    "    x_image=tf.reshape(x,[-1,28,28,1])"
   ]
  },
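  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick NumPy sketch (with dummy data) of what `tf.reshape(x, [-1, 28, 28, 1])` above does: the `-1` lets the batch dimension be inferred, and the trailing `1` adds the single grayscale channel that the convolution layers expect.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# a dummy batch of 5 flattened MNIST digits, shape (5, 784)\n",
    "batch = np.zeros((5, 784), dtype=np.float32)\n",
    "\n",
    "# -1 infers the batch dimension; result is NHWC with one channel\n",
    "images = batch.reshape(-1, 28, 28, 1)\n",
    "print(images.shape)  # (5, 28, 28, 1)\n",
    "```"
   ]
  },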
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    }
   ],
   "source": [
    "# Define the model\n",
    "from keras.layers.core import Dense, Flatten, Dropout\n",
    "from keras.layers.convolutional import Conv2D\n",
    "from keras.layers.pooling import MaxPooling2D\n",
    "# from keras.layers.pooling import GlobalAveragePooling2D\n",
    "from keras.layers.normalization import BatchNormalization\n",
    "\n",
    "\n",
    "# keras.layers.convolutional.Conv2D defaults to Xavier (Glorot) uniform initialization\n",
    "net = Conv2D(32, kernel_size=[3,3], strides=[1,1],activation='relu', padding='same', input_shape=[28,28,1])(x_image)\n",
    "# net = BatchNormalization(axis=1)(net)\n",
    "net = MaxPooling2D(pool_size=[2,2])(net)\n",
    "net = Conv2D(64, kernel_size=[3,3], strides=[1,1],activation='relu', padding='same')(net)\n",
    "# net = BatchNormalization(axis=1)(net)\n",
    "net = MaxPooling2D(pool_size=[2,2])(net)\n",
    "# net = BatchNormalization()(net)\n",
    "net = Flatten()(net)\n",
    "# Dropout on this dense layer gave little benefit in testing, so it is left disabled\n",
    "net = Dense(1024, activation='relu')(net)\n",
    "# net = Dropout(0.1)(net)\n",
    "net = Dense(10,activation='softmax')(net)"
   ]
  },
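  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As noted in the comment above, `Conv2D` defaults to Glorot (Xavier) uniform initialization, which samples weights from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). A minimal sketch of that bound for the first conv layer:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def glorot_uniform_limit(fan_in, fan_out):\n",
    "    # Glorot/Xavier uniform bound: weights ~ U(-limit, limit)\n",
    "    return math.sqrt(6.0 / (fan_in + fan_out))\n",
    "\n",
    "# first conv layer: 3x3 kernel, 1 input channel, 32 filters\n",
    "fan_in = 3 * 3 * 1     # receptive field size * input channels\n",
    "fan_out = 3 * 3 * 32   # receptive field size * output channels\n",
    "print(glorot_uniform_limit(fan_in, fan_out))  # ~0.142\n",
    "```"
   ]
  },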
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Cross-entropy loss plus an L2 regularization term\n",
    "from keras.objectives import categorical_crossentropy\n",
    "cross_entropy = tf.reduce_mean(categorical_crossentropy(y_, net))\n",
    "\n",
    "l2_loss = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
    "total_loss = cross_entropy + 2e-9*l2_loss\n",
    "\n",
    "# Backpropagation: RMSProp on the regularized loss\n",
    "train_step = tf.train.RMSPropOptimizer(learning_rate).minimize(total_loss)"
   ]
  },
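  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`tf.nn.l2_loss(w)` computes `sum(w**2) / 2`, so the regularizer above is half the sum of squares of every trainable weight, scaled by the factor 2e-9. A NumPy sketch with toy weights:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def l2_loss(w):\n",
    "    # matches tf.nn.l2_loss: sum of squares, halved\n",
    "    return np.sum(np.square(w)) / 2.0\n",
    "\n",
    "weights = [np.array([1.0, 2.0]), np.array([3.0])]\n",
    "reg = sum(l2_loss(w) for w in weights)  # (1 + 4)/2 + 9/2 = 7.0\n",
    "total = 0.62 + 2e-9 * reg  # toy cross-entropy plus the scaled penalty\n",
    "print(reg, total)\n",
    "```"
   ]
  },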
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from keras import backend as K\n",
    "sess = tf.Session()\n",
    "K.set_session(sess)\n",
    "init_op = tf.global_variables_initializer()\n",
    "sess.run(init_op)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 100, entropy loss: 0.622347, l2_loss: 847.122009, total loss: 0.622349\n",
      "0.83\n",
      "step 200, entropy loss: 0.229656, l2_loss: 1622.977661, total loss: 0.229659\n",
      "1.0\n",
      "step 300, entropy loss: 0.066979, l2_loss: 3536.133545, total loss: 0.066986\n",
      "1.0\n",
      "accuracy: 0.972\n",
      "lr changed to 0.006999999999999999\n",
      "step 400, entropy loss: 0.045577, l2_loss: 5214.858887, total loss: 0.045587\n",
      "1.0\n",
      "lr changed to 0.004899999999999999\n",
      "step 500, entropy loss: 0.083619, l2_loss: 5863.112305, total loss: 0.083630\n",
      "1.0\n",
      "lr changed to 0.003429999999999999\n",
      "step 600, entropy loss: 0.005876, l2_loss: 6206.247559, total loss: 0.005889\n",
      "1.0\n",
      "accuracy: 0.9875\n",
      "lr changed to 0.002400999999999999\n",
      "step 700, entropy loss: 0.044716, l2_loss: 6360.597168, total loss: 0.044729\n",
      "1.0\n",
      "lr changed to 0.0016806999999999992\n",
      "step 800, entropy loss: 0.014521, l2_loss: 6447.746582, total loss: 0.014534\n",
      "1.0\n",
      "lr changed to 0.0011764899999999994\n",
      "step 900, entropy loss: 0.017895, l2_loss: 6496.502441, total loss: 0.017908\n",
      "1.0\n",
      "accuracy: 0.9909\n",
      "lr changed to 0.0008235429999999996\n",
      "step 1000, entropy loss: 0.084523, l2_loss: 6524.476562, total loss: 0.084536\n",
      "0.99\n",
      "lr changed to 0.0005764800999999997\n",
      "step 1100, entropy loss: 0.006978, l2_loss: 6539.962402, total loss: 0.006991\n",
      "1.0\n",
      "lr changed to 0.00040353606999999974\n",
      "step 1200, entropy loss: 0.040978, l2_loss: 6553.230957, total loss: 0.040991\n",
      "1.0\n",
      "accuracy: 0.9924\n",
      "lr changed to 0.0002824752489999998\n",
      "step 1300, entropy loss: 0.000887, l2_loss: 6561.635742, total loss: 0.000900\n",
      "1.0\n",
      "lr changed to 0.00019773267429999984\n",
      "step 1400, entropy loss: 0.002271, l2_loss: 6566.764160, total loss: 0.002284\n",
      "1.0\n",
      "lr changed to 0.00013841287200999988\n",
      "step 1500, entropy loss: 0.000228, l2_loss: 6569.791504, total loss: 0.000241\n",
      "1.0\n",
      "accuracy: 0.993\n",
      "lr changed to 9.688901040699991e-05\n",
      "step 1600, entropy loss: 0.060183, l2_loss: 6572.027344, total loss: 0.060196\n",
      "0.99\n",
      "lr changed to 6.782230728489993e-05\n",
      "step 1700, entropy loss: 0.000360, l2_loss: 6573.709961, total loss: 0.000373\n",
      "1.0\n",
      "lr changed to 4.747561509942995e-05\n",
      "step 1800, entropy loss: 0.005713, l2_loss: 6574.936035, total loss: 0.005726\n",
      "1.0\n",
      "accuracy: 0.9931\n",
      "lr changed to 3.323293056960096e-05\n",
      "step 1900, entropy loss: 0.000602, l2_loss: 6575.854980, total loss: 0.000615\n",
      "1.0\n",
      "lr changed to 2.3263051398720672e-05\n",
      "step 2000, entropy loss: 0.005529, l2_loss: 6576.595703, total loss: 0.005542\n",
      "1.0\n",
      "lr changed to 1.628413597910447e-05\n",
      "step 2100, entropy loss: 0.000883, l2_loss: 6576.970703, total loss: 0.000896\n",
      "1.0\n",
      "accuracy: 0.9932\n",
      "lr changed to 1.1398895185373128e-05\n",
      "step 2200, entropy loss: 0.005026, l2_loss: 6577.258301, total loss: 0.005040\n",
      "1.0\n",
      "lr changed to 7.97922662976119e-06\n",
      "step 2300, entropy loss: 0.004736, l2_loss: 6577.490723, total loss: 0.004750\n",
      "1.0\n",
      "lr changed to 5.5854586408328325e-06\n",
      "step 2400, entropy loss: 0.000085, l2_loss: 6577.642578, total loss: 0.000098\n",
      "1.0\n",
      "accuracy: 0.9931\n",
      "lr changed to 3.909821048582983e-06\n",
      "step 2500, entropy loss: 0.000129, l2_loss: 6577.735352, total loss: 0.000142\n",
      "1.0\n",
      "lr changed to 2.7368747340080875e-06\n",
      "step 2600, entropy loss: 0.004867, l2_loss: 6577.804199, total loss: 0.004880\n",
      "1.0\n",
      "lr changed to 1.9158123138056613e-06\n",
      "step 2700, entropy loss: 0.055856, l2_loss: 6577.855469, total loss: 0.055870\n",
      "0.98\n",
      "accuracy: 0.9931\n",
      "lr changed to 1.3410686196639628e-06\n",
      "step 2800, entropy loss: 0.002574, l2_loss: 6577.884277, total loss: 0.002587\n",
      "1.0\n",
      "lr changed to 9.38748033764774e-07\n",
      "step 2900, entropy loss: 0.003703, l2_loss: 6577.907715, total loss: 0.003716\n",
      "1.0\n",
      "lr changed to 6.571236236353417e-07\n",
      "step 3000, entropy loss: 0.005828, l2_loss: 6577.918457, total loss: 0.005841\n",
      "1.0\n",
      "accuracy: 0.9931\n"
     ]
    }
   ],
   "source": [
    "# Training loop: after step 300, decay the learning rate by 30% every 100 steps\n",
    "lr = 0.01\n",
    "\n",
    "# Build the evaluation ops once, outside the loop, so new graph nodes\n",
    "# are not created on every evaluation\n",
    "correct_prediction = tf.equal(tf.argmax(net, 1), tf.argmax(y_, 1))\n",
    "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "\n",
    "for step in range(3000):\n",
    "    batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "\n",
    "    _, loss, l2_loss_value, total_loss_value = sess.run(\n",
    "               [train_step, cross_entropy, l2_loss, total_loss],\n",
    "               feed_dict={x: batch_xs, y_: batch_ys, learning_rate: lr})\n",
    "\n",
    "    if (step+1) % 100 == 0:\n",
    "        if step > 300:\n",
    "            lr = lr*0.7\n",
    "            print(\"lr changed to {}\".format(lr))\n",
    "        print('step %d, entropy loss: %f, l2_loss: %f, total loss: %f' % (step+1, loss, l2_loss_value, total_loss_value))\n",
    "        # accuracy on the current training batch\n",
    "        print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys}))\n",
    "    if (step+1) % 300 == 0:\n",
    "        # accuracy on the full test set\n",
    "        print('accuracy:', sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))"
   ]
  },
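  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The decay rule in the loop above (multiply by 0.7 at each 100-step boundary once past step 300) can be sketched as a standalone schedule; the values it produces match the `lr changed to ...` lines in the output:\n",
    "\n",
    "```python\n",
    "def lr_at(step, lr0=0.01, decay=0.7):\n",
    "    # replays the loop's decay rule over the first `step` iterations\n",
    "    lr = lr0\n",
    "    for s in range(step):\n",
    "        if (s + 1) % 100 == 0 and s > 300:\n",
    "            lr *= decay\n",
    "    return lr\n",
    "\n",
    "print(lr_at(400))   # first decay fires at step 400: 0.007\n",
    "print(lr_at(1000))  # 0.01 * 0.7**7, roughly 0.000824\n",
    "```"
   ]
  },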
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "### Summary\n",
    "This assignment took quite a bit of time to understand the conv2d function, and I had less study time than usual, so I submitted late.\n",
    "\n",
    "The TinyMind platform's servers kept failing to connect, so I ran everything on my own laptop's CPU and avoided networks that were too expensive to train. For example, the Batch Normalization layers commented out in the model take several times longer per step on my laptop; they do speed up convergence, but not by much, so the convergence-per-time trade-off was poor and I dropped them.\n",
    "\n",
    "Likewise, dropout on the last fully connected layer barely affects convergence speed or the final result at small drop rates, and makes convergence very slow at large ones, so it was not used either.\n",
    "\n",
    "##### Findings\n",
    "1. Changing the kernel size from 5\\*5 to 3\\*3 noticeably improved convergence speed.\n",
    "2. Using powers of two for the kernel counts (32/64) worked fine.\n",
    "3. Without learning-rate decay, convergence slows and the final result suffers; the current schedule (30% decay every 100 steps) was found through repeated testing.\n",
    "4. Since handwritten digit shapes are fairly complex, I made the regularization factor as small as practical (2e-9); testing confirmed that smaller works better here.\n",
    "5. For weight initialization I checked the Keras API: Conv2D defaults to Xavier uniform initialization, so I left it unchanged.\n",
    "6. In my tests the optimizer had the largest impact on convergence speed and final accuracy. GradientDescentOptimizer only reached about 98% test accuracy; Adam converged faster, hitting 98% in about 600 steps and stabilizing around 99.1% after about 2000 steps; RMSProp was faster still, reaching 99% after 900 steps and stabilizing at 99.3% after 1500 steps.\n",
    "\n",
    "Note: occasionally (roughly 20% of initializations) training gets stuck (\"dies\") right after initialization."
   ]
  },
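  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the network described above, a back-of-the-envelope parameter count (assuming 'same' padding, so the two 2x2 poolings reduce 28x28 to 7x7 before the flatten). Note that the first dense layer, not the conv layers, holds almost all of the weights:\n",
    "\n",
    "```python\n",
    "def conv_params(k, c_in, c_out):\n",
    "    # k*k*c_in weights per filter, plus one bias per filter\n",
    "    return k * k * c_in * c_out + c_out\n",
    "\n",
    "def dense_params(n_in, n_out):\n",
    "    return n_in * n_out + n_out\n",
    "\n",
    "total = (conv_params(3, 1, 32)             # conv1: 320\n",
    "         + conv_params(3, 32, 64)          # conv2: 18,496\n",
    "         + dense_params(7 * 7 * 64, 1024)  # fc1: holds ~99% of the weights\n",
    "         + dense_params(1024, 10))         # fc2: 10,250\n",
    "print(total)  # 3241354\n",
    "```"
   ]
  },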
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
