{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Problem statement\n",
    "Starting from the code given in class, raise this model's validation accuracy above 98% through appropriate changes: modify the initialization scheme, add regularization, adjust the number of neurons, add hidden layers, and so on. \n",
    "Hints\n",
    "https://www.tinymind.com/ai100/notebooks/74 \n",
    "Provide a screenshot of the run log, plus a write-up explaining what effect each modification had on the model. \n",
    "Grading criteria\n",
    "The code itself is not graded; if it runs correctly, it is considered free of errors. \n",
    "A normal log output with no obvious errors, with the model reaching 98% accuracy in the log: 60 points. \n",
    "How to change the number of hidden layers, and what effect this has: 10 points. \n",
    "How to change the number of neurons, and what effect this has: 10 points. \n",
    "How to add L1/L2 regularization to the model, and what it does: 10 points. \n",
    "What impact different initialization schemes have on the model: 10 points."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-1-6b6fa26f2842>:10: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please write your own downloading logic.\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting ./train-images-idx3-ubyte.gz\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting ./train-labels-idx1-ubyte.gz\n",
      "Extracting ./t10k-images-idx3-ubyte.gz\n",
      "Extracting ./t10k-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "(55000, 784)\n",
      "(55000,)\n",
      "(5000, 784)\n",
      "(5000,)\n",
      "(10000, 784)\n",
      "(10000,)\n"
     ]
    }
   ],
   "source": [
    "# First, import the libraries we will use.\n",
    "\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "tf.logging.set_verbosity(tf.logging.INFO)\n",
    "\n",
    "# Read the dataset first and inspect what the data looks like\n",
    "mnist = input_data.read_data_sets(\"./\") # if the download fails, fetch the files manually from http://yann.lecun.com/exdb/mnist/ and place them in \"./\"\n",
    "\n",
    "# Training set\n",
    "print(mnist.train.images.shape)\n",
    "print(mnist.train.labels.shape)\n",
    "\n",
    "# Validation set\n",
    "print(mnist.validation.images.shape)\n",
    "print(mnist.validation.labels.shape)\n",
    "\n",
    "# Test set\n",
    "print(mnist.test.images.shape)\n",
    "print(mnist.test.labels.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [],
   "source": [
    "def calLogits(inputArg, cellNum):\n",
    "    length = len(cellNum)\n",
    "    \n",
    "    for i, units_count in enumerate(cellNum):\n",
    "        if i == 0:\n",
    "            last_units_count = 784\n",
    "        \n",
    "        # Fan-in-scaled truncated-normal (Xavier-style) initialization\n",
    "        tnw = tf.truncated_normal([last_units_count, units_count], stddev=1.0/last_units_count)\n",
    "        tnb = tf.truncated_normal([units_count], stddev=1.0/units_count)\n",
    "        W = tf.Variable(tnw)\n",
    "        b = tf.Variable(tnb)\n",
    "        tf.add_to_collection(tf.GraphKeys.WEIGHTS, W) # collect the weights; needed later for L1/L2 regularization\n",
    "        \n",
    "        logits = tf.matmul(inputArg, W) + b\n",
    "        if i < length - 1:\n",
    "            inputArg = tf.nn.relu(logits)\n",
    "            last_units_count = units_count\n",
    "        else:\n",
    "            return logits\n",
    "        \n",
    "x = tf.placeholder(\"float\", [None, 784])\n",
    "y = tf.placeholder(\"int64\", [None])\n",
    "learning_rate = tf.placeholder(\"float\")\n",
    "\n",
    "batch_size = 55\n",
    "training_step = 100000\n",
    "\n",
    "def run(ckpt_path, cellNum):\n",
    "    logits = calLogits(x, cellNum) # build the network and initialize the connection weights\n",
    "    regularizer = tf.contrib.layers.l2_regularizer(scale=5.5/55000) # L2 regularization to mitigate overfitting\n",
    "    reg_term = tf.contrib.layers.apply_regularization(regularizer) # defaults to the GraphKeys.WEIGHTS collection\n",
    "    cross_entropy_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y) + reg_term)\n",
    "    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy_loss)\n",
    "    \n",
    "    pred = tf.nn.softmax(logits)\n",
    "    correct_pred = tf.equal(tf.argmax(pred, 1), y)\n",
    "    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\n",
    "    \n",
    "    saver = tf.train.Saver()\n",
    "\n",
    "    with tf.Session() as sess:\n",
    "        sess.run(tf.global_variables_initializer())\n",
    "\n",
    "        # Validation and test feeds\n",
    "        validate_data = {x: mnist.validation.images, y: mnist.validation.labels}\n",
    "        test_data = {x: mnist.test.images, y: mnist.test.labels}\n",
    "\n",
    "        retry = acc = 0\n",
    "        for i in range(training_step):\n",
    "            xs, ys = mnist.train.next_batch(batch_size)\n",
    "            _, loss = sess.run([optimizer, cross_entropy_loss], feed_dict={x: xs, y: ys, learning_rate: 0.3})\n",
    "\n",
    "            # Check validation accuracy every 100 steps\n",
    "            if i > 0 and i % 100 == 0:\n",
    "                validate_accuracy = sess.run(accuracy, feed_dict=validate_data)\n",
    "                if i % 500 == 0: # print the loss every 500 steps\n",
    "                    print(\"after %d training steps, the loss is %g, the validation accuracy is %g\" % (i, loss, validate_accuracy))\n",
    "                saver.save(sess, ckpt_path, global_step=i)\n",
    "                \n",
    "                if validate_accuracy > 0.98:\n",
    "                    acc = sess.run(accuracy, feed_dict=test_data)\n",
    "                    if acc > 0.98:\n",
    "                        print(\"the training is finished!\")\n",
    "                        print(\"the validation accuracy is:\", validate_accuracy)\n",
    "                        break\n",
    "                    else:\n",
    "                        retry += 1\n",
    "                        if retry > 10:\n",
    "                            retry = 0\n",
    "                            print(\"possible overfitting: step %d, the validation accuracy is %g, the test accuracy is %g\" % (i, validate_accuracy, acc))\n",
    "\n",
    "        if acc < 0.98:\n",
    "            print(\"the training failed!\")\n",
    "\n",
    "        # Final test accuracy\n",
    "        acc = sess.run(accuracy, feed_dict=test_data)\n",
    "        print(\"the final test accuracy is:\", acc)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "after 500 training steps, the loss is 0.31566, the validation accuracy is 0.9044\n",
      "after 1000 training steps, the loss is 0.281617, the validation accuracy is 0.9466\n",
      "after 1500 training steps, the loss is 0.149256, the validation accuracy is 0.9572\n",
      "after 2000 training steps, the loss is 0.05462, the validation accuracy is 0.968\n",
      "after 2500 training steps, the loss is 0.182056, the validation accuracy is 0.9684\n",
      "after 3000 training steps, the loss is 0.0316385, the validation accuracy is 0.9762\n",
      "after 3500 training steps, the loss is 0.161887, the validation accuracy is 0.9678\n",
      "after 4000 training steps, the loss is 0.0598631, the validation accuracy is 0.9752\n",
      "after 4500 training steps, the loss is 0.0237931, the validation accuracy is 0.9754\n",
      "after 5000 training steps, the loss is 0.0273783, the validation accuracy is 0.9758\n",
      "after 5500 training steps, the loss is 0.0231833, the validation accuracy is 0.9786\n",
      "after 6000 training steps, the loss is 0.0485235, the validation accuracy is 0.9776\n",
      "after 6500 training steps, the loss is 0.0838955, the validation accuracy is 0.9776\n",
      "after 7000 training steps, the loss is 0.0318792, the validation accuracy is 0.9786\n",
      "after 7500 training steps, the loss is 0.0226011, the validation accuracy is 0.9798\n",
      "after 8000 training steps, the loss is 0.028769, the validation accuracy is 0.9772\n",
      "after 8500 training steps, the loss is 0.173965, the validation accuracy is 0.9724\n",
      "after 9000 training steps, the loss is 0.0256064, the validation accuracy is 0.9736\n",
      "after 9500 training steps, the loss is 0.0338893, the validation accuracy is 0.9786\n",
      "after 10000 training steps, the loss is 0.0735565, the validation accuracy is 0.9798\n",
      "the training is finished!\n",
      "the validation accuracy is: 0.9806\n",
      "the final test accuracy is: 0.98\n"
     ]
    }
   ],
   "source": [
    "# run('./model/src.ckpt', (100,10)) # original topology: accuracy too low\n",
    "run('./model/add-hiden.ckpt', (100,50,10)) # add a hidden layer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# How to change the number of hidden layers, and what effect it has\n",
    "As the code above shows, adding hidden layers reduces the network's error and improves accuracy (here the validation accuracy rose from about 93% to over 98%), but it also makes the network more complex, increasing training time and the tendency to overfit. L2 regularization is used here to penalize the weights and keep them from growing unchecked.\n",
    "Another observation: SGD converges quickly at first, but parameter updates slow down later in training, and it may oscillate back and forth near the optimum.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "after 500 training steps, the loss is 0.360228, the validation accuracy is 0.939\n",
      "after 1000 training steps, the loss is 0.183346, the validation accuracy is 0.9588\n",
      "after 1500 training steps, the loss is 0.0974428, the validation accuracy is 0.9668\n",
      "after 2000 training steps, the loss is 0.0663642, the validation accuracy is 0.974\n",
      "after 2500 training steps, the loss is 0.101621, the validation accuracy is 0.9736\n",
      "after 3000 training steps, the loss is 0.0990657, the validation accuracy is 0.9748\n",
      "after 3500 training steps, the loss is 0.0596581, the validation accuracy is 0.978\n",
      "after 4000 training steps, the loss is 0.0489726, the validation accuracy is 0.9794\n",
      "after 4500 training steps, the loss is 0.0238625, the validation accuracy is 0.9786\n",
      "after 5000 training steps, the loss is 0.0692927, the validation accuracy is 0.9774\n",
      "after 5500 training steps, the loss is 0.0731904, the validation accuracy is 0.9764\n",
      "after 6000 training steps, the loss is 0.0671887, the validation accuracy is 0.9792\n",
      "after 6500 training steps, the loss is 0.086815, the validation accuracy is 0.9796\n",
      "possible overfitting: step 6900, the validation accuracy is 0.9802, the test accuracy is 0.9757\n",
      "after 7000 training steps, the loss is 0.0573371, the validation accuracy is 0.982\n",
      "the training is finished!\n",
      "the validation accuracy is: 0.982\n",
      "the final test accuracy is: 0.9811\n"
     ]
    }
   ],
   "source": [
    "run('./model/add-cell.ckpt', (392,10)) # increase the number of neurons"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# How to change the number of neurons, and what effect it has\n",
    "As the code above shows, increasing the number of neurons also reduces the network's error and improves accuracy (here from about 93% to over 98%), but it likewise makes the network more complex, increasing training time and the tendency to overfit. This can be addressed with L1/L2 regularization or dropout; L2 weight penalties are used here. Compared with adding hidden layers, adding neurons converged faster and performed better in this experiment."
   ]
  },
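  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a minimal sketch, not part of the graded run), the two topologies tried above can be compared by their raw parameter counts. Although (392,10) is shallower, it actually holds far more parameters than the deeper (100,50,10) net:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Count weights and biases of a fully connected net with 784 inputs.\n",
    "def param_count(layers, n_in=784):\n",
    "    total = 0\n",
    "    for n_out in layers:\n",
    "        total += n_in * n_out + n_out  # weight matrix + bias vector\n",
    "        n_in = n_out\n",
    "    return total\n",
    "\n",
    "print(param_count((100, 50, 10)))  # deeper net: 84060 parameters\n",
    "print(param_count((392, 10)))      # wider net: 311650 parameters"
   ]
  },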
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# How to add L1/L2 regularization to the model, and what it does\n",
    " regularizer = tf.contrib.layers.l2_regularizer(scale=λ/n) # L2 regularization; λ (lambda) is a user-chosen hyperparameter, n is the number of training samples\n",
    " reg_term = tf.contrib.layers.apply_regularization(regularizer)\n",
    " cross_entropy_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y) + reg_term)\n",
    " \n",
    " Effect: it drives the trained weights to be as small as possible; a large weight is kept only if it significantly reduces the original loss. This mitigates overfitting.\n"
   ]
  },
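  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see concretely what the L2 term does, here is a numpy-only sketch with illustrative numbers, independent of the TF graph above (it assumes the 0.5·scale·∑w² convention of tf.nn.l2_loss). Under gradient descent the penalty multiplies every weight by a constant factor slightly below 1 at each step, i.e. weight decay:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Penalty 0.5*(lam/n)*sum(w**2) contributes gradient (lam/n)*w, so one\n",
    "# gradient-descent step multiplies w by (1 - lr*lam/n) on top of the\n",
    "# data-driven gradient (omitted here to isolate the decay effect).\n",
    "lam, n, lr = 5.5, 55000.0, 0.3\n",
    "decay = 1.0 - lr * lam / n\n",
    "\n",
    "w = np.array([2.0, -1.0, 0.5])\n",
    "for _ in range(1000):\n",
    "    w = w * decay  # pure weight decay\n",
    "print(w)  # each weight has shrunk toward zero, signs preserved"
   ]
  },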
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# What impact different initialization schemes have on the model\n",
    "\n",
    "1) All weights set to one fixed value: too large or too small both hurt training badly. If the weights start small, the signal shrinks as it passes through the layers; if they start large, it blows up. In particular, initializing every weight to 0 can make all units identical and keep the model from converging at all.\n",
    "2) Independent Gaussians with a fixed variance: as depth grows, activations initialized this way easily become too large or too small.\n",
    "3) Xavier initialization (uniform or Gaussian scaled by the fan-in n) or MSRA initialization (Gaussian scaled by the fan-in n): these reduce vanishing gradients and let the signal propagate deeper, keeping activations in a reasonable range (neither too small nor too large) even after many layers."
   ]
  }
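,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The variance argument in points 2) and 3) can be demonstrated with a numpy-only sketch (hypothetical width and depth, unrelated to the MNIST model above): push a random signal through 10 tanh layers with a fixed small init stddev versus a fan-in-scaled (Xavier-style) stddev, and compare the spread of the final activations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "def forward_std(stddev_fn, depth=10, width=256):\n",
    "    # Push a batch of random inputs through `depth` tanh layers and\n",
    "    # report the standard deviation of the final activations.\n",
    "    h = rng.standard_normal((100, width))\n",
    "    for _ in range(depth):\n",
    "        W = rng.standard_normal((width, width)) * stddev_fn(width)\n",
    "        h = np.tanh(h @ W)\n",
    "    return h.std()\n",
    "\n",
    "print(forward_std(lambda n: 0.01))             # fixed small stddev: signal vanishes\n",
    "print(forward_std(lambda n: (1.0 / n) ** 0.5)) # fan-in scaled: signal stays O(1)"
   ]
  }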
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
