{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Assignment\n",
     "See `readme.md` for details."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Import tools and data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
      "  from ._conv import register_converters as _register_converters\n",
      "Using TensorFlow backend.\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'channels_last'"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "import tensorflow as tf\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "\n",
    "FLAGS = None\n",
    "#\n",
    "import numpy as np\n",
    "import time\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "#\n",
    "\n",
     "import keras\n",
    "from keras.layers.core import Dense, Flatten\n",
    "from keras.layers.convolutional import Conv2D\n",
    "from keras.layers.pooling import MaxPooling2D\n",
    "\n",
    "from keras import backend as K\n",
    "from keras.objectives import categorical_crossentropy\n",
    "\n",
     "K.image_data_format()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.2.1\n",
      "['C:\\\\ProgramData\\\\Anaconda3\\\\lib\\\\site-packages\\\\tensorflow']\n",
      "2.1.0\n"
     ]
    }
   ],
   "source": [
     "# TensorFlow version\n",
     "print(tf.__version__)\n",
     "# TensorFlow install path\n",
     "print(tf.__path__)\n",
     "# Keras version\n",
     "print(keras.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Here we call the MNIST input helper bundled with TensorFlow to read the data, downloading it first if it has not been downloaded yet.\n",
     "\n",
     "Set `data_dir` below to a directory that suits your environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting input_data\\train-images-idx3-ubyte.gz\n",
      "Extracting input_data\\train-labels-idx1-ubyte.gz\n",
      "Extracting input_data\\t10k-images-idx3-ubyte.gz\n",
      "Extracting input_data\\t10k-labels-idx1-ubyte.gz\n"
     ]
    }
   ],
   "source": [
    "# Import data\n",
     "data_dir = 'input_data'\n",
    "mnist = input_data.read_data_sets(data_dir, one_hot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Exploring the samples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       " type: <class 'tensorflow.contrib.learn.python.learn.datasets.base.Datasets'>\n",
       " training examples: 55000\n",
       " test examples: 10000\n",
       " image type: <class 'numpy.ndarray'>\n",
       " label type: <class 'numpy.ndarray'>\n",
       " training set shape: (55000, 784)\n",
       " training labels shape: (55000, 10)\n",
       " test set shape: (10000, 784)\n",
       " test labels shape: (10000, 10)\n"
      ]
     }
    ],
    "source": [
     "# Inspect the dataset\n",
     "print(\" type: %s\" % (type(mnist)))\n",
     "print(\" training examples: %d\" % (mnist.train.num_examples))\n",
     "print(\" test examples: %d\" % (mnist.test.num_examples))\n",
     "trainimg   = mnist.train.images\n",
     "trainlabel = mnist.train.labels\n",
     "testimg    = mnist.test.images\n",
     "testlabel  = mnist.test.labels\n",
     "# each image is 28 * 28 * 1, flattened to 784\n",
     "print(\" image type: %s\" % (type(trainimg)))\n",
     "print(\" label type: %s\" % (type(trainlabel)))\n",
     "print(\" training set shape: %s\" % (trainimg.shape,))\n",
     "print(\" training labels shape: %s\" % (trainlabel.shape,))\n",
     "print(\" test set shape: %s\" % (testimg.shape,))\n",
     "print(\" test labels shape: %s\" % (testlabel.shape,))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "# Display a sample image\n",
     "img_show = False\n",
     "if img_show:\n",
     "    nsample = 1\n",
     "    # randidx = np.random.randint(trainimg.shape[0], size=nsample)\n",
     "    randidx = [1]  # index of the image to show\n",
     "    for i in randidx:\n",
     "        curr_img   = np.reshape(trainimg[i, :], (28, 28))  # 28 by 28 matrix\n",
     "        curr_label = np.argmax(trainlabel[i, :])  # label\n",
     "        plt.matshow(curr_img, cmap=plt.get_cmap('gray'))\n",
     "        print(\"training sample \" + str(i) + \", label: \" + str(curr_label))\n",
     "        plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Activation functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Activation functions:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "sigmoid, tanh, ReLU, softplus, RReLU, Leaky ReLU, PReLU, Maxout, ELU, SELU, Swish, CReLU, MPELU"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Official docs: http://www.tensorfly.cn/tfdoc/api_docs/python/nn.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Pros and cons:\n",
    "1. https://zhuanlan.zhihu.com/p/22142013   \n",
    "2. https://blog.csdn.net/weixin_39881922/article/details/79045687"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Activation functions can be grouped along several axes, each of which matters during training:\n",
     "1. Smoothness: sigmoid, tanh, ELU and softplus are differentiable everywhere; ReLU is not differentiable at 0\n",
     "2. Shape: ReLU and PReLU are piecewise linear; SELU, ELU, sigmoid and tanh are nonlinear\n",
     "3. Rate of change: e.g. SELU grows faster than ELU for positive inputs"
   ]
  },
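   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The shapes behind these groupings can be checked numerically. Below is a minimal NumPy sketch of a few of the listed activations (illustrative only, not used by the training code):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Illustrative NumPy sketches of a few activations listed above\n",
     "import numpy as np\n",
     "\n",
     "def np_sigmoid(z):\n",
     "    return 1.0 / (1.0 + np.exp(-z))\n",
     "\n",
     "def np_relu(z):\n",
     "    return np.maximum(0.0, z)  # piecewise linear, kink at 0\n",
     "\n",
     "def np_elu(z, alpha=1.0):\n",
     "    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))  # smooth at 0 when alpha == 1\n",
     "\n",
     "def np_selu(z):\n",
     "    # fixed constants from the SELU paper\n",
     "    alpha, scale = 1.6732632423543772, 1.0507009873554805\n",
     "    return scale * np.where(z > 0, z, alpha * (np.exp(z) - 1.0))\n",
     "\n",
     "z = np.linspace(-2, 2, 5)\n",
     "print(np_sigmoid(z), np_relu(z), np_elu(z), np_selu(z))"
    ]
   },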
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "# Import a custom activation-function class\n",
     "# Note: within one kernel session, a class name is only imported once;\n",
     "# if the class body changes, the kernel must be restarted\n",
     "# Keras ships its own activations, so the custom class is not needed\n",
     "# from ActivationFunction import ActivationFunction\n",
     "# af = ActivationFunction()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Inputs and labels"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Placeholders receive their values via feed_dict at run time; they are not trainable variables."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create the model\n",
    "x = tf.placeholder(tf.float32, [None, 784])\n",
     "# Placeholder for the ground-truth labels (used by the loss and optimizer)\n",
    "y = tf.placeholder(tf.float32, [None, 10])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Reshape the flat vectors into 2-D images for convolution\n",
    "with tf.name_scope('reshape'):\n",
    "    x_image = tf.reshape(x, [-1, 28, 28, 1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Getting to know Keras"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "1. Overview (in Chinese): https://blog.csdn.net/gjq246/article/details/72638343\n",
     "2. Initializers: http://keras-cn.readthedocs.io/en/latest/other/initializations/\n",
     "3. Regularizers: http://keras-cn.readthedocs.io/en/latest/other/regularizers/\n",
     "4. Activations: http://keras-cn.readthedocs.io/en/latest/other/activations/\n",
     "5. ……"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Function definitions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Network layers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Convolution, pooling, layer wiring, activation, weight regularization, ……"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
     "def kerasNetNN(x,\n",
     "               filters=np.array([32,64]),\n",
     "               kernel_size=np.array([[5,5],[5,5]]),\n",
     "               strides=np.array([[1,1],[1,1]]),\n",
     "               padding='same',\n",
     "               kernel_initializer = 'glorot_uniform',\n",
     "               bias_initializer='zeros',\n",
     "               pool_size = [2,2],\n",
     "               activation = 'relu',\n",
     "               regular = 'l2',regular_lambda = 0.0001,\n",
     "               input_shape = np.array([28,28,1]),\n",
     "               units = 1000,\n",
     "               numClasses = 10\n",
     "               ):\n",
     "#----------------------------------------------------\n",
     "# 1 Purpose:\n",
     "#   Wire up the layers (conv, pooling, dense) with activations; supports multiple hidden layers\n",
     "# 2 Parameters:\n",
     "#   filters: array, the number of convolution kernels per layer (i.e. output channels)\n",
     "#   kernel_initializer: e.g. 'RandomNormal','RandomUniform','TruncatedNormal','VarianceScaling','Orthogonal','lecun_uniform'\n",
     "#   activation: the activation function, e.g. 'relu','selu','elu'\n",
     "#   returns: the output tensor\n",
     "#----------------------------------------------------\n",
     "# validate the inputs\n",
     "    assert len(filters) >= 1, 'filters must contain at least one entry!'\n",
     "    assert len(filters) == len(kernel_size), 'kernel_size and filters must have the same length!'\n",
     "#---------------------------------\n",
     "    hidden_layer_num = len(filters)\n",
     "    \n",
     "    if(regular == 'l1'):\n",
     "        regularizer = keras.regularizers.l1(regular_lambda)\n",
     "    elif(regular == 'l2'):\n",
     "        regularizer = keras.regularizers.l2(regular_lambda)\n",
     "    elif(regular == 'l1_l2'):\n",
     "        regularizer = keras.regularizers.l1_l2(l1=regular_lambda, l2=regular_lambda)\n",
     "    else:\n",
     "        regularizer = None\n",
     "    #--------------------\n",
     "    for i in range(hidden_layer_num):\n",
     "        # convolution\n",
     "        if i == 0 :\n",
     "            net = Conv2D(filters = filters[i], kernel_size=kernel_size[i], strides=strides[i],padding=padding,\n",
     "                         kernel_initializer = kernel_initializer,bias_initializer=bias_initializer,\n",
     "                         kernel_regularizer=regularizer,bias_regularizer=regularizer,activity_regularizer=regularizer,\n",
     "                         activation=activation,input_shape=input_shape)(x)\n",
     "        else:\n",
     "            net = Conv2D(filters = filters[i], kernel_size=kernel_size[i], strides=strides[i],padding=padding,\n",
     "                         kernel_initializer = kernel_initializer,bias_initializer=bias_initializer,\n",
     "                         kernel_regularizer=regularizer,bias_regularizer=regularizer,activity_regularizer=regularizer,\n",
     "                         activation=activation)(net)\n",
     "        # pooling\n",
     "        net = MaxPooling2D(pool_size=pool_size)(net)\n",
     "    #--------------------\n",
     "    net = Flatten()(net)\n",
     "    net = Dense(units=units, activation=activation)(net)\n",
     "    net = Dense(units=numClasses,activation='softmax')(net)\n",
     "    return net\n",
     "#----------------------------------------------------"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
     "# Network-layer test\n",
    "# y_pred = kerasNetNN(x=x_image)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Loss function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
     "def lossNN(y_pred):\n",
     "#----------------------------------------------------\n",
     "# 1 Purpose:\n",
     "#   Define the loss function and how it is computed\n",
     "# 2 Parameters:\n",
     "#   y_pred: the predicted values\n",
     "#----------------------------------------------------\n",
     "    entropy = tf.reduce_mean(categorical_crossentropy(y, y_pred))\n",
     "    #-----------------------\n",
     "    cross_entropy = tf.cast(entropy,tf.float32)\n",
     "    return cross_entropy\n",
     "#----------------------------------------------------"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Loss-function test\n",
    "# cross_entropy = lossNN(y_pred)\n",
    "# cross_entropy.dtype"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Regularization function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "References:\n",
     "1. Official regularization losses:\n",
     "  * https://www.tensorflow.org/api_guides/python/nn#Losses\n",
     "  * https://blog.csdn.net/JNingWei/article/details/77839385\n",
     "2. The trainable-variables collection:\n",
     "  * https://blog.csdn.net/chaowang1994/article/details/80388990"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
     "def regularNN(regular = 'l2',regular_lambda = 0.0001):\n",
     "#----------------------------------------------------\n",
     "# 1 Purpose:\n",
     "#   Add a global regularization term\n",
     "# 2 Parameters:\n",
     "#   regular: regularizer; supports 'l2', 'log_poisson_loss' and None; 'l1' is not supported\n",
     "#----------------------------------------------------\n",
     "# Variant 1:\n",
     "# loss_regular = tf.add_n( [tf.nn.l2_loss(w) for w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)] )\n",
     "# To leave the biases unregularized:\n",
     "# lossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars if 'bias' not in v.name ]) * 0.001\n",
     "#     gv = tf.trainable_variables() \n",
     "    gv = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)\n",
     "    if(regular == 'l2'):\n",
     "        loss_regular = tf.add_n([ tf.nn.l2_loss(tf.cast(v,tf.float32)) for v in gv ]) * regular_lambda \n",
     "    elif(regular == 'log_poisson_loss'):\n",
     "        # TODO: the log_input argument still needs to be worked out; fall back to l2 for now\n",
     "        loss_regular = tf.add_n([ tf.nn.l2_loss(tf.cast(v,tf.float32)) for v in gv ]) * regular_lambda \n",
     "    else:\n",
     "        loss_regular = tf.Variable(0.0)\n",
     "    return loss_regular\n",
     "#----------------------------------------------------"
   ]
  },
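   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "For one weight tensor, `tf.nn.l2_loss(v)` computes `sum(v**2) / 2`; the `'l2'` branch above then sums these over all trainable variables and scales by `regular_lambda`. A plain NumPy check of that arithmetic, with made-up example values:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# NumPy check of the L2 term used above: sum over variables of sum(v**2)/2, scaled by lambda\n",
     "import numpy as np\n",
     "\n",
     "def np_l2_regular(variables, regular_lambda=0.0001):\n",
     "    return sum(np.sum(v ** 2) / 2.0 for v in variables) * regular_lambda\n",
     "\n",
     "ws = [np.array([[1.0, -2.0], [3.0, 0.0]]), np.array([0.5, 0.5])]\n",
     "print(np_l2_regular(ws, regular_lambda=0.1))  # (14/2 + 0.5/2) * 0.1 = 0.725"
    ]
   },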
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Regularization test\n",
    "# regular = ['l1','l2','log_poisson_loss']\n",
    "# regular_lambda = 0.0001\n",
    "# cost_regular = regularNN(regular[2],regular_lambda)\n",
    "# cost_regular.dtype"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Learning-rate function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "References:\n",
     "1. https://blog.csdn.net/zSean/article/details/75196092  (theory)\n",
     "2. https://blog.csdn.net/uestc_c2_403/article/details/72213286  (with plots)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
     "def learningRateNN(learning_rate = 0.3,global_step=tf.Variable(0),decay_steps = 150,decay_rate = 0.98,staircase=True):\n",
     "#----------------------------------------------------\n",
     "# 1 Purpose:\n",
     "#   Define an exponentially decaying learning rate for gradient descent\n",
     "# 2 Parameters: see below\n",
     "#----------------------------------------------------\n",
     "    #-------------- exponentially decaying learning rate ---------------------\n",
     "    # tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False)\n",
     "    # Formula: decayed_learning_rate = learning_rate * decay_rate ** (global_step / decay_steps)\n",
     "    # learning_rate: the initial learning rate\n",
     "    # global_step: must be a tf variable, ideally starting at 0\n",
     "    # decay_steps: the number of steps between decays\n",
     "    # decay_rate: the decay factor\n",
     "    # staircase: defaults to False; when True, (global_step/decay_steps) is truncated to an integer,\n",
     "    #            giving stepwise decay, which is usually the better choice\n",
     "    eita = tf.train.exponential_decay(learning_rate,global_step,decay_steps,decay_rate,staircase) \n",
     "    return eita"
   ]
  },
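   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The decay formula from the comments can be reproduced directly in NumPy (illustrative only; `tf.train.exponential_decay` computes this on the graph):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Plain NumPy version of the exponential-decay formula used above\n",
     "import numpy as np\n",
     "\n",
     "def np_exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=True):\n",
     "    p = global_step / float(decay_steps)\n",
     "    if staircase:\n",
     "        p = np.floor(p)  # stepwise: the rate only drops every decay_steps steps\n",
     "    return learning_rate * decay_rate ** p\n",
     "\n",
     "print(np_exponential_decay(0.3, 149, 150, 0.98))  # still 0.3 within the first window\n",
     "print(np_exponential_decay(0.3, 150, 150, 0.98))  # 0.3 * 0.98 = 0.294"
    ]
   },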
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Learning-rate test\n",
    "# eita =  learningRateNN(learning_rate = 0.3,global_step=tf.Variable(0),decay_steps = 150,decay_rate = 0.98,staircase=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Optimizer function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "1. On the (somewhat confusing) `global_step` variable:\n",
     "  * https://blog.csdn.net/leviopku/article/details/78508951  \n",
     "2. Official optimizer docs:\n",
     "  * training module: https://www.tensorflow.org/api_guides/python/train  \n",
     "  * contrib module: https://www.tensorflow.org/api_docs/python/tf/contrib/opt\n",
    "  * https://www.cnblogs.com/wuzhitj/p/6648641.html"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
     "def optimizerNN(eita,cost,optimizer = 'GD',momentum = 0.9, global_step=None):\n",
     "#----------------------------------------------------\n",
     "# 1 Purpose:\n",
     "#   Choose the optimization method; only the tf.train module is used here (it has plenty)\n",
     "# 2 Parameters: see below\n",
     "#----------------------------------------------------\n",
    "    if optimizer == 'GD' :\n",
    "        opti = tf.train.GradientDescentOptimizer(learning_rate = eita)\n",
    "    elif optimizer == 'Adam' :\n",
    "        opti = tf.train.AdamOptimizer(learning_rate = eita,epsilon=1e-08)\n",
    "    elif optimizer == 'Adadelta' :\n",
    "        opti = tf.train.AdadeltaOptimizer(learning_rate = eita,rho=0.95, epsilon=1e-08)\n",
    "    elif optimizer == 'AdagradDA' :\n",
    "        opti = tf.train.AdagradDAOptimizer(learning_rate = eita,initial_accumulator_value=0.1)\n",
    "    elif optimizer == 'Momentum' :\n",
    "        opti = tf.train.MomentumOptimizer(learning_rate = eita,momentum=momentum)\n",
    "    elif optimizer == 'NesterovMomentum' :\n",
    "        opti = tf.train.MomentumOptimizer(learning_rate = eita,momentum=momentum, use_nesterov=True)        \n",
    "    elif optimizer == 'Ftrl' :\n",
    "        opti = tf.train.FtrlOptimizer(learning_rate = eita)\n",
    "    elif optimizer == 'ProximalGradientDescent' :\n",
    "        opti = tf.train.ProximalGradientDescentOptimizer(learning_rate = eita)\n",
    "    elif optimizer == 'ProximalAdagrad' :\n",
    "        opti = tf.train.ProximalAdagradOptimizer(learning_rate = eita)\n",
    "    elif optimizer == 'RMSProp' :\n",
    "        opti = tf.train.RMSPropOptimizer(learning_rate = eita) \n",
     "    else :  # default to gradient descent\n",
     "        opti = tf.train.GradientDescentOptimizer(learning_rate = eita)   \n",
     "    #-----------------------\n",
     "    # eita already decays exponentially, so global_step=None is sufficient here\n",
     "    # (if a global_step variable were passed, each minimize call would increment it by 1)\n",
     "    # global_step = tf.train.get_or_create_global_step()\n",
     "    train_step = opti.minimize(cost,global_step=global_step) \n",
    "    return train_step"
   ]
  },
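   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The difference between 'GD' and the two Momentum variants above comes down to the update rule. A hand-rolled NumPy sketch of one update step (illustrative only; the tf.train optimizers implement this internally):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# One parameter update for plain gradient descent vs. classical momentum\n",
     "import numpy as np\n",
     "\n",
     "def gd_step(w, grad, eita):\n",
     "    return w - eita * grad\n",
     "\n",
     "def momentum_step(w, v, grad, eita, momentum=0.9):\n",
     "    v = momentum * v + grad  # accumulate a velocity\n",
     "    return w - eita * v, v   # then step along it\n",
     "\n",
     "w = np.array([1.0, -1.0])\n",
     "g = np.array([0.1, 0.2])\n",
     "print(gd_step(w, g, eita=0.5))\n",
     "w2, v = momentum_step(w, np.zeros_like(w), g, eita=0.5)\n",
     "print(w2, v)  # the first step equals plain GD since the velocity starts at 0"
    ]
   },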
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Optimizer test\n",
    "# train_step = optimizerNN(eita = 0.9,cost = tf.Variable(0.0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Training function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "def trainingNN(trainingIterations = 1001,batchSize = 100,\n",
     "#                # fully-connected variant parameters (unused)\n",
     "#                numHiddenUnits1 = 200,numHiddenUnits2 = 80,inputSize = 784,numClasses = 10, \n",
     "               # keras layer parameters\n",
    "               filters=np.array([32,64]),\n",
    "               kernel_size=np.array([[5,5],[5,5]]),\n",
    "               strides=np.array([[1,1],[1,1]]),\n",
    "               padding='same',\n",
    "               kernel_initializer = 'lecun_uniform',\n",
    "               bias_initializer='zeros',\n",
    "               pool_size=[2,2],\n",
    "               activation = 'relu',\n",
    "               regular = 'l2',regular_lambda = 7e-5,\n",
    "               input_shape = np.array([28,28,1]),\n",
    "               units = 1000,\n",
    "               numClasses = 10,\n",
     "               # learning-rate parameters\n",
     "               learning_rate = 0.01,global_step=tf.Variable(0),decay_steps = 100,decay_rate = 0.98,staircase=True,\n",
     "               # optimizer parameters\n",
     "               optimizer = 'GD',momentum = 0.9,\n",
     "               # loss-function parameters\n",
     "               loss_calculation = 'softmax',\n",
     "               # monitoring parameters\n",
     "               print_step_num = 100\n",
    "               ):\n",
    "#----------------------------------------------------\n",
     "# 1 Purpose:\n",
     "#   Call the network, regularization, loss, learning-rate and optimizer functions\n",
     "#   to run training, compute accuracy and make predictions\n",
     "# 2 Parameters:\n",
     "#   See the individual functions.\n",
     "#----------------------------------------------------\n",
     "    # network definition: depth, conv params, initialization, activation, regularization\n",
    "    y_pred = kerasNetNN(\n",
    "               x           = x_image,\n",
    "               filters     = filters,\n",
    "               kernel_size = kernel_size,\n",
    "               strides     = strides,\n",
    "               padding     = padding,\n",
    "               kernel_initializer = kernel_initializer,\n",
    "               bias_initializer   = bias_initializer,\n",
    "               pool_size          = pool_size,\n",
    "               activation     = activation,\n",
    "               regular        = regular,\n",
    "               regular_lambda = regular_lambda,\n",
    "               input_shape    = input_shape,\n",
    "               units          = units,\n",
    "               numClasses     = numClasses)\n",
     "    #-----------------------\n",
     "    # regularization: global\n",
     "    cost_regular = regularNN(regular=regular,regular_lambda=regular_lambda)\n",
     "    #-----------------------\n",
     "    # loss: cross-entropy (or mse)\n",
     "    cross_entropy = lossNN(y_pred = y_pred)\n",
     "    #-----------------------\n",
     "    # total cost\n",
     "    total_loss = tf.reduce_mean(cross_entropy + cost_regular)\n",
     "    # learning rate\n",
     "#     global_step = tf.Variable(0)\n",
     "    eita =  learningRateNN(learning_rate,global_step,decay_steps,decay_rate,staircase)\n",
     "    #-----------------------\n",
     "    # optimization: gradient descent (or a variant)\n",
    "    opti = optimizerNN(eita=eita,cost = total_loss,optimizer = optimizer,momentum = momentum)    \n",
    "    #-----------------------  \n",
     "    # accuracy\n",
     "    correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y, 1))\n",
     "    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
     "    #-----------------------\n",
     "    # create the session\n",
     "    sess = tf.Session()\n",
     "    K.set_session(sess)\n",
     "    # initialize all variables\n",
    "    init_op = tf.global_variables_initializer()\n",
    "    sess.run(init_op)\n",
    "    #-----------------------\n",
    "    for i in range(trainingIterations): #trainingIterations\n",
    "        batch_xs, batch_ys = mnist.train.next_batch(batchSize)\n",
    "        batch_eita = sess.run(eita,feed_dict={global_step: i}) \n",
    "        feed_dict={x: batch_xs, y: batch_ys}\n",
    "        _, batch_cross_entropy, batch_cost_regular, batch_total_loss = sess.run(\n",
    "                   [opti, cross_entropy, cost_regular, total_loss], \n",
    "                   feed_dict=feed_dict)\n",
    "\n",
    "        if (i%print_step_num == 0) or (i == (trainingIterations-1)):\n",
    "            trainAccuracy = accuracy.eval(session=sess, feed_dict=feed_dict)\n",
    "            testAccuracy  = accuracy.eval(session=sess, feed_dict = {x: mnist.test.images,y: mnist.test.labels})\n",
    "            print('step %5d, entropy loss: %1.6f, regular loss: %1.6f, total loss: %1.6f,eita=%1.5f,trainAccuracy=%1.2f,testAccuracy=%1.4f' % \n",
    "                    (i, batch_cross_entropy, batch_cost_regular, batch_total_loss,batch_eita,trainAccuracy,testAccuracy))\n",
    "    #-------------\n",
    "#     acc = accuracy.eval(session=sess, feed_dict = {x: mnist.test.images,y: mnist.test.labels})\n",
    "#     print(\"testing accuracy=%1.4f\"%(acc))\n",
    "#----------------------------------------------------"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Measure the training time\n",
     "time_test = False\n",
     "if time_test:\n",
     "    start = time.time()\n",
     "    trainingNN(# training-iteration parameters\n",
     "               trainingIterations = 1001,batchSize = 100,\n",
     "               #-----------------------------------------------\n",
     "               # keras layer parameters\n",
     "               # convolution\n",
     "               filters=np.array([32,64]),\n",
     "               kernel_size=np.array([[5,5],[5,5]]),\n",
     "               strides=np.array([[1,1],[1,1]]),\n",
     "               padding='same',\n",
     "               # initialization\n",
     "               kernel_initializer = 'lecun_uniform',\n",
     "               bias_initializer='zeros',\n",
     "               # pooling\n",
     "               pool_size=[2,2],\n",
     "               # activation parameters\n",
     "               activation = 'relu',\n",
     "               # regularization parameters\n",
     "               regular = 'l2',regular_lambda = 7e-5,\n",
     "               # structure\n",
     "               input_shape = np.array([28,28,1]),\n",
     "               units = 1000,\n",
     "               numClasses = 10,\n",
     "               #-----------------------------------------------\n",
     "               # learning-rate parameters\n",
     "               learning_rate = 0.3,global_step=tf.Variable(0),decay_steps = 100,decay_rate = 0.98,staircase=True,\n",
     "               #-----------------------------------------------\n",
     "               # optimizer parameters\n",
     "               optimizer = 'GD',momentum = 0.9,\n",
     "               #-----------------------------------------------\n",
     "               # loss-function parameters\n",
     "               loss_calculation = 'softmax',\n",
     "               # monitoring parameters\n",
     "               print_step_num = 100\n",
     "               )\n",
     "    end = time.time()\n",
     "    print(\"training time: {} s\".format(int(end-start)))\n",
     "#-----------------------------------------------"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Enable/disable the individual training runs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "training_term = True\n",
    "# training_term = False"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Training parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
     "# number of classes\n",
     "numClasses = 10 \n",
     "# feature dimensionality\n",
     "inputSize = 784 \n",
     "# number of passes over the training set\n",
     "epochs = 20\n",
     "# number of training iterations\n",
     "trainingIterations = 11000 #int(trainimg.shape[0]*epochs/batchSize)\n",
     "# print(\"trainingIterations = %d\"% trainingIterations)"
   ]
  },
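   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The commented formula reproduces the 11000 iterations used here: with 55000 training samples, 20 epochs and a batch size of 100:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Worked check of the commented formula: iterations = samples * epochs / batchSize\n",
     "num_train, n_epochs, batch = 55000, 20, 100\n",
     "iters = int(num_train * n_epochs / batch)\n",
     "print(iters)  # 11000"
    ]
   },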
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Monitored quantities"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The printed quantities were chosen because they change over the iterations and strongly affect performance:\n",
     "1. step: iteration index, marking training progress\n",
     "2. entropy loss: training loss, to watch for under- or overfitting\n",
     "3. regular loss: regularization loss, to watch for under- or overfitting\n",
     "4. total loss: total loss; compare it against entropy loss and regular loss, again watching for under- or overfitting\n",
     "5. eita: learning rate, to watch the step size and the magnitude of the parameter\n",
     "6. trainAccuracy: training accuracy, to watch its evolution\n",
     "7. testAccuracy: test accuracy, to watch fit and predictive performance"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Effect of training parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Using batchSize as an example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
     "# Set the number of samples per batch, then train\n",
     "# batchSize   = [10,100]\n",
     "# for i in range(len(batchSize)):\n",
     "#     print('batchSize = ' ,batchSize[i])\n",
     "#     trainingNN(batchSize = batchSize[i])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Summary:\n",
     "* Overall, revisiting the training samples many times helps accuracy\n",
     "* More iterations are not always better: too few underfits, too many overfits; tune against the parameters and the observed curves\n",
     "* The more complex the model, the longer it must train for good results; balance quality against time"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Effect of initialization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "InitializationMethod =  glorot_uniform\n",
      "step     0, entropy loss: 2.303738, regular loss: 0.055361, total loss: 2.359098,eita=0.01000,trainAccuracy=0.12,testAccuracy=0.1381\n",
      "step    50, entropy loss: 2.185931, regular loss: 0.055363, total loss: 2.241293,eita=0.01000,trainAccuracy=0.53,testAccuracy=0.4135\n",
      "step   100, entropy loss: 1.902498, regular loss: 0.055386, total loss: 1.957884,eita=0.00980,trainAccuracy=0.60,testAccuracy=0.6801\n",
      "step   150, entropy loss: 0.942529, regular loss: 0.055463, total loss: 0.997992,eita=0.00980,trainAccuracy=0.88,testAccuracy=0.8036\n",
      "step   200, entropy loss: 0.510322, regular loss: 0.055533, total loss: 0.565856,eita=0.00960,trainAccuracy=0.92,testAccuracy=0.8452\n",
      "step   250, entropy loss: 0.459922, regular loss: 0.055574, total loss: 0.515496,eita=0.00960,trainAccuracy=0.88,testAccuracy=0.8895\n",
      "step   300, entropy loss: 0.351729, regular loss: 0.055604, total loss: 0.407332,eita=0.00941,trainAccuracy=0.91,testAccuracy=0.8645\n",
      "InitializationMethod =  random_normal\n",
      "step     0, entropy loss: 2.304384, regular loss: 0.113632, total loss: 2.418016,eita=0.01000,trainAccuracy=0.16,testAccuracy=0.1238\n",
      "step    50, entropy loss: 2.084194, regular loss: 0.113636, total loss: 2.197830,eita=0.01000,trainAccuracy=0.61,testAccuracy=0.6151\n",
      "step   100, entropy loss: 1.436329, regular loss: 0.113688, total loss: 1.550016,eita=0.00980,trainAccuracy=0.75,testAccuracy=0.7622\n",
      "step   150, entropy loss: 0.745896, regular loss: 0.113765, total loss: 0.859661,eita=0.00980,trainAccuracy=0.81,testAccuracy=0.8250\n",
      "step   200, entropy loss: 0.403096, regular loss: 0.113809, total loss: 0.516905,eita=0.00960,trainAccuracy=0.92,testAccuracy=0.8564\n",
      "step   250, entropy loss: 0.446674, regular loss: 0.113837, total loss: 0.560511,eita=0.00960,trainAccuracy=0.91,testAccuracy=0.8849\n",
      "step   300, entropy loss: 0.344627, regular loss: 0.113857, total loss: 0.458484,eita=0.00941,trainAccuracy=0.93,testAccuracy=0.9062\n",
      "InitializationMethod =  lecun_uniform\n",
      "step     0, entropy loss: 2.357577, regular loss: 0.170735, total loss: 2.528312,eita=0.01000,trainAccuracy=0.14,testAccuracy=0.1227\n",
      "step    50, entropy loss: 0.974975, regular loss: 0.170790, total loss: 1.145765,eita=0.01000,trainAccuracy=0.78,testAccuracy=0.8228\n",
      "step   100, entropy loss: 0.683842, regular loss: 0.170850, total loss: 0.854692,eita=0.00980,trainAccuracy=0.91,testAccuracy=0.8339\n",
      "step   150, entropy loss: 0.461978, regular loss: 0.170883, total loss: 0.632861,eita=0.00980,trainAccuracy=0.86,testAccuracy=0.9008\n",
      "step   200, entropy loss: 0.375442, regular loss: 0.170901, total loss: 0.546343,eita=0.00960,trainAccuracy=0.93,testAccuracy=0.9058\n",
      "step   250, entropy loss: 0.303551, regular loss: 0.170914, total loss: 0.474465,eita=0.00960,trainAccuracy=0.93,testAccuracy=0.9032\n",
      "step   300, entropy loss: 0.178714, regular loss: 0.170925, total loss: 0.349639,eita=0.00941,trainAccuracy=0.99,testAccuracy=0.9289\n"
     ]
    }
   ],
   "source": [
    "#定义初始化方法，并训练\n",
    "#keras预定义函数['RandomNormal','RandomUniform','TruncatedNormal','VarianceScaling','Orthogonal','lecun_uniform']\n",
    "# kernel_initializer = ['glorot_uniform','random_normal','random_uniform','TruncatedNormal','VarianceScaling','Orthogonal','lecun_uniform']\n",
    "# 太多了，选3个训练\n",
    "if training_term == True :\n",
    "    InitializationMethod = ['glorot_uniform','random_normal','lecun_uniform'] \n",
    "    for i in range(len(InitializationMethod)):\n",
    "        print('InitializationMethod = ' ,InitializationMethod[i])\n",
    "        trainingNN(kernel_initializer = InitializationMethod[i],trainingIterations = 301,print_step_num = 50)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结：\n",
    "* 1 glorot_uniform：作为默认keras的默认初始化方法，表现中规中距\n",
    "* 2 random_normal ：效果有提升，收敛更平稳\n",
    "* 3 lecun_uniform：大神作品果然名不虚传，从loss看，entropy loss起到了关键作用。具体原因还要看论文原理。收敛最快，效果最好，就选这个。"
   ]
  },
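  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, the sampling bound `lecun_uniform` uses is simple to state: weights are drawn from U(-limit, limit) with limit = sqrt(3 / fan_in). A minimal pure-Python sketch of the bound (independent of the notebook's `trainingNN`):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def lecun_uniform_limit(fan_in):\n",
    "    # lecun_uniform samples from U(-limit, limit) with limit = sqrt(3 / fan_in)\n",
    "    return math.sqrt(3.0 / fan_in)\n",
    "\n",
    "# a 5x5 kernel over 16 input channels has fan_in = 5 * 5 * 16 = 400\n",
    "print(lecun_uniform_limit(400))\n",
    "```"
   ]
  },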
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 网络结构的影响"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "卷积、池化"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 127,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "net 0 : filters =  [32 64] , kernel_size =  [[5 5]\n",
      " [5 5]] , strides =  [[1 1]\n",
      " [1 1]]\n",
      "2\n",
      "2\n",
      "step     0, entropy loss: 2.286559, regular loss: 1.169547, total loss: 3.456106,eita=0.01000,trainAccuracy=0.14,testAccuracy=0.1506\n",
      "step    50, entropy loss: 1.193821, regular loss: 1.169527, total loss: 2.363347,eita=0.01000,trainAccuracy=0.79,testAccuracy=0.7724\n",
      "step   100, entropy loss: 0.540454, regular loss: 1.169526, total loss: 1.709980,eita=0.00980,trainAccuracy=0.91,testAccuracy=0.8566\n",
      "step   150, entropy loss: 0.417946, regular loss: 1.169496, total loss: 1.587441,eita=0.00980,trainAccuracy=0.94,testAccuracy=0.8814\n",
      "step   200, entropy loss: 0.362834, regular loss: 1.169447, total loss: 1.532281,eita=0.00960,trainAccuracy=0.90,testAccuracy=0.9029\n",
      "step   250, entropy loss: 0.340368, regular loss: 1.169392, total loss: 1.509760,eita=0.00960,trainAccuracy=0.93,testAccuracy=0.9130\n",
      "step   300, entropy loss: 0.458305, regular loss: 1.169334, total loss: 1.627638,eita=0.00941,trainAccuracy=0.89,testAccuracy=0.9113\n",
      "net 1 : filters =  [16 32 64] , kernel_size =  [[5 5]\n",
      " [5 5]\n",
      " [5 5]] , strides =  [[2 2]\n",
      " [2 2]\n",
      " [1 1]]\n",
      "3\n",
      "3\n",
      "step     0, entropy loss: 2.302821, regular loss: 1.178348, total loss: 3.481168,eita=0.30000,trainAccuracy=0.15,testAccuracy=0.1032\n",
      "step    50, entropy loss: 1.657056, regular loss: 1.176475, total loss: 2.833531,eita=0.30000,trainAccuracy=0.55,testAccuracy=0.4774\n",
      "step   100, entropy loss: 0.585684, regular loss: 1.175003, total loss: 1.760686,eita=0.29400,trainAccuracy=0.86,testAccuracy=0.7889\n",
      "step   150, entropy loss: 0.448587, regular loss: 1.173174, total loss: 1.621760,eita=0.29400,trainAccuracy=0.83,testAccuracy=0.8132\n",
      "step   200, entropy loss: 0.091689, regular loss: 1.171090, total loss: 1.262779,eita=0.28812,trainAccuracy=1.00,testAccuracy=0.9540\n",
      "step   250, entropy loss: 0.140695, regular loss: 1.168892, total loss: 1.309587,eita=0.28812,trainAccuracy=0.99,testAccuracy=0.9568\n",
      "step   300, entropy loss: 0.338985, regular loss: 1.166616, total loss: 1.505601,eita=0.28236,trainAccuracy=0.96,testAccuracy=0.9517\n"
     ]
    }
   ],
   "source": [
    "#设置网络结构，并训练\n",
    "if training_term == True :\n",
    "    # 网络结构1: 2层卷积\n",
    "    filters1=np.array([32,   #卷积第1层\n",
    "                      64]   #卷积第2层\n",
    "                    )       #……\n",
    "    kernel_size1=np.array([[5,5], #卷积第1层\n",
    "                          [5,5]] #卷积第2层\n",
    "                        )        #……\n",
    "    strides1=np.array([[1,1],  #卷积第1层\n",
    "                      [1,1]]  #卷积第2层\n",
    "                    )         #……\n",
    "    learning_rate1 = 0.01\n",
    "    # 网络结构2： 3层卷积\n",
    "    filters2=np.array([16,   #卷积第1层\n",
    "                      32,\n",
    "                      64]   \n",
    "                    )       #……\n",
    "    kernel_size2=np.array([[5,5], #卷积第1层\n",
    "                          [5,5],\n",
    "                          [5,5]] #\n",
    "                        )        #……\n",
    "    strides2=np.array([[2,2],  #卷积第1层\n",
    "                      [2,2],\n",
    "                      [1,1]]  #\n",
    "                    )         #……\n",
    "    learning_rate2 = 0.3\n",
    "    # 网络组合:暂时手写\n",
    "#     filters     = np.vstack((np.expand_dims(filters1, axis=0),np.expand_dims(filters2, axis=0)))\n",
    "#     kernel_size = np.vstack((np.expand_dims(kernel_size1, axis=0),np.expand_dims(kernel_size2, axis=0)))\n",
    "#     strides     = np.vstack((np.expand_dims(strides1, axis=0),np.expand_dims(strides2, axis=0)))\n",
    "    ##暂定公用，简化训练\n",
    "    pool_size=np.array([2,2]) \n",
    "    dense_units = 1000 \n",
    "    #训练\n",
    "    for i in range(2):\n",
    "        if i == 0:\n",
    "            filters     = filters1\n",
    "            kernel_size = kernel_size1\n",
    "            strides     = strides1\n",
    "            learning_rate = learning_rate1\n",
    "        elif i == 1:\n",
    "            filters     = filters2\n",
    "            kernel_size = kernel_size2\n",
    "            strides     = strides2\n",
    "            learning_rate = learning_rate2\n",
    "        print('net',i,': filters = ',filters,', kernel_size = ',kernel_size,', strides = ',strides)\n",
    "        trainingNN(filters = filters, kernel_size = kernel_size,strides=strides,pool_size=pool_size,units = dense_units,\n",
    "                   learning_rate = learning_rate,\n",
    "                   trainingIterations = 301,print_step_num = 50)\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结：\n",
    "1. 3层网络比2层网络效果好太多，从entropy、regular、total三个损失和预测精度上，都很明显\n",
    "2. 学习率初值由0.01改为0.3，复杂网络结构更需要正则、学习率、迭代次数等参数配合，才能出效果。"
   ]
  },
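  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick way to see why the strided 3-layer net stays affordable: with `padding='same'`, each conv layer shrinks the feature map by a factor of its stride, out = ceil(in / stride). A sketch that tracks only the conv strides of architecture 2 (pooling layers deliberately left aside):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def same_out(size, stride):\n",
    "    # spatial output size of a conv layer with padding='same'\n",
    "    return math.ceil(size / stride)\n",
    "\n",
    "size = 28  # MNIST input width/height\n",
    "for stride in (2, 2, 1):  # conv strides of architecture 2\n",
    "    size = same_out(size, stride)\n",
    "print(size)  # 28 -> 14 -> 7 -> 7\n",
    "```"
   ]
  },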
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 激活函数的影响"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "activationFunction =  relu\n",
      "step     0, entropy loss: 2.315694, regular loss: 0.810987, total loss: 3.126681,eita=0.01000,trainAccuracy=0.20,testAccuracy=0.1618\n",
      "step    50, entropy loss: 1.097668, regular loss: 0.810996, total loss: 1.908663,eita=0.01000,trainAccuracy=0.80,testAccuracy=0.7991\n",
      "step   100, entropy loss: 0.501487, regular loss: 0.811014, total loss: 1.312501,eita=0.00980,trainAccuracy=0.88,testAccuracy=0.8436\n",
      "step   150, entropy loss: 0.544240, regular loss: 0.811005, total loss: 1.355245,eita=0.00980,trainAccuracy=0.90,testAccuracy=0.8952\n",
      "step   200, entropy loss: 0.370791, regular loss: 0.810983, total loss: 1.181774,eita=0.00960,trainAccuracy=0.90,testAccuracy=0.8933\n",
      "step   250, entropy loss: 0.215439, regular loss: 0.810953, total loss: 1.026392,eita=0.00960,trainAccuracy=0.96,testAccuracy=0.9014\n",
      "step   300, entropy loss: 0.373012, regular loss: 0.810921, total loss: 1.183933,eita=0.00941,trainAccuracy=0.89,testAccuracy=0.9019\n",
      "activationFunction =  selu\n",
      "step     0, entropy loss: 2.799730, regular loss: 0.868080, total loss: 3.667810,eita=0.01000,trainAccuracy=0.18,testAccuracy=0.1122\n",
      "step    50, entropy loss: 0.508687, regular loss: 0.868072, total loss: 1.376759,eita=0.01000,trainAccuracy=0.88,testAccuracy=0.8757\n",
      "step   100, entropy loss: 0.480867, regular loss: 0.868047, total loss: 1.348914,eita=0.00980,trainAccuracy=0.85,testAccuracy=0.9139\n",
      "step   150, entropy loss: 0.392501, regular loss: 0.868009, total loss: 1.260510,eita=0.00980,trainAccuracy=0.89,testAccuracy=0.9255\n",
      "step   200, entropy loss: 0.402473, regular loss: 0.867968, total loss: 1.270441,eita=0.00960,trainAccuracy=0.91,testAccuracy=0.9306\n",
      "step   250, entropy loss: 0.277631, regular loss: 0.867925, total loss: 1.145556,eita=0.00960,trainAccuracy=0.93,testAccuracy=0.9373\n",
      "step   300, entropy loss: 0.271104, regular loss: 0.867879, total loss: 1.138984,eita=0.00941,trainAccuracy=0.92,testAccuracy=0.9430\n",
      "activationFunction =  elu\n",
      "step     0, entropy loss: 2.423570, regular loss: 0.925250, total loss: 3.348819,eita=0.01000,trainAccuracy=0.16,testAccuracy=0.1462\n",
      "step    50, entropy loss: 0.631970, regular loss: 0.925255, total loss: 1.557226,eita=0.01000,trainAccuracy=0.89,testAccuracy=0.8605\n",
      "step   100, entropy loss: 0.433155, regular loss: 0.925251, total loss: 1.358406,eita=0.00980,trainAccuracy=0.93,testAccuracy=0.8938\n",
      "step   150, entropy loss: 0.407994, regular loss: 0.925224, total loss: 1.333218,eita=0.00980,trainAccuracy=0.91,testAccuracy=0.9154\n",
      "step   200, entropy loss: 0.244179, regular loss: 0.925187, total loss: 1.169366,eita=0.00960,trainAccuracy=0.94,testAccuracy=0.9279\n",
      "step   250, entropy loss: 0.209136, regular loss: 0.925143, total loss: 1.134280,eita=0.00960,trainAccuracy=0.92,testAccuracy=0.9278\n",
      "step   300, entropy loss: 0.250810, regular loss: 0.925098, total loss: 1.175908,eita=0.00941,trainAccuracy=0.95,testAccuracy=0.9355\n"
     ]
    }
   ],
   "source": [
    "#定义激活函数，并训练\n",
    "# keras预定义的函数：\n",
    "# softmax elu selu softplus softsign relu tanh sigmoid hard_sigmoid linear\n",
    "if training_term == True :  \n",
    "    activationFunction = ['relu','selu','elu']\n",
    "    for i in range(len(activationFunction)):\n",
    "        print('activationFunction = ' ,activationFunction[i])\n",
    "        trainingNN(activation = activationFunction[i],trainingIterations = 301,print_step_num = 50)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结：\n",
    "* relu作为基本的激活函数，收敛快，表现稳定\n",
    "* selu收敛最快，斜率最大，配合其他参数例如较小的学习率，发挥selu的效果，选择selu\n",
    "* elu收敛快，且收敛平稳，大多数情况下效果均衡"
   ]
  },
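  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, selu is just a scaled elu with two fixed constants chosen so that activations self-normalize. A pure-Python sketch (constants from the SELU paper; the notebook itself uses Keras's built-in `selu`):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "# fixed SELU constants (Klambauer et al., self-normalizing networks)\n",
    "ALPHA = 1.6732632423543772\n",
    "SCALE = 1.0507009873554805\n",
    "\n",
    "def selu(x):\n",
    "    # SCALE * x for positive inputs, a scaled exponential otherwise\n",
    "    if x > 0:\n",
    "        return SCALE * x\n",
    "    return SCALE * ALPHA * (math.exp(x) - 1.0)\n",
    "\n",
    "print(selu(1.0))   # equals SCALE\n",
    "print(selu(-1.0))  # negative, saturating toward -SCALE * ALPHA\n",
    "```"
   ]
  },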
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 学习率的影响"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "初始学习率和步进下降系数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "learningRate =  0.2 decay_rate =  0.93\n",
      "step     0, entropy loss: 2.327955, regular loss: 0.169758, total loss: 2.497712,eita=0.20000,trainAccuracy=0.16,testAccuracy=0.0980\n",
      "step    50, entropy loss: 0.220993, regular loss: 0.170286, total loss: 0.391279,eita=0.20000,trainAccuracy=0.98,testAccuracy=0.9414\n",
      "step   100, entropy loss: 0.175208, regular loss: 0.170275, total loss: 0.345483,eita=0.18600,trainAccuracy=0.98,testAccuracy=0.9574\n",
      "step   150, entropy loss: 0.079247, regular loss: 0.170174, total loss: 0.249421,eita=0.18600,trainAccuracy=1.00,testAccuracy=0.9676\n",
      "step   200, entropy loss: 0.164147, regular loss: 0.170046, total loss: 0.334194,eita=0.17298,trainAccuracy=0.99,testAccuracy=0.9671\n",
      "step   250, entropy loss: 0.160820, regular loss: 0.169927, total loss: 0.330747,eita=0.17298,trainAccuracy=0.99,testAccuracy=0.9758\n",
      "step   300, entropy loss: 0.091945, regular loss: 0.169776, total loss: 0.261720,eita=0.16087,trainAccuracy=1.00,testAccuracy=0.9799\n",
      "learningRate =  0.2 decay_rate =  0.98\n",
      "step     0, entropy loss: 2.333809, regular loss: 0.226910, total loss: 2.560719,eita=0.20000,trainAccuracy=0.23,testAccuracy=0.1210\n",
      "step    50, entropy loss: 0.309318, regular loss: 0.227292, total loss: 0.536609,eita=0.20000,trainAccuracy=0.95,testAccuracy=0.9356\n",
      "step   100, entropy loss: 0.089072, regular loss: 0.227184, total loss: 0.316256,eita=0.19600,trainAccuracy=1.00,testAccuracy=0.9654\n",
      "step   150, entropy loss: 0.059642, regular loss: 0.227036, total loss: 0.286678,eita=0.19600,trainAccuracy=1.00,testAccuracy=0.9701\n",
      "step   200, entropy loss: 0.251884, regular loss: 0.226840, total loss: 0.478724,eita=0.19208,trainAccuracy=0.96,testAccuracy=0.9708\n",
      "step   250, entropy loss: 0.113431, regular loss: 0.226616, total loss: 0.340047,eita=0.19208,trainAccuracy=0.99,testAccuracy=0.9771\n",
      "step   300, entropy loss: 0.142214, regular loss: 0.226373, total loss: 0.368587,eita=0.18824,trainAccuracy=0.99,testAccuracy=0.9724\n",
      "learningRate =  0.3 decay_rate =  0.93\n",
      "step     0, entropy loss: 2.432359, regular loss: 0.284069, total loss: 2.716428,eita=0.30000,trainAccuracy=0.12,testAccuracy=0.1032\n",
      "step    50, entropy loss: 0.246360, regular loss: 0.285402, total loss: 0.531762,eita=0.30000,trainAccuracy=0.97,testAccuracy=0.9180\n",
      "step   100, entropy loss: 0.130149, regular loss: 0.285138, total loss: 0.415287,eita=0.27900,trainAccuracy=0.98,testAccuracy=0.9570\n",
      "step   150, entropy loss: 0.062825, regular loss: 0.284762, total loss: 0.347587,eita=0.27900,trainAccuracy=1.00,testAccuracy=0.9690\n",
      "step   200, entropy loss: 0.096845, regular loss: 0.284333, total loss: 0.381179,eita=0.25947,trainAccuracy=1.00,testAccuracy=0.9738\n",
      "step   250, entropy loss: 0.063291, regular loss: 0.283886, total loss: 0.347177,eita=0.25947,trainAccuracy=1.00,testAccuracy=0.9752\n",
      "step   300, entropy loss: 0.088249, regular loss: 0.283391, total loss: 0.371639,eita=0.24131,trainAccuracy=0.99,testAccuracy=0.9819\n",
      "learningRate =  0.3 decay_rate =  0.98\n",
      "step     0, entropy loss: 2.292476, regular loss: 0.341232, total loss: 2.633708,eita=0.30000,trainAccuracy=0.30,testAccuracy=0.2102\n",
      "step    50, entropy loss: 0.172375, regular loss: 0.341993, total loss: 0.514368,eita=0.30000,trainAccuracy=0.97,testAccuracy=0.9261\n",
      "step   100, entropy loss: 0.248800, regular loss: 0.341699, total loss: 0.590500,eita=0.29400,trainAccuracy=0.98,testAccuracy=0.9539\n",
      "step   150, entropy loss: 0.039207, regular loss: 0.341159, total loss: 0.380366,eita=0.29400,trainAccuracy=1.00,testAccuracy=0.9693\n",
      "step   200, entropy loss: 0.275024, regular loss: 0.340600, total loss: 0.615625,eita=0.28812,trainAccuracy=0.98,testAccuracy=0.9730\n",
      "step   250, entropy loss: 0.043167, regular loss: 0.339988, total loss: 0.383155,eita=0.28812,trainAccuracy=1.00,testAccuracy=0.9806\n",
      "step   300, entropy loss: 0.033529, regular loss: 0.339387, total loss: 0.372916,eita=0.28236,trainAccuracy=1.00,testAccuracy=0.9837\n"
     ]
    }
   ],
   "source": [
    "#定义学习率，并训练\n",
    "if training_term == True :\n",
    "    start = time.time()\n",
    "    learningRate = [0.2,0.3]\n",
    "    decay_rate   = [0.93,0.98]\n",
    "    for i in range(len(learningRate)):\n",
    "        for j in range(len(decay_rate)):\n",
    "            print('learningRate = ' ,learningRate[i],'decay_rate = ' ,decay_rate[j])\n",
    "            trainingNN(learning_rate = learningRate[i],decay_rate = decay_rate[j],trainingIterations = 301,print_step_num = 50)\n",
    "    end = time.time()\n",
    "    print(\"time elapse:{}\".format(int(end-start)))\n",
    "#-----------------------------------------------"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结：\n",
    "* 学习率是调优的重中之重，与训练迭代参数、批次参数、网络结构、激活函数、正则等关系都很密切，在对数据集有所认识的基础上，调试时间有减少\n",
    "* 初始值：网络结构复杂/批次大时，适于较大值，便于快速快速收敛，同时辅助较大下降系数，避免过拟合；\n",
    "* 下降值：配合初始值使用，即保证一定收敛速度，又不过拟合\n",
    "* 步进decay_steps：调优过程中也是一个重要参数"
   ]
  },
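  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The eita column printed above follows TensorFlow's staircase exponential decay, eita = learning_rate * decay_rate ** (step // decay_steps). A sketch that reproduces the schedule (assuming decay_steps = 100, which matches the printout):\n",
    "\n",
    "```python\n",
    "def staircase_decay(lr0, decay_rate, decay_steps, step):\n",
    "    # tf.train.exponential_decay with staircase=True\n",
    "    return lr0 * decay_rate ** (step // decay_steps)\n",
    "\n",
    "# lr0 = 0.2, decay_rate = 0.93 gives 0.2, 0.186, 0.17298, 0.16087 as above\n",
    "for step in (0, 100, 200, 300):\n",
    "    print(step, round(staircase_decay(0.2, 0.93, 100, step), 5))\n",
    "```"
   ]
  },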
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 正则表达式的影响"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "regular =  l2 lambda =  0.0001\n",
      "step     0, entropy loss: 2.339739, regular loss: 0.813896, total loss: 3.153635,eita=0.01000,trainAccuracy=0.13,testAccuracy=0.1298\n",
      "step    50, entropy loss: 1.151402, regular loss: 0.813897, total loss: 1.965299,eita=0.01000,trainAccuracy=0.83,testAccuracy=0.7943\n",
      "step   100, entropy loss: 0.584642, regular loss: 0.813933, total loss: 1.398575,eita=0.00980,trainAccuracy=0.84,testAccuracy=0.8664\n",
      "step   150, entropy loss: 0.304475, regular loss: 0.813921, total loss: 1.118396,eita=0.00980,trainAccuracy=0.94,testAccuracy=0.8980\n",
      "step   200, entropy loss: 0.338492, regular loss: 0.813890, total loss: 1.152382,eita=0.00960,trainAccuracy=0.94,testAccuracy=0.9118\n",
      "step   250, entropy loss: 0.344386, regular loss: 0.813845, total loss: 1.158231,eita=0.00960,trainAccuracy=0.91,testAccuracy=0.9196\n",
      "step   300, entropy loss: 0.295695, regular loss: 0.813797, total loss: 1.109492,eita=0.00941,trainAccuracy=0.93,testAccuracy=0.9238\n",
      "regular =  l2 lambda =  1e-05\n",
      "step     0, entropy loss: 2.379998, regular loss: 0.089548, total loss: 2.469546,eita=0.01000,trainAccuracy=0.10,testAccuracy=0.0861\n",
      "step    50, entropy loss: 1.069589, regular loss: 0.089555, total loss: 1.159145,eita=0.01000,trainAccuracy=0.82,testAccuracy=0.7811\n",
      "step   100, entropy loss: 0.617993, regular loss: 0.089566, total loss: 0.707559,eita=0.00980,trainAccuracy=0.85,testAccuracy=0.8585\n",
      "step   150, entropy loss: 0.400967, regular loss: 0.089572, total loss: 0.490539,eita=0.00980,trainAccuracy=0.92,testAccuracy=0.8663\n",
      "step   200, entropy loss: 0.342870, regular loss: 0.089576, total loss: 0.432446,eita=0.00960,trainAccuracy=0.92,testAccuracy=0.9002\n",
      "step   250, entropy loss: 0.497428, regular loss: 0.089579, total loss: 0.587007,eita=0.00960,trainAccuracy=0.89,testAccuracy=0.8802\n",
      "step   300, entropy loss: 0.297366, regular loss: 0.089581, total loss: 0.386947,eita=0.00941,trainAccuracy=0.92,testAccuracy=0.9022\n",
      "time elapse:471\n"
     ]
    }
   ],
   "source": [
    "#定义正则表达式，并训练\n",
    "# keras预定义的正则：注意没有l1:\n",
    "# 'l2','log_poisson_loss'\n",
    "if training_term == True :\n",
    "    start = time.time()\n",
    "    regular = ['l2'] #,'log_poisson_loss'\n",
    "    regular_lambda = [1e-4,1e-5]\n",
    "    for i in range(len(regular)):\n",
    "        for j in range(len(regular_lambda)):\n",
    "            print('regular = ' ,regular[i],'lambda = ',regular_lambda[j])\n",
    "            trainingNN(regular = regular[i],regular_lambda = regular_lambda[j],trainingIterations = 301,print_step_num = 50)\n",
    "    end = time.time()\n",
    "    print(\"time elapse:{}\".format(int(end-start)))\n",
    "#-----------------------------------------------"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结：\n",
    "* 正则作用极其重要，尤其当网络结构复杂时，正则项是必备项，本次调参对正则有一定了解，因此调参过程比较快\n",
    "* l2正则：惩罚力度大，比较平稳，本次任务选择了l2\n",
    "* 惩罚系数lambda：最重要的调参工作之一，需要配合网络复杂程度、迭代次数等调试,例如结构复杂时惩罚加大。经验值1e-5到1e-4之间，本次接近1e-4更好些\n",
    "* tf预定义函数中，未看到L1正则，需要继续看看; log_poisson_loss正则有待尝试"
   ]
  },
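  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The regular loss reported above is the lambda-weighted sum of squared weights that l2 adds to the objective. A minimal sketch of the penalty itself, showing how lambda scales it linearly:\n",
    "\n",
    "```python\n",
    "def l2_penalty(weights, lam):\n",
    "    # l2 regularization contributes lam * sum(w^2) to the total loss\n",
    "    return lam * sum(w * w for w in weights)\n",
    "\n",
    "# a tenfold smaller lambda shrinks the penalty tenfold, as in the runs above\n",
    "w = [0.5, -0.3, 0.2]\n",
    "print(l2_penalty(w, 1e-4))\n",
    "print(l2_penalty(w, 1e-5))\n",
    "```"
   ]
  },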
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 优化函数的影响"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "optimizer =  Adam\n",
      "step     0, entropy loss: 2.393904, regular loss: 0.684004, total loss: 3.077909,eita=0.01000,trainAccuracy=0.08,testAccuracy=0.0709\n",
      "step    50, entropy loss: 0.964630, regular loss: 0.684023, total loss: 1.648653,eita=0.01000,trainAccuracy=0.82,testAccuracy=0.8309\n",
      "step   100, entropy loss: 0.569448, regular loss: 0.684050, total loss: 1.253498,eita=0.00980,trainAccuracy=0.88,testAccuracy=0.7960\n",
      "step   150, entropy loss: 0.372946, regular loss: 0.684049, total loss: 1.056995,eita=0.00980,trainAccuracy=0.88,testAccuracy=0.9058\n",
      "step   200, entropy loss: 0.425336, regular loss: 0.684034, total loss: 1.109370,eita=0.00960,trainAccuracy=0.93,testAccuracy=0.9109\n",
      "step   250, entropy loss: 0.361951, regular loss: 0.684011, total loss: 1.045961,eita=0.00960,trainAccuracy=0.89,testAccuracy=0.9266\n",
      "step   300, entropy loss: 0.191568, regular loss: 0.683983, total loss: 0.875551,eita=0.00941,trainAccuracy=0.96,testAccuracy=0.9215\n",
      "optimizer =  Momentum\n",
      "step     0, entropy loss: 2.324551, regular loss: 0.741085, total loss: 3.065636,eita=0.01000,trainAccuracy=0.12,testAccuracy=0.1066\n",
      "step    50, entropy loss: 1.026772, regular loss: 0.741096, total loss: 1.767868,eita=0.01000,trainAccuracy=0.80,testAccuracy=0.7972\n",
      "step   100, entropy loss: 0.601606, regular loss: 0.741124, total loss: 1.342730,eita=0.00980,trainAccuracy=0.88,testAccuracy=0.8644\n",
      "step   150, entropy loss: 0.501821, regular loss: 0.741121, total loss: 1.242942,eita=0.00980,trainAccuracy=0.86,testAccuracy=0.8291\n",
      "step   200, entropy loss: 0.319818, regular loss: 0.741104, total loss: 1.060922,eita=0.00960,trainAccuracy=0.93,testAccuracy=0.9153\n",
      "step   250, entropy loss: 0.391004, regular loss: 0.741079, total loss: 1.132083,eita=0.00960,trainAccuracy=0.90,testAccuracy=0.9192\n",
      "step   300, entropy loss: 0.421850, regular loss: 0.741049, total loss: 1.162899,eita=0.00941,trainAccuracy=0.93,testAccuracy=0.9277\n",
      "optimizer =  NesterovMomentum\n",
      "step     0, entropy loss: 2.485815, regular loss: 0.798194, total loss: 3.284009,eita=0.01000,trainAccuracy=0.12,testAccuracy=0.1155\n",
      "step    50, entropy loss: 0.843708, regular loss: 0.798205, total loss: 1.641913,eita=0.01000,trainAccuracy=0.83,testAccuracy=0.7933\n",
      "step   100, entropy loss: 0.470241, regular loss: 0.798210, total loss: 1.268451,eita=0.00980,trainAccuracy=0.92,testAccuracy=0.8709\n",
      "step   150, entropy loss: 0.441777, regular loss: 0.798195, total loss: 1.239972,eita=0.00980,trainAccuracy=0.90,testAccuracy=0.8977\n",
      "step   200, entropy loss: 0.450002, regular loss: 0.798170, total loss: 1.248172,eita=0.00960,trainAccuracy=0.90,testAccuracy=0.8868\n",
      "step   250, entropy loss: 0.314123, regular loss: 0.798137, total loss: 1.112260,eita=0.00960,trainAccuracy=0.94,testAccuracy=0.9287\n",
      "step   300, entropy loss: 0.261128, regular loss: 0.798102, total loss: 1.059230,eita=0.00941,trainAccuracy=0.93,testAccuracy=0.9267\n"
     ]
    }
   ],
   "source": [
    "#定义优化函数，并训练\n",
    "# keras预定义的函数：\n",
    "# 'GD','Adam','Adadelta','AdagradDA','Momentum','NesterovMomentum' , 'Ftrl' ,'ProximalGradientDescent', 'ProximalAdagrad', 'RMSProp'\n",
    "if training_term == True :  \n",
    "    optimizer = ['Adam','Momentum','NesterovMomentum']\n",
    "    for i in range(len(optimizer)):\n",
    "        print('optimizer = ' ,optimizer[i])\n",
    "        trainingNN(optimizer = optimizer,trainingIterations = 301,print_step_num = 50)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结：\n",
    "1. Adam  : 损失和精度收敛最快，但是从最后的训练精度和预测精度看，有一定震荡，需要其他参数辅助；\n",
    "2. Momentum  :收敛慢，可能是为了跨过局部极值而带来的代价\n",
    "3. NesterovMomentum  :巨人的肩膀确实高，收敛稳定，后来居上。"
   ]
  },
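  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The gap between the two momentum variants comes down to one line in the update rule. A pure-Python sketch of a single parameter update in the TensorFlow MomentumOptimizer style (v <- mu*v + grad; Nesterov additionally looks ahead along the updated velocity). This is a simplification for illustration, not the notebook's actual optimizer:\n",
    "\n",
    "```python\n",
    "def momentum_step(w, grad, v, lr, mu, nesterov=False):\n",
    "    # accumulate velocity: v <- mu * v + grad\n",
    "    v = mu * v + grad\n",
    "    if nesterov:\n",
    "        # Nesterov momentum: step along grad + mu * v (look-ahead)\n",
    "        w = w - lr * (grad + mu * v)\n",
    "    else:\n",
    "        # classical momentum: step along the velocity itself\n",
    "        w = w - lr * v\n",
    "    return w, v\n",
    "\n",
    "w, v = momentum_step(1.0, 0.5, 0.0, lr=0.01, mu=0.9)\n",
    "print(w, v)\n",
    "```"
   ]
  },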
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 参数合影"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "参数逐个检查，记录最优"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step     0, entropy loss: 2.405468, regular loss: 0.919060, total loss: 3.324528,eita=0.10000,trainAccuracy=0.27,testAccuracy=0.1662\n",
      "step   100, entropy loss: 0.119267, regular loss: 0.918336, total loss: 1.037603,eita=0.09800,trainAccuracy=1.00,testAccuracy=0.9578\n",
      "step   200, entropy loss: 0.123645, regular loss: 0.917219, total loss: 1.040864,eita=0.09604,trainAccuracy=0.99,testAccuracy=0.9709\n",
      "step   300, entropy loss: 0.124354, regular loss: 0.916058, total loss: 1.040412,eita=0.09412,trainAccuracy=0.98,testAccuracy=0.9697\n",
      "step   400, entropy loss: 0.047005, regular loss: 0.914880, total loss: 0.961885,eita=0.09224,trainAccuracy=1.00,testAccuracy=0.9770\n",
      "step   500, entropy loss: 0.040866, regular loss: 0.913679, total loss: 0.954545,eita=0.09039,trainAccuracy=1.00,testAccuracy=0.9778\n",
      "step   600, entropy loss: 0.052919, regular loss: 0.912464, total loss: 0.965383,eita=0.08858,trainAccuracy=1.00,testAccuracy=0.9832\n",
      "step   700, entropy loss: 0.073819, regular loss: 0.911248, total loss: 0.985067,eita=0.08681,trainAccuracy=1.00,testAccuracy=0.9806\n",
      "step   800, entropy loss: 0.017695, regular loss: 0.910036, total loss: 0.927732,eita=0.08508,trainAccuracy=1.00,testAccuracy=0.9857\n",
      "step   900, entropy loss: 0.043957, regular loss: 0.908825, total loss: 0.952782,eita=0.08337,trainAccuracy=1.00,testAccuracy=0.9867\n",
      "step  1000, entropy loss: 0.055141, regular loss: 0.907607, total loss: 0.962748,eita=0.08171,trainAccuracy=1.00,testAccuracy=0.9879\n"
     ]
    }
   ],
   "source": [
    "# 全部参数\n",
    "# if training_term == True :  \n",
    "start = time.time()\n",
    "trainingNN(#训练迭代参数\n",
    "           trainingIterations = 3001,batchSize = 100,\n",
    "           #-----------------------------------------------\n",
    "           #keras网络层参数\n",
    "           #卷积\n",
    "           filters=np.array([16,32,64]),\n",
    "           kernel_size=np.array([[5,5],[5,5],[5,5]]),\n",
    "           strides=np.array([[1,1],[1,1],[1,1]]),\n",
    "           padding='same',\n",
    "           #初始化\n",
    "           kernel_initializer = 'lecun_uniform',\n",
    "           bias_initializer='zeros',\n",
    "           #池化\n",
    "           pool_size=[2,2],\n",
    "           #激活函数参数\n",
    "           activation = 'elu',\n",
    "           #正则参数\n",
    "           regular = 'l2',regular_lambda = 7e-5,\n",
    "           #结构\n",
    "           input_shape = np.array([28,28,1]),\n",
    "           units = 1000,\n",
    "           numClasses = 10,\n",
    "           #-----------------------------------------------\n",
    "           #学习率参数\n",
    "           learning_rate = 0.1,global_step=tf.Variable(0),decay_steps = 100,decay_rate = 0.98,staircase=True,\n",
    "           #-----------------------------------------------\n",
    "           #优化参数\n",
    "           optimizer = 'GD',momentum = 0.9,\n",
    "           #-----------------------------------------------\n",
    "           #损失函数参数\n",
    "           loss_calculation = 'softmax',\n",
    "           #监测参数\n",
    "           print_step_num = 100\n",
    "           )\n",
    "end = time.time()\n",
    "print(\"time of conv2d for 1 map:{}\".format(int(end-start)))\n",
    "#-----------------------------------------------"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Hint：\n",
    "- 深度神经网络\n",
    "- 激活函数\n",
    "- 正则化\n",
    "- 初始化\n",
    "- 卷积\n",
    "- 池化\n",
    "\n",
    "并探索如下超参数设置：\n",
    "  - 卷积kernel size\n",
    "  - 卷积kernel 数量\n",
    "  - 学习率\n",
    "  - 正则化因子\n",
    "  - 权重初始化分布参数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 作业小结\n",
    "1. 学习了keras的网络构建和函数库，初始化、激活、正则、代价函数等都有集成,很全面好用\n",
    "2. 学习了tensorflowd的学习率和优化函数，以及与keras的搭配使用，相得益彰，丰富的预定义函数让人眼花缭乱，需要多多摸索\n",
    "3. 尝试了assert监测方法，在网络层shape频繁变换的调优调试过程中起到了很重要的作用；name_space有待尝试\n",
    "4. 改进了网络层封装，调整卷积kernel size和数量更方便，目前3层，使用更多层可能会效果更好，有待尝试\n",
    "5. 本次训练的极值仍然是采用各影响因子多次轮询分析的方法，没有用gridSearch的方法，全局调优效果更好,对计算力是个考验。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
