{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Introduction to TensorFlow\n",
    "\n",
    "***\n",
    "**Author** [王何宇](http://person.zju.edu.cn/wangheyu), [School of Mathematical Sciences, Zhejiang University](http://www.math.zju.edu.cn)\n",
    "\n",
    "**References**\n",
    "1. https://www.tensorflow.org\n",
    "\n",
    "2. Nick McClure, TensorFlow Machine Learning Cookbook, PACKT Publishing Ltd., Birmingham, 2017.\n",
    "\n",
    "## Installing TensorFlow\n",
    "\n",
    "If you have Anaconda installed and have pointed its package source at the TUNA (Tsinghua) mirror, simply type at the command line:\n",
    "\n",
    "    conda install tensorflow\n",
    "\n",
    "and you are done. This should work the same on every platform Anaconda supports, although I have only tested it on Ubuntu.\n",
    "\n",
    "## Tensor\n",
    "\n",
    "The basic unit of data in TensorFlow is called a \"tensor\". This is a familiar concept: a \"scalar\" is a 0-dimensional tensor, a \"vector\" is a 1-dimensional tensor, a \"matrix\" is a 2-dimensional tensor, and in general a tensor can have any finite number of dimensions. In TensorFlow, a tensor's number of dimensions is called its \"rank\", and the lengths along its dimensions form its \"shape\". Here are some examples of tensors:\n",
    "\n",
    "    3 # a rank 0 tensor; a scalar with shape []\n",
    "    [1., 2., 3.] # a rank 1 tensor; a vector with shape [3]\n",
    "    [[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]\n",
    "    [[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]\n",
    "\n",
    "Below we import TensorFlow."
   ]
  },
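  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The rank/shape terminology can be checked quickly with NumPy (a sketch independent of TensorFlow; a NumPy array reports its rank as `ndim` and its shape as a tuple):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "r0 = np.array(3.)                                  # rank 0, shape ()\n",
    "r1 = np.array([1., 2., 3.])                        # rank 1, shape (3,)\n",
    "r2 = np.array([[1., 2., 3.], [4., 5., 6.]])        # rank 2, shape (2, 3)\n",
    "r3 = np.array([[[1., 2., 3.]], [[7., 8., 9.]]])    # rank 3, shape (2, 1, 3)\n",
    "ranks = [t.ndim for t in (r0, r1, r2, r3)]\n",
    "shapes = [t.shape for t in (r0, r1, r2, r3)]\n",
    "```"
   ]
  },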
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Computational Graph\n",
    "\n",
    "A TensorFlow program has two separate phases at its core: building the computational graph and running it.\n",
    "\n",
    "A **computational graph** describes a TensorFlow computation in graph-theoretic terms. The nodes of the graph are TensorFlow operations, each taking zero or more tensors as inputs and producing tensors as outputs. Let us build a simple graph. One kind of node is a constant (a constant tensor): a constant takes no inputs, and its output is simply the tensor value it stores internally. Below we create two floating-point tensors (scalars, in fact), node1 and node2:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Tensor(\"Const:0\", shape=(), dtype=float32) Tensor(\"Const_1:0\", shape=(), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "node1 = tf.constant(3.0, dtype=tf.float32)\n",
    "node2 = tf.constant(4.0) # also tf.float32 implicitly\n",
    "print(node1, node2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that this may not print the 3.0 and 4.0 you expected. Concrete values only appear when the nodes are evaluated, and to actually evaluate them we must run the graph inside a session. A Session encapsulates TensorFlow's control flow and state.\n",
    "\n",
    "The following code creates a Session object and evaluates node1 and node2, so we finally see 3.0 and 4.0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[3.0, 4.0]\n"
     ]
    }
   ],
   "source": [
    "sess = tf.Session()\n",
    "print(sess.run([node1, node2]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can build more complex computations by combining nodes with operations (operations are also nodes). For example, the following graph adds our two previously defined constants."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "node3: Tensor(\"Add:0\", shape=(), dtype=float32)\n",
      "sess.run(node3): 7.0\n"
     ]
    }
   ],
   "source": [
    "from __future__ import print_function # a no-op under Python 3\n",
    "node3 = tf.add(node1, node2)\n",
    "print(\"node3:\", node3)\n",
    "print(\"sess.run(node3):\", sess.run(node3))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "writer = tf.summary.FileWriter(\"log/tb/0\", graph=tf.get_default_graph())\n",
    "writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The visualized computational graph for this addition looks like:\n",
    "![](images/getting_started_add.png)\n",
    "\n",
    "So far this graph is rather dull, since it always produces a constant output. A graph can also accept external, parameterized inputs, so that its output varies. Such an input parameter is called a placeholder, and a placeholder must be fed a concrete value whenever the graph is run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "a = tf.placeholder(tf.float32)\n",
    "b = tf.placeholder(tf.float32)\n",
    "adder_node = a + b  # + provides a shortcut for tf.add(a, b)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With placeholders, the code below behaves much like a function or a lambda (a lightweight anonymous function, as in C++ or Python). We feed concrete values to the placeholders through feed_dict, and the graph computes the corresponding results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "7.5\n",
      "[3. 7.]\n"
     ]
    }
   ],
   "source": [
    "print(sess.run(adder_node, {a: 3, b: 4.5}))\n",
    "print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The visualized graph for this step is:\n",
    "![](images/getting_started_adder.png)\n",
    "Adding further operations makes the graph more complex. (Note that the computation automatically traces back through its dependencies.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "22.5\n"
     ]
    }
   ],
   "source": [
    "add_and_triple = adder_node * 3.\n",
    "print(sess.run(add_and_triple, {a: 3, b: 4.5}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The visualized graph is now: (the identifiers here are slightly inconsistent.)\n",
    "![](images/getting_started_triple.png)\n",
    "\n",
    "In machine learning we not only want a model that accepts varying inputs; we also want to adjust the graph itself, based on the results, to improve the output. Variables let us change the parameters of the graph. (Note: these are algorithm parameters, not external inputs. In numerical integration, for instance, the quadrature weights and points are algorithm parameters, and improving them improves the algorithm, whereas evaluating a particular integrand is an external input: an application of the algorithm that does not change the algorithm itself. Algorithm parameters correspond to variables here, and external inputs to placeholders. In a general programming language the programmer keeps this distinction in mind; in TensorFlow, since the data being analyzed and the analysis model are clearly distinct, the two are separated at the conceptual level.) A variable must be given a type and an initial value when it is constructed. Below is the familiar linear regression model; note the distinct roles of the variables and the placeholder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Colocations handled automatically by placer.\n"
     ]
    }
   ],
   "source": [
    "W = tf.Variable([.3], dtype=tf.float32)\n",
    "b = tf.Variable([-.3], dtype=tf.float32)\n",
    "x = tf.placeholder(tf.float32)\n",
    "linear_model = W*x + b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Recall that constants are initialized by tf.constant, and their values can never change (logically a constant is a variable that cannot be modified). Variables, by contrast, are not actually initialized by the call to tf.Variable; they must be initialized explicitly. For example, the following initializes all variables in one go:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "init = tf.global_variables_initializer()\n",
    "sess.run(init)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here init is a handle (a callable interface) to a subgraph (a subroutine) that initializes all global variables in TensorFlow. The variables are not actually initialized until we invoke this handle via sess.run."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the input x is a placeholder, we can evaluate linear_model for different values of x."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.         0.3        0.6        0.90000004]\n"
     ]
    }
   ],
   "source": [
    "print(sess.run(linear_model, {x: [1, 2, 3, 4]}))"
   ]
  },
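  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These values are easy to verify by hand. A NumPy sketch of the same affine map, using the initial values W = 0.3 and b = -0.3 from above (in float64, so without the float32 rounding visible in 0.90000004):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "W, b = 0.3, -0.3\n",
    "x = np.array([1., 2., 3., 4.])\n",
    "pred = W * x + b   # the model W*x + b evaluated at the fed values\n",
    "```"
   ]
  },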
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have a model, the next step is to assess how good it is. Just as in least-squares regression, we need an error function to evaluate the model; here the error function is called a loss function.\n",
    "\n",
    "In a machine-learning model an exact true solution generally does not exist, and evaluation is based on historical data. Let us cheat a little first and pretend the placeholder y holds the true values. The least-squares error can then be built as follows: linear_model - y is the pointwise error, tf.square squares it, and tf.reduce_sum sums everything into the total least-squares error (the squared discrete 2-norm):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "23.66\n"
     ]
    }
   ],
   "source": [
    "y = tf.placeholder(tf.float32)\n",
    "squared_deltas = tf.square(linear_model - y)\n",
    "loss = tf.reduce_sum(squared_deltas)\n",
    "print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))"
   ]
  },
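  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reported loss of 23.66 can be reproduced directly. A NumPy sketch of the same sum-of-squares loss:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "W, b = 0.3, -0.3\n",
    "x = np.array([1., 2., 3., 4.])\n",
    "y = np.array([0., -1., -2., -3.])\n",
    "loss = np.sum((W * x + b - y) ** 2)   # residuals: 0, 1.3, 2.6, 3.9\n",
    "```"
   ]
  },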
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we adjust the parameters W and b to improve the model. Continuing to cheat, suppose we already know that better parameters are -1 and 1, and we simply set them directly. Assignment is done with the tf.assign operation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.0\n"
     ]
    }
   ],
   "source": [
    "fixW = tf.assign(W, [-1.])\n",
    "fixb = tf.assign(b, [1.])\n",
    "sess.run([fixW, fixb])\n",
    "print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Ha, cheating works wonders: we already have a perfect model with zero loss. In real work this never happens; moreover, we want the computation to adjust the parameters automatically, which means we need an optimization algorithm, for example gradient descent. Computing numerical gradients is not a trivial matter, but TensorFlow provides this functionality in its train module, ready to call. Note that sess.run(init) below re-initializes W and b, so the starting parameters are inexact, yet the graph finds values very close to the true solution. The most convenient part is that we never have to steer the data flow ourselves: everything is traced back correctly through the graph. This is where the name \"tensor flow\" earns its meaning."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.cast instead.\n",
      "[array([-0.9999969], dtype=float32), array([0.9999908], dtype=float32)]\n"
     ]
    }
   ],
   "source": [
    "optimizer = tf.train.GradientDescentOptimizer(0.01)\n",
    "train = optimizer.minimize(loss)\n",
    "\n",
    "sess.run(init) # reset values to incorrect defaults.\n",
    "for i in range(1000):\n",
    "  sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})\n",
    "\n",
    "print(sess.run([W, b]))"
   ]
  },
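  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What the optimizer does here can be sketched by hand. For the loss $L = \\sum_j (Wx_j + b - y_j)^2$ the gradients are $\\partial L/\\partial W = 2\\sum_j (Wx_j + b - y_j)x_j$ and $\\partial L/\\partial b = 2\\sum_j (Wx_j + b - y_j)$. A minimal NumPy sketch of gradient descent with the same data, initial values and learning rate (not TensorFlow's actual implementation, which differentiates the graph automatically):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "x = np.array([1., 2., 3., 4.])\n",
    "y = np.array([0., -1., -2., -3.])\n",
    "W, b = 0.3, -0.3   # same starting point as the Variables above\n",
    "lr = 0.01          # same step size as GradientDescentOptimizer(0.01)\n",
    "for _ in range(1000):\n",
    "    r = W * x + b - y               # residuals\n",
    "    W -= lr * 2 * np.sum(r * x)     # dL/dW\n",
    "    b -= lr * 2 * np.sum(r)         # dL/db\n",
    "```\n",
    "\n",
    "This converges to roughly W = -1 and b = 1, matching the values TensorFlow finds."
   ]
  },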
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The complete program reads:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "W: [-0.9999969] b: [0.9999908] loss: 5.6999738e-11\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "# Model parameters\n",
    "W = tf.Variable([.3], dtype=tf.float32)\n",
    "b = tf.Variable([-.3], dtype=tf.float32)\n",
    "# Model input and output\n",
    "x = tf.placeholder(tf.float32)\n",
    "linear_model = W*x + b\n",
    "y = tf.placeholder(tf.float32)\n",
    "\n",
    "# loss\n",
    "loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares\n",
    "# optimizer\n",
    "optimizer = tf.train.GradientDescentOptimizer(0.01)\n",
    "train = optimizer.minimize(loss)\n",
    "\n",
    "# training data\n",
    "x_train = [1, 2, 3, 4]\n",
    "y_train = [0, -1, -2, -3]\n",
    "# training loop\n",
    "init = tf.global_variables_initializer()\n",
    "sess = tf.Session()\n",
    "sess.run(init) # reset values to wrong\n",
    "for i in range(1000):\n",
    "  sess.run(train, {x: x_train, y: y_train})\n",
    "\n",
    "# evaluate training accuracy\n",
    "curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})\n",
    "print(\"W: %s b: %s loss: %s\"%(curr_W, curr_b, curr_loss))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The visualized computational graph:\n",
    "![](images/getting_started_final.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we introduce the tf.estimator module, a high-level library that simplifies the mechanics of machine learning: it runs the training loop, runs the evaluation loop, and manages data sets. Let us see how much simpler linear regression becomes with an estimator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Using default config.\n",
      "WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpaz2twpe2\n",
      "INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpaz2twpe2', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true\n",
      "graph_options {\n",
      "  rewrite_options {\n",
      "    meta_optimizer_iterations: ONE\n",
      "  }\n",
      "}\n",
      ", '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f9b386a1208>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_queue_runner.py:62: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "To construct input pipelines, use the `tf.data` module.\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_functions.py:500: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "To construct input pipelines, use the `tf.data` module.\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/python/feature_column/feature_column_v2.py:2703: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.cast instead.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Create CheckpointSaverHook.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/monitored_session.py:809: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "To construct input pipelines, use the `tf.data` module.\n",
      "INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpaz2twpe2/model.ckpt.\n",
      "INFO:tensorflow:loss = 14.0, step = 1\n",
      "INFO:tensorflow:global_step/sec: 665.552\n",
      "INFO:tensorflow:loss = 0.50831145, step = 101 (0.151 sec)\n",
      "INFO:tensorflow:global_step/sec: 971.401\n",
      "INFO:tensorflow:loss = 0.03811746, step = 201 (0.103 sec)\n",
      "INFO:tensorflow:global_step/sec: 863.232\n",
      "INFO:tensorflow:loss = 0.007971367, step = 301 (0.116 sec)\n",
      "INFO:tensorflow:global_step/sec: 769.97\n",
      "INFO:tensorflow:loss = 0.0032586372, step = 401 (0.130 sec)\n",
      "INFO:tensorflow:global_step/sec: 786.521\n",
      "INFO:tensorflow:loss = 0.0005074969, step = 501 (0.127 sec)\n",
      "INFO:tensorflow:global_step/sec: 962.995\n",
      "INFO:tensorflow:loss = 0.00022165407, step = 601 (0.104 sec)\n",
      "INFO:tensorflow:global_step/sec: 1018.53\n",
      "INFO:tensorflow:loss = 1.6976946e-05, step = 701 (0.098 sec)\n",
      "INFO:tensorflow:global_step/sec: 970.039\n",
      "INFO:tensorflow:loss = 6.859027e-06, step = 801 (0.103 sec)\n",
      "INFO:tensorflow:global_step/sec: 987.909\n",
      "INFO:tensorflow:loss = 5.89365e-07, step = 901 (0.101 sec)\n",
      "INFO:tensorflow:Saving checkpoints for 1000 into /tmp/tmpaz2twpe2/model.ckpt.\n",
      "INFO:tensorflow:Loss for final step: 2.2201436e-07.\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Starting evaluation at 2020-12-22T06:39:53Z\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use standard file APIs to check for files with this prefix.\n",
      "INFO:tensorflow:Restoring parameters from /tmp/tmpaz2twpe2/model.ckpt-1000\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Finished evaluation at 2020-12-22-06:39:55\n",
      "INFO:tensorflow:Saving dict for global step 1000: average_loss = 6.024204e-08, global_step = 1000, label/mean = -1.5, loss = 2.4096815e-07, prediction/mean = -1.5001228\n",
      "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1000: /tmp/tmpaz2twpe2/model.ckpt-1000\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Starting evaluation at 2020-12-22T06:39:55Z\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from /tmp/tmpaz2twpe2/model.ckpt-1000\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Finished evaluation at 2020-12-22-06:39:58\n",
      "INFO:tensorflow:Saving dict for global step 1000: average_loss = 0.0025410648, global_step = 1000, label/mean = -3.027521, loss = 0.010164259, prediction/mean = -2.999834\n",
      "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1000: /tmp/tmpaz2twpe2/model.ckpt-1000\n",
      "train metrics: {'average_loss': 6.024204e-08, 'label/mean': -1.5, 'loss': 2.4096815e-07, 'prediction/mean': -1.5001228, 'global_step': 1000}\n",
      "eval metrics: {'average_loss': 0.0025410648, 'label/mean': -3.027521, 'loss': 0.010164259, 'prediction/mean': -2.999834, 'global_step': 1000}\n"
     ]
    }
   ],
   "source": [
    "# NumPy is often used to load, manipulate and preprocess data.\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "\n",
    "# Declare list of features. We only have one numeric feature. There are many\n",
    "# other types of columns that are more complicated and useful.\n",
    "feature_columns = [tf.feature_column.numeric_column(\"x\", shape=[1])]\n",
    "\n",
    "# An estimator is the front end to invoke training (fitting) and evaluation\n",
    "# (inference). There are many predefined types like linear regression,\n",
    "# linear classification, and many neural network classifiers and regressors.\n",
    "# The following code provides an estimator that does linear regression.\n",
    "estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)\n",
    "\n",
    "# TensorFlow provides many helper methods to read and set up data sets.\n",
    "# Here we use two data sets: one for training and one for evaluation\n",
    "# We have to tell the function how many batches\n",
    "# of data (num_epochs) we want and how big each batch should be.\n",
    "x_train = np.array([1., 2., 3., 4.])\n",
    "y_train = np.array([0., -1., -2., -3.])\n",
    "x_eval = np.array([2., 5., 8., 1.])\n",
    "y_eval = np.array([-1.01, -4.1, -7, 0.])\n",
    "input_fn = tf.estimator.inputs.numpy_input_fn(\n",
    "    {\"x\": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)\n",
    "train_input_fn = tf.estimator.inputs.numpy_input_fn(\n",
    "    {\"x\": x_train}, y_train, batch_size=4, num_epochs=1000, shuffle=False)\n",
    "eval_input_fn = tf.estimator.inputs.numpy_input_fn(\n",
    "    {\"x\": x_eval}, y_eval, batch_size=4, num_epochs=1000, shuffle=False)\n",
    "\n",
    "# We can invoke 1000 training steps by invoking the  method and passing the\n",
    "# training data set.\n",
    "estimator.train(input_fn=input_fn, steps=1000)\n",
    "\n",
    "# Here we evaluate how well our model did.\n",
    "train_metrics = estimator.evaluate(input_fn=train_input_fn)\n",
    "eval_metrics = estimator.evaluate(input_fn=eval_input_fn)\n",
    "print(\"train metrics: %r\"% train_metrics)\n",
    "print(\"eval metrics: %r\"% eval_metrics)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "tf.estimator also allows users to design their own models. In the following example we implement linear regression ourselves."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "\n",
    "# Declare list of features, we only have one real-valued feature\n",
    "def model_fn(features, labels, mode):\n",
    "  # Build a linear model and predict values\n",
    "  W = tf.get_variable(\"W\", [1], dtype=tf.float64)\n",
    "  b = tf.get_variable(\"b\", [1], dtype=tf.float64)\n",
    "  y = W*features['x'] + b\n",
    "  # Loss sub-graph\n",
    "  loss = tf.reduce_sum(tf.square(y - labels))\n",
    "  # Training sub-graph\n",
    "  global_step = tf.train.get_global_step()\n",
    "  optimizer = tf.train.GradientDescentOptimizer(0.01)\n",
    "  train = tf.group(optimizer.minimize(loss),\n",
    "                   tf.assign_add(global_step, 1))\n",
    "  # EstimatorSpec connects subgraphs we built to the\n",
    "  # appropriate functionality.\n",
    "  return tf.estimator.EstimatorSpec(\n",
    "      mode=mode,\n",
    "      predictions=y,\n",
    "      loss=loss,\n",
    "      train_op=train)\n",
    "\n",
    "estimator = tf.estimator.Estimator(model_fn=model_fn)\n",
    "# define our data sets\n",
    "x_train = np.array([1., 2., 3., 4.])\n",
    "y_train = np.array([0., -1., -2., -3.])\n",
    "x_eval = np.array([2., 5., 8., 1.])\n",
    "y_eval = np.array([-1.01, -4.1, -7., 0.])\n",
    "input_fn = tf.estimator.inputs.numpy_input_fn(\n",
    "    {\"x\": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)\n",
    "train_input_fn = tf.estimator.inputs.numpy_input_fn(\n",
    "    {\"x\": x_train}, y_train, batch_size=4, num_epochs=1000, shuffle=False)\n",
    "eval_input_fn = tf.estimator.inputs.numpy_input_fn(\n",
    "    {\"x\": x_eval}, y_eval, batch_size=4, num_epochs=1000, shuffle=False)\n",
    "\n",
    "# train\n",
    "estimator.train(input_fn=input_fn, steps=1000)\n",
    "# Here we evaluate how well our model did.\n",
    "train_metrics = estimator.evaluate(input_fn=train_input_fn)\n",
    "eval_metrics = estimator.evaluate(input_fn=eval_input_fn)\n",
    "print(\"train metrics: %r\"% train_metrics)\n",
    "print(\"eval metrics: %r\"% eval_metrics)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## MNIST, a First Pass\n",
    "\n",
    "This is the entry-level project of machine learning, much as learning C always starts with Hello World. MNIST is a computer-vision data set of handwritten digit images, such as these:\n",
    "![](images/MNIST.png)\n",
    "The digit each image represents is known and recorded in the image's label; the labels of the images above are 5, 0, 4 and 1.\n",
    "\n",
    "In this tutorial we will train a model to recognize the digits in these handwritten images. Our goal is not to train the most practical model possible but to demonstrate how TensorFlow is used, so we adopt a very simple model called Softmax Regression.\n",
    "\n",
    "The code for this tutorial is in fact very short, with only three truly essential lines, yet it conveys the key ideas: how TensorFlow works, and the basic concepts of machine learning. It is well worth careful study.\n",
    "\n",
    "We will walk through the code of mnist_softmax.py line by line. First, have a look."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-17-144f80b31d53>:36: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please write your own downloading logic.\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting MNIST_data/train-images-idx3-ubyte.gz\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.data to implement this functionality.\n",
      "Extracting MNIST_data/train-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use tf.one_hot on tensors.\n",
      "Extracting MNIST_data/t10k-images-idx3-ubyte.gz\n",
      "Extracting MNIST_data/t10k-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From /home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use alternatives such as official/mnist/dataset.py from tensorflow/models.\n",
      "WARNING:tensorflow:From <ipython-input-17-144f80b31d53>:57: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "\n",
      "Future major versions of TensorFlow will allow gradients to flow\n",
      "into the labels input on backprop by default.\n",
      "\n",
      "See `tf.nn.softmax_cross_entropy_with_logits_v2`.\n",
      "\n",
      "0.9204\n"
     ]
    },
    {
     "ename": "SystemExit",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "An exception has occurred, use %tb to see the full traceback.\n",
      "\u001b[0;31mSystemExit\u001b[0m\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/wang/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3275: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.\n",
      "  warn(\"To exit: use 'exit', 'quit', or Ctrl-D.\", stacklevel=1)\n"
     ]
    }
   ],
   "source": [
    "# Copyright 2015 The TensorFlow Authors. All Rights Reserved.\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     http://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "# ==============================================================================\n",
    "\n",
    "\"\"\"A very simple MNIST classifier.\n",
    "See extensive documentation at\n",
    "https://www.tensorflow.org/get_started/mnist/beginners\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import sys\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "FLAGS = None\n",
    "\n",
    "\n",
    "def main(_):\n",
    "  # Import data\n",
    "  mnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n",
    "\n",
    "  # Create the model\n",
    "  x = tf.placeholder(tf.float32, [None, 784])\n",
    "  W = tf.Variable(tf.zeros([784, 10]))\n",
    "  b = tf.Variable(tf.zeros([10]))\n",
    "  y = tf.matmul(x, W) + b\n",
    "\n",
    "  # Define loss and optimizer\n",
    "  y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "\n",
    "  # The raw formulation of cross-entropy,\n",
    "  #\n",
    "  #   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),\n",
    "  #                                 reduction_indices=[1]))\n",
    "  #\n",
    "  # can be numerically unstable.\n",
    "  #\n",
    "  # So here we use tf.nn.softmax_cross_entropy_with_logits on the raw\n",
    "  # outputs of 'y', and then average across the batch.\n",
    "  cross_entropy = tf.reduce_mean(\n",
    "      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "  train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "\n",
    "  sess = tf.InteractiveSession()\n",
    "  tf.global_variables_initializer().run()\n",
    "  # Train\n",
    "  for _ in range(1000):\n",
    "    batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})\n",
    "\n",
    "  # Test trained model\n",
    "  correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "  print(sess.run(accuracy, feed_dict={x: mnist.test.images,\n",
    "                                      y_: mnist.test.labels}))\n",
    "\n",
    "if __name__ == '__main__':\n",
    "  parser = argparse.ArgumentParser()\n",
    "  parser.add_argument('--data_dir', type=str, default='MNIST_data',\n",
    "                      help='Directory for storing input data')\n",
    "  FLAGS, unparsed = parser.parse_known_args()\n",
    "tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The Structure of the MNIST Data\n",
    "\n",
    "The MNIST data is available from the site maintained by Yann LeCun (http://yann.lecun.com/exdb/mnist/). The following two lines in the code download it automatically and store it in the MNIST_data directory you specify. If you already have the data, placing it in this directory avoids downloading it again.\n",
    "\n",
    "    from tensorflow.examples.tutorials.mnist import input_data\n",
    "    mnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\n",
    "\n",
    "MNIST consists of three parts: 55,000 training examples (train data, mnist.train), 10,000 test examples (test data, mnist.test), and 5,000 validation examples (validation data, mnist.validation). This split is essential: the key to machine learning is holding back data the model has never learned from for testing, since otherwise we cannot tell whether the learning is genuinely effective (rather than mere statistics on known data).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each MNIST example has two parts: an image of a handwritten digit and its corresponding label. We will call the images \"x\" and the labels \"y\". Both the training and the test set contain images with their labels; for example, the training images are mnist.train.images and their labels are mnist.train.labels.\n",
    "\n",
    "Each digit image is a 28x28-pixel grayscale bitmap, which we can represent as a matrix of numbers:\n",
    "![](images/MNIST-Matrix.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can flatten this matrix into a vector of 28x28 = 784 numbers. Exactly how we flatten it does not matter, as long as we flatten every image the same way. From this point of view, the MNIST images are just a collection of points in a 784-dimensional vector space, with a rich structure.\n",
    "\n",
    "Notice that flattening the data discards some of the 2D structure. Isn't that a problem? Indeed, the best models exploit the 2D structure, but for this introductory tutorial our model, softmax regression, does not need it."
   ]
  },
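  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small illustration (a NumPy sketch, not part of the tutorial's code), flattening a 28x28 image into a 784-vector can look like this:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# A hypothetical 28x28 grayscale image (values in [0, 1]).\n",
    "image = np.random.rand(28, 28)\n",
    "\n",
    "# Flatten it row by row into a 784-dimensional vector.\n",
    "flat = image.reshape(784)\n",
    "print(flat.shape)  # (784,)\n",
    "\n",
    "# Pixel (i, j) of the image becomes entry i*28 + j of the vector.\n",
    "print(flat[2 * 28 + 5] == image[2, 5])  # True\n",
    "```\n",
    "\n",
    "As long as every image is flattened by the same rule, the original 2D indexing can always be recovered."
   ]
  },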
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So mnist.train.images is a tensor of shape [55000, 784]. The first index enumerates the images and the second index enumerates the pixels within each (flattened) image. Each entry of the tensor is a grayscale value between 0 and 1.\n",
    "![](images/mnist-train-xs.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each image in MNIST has a label, a digit from 0 to 9 describing its content. In this tutorial the labels are encoded as \"one-hot vectors\": each label is a 10-dimensional vector that is 1 in the position corresponding to the label and 0 everywhere else. For example, label 3 is represented as [0,0,0,1,0,0,0,0,0,0]. Accordingly, mnist.train.labels is a [55000, 10] tensor of numbers.\n",
    "![](images/mnist-train-ys.png)"
   ]
  },
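  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Constructing such a one-hot vector is straightforward; here is a NumPy sketch (not part of the tutorial's code, and the helper name one_hot is our own):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def one_hot(label, num_classes=10):\n",
    "    # A vector of zeros with a single 1 at position `label`.\n",
    "    v = np.zeros(num_classes)\n",
    "    v[label] = 1.0\n",
    "    return v\n",
    "\n",
    "print(one_hot(3))  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]\n",
    "```"
   ]
  },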
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Softmax Regressions\n",
    "Every image in MNIST is a handwritten digit from 0 to 9, so each image has only 10 possibilities. Given an input image, we want the model to output a probability that it is each digit. For example, the model might decide a picture of a nine is a 9 with probability 80%, give it a 5% chance of being an 8, and spread a little probability over the other digits; the conclusion can never be 100% certain. (This is really a classification problem.)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For this classic kind of problem, softmax regression is a natural and simple model: it is designed precisely to assign probabilities over a finite set of alternatives. Its output is a finite vector of values between 0 and 1 whose components sum to 1 (a probability mass function). Even many more sophisticated models use a final softmax layer to project their output onto a probability distribution."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A standard softmax regression has two steps: for each input, we add up the evidence that it belongs to each class (here the classes are the digits 0 through 9), and then we convert the evidence into probabilities. For a given image, the evidence for a class is a weighted sum of the pixel grayscale values: the weight is positive for pixels that support the class and negative for pixels that argue against it. The images below show the weights for each class, with red for positive weights and blue for negative weights.\n",
    "![](images/softmax-weights.png)\n",
    "(Training is what adjusts these weight patterns.)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We also add some extra evidence called a bias, which shifts the evidence independently of the input (in plain terms: the constant term of a linear model). Essentially, once training is complete, the bias does not depend on the particular input. For a new input x, the evidence that it is digit i is computed as\n",
    "$$\n",
    "\\mbox{evidence}_i = \\sum_{j} W_{i, j}x_j + b_i\n",
    "$$\n",
    "where $W_i$ holds the weights for digit $i$, $b_i$ is the corresponding bias, and the index $j$ runs over the pixels of the image."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The evidence is then converted into probabilities y by the softmax function:\n",
    "$$\n",
    "y = \\mbox{softmax}(\\mbox{evidence}).\n",
    "$$\n",
    "Here softmax serves as an \"activation\" (sometimes \"link\") function, sharpening the fuzzy output of our linear function into the more definite form we want: in this tutorial, a probability distribution over the 10 digit classes. Concretely,\n",
    "$$\n",
    "\\mbox{softmax}(\\mbox{evidence})_i = \\frac{\\exp(\\mbox{evidence}_i)}{\\sum_j\\exp({\\mbox{evidence}_j})}\n",
    "$$\n",
    "The exponential amplifies the differences between the evidence values nonlinearly, and the normalization then turns the result into probabilities; the exponential also keeps every entry positive. (When applying a nonlinear amplification, it is worth considering its practical meaning.)"
   ]
  },
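  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the formula concrete, here is a minimal NumPy implementation of softmax (a sketch, not the tutorial's code). Subtracting the maximum before exponentiating is a standard trick for numerical stability and does not change the result:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(evidence):\n",
    "    # Shift by the max for numerical stability; the result is unchanged.\n",
    "    e = np.exp(evidence - np.max(evidence))\n",
    "    return e / e.sum()\n",
    "\n",
    "p = softmax(np.array([2.0, 1.0, 0.1]))\n",
    "print(p)        # larger evidence -> larger probability\n",
    "print(p.sum())  # sums to 1 (up to rounding)\n",
    "```"
   ]
  },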
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The whole softmax regression can be pictured as follows:\n",
    "![](images/softmax-regression-scalargraph.png)\n",
    "or, in matrix form,\n",
    "$$\n",
    "y = \\mbox{softmax}(Wx + b)\n",
    "$$"
   ]
  },
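  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Putting the pieces together, the whole model is one matrix expression. Here is a NumPy sketch with made-up small dimensions in place of 784 and 10 (not the tutorial's code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(z):\n",
    "    e = np.exp(z - np.max(z))\n",
    "    return e / e.sum()\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "W = rng.normal(size=(3, 4))  # weights: 3 classes, 4 input features\n",
    "b = rng.normal(size=3)       # one bias per class\n",
    "x = rng.normal(size=4)       # one input vector\n",
    "\n",
    "y = softmax(W @ x + b)       # predicted class probabilities\n",
    "print(y.sum())               # sums to 1 (up to rounding)\n",
    "```\n",
    "\n",
    "Note that the TensorFlow code below uses the transposed convention tf.matmul(x, W), where each row of x is one input."
   ]
  },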
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Implementing the regression\n",
    "To do efficient numerical computation in Python we rely on libraries such as NumPy, but every operation that comes back to Python still carries overhead, which greatly reduces efficiency, especially on GPUs or in distributed settings. TensorFlow instead describes the whole computation and runs it outside Python. First import TensorFlow:\n",
    "\n",
    "    import tensorflow as tf\n",
    "Then reserve a place (placeholder) for the input data:\n",
    "\n",
    "    x = tf.placeholder(tf.float32, [None, 784])\n",
    "Here None means the first dimension can have any length."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we need the weights and biases as model parameters. Note that in TensorFlow parameters are represented as Variables (a distinction between input data and model parameters: the former are placeholders, the latter Variables). Training calibrates the model by adjusting the Variables.\n",
    "\n",
    "    W = tf.Variable(tf.zeros([784, 10]))\n",
    "    b = tf.Variable(tf.zeros([10]))\n",
    "\n",
    "Parameters need initial values, but for this model the initial values hardly matter, so we simply initialize both W and b to all zeros. The computation is then written in matrix form (note how matrix multiplication works in tf):\n",
    "\n",
    "    y = tf.nn.softmax(tf.matmul(x, W) + b)\n",
    "This single line computes the evidence for every digit for every input, and applying softmax turns it into the activated probabilities. (The whole regression in one line.)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Training\n",
    "Training means using data whose correct answers are known to adjust W and b so that the model can recognize unseen images. For this we first need an evaluation function, i.e. an error measure; in machine learning this is usually called the \"cost\" or the \"loss\". Either way, we calibrate the model by minimizing this error.\n",
    "(Stripped of its packaging, what is nowadays called a neural network or machine learning is in essence parameter optimization. The model itself is a multi-layer linear model with a nonlinear amplification on top; the layers may be very numerous and the nonlinearities exotic, but the parameter fitting still comes down to the same few optimization methods.)\n",
    "\n",
    "A commonly recommended loss function is the \"cross-entropy\", a concept from information theory defined as\n",
    "$$\n",
    "H_{y'}(y) = -\\sum_i y_i'\\log(y_i)\n",
    "$$\n",
    "where $y$ is the computed probability distribution and $y'$ is the true distribution (the one-hot labels). (It is worth thinking carefully about what this function means.) In some sense, cross-entropy measures how inaccurate our predictions are. More about cross-entropy can be found [here](http://colah.github.io/posts/2015-09-Visual-Information/)."
   ]
  },
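  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A NumPy sketch of this definition (not the tutorial's code; the example probability vectors are made up) shows that confident correct predictions get a small loss and spread-out predictions a larger one:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def cross_entropy(y_true, y_pred):\n",
    "    # H_{y'}(y) = -sum_i y'_i log(y_i), for a single example.\n",
    "    return -np.sum(y_true * np.log(y_pred))\n",
    "\n",
    "y_true = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0])  # one-hot label 3\n",
    "\n",
    "confident = np.full(10, 0.01)\n",
    "confident[3] = 0.91          # most mass on the correct digit\n",
    "unsure = np.full(10, 0.1)    # uniform over all 10 digits\n",
    "\n",
    "print(cross_entropy(y_true, confident))  # small loss (-log 0.91)\n",
    "print(cross_entropy(y_true, unsure))     # larger loss (-log 0.1)\n",
    "```"
   ]
  },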
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To implement cross-entropy we first need a new placeholder to hold the true labels used for calibration:\n",
    "\n",
    "    y_ = tf.placeholder(tf.float32, [None, 10])\n",
    "The cross-entropy itself is then a single line:\n",
    "\n",
    "    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))\n",
    "Here reduce_sum sums over a subset of the indices; concretely, reduction_indices=[1] tells it to sum over index 1 (dimensions are numbered from 0), i.e. over the 10 classes of each example. Finally, reduce_mean averages the result over all the examples. As a formula (note that y and y' are now None x 10 matrices):\n",
    "$$\n",
    "\\mbox{cross\\_entropy}_{y'}(y) = \\frac{1}{N}\\left(\\sum_{i=0}^{N - 1}\\left(-\\sum_j y_{i,j}'\\log(y_{i,j})\\right)\\right)\n",
    "$$"
   ]
  },
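  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The axis bookkeeping in reduce_sum and reduce_mean mirrors NumPy's, so the batched formula can be checked with a NumPy sketch (a made-up batch of 2 examples with 3 classes, not the tutorial's code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "y_true = np.array([[0, 1, 0],\n",
    "                   [1, 0, 0]])        # one-hot labels, shape (2, 3)\n",
    "y_pred = np.array([[0.2, 0.7, 0.1],\n",
    "                   [0.6, 0.3, 0.1]])  # predicted probabilities\n",
    "\n",
    "# Sum over axis 1 (the class index), as reduction_indices=[1] does:\n",
    "per_example = -np.sum(y_true * np.log(y_pred), axis=1)\n",
    "print(per_example.shape)  # (2,)\n",
    "\n",
    "# Then average over the batch, as reduce_mean does:\n",
    "print(per_example.mean())\n",
    "```"
   ]
  },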
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that in the actual source code we do not use this formula directly, because it is numerically unstable ($y_{i,j}$ can be tiny, making $\\log(y_{i,j})$ close to $-\\infty$). Instead we call tf.nn.softmax_cross_entropy_with_logits in place of\n",
    "\n",
    "    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])\n",
    "and average the result:\n",
    "\n",
    "    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))\n",
    "This function applies softmax internally, so the logits argument should be the raw, unnormalized scores tf.matmul(x, W) + b rather than the output of tf.nn.softmax."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we want training to reduce this cross_entropy. Unlike a conventional procedural program, TensorFlow always holds the entire computation graph, so it knows which variables affect the loss function and how to adjust those variables to minimize it. In other words, it solves an optimization problem that minimizes the loss, and the concrete optimization strategy is up to the user. (A user can just pick a strategy and a few optimization parameters and leave the rest to TensorFlow; advanced users can also build elaborate custom optimization schemes.) The strategy chosen here is:\n",
    "\n",
    "    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n",
    "This minimizes cross_entropy by gradient descent (steepest descent) with a learning rate of 0.5. The learning rate is the step size: if the current iterate is $x_k$ and the gradient of the loss is $\\nabla f(x_k)$, the update is $x_{k + 1} = x_k - 0.5\\,\\nabla f(x_k)$. Many other optimization methods are available besides gradient descent."
   ]
  },
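  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what the optimizer is doing, here is hand-written gradient descent on a toy one-dimensional loss f(w) = (w - 3)^2 (a sketch; TensorFlow differentiates the real loss automatically):\n",
    "\n",
    "```python\n",
    "# Minimize f(w) = (w - 3)**2 by gradient descent; f'(w) = 2*(w - 3).\n",
    "learning_rate = 0.1\n",
    "w = 0.0\n",
    "for _ in range(100):\n",
    "    grad = 2 * (w - 3)\n",
    "    w = w - learning_rate * grad  # the update w_{k+1} = w_k - lr * grad\n",
    "print(w)  # close to the minimizer 3.0\n",
    "```"
   ]
  },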
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can actually run our model in an interactive session:\n",
    "\n",
    "    sess = tf.InteractiveSession()\n",
    "First we create and run an operation that initializes all the variables:\n",
    "\n",
    "    tf.global_variables_initializer().run()\n",
    "Then we run the training step 1000 times:\n",
    "\n",
    "    for _ in range(1000):\n",
    "      batch_xs, batch_ys = mnist.train.next_batch(100)\n",
    "      sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each step trains on a batch of 100 samples drawn at random from the training set and passed to the placeholders. Using small random batches instead of the complete data set makes each step much cheaper. This practice is called stochastic training, and the corresponding optimization method is stochastic gradient descent."
   ]
  },
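  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The batching itself is simple: mnist.train.next_batch(100) essentially does the following (a NumPy sketch with a made-up, smaller data set, not the tutorial's code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "images = rng.random((1000, 784))  # stand-in for a training image set\n",
    "labels = rng.random((1000, 10))   # stand-in for the matching labels\n",
    "\n",
    "# Draw a random mini-batch of 100 examples (without replacement).\n",
    "idx = rng.choice(1000, size=100, replace=False)\n",
    "batch_xs, batch_ys = images[idx], labels[idx]\n",
    "print(batch_xs.shape, batch_ys.shape)  # (100, 784) (100, 10)\n",
    "```"
   ]
  },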
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Evaluating our model\n",
    "\n",
    "We need a function to measure how often the model is correct. tf.argmax is especially useful here: it returns the index of the largest entry of a tensor along a given dimension. So tf.argmax(y,1) is the label the model considers most likely for each input, while tf.argmax(y_,1) is obviously the true label. Therefore\n",
    "\n",
    "    correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))\n",
    "marks each sample as correct or not with True and False. Casting the booleans to floating-point numbers and averaging gives the accuracy (this relies on the conversion rule that True becomes 1 and False becomes 0). For example,\n",
    "\n",
    "    [True, False, True, True]\n",
    "becomes\n",
    "\n",
    "    [1,0,1,1]\n",
    "which averages to 0.75."
   ]
  },
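  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same computation in NumPy (a sketch, mirroring what tf.cast and tf.reduce_mean do):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "correct = np.array([True, False, True, True])\n",
    "# Casting booleans to floats gives [1., 0., 1., 1.]; the mean is the accuracy.\n",
    "accuracy = correct.astype(np.float32).mean()\n",
    "print(accuracy)  # 0.75\n",
    "```"
   ]
  },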
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we print our result:\n",
    "    \n",
    "    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))\n",
    "It should be about 92%."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualizing the program and its execution\n",
    "\n",
    "As we have seen, even though a TensorFlow program can be short, the actual data and internal computation are extremely complex, and modern machine learning and neural networks push toward ever more layers and ever larger data sets. In that setting, debugging, tracing, and even just analyzing the inputs and outputs become a serious burden. For this reason the TensorFlow project includes a dedicated tool for visualizing data and programs: TensorBoard. A detailed introduction by its authors is available in its [README](https://github.com/tensorflow/tensorboard) on GitHub. Let us first look at visualizing the computation graph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "tf.reset_default_graph()\n",
    "\n",
    "sess = tf.Session()\n",
    "node1 = tf.constant(3.0, dtype=tf.float32)\n",
    "node2 = tf.constant(4.0) # also tf.float32 implicitly\n",
    "node3 = tf.add(node1, node2)\n",
    "\n",
    "sess.run(node3)\n",
    "\n",
    "writer = tf.summary.FileWriter(\"tensorboard/simple/0\", graph=tf.get_default_graph())\n",
    "writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "On the command line, running tensorboard --logdir=tensorboard/simple/0 starts TensorBoard.\n",
    "![](images/simple.png)\n",
    "Note that the identifier names are not displayed; to get them we must specify names explicitly in the program. Let us modify the program accordingly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "tf.reset_default_graph()\n",
    "\n",
    "with tf.name_scope('input'):\n",
    "    node1 = tf.constant(3.0, dtype=tf.float32, name=\"node1\")\n",
    "    node2 = tf.constant(4.0, name=\"node2\") # also tf.float32 implicitly\n",
    "    node3 = tf.add(node1, node2)\n",
    "    \n",
    "with tf.Session() as sess:\n",
    "    sess.run(node3)\n",
    "    writer = tf.summary.FileWriter(\"tensorboard/hello/0\", graph=tf.get_default_graph())\n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The graph now looks like this:\n",
    "![](images/hello.png)\n",
    "Double-clicking zooms in:\n",
    "![](images/hello_zoom.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Beyond the program structure itself, we can use Summary objects to track the data flowing through the program and analyze the model's performance and errors. Let us look directly at a more elaborate example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting MNIST_data/train-images-idx3-ubyte.gz\n",
      "Extracting MNIST_data/train-labels-idx1-ubyte.gz\n",
      "Extracting MNIST_data/t10k-images-idx3-ubyte.gz\n",
      "Extracting MNIST_data/t10k-labels-idx1-ubyte.gz\n",
      "WARNING:tensorflow:From <ipython-input-20-a9364877e911>:106: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/wang/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py:1702: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).\n",
      "  warnings.warn('An interactive session is already active. This can '\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy at step 0: 0.0821\n",
      "Accuracy at step 10: 0.7216\n",
      "Accuracy at step 20: 0.8189\n",
      "Accuracy at step 30: 0.8683\n",
      "Accuracy at step 40: 0.8852\n",
      "Accuracy at step 50: 0.8941\n",
      "Accuracy at step 60: 0.9018\n",
      "Accuracy at step 70: 0.9023\n",
      "Accuracy at step 80: 0.9014\n",
      "Accuracy at step 90: 0.9147\n",
      "Adding run metadata for 99\n",
      "Accuracy at step 100: 0.916\n",
      "Accuracy at step 110: 0.9181\n",
      "Accuracy at step 120: 0.9128\n",
      "Accuracy at step 130: 0.9236\n",
      "Accuracy at step 140: 0.9235\n",
      "Accuracy at step 150: 0.9252\n",
      "Accuracy at step 160: 0.9279\n",
      "Accuracy at step 170: 0.9292\n",
      "Accuracy at step 180: 0.9319\n",
      "Accuracy at step 190: 0.9341\n",
      "Adding run metadata for 199\n",
      "Accuracy at step 200: 0.9361\n",
      "Accuracy at step 210: 0.9346\n",
      "Accuracy at step 220: 0.9355\n",
      "Accuracy at step 230: 0.9361\n",
      "Accuracy at step 240: 0.9388\n",
      "Accuracy at step 250: 0.9413\n",
      "Accuracy at step 260: 0.9417\n",
      "Accuracy at step 270: 0.9357\n",
      "Accuracy at step 280: 0.9424\n",
      "Accuracy at step 290: 0.942\n",
      "Adding run metadata for 299\n",
      "Accuracy at step 300: 0.9452\n",
      "Accuracy at step 310: 0.9462\n",
      "Accuracy at step 320: 0.9404\n",
      "Accuracy at step 330: 0.9484\n",
      "Accuracy at step 340: 0.9491\n",
      "Accuracy at step 350: 0.9495\n",
      "Accuracy at step 360: 0.951\n",
      "Accuracy at step 370: 0.9497\n",
      "Accuracy at step 380: 0.9492\n",
      "Accuracy at step 390: 0.9502\n",
      "Adding run metadata for 399\n",
      "Accuracy at step 400: 0.9468\n",
      "Accuracy at step 410: 0.9497\n",
      "Accuracy at step 420: 0.9537\n",
      "Accuracy at step 430: 0.9472\n",
      "Accuracy at step 440: 0.949\n",
      "Accuracy at step 450: 0.9507\n",
      "Accuracy at step 460: 0.9497\n",
      "Accuracy at step 470: 0.9528\n",
      "Accuracy at step 480: 0.9579\n",
      "Accuracy at step 490: 0.9573\n",
      "Adding run metadata for 499\n",
      "Accuracy at step 500: 0.9536\n",
      "Accuracy at step 510: 0.9583\n",
      "Accuracy at step 520: 0.9582\n",
      "Accuracy at step 530: 0.9542\n",
      "Accuracy at step 540: 0.9505\n",
      "Accuracy at step 550: 0.9566\n",
      "Accuracy at step 560: 0.959\n",
      "Accuracy at step 570: 0.9589\n",
      "Accuracy at step 580: 0.9574\n",
      "Accuracy at step 590: 0.9605\n",
      "Adding run metadata for 599\n",
      "Accuracy at step 600: 0.9587\n",
      "Accuracy at step 610: 0.9599\n",
      "Accuracy at step 620: 0.9611\n",
      "Accuracy at step 630: 0.9611\n",
      "Accuracy at step 640: 0.9598\n",
      "Accuracy at step 650: 0.9571\n",
      "Accuracy at step 660: 0.9623\n",
      "Accuracy at step 670: 0.9611\n",
      "Accuracy at step 680: 0.9618\n",
      "Accuracy at step 690: 0.9639\n",
      "Adding run metadata for 699\n",
      "Accuracy at step 700: 0.9637\n",
      "Accuracy at step 710: 0.9627\n",
      "Accuracy at step 720: 0.9618\n",
      "Accuracy at step 730: 0.9637\n",
      "Accuracy at step 740: 0.9638\n",
      "Accuracy at step 750: 0.9639\n",
      "Accuracy at step 760: 0.9645\n",
      "Accuracy at step 770: 0.9652\n",
      "Accuracy at step 780: 0.9667\n",
      "Accuracy at step 790: 0.9674\n",
      "Adding run metadata for 799\n",
      "Accuracy at step 800: 0.9678\n",
      "Accuracy at step 810: 0.9682\n",
      "Accuracy at step 820: 0.9658\n",
      "Accuracy at step 830: 0.9662\n",
      "Accuracy at step 840: 0.9632\n",
      "Accuracy at step 850: 0.9676\n",
      "Accuracy at step 860: 0.9659\n",
      "Accuracy at step 870: 0.9661\n",
      "Accuracy at step 880: 0.9627\n",
      "Accuracy at step 890: 0.9683\n",
      "Adding run metadata for 899\n",
      "Accuracy at step 900: 0.9685\n",
      "Accuracy at step 910: 0.9687\n",
      "Accuracy at step 920: 0.967\n",
      "Accuracy at step 930: 0.9689\n",
      "Accuracy at step 940: 0.9695\n",
      "Accuracy at step 950: 0.9695\n",
      "Accuracy at step 960: 0.9685\n",
      "Accuracy at step 970: 0.9691\n",
      "Accuracy at step 980: 0.9673\n",
      "Accuracy at step 990: 0.9689\n",
      "Adding run metadata for 999\n"
     ]
    },
    {
     "ename": "SystemExit",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "An exception has occurred, use %tb to see the full traceback.\n",
      "\u001b[0;31mSystemExit\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "# Copyright 2015 The TensorFlow Authors. All Rights Reserved.\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the 'License');\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     http://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an 'AS IS' BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License.\n",
    "# ==============================================================================\n",
    "\"\"\"A simple MNIST classifier which displays summaries in TensorBoard.\n",
    "This is an unimpressive MNIST model, but it is a good example of using\n",
    "tf.name_scope to make a graph legible in the TensorBoard graph explorer, and of\n",
    "naming summary tags so that they are grouped meaningfully in TensorBoard.\n",
    "It demonstrates the functionality of every TensorBoard dashboard.\n",
    "\"\"\"\n",
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import argparse\n",
    "import os\n",
    "import sys\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "tf.reset_default_graph()\n",
    "\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "FLAGS = None\n",
    "\n",
    "\n",
    "def train():\n",
    "  # Import data\n",
    "  mnist = input_data.read_data_sets('MNIST_data',\n",
    "                                    one_hot=True,\n",
    "                                    fake_data=FLAGS.fake_data)\n",
    "\n",
    "  sess = tf.InteractiveSession()\n",
    "  # Create a multilayer model.\n",
    "\n",
    "  # Input placeholders\n",
    "  with tf.name_scope('input'):\n",
    "    x = tf.placeholder(tf.float32, [None, 784], name='x-input')\n",
    "    y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')\n",
    "\n",
    "  with tf.name_scope('input_reshape'):\n",
    "    image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])\n",
    "    tf.summary.image('input', image_shaped_input, 10)\n",
    "\n",
    "  # We can't initialize these variables to 0 - the network will get stuck.\n",
    "  def weight_variable(shape):\n",
    "    \"\"\"Create a weight variable with appropriate initialization.\"\"\"\n",
    "    initial = tf.truncated_normal(shape, stddev=0.1)\n",
    "    return tf.Variable(initial)\n",
    "\n",
    "  def bias_variable(shape):\n",
    "    \"\"\"Create a bias variable with appropriate initialization.\"\"\"\n",
    "    initial = tf.constant(0.1, shape=shape)\n",
    "    return tf.Variable(initial)\n",
    "\n",
    "  def variable_summaries(var):\n",
    "    \"\"\"Attach a lot of summaries to a Tensor (for TensorBoard visualization).\"\"\"\n",
    "    with tf.name_scope('summaries'):\n",
    "      mean = tf.reduce_mean(var)\n",
    "      tf.summary.scalar('mean', mean)\n",
    "      with tf.name_scope('stddev'):\n",
    "        stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))\n",
    "      tf.summary.scalar('stddev', stddev)\n",
    "      tf.summary.scalar('max', tf.reduce_max(var))\n",
    "      tf.summary.scalar('min', tf.reduce_min(var))\n",
    "      tf.summary.histogram('histogram', var)\n",
    "\n",
    "  def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):\n",
    "    \"\"\"Reusable code for making a simple neural net layer.\n",
    "    It does a matrix multiply, bias add, and then uses ReLU to nonlinearize.\n",
    "    It also sets up name scoping so that the resultant graph is easy to read,\n",
    "    and adds a number of summary ops.\n",
    "    \"\"\"\n",
    "    # Adding a name scope ensures logical grouping of the layers in the graph.\n",
    "    with tf.name_scope(layer_name):\n",
    "      # This Variable will hold the state of the weights for the layer\n",
    "      with tf.name_scope('weights'):\n",
    "        weights = weight_variable([input_dim, output_dim])\n",
    "        variable_summaries(weights)\n",
    "      with tf.name_scope('biases'):\n",
    "        biases = bias_variable([output_dim])\n",
    "        variable_summaries(biases)\n",
    "      with tf.name_scope('Wx_plus_b'):\n",
    "        preactivate = tf.matmul(input_tensor, weights) + biases\n",
    "        tf.summary.histogram('pre_activations', preactivate)\n",
    "      activations = act(preactivate, name='activation')\n",
    "      tf.summary.histogram('activations', activations)\n",
    "      return activations\n",
    "\n",
    "  hidden1 = nn_layer(x, 784, 500, 'layer1')\n",
    "\n",
    "  with tf.name_scope('dropout'):\n",
    "    keep_prob = tf.placeholder(tf.float32)\n",
    "    tf.summary.scalar('dropout_keep_probability', keep_prob)\n",
    "    dropped = tf.nn.dropout(hidden1, keep_prob)\n",
    "\n",
    "  # Do not apply softmax activation yet, see below.\n",
    "  y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)\n",
    "\n",
    "  with tf.name_scope('cross_entropy'):\n",
    "    # The raw formulation of cross-entropy,\n",
    "    #\n",
    "    # tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.softmax(y)),\n",
    "    #                               reduction_indices=[1]))\n",
    "    #\n",
    "    # can be numerically unstable.\n",
    "    #\n",
    "    # So here we use tf.nn.softmax_cross_entropy_with_logits on the\n",
    "    # raw outputs of the nn_layer above, and then average across\n",
    "    # the batch.\n",
    "    diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)\n",
    "    with tf.name_scope('total'):\n",
    "      cross_entropy = tf.reduce_mean(diff)\n",
    "  tf.summary.scalar('cross_entropy', cross_entropy)\n",
    "\n",
    "  with tf.name_scope('train'):\n",
    "    train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(\n",
    "        cross_entropy)\n",
    "\n",
    "  with tf.name_scope('accuracy'):\n",
    "    with tf.name_scope('correct_prediction'):\n",
    "      correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))\n",
    "    with tf.name_scope('accuracy'):\n",
    "      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "  tf.summary.scalar('accuracy', accuracy)\n",
    "\n",
    "  # Merge all the summaries and write them out to\n",
    "  # /tmp/tensorflow/mnist/logs/mnist_with_summaries (by default)\n",
    "  merged = tf.summary.merge_all()\n",
    "  train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)\n",
    "  test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')\n",
    "  tf.global_variables_initializer().run()\n",
    "\n",
    "  # Train the model, and also write summaries.\n",
    "  # Every 10th step, measure test-set accuracy, and write test summaries\n",
    "  # All other steps, run train_step on training data, & add training summaries\n",
    "\n",
    "  def feed_dict(train):\n",
    "    \"\"\"Make a TensorFlow feed_dict: maps data onto Tensor placeholders.\"\"\"\n",
    "    if train or FLAGS.fake_data:\n",
    "      xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)\n",
    "      k = FLAGS.dropout\n",
    "    else:\n",
    "      xs, ys = mnist.test.images, mnist.test.labels\n",
    "      k = 1.0\n",
    "    return {x: xs, y_: ys, keep_prob: k}\n",
    "\n",
    "  for i in range(FLAGS.max_steps):\n",
    "    if i % 10 == 0:  # Record summaries and test-set accuracy\n",
    "      summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))\n",
    "      test_writer.add_summary(summary, i)\n",
    "      print('Accuracy at step %s: %s' % (i, acc))\n",
    "    else:  # Record train set summaries, and train\n",
    "      if i % 100 == 99:  # Record execution stats\n",
    "        run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\n",
    "        run_metadata = tf.RunMetadata()\n",
    "        summary, _ = sess.run([merged, train_step],\n",
    "                              feed_dict=feed_dict(True),\n",
    "                              options=run_options,\n",
    "                              run_metadata=run_metadata)\n",
    "        train_writer.add_run_metadata(run_metadata, 'step%03d' % i)\n",
    "        train_writer.add_summary(summary, i)\n",
    "        print('Adding run metadata for', i)\n",
    "      else:  # Record a summary\n",
    "        summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))\n",
    "        train_writer.add_summary(summary, i)\n",
    "  train_writer.close()\n",
    "  test_writer.close()\n",
    "\n",
    "\n",
    "def main(_):\n",
    "  if tf.gfile.Exists(FLAGS.log_dir):\n",
    "    tf.gfile.DeleteRecursively(FLAGS.log_dir)\n",
    "  tf.gfile.MakeDirs(FLAGS.log_dir)\n",
    "  train()\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "  parser = argparse.ArgumentParser()\n",
    "  parser.add_argument('--fake_data', nargs='?', const=True, type=bool,\n",
    "                      default=False,\n",
    "                      help='If true, uses fake data for unit testing.')\n",
    "  parser.add_argument('--max_steps', type=int, default=1000,\n",
    "                      help='Number of steps to run trainer.')\n",
    "  parser.add_argument('--learning_rate', type=float, default=0.001,\n",
    "                      help='Initial learning rate')\n",
    "  parser.add_argument('--dropout', type=float, default=0.9,\n",
    "                      help='Keep probability for training dropout.')\n",
    "  parser.add_argument(\n",
    "      '--data_dir',\n",
    "      type=str,\n",
    "      default=os.path.join(os.getenv('TEST_TMPDIR', 'tensorboard'),\n",
    "                           'MNIST_data'),\n",
    "      help='Directory for storing input data')\n",
    "  parser.add_argument(\n",
    "      '--log_dir',\n",
    "      type=str,\n",
    "      default=os.path.join(os.getenv('TEST_TMPDIR', 'tensorboard'),\n",
    "                           'mnist/logs'),\n",
    "      help='Summaries log directory')\n",
    "  FLAGS, unparsed = parser.parse_known_args()\n",
    "  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
