{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Tensorflow基础"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 一、什么是Tensorflow\n",
    "\n",
    "\n",
    "<img src=\"tensorflow-logo.gif\" >\n",
    "\n",
    ">Tensorflow是一个符号式编程的框架。由谷歌大脑开发，2015年开源，是目前业界用的最广泛的深度学习框架之一。该框架可广泛的用于各个终端，服务器端，移动端和嵌入式端等。\n",
    "\n",
    "一个Tensorflow程序通常包含两个部分：\n",
    "\n",
    "- 构建计算图\n",
    "- 执行计算图\n",
    "\n",
    "下面来看一个最简单的Tensorflow程序的例子"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[4.0]\n"
     ]
    }
   ],
   "source": [
    "import warnings\n",
    "warnings.filterwarnings('ignore')\n",
    "# 导入tensorflow包\n",
    "import tensorflow as tf\n",
    "# 构建计算图\n",
    "g1 = tf.get_default_graph()\n",
    "w = tf.constant(2.)\n",
    "y = w+2\n",
    "# 加载会话，执行计算图\n",
    "with tf.Session(graph = g1) as sess:\n",
    "    print(sess.run([y]))\n",
    "# 清空图\n",
    "tf.reset_default_graph()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 二、Tensorflow中的基本概念\n",
    "\n",
    "Tensorflow中有一些基本概念必须掌握清楚：\n",
    "\n",
    "- 图(graph)\n",
    "\n",
    "- 会话(session)\n",
    "\n",
    "- 操作(op)\n",
    "\n",
    "- 张量(tensor)\n",
    "\n",
    "- 变量(variable)\n",
    "\n",
    "- 占位符(placeholder)\n",
    "\n",
    "- 计算路径\n",
    "\n",
    "- tf.assgin"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1 什么是图(graph)\n",
    "\n",
    "图由节点和边组成，需要注意的是这个图的概念和和理论上的计算图不一样。在Tensorflow中边表示流动的方向，节点表示**张量**和**操作**。张量和操作的概念在后面会进一步讲解。（课上讲的计算图，边表示操作，节点表示变量）\n",
    "\n",
    ">注意：计算图只包含操作，不包含结果(没有实际的运算过程)\n",
    "\n",
    "![tesnor-flow](tensors_flowing.gif)\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "当你打开Tensorflow的时候，tf会自动为你分配一个默认的图。你所有构件图的操作都会在这个默认的图上进行操作。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 默认的图上进行操作\n",
    "g0 = tf.get_default_graph()\n",
    "# 这是图的一个构件\n",
    "x0 = tf.Variable(1)\n",
    "# 查看这个图中的构件属不属于这个图\n",
    "x0.graph is g0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "False\n",
      "True\n"
     ]
    }
   ],
   "source": [
    "# 在不同的图上进行不同的操作\n",
    "g1 = tf.Graph()\n",
    "g2 = tf.Graph()\n",
    "# 在g1这个图上进行操作\n",
    "with g1.as_default():\n",
    "    x1 = tf.Variable(1)   \n",
    "# 在g2这个图上进行操作\n",
    "with g2.as_default():\n",
    "    x2 = tf.Variable(1)\n",
    "print(x1.graph is g2)\n",
    "print(x2.graph is g2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "有时候需要查看一个图上有哪些操作节点："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<tf.Operation 'Variable/initial_value' type=Const>,\n",
       " <tf.Operation 'Variable' type=VariableV2>,\n",
       " <tf.Operation 'Variable/Assign' type=Assign>,\n",
       " <tf.Operation 'Variable/read' type=Identity>]"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 查看某一个图上的操作节点\n",
    "g1.get_operations()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在构建图的时候，由于重复构建操作导致图出错，所以在构建图的时候一定记得对默认的图进行清空。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "g_now = tf.get_default_graph()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<tf.Operation 'Variable/initial_value' type=Const>,\n",
       " <tf.Operation 'Variable' type=VariableV2>,\n",
       " <tf.Operation 'Variable/Assign' type=Assign>,\n",
       " <tf.Operation 'Variable/read' type=Identity>]"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "g_now.get_operations()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.reset_default_graph()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "g_now = tf.get_default_graph()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "g_now.get_operations()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 扩展\n",
    "> Tensorflow是一种静态图的深度学习框架，Pytorch是一种动态图的深度学习框架。Tensorflow2.0引入了动态图机制，未来的框架是即可静态图也可动态图，两者可相互切换。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.2 什么是会话(session)\n",
    "\n",
    "会话的作用是处理内容和优化，使我们能够实际执行计算图指定的计算。\n",
    "计算图是要执行的计算模版，会话通过分配计算资源来执行计算图的计算。\n",
    "\n",
    ">图的构建是几乎不占资源的，但是会话会占用很多资源\n",
    "\n",
    "来看一个简单的例子"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[5]\n",
      "[15]\n",
      "5\n"
     ]
    }
   ],
   "source": [
    "# 构建图（在默认的图上构建）\n",
    "w = tf.constant(3)\n",
    "x = w+2\n",
    "y = x+5\n",
    "z = x*3\n",
    "# 执行会话\n",
    "with tf.Session(graph=tf.get_default_graph()) as sess:\n",
    "    print(sess.run([x]))\n",
    "    print(sess.run([z])) \n",
    "    print(x.eval())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"1.png\" >"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 注意：\n",
    "\n",
    "1. eval( )等价于sess.run( )。\n",
    "2. tensorflow会自动检测依赖关系。\n",
    "3. 除了variable变量其余计算结果每次计算完后会释放。variable在session执行完后释放。\n",
    "4. 重复计算的问题。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[5, 15]\n"
     ]
    }
   ],
   "source": [
    "# 解决重复计算的问题\n",
    "# 构建图\n",
    "w = tf.constant(3)\n",
    "x = w+2\n",
    "y = x+5\n",
    "z = x*3\n",
    "# 执行会话\n",
    "with tf.Session(graph=tf.get_default_graph()) as sess:\n",
    "    print(sess.run([x,z]))\n",
    "    #print(sess.run[x])\n",
    "    #print(sess.run(z))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "每次使用上下文管理器太麻烦，于是我们有互动的会话，互动的会话就像ipython一样，实时反馈。实时反馈的目的是为了简单进行调动"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "5\n",
      "5\n"
     ]
    }
   ],
   "source": [
    "# 注意需要手动关闭\n",
    "sess = tf.InteractiveSession()\n",
    "print(x.eval())\n",
    "print(sess.run(x))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "sess.close()\n",
    "# print(sess.run([x]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.3 什么是张量(tensor)\n",
    "\n",
    "操作的输入和输出就是张量，Tensorflow直观翻译，就是张量流动的意思。\n",
    "\n",
    "- 标量(scalar)\n",
    "- 向量(vector)\n",
    "- 矩阵(matrix)\n",
    "- 张量(tensor)\n",
    "\n",
    "在tensorflow中张量主要有三个来源：\n",
    "\n",
    "- constant\n",
    "- variable\n",
    "- placeholder\n",
    "\n",
    "这里我们只讨论contant，后面两种在后面小节会详细讨论。\n",
    "\n",
    "`constant`的生存周期在会话内。\n",
    "\n",
    "`tf.constant`生成常量的意思（常量意味着不可变）。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[array([[ 0.,  1.,  2.,  3.],\n",
      "       [ 4.,  5.,  6.,  7.],\n",
      "       [ 8.,  9., 10., 11.]], dtype=float32)]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "a = tf.constant(np.arange(12).reshape(3,4),dtype=tf.float32)\n",
    "with tf.Session(graph=tf.get_default_graph()) as sess:\n",
    "    print(sess.run([a]))\n",
    "tf.reset_default_graph()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[array([[[0., 0., 0., 0., 0., 0.],\n",
      "        [0., 0., 0., 0., 0., 0.]],\n",
      "\n",
      "       [[0., 0., 0., 0., 0., 0.],\n",
      "        [0., 0., 0., 0., 0., 0.]],\n",
      "\n",
      "       [[0., 0., 0., 0., 0., 0.],\n",
      "        [0., 0., 0., 0., 0., 0.]]], dtype=float32)]\n"
     ]
    }
   ],
   "source": [
    "a = tf.constant(0.0,shape=(3,2,6),dtype=tf.float32)\n",
    "with tf.Session() as sess:\n",
    "    print(sess.run([a]))\n",
    "tf.reset_default_graph()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.4 什么是变量(variable)\n",
    "\n",
    "在神经网络中，我们需要一种能充当神经网络参数的角色。他可以被保存，也可被更新。这种角色称之为变量。变量也是一种张量。variable这种张量在session中会一直保持，不会被释放，可以被改变，直到session被关闭。\n",
    "\n",
    "> 注意：变量一定需要初始化。（神经网络参数也需要初始化！）\n",
    "\n",
    "变量的初始化主要使用下面的代码，这样就不需要一个一个的初始化了。\n",
    "\n",
    "```python\n",
    "# 这个实际上是一个op包含了所有的变量。\n",
    "......\n",
    "init = tf.global_variables_initializer()\n",
    "......\n",
    "sess.run(init)\n",
    "```\n",
    "\n",
    "- 其中`init`构建在图中。\n",
    "- sess.run(init)在session中执行。\n",
    "\n",
    "---"
   ]
  },
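  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The initialization pattern above can be made concrete with a minimal, self-contained sketch (the variable names `v` and `w` are purely illustrative): `init` is a single op that initializes every variable, and it must be run before any variable is read.\n",
    "\n",
    "```python\n",
    "v = tf.Variable(3.0, name='v')            # built into the graph, not yet initialized\n",
    "w = tf.Variable(4.0, name='w')\n",
    "init = tf.global_variables_initializer()  # one op that initializes all variables\n",
    "with tf.Session() as sess:\n",
    "    sess.run(init)                        # must run before reading any variable\n",
    "    print(sess.run([v, w]))               # [3.0, 4.0]\n",
    "tf.reset_default_graph()\n",
    "```"
   ]
  },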
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在Tensorflow中主要通过`tf.Variable`和`tf.get_variable`两个接口来实现变量。两者有很大的区别，建议大家尽量使用`tf.get_variable`\n",
    "\n",
    "### tf.Variable的使用\n",
    "\n",
    "**1.** 每次调用得到的都是不同的变量，即使使用了相同的变量名，在底层实现的时候还是会为变量创建不同的别名。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /Users/wanjun/anaconda/envs/python36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Colocations handled automatically by placer.\n",
      "var:0 [0.6459048]\n",
      "var_1:0 [2.]\n"
     ]
    }
   ],
   "source": [
    "var1 = tf.Variable(tf.random_uniform([1], -1.0, 1.0),name='var',dtype=tf.float32)\n",
    "var2 = tf.Variable(initial_value=[2],name='var',dtype=tf.float32)\n",
    "init = tf.global_variables_initializer()\n",
    "with tf.Session() as sess:\n",
    "    sess.run(init)\n",
    "    print(var1.name, sess.run(var1))\n",
    "    print(var2.name, sess.run(var2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**2.** 会受`tf.name_scope`环境的影响，即会在前面加上`name_scope`的空间前缀。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "var_b_scope/var:0 [2.]\n",
      "var_b_scope/var_1:0 [2.]\n",
      "var_a_scope/var:0 [2.]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "with tf.name_scope('var_b_scope'):\n",
    "    var1 = tf.Variable(name='var', initial_value=[2], dtype=tf.float32)\n",
    "    var2 = tf.Variable(name='var', initial_value=[2], dtype=tf.float32)\n",
    "with tf.name_scope('var_a_scope'):\n",
    "    var3 = tf.Variable(name='var', initial_value=[2], dtype=tf.float32)\n",
    "init = tf.global_variables_initializer()\n",
    "with tf.Session() as sess:\n",
    "    sess.run(init)\n",
    "    print(var1.name, sess.run(var1))\n",
    "    print(var2.name, sess.run(var2))\n",
    "    print(var3.name, sess.run(var3))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**3.** `Variable()`创建时直接指定初始化的方式，还可以把其他变量的初始值作为初始值。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "var2 = tf.Variable(var1.initialized_value())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### tf.get_variable的使用"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**1.** 只会创建一个同名变量，如果想共享变量，需指定`reuse=True`，否则多次创建会报错，使用`reuse=True`（第一次创建的时候不用，后面共享的时候声明）,可以动态的修改某个`scope`的共享属性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[array([-0.43011189], dtype=float32)]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "\n",
    "def func(x):\n",
    "    weight = tf.get_variable(name = \"weight\",initializer = tf.random_normal([1]))  \n",
    "    bias = tf.get_variable(name=\"bias\",initializer = tf.zeros([1]))  \n",
    "    return tf.add(tf.multiply(weight, x), bias)\n",
    "\n",
    "result1 = func(1)\n",
    "#result2 = func(2)\n",
    "init = tf.global_variables_initializer()\n",
    "with tf.Session() as sess:\n",
    "    sess.run(init)\n",
    "    print(sess.run([result1]))\n",
    "   # print(sess.run(result2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[array([-1.785076], dtype=float32)]\n",
      "[array([-3.570152], dtype=float32)]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "def func(x,reuse):\n",
    "    with tf.variable_scope('neuron',reuse=reuse):\n",
    "        weight = tf.get_variable(name = \"weight\",initializer = tf.random_normal([1]))  \n",
    "        bias = tf.get_variable(name=\"bias\",initializer = tf.zeros([1]))  \n",
    "    return tf.add(tf.multiply(weight, x), bias)\n",
    "result1 = func(1,reuse=False)\n",
    "result2 = func(2,reuse=True)\n",
    "init = tf.global_variables_initializer()\n",
    "with tf.Session() as sess:\n",
    "    sess.run(init)\n",
    "    print(sess.run([result1]))\n",
    "    print(sess.run([result2]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**2.** 不受`with tf.name_scope`的影响(注：是`name_scope`，不是`variable_scope`，`tf.Variable`和`tf.get_variable`都会受`variable_scope`影响）)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "var:0 [0.6803905]\n",
      "var1:0 [0.02682829]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "with tf.name_scope('var_a_scope'):\n",
    "    var1 = tf.get_variable(name='var', shape=[1], dtype=tf.float32)\n",
    "    var2 = tf.get_variable(name='var1', shape=[1], dtype=tf.float32)\n",
    "with tf.Session() as sess:\n",
    "    sess.run(tf.global_variables_initializer())\n",
    "    print(var1.name, sess.run(var1))\n",
    "    print(var2.name, sess.run(var2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**3.** 初始化方法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [],
   "source": [
    "conv1_weights = tf.get_variable(name=\"conv1_weights\", shape=[5, 5, 3, 3], dtype=tf.float32, initializer=tf.truncated_normal_initializer())\n",
    "conv1_biases = tf.get_variable(name='conv1_biases', shape=[3], dtype=tf.float32, initializer=tf.zeros_initializer())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**4.** `with tf.variable_scope('scope_name\")`会进行“累加”，每调用一次就会给里面的所有变量添加一次前缀，叠加顺序是外层先调用的在前，后调用的在后"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "image_filters/scope_a/conv1_weights:0\n",
      "image_filters/scope_a/Relu:0\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "def my_image_filter(input_images):\n",
    "    with tf.variable_scope('scope_a'):\n",
    "        conv1_weights = tf.get_variable(name=\"conv1_weights\", shape=[5, 5, 3, 3], dtype=tf.float32, \\\n",
    "                                        initializer=tf.truncated_normal_initializer())\n",
    "        conv1_biases = tf.get_variable(name='conv1_biases', shape=[3], dtype=tf.float32, \\\n",
    "                                        initializer=tf.zeros_initializer())\n",
    "        conv1 = tf.nn.conv2d(input_images, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')\n",
    "        print(conv1_weights.name)\n",
    "        return  tf.nn.relu(conv1 + conv1_biases)\n",
    "\n",
    "image1 = np.random.random(3*5*5).reshape(1, 5, 5, 3).astype(np.float32)\n",
    "image2 = np.random.random(3*5*5).reshape(1, 5, 5, 3).astype(np.float32)\n",
    "with tf.variable_scope(\"image_filters\") as scope:\n",
    "    result1 = my_image_filter(image1)\n",
    "    print(result1.name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.5 什么是占位符(placeholder)\n",
    "\n",
    "我们需要让计算图能接受外面来的数据，如何接受外面的数据就是通过占位符实现的。简单点理解就是神经网络需要输入的数据就是由占位符来输入的。为什么叫占位符，因为`BatchSize`是占了个位置，占好位置后，输入的数据在不断变化。\n",
    "> 注意：占位符也是一种`tensor`。输入的数据我们一般输入`numpy`的`ndarray`。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[1.7917175 1.6017565 1.9925709]\n",
      " [1.2930344 1.2192346 1.210482 ]\n",
      " [1.4833066 1.4074428 1.2816691]\n",
      " [1.4580355 1.621912  1.4550905]\n",
      " [1.6928124 1.7759385 1.3649117]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "ph = tf.placeholder(dtype=tf.float32,shape =(None,3))\n",
    "add_op = tf.add(ph,1)\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    #print(sess.run(ph,feed_dict={ph:np.random.rand(4,3)}))\n",
    "    print(sess.run(add_op,feed_dict={ph:np.random.rand(5,3)}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.6 什么是计算路径\n",
    "\n",
    "计算路径是指如果计算的节点具有依赖关系，那么我们就会计算这些节点，沿着父节点找。\n",
    "TensorFlow仅通过必需的节点自动进行计算这一事实是该框架的一个巨大优势。如果计算图非常大并且有许多不必要的节点，那么它可以节省大量调用的运行时间。它允许我们构建大型的多用途计算图，这些计算图使用单个共享的核心节点集合，并根据所采取的不同计算路径去做不同的事情"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "18\n"
     ]
    }
   ],
   "source": [
    "# tensorflow会自动寻找依赖关系\n",
    "# 如果去掉feed_dict会报错\n",
    "tf.reset_default_graph()\n",
    "\n",
    "ph = tf.placeholder(tf.int32)\n",
    "\n",
    "three_node = tf.constant(3)\n",
    "\n",
    "sum_node = ph + three_node\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    #print(sess.run(three_node))\n",
    "    print(sess.run(sum_node,feed_dict={ph:15}))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.7、一个重要的操作tf.assign\n",
    "\n",
    "\n",
    "`tf.assign(target, value)`表示把`value`值赋值给`target`。`target`必须是一个可变的`tensor`(variable)可以没被初始化。`value`必须要有和`target`相同的数据类型和形状。\n",
    "\n",
    "思考一下如下的操作需要用到`tf.assign`吗？如果要用，对谁用？\n",
    "\n",
    "$\\theta = \\theta - \\beta \\nabla L(\\theta)$\n",
    "\n",
    "---"
   ]
  },
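  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of how `tf.assign` fits the gradient-descent update above (a toy loss with a hand-coded gradient, purely illustrative): `theta` is the assignment target, since the update overwrites the parameters in place.\n",
    "\n",
    "```python\n",
    "beta = 0.1\n",
    "theta = tf.Variable(5.0)\n",
    "grad = 2.0 * theta                            # gradient of L(theta) = theta^2\n",
    "step = tf.assign(theta, theta - beta * grad)  # theta <- theta - beta * grad\n",
    "with tf.Session() as sess:\n",
    "    sess.run(tf.global_variables_initializer())\n",
    "    print(sess.run(step))                     # 4.0, i.e. 5 - 0.1*2*5\n",
    "tf.reset_default_graph()\n",
    "```"
   ]
  },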
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 三、实现线性回归\n",
    "\n",
    "线性回归可以看作是最简单的神经网络。我们使用4种方法来实现一个线性回归。\n",
    "\n",
    "- 解析法。\n",
    "- 人工求梯度。\n",
    "- 使用低阶API求梯度。\n",
    "- 使用高阶API求梯度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[-3.74659576e+01]\n",
      " [ 4.35208052e-01]\n",
      " [ 9.34183039e-03]\n",
      " [-1.05619654e-01]\n",
      " [ 6.38267040e-01]\n",
      " [-4.28281601e-06]\n",
      " [-3.77140474e-03]\n",
      " [-4.26884502e-01]\n",
      " [-4.40567464e-01]]\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/wanjun/anaconda/envs/python36/lib/python3.6/site-packages/sklearn/externals/joblib/__init__.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.\n",
      "  warnings.warn(msg, category=DeprecationWarning)\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "import numpy as np\n",
    "from sklearn.datasets import fetch_california_housing\n",
    "\n",
    "housing = fetch_california_housing()\n",
    "m, n = housing.data.shape\n",
    "housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]\n",
    "\n",
    "X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name=\"X\")\n",
    "y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name=\"y\")\n",
    "XT = tf.transpose(X)\n",
    "theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)\n",
    "with tf.Session() as sess: \n",
    "    theta_value = theta.eval()\n",
    "    print(theta_value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0 MSE = 10.076755\n",
      "Epoch 100 MSE = 0.7233888\n",
      "Epoch 200 MSE = 0.5737729\n",
      "Epoch 300 MSE = 0.5593785\n",
      "Epoch 400 MSE = 0.5504173\n",
      "Epoch 500 MSE = 0.5438431\n",
      "Epoch 600 MSE = 0.53898513\n",
      "Epoch 700 MSE = 0.5353839\n",
      "Epoch 800 MSE = 0.5327053\n",
      "Epoch 900 MSE = 0.53070563\n",
      "[[ 2.0685523 ]\n",
      " [ 0.8501739 ]\n",
      " [ 0.14335446]\n",
      " [-0.2655531 ]\n",
      " [ 0.28865078]\n",
      " [ 0.00418023]\n",
      " [-0.04184572]\n",
      " [-0.70123595]\n",
      " [-0.67240614]]\n"
     ]
    }
   ],
   "source": [
    "import time\n",
    "tf.reset_default_graph()\n",
    "\n",
    "n_epochs = 1000\n",
    "learning_rate = 0.01\n",
    "\n",
    "data = housing.data\n",
    "scaled_housing_data_plus_bias = (data-np.mean(data,axis=0))/np.std(data,axis=0)\n",
    "scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data_plus_bias]\n",
    "\n",
    "X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name=\"X\")\n",
    "y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name=\"y\")\n",
    "\n",
    "\n",
    "\n",
    "theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0), name=\"theta\")\n",
    "y_pred = tf.matmul(X, theta, name=\"predictions\")\n",
    "\n",
    "error = y_pred - y\n",
    "mse = tf.reduce_mean(tf.square(error), name=\"mse\")\n",
    "\n",
    "\n",
    "gradients = 2./m * tf.matmul(tf.transpose(X), error)\n",
    "\n",
    "training_op = tf.assign(theta, theta - learning_rate * gradients)\n",
    "\n",
    "init = tf.global_variables_initializer() \n",
    "\n",
    "with tf.Session() as sess:\n",
    "    sess.run(init)\n",
    "    for epoch in range(n_epochs):\n",
    "        if epoch%100==0:\n",
    "            print(\"Epoch\", epoch, \"MSE =\", mse.eval())\n",
    "            time.sleep(1)\n",
    "        sess.run(training_op)\n",
    "    best_theta = theta.eval()\n",
    "    print(best_theta)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0 MSE = 7.4708076\n",
      "Epoch 100 MSE = 0.8945591\n",
      "Epoch 200 MSE = 0.72462505\n",
      "Epoch 300 MSE = 0.66868955\n",
      "Epoch 400 MSE = 0.629366\n",
      "Epoch 500 MSE = 0.60087246\n",
      "Epoch 600 MSE = 0.5801903\n",
      "Epoch 700 MSE = 0.56516325\n",
      "Epoch 800 MSE = 0.55423355\n",
      "Epoch 900 MSE = 0.5462741\n",
      "[[ 2.0685525 ]\n",
      " [ 0.8148505 ]\n",
      " [ 0.1622447 ]\n",
      " [-0.1510959 ]\n",
      " [ 0.1730188 ]\n",
      " [ 0.01151067]\n",
      " [-0.04269051]\n",
      " [-0.5967006 ]\n",
      " [-0.5614512 ]]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "n_epochs = 1000\n",
    "learning_rate = 0.01\n",
    "\n",
    "data = housing.data\n",
    "scaled_housing_data_plus_bias = (data-np.mean(data,axis=0))/np.std(data,axis=0)\n",
    "scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data_plus_bias]\n",
    "X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name=\"X\")\n",
    "y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name=\"y\")\n",
    "\n",
    "theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0), name=\"theta\")\n",
    "y_pred = tf.matmul(X, theta, name=\"predictions\")\n",
    "\n",
    "error = y_pred - y\n",
    "mse = tf.reduce_mean(tf.square(error), name=\"mse\")\n",
    "\n",
    "gradients = tf.gradients(mse,theta)\n",
    "\n",
    "training_op = tf.assign(theta, theta - learning_rate * gradients[0])\n",
    "init = tf.global_variables_initializer() \n",
    "with tf.Session() as sess:\n",
    "    sess.run(init)\n",
    "    for epoch in range(n_epochs):\n",
    "        if epoch%100==0:\n",
    "            print(\"Epoch\", epoch, \"MSE =\", mse.eval())\n",
    "            time.sleep(1)\n",
    "        sess.run(training_op)\n",
    "    best_theta = theta.eval()\n",
    "    print(best_theta)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0 MSE = 8.762407\n",
      "Epoch 100 MSE = 0.5314667\n",
      "Epoch 200 MSE = 0.52459747\n",
      "Epoch 300 MSE = 0.5243461\n",
      "Epoch 400 MSE = 0.5243242\n",
      "Epoch 500 MSE = 0.52432144\n",
      "Epoch 600 MSE = 0.524321\n",
      "Epoch 700 MSE = 0.52432096\n",
      "Epoch 800 MSE = 0.524321\n",
      "Epoch 900 MSE = 0.524321\n",
      "[[ 2.0685577 ]\n",
      " [ 0.8296145 ]\n",
      " [ 0.11875065]\n",
      " [-0.2655177 ]\n",
      " [ 0.30568862]\n",
      " [-0.00450329]\n",
      " [-0.03932609]\n",
      " [-0.8998968 ]\n",
      " [-0.8705517 ]]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "n_epochs = 1000\n",
    "learning_rate = 0.01\n",
    "\n",
    "data = housing.data\n",
    "scaled_housing_data_plus_bias = (data-np.mean(data,axis=0))/np.std(data,axis=0)\n",
    "scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data_plus_bias]\n",
    "X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name=\"X\")\n",
    "y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name=\"y\")\n",
    "theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0), name=\"theta\")\n",
    "y_pred = tf.matmul(X, theta, name=\"predictions\")\n",
    "error = y_pred - y\n",
    "mse = tf.reduce_mean(tf.square(error), name=\"mse\")\n",
    "\n",
    "\n",
    "#optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\n",
    "optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,momentum=0.9)\n",
    "training_op = optimizer.minimize(mse)\n",
    "\n",
    "init = tf.global_variables_initializer() \n",
    "with tf.Session() as sess:\n",
    "    sess.run(init)\n",
    "    for epoch in range(n_epochs):\n",
    "        if epoch%100==0:\n",
    "            print(\"Epoch\", epoch, \"MSE =\", mse.eval())\n",
    "            time.sleep(1)\n",
    "        sess.run(training_op)\n",
    "    best_theta = theta.eval()\n",
    "    print(best_theta)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 四、保存和恢复模型\n",
    "\n",
    "在模型的参数学习过程中，我们需要根据情况保存模型。根据前面的讲解知道神经网络的参数存储在`variable`中，`variable`的参数在`session`关闭后就会释放。所以我们需要在`session`打开的时候保存模型的参数。\n",
    "\n",
    "保存模型参数类似于`checkpoint`(切片快照)。在迭代的过程中，选择某一次快照一下然后保存到硬盘中。\n",
    "\n",
    "> 注意：模型保存在硬盘中有四个文件"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 0.6585562 ]\n",
      " [-0.40244937]\n",
      " [ 0.7453103 ]]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "theta = tf.Variable(tf.random_uniform([3, 1], -1.0, 1.0), name=\"theta\")\n",
    "init = tf.global_variables_initializer()\n",
    "saver = tf.train.Saver()\n",
    "n_epochs = 1000\n",
    "with tf.Session() as sess: \n",
    "    sess.run(init)\n",
    "    for epoch in range(n_epochs):\n",
    "        # checkpoint every 100 epochs\n",
    "        if epoch % 100 == 0: \n",
    "            saver.save(sess, save_path=\"./model/my_model.ckpt\")\n",
    "    print(theta.eval())"
   ]
  },
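  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check (assuming the model was saved to `./model` by the cell above), the files the `Saver` wrote can be listed with plain Python:\n",
    "\n",
    "```python\n",
    "import os\n",
    "# the four checkpoint files: checkpoint, .data, .index, .meta\n",
    "print(sorted(os.listdir('./model')))\n",
    "```"
   ]
  },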
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们来查看一下保存的文件：\n",
    "\n",
    "<img src=\"checkpoint.png\" >\n",
    "\n",
    "文件中主要保存了两类东西：\n",
    "\n",
    "1. 计算图(保存在meta文件中)\n",
    "2. variable的参数(保存在data文件中)\n",
    "\n",
    "---\n",
    "\n",
    "所以我们从硬盘中加载回模型有两种方法：\n",
    "\n",
    "1. 复制之前的代码，生成一摸一样的计算图，然后加载参数。\n",
    "2. 加载meta文件，将计算图加载回来，然后加载参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /Users/wanjun/anaconda/envs/python36/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use standard file APIs to check for files with this prefix.\n",
      "INFO:tensorflow:Restoring parameters from ./model/my_model.ckpt\n",
      "[[ 0.6585562 ]\n",
      " [-0.40244937]\n",
      " [ 0.7453103 ]]\n"
     ]
    }
   ],
   "source": [
    "# 方法一\n",
    "tf.reset_default_graph()\n",
    "theta = tf.Variable(tf.random_uniform([3, 1], -1.0, 1.0), name=\"theta\")\n",
    "init = tf.global_variables_initializer()\n",
    "saver = tf.train.Saver()\n",
    "with tf.Session() as sess: \n",
    "    saver.restore(sess,save_path=\"./model/my_model.ckpt\")\n",
    "    print(theta.eval())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Restoring parameters from ./model/my_model.ckpt\n"
     ]
    }
   ],
   "source": [
    "# Method 2: import the graph from the .meta file, then restore the parameters\n",
    "tf.reset_default_graph()\n",
    "saver = tf.train.import_meta_graph('./model/my_model.ckpt.meta')\n",
    "with tf.Session() as sess:\n",
    "    saver.restore(sess,'./model/my_model.ckpt')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.1 tf.get_collection and tf.add_to_collection\n",
    "\n",
    "To conveniently retrieve specific operations after a graph is restored, we can use tf.add_to_collection and tf.get_collection."
   ]
  },
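  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, a collection is just a named, ordered list attached to the graph: tf.add_to_collection appends to it, and tf.get_collection returns the whole list. The pure-Python sketch below only illustrates these semantics; it is not TensorFlow's actual implementation:\n",
    "\n",
    "```python\n",
    "# Minimal sketch of collection semantics: a dict of named lists.\n",
    "collections = {}\n",
    "\n",
    "def add_to_collection(name, value):\n",
    "    # Appends preserve insertion order, which is why indexing the\n",
    "    # returned list (e.g. get_collection('my_op')[0]) is reliable.\n",
    "    collections.setdefault(name, []).append(value)\n",
    "\n",
    "def get_collection(name):\n",
    "    # An unknown name yields an empty list rather than an error.\n",
    "    return list(collections.get(name, []))\n",
    "\n",
    "add_to_collection('my_op', 'theta:0')\n",
    "print(get_collection('my_op'))   # ['theta:0']\n",
    "print(get_collection('other'))   # []\n",
    "```"
   ]
  },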
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 0.86878633]\n",
      " [-0.9685254 ]\n",
      " [-0.8158989 ]]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "theta = tf.Variable(tf.random_uniform([3, 1], -1.0, 1.0), name=\"theta\")\n",
    "tf.add_to_collection('my_op',theta)\n",
    "init = tf.global_variables_initializer()\n",
    "saver = tf.train.Saver()\n",
    "n_epochs = 1000\n",
    "with tf.Session() as sess: \n",
    "    sess.run(init)\n",
    "    for epoch in range(n_epochs):\n",
    "        # checkpoint every 100 epochs\n",
    "        if epoch % 100 == 0: \n",
    "            saver.save(sess, save_path=\"./model/my_model.ckpt\")\n",
    "    print(theta.eval())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Restoring parameters from ./model/my_model.ckpt\n",
      "[[ 0.86878633]\n",
      " [-0.9685254 ]\n",
      " [-0.8158989 ]]\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "saver = tf.train.import_meta_graph('./model/my_model.ckpt.meta')\n",
    "my_op = tf.get_collection('my_op')\n",
    "with tf.Session() as sess:\n",
    "    saver.restore(sess,'./model/my_model.ckpt')\n",
    "    print(sess.run(my_op[0]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[<tf.Tensor 'theta:0' shape=(3, 1) dtype=float32_ref>]"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "my_op"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 五、Monitoring with TensorBoard\n",
    "\n",
    "TensorBoard is a visualization tool that ships with TensorFlow for inspecting and debugging neural networks.\n",
    "\n",
    "The general workflow:\n",
    "\n",
    "1. In the graph you build, pick the nodes you want to summarize.\n",
    "2. Each summary op would otherwise need its own sess.run call, so for convenience merge them all into a single op (tf.summary.merge_all()).\n",
    "3. Run the merged summary op inside the session.\n",
    "4. Write the results to disk with tf.summary.FileWriter.\n",
    "5. Launch the viewer with tensorboard --logdir='path'."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.1 Summary ops\n",
    "\n",
    "---\n",
    "```python\n",
    "# Log a scalar such as loss or accuracy; TensorBoard plots it as a time series\n",
    "tf.summary.scalar(name, tensor)\n",
    "```\n",
    "---\n",
    "\n",
    "```python\n",
    "# Log a tensor as a histogram, e.g. to inspect the distributions of weights and biases\n",
    "tf.summary.histogram(name, tensor)\n",
    "```\n",
    "---\n",
    "\n",
    "```python\n",
    "# Merge summary ops; inputs is a list of summary ops.\n",
    "# merge combines a chosen subset, merge_all combines every summary in the graph.\n",
    "merge_some = tf.summary.merge(inputs, collections=None, name=None)\n",
    "merge_summary = tf.summary.merge_all(key=tf.GraphKeys.SUMMARIES)\n",
    "```\n",
    "---\n",
    "\n",
    "> Note: all of the ops above belong to the graph-construction phase."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.2 Writing summaries to disk\n",
    "\n",
    "---\n",
    "\n",
    "```python\n",
    "# Create a writer for the log directory;\n",
    "# this also writes the graph itself to the event file\n",
    "file_writer = tf.summary.FileWriter(logdir, graph, flush_secs)\n",
    "```\n",
    "---\n",
    "\n",
    "```python\n",
    "merge = sess.run(merge_some)\n",
    "file_writer.add_summary(merge, step)\n",
    "```\n",
    "\n",
    "Note that these two calls run inside the session:\n",
    "\n",
    "```python\n",
    "[...]\n",
    "for batch_index in range(n_batches):\n",
    "    X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)\n",
    "    if batch_index % 10 == 0:\n",
    "        summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})\n",
    "        step = epoch * n_batches + batch_index\n",
    "        file_writer.add_summary(summary_str, step)\n",
    "    sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n",
    "[...]\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "### 5.3 Running TensorBoard\n",
    "\n",
    "```bash\n",
    "# path is the log directory\n",
    "tensorboard --logdir='path'\n",
    "```\n",
    "\n",
    "```bash\n",
    "# If the port is already in use (TensorBoard defaults to 6006),\n",
    "# find the process holding it and kill it by its PID\n",
    "lsof -i:6006\n",
    "kill -9 <PID>\n",
    "```\n",
    "\n",
    "Then open the following address in a browser:\n",
    "\n",
    "```\n",
    "http://0.0.0.0:6006/ (or http://localhost:6006/)\n",
    "```\n",
    "\n",
    "As an aside, 6006 upside down reads \"goog\".\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /Users/wanjun/anaconda/envs/python36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Colocations handled automatically by placer.\n",
      "tf_logs/run-20190831063428/\n",
      "Epoch 0 MSE = 7.0744567\n",
      "Epoch 100 MSE = 0.7813135\n",
      "Epoch 200 MSE = 0.651434\n",
      "Epoch 300 MSE = 0.62424386\n",
      "Epoch 400 MSE = 0.60432595\n",
      "Epoch 500 MSE = 0.5886629\n",
      "Epoch 600 MSE = 0.5762598\n",
      "Epoch 700 MSE = 0.56639147\n",
      "Epoch 800 MSE = 0.5585053\n",
      "Epoch 900 MSE = 0.55217683\n",
      "[[ 2.0685527 ]\n",
      " [ 0.9574246 ]\n",
      " [ 0.16366452]\n",
      " [-0.46780983]\n",
      " [ 0.4557338 ]\n",
      " [ 0.01017649]\n",
      " [-0.04585057]\n",
      " [-0.4524668 ]\n",
      " [-0.4361016 ]]\n"
     ]
    }
   ],
   "source": [
    "# TensorBoard example\n",
    "import tensorflow as tf\n",
    "import numpy as np\n",
    "from datetime import datetime\n",
    "from sklearn.datasets import fetch_california_housing\n",
    "\n",
    "\n",
    "housing = fetch_california_housing()\n",
    "m, n = housing.data.shape\n",
    "\n",
    "tf.reset_default_graph()\n",
    "\n",
    "n_epochs = 1000\n",
    "learning_rate = 0.01\n",
    "\n",
    "data = housing.data\n",
    "# standardize the features, then prepend a bias column of ones\n",
    "scaled_housing_data = (data - np.mean(data, axis=0)) / np.std(data, axis=0)\n",
    "scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]\n",
    "X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name=\"X\")\n",
    "y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name=\"y\")\n",
    "\n",
    "theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0), name=\"theta\")\n",
    "y_pred = tf.matmul(X, theta, name=\"predictions\")\n",
    "error = y_pred - y\n",
    "mse = tf.reduce_mean(tf.square(error), name=\"mse\")\n",
    "\n",
    "# Define what to log\n",
    "tf.summary.scalar('mse',mse)\n",
    "tf.summary.histogram('theta',theta)\n",
    "\n",
    "# Merge all summary ops into one\n",
    "merge_summary=tf.summary.merge_all(key=tf.GraphKeys.SUMMARIES)\n",
    "\n",
    "# Build a unique, timestamped log directory\n",
    "now = datetime.utcnow().strftime(\"%Y%m%d%H%M%S\")\n",
    "root_logdir = \"tf_logs\"\n",
    "logdir = \"{}/run-{}/\".format(root_logdir, now)\n",
    "# Print the log directory so it can be passed to tensorboard --logdir\n",
    "print(logdir)\n",
    "\n",
    "optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\n",
    "training_op = optimizer.minimize(mse)\n",
    "\n",
    "init = tf.global_variables_initializer() \n",
    "with tf.Session() as sess:\n",
    "    \n",
    "    sess.run(init)\n",
    "    # Open the event-file writer (this also writes the graph)\n",
    "    file_writer=tf.summary.FileWriter(logdir,sess.graph)\n",
    "    for epoch in range(n_epochs):\n",
    "        if epoch%100==0:\n",
    "            print(\"Epoch\", epoch, \"MSE =\", mse.eval())\n",
    "        sess.run(training_op)\n",
    "        # Evaluate the merged summary op\n",
    "        summary_str = merge_summary.eval()\n",
    "        file_writer.add_summary(summary_str, epoch)\n",
    "    best_theta = theta.eval()\n",
    "    print(best_theta)\n",
    "file_writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 六、Building your first neural network\n",
    "\n",
    "Task: classify handwritten-digit images from the MNIST dataset:\n",
    "\n",
    "<img src=\"mnist.png\" style=\"zoom:50%\" >\n",
    "\n",
    "The input is an image like this one:\n",
    "\n",
    "<img src=\"0.png\" style=\"zoom:100%\" >\n",
    "\n",
    "which is equivalent to a matrix:\n",
    "\n",
    "<img src=\"0_matrix.png\" style=\"zoom:15%\" >"
   ]
  },
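  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the image-equals-matrix picture concrete, here is a small numpy sketch (using a made-up 28×28 array rather than a real MNIST digit): each image is a 28×28 grid of grayscale intensities, flattened into a 784-dimensional vector before being fed to the network, and each label is a 10-dimensional one-hot vector:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Stand-in 28x28 grayscale image with values in [0, 1]; real MNIST\n",
    "# images have exactly this shape before flattening.\n",
    "image = np.zeros((28, 28), dtype=np.float32)\n",
    "image[10:18, 13:15] = 1.0  # a crude vertical stroke, like a '1'\n",
    "\n",
    "flat = image.reshape(784)  # what a placeholder of shape (None, 784) expects\n",
    "print(flat.shape)          # (784,)\n",
    "\n",
    "# one-hot encoding of the label 1, as produced by one_hot=True\n",
    "label = np.eye(10, dtype=np.float32)[1]\n",
    "print(label)               # [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
    "```"
   ]
  },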
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracting MNIST_data/train-images-idx3-ubyte.gz\n",
      "Extracting MNIST_data/train-labels-idx1-ubyte.gz\n",
      "Extracting MNIST_data/t10k-images-idx3-ubyte.gz\n",
      "Extracting MNIST_data/t10k-labels-idx1-ubyte.gz\n",
      "epoch:1,train_loss:0.3265,test_acc:0.9439\n",
      "epoch:2,train_loss:0.2267,test_acc:0.9543\n",
      "epoch:3,train_loss:0.0779,test_acc:0.9606\n",
      "epoch:4,train_loss:0.0871,test_acc:0.9666\n",
      "epoch:5,train_loss:0.0362,test_acc:0.9668\n",
      "epoch:6,train_loss:0.1314,test_acc:0.9693\n",
      "epoch:7,train_loss:0.0297,test_acc:0.9705\n",
      "epoch:8,train_loss:0.0319,test_acc:0.9710\n",
      "epoch:9,train_loss:0.0348,test_acc:0.9725\n",
      "epoch:10,train_loss:0.0344,test_acc:0.9730\n",
      "epoch:11,train_loss:0.0269,test_acc:0.9727\n",
      "epoch:12,train_loss:0.0270,test_acc:0.9744\n",
      "epoch:13,train_loss:0.0081,test_acc:0.9711\n",
      "epoch:14,train_loss:0.0191,test_acc:0.9727\n",
      "epoch:15,train_loss:0.0168,test_acc:0.9706\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow.examples.tutorials.mnist import input_data\n",
    "\n",
    "\n",
    "tf.reset_default_graph()\n",
    "epochs = 15\n",
    "batch_size = 100\n",
    "total_sum = 0\n",
    "epoch = 0\n",
    "\n",
    "mnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n",
    "train_num = mnist.train.num_examples\n",
    "\n",
    "# placeholders for a batch of images and one-hot labels\n",
    "# (named input_x so as not to shadow the imported input_data module)\n",
    "input_x = tf.placeholder(tf.float32, shape=(None, 784))\n",
    "input_label = tf.placeholder(tf.float32, shape=(None, 10))\n",
    "\n",
    "w1 = tf.get_variable(shape=(784, 64), name='hidden_1_w')\n",
    "b1 = tf.get_variable(shape=(64,), initializer=tf.zeros_initializer(), name='hidden_1_b')\n",
    "\n",
    "w2 = tf.get_variable(shape=(64, 32), name='hidden_2_w')\n",
    "b2 = tf.get_variable(shape=(32,), initializer=tf.zeros_initializer(), name='hidden_2_b')\n",
    "\n",
    "w3 = tf.get_variable(shape=(32, 10), name='layer_output')\n",
    "\n",
    "# logits layer: 784 -> 64 -> 32 -> 10 with ReLU activations\n",
    "output = tf.matmul(tf.nn.relu(tf.matmul(tf.nn.relu(tf.matmul(input_x, w1) + b1), w2) + b2), w3)\n",
    "\n",
    "loss = tf.losses.softmax_cross_entropy(input_label, output)\n",
    "\n",
    "# opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)\n",
    "opt = tf.train.AdamOptimizer()\n",
    "\n",
    "train_op = opt.minimize(loss)\n",
    "\n",
    "# test-set evaluation: fraction of correctly predicted digits\n",
    "correct_pred = tf.equal(tf.argmax(input_label, axis=1), tf.argmax(output, axis=1))\n",
    "acc = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\n",
    "\n",
    "# register the tensors we will need after restoring the model\n",
    "tf.add_to_collection('my_op', input_x)\n",
    "tf.add_to_collection('my_op', output)\n",
    "tf.add_to_collection('my_op', loss)\n",
    "\n",
    "init = tf.global_variables_initializer()\n",
    "saver = tf.train.Saver()\n",
    "with tf.Session() as sess:\n",
    "    sess.run([init])\n",
    "    test_data = mnist.test.images\n",
    "    test_label = mnist.test.labels\n",
    "    while epoch < epochs:\n",
    "        data, label = mnist.train.next_batch(batch_size)\n",
    "        data = data.reshape(-1, 784)\n",
    "        total_sum += batch_size\n",
    "        sess.run([train_op], feed_dict={input_x: data, input_label: label})\n",
    "        if total_sum // train_num > epoch:\n",
    "            epoch = total_sum // train_num\n",
    "            loss_val = sess.run([loss], feed_dict={input_x: data, input_label: label})\n",
    "            acc_test = sess.run([acc], feed_dict={input_x: test_data, input_label: test_label})\n",
    "            saver.save(sess, save_path=\"./model/my_model.ckpt\")\n",
    "            print('epoch:{},train_loss:{:.4f},test_acc:{:.4f}'.format(epoch, loss_val[0], acc_test[0]))"
   ]
  },
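  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The loss used above, tf.losses.softmax_cross_entropy, applies a softmax to the logits and then takes the cross-entropy against the one-hot labels. Below is a numpy sketch of the same computation for a single example (the helper name softmax_cross_entropy here is ours, not a TensorFlow API):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax_cross_entropy(one_hot_label, logits):\n",
    "    # Softmax with the usual max-subtraction for numerical stability\n",
    "    z = logits - np.max(logits)\n",
    "    probs = np.exp(z) / np.sum(np.exp(z))\n",
    "    # Cross-entropy -sum(p_true * log(p_pred)); with a one-hot label\n",
    "    # this is just -log of the probability of the correct class.\n",
    "    return -np.sum(one_hot_label * np.log(probs))\n",
    "\n",
    "logits = np.array([2.0, 1.0, 0.1])\n",
    "label = np.array([1.0, 0.0, 0.0])  # correct class is index 0\n",
    "print(softmax_cross_entropy(label, logits))  # ~0.417\n",
    "```"
   ]
  },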
  {
   "cell_type": "code",
   "execution_count": 86,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<matplotlib.image.AxesImage at 0x1a4edb8978>"
      ]
     },
     "execution_count": 86,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAP8AAAD8CAYAAAC4nHJkAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAADOhJREFUeJzt3W+IXPW9x/HPx70NaFpITDANaa72Frl6yYO0LnrBi1rUkiuBGLSSKCUXStMHFSxEqMQHDUJBSvonjwpbXBqhtS2kvQkYtUsQbOEiRg3RNjaVujZr1qQhSg0iUfO9D/ZEtnHnN+PMmTmz+32/IOzM+Z4z58uQz5wzc/78HBECkM9FTTcAoBmEH0iK8ANJEX4gKcIPJEX4gaQIP5AU4QeSIvxAUv8yyJXZ5nRCoM8iwp3M19OW3/Y623+2/artB3p5LQCD5W7P7bc9IumopFslTUl6TtLmiPhTYRm2/ECfDWLLf62kVyPirxFxVtIvJW3o4fUADFAv4V8l6dis51PVtH9ie6vtg7YP9rAuADXr5Qe/uXYtPrZbHxFjksYkdvuBYdLLln9K0upZzz8n6Xhv7QAYlF7C/5ykK21/3vYiSZsk7aunLQD91vVuf0R8YPteSU9JGpE0HhF/rK0zAH3V9aG+rlbGd36g7wZykg+A+YvwA0kRfiApwg8kRfiBpAg/kBThB5Ii/EBShB9IivADSRF+ICnCDyRF+IGkCD+QFOEHkiL8QFKEH0iK8ANJEX4gKcIPJEX4gaQIP5AU4QeSIvxAUoQfSIrwA0kRfiApwg8kRfiBpLoeoluSbE9KekfSh5I+iIjROpoC0H89hb/y5Yg4VcPrABggdvuBpHoNf0j6ne3nbW+toyEAg9Hrbv/1EXHc9mWSJmy/EhHPzJ6h+lDggwEYMo6Iel7I3iHpTETsLMxTz8oAtBQR7mS+rnf7bS+2/ZnzjyV9RdLL3b4egMHqZbd/haTf2j7/Or+IiCdr6QpA39W229/RytjtB/qu77v9AOY3wg8kRfiBpAg/kBThB5Ii/EBSdVzVhyG2aNGiYv3mm28u1u++++5ifdmyZcX6unXrivVePPlk+bSS2267rW/rXgjY8gNJEX4gKcIPJEX4gaQIP5AU4QeSIvxAUhznXwCuuuqqlrVdu3YVl73llluK9ep+DS21uyT80KFDLWtLliwpLnv55Zf3tG6UseUHkiL8QFKEH0iK8ANJEX4gKcIPJEX4gaQ4zj8EtmzZUqxfc801xfrmzZtb1i66qPz5PjExUazv2bOnWH/66aeL9TNnzrSsHThwoLhsO4cPH+5p+ezY8gNJEX4gKcIPJEX4gaQIP5AU4QeSIvxAUm2H6LY9Lmm9pJMRsaaadqmkX0m6QtKkpLsi4q22K0s6RHe7e9fv37+/WH///feL9SeeeKJl7b777isu+/rrrxfrvVq/fn3L2t69e3t67RUrVhTrp06d6un156s6h+j+maQL//c+IOlARFwp6UD1HMA80jb8EfGMpNMXTN4gaXf1eLek22vuC0Cfdfudf0VETEtS9fey+loCMAh9P7ff9lZJW/u9HgCfTLdb/hO2V0pS9fdkqxkjYiwiRiNitMt1AeiDbsO/T9L5S9G2SOrtZ1sAA9c2/LYfk/R/kv7d9pTtr0t6WNKttv8i6dbqOYB5pO13/ohodbF4eWB3fOSOO+7oafmdO3cW6w8++GBPr99P999/f9fLvvbaa8V61uP4deEMPyApwg8kRfiBpAg/kBThB5Ii/EBS3Lp7AKanp4v18fHxYv2hhx6qs51aXXfddcX6DTfc0LJ29uzZ4rKbNm3qqid0hi0/kBThB5Ii/EBShB9IivADSRF+ICnCDyTV9tbdta4s6a27F7J25yiUhh8/evRocdmrr766q56yq/PW3QAWIMIPJEX4gaQIP5AU4QeSIvxAUoQfSIrj/Ci68cYbi/WJiYli
/e23325ZK13rL0mvvPJKsY65cZwfQBHhB5Ii/EBShB9IivADSRF+ICnCDyTV9r79tsclrZd0MiLWVNN2SPqGpL9Xs22PiP39ahLNWbp0abE+MjJSrL/55pstaxzHb1YnW/6fSVo3x/QfRcTa6h/BB+aZtuGPiGcknR5ALwAGqJfv/PfaPmx73HZ53xDA0Ok2/D+R9AVJayVNS/pBqxltb7V90PbBLtcFoA+6Cn9EnIiIDyPinKSfSrq2MO9YRIxGxGi3TQKoX1fht71y1tONkl6upx0Ag9LJob7HJN0kabntKUnflXST7bWSQtKkpG/2sUcAfdA2/BGxeY7Jj/ShFwyhbdu29bT8nj17auoEdeMMPyApwg8kRfiBpAg/kBThB5Ii/EBS3Lo7uVWrVhXrU1NTxfpbb71VrK9Zs6Zl7fjx48Vl0R1u3Q2giPADSRF+ICnCDyRF+IGkCD+QFOEHkmp7SS8WtnaX7LY7D2Tjxo3FOsfyhxdbfiApwg8kRfiBpAg/kBThB5Ii/EBShB9Iiuv5F7glS5YU68eOHSvWFy9eXKwvX768WD99mjFeB43r+QEUEX4gKcIPJEX4gaQIP5AU4QeSIvxAUm2v57e9WtKjkj4r6ZyksYjYZftSSb+SdIWkSUl3RUT5Ju4YuIsvvrhYv+SSS4r1ycnJYv299977pC1hSHSy5f9A0raIuFrSf0r6lu3/kPSApAMRcaWkA9VzAPNE2/BHxHREvFA9fkfSEUmrJG2QtLuabbek2/vVJID6faLv/LavkPRFSc9KWhER09LMB4Sky+puDkD/dHwPP9uflrRH0rcj4h92R6cPy/ZWSVu7aw9Av3S05bf9Kc0E/+cR8Ztq8gnbK6v6Skkn51o2IsYiYjQiRutoGEA92obfM5v4RyQdiYgfzirtk7SlerxF0t762wPQL53s9l8v6WuSXrJ9qJq2XdLDkn5t++uS/ibpq/1pEb248847e1r+8ccfL9bffffdnl4fzWkb/oj4g6RWX/BvrrcdAIPCGX5AUoQfSIrwA0kRfiApwg8kRfiBpBiie4G75557ivV2p2k/++yzdbaDIcKWH0iK8ANJEX4gKcIPJEX4gaQIP5AU4QeS4jj/AtduCPZ29VOnTtXZDoYIW34gKcIPJEX4gaQIP5AU4QeSIvxAUoQfSMrtjvPWujJ7cCtLZNmyZS1rL774YnHZVatWFesjIyNd9YTmRERHY+mx5QeSIvxAUoQfSIrwA0kRfiApwg8kRfiBpNpez297taRHJX1W0jlJYxGxy/YOSd+Q9Pdq1u0Rsb9fjaK1pUuXtqy1O46PvDq5mccHkrZFxAu2PyPpedsTVe1HEbGzf+0B6Je24Y+IaUnT1eN3bB+RxOYEmOc+0Xd+21dI+qKk82M43Wv7sO1x23Pue9reavug7YM9dQqgVh2H3/anJe2R9O2I+Iekn0j6gqS1mtkz+MFcy0XEWESMRsRoDf0CqElH4bf9Kc0E/+cR8RtJiogTEfFhRJyT9FNJ1/avTQB1axt+zwzj+oikIxHxw1nTV86abaOkl+tvD0C/dPJr//WSvibpJduHqmnbJW22vVZSSJqU9M2+dIi2pqenW9aeeuqp4rJvvPFG3e1gnujk1/4/SJrr+mCO6QPzGGf4AUkRfiApwg8kRfiBpAg/kBThB5Li1t3AAsOtuwEUEX4gKcIPJEX4gaQIP5AU4QeSIvxAUp1cz1+nU5Jen/V8eTVtGA1rb8Pal0Rv3aqzt8s7nXGgJ/l8bOX2wWG9t9+w9jasfUn01q2memO3H0iK8ANJNR3+sYbXXzKsvQ1rXxK9dauR3hr9zg+gOU1v+QE0pJHw215n+8+2X7X9QBM9tGJ70vZLtg81PcRYNQzaSdsvz5p2qe0J23+p/rYeonfwve2w/Ub13h2yfVtDva22/bTtI7b/aPu+anqj712hr0bet4Hv9tsekXRU0q2SpiQ9J2lzRPxpoI20YHtS0mhENH5M2PYNks5I
ejQi1lTTvi/pdEQ8XH1wLo2I7wxJbzsknWl65OZqQJmVs0eWlnS7pP9Rg+9doa+71MD71sSW/1pJr0bEXyPirKRfStrQQB9DLyKekXT6gskbJO2uHu/WzH+egWvR21CIiOmIeKF6/I6k8yNLN/reFfpqRBPhXyXp2KznUxquIb9D0u9sP297a9PNzGFFNWz6+eHTL2u4nwu1Hbl5kC4YWXpo3rtuRryuWxPhn+sWQ8N0yOH6iPiSpP+W9K1q9xad6Wjk5kGZY2TpodDtiNd1ayL8U5JWz3r+OUnHG+hjThFxvPp7UtJvNXyjD584P0hq9fdkw/18ZJhGbp5rZGkNwXs3TCNeNxH+5yRdafvzthdJ2iRpXwN9fIztxdUPMbK9WNJXNHyjD++TtKV6vEXS3gZ7+SfDMnJzq5Gl1fB7N2wjXjdykk91KOPHkkYkjUfE9wbexBxs/5tmtvbSzBWPv2iyN9uPSbpJM1d9nZD0XUn/K+nXkv5V0t8kfTUiBv7DW4vebtLMrutHIzef/4494N7+S9LvJb0k6Vw1ebtmvl839t4V+tqsBt43zvADkuIMPyApwg8kRfiBpAg/kBThB5Ii/EBShB9IivADSf0/P4DMdhH7QBsAAAAASUVORK5CYII=\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "from matplotlib import pyplot as plt\n",
    "%matplotlib inline\n",
    "index = 666\n",
    "plt.imshow(test_data[index].reshape(28,28),cmap='gray')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Restoring parameters from ./model/my_model.ckpt\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "sess = tf.InteractiveSession()\n",
    "saver = tf.train.import_meta_graph('./model/my_model.ckpt.meta')\n",
    "saver.restore(sess,\"./model/my_model.ckpt\")\n",
    "input_tensor = tf.get_collection('my_op')[0]\n",
    "output_tensor = tf.get_collection('my_op')[1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "7"
      ]
     },
     "execution_count": 88,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "np.argmax(sess.run(output_tensor,feed_dict={input_tensor:np.expand_dims(test_data[index],axis=0)}))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
