{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Eager execution\n",
    "TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and to debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive python interpreter.\n",
    "\n",
    "Eager execution is a flexible machine learning platform for research and experimentation, providing:\n",
    "1. An intuitive interface: structure your code naturally and use Python data structures; quickly iterate on small models and small data.\n",
    "2. Easier debugging: call ops directly to inspect running models and test changes; use standard Python debugging tools for immediate error reporting.\n",
    "3. Natural control flow: use Python control flow instead of graph control flow, simplifying the specification of dynamic models.\n",
    "\n",
    "Eager execution supports most TensorFlow operations and GPU acceleration.\n",
    "\n",
    "> Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please file a bug if you find a problem.\n",
    "\n",
    "## Setup and basic usage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "hello, [[4.]]\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "import cProfile\n",
    "\n",
    "# In TensorFlow 2, eager execution is enabled by default.\n",
    "tf.executing_eagerly()\n",
    "# You can now run TensorFlow operations and the results will return immediately:\n",
    "x = [[2.]]\n",
    "m = tf.matmul(x, x)\n",
    "print(\"hello, {}\".format(m))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Enabling eager execution changes how TensorFlow operations behave: now they immediately evaluate and return their values to Python. tf.Tensor objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using print() or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients.\n",
    "\n",
    "Eager execution works nicely with NumPy. NumPy operations accept tf.Tensor arguments. The TensorFlow tf.math operations convert Python objects and NumPy arrays to tf.Tensor objects. The tf.Tensor.numpy method returns the object's value as a NumPy ndarray."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[[1 2]\n",
      " [3 4]], shape=(2, 2), dtype=int32)\n",
      "tf.Tensor(\n",
      "[[2 3]\n",
      " [4 5]], shape=(2, 2), dtype=int32)\n",
      "tf.Tensor(\n",
      "[[ 2  6]\n",
      " [12 20]], shape=(2, 2), dtype=int32)\n",
      "[[ 2  6]\n",
      " [12 20]]\n",
      "[[1 2]\n",
      " [3 4]]\n"
     ]
    }
   ],
   "source": [
    "a = tf.constant([[1, 2],\n",
    "                 [3, 4]])\n",
    "print(a)\n",
    "\n",
    "# Broadcasting is supported\n",
    "b = tf.add(a, 1)\n",
    "print(b)\n",
    "\n",
    "# Operator overloading is supported\n",
    "print(a * b)\n",
    "\n",
    "# Use NumPy values\n",
    "import numpy as np\n",
    "\n",
    "c = np.multiply(a, b)\n",
    "print(c)\n",
    "\n",
    "# Obtain the numpy value from a tensor:\n",
    "print(a.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Dynamic control flow\n",
    "A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1\n",
      "2\n",
      "Fizz\n",
      "4\n",
      "Buzz\n",
      "Fizz\n",
      "7\n",
      "8\n",
      "Fizz\n",
      "Buzz\n",
      "11\n",
      "Fizz\n",
      "13\n",
      "14\n",
      "FizzBuzz\n"
     ]
    }
   ],
   "source": [
    "def fizzbuzz(max_num):\n",
    "  counter = tf.constant(0)\n",
    "  max_num = tf.convert_to_tensor(max_num)\n",
    "  for num in range(1, max_num.numpy()+1):\n",
    "    num = tf.constant(num)\n",
    "    if int(num % 3) == 0 and int(num % 5) == 0:\n",
    "      print('FizzBuzz')\n",
    "    elif int(num % 3) == 0:\n",
    "      print('Fizz')\n",
    "    elif int(num % 5) == 0:\n",
    "      print('Buzz')\n",
    "    else:\n",
    "      print(num.numpy())\n",
    "    counter += 1\n",
    "\n",
    "# The function has conditionals that depend on tensor values, and it\n",
    "# prints these values at runtime.\n",
    "fizzbuzz(15)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Eager training\n",
    "### Computing gradients\n",
    "Automatic differentiation is useful for implementing machine learning algorithms such as backpropagation for training neural networks. During eager execution, use [tf.GradientTape](https://tensorflow.google.cn/api_docs/python/tf/GradientTape) to trace operations for computing gradients later.\n",
    "\n",
    "You can use tf.GradientTape to train and/or compute gradients in eager mode. It is especially useful for complicated training loops.\n",
    "\n",
    "Since different operations can occur during each call, all forward-pass operations get recorded to a \"tape\". To compute the gradient, play the tape backwards and then discard it. A particular tf.GradientTape can only compute one gradient; subsequent calls throw a runtime error."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor([[2.]], shape=(1, 1), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "w = tf.Variable([[1.0]])\n",
    "with tf.GradientTape() as tape:\n",
    "  loss = w * w\n",
    "\n",
    "grad = tape.gradient(loss, w)\n",
    "print(grad) "
   ]
  },
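  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a brief sketch of the point above (not part of the original guide): passing `persistent=True` creates a tape that can be queried more than once, at the cost of holding resources until it is explicitly deleted:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "w = tf.Variable(3.0)\n",
    "with tf.GradientTape(persistent=True) as tape:\n",
    "  y = w * w\n",
    "  z = y * y\n",
    "\n",
    "# Multiple gradient calls are allowed because the tape is persistent.\n",
    "print(tape.gradient(y, w))  # dy/dw = 2w = 6.0\n",
    "print(tape.gradient(z, w))  # dz/dw = 4w^3 = 108.0\n",
    "del tape  # Release the resources held by the tape."
   ]
  },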
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train a model\n",
    "The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fetch and format the mnist data\n",
    "(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()\n",
    "\n",
    "dataset = tf.data.Dataset.from_tensor_slices(\n",
    "  (tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),\n",
    "   tf.cast(mnist_labels,tf.int64)))\n",
    "dataset = dataset.shuffle(1000).batch(32)\n",
    "\n",
    "# Build the model\n",
    "mnist_model = tf.keras.Sequential([\n",
    "  tf.keras.layers.Conv2D(16,[3,3], activation='relu',\n",
    "                         input_shape=(None, None, 1)),\n",
    "  tf.keras.layers.Conv2D(16,[3,3], activation='relu'),\n",
    "  tf.keras.layers.GlobalAveragePooling2D(),\n",
    "  tf.keras.layers.Dense(10)\n",
    "])\n",
    "\n",
    "# Even without training, call the model and inspect the output in eager execution:\n",
    "for images,labels in dataset.take(1):\n",
    "  print(\"Logits: \", mnist_model(images[0:1]).numpy())\n",
    "\n",
    "# While Keras models have a builtin training loop (using the fit method),\n",
    "# sometimes you need more customization. Here's an example of a training loop implemented with eager:\n",
    "optimizer = tf.keras.optimizers.Adam()\n",
    "loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n",
    "\n",
    "loss_history = []\n",
    "\n",
    "# Use the assert functions in tf.debugging to check whether a condition holds.\n",
    "def train_step(images, labels):\n",
    "  with tf.GradientTape() as tape:\n",
    "    logits = mnist_model(images, training=True)\n",
    "    \n",
    "    # Add asserts to check the shape of the output.\n",
    "    tf.debugging.assert_equal(logits.shape, (32, 10))\n",
    "    \n",
    "    loss_value = loss_object(labels, logits)\n",
    "\n",
    "  loss_history.append(loss_value.numpy().mean())\n",
    "  grads = tape.gradient(loss_value, mnist_model.trainable_variables)\n",
    "  optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))\n",
    "\n",
    "def train(epochs):\n",
    "  for epoch in range(epochs):\n",
    "    for (batch, (images, labels)) in enumerate(dataset):\n",
    "      train_step(images, labels)\n",
    "    print ('Epoch {} finished'.format(epoch))\n",
    "    \n",
    "train(epochs = 3)\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "plt.plot(loss_history)\n",
    "plt.xlabel('Batch #')\n",
    "plt.ylabel('Loss')"
   ]
  },
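  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, the built-in loop mentioned in the comments above is only a couple of lines; a hedged sketch, reusing the `mnist_model` and `dataset` defined earlier:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Equivalent built-in training loop: configure the loss and optimizer, then fit.\n",
    "mnist_model.compile(\n",
    "    optimizer=tf.keras.optimizers.Adam(),\n",
    "    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "    metrics=['accuracy'])\n",
    "mnist_model.fit(dataset, epochs=1)"
   ]
  },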
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Variables and optimizers\n",
    "[tf.Variable](https://tensorflow.google.cn/api_docs/python/tf/Variable) objects store mutable [tf.Tensor](https://tensorflow.google.cn/api_docs/python/tf/Tensor)-like values accessed during training to make automatic differentiation easier.\n",
    "\n",
    "Collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](https://tensorflow.google.cn/guide/keras/custom_layers_and_models) for details. The main difference between layers and models is that models add methods like Model.fit, Model.evaluate, and Model.save.\n",
    "\n",
    "For example, the automatic differentiation example above can be rewritten:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Linear(tf.keras.Model):\n",
    "  def __init__(self):\n",
    "    super(Linear, self).__init__()\n",
    "    self.W = tf.Variable(5., name='weight')\n",
    "    self.B = tf.Variable(10., name='bias')\n",
    "  def call(self, inputs):\n",
    "    return inputs * self.W + self.B\n",
    "\n",
    "# A toy dataset of points around 3 * x + 2\n",
    "NUM_EXAMPLES = 2000\n",
    "training_inputs = tf.random.normal([NUM_EXAMPLES])\n",
    "noise = tf.random.normal([NUM_EXAMPLES])\n",
    "training_outputs = training_inputs * 3 + 2 + noise\n",
    "\n",
    "# The loss function to be optimized\n",
    "def loss(model, inputs, targets):\n",
    "  error = model(inputs) - targets\n",
    "  return tf.reduce_mean(tf.square(error))\n",
    "\n",
    "def grad(model, inputs, targets):\n",
    "  with tf.GradientTape() as tape:\n",
    "    loss_value = loss(model, inputs, targets)\n",
    "  return tape.gradient(loss_value, [model.W, model.B])\n",
    "\n",
    "# Define:\n",
    "# 1. A model.\n",
    "# 2. Derivatives of a loss function with respect to model parameters.\n",
    "# 3. A strategy for updating the variables based on the derivatives.\n",
    "\n",
    "model = Linear()\n",
    "optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)\n",
    "\n",
    "print(\"Initial loss: {:.3f}\".format(loss(model, training_inputs, training_outputs)))\n",
    "\n",
    "steps = 300\n",
    "for i in range(steps):\n",
    "  grads = grad(model, training_inputs, training_outputs)\n",
    "  optimizer.apply_gradients(zip(grads, [model.W, model.B]))\n",
    "  if i % 20 == 0:\n",
    "    print(\"Loss at step {:03d}: {:.3f}\".format(i, loss(model, training_inputs, training_outputs)))\n",
    "    \n",
    "print(\"Final loss: {:.3f}\".format(loss(model, training_inputs, training_outputs)))\n",
    "\n",
    "print(\"W = {}, B = {}\".format(model.W.numpy(), model.B.numpy()))\n",
    "\n",
    "# Note: Variables persist until the last reference to the Python object is deleted."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Object-based saving\n",
    "A tf.keras.Model includes a convenient save_weights method allowing you to easily create a checkpoint.\n",
    "\n",
    "Using [tf.train.Checkpoint](https://tensorflow.google.cn/api_docs/python/tf/train/Checkpoint) you can take full control of this process.\n",
    "\n",
    "This section is an abbreviated version of the [guide to training checkpoints](https://tensorflow.google.cn/guide/checkpoint)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.save_weights('weights')\n",
    "status = model.load_weights('weights')\n",
    "\n",
    "x = tf.Variable(10.)\n",
    "checkpoint = tf.train.Checkpoint(x=x)\n",
    "\n",
    "x.assign(2.)   # Assign a new value to the variable and save.\n",
    "checkpoint_path = './ckpt/'\n",
    "checkpoint.save(checkpoint_path)\n",
    "\n",
    "x.assign(11.)  # Change the variable after saving.\n",
    "\n",
    "# Restore values from the checkpoint\n",
    "checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))\n",
    "\n",
    "print(x)  # => 2.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To save and load models, tf.train.Checkpoint stores the internal state of objects, without requiring hidden variables. To record the state of a model, an optimizer, and a global step, pass them to a tf.train.Checkpoint:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x21fec216c88>"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model = tf.keras.Sequential([\n",
    "  tf.keras.layers.Conv2D(16,[3,3], activation='relu'),\n",
    "  tf.keras.layers.GlobalAveragePooling2D(),\n",
    "  tf.keras.layers.Dense(10)\n",
    "])\n",
    "optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)\n",
    "checkpoint_dir = 'path/to/model_dir'\n",
    "if not os.path.exists(checkpoint_dir):\n",
    "  os.makedirs(checkpoint_dir)\n",
    "checkpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\n",
    "root = tf.train.Checkpoint(optimizer=optimizer,\n",
    "                           model=model)\n",
    "\n",
    "root.save(checkpoint_prefix)\n",
    "root.restore(tf.train.latest_checkpoint(checkpoint_dir))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: In many training loops, variables are created after [tf.train.Checkpoint.restore](https://tensorflow.google.cn/api_docs/python/tf/train/Checkpoint#restore) is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](https://tensorflow.google.cn/guide/checkpoint) for details.\n",
    "\n",
    "### Object-oriented metrics\n",
    "[tf.keras.metrics](https://tensorflow.google.cn/api_docs/python/tf/keras/metrics) are stored as objects. Update a metric by passing new data to the callable, and retrieve the result using the tf.keras.metrics.result method, for example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor: shape=(), dtype=float32, numpy=5.5>"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m = tf.keras.metrics.Mean(\"loss\")\n",
    "m(0)\n",
    "m(5)\n",
    "m.result()\n",
    "m([8, 9])\n",
    "m.result()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Summaries and TensorBoard\n",
    "[TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program.\n",
    "\n",
    "You can use [tf.summary](https://tensorflow.google.cn/api_docs/python/tf/summary) to record summaries of variables in eager execution. For example, to record summaries of loss once every 100 training steps:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "logdir = \"./tb/\"\n",
    "writer = tf.summary.create_file_writer(logdir)\n",
    "\n",
    "steps = 1000\n",
    "with writer.as_default():  # or call writer.set_as_default() before the loop\n",
    "  for i in range(steps):\n",
    "    step = i + 1\n",
    "    # Calculate loss with your real train function.\n",
    "    loss = 1 - 0.001 * step\n",
    "    if step % 100 == 0:\n",
    "      tf.summary.scalar('loss', loss, step=step)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Advanced automatic differentiation topics\n",
    "### Dynamic models\n",
    "[tf.GradientTape](https://tensorflow.google.cn/api_docs/python/tf/GradientTape) can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except there are gradients and it is differentiable, despite the complex control flow:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def line_search_step(fn, init_x, rate=1.0):\n",
    "  with tf.GradientTape() as tape:\n",
    "    # Variables are automatically tracked.\n",
    "    # But to calculate a gradient from a tensor, you must `watch` it.\n",
    "    tape.watch(init_x)\n",
    "    value = fn(init_x)\n",
    "  grad = tape.gradient(value, init_x)\n",
    "  grad_norm = tf.reduce_sum(grad * grad)\n",
    "  init_value = value\n",
    "  while value > init_value - rate * grad_norm:\n",
    "    x = init_x - rate * grad\n",
    "    value = fn(x)\n",
    "    rate /= 2.0\n",
    "  return x, value"
   ]
  },
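  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tapes can also be nested to compute higher-order derivatives, another common advanced use. A minimal sketch (a standard pattern, not from the original text) computing the first and second derivatives of x**3:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = tf.Variable(1.0)\n",
    "with tf.GradientTape() as t2:\n",
    "  with tf.GradientTape() as t1:\n",
    "    y = x * x * x\n",
    "  # Compute the first derivative inside the outer tape's context\n",
    "  # so that the gradient computation itself is differentiable.\n",
    "  dy_dx = t1.gradient(y, x)      # 3 * x**2 => 3.0\n",
    "d2y_dx2 = t2.gradient(dy_dx, x)  # 6 * x => 6.0\n",
    "print(dy_dx.numpy(), d2y_dx2.numpy())"
   ]
  },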
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Custom gradients\n",
    "Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.5\n",
      "nan\n"
     ]
    }
   ],
   "source": [
    "@tf.custom_gradient\n",
    "def clip_gradient_by_norm(x, norm):\n",
    "  y = tf.identity(x)\n",
    "  def grad_fn(dresult):\n",
    "    return [tf.clip_by_norm(dresult, norm), None]\n",
    "  return y, grad_fn\n",
    "\n",
    "# Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:\n",
    "def log1pexp(x):\n",
    "  return tf.math.log(1 + tf.exp(x))\n",
    "\n",
    "def grad_log1pexp(x):\n",
    "  with tf.GradientTape() as tape:\n",
    "    tape.watch(x)\n",
    "    value = log1pexp(x)\n",
    "  return tape.gradient(value, x)\n",
    "\n",
    "# The gradient computation works fine at x = 0.\n",
    "print(grad_log1pexp(tf.constant(0.)).numpy())\n",
    "\n",
    "# However, x = 100 fails because of numerical instability.\n",
    "print(grad_log1pexp(tf.constant(100.)).numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, the log1pexp function can be analytically simplified with a custom gradient. The implementation below reuses the value of tf.exp(x) that is computed during the forward pass, making it more efficient by eliminating redundant calculations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.5\n",
      "1.0\n"
     ]
    }
   ],
   "source": [
    "@tf.custom_gradient\n",
    "def log1pexp(x):\n",
    "  e = tf.exp(x)\n",
    "  def grad(dy):\n",
    "    return dy * (1 - 1 / (1 + e))\n",
    "  return tf.math.log(1 + e), grad\n",
    "\n",
    "def grad_log1pexp(x):\n",
    "  with tf.GradientTape() as tape:\n",
    "    tape.watch(x)\n",
    "    value = log1pexp(x)\n",
    "  return tape.gradient(value, x)\n",
    "\n",
    "print(grad_log1pexp(tf.constant(0.)).numpy())\n",
    "\n",
    "print(grad_log1pexp(tf.constant(100.)).numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performance\n",
    "Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a tf.device('/gpu:0') block (or the CPU equivalent):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Time to multiply a (1000, 1000) matrix by itself 200 times:\n",
      "CPU: 1.9707226753234863 secs\n",
      "GPU: not found\n"
     ]
    }
   ],
   "source": [
    "import time\n",
    "\n",
    "def measure(x, steps):\n",
    "  # TensorFlow initializes a GPU the first time it's used, exclude this from timing.\n",
    "  tf.matmul(x, x)\n",
    "  start = time.time()\n",
    "  for i in range(steps):\n",
    "    x = tf.matmul(x, x)\n",
    "  # tf.matmul can return before completing the matrix multiplication\n",
    "  # (e.g., can return after enqueuing the operation on a CUDA stream).\n",
    "  # The x.numpy() call below will ensure that all enqueued operations\n",
    "  # have completed (and will also copy the result to host memory, so we\n",
    "  # include a little more than just the matmul operation time).\n",
    "  _ = x.numpy()\n",
    "  end = time.time()\n",
    "  return end - start\n",
    "\n",
    "shape = (1000, 1000)\n",
    "steps = 200\n",
    "print(\"Time to multiply a {} matrix by itself {} times:\".format(shape, steps))\n",
    "\n",
    "# Run on CPU:\n",
    "with tf.device(\"/cpu:0\"):\n",
    "  print(\"CPU: {} secs\".format(measure(tf.random.normal(shape), steps)))\n",
    "\n",
    "# Run on GPU, if available:\n",
    "if tf.config.experimental.list_physical_devices(\"GPU\"):\n",
    "  with tf.device(\"/gpu:0\"):\n",
    "    print(\"GPU: {} secs\".format(measure(tf.random.normal(shape), steps)))\n",
    "else:\n",
    "  print(\"GPU: not found\")\n",
    "\n",
    "# tf.Tensor objects can be copied to a different device to execute their operations:\n",
    "if tf.config.experimental.list_physical_devices(\"GPU\"):\n",
    "  x = tf.random.normal([10, 10])\n",
    "\n",
    "  x_gpu0 = x.gpu()\n",
    "  x_cpu = x.cpu()\n",
    "\n",
    "  _ = tf.matmul(x_cpu, x_cpu)    # Runs on CPU\n",
    "  _ = tf.matmul(x_gpu0, x_gpu0)  # Runs on GPU:0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Benchmarks\n",
    "For compute-heavy models, such as ResNet50 training on a GPU, eager execution performance is comparable to tf.function execution. But this gap grows larger for models with less computation, and there is work to be done to optimize hot code paths for models with lots of small operations.\n",
    "\n",
    "## Work with functions\n",
    "While eager execution makes development and debugging more interactive, TensorFlow 1.x style graph execution has advantages for distributed training, performance optimization, and production deployment. To bridge this gap, TensorFlow 2.0 introduces functions via the tf.function API. For more information, see the [tf.function](https://tensorflow.google.cn/guide/function) guide."
   ]
  }
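,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a hedged illustration of the `tf.function` API mentioned above (a minimal sketch, not from the original text): decorating a Python function traces it into a callable graph on first invocation, while the call site remains unchanged:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@tf.function\n",
    "def square_sum(a, b):\n",
    "  # Executes as a traced graph rather than eagerly.\n",
    "  return tf.reduce_sum(a * a + b * b)\n",
    "\n",
    "a = tf.constant([1.0, 2.0])\n",
    "b = tf.constant([3.0, 4.0])\n",
    "print(square_sum(a, b).numpy())  # 30.0\n",
    "print(tf.executing_eagerly())    # still True outside the function"
   ]
  }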
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
