{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Use a GPU\n",
     "TensorFlow code, and [tf.keras](https://tensorflow.google.cn/api_docs/python/tf/keras) models, will transparently run on a single GPU with no code changes required.\n",
     "> Note: Use tf.config.experimental.list_physical_devices('GPU') to confirm that TensorFlow is using the GPU.\n",
     "\n",
     "The simplest way to run on multiple GPUs, on one or many machines, is using [Distribution Strategies](https://tensorflow.google.cn/guide/distributed_training).\n",
     "\n",
     "This guide is for users who have tried these approaches and found that they need fine-grained control over how TensorFlow uses the GPU.\n",
     "\n",
     "## Setup\n",
     "Ensure you have the latest TensorFlow GPU release installed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
     "print(\"Num GPUs available: \", len(tf.config.experimental.list_physical_devices('GPU')))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Overview\n",
     "TensorFlow supports running computations on a variety of device types, including CPUs and GPUs. They are represented with string identifiers, for example:\n",
     "- \"/device:CPU:0\": the CPU of your machine.\n",
     "- \"/GPU:0\": shorthand notation for the first GPU of your machine that is visible to TensorFlow.\n",
     "- \"/job:localhost/replica:0/task:0/device:GPU:1\": the fully qualified name of the second GPU of your machine that is visible to TensorFlow.\n",
     "\n",
     "If a TensorFlow operation has both CPU and GPU implementations, the GPU device is given priority by default when the operation is assigned to a device. For example, [tf.matmul](https://tensorflow.google.cn/api_docs/python/tf/linalg/matmul) has both CPU and GPU kernels. On a system with devices CPU:0 and GPU:0, the GPU:0 device will be selected to run tf.matmul unless you explicitly request to run it on another device.\n",
     "\n",
     "### Logging device placement\n",
     "To find out which devices your operations and tensors are assigned to, put [tf.debugging.set_log_device_placement(True)](https://tensorflow.google.cn/api_docs/python/tf/debugging/set_log_device_placement) as the first statement of your program. Enabling device-placement logging causes any tensor allocations or operations to be printed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.debugging.set_log_device_placement(True)\n",
    "\n",
     "# Create some tensors\n",
    "a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n",
    "b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n",
    "c = tf.matmul(a, b)\n",
    "\n",
    "print(c)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The code above will print an indication that the MatMul op was executed on GPU:0.\n",
     "\n",
     "## Manual device placement\n",
     "If you would like a particular operation to run on a device of your choice instead of what is automatically selected for you, you can use tf.device to create a device context, and all the operations within that context will run on the same designated device."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.debugging.set_log_device_placement(True)\n",
    "\n",
     "# Place tensors on the CPU\n",
    "with tf.device('/CPU:0'):\n",
    "  a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n",
    "  b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n",
    "\n",
    "c = tf.matmul(a, b)\n",
    "print(c)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "You will see that now a and b are assigned to CPU:0. Since a device was not explicitly specified for the MatMul operation, the TensorFlow runtime will choose one based on the operation and the available devices (GPU:0 in this example) and automatically copy tensors between devices if required.\n",
     "\n",
     "## Limiting GPU memory growth\n",
     "By default, TensorFlow maps nearly all of the GPU memory of all GPUs visible to the process (subject to [CUDA_VISIBLE_DEVICES](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars)). This is done to use the relatively precious GPU memory resources on the device more efficiently by reducing memory fragmentation. To limit TensorFlow to a specific set of GPUs, use the [tf.config.experimental.set_visible_devices](https://tensorflow.google.cn/api_docs/python/tf/config/set_visible_devices) method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gpus = tf.config.experimental.list_physical_devices('GPU')\n",
    "if gpus:\n",
     "  # Restrict TensorFlow to only use the first GPU\n",
    "  try:\n",
    "    tf.config.experimental.set_visible_devices(gpus[0], 'GPU')\n",
    "    logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n",
    "    print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPU\")\n",
    "  except RuntimeError as e:\n",
     "    # Visible devices must be set before GPUs have been initialized\n",
    "    print(e)"
   ]
  },
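   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As an alternative sketch (not from the original guide): you can restrict which GPUs the process sees at the CUDA level with the CUDA_VISIBLE_DEVICES environment variable mentioned above. Like the other configuration options here, it only takes effect if set before TensorFlow initializes the GPUs."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import os\n",
     "\n",
     "# Hide all but the first GPU from the CUDA runtime; this must run\n",
     "# before TensorFlow initializes the GPUs.\n",
     "os.environ['CUDA_VISIBLE_DEVICES'] = '0'"
    ]
   },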
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as it is needed by the process. TensorFlow provides two methods to control this.\n",
     "\n",
     "The first option is to turn on memory growth by calling [tf.config.experimental.set_memory_growth](https://tensorflow.google.cn/api_docs/python/tf/config/experimental/set_memory_growth), which attempts to allocate only as much GPU memory as needed for the runtime allocations: it starts out allocating very little memory, and as the program runs and more GPU memory is needed, it extends the GPU memory region allocated to the TensorFlow process. Note that TensorFlow does not release memory, since doing so can lead to memory fragmentation. To turn on memory growth for a specific GPU, use the following code prior to allocating any tensors or executing any ops:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gpus = tf.config.experimental.list_physical_devices('GPU')\n",
    "if gpus:\n",
    "  try:\n",
     "    # Currently, memory growth needs to be the same across GPUs\n",
    "    for gpu in gpus:\n",
    "      tf.config.experimental.set_memory_growth(gpu, True)\n",
    "    logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n",
    "    print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPUs\")\n",
    "  except RuntimeError as e:\n",
     "    # Memory growth must be set before GPUs have been initialized\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Another way to enable this option is to set the environment variable TF_FORCE_GPU_ALLOW_GROWTH to true. This configuration is platform specific.\n",
     "\n",
     "The second method is to configure a virtual GPU device with [tf.config.experimental.set_virtual_device_configuration](https://tensorflow.google.cn/api_docs/python/tf/config/set_logical_device_configuration) and set a hard limit on the total memory to allocate on the GPU."
   ]
  },
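   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A minimal sketch of the environment-variable option above: in a standalone program, set TF_FORCE_GPU_ALLOW_GROWTH at the top of the script, before TensorFlow initializes any GPUs."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import os\n",
     "\n",
     "# Equivalent to enabling memory growth for every visible GPU; only\n",
     "# takes effect if set before TensorFlow initializes the GPUs.\n",
     "os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'"
    ]
   },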
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gpus = tf.config.experimental.list_physical_devices('GPU')\n",
    "if gpus:\n",
     "  # Restrict TensorFlow to only allocate 1GB of memory on the first GPU\n",
    "  try:\n",
    "    tf.config.experimental.set_virtual_device_configuration(\n",
    "        gpus[0],\n",
    "        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])\n",
    "    logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n",
    "    print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPUs\")\n",
    "  except RuntimeError as e:\n",
     "    # Virtual devices must be set before GPUs have been initialized\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This is useful if you want to truly bound the amount of GPU memory available to the TensorFlow process. This is common practice for local development when the GPU is shared with other applications such as a workstation GUI.\n",
     "\n",
     "## Using a single GPU on a multi-GPU system\n",
     "If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default. If you would like to run on a different GPU, you will need to specify the preference explicitly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.debugging.set_log_device_placement(True)\n",
    "\n",
    "try:\n",
     "  # Specify an invalid GPU device\n",
    "  with tf.device('/device:GPU:2'):\n",
    "    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n",
    "    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n",
    "    c = tf.matmul(a, b)\n",
    "except RuntimeError as e:\n",
    "  print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "If the device you have specified does not exist, you will get a RuntimeError: `.../device:GPU:2 unknown device`.\n",
     "\n",
     "If you would like TensorFlow to automatically choose an existing and supported device to run the operations when the specified one doesn't exist, you can call [tf.config.set_soft_device_placement(True)](https://tensorflow.google.cn/api_docs/python/tf/config/set_soft_device_placement)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.config.set_soft_device_placement(True)\n",
    "tf.debugging.set_log_device_placement(True)\n",
    "\n",
     "# Create some tensors\n",
    "a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n",
    "b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n",
    "c = tf.matmul(a, b)\n",
    "\n",
    "print(c)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Using multiple GPUs\n",
     "Developing for multiple GPUs allows a model to scale with the additional resources. If you are developing on a system with a single GPU, you can simulate multiple GPUs with virtual devices. This enables easy testing of multi-GPU setups without requiring additional resources."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gpus = tf.config.experimental.list_physical_devices('GPU')\n",
    "if gpus:\n",
     "  # Create 2 virtual GPUs with 1GB memory each\n",
    "  try:\n",
    "    tf.config.experimental.set_virtual_device_configuration(\n",
    "        gpus[0],\n",
    "        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),\n",
    "         tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])\n",
    "    logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n",
    "    print(len(gpus), \"Physical GPU,\", len(logical_gpus), \"Logical GPUs\")\n",
    "  except RuntimeError as e:\n",
     "    # Virtual devices must be set before GPUs have been initialized\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Once there are multiple logical GPUs available to the runtime, we can utilize them with [tf.distribute.Strategy](https://tensorflow.google.cn/api_docs/python/tf/distribute/Strategy) or with manual placement.\n",
     "\n",
     "### With tf.distribute.Strategy\n",
     "\n",
     "The best practice for using multiple GPUs is to use tf.distribute.Strategy. Here is a simple example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.debugging.set_log_device_placement(True)\n",
    "\n",
    "strategy = tf.distribute.MirroredStrategy()\n",
    "with strategy.scope():\n",
    "  inputs = tf.keras.layers.Input(shape=(1,))\n",
    "  predictions = tf.keras.layers.Dense(1)(inputs)\n",
    "  model = tf.keras.models.Model(inputs=inputs, outputs=predictions)\n",
    "  model.compile(loss='mse',\n",
    "                optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))"
   ]
  },
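   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a follow-on sketch (not part of the original guide): the compiled model above can be trained as usual with Model.fit, and the strategy splits each batch across the logical GPUs. The synthetic data, epoch count, and batch size below are arbitrary choices for illustration."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "# Synthetic regression data: y = 3x + 2, no noise.\n",
     "x = np.random.random((100, 1)).astype('float32')\n",
     "y = 3 * x + 2\n",
     "\n",
     "# Each batch of 10 examples is split across the available logical GPUs.\n",
     "model.fit(x, y, epochs=1, batch_size=10)"
    ]
   },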
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This program will run a copy of your model on each GPU, splitting the input data between them, also known as \"data parallelism\".\n",
     "\n",
     "For more information about distribution strategies, check out the [guide](https://tensorflow.google.cn/guide/distributed_training).\n",
     "\n",
     "### Manual placement\n",
     "\n",
     "tf.distribute.Strategy works under the hood by replicating computation across devices. You can manually implement replication by constructing your model on each GPU. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.debugging.set_log_device_placement(True)\n",
    "\n",
    "gpus = tf.config.experimental.list_logical_devices('GPU')\n",
    "if gpus:\n",
     "  # Replicate your computation on multiple GPUs\n",
    "  c = []\n",
    "  for gpu in gpus:\n",
    "    with tf.device(gpu.name):\n",
    "      a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n",
    "      b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n",
    "      c.append(tf.matmul(a, b))\n",
    "\n",
    "  with tf.device('/CPU:0'):\n",
    "    matmul_sum = tf.add_n(c)\n",
    "\n",
    "  print(matmul_sum)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
