{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Optimizing tf.data API performance\n",
    "## Overview\n",
    "GPUs and TPUs can radically reduce the time required to execute a single training step. Achieving peak performance requires an efficient input pipeline that delivers data for the next step before the current step has finished. The tf.data API helps to build flexible and efficient input pipelines. This document demonstrates how to use the tf.data API to build highly performant TensorFlow input pipelines.\n",
    "\n",
    "Before you begin, read the [Build TensorFlow input pipelines](https://tensorflow.google.cn/guide/data) guide to learn how to use the tf.data API.\n",
    "\n",
    "## Resources\n",
    "- [Build TensorFlow input pipelines](https://tensorflow.google.cn/guide/data)\n",
    "- [tf.data.Dataset](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset) API\n",
    "\n",
    "## Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "import time"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Throughout this guide, you will iterate over a dataset and measure its performance. Making reproducible performance benchmarks can be difficult, as different factors impact it:\n",
    "- the current CPU load\n",
    "- the network traffic\n",
    "- complex mechanisms such as caching\n",
    "\n",
    "Hence, to provide a reproducible benchmark, build an artificial example.\n",
    "\n",
    "### The dataset\n",
    "Define a class inheriting from tf.data.Dataset called ArtificialDataset. This dataset:\n",
    "- generates num_samples samples (default is 3)\n",
    "- sleeps for some time before the first item to simulate opening a file\n",
    "- sleeps for some time before producing each item to simulate reading data from a file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ArtificialDataset(tf.data.Dataset):\n",
    "    def _generator(num_samples):\n",
    "        # Opening the file\n",
    "        time.sleep(0.03)\n",
    "        \n",
    "        for sample_idx in range(num_samples):\n",
    "            # Reading data (line, record) from the file\n",
    "            time.sleep(0.015)\n",
    "            \n",
    "            yield (sample_idx,)\n",
    "    \n",
    "    def __new__(cls, num_samples=3):\n",
    "        return tf.data.Dataset.from_generator(\n",
    "            cls._generator,\n",
    "            output_types=tf.dtypes.int64,\n",
    "            output_shapes=(1,),\n",
    "            args=(num_samples,)\n",
    "        )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This dataset is similar to [tf.data.Dataset.range](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#range), adding a fixed delay at the beginning of and in-between each sample.\n",
    "\n",
    "### The training loop\n",
    "Write a dummy training loop that measures how long it takes to iterate over a dataset. Training time is simulated."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def benchmark(dataset, num_epochs=2):\n",
    "    start_time = time.perf_counter()\n",
    "    for epoch_num in range(num_epochs):\n",
    "        for sample in dataset:\n",
    "            # Performing a training step\n",
    "            time.sleep(0.01)\n",
    "    tf.print(\"Execution time:\", time.perf_counter() - start_time)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Optimize performance\n",
    "To exhibit how performance can be optimized, you will improve the performance of the ArtificialDataset.\n",
    "### The naive approach\n",
    "Start with a naive pipeline using no tricks, iterating over the dataset as-is."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "benchmark(ArtificialDataset())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, this is how your execution time was spent:\n",
    "\n",
    "![title](../img/5_2/naive.svg)\n",
    "\n",
    "You can see that performing a training step involves:\n",
    "- opening the file if it hasn't been opened yet\n",
    "- fetching a data entry from the file\n",
    "- using the data for training\n",
    "\n",
    "However, in a naive synchronous implementation like this one, while your pipeline is fetching the data, your model is sitting idle. Conversely, while your model is training, the input pipeline is sitting idle. The training step time is thus the sum of the opening, reading, and training times.\n",
    "\n",
    "The next sections build on this input pipeline, illustrating best practices for designing performant TensorFlow input pipelines.\n",
    "\n",
    "### Prefetching\n",
    "Prefetching overlaps the preprocessing and model execution of a training step. While the model is executing training step s, the input pipeline is reading the data for step s+1. Doing so reduces the step time to the maximum (as opposed to the sum) of the training time and the time it takes to extract the data.\n",
    "\n",
    "The tf.data API provides the [tf.data.Dataset.prefetch](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#prefetch) transformation. It can be used to decouple the time when data is produced from the time when data is consumed. In particular, the transformation uses a background thread and an internal buffer to prefetch elements from the input dataset ahead of the time they are requested. The number of elements to prefetch should be equal to (or possibly greater than) the number of batches consumed by a single training step. You can either manually tune this value, or set it to [tf.data.experimental.AUTOTUNE](https://tensorflow.google.cn/api_docs/python/tf/data/experimental#AUTOTUNE), which will prompt the tf.data runtime to tune the value dynamically at runtime.\n",
    "\n",
    "Note that the prefetch transformation provides benefits any time there is an opportunity to overlap the work of a \"producer\" with the work of a \"consumer\"."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "benchmark(\n",
    "    ArtificialDataset()\n",
    "    .prefetch(tf.data.experimental.AUTOTUNE)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](../img/5_2/prefetched.svg)\n",
    "\n",
    "This time, you can see that while the training step is running for sample 0, the input pipeline is reading the data for sample 1, and so on.\n",
    "\n",
    "### Parallelizing data extraction\n",
    "In a real-world setting, the input data may be stored remotely (for example, on GCS or HDFS). A dataset pipeline that works well when reading data locally might become bottlenecked on I/O when reading data remotely because of the following differences between local and remote storage:\n",
    "- Time-to-first-byte: reading the first byte of a file from remote storage can take orders of magnitude longer than from local storage.\n",
    "- Read throughput: while remote storage typically offers large aggregate bandwidth, reading a single file might only be able to utilize a small fraction of this bandwidth.\n",
    "\n",
    "In addition, once the raw bytes are loaded into memory, it may also be necessary to deserialize and/or decrypt the data (e.g. [protobuf](https://developers.google.cn/protocol-buffers/)), which requires additional computation. This overhead is present irrespective of whether the data is stored locally or remotely, but can be worse in the remote case if data is not prefetched effectively.\n",
    "\n",
    "To mitigate the impact of the various data extraction overheads, the [tf.data.Dataset.interleave](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#interleave) transformation can be used to parallelize the data loading step, interleaving the contents of other datasets (such as data file readers). The number of datasets to overlap can be specified by the cycle_length argument, while the level of parallelism can be specified by the num_parallel_calls argument. Similar to the prefetch transformation, the interleave transformation supports [tf.data.experimental.AUTOTUNE](https://tensorflow.google.cn/api_docs/python/tf/data/experimental#AUTOTUNE), which delegates the decision about what level of parallelism to use to the tf.data runtime.\n",
    "\n",
    "#### Sequential interleave\n",
    "\n",
    "The default arguments of the [tf.data.Dataset.interleave](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#interleave) transformation make it interleave single samples from two datasets sequentially."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "benchmark(\n",
    "    tf.data.Dataset.range(2)\n",
    "    .interleave(ArtificialDataset)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](../img/5_2/sequential_interleave.svg)\n",
    "\n",
    "This plot shows the behavior of the interleave transformation, fetching samples alternatively from the two available datasets. However, no performance improvement is involved here.\n",
    "\n",
    "#### Parallel interleave\n",
    "\n",
    "Now use the num_parallel_calls argument of the interleave transformation. This loads multiple datasets in parallel, reducing the time spent waiting for files to be opened."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "benchmark(\n",
    "    tf.data.Dataset.range(2)\n",
    "    .interleave(\n",
    "        ArtificialDataset,\n",
    "        num_parallel_calls=tf.data.experimental.AUTOTUNE\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](../img/5_2/parallel_interleave.svg)\n",
    "\n",
    "This time, the two datasets are read in parallel, reducing the global data processing time.\n",
    "\n",
    "### Parallelizing data transformation\n",
    "When preparing data, input elements may need to be pre-processed. To this end, the tf.data API offers the [tf.data.Dataset.map](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#map) transformation, which applies a user-defined function to each element of the input dataset. Because input elements are independent of one another, the pre-processing can be parallelized across multiple CPU cores. To make this possible, similarly to the prefetch and interleave transformations, the map transformation provides the num_parallel_calls argument to specify the level of parallelism.\n",
    "\n",
    "Choosing the best value for the num_parallel_calls argument depends on your hardware, the characteristics of your training data (such as its size and shape), the cost of your map function, and what other processing is happening on the CPU at the same time. A simple heuristic is to use the number of available CPU cores. However, as with the prefetch and interleave transformations, the map transformation supports [tf.data.experimental.AUTOTUNE](https://tensorflow.google.cn/api_docs/python/tf/data/experimental#AUTOTUNE), which delegates the decision about what level of parallelism to use to the tf.data runtime."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def mapped_function(s):\n",
    "    # Doing some hard pre-processing\n",
    "    tf.py_function(lambda: time.sleep(0.03), [], ())\n",
    "    return s"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Sequential mapping\n",
    "\n",
    "Start by using the map transformation without parallelism as a baseline example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "benchmark(\n",
    "    ArtificialDataset()\n",
    "    .map(mapped_function)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](../img/5_2/sequential_map.svg)\n",
    "\n",
    "As for the [naive approach](https://tensorflow.google.cn/guide/data_performance#The_naive_approach), here the times spent for the opening, reading, pre-processing (mapping), and training steps sum together for a single iteration.\n",
    "\n",
    "#### Parallel mapping\n",
    "Now, use the same pre-processing function but apply it in parallel on multiple samples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "benchmark(\n",
    "    ArtificialDataset()\n",
    "    .map(\n",
    "        mapped_function,\n",
    "        num_parallel_calls=tf.data.experimental.AUTOTUNE\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](../img/5_2/parallel_map.svg)\n",
    "\n",
    "Now, you can see on the plot that the pre-processing steps overlap, reducing the overall time for a single iteration.\n",
    "\n",
    "### Caching\n",
    "The [tf.data.Dataset.cache](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#cache) transformation can cache a dataset, either in memory or on local storage. This will save some operations (like file opening and data reading) from being executed during each epoch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "benchmark(\n",
    "    ArtificialDataset()\n",
    "    .map(  # Apply time consuming operations before cache\n",
    "        mapped_function\n",
    "    ).cache(\n",
    "    ),\n",
    "    5\n",
    ")"
   ]
  },
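  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that cache can also write to local storage instead of memory: passing a file path to the transformation stores the cached elements in files on disk. A minimal sketch of this variant, using a small range dataset and a temporary directory (both illustrative, not part of the guide above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import tempfile\n",
    "\n",
    "# Passing a filename to cache() stores the cache on local storage instead of\n",
    "# in memory; the cache files persist between runs (delete them to recompute).\n",
    "cache_path = os.path.join(tempfile.mkdtemp(), \"cached_dataset\")\n",
    "file_cached = tf.data.Dataset.range(5).cache(cache_path)\n",
    "\n",
    "first_epoch = [int(x) for x in file_cached]   # computes elements and writes the cache\n",
    "second_epoch = [int(x) for x in file_cached]  # reads elements back from the cache files"
   ]
  },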
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](../img/5_2/cached_dataset.svg)\n",
    "\n",
    "When you cache a dataset, the transformations before the cache one (like the file opening and data reading) are executed only during the first epoch. The next epochs will reuse the data cached by the cache transformation.\n",
    "\n",
    "If the user-defined function passed into the map transformation is expensive, apply the cache transformation after the map transformation, as long as the resulting dataset can still fit into memory or local storage. If the user-defined function increases the space required to store the dataset beyond the cache capacity, either apply it after the cache transformation, or consider pre-processing your data before your training job to reduce resource usage.\n",
    "\n",
    "### Vectorizing mapping\n",
    "Invoking a user-defined function passed into the map transformation has overhead related to scheduling and executing the user-defined function. We recommend vectorizing the user-defined function (that is, having it operate over a batch of inputs at once) and applying the batch transformation before the map transformation.\n",
    "\n",
    "To illustrate this good practice, your ArtificialDataset is not suitable: the scheduling delay is around 10 microseconds (10e-6 seconds), far less than the tens of milliseconds used in the ArtificialDataset, and its impact is therefore hard to see.\n",
    "\n",
    "For this example, use the base [tf.data.Dataset.range](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#range) function and simplify the training loop to its simplest form."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fast_dataset = tf.data.Dataset.range(10000)\n",
    "\n",
    "def fast_benchmark(dataset, num_epochs=2):\n",
    "    start_time = time.perf_counter()\n",
    "    for _ in tf.data.Dataset.range(num_epochs):\n",
    "        for _ in dataset:\n",
    "            pass\n",
    "    tf.print(\"Execution time:\", time.perf_counter() - start_time)\n",
    "    \n",
    "def increment(x):\n",
    "    return x+1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Scalar mapping\n",
    "fast_benchmark(\n",
    "    fast_dataset\n",
    "    # Apply function one item at a time\n",
    "    .map(increment)\n",
    "    # Batch\n",
    "    .batch(256)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](../img/5_2/scalar_map.svg)\n",
    "\n",
    "The plot above illustrates what is going on (with fewer samples). You can see that the mapped function is applied to each sample. While this function is very fast, it has some overhead that impacts the time performance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Vectorized mapping\n",
    "fast_benchmark(\n",
    "    fast_dataset\n",
    "    .batch(256)\n",
    "    # Apply function on a batch of items; the tf.Tensor.__add__ method already handles batches\n",
    "    .map(increment)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](../img/5_2/vectorized_map.svg)\n",
    "\n",
    "This time, the mapped function is called once and applies to a batch of samples. While the function could take more time to execute, the overhead appears only once, improving the overall time performance.\n",
    "\n",
    "### Reducing memory footprint\n",
    "A number of transformations, including interleave, prefetch, and shuffle, maintain an internal buffer of elements. If the user-defined function passed into the map transformation changes the size of the elements, then the ordering of the map transformation and the transformations that buffer elements affects memory usage. In general, we recommend choosing the order that results in a lower memory footprint, unless a different ordering is desirable for performance.\n",
    "\n",
    "#### Caching partial computations\n",
    "It is recommended to cache the dataset after the map transformation, except if this transformation makes the data too big to fit in memory. A trade-off can be achieved if your mapped function can be split in two parts: a time-consuming one and a memory-consuming one. In this case, you can chain your transformations as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset.map(time_consuming_mapping).cache().map(memory_consuming_mapping)"
   ]
  },
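  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal runnable sketch of this pattern follows. The bodies of the two mapped functions are hypothetical stand-ins: a sleep simulates the time-consuming part, and tf.tile stands in for a transformation that inflates element size."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def time_consuming_mapping(x):\n",
    "    # Simulate an expensive computation worth caching\n",
    "    tf.py_function(lambda: time.sleep(0.01), [], ())\n",
    "    return x\n",
    "\n",
    "def memory_consuming_mapping(x):\n",
    "    # Simulate a transformation that inflates element size;\n",
    "    # applying it after cache() keeps the cached data small\n",
    "    return tf.tile([x], [8])\n",
    "\n",
    "split_dataset = (\n",
    "    tf.data.Dataset.range(4)\n",
    "    .map(time_consuming_mapping)    # expensive part, computed once...\n",
    "    .cache()                        # ...and reused from the cache afterwards\n",
    "    .map(memory_consuming_mapping)  # memory-hungry part, recomputed each epoch\n",
    ")\n",
    "result = [t.numpy().tolist() for t in split_dataset]"
   ]
  },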
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This way, the time-consuming part is only executed during the first epoch, and you avoid using too much cache space.\n",
    "\n",
    "## Best practice summary\n",
    "Here is a summary of the best practices for designing performant TensorFlow input pipelines:\n",
    "- [Use the prefetch transformation](https://tensorflow.google.cn/guide/data_performance#Pipelining) to overlap the work of a producer and consumer.\n",
    "- [Parallelize the data reading transformation](https://tensorflow.google.cn/guide/data_performance#Parallelizing_data_extraction) using the interleave transformation.\n",
    "- [Parallelize the map transformation](https://tensorflow.google.cn/guide/data_performance#Parallelizing_data_transformation) by setting the num_parallel_calls argument.\n",
    "- [Use the cache transformation](https://tensorflow.google.cn/guide/data_performance#Caching) to cache data in memory during the first epoch.\n",
    "- [Vectorize user-defined functions](https://tensorflow.google.cn/guide/data_performance#Map_and_batch) passed in to the map transformation.\n",
    "- [Reduce memory usage](https://tensorflow.google.cn/guide/data_performance#Reducing_memory_footprint) when applying the interleave, prefetch, and shuffle transformations.\n",
    "\n",
    "## Reproducing the figures\n",
    "> Note: the rest of this notebook details how to reproduce the figures above. Feel free to play around with this code, but understanding it is not an essential part of this tutorial.\n",
    "\n",
    "To go deeper in the tf.data.Dataset API understanding, you can play with your own pipelines. Below is the code used to plot the images from this guide. It can be a good starting point, showing some workarounds for common difficulties such as:\n",
    "- execution time reproducibility\n",
    "- mapped functions eager execution\n",
    "- interleave transformation callable"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import itertools\n",
    "from collections import defaultdict\n",
    "\n",
    "import numpy as np\n",
    "import matplotlib as mpl\n",
    "import matplotlib.pyplot as plt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The dataset\n",
    "Similar to the ArtificialDataset, you can build a dataset returning the time spent in each step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class TimeMeasuredDataset(tf.data.Dataset):\n",
    "    # Output: (steps, timings, counters)\n",
    "    OUTPUT_TYPES = (tf.dtypes.string, tf.dtypes.float32, tf.dtypes.int32)\n",
    "    OUTPUT_SHAPES = ((2, 1), (2, 2), (2, 3))\n",
    "    \n",
    "    _INSTANCES_COUNTER = itertools.count()  # Number of datasets generated\n",
    "    _EPOCHS_COUNTER = defaultdict(itertools.count)  # Number of epochs done for each dataset\n",
    "    \n",
    "    def _generator(instance_idx, num_samples):\n",
    "        epoch_idx = next(TimeMeasuredDataset._EPOCHS_COUNTER[instance_idx])\n",
    "        \n",
    "        # Opening the file\n",
    "        open_enter = time.perf_counter()\n",
    "        time.sleep(0.03)\n",
    "        open_elapsed = time.perf_counter() - open_enter\n",
    "        \n",
    "        for sample_idx in range(num_samples):\n",
    "            # Reading data (line, record) from the file\n",
    "            read_enter = time.perf_counter()\n",
    "            time.sleep(0.015)\n",
    "            read_elapsed = time.perf_counter() - read_enter\n",
    "            \n",
    "            yield (\n",
    "                [(\"Open\",), (\"Read\",)],\n",
    "                [(open_enter, open_elapsed), (read_enter, read_elapsed)],\n",
    "                [(instance_idx, epoch_idx, -1), (instance_idx, epoch_idx, sample_idx)]\n",
    "            )\n",
    "            open_enter, open_elapsed = -1., -1.  # Negative values will be filtered\n",
    "            \n",
    "    \n",
    "    def __new__(cls, num_samples=3):\n",
    "        return tf.data.Dataset.from_generator(\n",
    "            cls._generator,\n",
    "            output_types=cls.OUTPUT_TYPES,\n",
    "            output_shapes=cls.OUTPUT_SHAPES,\n",
    "            args=(next(cls._INSTANCES_COUNTER), num_samples)\n",
    "        )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This dataset provides samples of shape [[2, 1], [2, 2], [2, 3]] and of type [tf.dtypes.string, tf.dtypes.float32, tf.dtypes.int32]. Each sample is:\n",
    "```txt\n",
    "(\n",
    "  [(\"Open\"), (\"Read\")],\n",
    "  [(t0, d), (t0, d)],\n",
    "  [(i, e, -1), (i, e, s)]\n",
    ")\n",
    "```\n",
    "Where:\n",
    "- Open and Read are step identifiers\n",
    "- t0 is the timestamp when the corresponding step started\n",
    "- d is the time spent in the corresponding step\n",
    "- i is the instance index\n",
    "- e is the epoch index (number of times the dataset has been iterated)\n",
    "- s is the sample index\n",
    "\n",
    "### The iteration loop\n",
    "Make the iteration loop a little bit more complicated to aggregate all timings. This only works with datasets generating samples as detailed above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def timelined_benchmark(dataset, num_epochs=2):\n",
    "    # Initialize accumulators\n",
    "    steps_acc = tf.zeros([0, 1], dtype=tf.dtypes.string)\n",
    "    times_acc = tf.zeros([0, 2], dtype=tf.dtypes.float32)\n",
    "    values_acc = tf.zeros([0, 3], dtype=tf.dtypes.int32)\n",
    "    \n",
    "    start_time = time.perf_counter()\n",
    "    for epoch_num in range(num_epochs):\n",
    "        epoch_enter = time.perf_counter()\n",
    "        for (steps, times, values) in dataset:\n",
    "            # Record dataset preparation information\n",
    "            steps_acc = tf.concat((steps_acc, steps), axis=0)\n",
    "            times_acc = tf.concat((times_acc, times), axis=0)\n",
    "            values_acc = tf.concat((values_acc, values), axis=0)\n",
    "            \n",
    "            # Simulate training time\n",
    "            train_enter = time.perf_counter()\n",
    "            time.sleep(0.01)\n",
    "            train_elapsed = time.perf_counter() - train_enter\n",
    "            \n",
    "            # Record training information\n",
    "            steps_acc = tf.concat((steps_acc, [[\"Train\"]]), axis=0)\n",
    "            times_acc = tf.concat((times_acc, [(train_enter, train_elapsed)]), axis=0)\n",
    "            values_acc = tf.concat((values_acc, [values[-1]]), axis=0)\n",
    "        \n",
    "        epoch_elapsed = time.perf_counter() - epoch_enter\n",
    "        # Record epoch information\n",
    "        steps_acc = tf.concat((steps_acc, [[\"Epoch\"]]), axis=0)\n",
    "        times_acc = tf.concat((times_acc, [(epoch_enter, epoch_elapsed)]), axis=0)\n",
    "        values_acc = tf.concat((values_acc, [[-1, epoch_num, -1]]), axis=0)\n",
    "        time.sleep(0.001)\n",
    "    \n",
    "    tf.print(\"Execution time:\", time.perf_counter() - start_time)\n",
    "    return {\"steps\": steps_acc, \"times\": times_acc, \"values\": values_acc}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The plotting method\n",
    "Finally, define a function able to plot a timeline given the values returned by the timelined_benchmark function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def draw_timeline(timeline, title, width=0.5, annotate=False, save=False):\n",
    "    # Remove invalid entries (negative times, or empty steps) from the timelines\n",
    "    valid_mask = np.logical_and(timeline['times'] > 0, timeline['steps'] != b'')[:,0]\n",
    "    steps = timeline['steps'][valid_mask].numpy()\n",
    "    times = timeline['times'][valid_mask].numpy()\n",
    "    values = timeline['values'][valid_mask].numpy()\n",
    "    \n",
    "    # Get a set of different steps, ordered by the first time they are encountered\n",
    "    step_ids, indices = np.stack(np.unique(steps, return_index=True))\n",
    "    step_ids = step_ids[np.argsort(indices)]\n",
    "\n",
    "    # Shift the starting time to 0 and compute the maximal time value\n",
    "    min_time = times[:,0].min()\n",
    "    times[:,0] = (times[:,0] - min_time)\n",
    "    end = max(width, (times[:,0]+times[:,1]).max() + 0.01)\n",
    "    \n",
    "    cmap = mpl.cm.get_cmap(\"plasma\")\n",
    "    plt.close()\n",
    "    fig, axs = plt.subplots(len(step_ids), sharex=True, gridspec_kw={'hspace': 0})\n",
    "    fig.suptitle(title)\n",
    "    fig.set_size_inches(17.0, len(step_ids))\n",
    "    plt.xlim(-0.01, end)\n",
    "    \n",
    "    for i, step in enumerate(step_ids):\n",
    "        step_name = step.decode()\n",
    "        ax = axs[i]\n",
    "        ax.set_ylabel(step_name)\n",
    "        ax.set_ylim(0, 1)\n",
    "        ax.set_yticks([])\n",
    "        ax.set_xlabel(\"time (s)\")\n",
    "        ax.set_xticklabels([])\n",
    "        ax.grid(which=\"both\", axis=\"x\", color=\"k\", linestyle=\":\")\n",
    "        \n",
    "        # Get timings and annotations for the given step\n",
    "        entries_mask = np.squeeze(steps==step)\n",
    "        serie = np.unique(times[entries_mask], axis=0)\n",
    "        annotations = values[entries_mask]\n",
    "        \n",
    "        ax.broken_barh(serie, (0, 1), color=cmap(i / len(step_ids)), linewidth=1, alpha=0.66)\n",
    "        if annotate:\n",
    "            for j, (start, width) in enumerate(serie):\n",
    "                annotation = \"\\n\".join([f\"{l}: {v}\" for l,v in zip((\"i\", \"e\", \"s\"), annotations[j])])\n",
    "                ax.text(start + 0.001 + (0.001 * (j % 2)), 0.55 - (0.1 * (j % 2)), annotation,\n",
    "                        horizontalalignment='left', verticalalignment='center')\n",
    "    if save:\n",
    "        plt.savefig(title.lower().translate(str.maketrans(\" \", \"_\")) + \".svg\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Wrapping the mapped functions\n",
    "To run mapped functions in an eager context, you have to wrap them inside a [tf.py_function](https://tensorflow.google.cn/api_docs/python/tf/py_function) call."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def map_decorator(func):\n",
    "    def wrapper(steps, times, values):\n",
    "        # Use a tf.py_function to prevent auto-graph from compiling the method\n",
    "        return tf.py_function(\n",
    "            func,\n",
    "            inp=(steps, times, values),\n",
    "            Tout=(steps.dtype, times.dtype, values.dtype)\n",
    "        )\n",
    "    return wrapper"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Pipeline comparison"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "_batch_map_num_items = 50\n",
    "\n",
    "def dataset_generator_fun(*args):\n",
    "    return TimeMeasuredDataset(num_samples=_batch_map_num_items)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Naive pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@map_decorator\n",
    "def naive_map(steps, times, values):\n",
    "    map_enter = time.perf_counter()\n",
    "    time.sleep(0.001)  # Time consuming step\n",
    "    time.sleep(0.0001)  # Memory consuming step\n",
    "    map_elapsed = time.perf_counter() - map_enter\n",
    "\n",
    "    return (\n",
    "        tf.concat((steps, [[\"Map\"]]), axis=0),\n",
    "        tf.concat((times, [[map_enter, map_elapsed]]), axis=0),\n",
    "        tf.concat((values, [values[-1]]), axis=0)\n",
    "    )\n",
    "\n",
    "naive_timeline = timelined_benchmark(\n",
    "    tf.data.Dataset.range(2)\n",
    "    .flat_map(dataset_generator_fun)\n",
    "    .map(naive_map)\n",
    "    .batch(_batch_map_num_items, drop_remainder=True)\n",
    "    .unbatch(),\n",
    "    5\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Optimized pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@map_decorator\n",
    "def time_consuming_map(steps, times, values):\n",
    "    map_enter = time.perf_counter()\n",
    "    time.sleep(0.001 * values.shape[0])  # Time consuming step\n",
    "    map_elapsed = time.perf_counter() - map_enter\n",
    "\n",
    "    return (\n",
    "        tf.concat((steps, tf.tile([[[\"1st map\"]]], [steps.shape[0], 1, 1])), axis=1),\n",
    "        tf.concat((times, tf.tile([[[map_enter, map_elapsed]]], [times.shape[0], 1, 1])), axis=1),\n",
    "        tf.concat((values, tf.tile([[values[:][-1][0]]], [values.shape[0], 1, 1])), axis=1)\n",
    "    )\n",
    "\n",
    "\n",
    "@map_decorator\n",
    "def memory_consuming_map(steps, times, values):\n",
    "    map_enter = time.perf_counter()\n",
    "    time.sleep(0.0001 * values.shape[0])  # Memory consuming step\n",
    "    map_elapsed = time.perf_counter() - map_enter\n",
    "\n",
    "    # Use tf.tile to handle the batch dimension\n",
    "    return (\n",
    "        tf.concat((steps, tf.tile([[[\"2nd map\"]]], [steps.shape[0], 1, 1])), axis=1),\n",
    "        tf.concat((times, tf.tile([[[map_enter, map_elapsed]]], [times.shape[0], 1, 1])), axis=1),\n",
    "        tf.concat((values, tf.tile([[values[:][-1][0]]], [values.shape[0], 1, 1])), axis=1)\n",
    "    )\n",
    "\n",
    "\n",
    "optimized_timeline = timelined_benchmark(\n",
    "    tf.data.Dataset.range(2)\n",
    "    .interleave(  # Parallelize data reading\n",
    "        dataset_generator_fun,\n",
    "        num_parallel_calls=tf.data.experimental.AUTOTUNE\n",
    "    )\n",
    "    .batch(  # Vectorize your mapped function\n",
    "        _batch_map_num_items,\n",
    "        drop_remainder=True)\n",
    "    .map(  # Parallelize map transformation\n",
    "        time_consuming_map,\n",
    "        num_parallel_calls=tf.data.experimental.AUTOTUNE\n",
    "    )\n",
    "    .cache()  # Cache data\n",
    "    .map(  # Reduce memory usage\n",
    "        memory_consuming_map,\n",
    "        num_parallel_calls=tf.data.experimental.AUTOTUNE\n",
    "    )\n",
    "    .prefetch(  # Overlap producer and consumer works\n",
    "        tf.data.experimental.AUTOTUNE\n",
    "    )\n",
    "    .unbatch(),\n",
    "    5\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "draw_timeline(naive_timeline, \"Naive\", 15)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "draw_timeline(optimized_timeline, \"Optimized\", 15)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
