{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# tf.data: Building TensorFlow input pipelines\n",
    "[tf.data](https://tensorflow.google.cn/api_docs/python/tf/data) lets you build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The tf.data API makes it possible to handle large amounts of data read from different data formats.\n",
    "\n",
    "The tf.data API introduces a [tf.data.Dataset](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset) abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label.\n",
    "\n",
    "There are two distinct ways to create a dataset:\n",
    "- A data source constructs a Dataset from data stored in memory or in one or more files.\n",
    "- A data transformation constructs a dataset from one or more tf.data.Dataset objects.\n",
    "\n",
    "## Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "import pathlib\n",
    "import os\n",
    "import matplotlib.pyplot as plt\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "np.set_printoptions(precision=4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Basic mechanics\n",
    "To create an input pipeline, you must start with a data source. For example, to construct a Dataset from data in memory, you can use [tf.data.Dataset.from_tensors()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#from_tensors) or [tf.data.Dataset.from_tensor_slices()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#from_tensor_slices). Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use [tf.data.TFRecordDataset()](https://tensorflow.google.cn/api_docs/python/tf/data/TFRecordDataset).\n",
    "\n",
    "Once you have a Dataset object, you can transform it into a new Dataset by chaining method calls on the tf.data.Dataset object. For example, you can apply per-element transformations such as [Dataset.map()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#map), and multi-element transformations such as [Dataset.batch()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#batch). See the [tf.data.Dataset](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset) documentation for a complete list of transformations.\n",
    "\n",
    "The Dataset object is a Python iterable. This makes it possible to consume its elements using a for loop:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])\n",
    "print(dataset)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for elem in dataset:\n",
    "  print(elem.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Or explicitly create a Python iterator using iter and consume its elements using next:\n",
    "it = iter(dataset)\n",
    "print(next(it).numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Alternatively, dataset elements can be consumed using the reduce transformation, which reduces all elements to produce a single result. The following example shows how to use reduce to compute the sum of a dataset of integers:\n",
    "print(dataset.reduce(0, lambda state, value: state + value).numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dataset structure\n",
    "A dataset contains elements that each have the same (nested) structure, and the individual components of the structure can be of any type representable by [tf.TypeSpec](https://tensorflow.google.cn/api_docs/python/tf/TypeSpec), including [tf.Tensor](https://tensorflow.google.cn/api_docs/python/tf/Tensor), [tf.sparse.SparseTensor](https://tensorflow.google.cn/api_docs/python/tf/sparse/SparseTensor), [tf.RaggedTensor](https://tensorflow.google.cn/api_docs/python/tf/RaggedTensor), [tf.TensorArray](https://tensorflow.google.cn/api_docs/python/tf/TensorArray), or [tf.data.Dataset](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset).\n",
    "\n",
    "The [Dataset.element_spec](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#element_spec) property lets you inspect the type of each element component. The property returns a nested structure of tf.TypeSpec objects, matching the structure of the element, which may be a single component, a tuple of components, or a nested tuple of components. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))\n",
    "print(dataset1.element_spec)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset2 = tf.data.Dataset.from_tensor_slices(\n",
    "   (tf.random.uniform([4]),\n",
    "    tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))\n",
    "print(dataset2.element_spec)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset3 = tf.data.Dataset.zip((dataset1, dataset2))\n",
    "print(dataset3.element_spec)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A dataset containing a sparse tensor\n",
    "dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))\n",
    "print(dataset4.element_spec)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use value_type to see the type of value represented by the element spec\n",
    "print(dataset4.element_spec.value_type)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Dataset transformations support datasets of any structure. When using the Dataset.map() and Dataset.filter() transformations, which apply a function to each element, the element structure determines the arguments of the function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset1 = tf.data.Dataset.from_tensor_slices(\n",
    "    tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))\n",
    "print(dataset1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for z in dataset1:\n",
    "  print(z.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset2 = tf.data.Dataset.from_tensor_slices(\n",
    "   (tf.random.uniform([4]),\n",
    "    tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))\n",
    "print(dataset2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset3 = tf.data.Dataset.zip((dataset1, dataset2))\n",
    "print(dataset3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for a, (b, c) in dataset3:\n",
    "  print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Reading input data\n",
    "### Consuming NumPy arrays\n",
    "If all of your input data fits in memory, the simplest way to create a Dataset from it is to convert it to tf.Tensor objects and use [Dataset.from_tensor_slices()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#from_tensor_slices)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train, test = tf.keras.datasets.fashion_mnist.load_data()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "images, labels = train\n",
    "images = images/255\n",
    "\n",
    "dataset = tf.data.Dataset.from_tensor_slices((images, labels))\n",
    "print(dataset)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> Note: The above code snippet will embed the features and labels arrays in your TensorFlow graph as [tf.constant()](https://tensorflow.google.cn/api_docs/python/tf/constant) operations. This works well for a small dataset, but wastes memory (because the contents of the array will be copied multiple times) and can run into the 2GB limit for the tf.GraphDef protocol buffer.\n",
    "\n",
    "### Consuming Python generators\n",
    "Another common data source that can easily be ingested as a tf.data.Dataset is the Python generator.\n",
    "> Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python GIL."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def count(stop):\n",
    "  i = 0\n",
    "  while i<stop:\n",
    "    yield i\n",
    "    i += 1\n",
    "\n",
    "for n in count(5):\n",
    "  print(n)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The [Dataset.from_generator](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#from_generator) constructor converts the Python generator to a fully functional tf.data.Dataset.\n",
    "\n",
    "The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional args argument, which is passed as the callable's arguments.\n",
    "\n",
    "The output_types argument is required because tf.data builds a tf.Graph internally, and graph edges require a tf.dtype."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ds_counter = tf.data.Dataset.from_generator(\n",
    "    count, args=[25], output_types=tf.int32, output_shapes=())\n",
    "\n",
    "for count_batch in ds_counter.repeat().batch(10).take(10):\n",
    "  print(count_batch.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output_shapes argument is not required, but is highly recommended, as many TensorFlow operations do not support tensors with an unknown rank. If the length of a particular axis is unknown or variable, set it to None in output_shapes.\n",
    "\n",
    "It is also important to note that output_shapes and output_types follow the same nesting rules as other dataset methods.\n",
    "\n",
    "Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector with unknown length."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def gen_series():\n",
    "  i = 0\n",
    "  while True:\n",
    "    size = np.random.randint(0, 10)\n",
    "    yield i, np.random.normal(size=(size,))\n",
    "    i += 1\n",
    "\n",
    "for i, series in gen_series():\n",
    "  print(i, \":\", str(series))\n",
    "  if i > 5:\n",
    "    break"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The first output is an int32, the second is a float32.\n",
    "# The first item is a scalar, shape (), and the second is a vector of unknown length, shape (None,)\n",
    "ds_series = tf.data.Dataset.from_generator(\n",
    "    gen_series, \n",
    "    output_types=(tf.int32, tf.float32), \n",
    "    output_shapes=((), (None,)))\n",
    "\n",
    "print(ds_series)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Now it can be used like a regular tf.data.Dataset.\n",
    "# Note that when batching a dataset with a variable shape, you need to use Dataset.padded_batch.\n",
    "ds_series_batch = ds_series.shuffle(20).padded_batch(10, padded_shapes=([], [None]))\n",
    "\n",
    "ids, sequence_batch = next(iter(ds_series_batch))\n",
    "print(ids.numpy())\n",
    "print()\n",
    "print(sequence_batch.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> Note: As of TensorFlow 2.2, the padded_shapes argument is no longer required. The default behavior is to pad all axes to the longest in the batch.\n",
    "\n",
    "For a more realistic example, try wrapping [preprocessing.image.ImageDataGenerator](https://tensorflow.google.cn/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) as a tf.data.Dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# First download the data:\n",
    "flowers = tf.keras.utils.get_file(\n",
    "    'flower_photos',\n",
    "    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n",
    "    untar=True)\n",
    "\n",
    "# Create the image.ImageDataGenerator\n",
    "img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)\n",
    "images, labels = next(img_gen.flow_from_directory(flowers))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(images.dtype, images.shape)\n",
    "print(labels.dtype, labels.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ds = tf.data.Dataset.from_generator(\n",
    "    img_gen.flow_from_directory, args=[flowers], \n",
    "    output_types=(tf.float32, tf.float32), \n",
    "    output_shapes=([32,256,256,3], [32,5])\n",
    ")\n",
    "print(ds)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Consuming TFRecord data\n",
    "See [Loading TFRecords](https://tensorflow.google.cn/tutorials/load_data/tf_records) for an end-to-end example.\n",
    "\n",
    "The tf.data API supports a variety of file formats so that you can process large datasets that do not fit in memory. For example, the TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. The [tf.data.TFRecordDataset](https://tensorflow.google.cn/api_docs/python/tf/data/TFRecordDataset) class enables you to stream over the contents of one or more TFRecord files as part of an input pipeline.\n",
    "\n",
    "Here is an example using the test file from the French Street Name Signs (FSNS) dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download the FSNS test file\n",
    "fsns_test_file = tf.keras.utils.get_file(\"fsns.tfrec\", \n",
    "                                         \"https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The filenames argument to the TFRecordDataset initializer can either be a string, a list of strings, or a tf.Tensor of strings. Therefore, if you have two sets of files for training and validation purposes, you can create a factory method that produces the dataset, taking the filenames as an input argument:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])\n",
    "print(dataset)"
   ]
  },
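  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the factory-method idea mentioned above (the make_dataset name and the batch size are illustrative assumptions, not part of the original guide), the same function can build either a training or a validation dataset depending on which filenames are passed in:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A hypothetical factory method: the filenames argument selects which\n",
    "# set of TFRecord files (training or validation) the dataset reads.\n",
    "def make_dataset(filenames, batch_size=32):\n",
    "  ds = tf.data.TFRecordDataset(filenames=filenames)\n",
    "  return ds.batch(batch_size)\n",
    "\n",
    "train_ds = make_dataset([fsns_test_file])\n",
    "print(train_ds)"
   ]
  },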
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Many TensorFlow projects use serialized tf.train.Example records in their TFRecord files. These need to be decoded before they can be inspected:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "raw_example = next(iter(dataset))\n",
    "parsed = tf.train.Example.FromString(raw_example.numpy())\n",
    "\n",
    "print(parsed.features.feature['image/text'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Consuming text data\n",
    "See [Loading Text](https://tensorflow.google.cn/tutorials/load_data/text) for an end-to-end example.\n",
    "\n",
    "Many datasets are distributed as one or more text files. The [tf.data.TextLineDataset](https://tensorflow.google.cn/api_docs/python/tf/data/TextLineDataset) provides an easy way to extract lines from one or more text files. Given one or more filenames, a TextLineDataset will produce one string-valued element per line of those files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'\n",
    "file_names = ['cowper.txt', 'derby.txt', 'butler.txt']\n",
    "\n",
    "file_paths = [\n",
    "    tf.keras.utils.get_file(file_name, directory_url + file_name)\n",
    "    for file_name in file_names\n",
    "]\n",
    "\n",
    "dataset = tf.data.TextLineDataset(file_paths)\n",
    "\n",
    "# Here are the first five lines of the first file\n",
    "for line in dataset.take(5):\n",
    "  print(line.numpy())\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# To alternate lines between files use Dataset.interleave. This makes it easier to shuffle files together.\n",
    "# Here are the first, second and third lines from each translation:\n",
    "files_ds = tf.data.Dataset.from_tensor_slices(file_paths)\n",
    "lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)\n",
    "\n",
    "for i, line in enumerate(lines_ds.take(9)):\n",
    "  if i % 3 == 0:\n",
    "    print()\n",
    "  print(line.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# By default, a TextLineDataset yields every line of each file, which may not be desirable,\n",
    "# for example if the file starts with a header line, or contains comments.\n",
    "# These lines can be removed using the Dataset.skip() or Dataset.filter() transformations.\n",
    "# Here, you skip the first line, then filter to keep only the lines for passengers who survived.\n",
    "titanic_file = tf.keras.utils.get_file(\"train.csv\", \n",
    "                                       \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n",
    "titanic_lines = tf.data.TextLineDataset(titanic_file)\n",
    "\n",
    "for line in titanic_lines.take(10):\n",
    "  print(line.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def survived(line):\n",
    "  return tf.not_equal(tf.strings.substr(line, 0, 1), \"0\")\n",
    "\n",
    "survivors = titanic_lines.skip(1).filter(survived)\n",
    "\n",
    "for line in survivors.take(10):\n",
    "  print(line.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Consuming CSV data\n",
    "See [Loading CSV Files](https://tensorflow.google.cn/tutorials/load_data/csv) and [Loading Pandas DataFrames](https://tensorflow.google.cn/tutorials/load_data/pandas) for more examples.\n",
    "\n",
    "The CSV file format is a popular format for storing tabular data in plain text. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "titanic_file = tf.keras.utils.get_file(\"train.csv\", \n",
    "                                       \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n",
    "\n",
    "df = pd.read_csv(titanic_file, index_col=None)\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# If your data fits in memory, the Dataset.from_tensor_slices method also works on dictionaries, allowing the data to be easily imported:\n",
    "titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))\n",
    "\n",
    "for feature_batch in titanic_slices.take(1):\n",
    "  for key, value in feature_batch.items():\n",
    "    print(\"  {!r:20s}: {}\".format(key, value))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A more scalable approach is to load from disk as necessary.\n",
    "\n",
    "The tf.data module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).\n",
    "\n",
    "The [experimental.make_csv_dataset](https://tensorflow.google.cn/api_docs/python/tf/data/experimental/make_csv_dataset) function is the high-level interface for reading sets of CSV files. It supports column type inference and many other features, like batching and shuffling, to make usage simple."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "titanic_batches = tf.data.experimental.make_csv_dataset(\n",
    "    titanic_file, batch_size=4,\n",
    "    label_name=\"survived\")\n",
    "\n",
    "for feature_batch, label_batch in titanic_batches.take(1):\n",
    "  print(\"'survived': {}\".format(label_batch))\n",
    "  print(\"features:\")\n",
    "  for key, value in feature_batch.items():\n",
    "    print(\"  {!r:20s}: {}\".format(key, value))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# You can use the select_columns argument if you only need a subset of columns.\n",
    "titanic_batches = tf.data.experimental.make_csv_dataset(\n",
    "    titanic_file, batch_size=4,\n",
    "    label_name=\"survived\", select_columns=['class', 'fare', 'survived'])\n",
    "\n",
    "for feature_batch, label_batch in titanic_batches.take(1):\n",
    "  print(\"'survived': {}\".format(label_batch))\n",
    "  for key, value in feature_batch.items():\n",
    "    print(\"  {!r:20s}: {}\".format(key, value))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# There is also a lower-level experimental.CsvDataset class,\n",
    "# which provides finer grained control. It does not support column type inference;\n",
    "# instead, you must specify the type of each column.\n",
    "# If some columns are empty, this low-level interface lets you provide default values instead of column types.\n",
    "titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]\n",
    "dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types, header=True)\n",
    "\n",
    "for line in dataset.take(10):\n",
    "  print([item.numpy() for item in line])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a dataset that reads records from a CSV file with four float columns, which may have missing values:\n",
    "record_defaults = [999,999,999,999]\n",
    "dataset = tf.data.experimental.CsvDataset(\"missing.csv\", record_defaults)\n",
    "dataset = dataset.map(lambda *items: tf.stack(items))\n",
    "print(dataset)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for line in dataset:\n",
    "  print(line.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By default, a CsvDataset yields every column of every line of the file, which may not be desirable, for example if the file starts with a header line that should be ignored, or if some columns are not needed in the input. These lines and fields can be removed with the header and select_cols arguments respectively."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a dataset that reads all of the records from a CSV file with a header, extracting float data from columns 2 and 4.\n",
    "record_defaults = [999, 999] # Only provide defaults for the selected columns\n",
    "dataset = tf.data.experimental.CsvDataset(\"missing.csv\", record_defaults, select_cols=[1, 3])\n",
    "dataset = dataset.map(lambda *items: tf.stack(items))\n",
    "print(dataset)\n",
    "\n",
    "for line in dataset:\n",
    "  print(line.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Consuming sets of files\n",
    "There are many datasets distributed as a set of files, where each file is an example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Note: these images are licensed CC-BY; see LICENSE.txt for details.\n",
    "flowers_root = tf.keras.utils.get_file(\n",
    "    'flower_photos',\n",
    "    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n",
    "    untar=True)\n",
    "flowers_root = pathlib.Path(flowers_root)\n",
    "\n",
    "# The root directory contains a directory for each class\n",
    "for item in flowers_root.glob(\"*\"):\n",
    "  print(item.name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The files in each class directory are examples\n",
    "list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))\n",
    "for f in list_ds.take(5):\n",
    "  print(f.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read the data using the tf.io.read_file function and extract the label from the path, returning (image, label) pairs\n",
    "def process_path(file_path):\n",
    "  label = tf.strings.split(file_path, os.sep)[-2]\n",
    "  return tf.io.read_file(file_path), label\n",
    "\n",
    "labeled_ds = list_ds.map(process_path)\n",
    "\n",
    "for image_raw, label_text in labeled_ds.take(1):\n",
    "  print(repr(image_raw.numpy()[:100]))\n",
    "  print()\n",
    "  print(label_text.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Batching dataset elements\n",
    "### Simple batching\n",
    "The simplest form of batching stacks n consecutive elements of a dataset into a single element. The [Dataset.batch()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#batch) transformation does exactly this, with the same constraints as the [tf.stack()](https://tensorflow.google.cn/api_docs/python/tf/stack) operator, applied to each component of the elements: i.e. for each component i, all elements must have a tensor of the exact same shape."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "inc_dataset = tf.data.Dataset.range(100)\n",
    "dec_dataset = tf.data.Dataset.range(0, -100, -1)\n",
    "dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset))\n",
    "batched_dataset = dataset.batch(4)\n",
    "\n",
    "for batch in batched_dataset.take(4):\n",
    "  print([arr.numpy() for arr in batch])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# While tf.data tries to propagate shape information, the default settings of Dataset.batch\n",
    "# result in an unknown batch size because the last batch may not be full. Note the Nones in the shape:\n",
    "print(batched_dataset)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use the drop_remainder argument to ignore that last batch and get full shape propagation:\n",
    "batched_dataset = dataset.batch(7, drop_remainder=True)\n",
    "print(batched_dataset)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Batching tensors with padding\n",
    "The above recipe works for tensors that all have the same size. However, many models (e.g. sequence models) work with input data that can have varying sizes (e.g. sequences of different lengths). To handle this case, the [Dataset.padded_batch](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#padded_batch) transformation enables you to batch tensors of different shapes by specifying one or more dimensions in which they may be padded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = tf.data.Dataset.range(100)\n",
    "dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))\n",
    "dataset = dataset.padded_batch(4, padded_shapes=(None,))\n",
    "\n",
    "for batch in dataset.take(2):\n",
    "  print(batch.numpy())\n",
    "  print()"
   ]
  },
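  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small sketch of overriding the default padding value of 0 (the value -1 below is an arbitrary choice so the padding is easy to spot), pass padding_values to Dataset.padded_batch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pad with -1 instead of the default 0; the dtype must match the\n",
    "# element dtype (Dataset.range yields int64 values).\n",
    "dataset = tf.data.Dataset.range(100)\n",
    "dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))\n",
    "dataset = dataset.padded_batch(4, padded_shapes=(None,),\n",
    "                               padding_values=tf.constant(-1, dtype=tf.int64))\n",
    "\n",
    "for batch in dataset.take(2):\n",
    "  print(batch.numpy())\n",
    "  print()"
   ]
  },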
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Dataset.padded_batch transformation allows you to set different padding for each dimension of each component, and it may be variable-length (signified by None in the example above) or constant-length. It is also possible to override the padding value, which defaults to 0.\n",
    "\n",
    "## Training workflows\n",
    "### Processing multiple epochs\n",
    "The tf.data API offers two main ways to process multiple epochs of the same data.\n",
    "\n",
    "The simplest way to iterate over a dataset in multiple epochs is to use the [Dataset.repeat()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#repeat) transformation. First, create a dataset of titanic data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "titanic_file = tf.keras.utils.get_file(\"train.csv\", \n",
    "                                       \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n",
    "titanic_lines = tf.data.TextLineDataset(titanic_file)\n",
    "\n",
    "def plot_batch_sizes(ds):\n",
    "  batch_sizes = [batch.shape[0] for batch in ds]\n",
    "  plt.bar(range(len(batch_sizes)), batch_sizes)\n",
    "  plt.xlabel('Batch number')\n",
    "  plt.ylabel('Batch size')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Applying the [Dataset.repeat()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#repeat) transformation with no arguments will repeat the input indefinitely.\n",
    "\n",
    "The Dataset.repeat transformation concatenates its arguments without signaling the end of one epoch and the beginning of the next. Because of this, a Dataset.batch applied after Dataset.repeat will yield batches that straddle epoch boundaries:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "titanic_batches = titanic_lines.repeat(3).batch(128)\n",
    "plot_batch_sizes(titanic_batches)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# If you need clear epoch separation, put Dataset.batch before the repeat:\n",
    "titanic_batches = titanic_lines.batch(128).repeat(3)\n",
    "plot_batch_sizes(titanic_batches)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch,\n",
    "# then it is simplest to restart the dataset iteration on each epoch:\n",
    "epochs = 3\n",
    "dataset = titanic_lines.batch(128)\n",
    "\n",
    "for epoch in range(epochs):\n",
    "  for batch in dataset:\n",
    "    print(batch.shape)\n",
    "  print(\"End of epoch: \", epoch)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Randomly shuffling input data\n",
    "The [Dataset.shuffle()](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#shuffle) transformation maintains a fixed-size buffer and chooses the next element uniformly at random from that buffer.\n",
    "\n",
    "> Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory and significant time to fill. Consider using [Dataset.interleave](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#interleave) across files if this becomes a problem.\n",
    "\n",
    "Add an index to the dataset so you can see the effect:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lines = tf.data.TextLineDataset(titanic_file)\n",
    "counter = tf.data.experimental.Counter()\n",
    "\n",
    "dataset = tf.data.Dataset.zip((counter, lines))\n",
    "dataset = dataset.shuffle(buffer_size=100)\n",
    "dataset = dataset.batch(20)\n",
    "print(dataset)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Since the buffer_size is 100 and the batch size is 20, the first batch contains no elements with an index over 120.\n",
    "n, line_batch = next(iter(dataset))\n",
    "print(n.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As with Dataset.batch, the order relative to Dataset.repeat matters.\n",
    "\n",
    "Dataset.shuffle doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = tf.data.Dataset.zip((counter, lines))\n",
    "shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)\n",
    "\n",
    "print(\"Here are the item ids near the epoch boundary:\\n\")\n",
    "for n, line_batch in shuffled.skip(60).take(5):\n",
    "  print(n.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]\n",
    "plt.plot(shuffle_repeat, label=\"shuffle().repeat()\")\n",
    "plt.ylabel(\"Mean item ID\")\n",
    "plt.legend()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# But a repeat before a shuffle mixes the epoch boundaries together:\n",
    "dataset = tf.data.Dataset.zip((counter, lines))\n",
    "shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)\n",
    "\n",
    "print(\"Here are the item ids near the epoch boundary:\\n\")\n",
    "for n, line_batch in shuffled.skip(55).take(15):\n",
    "  print(n.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]\n",
    "\n",
    "plt.plot(shuffle_repeat, label=\"shuffle().repeat()\")\n",
    "plt.plot(repeat_shuffle, label=\"repeat().shuffle()\")\n",
    "plt.ylabel(\"Mean item ID\")\n",
    "plt.legend()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Preprocessing data\n",
    "The [Dataset.map(f)](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#map) transformation produces a new dataset by applying a given function f to each element of the input dataset. It is based on the map() function that is commonly applied to lists (and other structures) in functional programming languages. The function f takes the tf.Tensor objects that represent a single element in the input, and returns the tf.Tensor objects that will represent a single element in the new dataset. Its implementation uses standard TensorFlow operations to transform one element into another.\n",
    "\n",
    "This section covers common examples of how to use Dataset.map().\n",
    "\n",
    "### Decoding image data and resizing it\n",
    "When training a neural network on real-world image data, it is often necessary to convert images of different sizes to a common size, so that they may be batched into a fixed size."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Rebuild the flower filenames dataset\n",
    "list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))\n",
    "\n",
    "# Write a function that manipulates the dataset elements:\n",
    "# it reads an image from a file, decodes it into a dense tensor, and resizes it to a fixed shape\n",
    "def parse_image(filename):\n",
    "  parts = tf.strings.split(filename, os.sep)\n",
    "  label = parts[-2]\n",
    "\n",
    "  image = tf.io.read_file(filename)\n",
    "  image = tf.image.decode_jpeg(image)\n",
    "  image = tf.image.convert_image_dtype(image, tf.float32)\n",
    "  image = tf.image.resize(image, [128, 128])\n",
    "  return image, label\n",
    "\n",
    "# Test that it works\n",
    "file_path = next(iter(list_ds))\n",
    "image, label = parse_image(file_path)\n",
    "\n",
    "def show(image, label):\n",
    "  plt.figure()\n",
    "  plt.imshow(image)\n",
    "  plt.title(label.numpy().decode('utf-8'))\n",
    "  plt.axis('off')\n",
    "\n",
    "show(image, label)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Map it over the dataset\n",
    "images_ds = list_ds.map(parse_image)\n",
    "\n",
    "for image, label in images_ds.take(2):\n",
    "  show(image, label)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Applying arbitrary Python logic\n",
    "For performance reasons, use TensorFlow operations for preprocessing your data whenever possible. However, it is sometimes useful to call external Python libraries when parsing your input data. You can use the [tf.py_function()](https://tensorflow.google.cn/api_docs/python/tf/py_function) operation in a Dataset.map() transformation.\n",
    "\n",
    "For example, if you want to apply a random rotation, the [tf.image](https://tensorflow.google.cn/api_docs/python/tf/image) module only has [tf.image.rot90](https://tensorflow.google.cn/api_docs/python/tf/image/rot90), which is not very useful for image augmentation.\n",
    "\n",
    "> Note: tensorflow_addons has a TensorFlow compatible rotate in tfa.image.rotate."
   ]
  },
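  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a brief sketch of that note (assuming the optional tensorflow_addons package is installed; the rotation angle below is an arbitrary choice), tfa.image.rotate takes an angle in radians and runs as a regular TensorFlow op:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Requires: pip install tensorflow-addons\n",
    "import tensorflow_addons as tfa\n",
    "\n",
    "# Rotate one image by pi/8 radians; tfa.image.rotate stays inside the TF graph,\n",
    "# so no tf.py_function wrapper is needed.\n",
    "image, label = next(iter(images_ds))\n",
    "rotated = tfa.image.rotate(image, tf.constant(np.pi/8))\n",
    "show(rotated, label)"
   ]
  },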
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# To demonstrate tf.py_function, try using the scipy.ndimage.rotate function instead:\n",
    "import scipy.ndimage as ndimage\n",
    "\n",
    "def random_rotate_image(image):\n",
    "  image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False)\n",
    "  return image\n",
    "\n",
    "image, label = next(iter(images_ds))\n",
    "image = random_rotate_image(image)\n",
    "show(image, label)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# To use this function with a dataset, the same caveats apply as with Dataset.from_generator:\n",
    "# you need to describe the return shapes and types when you apply the function\n",
    "def tf_random_rotate_image(image, label):\n",
    "  im_shape = image.shape\n",
    "  [image,] = tf.py_function(random_rotate_image, [image], [tf.float32])\n",
    "  image.set_shape(im_shape)\n",
    "  return image, label\n",
    "\n",
    "rot_ds = images_ds.map(tf_random_rotate_image)\n",
    "\n",
    "for image, label in rot_ds.take(2):\n",
    "  show(image, label)"
   ]
  },
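  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If rotations by multiples of 90 degrees are enough, a graph-friendly alternative (a sketch, not from the original guide) avoids tf.py_function entirely: draw a random `k` with tf.random.uniform and apply tf.image.rot90:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: a pure-TensorFlow random rotation (multiples of 90 degrees only).\n",
    "# Unlike tf.py_function, this can run inside the graph and be serialized.\n",
    "def tf_rot90_image(image, label):\n",
    "  k = tf.random.uniform([], minval=0, maxval=4, dtype=tf.int32)\n",
    "  return tf.image.rot90(image, k=k), label\n",
    "\n",
    "rot90_ds = images_ds.map(tf_rot90_image)\n",
    "\n",
    "for image, label in rot90_ds.take(2):\n",
    "  show(image, label)"
   ]
  },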
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Parsing tf.Example protocol buffer messages\n",
    "Many input pipelines extract [tf.train.Example](https://tensorflow.google.cn/api_docs/python/tf/train/Example) protocol buffer messages from a TFRecord format. Each tf.train.Example record contains one or more \"features\", and the input pipeline typically converts these features into tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fsns_test_file = tf.keras.utils.get_file(\"fsns.tfrec\", \n",
    "                                         \"https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001\")\n",
    "dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])\n",
    "print(dataset)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# You can work with tf.train.Example protos outside of a tf.data.Dataset to inspect the data:\n",
    "raw_example = next(iter(dataset))\n",
    "parsed = tf.train.Example.FromString(raw_example.numpy())\n",
    "\n",
    "feature = parsed.features.feature\n",
    "raw_img = feature['image/encoded'].bytes_list.value[0]\n",
    "img = tf.image.decode_png(raw_img)\n",
    "plt.imshow(img)\n",
    "plt.axis('off')\n",
    "_ = plt.title(feature[\"image/text\"].bytes_list.value[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "raw_example = next(iter(dataset))\n",
    "\n",
    "def tf_parse(eg):\n",
    "  example = tf.io.parse_example(\n",
    "      eg[tf.newaxis], {\n",
    "          'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string),\n",
    "          'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)\n",
    "      })\n",
    "  return example['image/encoded'][0], example['image/text'][0]\n",
    "\n",
    "img, txt = tf_parse(raw_example)\n",
    "print(txt.numpy())\n",
    "print(repr(img.numpy()[:20]), \"...\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "decoded = dataset.map(tf_parse)\n",
    "print(decoded)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "image_batch, text_batch = next(iter(decoded.batch(10)))\n",
    "print(image_batch.shape)"
   ]
  },
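  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Building on tf_parse (a sketch, not part of the original): the parsed image/encoded strings are still PNG bytes, so another map can decode them, just as tf.image.decode_png was used on a single record above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: decode the raw PNG bytes for every record in the dataset.\n",
    "images_only = decoded.map(lambda img, txt: tf.image.decode_png(img))\n",
    "\n",
    "for img in images_only.take(1):\n",
    "  print(img.shape, img.dtype)"
   ]
  },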
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Time series windowing\n",
    "For an end-to-end time series example see: [Time series forecasting](https://tensorflow.google.cn/tutorials/text/time_series).\n",
    "\n",
    "Time series data is often organized with the time axis intact.\n",
    "\n",
    "Use a simple [Dataset.range](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#range) to demonstrate:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "range_ds = tf.data.Dataset.range(100000)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Typically, models based on this sort of data want a contiguous time slice.\n",
    "\n",
    "The simplest approach is to batch the data:\n",
    "\n",
    "**Using batch**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "batches = range_ds.batch(10, drop_remainder=True)\n",
    "\n",
    "for batch in batches.take(5):\n",
    "  print(batch.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or, to make dense predictions one step into the future, you can shift the features and labels one step relative to each other:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def dense_1_step(batch):\n",
    "  # Shift the features and labels one step relative to each other\n",
    "  return batch[:-1], batch[1:]\n",
    "\n",
    "predict_dense_1_step = batches.map(dense_1_step)\n",
    "\n",
    "for features, label in predict_dense_1_step.take(3):\n",
    "  print(features.numpy(), \" => \", label.numpy())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# To predict a whole window instead of a fixed offset, you can split the batches into two parts\n",
    "batches = range_ds.batch(15, drop_remainder=True)\n",
    "\n",
    "def label_next_5_steps(batch):\n",
    "  return (batch[:-5],   # Inputs: all except the last 5 steps\n",
    "          batch[-5:])   # Labels: the last 5 steps\n",
    "\n",
    "predict_5_steps = batches.map(label_next_5_steps)\n",
    "\n",
    "for features, label in predict_5_steps.take(3):\n",
    "  print(features.numpy(), \" => \", label.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To allow some overlap between the features of one batch and the labels of another, use [Dataset.zip](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#zip):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "feature_length = 10\n",
    "label_length = 5\n",
    "\n",
    "features = range_ds.batch(feature_length, drop_remainder=True)\n",
    "labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length])\n",
    "\n",
    "predict_5_steps = tf.data.Dataset.zip((features, labels))\n",
    "\n",
    "for features, label in predict_5_steps.take(3):\n",
    "  print(features.numpy(), \" => \", label.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Using window**:\n",
    "While using [Dataset.batch](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#batch) works, there are situations where you may need finer control. The [Dataset.window](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#window) method gives you complete control, but requires some care: it returns a Dataset of Datasets. See [Dataset structure](https://tensorflow.google.cn/guide/data#dataset_structure) for details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "window_size = 5\n",
    "\n",
    "windows = range_ds.window(window_size, shift=1)\n",
    "for sub_ds in windows.take(5):\n",
    "  print(sub_ds)"
   ]
  },
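  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each element yielded by windows is itself a Dataset, so its contents can be read with a nested loop (a quick sketch, not part of the original):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: read the contents of one sub-dataset with a nested iteration.\n",
    "for sub_ds in windows.take(1):\n",
    "  print([x.numpy() for x in sub_ds])"
   ]
  },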
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The [Dataset.flat_map](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#flat_map) method takes a dataset of datasets and flattens it into a single dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for x in windows.flat_map(lambda x: x).take(30):\n",
    "  print(x.numpy(), end=' ')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In nearly all cases, you will want to batch the dataset first:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def sub_to_batch(sub):\n",
    "  return sub.batch(window_size, drop_remainder=True)\n",
    "\n",
    "for example in windows.flat_map(sub_to_batch).take(5):\n",
    "  print(example.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now you can see that the shift argument controls how much each window moves over.\n",
    "\n",
    "Putting this together, you might write this function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def make_window_dataset(ds, window_size=5, shift=1, stride=1):\n",
    "  windows = ds.window(window_size, shift=shift, stride=stride)\n",
    "\n",
    "  def sub_to_batch(sub):\n",
    "    return sub.batch(window_size, drop_remainder=True)\n",
    "\n",
    "  windows = windows.flat_map(sub_to_batch)\n",
    "  return windows\n",
    "\n",
    "ds = make_window_dataset(range_ds, window_size=10, shift=5, stride=3)\n",
    "\n",
    "for example in ds.take(10):\n",
    "  print(example.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then it's easy to extract labels, as before:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dense_labels_ds = ds.map(dense_1_step)\n",
    "\n",
    "for inputs, labels in dense_labels_ds.take(3):\n",
    "  print(inputs.numpy(), \"=>\", labels.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Resampling\n",
    "When working with a dataset that is very class-imbalanced, you may want to resample the dataset. tf.data provides two methods to do this. The credit card fraud dataset is a good example of this kind of problem.\n",
    "\n",
    "> Note: See [Imbalanced Data](https://tensorflow.google.cn/tutorials/keras/imbalanced_data) for a full tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "zip_path = tf.keras.utils.get_file(\n",
    "    origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip',\n",
    "    fname='creditcard.zip',\n",
    "    extract=True)\n",
    "\n",
    "csv_path = zip_path.replace('.zip', '.csv')\n",
    "\n",
    "creditcard_ds = tf.data.experimental.make_csv_dataset(\n",
    "    csv_path, batch_size=1024, label_name=\"Class\",\n",
    "    # Set the column types: 30 floats and an int.\n",
    "    column_defaults=[float()]*30+[int()])\n",
    "\n",
    "# Now check the distribution of classes; it is highly skewed\n",
    "def count(counts, batch):\n",
    "  features, labels = batch\n",
    "  class_1 = labels == 1\n",
    "  class_1 = tf.cast(class_1, tf.int32)\n",
    "\n",
    "  class_0 = labels == 0\n",
    "  class_0 = tf.cast(class_0, tf.int32)\n",
    "\n",
    "  counts['class_0'] += tf.reduce_sum(class_0)\n",
    "  counts['class_1'] += tf.reduce_sum(class_1)\n",
    "\n",
    "  return counts\n",
    "\n",
    "counts = creditcard_ds.take(10).reduce(\n",
    "    initial_state={'class_0': 0, 'class_1': 0},\n",
    "    reduce_func = count)\n",
    "\n",
    "counts = np.array([counts['class_0'].numpy(),\n",
    "                   counts['class_1'].numpy()]).astype(np.float32)\n",
    "\n",
    "fractions = counts/counts.sum()\n",
    "print(fractions)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A common approach to training with an imbalanced dataset is to balance it. tf.data includes a few methods which enable this workflow:\n",
    "\n",
    "**Datasets sampling**: One approach to resampling a dataset is to use sample_from_datasets. This is more applicable when you have a separate data.Dataset for each class.\n",
    "\n",
    "Here, just use filter to generate them from the credit card fraud data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "negative_ds = (\n",
    "  creditcard_ds\n",
    "    .unbatch()\n",
    "    .filter(lambda features, label: label==0)\n",
    "    .repeat())\n",
    "positive_ds = (\n",
    "  creditcard_ds\n",
    "    .unbatch()\n",
    "    .filter(lambda features, label: label==1)\n",
    "    .repeat())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for features, label in positive_ds.batch(10).take(1):\n",
    "  print(label.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Use [tf.data.experimental.sample_from_datasets](https://tensorflow.google.cn/api_docs/python/tf/data/experimental/sample_from_datasets), passing the datasets and the weight for each:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "balanced_ds = tf.data.experimental.sample_from_datasets(\n",
    "    [negative_ds, positive_ds], [0.5, 0.5]).batch(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now the dataset produces examples of each class with a 50/50 probability:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for features, labels in balanced_ds.take(10):\n",
    "  print(labels.numpy())"
   ]
  },
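  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check (a sketch reusing the count reducer defined earlier, not part of the original), the new class balance can be measured the same way the skew was:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: reuse the count() reducer from above to verify the new balance.\n",
    "new_counts = balanced_ds.take(10).reduce(\n",
    "    initial_state={'class_0': 0, 'class_1': 0},\n",
    "    reduce_func=count)\n",
    "\n",
    "new_counts = np.array([new_counts['class_0'].numpy(),\n",
    "                       new_counts['class_1'].numpy()]).astype(np.float32)\n",
    "print(new_counts/new_counts.sum())"
   ]
  },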
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Rejection resampling**: One problem with the experimental.sample_from_datasets approach above is that it needs a separate tf.data.Dataset per class. Using Dataset.filter works, but results in all the data being loaded twice.\n",
    "\n",
    "The [data.experimental.rejection_resample](https://tensorflow.google.cn/api_docs/python/tf/data/experimental/rejection_resample) function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.\n",
    "\n",
    "data.experimental.rejection_resample takes a class_func argument. This class_func is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.\n",
    "\n",
    "The elements of creditcard_ds are already (features, label) pairs. So the class_func just needs to return those labels:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def class_func(features, label):\n",
    "  return label\n",
    "\n",
    "# The resampler also needs a target distribution, and optionally an initial distribution estimate\n",
    "resampler = tf.data.experimental.rejection_resample(\n",
    "    class_func, target_dist=[0.5, 0.5], initial_dist=fractions)\n",
    "\n",
    "# The resampler deals with individual examples, so you must unbatch the dataset before applying it\n",
    "resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10)\n",
    "\n",
    "# The resampler returns (class, example) pairs created from the output of the class_func.\n",
    "# In this case, the example was already a (feature, label) pair, so use map to drop the extra copy of the labels\n",
    "balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)\n",
    "\n",
    "# Now the dataset produces examples of each class with a 50/50 probability\n",
    "for features, labels in balanced_ds.take(10):\n",
    "  print(labels.numpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Iterator checkpointing\n",
    "TensorFlow supports taking checkpoints so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start it from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as shuffle and prefetch require buffering elements within the iterator.\n",
    "\n",
    "To include your iterator in a checkpoint, pass the iterator to the [tf.train.Checkpoint](https://tensorflow.google.cn/api_docs/python/tf/train/Checkpoint) constructor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "range_ds = tf.data.Dataset.range(20)\n",
    "\n",
    "iterator = iter(range_ds)\n",
    "ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator)\n",
    "manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3)\n",
    "\n",
    "print([next(iterator).numpy() for _ in range(5)])\n",
    "\n",
    "save_path = manager.save()\n",
    "\n",
    "print([next(iterator).numpy() for _ in range(5)])\n",
    "\n",
    "ckpt.restore(manager.latest_checkpoint)\n",
    "\n",
    "print([next(iterator).numpy() for _ in range(5)])"
   ]
  },
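  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The restore above happens in the same process. In a restarted program, the pattern (a sketch, not from the original) is to rebuild an identical dataset and iterator first, then restore into it; the checkpoint stores only the iterator's position, not the pipeline definition:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: restore into a freshly built iterator, as a restarted program would.\n",
    "# The dataset pipeline must be recreated with the same definition and attribute names.\n",
    "new_iterator = iter(tf.data.Dataset.range(20))\n",
    "new_ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=new_iterator)\n",
    "new_ckpt.restore(manager.latest_checkpoint)\n",
    "\n",
    "print([next(new_iterator).numpy() for _ in range(5)])"
   ]
  },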
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> Note: It is not possible to checkpoint an iterator which relies on external state, such as a tf.py_function. Attempting to do so will raise an exception.\n",
    "\n",
    "## Using high-level APIs\n",
    "### tf.keras\n",
    "The tf.keras API simplifies many aspects of creating and executing machine learning models. Its .fit(), .evaluate(), and .predict() APIs support datasets as inputs. Here is a quick dataset and model setup:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train, test = tf.keras.datasets.fashion_mnist.load_data()\n",
    "\n",
    "images, labels = train\n",
    "images = images/255.0\n",
    "labels = labels.astype(np.int32)\n",
    "\n",
    "fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))\n",
    "fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)\n",
    "\n",
    "model = tf.keras.Sequential([\n",
    "  tf.keras.layers.Flatten(),\n",
    "  tf.keras.layers.Dense(10)\n",
    "])\n",
    "\n",
    "model.compile(optimizer='adam',\n",
    "              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), \n",
    "              metrics=['accuracy'])\n",
    "\n",
    "# Passing a dataset of (feature, label) pairs is all that's needed for Model.fit and Model.evaluate:\n",
    "model.fit(fmnist_train_ds, epochs=2)\n",
    "\n",
    "# If you pass an infinite dataset, for example by calling dataset.repeat(), you just need to also pass the steps_per_epoch argument\n",
    "# model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)\n",
    "\n",
    "# For evaluation you can pass the number of evaluation steps\n",
    "loss, accuracy = model.evaluate(fmnist_train_ds)\n",
    "print(\"Loss :\", loss)\n",
    "print(\"Accuracy :\", accuracy)\n",
    "\n",
    "# For long datasets, set the number of steps to evaluate\n",
    "loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)\n",
    "print(\"Loss :\", loss)\n",
    "print(\"Accuracy :\", accuracy)\n",
    "\n",
    "# The labels are not required when calling Model.predict\n",
    "predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)\n",
    "result = model.predict(predict_ds, steps=10)\n",
    "print(result.shape)\n",
    "\n",
    "# But the labels are ignored if you do pass a dataset containing them\n",
    "result = model.predict(fmnist_train_ds, steps=10)\n",
    "print(result.shape)"
   ]
  },
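  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Model.fit also accepts a dataset as validation data (a sketch, not from the original; here the training set is reused as a stand-in for a real validation split):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: pass a dataset as validation_data too.\n",
    "# Here the training set doubles as a stand-in validation set.\n",
    "model.fit(fmnist_train_ds, epochs=1,\n",
    "          validation_data=fmnist_train_ds.take(10))"
   ]
  },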
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### tf.estimator\n",
    "To use a Dataset in the input_fn of a [tf.estimator.Estimator](https://tensorflow.google.cn/api_docs/python/tf/estimator/Estimator), simply return the Dataset from the input_fn and the framework will take care of consuming its elements for you. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow_datasets as tfds\n",
    "\n",
    "def train_input_fn():\n",
    "  titanic = tf.data.experimental.make_csv_dataset(\n",
    "      titanic_file, batch_size=32,\n",
    "      label_name=\"survived\")\n",
    "  titanic_batches = (\n",
    "      titanic.cache().repeat().shuffle(500)\n",
    "      .prefetch(tf.data.experimental.AUTOTUNE))\n",
    "  return titanic_batches\n",
    "\n",
    "embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32)\n",
    "cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) \n",
    "age = tf.feature_column.numeric_column('age')\n",
    "\n",
    "import tempfile\n",
    "model_dir = tempfile.mkdtemp()\n",
    "model = tf.estimator.LinearClassifier(\n",
    "    model_dir=model_dir,\n",
    "    feature_columns=[embark, cls, age],\n",
    "    n_classes=2\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = model.train(input_fn=train_input_fn, steps=100)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "result = model.evaluate(train_input_fn, steps=10)\n",
    "\n",
    "for key, value in result.items():\n",
    "  print(key, \":\", value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for pred in model.predict(train_input_fn):\n",
    "  for key, value in pred.items():\n",
    "    print(key, \":\", value)\n",
    "  break"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
