{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# TPU\n",
     "Experimental support for Cloud TPUs is currently available in Keras and Google Colab. Before running this Colab notebook, check the notebook settings to make sure the hardware accelerator is a TPU: Runtime > Change runtime type > Hardware accelerator > TPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "import os\n",
    "import tensorflow_datasets as tfds"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Distribution strategies\n",
     "This guide demonstrates how to use the distribution strategy [tf.distribute.experimental.TPUStrategy](https://tensorflow.google.cn/api_docs/python/tf/distribute/experimental/TPUStrategy) to drive a Cloud TPU and train a Keras model. A distribution strategy is an abstraction that can be used to drive models on CPUs, GPUs, or TPUs. Simply swap out the distribution strategy and the model will run on the given device. See the [distribution strategy guide](https://tensorflow.google.cn/guide/distributed_training) for more information.\n",
     "\n",
     "Below is the code that connects to the TPU and creates a TPUStrategy object.\n",
     "\n",
     "Note that the tpu argument to TPUClusterResolver is a special address just for Colab. If you are running on Google Compute Engine (GCE), you should instead pass in the name of your Cloud TPU.\n",
     "\n",
     "> Note: The TPU initialization code has to be at the beginning of your program."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])\n",
    "tf.config.experimental_connect_to_cluster(resolver)\n",
     "# This is the TPU initialization code that has to be at the beginning.\n",
    "tf.tpu.experimental.initialize_tpu_system(resolver)\n",
    "strategy = tf.distribute.experimental.TPUStrategy(resolver)"
   ]
  },
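  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick optional check (not part of the original tutorial), you can ask the strategy how many replicas it is driving. Each TPU core acts as one replica, and a Colab TPU typically exposes 8 of them; the global batch is split evenly across the replicas."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each TPU core is one replica; the global batch is split across them.\n",
    "print('Number of replicas:', strategy.num_replicas_in_sync)"
   ]
  },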
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Below is a simple MNIST model, identical to what you would use on a CPU or GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_model():\n",
    "  return tf.keras.Sequential(\n",
    "      [tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),\n",
    "       tf.keras.layers.Flatten(),\n",
    "       tf.keras.layers.Dense(128, activation='relu'),\n",
    "       tf.keras.layers.Dense(10)])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Input datasets\n",
     "Efficient use of the tf.data.Dataset API is critical when using a Cloud TPU, as it is impossible to use Cloud TPUs unless you can feed them data quickly enough. See the [Input Pipeline Performance Guide](https://tensorflow.google.cn/guide/data_performance) for details on dataset performance.\n",
     "\n",
     "For all but the simplest experiments (which can use [tf.data.Dataset.from_tensor_slices](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#from_tensor_slices) or other in-graph data), you will need to store all data files read by the Dataset in Google Cloud Storage (GCS) buckets.\n",
     "\n",
     "For most use cases, it is recommended to convert your data into the TFRecord format and use [tf.data.TFRecordDataset](https://tensorflow.google.cn/api_docs/python/tf/data/TFRecordDataset) to read it. See the [TFRecord and tf.Example tutorial](https://tensorflow.google.cn/tutorials/load_data/tfrecord) for details on how to do this. This is not a hard requirement, however, and you can use other dataset readers (FixedLengthRecordDataset or TextLineDataset) if you prefer.\n",
     "\n",
     "Small datasets can be loaded entirely into memory using [tf.data.Dataset.cache](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset#cache).\n",
     "\n",
     "Regardless of the data format used, it is strongly recommended that you use large files, on the order of 100MB. This is especially important in this networked setting, as the overhead of opening a file is significantly higher.\n",
     "\n",
     "Here you should use the tensorflow_datasets module to get a copy of the MNIST training data. Note that try_gcs is specified to use a copy that is available in a public GCS bucket. If you don't specify this, the TPU will not be able to access the downloaded data."
   ]
  },
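  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch only, reading TFRecord files from a GCS bucket might look like the cell below. The bucket path and feature names are hypothetical placeholders; the rest of this tutorial uses tensorflow_datasets instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: the GCS path and feature spec are hypothetical placeholders.\n",
    "feature_spec = {\n",
    "    'image': tf.io.FixedLenFeature([], tf.string),\n",
    "    'label': tf.io.FixedLenFeature([], tf.int64),\n",
    "}\n",
    "\n",
    "def parse_example(serialized):\n",
    "  # Deserialize one tf.Example proto into a dict of tensors.\n",
    "  return tf.io.parse_single_example(serialized, feature_spec)\n",
    "\n",
    "records = tf.data.TFRecordDataset(['gs://your-bucket/train-00000-of-00010.tfrecord'])\n",
    "parsed = records.map(parse_example)"
   ]
  },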
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_dataset(batch_size=200):\n",
    "  datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True,\n",
    "                             try_gcs=True)\n",
    "  mnist_train, mnist_test = datasets['train'], datasets['test']\n",
    "\n",
    "  def scale(image, label):\n",
    "    image = tf.cast(image, tf.float32)\n",
    "    image /= 255.0\n",
    "\n",
    "    return image, label\n",
    "\n",
    "  train_dataset = mnist_train.map(scale).shuffle(10000).batch(batch_size)\n",
    "  test_dataset = mnist_test.map(scale).batch(batch_size)\n",
    "\n",
    "  return train_dataset, test_dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Create and train the model\n",
     "There is no TPU-specific code here; you would write the same code below if you had multiple GPUs and were using a MirroredStrategy rather than a TPUStrategy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with strategy.scope():\n",
    "  model = create_model()\n",
    "  model.compile(optimizer='adam',\n",
    "                loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "                metrics=['sparse_categorical_accuracy'])\n",
    "\n",
    "train_dataset, test_dataset = get_dataset()\n",
    "\n",
    "model.fit(train_dataset,\n",
    "          epochs=5,\n",
    "          validation_data=test_dataset)"
   ]
  },
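  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional follow-up (not part of the original notebook), you can evaluate the trained model on the held-out test split; Keras reports the loss and metrics configured in compile."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Evaluate on the test dataset; returns [loss, sparse_categorical_accuracy].\n",
    "model.evaluate(test_dataset)"
   ]
  },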
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Next steps\n",
     "- [Google Cloud TPU documentation](https://cloud.google.com/tpu/docs/): set up and run a Google Cloud TPU.\n",
     "- [Distributed training with TensorFlow](https://tensorflow.google.cn/guide/distributed_training): how to use distribution strategies, with links to many examples showing best practices.\n",
     "- [Official TensorFlow models](https://github.com/tensorflow/models/tree/master/official): examples of state-of-the-art TensorFlow 2.x models that are compatible with Cloud TPUs.\n",
     "- [Google Cloud TPU performance guide](https://cloud.google.com/tpu/docs/performance-guide): further improve Cloud TPU performance by adjusting Cloud TPU configuration parameters for your application."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
