{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Checkpoints\n",
    "The phrase \"Saving a TensorFlow model\" typically means one of two things:\n",
    "1. Checkpoints, or\n",
    "2. SavedModel\n",
    "\n",
    "Checkpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model, so they are typically only useful when source code that will use the saved parameter values is available.\n",
    "\n",
    "The SavedModel format, on the other hand, includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created them. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (C, C++, Java, Go, Rust, C#, etc.).\n",
    "\n",
    "This guide covers the APIs for writing and reading checkpoints.\n",
    "\n",
    "## Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "class Net(tf.keras.Model):\n",
    "  \"\"\"A simple linear model.\"\"\"\n",
    "\n",
    "  def __init__(self):\n",
    "    super(Net, self).__init__()\n",
    "    self.l1 = tf.keras.layers.Dense(5)\n",
    "\n",
    "  def call(self, x):\n",
    "    return self.l1(x)\n",
    "\n",
    "net = Net()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Saving from tf.keras training APIs\n",
    "See the [tf.keras](https://tensorflow.google.cn/api_docs/python/tf/keras) guide on saving and restoring.\n",
    "\n",
    "[tf.keras.Model.save_weights](https://tensorflow.google.cn/api_docs/python/tf/keras/Model#save_weights) saves a TensorFlow checkpoint."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "net.save_weights('easy_checkpoint')"
   ]
  },
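  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Weights saved this way can be loaded back into a model of the same architecture with [tf.keras.Model.load_weights](https://tensorflow.google.cn/api_docs/python/tf/keras/Model#load_weights). A minimal sketch (the name `new_net` is illustrative); since Net builds its variables lazily, the restore is queued until the layer is first called:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "new_net = Net()\n",
    "# The restore is queued; it completes once the Dense layer creates its variables\n",
    "new_net.load_weights('easy_checkpoint')"
   ]
  },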
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Writing checkpoints\n",
    "The persistent state of a TensorFlow model is stored in tf.Variable objects. These can be constructed directly, but are often created through high-level APIs like [tf.keras.layers](https://tensorflow.google.cn/api_docs/python/tf/keras/layers) or [tf.keras.Model](https://tensorflow.google.cn/api_docs/python/tf/keras/Model).\n",
    "\n",
    "The easiest way to manage variables is by attaching them to Python objects, then referencing those objects.\n",
    "\n",
    "Subclasses of [tf.train.Checkpoint](https://tensorflow.google.cn/api_docs/python/tf/train/Checkpoint), [tf.keras.layers.Layer](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/Layer), and [tf.keras.Model](https://tensorflow.google.cn/api_docs/python/tf/keras/Model) automatically track variables assigned to their attributes. The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.\n",
    "\n",
    "You can easily save a model checkpoint with model.save_weights.\n",
    "\n",
    "### Manual checkpointing\n",
    "#### Setup\n",
    "To help demonstrate all the features of [tf.train.Checkpoint](https://tensorflow.google.cn/api_docs/python/tf/train/Checkpoint), define a toy dataset and an optimization step:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def toy_dataset():\n",
    "  inputs = tf.range(10.)[:, None]\n",
    "  labels = inputs * 5. + tf.range(5.)[None, :]\n",
    "  return tf.data.Dataset.from_tensor_slices(\n",
    "    dict(x=inputs, y=labels)).repeat().batch(2)\n",
    "\n",
    "def train_step(net, example, optimizer):\n",
    "  \"\"\"Trains `net` on `example` using `optimizer`.\"\"\"\n",
    "  with tf.GradientTape() as tape:\n",
    "    output = net(example['x'])\n",
    "    loss = tf.reduce_mean(tf.abs(output - example['y']))\n",
    "  variables = net.trainable_variables\n",
    "  gradients = tape.gradient(loss, variables)\n",
    "  optimizer.apply_gradients(zip(gradients, variables))\n",
    "  return loss"
   ]
  },
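  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before wiring this into a checkpointed training loop, it can help to peek at one batch. A quick probe (nothing here is saved or mutated):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "example = next(iter(toy_dataset()))\n",
    "print(example['x'].shape, example['y'].shape)  # batches of 2: x is (2, 1), y is (2, 5)"
   ]
  },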
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Create the checkpoint objects\n",
    "To manually make a checkpoint you will need a tf.train.Checkpoint object, where the objects you want to checkpoint are set as attributes on the object.\n",
    "\n",
    "A [tf.train.CheckpointManager](https://tensorflow.google.cn/api_docs/python/tf/train/CheckpointManager) can also be helpful for managing multiple checkpoints."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "opt = tf.keras.optimizers.Adam(0.1)\n",
    "dataset = toy_dataset()\n",
    "iterator = iter(dataset)\n",
    "ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)\n",
    "manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Train and checkpoint the model\n",
    "The following training loop creates an instance of the model and of an optimizer, then gathers them into a tf.train.Checkpoint object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Initializing from scratch.\n",
      "Saved checkpoint for step 10: ./tf_ckpts\\ckpt-1\n",
      "loss 28.11\n",
      "Saved checkpoint for step 20: ./tf_ckpts\\ckpt-2\n",
      "loss 21.52\n",
      "Saved checkpoint for step 30: ./tf_ckpts\\ckpt-3\n",
      "loss 14.98\n",
      "Saved checkpoint for step 40: ./tf_ckpts\\ckpt-4\n",
      "loss 8.62\n",
      "Saved checkpoint for step 50: ./tf_ckpts\\ckpt-5\n",
      "loss 4.47\n"
     ]
    }
   ],
   "source": [
    "def train_and_checkpoint(net, manager):\n",
    "  ckpt.restore(manager.latest_checkpoint)\n",
    "  if manager.latest_checkpoint:\n",
    "    print(\"Restored from {}\".format(manager.latest_checkpoint))\n",
    "  else:\n",
    "    print(\"Initializing from scratch.\")\n",
    "\n",
    "  for _ in range(50):\n",
    "    example = next(iterator)\n",
    "    loss = train_step(net, example, opt)\n",
    "    ckpt.step.assign_add(1)\n",
    "    if int(ckpt.step) % 10 == 0:\n",
    "      save_path = manager.save()\n",
    "      print(\"Saved checkpoint for step {}: {}\".format(int(ckpt.step), save_path))\n",
    "      print(\"loss {:1.2f}\".format(loss.numpy()))\n",
    "\n",
    "train_and_checkpoint(net, manager)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Restore and continue training\n",
    "After the first training run, you can pass a new model and manager, and training will pick up exactly where you left off:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Restored from ./tf_ckpts\\ckpt-5\n",
      "Saved checkpoint for step 60: ./tf_ckpts\\ckpt-6\n",
      "loss 1.47\n",
      "Saved checkpoint for step 70: ./tf_ckpts\\ckpt-7\n",
      "loss 1.19\n",
      "Saved checkpoint for step 80: ./tf_ckpts\\ckpt-8\n",
      "loss 0.30\n",
      "Saved checkpoint for step 90: ./tf_ckpts\\ckpt-9\n",
      "loss 0.32\n",
      "Saved checkpoint for step 100: ./tf_ckpts\\ckpt-10\n",
      "loss 0.25\n"
     ]
    }
   ],
   "source": [
    "opt = tf.keras.optimizers.Adam(0.1)\n",
    "net = Net()\n",
    "dataset = toy_dataset()\n",
    "iterator = iter(dataset)\n",
    "ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)\n",
    "manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)\n",
    "\n",
    "train_and_checkpoint(net, manager)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The tf.train.CheckpointManager object deletes old checkpoints. Above it's configured to keep only the three most recent checkpoints (max_to_keep=3)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['./tf_ckpts\\\\ckpt-8', './tf_ckpts\\\\ckpt-9', './tf_ckpts\\\\ckpt-10']\n"
     ]
    }
   ],
   "source": [
    "print(manager.checkpoints)  # List the three remaining checkpoints"
   ]
  },
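  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each of these strings is a path prefix rather than a single file. A sketch using the standard-library glob module to see the files behind one prefix (exact shard names such as `.data-00000-of-00001` can vary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob\n",
    "\n",
    "# A prefix expands to an index file plus one or more data files\n",
    "print(sorted(glob.glob(manager.checkpoints[-1] + '.*')))"
   ]
  },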
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These paths, e.g. \"./tf_ckpts/ckpt-10\", are not files on disk. Instead they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file (\"./tf_ckpts/checkpoint\") where the CheckpointManager saves its state.\n",
    "\n",
    "## Loading mechanics\n",
    "TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the \"l1\" in self.l1 = tf.keras.layers.Dense(5). tf.train.Checkpoint uses its keyword argument names, as in the \"step\" in tf.train.Checkpoint(step=...).\n",
    "\n",
    "The dependency graph from the example above looks like this:\n",
    "\n",
    "![title](../img/5_3/whole_checkpoint.jpg)\n",
    "\n",
    "The optimizer is in red, regular variables are in blue, and optimizer slot variables are in orange. The other nodes, for example representing the tf.train.Checkpoint, are in black.\n",
    "\n",
    "Slot variables are part of the optimizer's state, but are created for a specific variable. For example the \"m\" edges above correspond to momentum, which the Adam optimizer tracks for each variable. Slot variables are only saved in a checkpoint if both the variable and the optimizer would be saved, hence the dashed edges.\n",
    "\n",
    "Calling restore() on a tf.train.Checkpoint object queues the requested restorations, restoring variable values as soon as there's a matching path from the Checkpoint object. For example, we can load just the bias from the model we defined above by reconstructing one path to it through the network and the layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0. 0. 0. 0. 0.]\n",
      "[1.2629232 2.623802  2.4494455 4.7659163 4.690216 ]\n"
     ]
    }
   ],
   "source": [
    "to_restore = tf.Variable(tf.zeros([5]))\n",
    "print(to_restore.numpy())  # All zeros\n",
    "fake_layer = tf.train.Checkpoint(bias=to_restore)\n",
    "fake_net = tf.train.Checkpoint(l1=fake_layer)\n",
    "new_root = tf.train.Checkpoint(net=fake_net)\n",
    "status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))\n",
    "print(to_restore.numpy())  # We get the restored value now"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint we wrote above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints.\n",
    "\n",
    "![title](../img/5_3/partial_checkpoint.jpg)\n",
    "\n",
    "restore() returns a status object, which has optional assertions. All of the objects created in the new Checkpoint have been restored, so status.assert_existing_objects_matched() passes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x196d9722208>"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "status.assert_existing_objects_matched()"
   ]
  },
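  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By contrast, status.assert_consumed() would fail here, since objects such as the layer's kernel and the optimizer's variables have no match in this partial object graph. A brief sketch that catches the resulting AssertionError:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "  status.assert_consumed()\n",
    "except AssertionError:\n",
    "  print('assert_consumed raised: parts of the checkpoint were not matched')"
   ]
  },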
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are many objects in the checkpoint which haven't been matched, including the layer's kernel and the optimizer's variables. status.assert_consumed() only passes if the checkpoint and the program match exactly, and would raise an exception here.\n",
    "\n",
    "### Delayed restorations\n",
    "Layer objects in TensorFlow may delay the creation of variables to their first call, when input shapes are available. For example the shape of a Dense layer's kernel depends on both the layer's input and output shapes, and so the output shape required as a constructor argument is not enough information to create the variable on its own. Since calling a Layer also reads the variable's value, a restore must happen between the variable's creation and its first use.\n",
    "\n",
    "To support this idiom, tf.train.Checkpoint queues restores which don't yet have a matching variable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[0. 0. 0. 0. 0.]]\n",
      "[[4.8120327 4.7232347 4.9282074 4.7934976 4.8187246]]\n"
     ]
    }
   ],
   "source": [
    "delayed_restore = tf.Variable(tf.zeros([1, 5]))\n",
    "print(delayed_restore.numpy())  # Not restored; still zeros\n",
    "fake_layer.kernel = delayed_restore\n",
    "print(delayed_restore.numpy())  # Restored"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Manually inspecting checkpoints\n",
    "[tf.train.list_variables](https://tensorflow.google.cn/api_docs/python/tf/train/list_variables) lists the checkpoint keys and the shapes of the variables in a checkpoint. Checkpoint keys are the paths in the graph displayed above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('_CHECKPOINTABLE_OBJECT_GRAPH', []),\n",
       " ('iterator/.ATTRIBUTES/ITERATOR_STATE', []),\n",
       " ('net/l1/bias/.ATTRIBUTES/VARIABLE_VALUE', [5]),\n",
       " ('net/l1/bias/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE', [5]),\n",
       " ('net/l1/bias/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE', [5]),\n",
       " ('net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE', [1, 5]),\n",
       " ('net/l1/kernel/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE',\n",
       "  [1, 5]),\n",
       " ('net/l1/kernel/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE',\n",
       "  [1, 5]),\n",
       " ('optimizer/beta_1/.ATTRIBUTES/VARIABLE_VALUE', []),\n",
       " ('optimizer/beta_2/.ATTRIBUTES/VARIABLE_VALUE', []),\n",
       " ('optimizer/decay/.ATTRIBUTES/VARIABLE_VALUE', []),\n",
       " ('optimizer/iter/.ATTRIBUTES/VARIABLE_VALUE', []),\n",
       " ('optimizer/learning_rate/.ATTRIBUTES/VARIABLE_VALUE', []),\n",
       " ('save_counter/.ATTRIBUTES/VARIABLE_VALUE', []),\n",
       " ('step/.ATTRIBUTES/VARIABLE_VALUE', [])]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tf.train.list_variables(tf.train.latest_checkpoint('./tf_ckpts/'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### List and dictionary tracking\n",
    "As with direct attribute assignments like `self.l1 = tf.keras.layers.Dense(5)`, assigning lists and dictionaries to attributes will track their contents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "save = tf.train.Checkpoint()\n",
    "save.listed = [tf.Variable(1.)]\n",
    "save.listed.append(tf.Variable(2.))\n",
    "save.mapped = {'one': save.listed[0]}\n",
    "save.mapped['two'] = save.listed[1]\n",
    "save_path = save.save('./tf_list_example')\n",
    "\n",
    "restore = tf.train.Checkpoint()\n",
    "v2 = tf.Variable(0.)\n",
    "assert 0. == v2.numpy()  # Not restored yet\n",
    "restore.mapped = {'two': v2}\n",
    "restore.restore(save_path)\n",
    "assert 2. == v2.numpy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data structures. Just like the attribute-based loading, these wrappers restore a variable's value as soon as it's added to the container."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ListWrapper([])\n"
     ]
    }
   ],
   "source": [
    "restore.listed = []\n",
    "print(restore.listed)  # ListWrapper([])\n",
    "v1 = tf.Variable(0.)\n",
    "restore.listed.append(v1)  # Restores v1, from restore() in the previous cell\n",
    "assert 1. == v1.numpy()"
   ]
  },
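  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This kind of list tracking also works inside model subclasses. A minimal sketch (the `Stack` class below is hypothetical, not part of the guide's model) where a plain Python list of layers is tracked automatically:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Stack(tf.keras.Model):\n",
    "  \"\"\"A hypothetical model tracking a Python list of layers.\"\"\"\n",
    "\n",
    "  def __init__(self):\n",
    "    super(Stack, self).__init__()\n",
    "    self.blocks = [tf.keras.layers.Dense(4), tf.keras.layers.Dense(5)]\n",
    "\n",
    "  def call(self, x):\n",
    "    for block in self.blocks:\n",
    "      x = block(x)\n",
    "    return x\n",
    "\n",
    "stack = Stack()\n",
    "stack(tf.zeros([1, 3]))  # Build the variables\n",
    "print(len(stack.trainable_variables))  # A kernel and a bias per layer: 4"
   ]
  },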
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same tracking is automatically applied to subclasses of tf.keras.Model, and may be used for example to track lists of layers.\n",
    "\n",
    "## Saving object-based checkpoints with Estimator\n",
    "See the [Estimator guide](https://tensorflow.google.cn/guide/estimator).\n",
    "\n",
    "Estimators by default save checkpoints with variable names rather than the object graph described in the previous sections. tf.train.Checkpoint will accept name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's model_fn. Saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Using default config.\n",
      "INFO:tensorflow:Using config: {'_model_dir': './tf_estimator_example/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true\n",
      "graph_options {\n",
      "  rewrite_options {\n",
      "    meta_optimizer_iterations: ONE\n",
      "  }\n",
      "}\n",
      ", '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n",
      "WARNING:tensorflow:From f:\\python\\pythonenv\\machine_learning\\lib\\site-packages\\tensorflow_core\\python\\ops\\resource_variable_ops.py:1635: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "If using Keras pass *_constraint arguments to layers.\n",
      "WARNING:tensorflow:From f:\\python\\pythonenv\\machine_learning\\lib\\site-packages\\tensorflow_core\\python\\training\\training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Create CheckpointSaverHook.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Saving checkpoints for 0 into ./tf_estimator_example/model.ckpt.\n",
      "INFO:tensorflow:loss = 4.410821, step = 0\n",
      "WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 2 vs previous value: 2. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.\n",
      "INFO:tensorflow:Saving checkpoints for 10 into ./tf_estimator_example/model.ckpt.\n",
      "INFO:tensorflow:Loss for final step: 35.36306.\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow_estimator.python.estimator.estimator.EstimatorV2 at 0x196f2f2a788>"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import tensorflow.compat.v1 as tf_compat\n",
    "\n",
    "def model_fn(features, labels, mode):\n",
    "  net = Net()\n",
    "  opt = tf.keras.optimizers.Adam(0.1)\n",
    "  ckpt = tf.train.Checkpoint(step=tf_compat.train.get_global_step(),\n",
    "                             optimizer=opt, net=net)\n",
    "  with tf.GradientTape() as tape:\n",
    "    output = net(features['x'])\n",
    "    loss = tf.reduce_mean(tf.abs(output - features['y']))\n",
    "  variables = net.trainable_variables\n",
    "  gradients = tape.gradient(loss, variables)\n",
    "  return tf.estimator.EstimatorSpec(\n",
    "    mode,\n",
    "    loss=loss,\n",
    "    train_op=tf.group(opt.apply_gradients(zip(gradients, variables)),\n",
    "                      ckpt.step.assign_add(1)),\n",
    "    # Tell the Estimator to save \"ckpt\" in an object-based format.\n",
    "    scaffold=tf_compat.train.Scaffold(saver=ckpt))\n",
    "\n",
    "tf.keras.backend.clear_session()\n",
    "est = tf.estimator.Estimator(model_fn, './tf_estimator_example/')\n",
    "est.train(toy_dataset, steps=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "tf.train.Checkpoint can then load the Estimator's checkpoints from its model_dir."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "10"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "opt = tf.keras.optimizers.Adam(0.1)\n",
    "net = Net()\n",
    "ckpt = tf.train.Checkpoint(\n",
    "  step=tf.Variable(1, dtype=tf.int64), optimizer=opt, net=net)\n",
    "ckpt.restore(tf.train.latest_checkpoint('./tf_estimator_example/'))\n",
    "ckpt.step.numpy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "TensorFlow objects provide an easy automatic mechanism for saving and restoring the values of the variables they use."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
