{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# SavedModel\n",
    "A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model-building code to run, which makes it useful for sharing and deployment (with TFLite, TensorFlow.js, TensorFlow Serving, or TensorFlow Hub).\n",
    "\n",
    "This document goes into detail about how to use the low-level [tf.saved_model](https://tensorflow.google.cn/api_docs/python/tf/saved_model) API:\n",
    "- If you are using a [tf.keras.Model](https://tensorflow.google.cn/api_docs/python/tf/keras/Model), you probably want the [keras.Model.save(output_path)](https://tensorflow.google.cn/api_docs/python/tf/keras/Model#save) method; see [Keras save and serialize](https://tensorflow.google.cn/guide/keras/save_and_serialize).\n",
    "- If you just want to save/load weights during training, see the [training checkpoints guide](https://tensorflow.google.cn/guide/checkpoint).\n",
    "\n",
    "## Creating a SavedModel from Keras\n",
    "For a quick introduction, this section exports a pretrained Keras model and serves image classification requests with it. The rest of the guide fills in the details and discusses other ways to create SavedModels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from matplotlib import pyplot as plt\n",
    "import numpy as np\n",
    "\n",
    "physical_devices = tf.config.experimental.list_physical_devices('GPU')\n",
    "if physical_devices:\n",
    "  tf.config.experimental.set_memory_growth(physical_devices[0], True)\n",
    "\n",
    "file = tf.keras.utils.get_file(\n",
    "    \"grace_hopper.jpg\",\n",
    "    \"https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg\")\n",
    "img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224])\n",
    "plt.imshow(img)\n",
    "plt.axis('off')\n",
    "x = tf.keras.preprocessing.image.img_to_array(img)\n",
    "x = tf.keras.applications.mobilenet.preprocess_input(\n",
    "    x[tf.newaxis,...])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will use an image of Grace Hopper as a running example, together with a Keras pretrained image classification model, since it is easy to use. Custom models work too, and are covered in detail later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "labels_path = tf.keras.utils.get_file(\n",
    "    'ImageNetLabels.txt',\n",
    "    'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')\n",
    "imagenet_labels = np.array(open(labels_path).read().splitlines())\n",
    "\n",
    "pretrained_model = tf.keras.applications.MobileNet()\n",
    "result_before_save = pretrained_model(x)\n",
    "\n",
    "decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1]\n",
    "\n",
    "print(\"Result before saving:\\n\", decoded)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The top prediction for this image is \"military uniform\"."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.saved_model.save(pretrained_model, \"/tmp/mobilenet/1/\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The save path follows a convention used by TensorFlow Serving, where the last path component (1 here) is a version number for your model — it lets tools like TensorFlow Serving reason about the relative freshness of models.\n",
    "\n",
    "SavedModels have named functions called signatures. Keras models export their forward passes under the serving_default signature key. The [SavedModel command line interface](https://tensorflow.google.cn/guide/saved_model#saved_model_cli) is useful for inspecting SavedModels on disk:\n",
    "```sh\n",
    "saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve --signature_def serving_default\n",
    "```\n",
    "\n",
    "We can load the SavedModel back into Python with [tf.saved_model.load](https://tensorflow.google.cn/api_docs/python/tf/saved_model/load) and see how Admiral Hopper's image is classified."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loaded = tf.saved_model.load(\"/tmp/mobilenet/1/\")\n",
    "print(list(loaded.signatures.keys()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Imported signatures always return dictionaries:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "infer = loaded.signatures[\"serving_default\"]\n",
    "print(infer.structured_outputs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Running inference from the SavedModel gives the same result as the original model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]\n",
    "\n",
    "decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]\n",
    "\n",
    "print(\"Result after saving and loading:\\n\", decoded)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Running a SavedModel in TensorFlow Serving\n",
    "SavedModels are usable from Python (more on that below), but production environments typically use a dedicated service for inference without running Python code. This is easy to set up from a SavedModel using TensorFlow Serving.\n",
    "\n",
    "See the [TensorFlow Serving REST tutorial](https://tensorflow.google.cn/tfx/tutorials/serving/rest_simple) for more details about serving, including instructions for installing tensorflow_model_server in a notebook or on your local machine. As a quick sketch, to serve the mobilenet model exported above, just point the model server at the SavedModel directory:\n",
    "```sh\n",
    "nohup tensorflow_model_server \\\n",
    "  --rest_api_port=8501 \\\n",
    "  --model_name=mobilenet \\\n",
    "  --model_base_path=\"/tmp/mobilenet\" >server.log 2>&1\n",
    "```\n",
    "Then send a request."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import numpy\n",
    "import requests\n",
    "data = json.dumps({\"signature_name\": \"serving_default\",\n",
    "                   \"instances\": x.tolist()})\n",
    "headers = {\"content-type\": \"application/json\"}\n",
    "json_response = requests.post('http://localhost:8501/v1/models/mobilenet:predict',\n",
    "                              data=data, headers=headers)\n",
    "predictions = numpy.array(json.loads(json_response.text)[\"predictions\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The prediction results are the same as the results from Python.\n",
    "\n",
    "## The SavedModel format on disk\n",
    "A SavedModel is a directory containing serialized signatures and the state needed to run them, including variable values and vocabularies.\n",
    "\n",
    "The saved_model.pb file stores the actual TensorFlow program, or model, and a set of named signatures, each identifying a function that accepts tensor inputs and produces tensor outputs.\n",
    "\n",
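    "As a sketch, the export directory from the first section would be laid out roughly like this (exact variable shard file names can vary by TensorFlow version):\n",
    "```txt\n",
    "/tmp/mobilenet/1/\n",
    "├── assets/\n",
    "├── saved_model.pb\n",
    "└── variables/\n",
    "    ├── variables.data-00000-of-00001\n",
    "    └── variables.index\n",
    "```\n",
    "\n",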
    "SavedModels may contain multiple variants of the model (multiple v1.MetaGraphDefs, identified with the --tag_set flag to saved_model_cli), but this is rare. APIs which create multiple variants of a model include [tf.Estimator.experimental_export_all_saved_models](https://tensorflow.google.cn/api_docs/python/tf/estimator/Estimator#experimental_export_all_saved_models) and, in TensorFlow 1.x, tf.saved_model.Builder.\n",
    "```sh\n",
    "saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve\n",
    "```\n",
    "The variables directory contains a standard training checkpoint (see the [training checkpoints guide](https://tensorflow.google.cn/guide/checkpoint)).\n",
    "```sh\n",
    "ls /tmp/mobilenet/1/variables\n",
    "```\n",
    "\n",
    "The assets directory contains files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. It is unused in this example.\n",
    "\n",
    "SavedModels may have an assets.extra directory for any files not used by the TensorFlow graph, for example information for consumers about what to do with the SavedModel. TensorFlow itself does not use this directory.\n",
    "\n",
    "## Exporting custom models\n",
    "In the first section, tf.saved_model.save automatically determined a signature for the tf.keras.Model object. This worked because Keras model objects have an unambiguous method to export and known input shapes. tf.saved_model.save works just as well with low-level model-building APIs, but you will need to indicate which function to use as a signature if you plan to serve the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CustomModule(tf.Module):\n",
    "\n",
    "  def __init__(self):\n",
    "    super(CustomModule, self).__init__()\n",
    "    self.v = tf.Variable(1.)\n",
    "\n",
    "  @tf.function\n",
    "  def __call__(self, x):\n",
    "    return x * self.v\n",
    "\n",
    "  @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])\n",
    "  def mutate(self, new_v):\n",
    "    self.v.assign(new_v)\n",
    "\n",
    "module = CustomModule()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This module has two methods decorated with tf.function. While these functions will be included in the SavedModel, and will be available if the SavedModel is loaded back into a Python program via tf.saved_model.load, without explicitly declaring serving signatures, tools like TensorFlow Serving and saved_model_cli cannot access them.\n",
    "\n",
    "The mutate method has a declared input_signature, so there is already enough information to save its computation graph in the SavedModel. The __call__ method has no declared signature, so its signatures are inferred from how it is used before saving: calling the method one or more times creates a computation graph for each distinct combination of tensor shapes and dtypes seen in the arguments. (The [tf.function](https://tensorflow.google.cn/tutorials/customization/performance#tracing) tutorial calls this process tracing.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "module(tf.constant(0.))\n",
    "tf.saved_model.save(module, \"/tmp/module_no_signatures\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For functions without an input_signature, any input shapes used before saving will be available after loading. Since we called __call__ with only a scalar, it will accept only scalars."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "imported = tf.saved_model.load(\"/tmp/module_no_signatures\")\n",
    "assert 3. == imported(tf.constant(3.)).numpy()\n",
    "imported.mutate(tf.constant(2.))\n",
    "assert 6. == imported(tf.constant(3.)).numpy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The function does not accept new shapes, such as vectors\n",
    "imported(tf.constant([3.]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "get_concrete_function lets you add an input shape to a function without calling it. It takes [tf.TensorSpec](https://tensorflow.google.cn/api_docs/python/tf/TensorSpec) objects in place of Tensor arguments, indicating the shapes and dtypes of the inputs. A shape can be None, meaning any shape is acceptable, or a list of axis sizes. If an axis size is None, any size is acceptable for that axis. (This is often used for batch sizes.) tf.TensorSpecs can also have names, which default to the function's argument keywords (\"x\" here)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "module.__call__.get_concrete_function(x=tf.TensorSpec([None], tf.float32))\n",
    "tf.saved_model.save(module, \"/tmp/module_no_signatures\")\n",
    "imported = tf.saved_model.load(\"/tmp/module_no_signatures\")\n",
    "assert [3.] == imported(tf.constant([3.])).numpy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have not identified the function we exported as a serving signature, so the SavedModel has none.\n",
    "```sh\n",
    "saved_model_cli show --dir /tmp/module_no_signatures --tag_set serve\n",
    "```\n",
    "\n",
    "### Identifying a signature to export\n",
    "To indicate that a function should be a serving signature, specify the signatures argument when saving."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))\n",
    "tf.saved_model.save(module, \"/tmp/module_with_signature\", signatures=call)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that we first converted the tf.function to a ConcreteFunction with get_concrete_function. This is necessary because the function was created without a fixed input signature, and so did not have a definite set of tensor inputs associated with it.\n",
    "```sh\n",
    "saved_model_cli show --dir /tmp/module_with_signature --tag_set serve --signature_def serving_default\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "imported = tf.saved_model.load(\"/tmp/module_with_signature\")\n",
    "signature = imported.signatures[\"serving_default\"]\n",
    "assert [3.] == signature(x=tf.constant([3.]))[\"output_0\"].numpy()\n",
    "imported.mutate(tf.constant(2.))\n",
    "assert [6.] == signature(x=tf.constant([3.]))[\"output_0\"].numpy()\n",
    "assert 2. == imported.v.numpy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We exported a single signature, and its key defaulted to \"serving_default\". To export multiple signatures, pass a dictionary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@tf.function(input_signature=[tf.TensorSpec([], tf.string)])\n",
    "def parse_string(string_input):\n",
    "  return imported(tf.strings.to_number(string_input))\n",
    "\n",
    "signatures = {\"serving_default\": parse_string,\n",
    "              \"from_float\": imported.signatures[\"serving_default\"]}\n",
    "\n",
    "tf.saved_model.save(imported, \"/tmp/module_with_multiple_signatures\", signatures)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```sh\n",
    "saved_model_cli show --dir /tmp/module_with_multiple_signatures --tag_set serve\n",
    "```\n",
    "saved_model_cli can also run SavedModels directly from the command line.\n",
    "```sh\n",
    "saved_model_cli run --dir /tmp/module_with_multiple_signatures --tag_set serve --signature_def serving_default --input_exprs=\"string_input='3.'\"\n",
    "saved_model_cli run --dir /tmp/module_with_multiple_signatures --tag_set serve --signature_def from_float --input_exprs=\"x=3.\"\n",
    "```\n",
    "\n",
    "## Reusing SavedModels in Python\n",
    "Let's take another look at the CustomModule class above, and how module objects of this type can be saved into SavedModels and loaded back. Did you notice how module(...) was called without using a signature?\n",
    "\n",
    "Calling tf.saved_model.save(obj, \"/path\") on an object derived from tf.keras.Model or tf.Module saves its tf.Variable attributes as described in the [training checkpoints guide](https://tensorflow.google.cn/guide/checkpoint): it recursively walks the objects referenced by its attributes and saves the current values of the tf.Variable attributes found on those objects. Likewise, tf.function-decorated methods found on those objects have their computation graphs saved. However, the original Python types, the Python code of the methods, and Python-valued data members are lost.\n",
    "\n",
    "Calling obj = tf.saved_model.load(\"/path\") restores the saved objects (now of a placeholder type), their tf.Variable attributes with their respective saved values, and their tf.function-decorated methods. These methods can be called just as before saving, as long as a computation graph was saved for the combination of tensor shapes and non-tensor values used in the call. Retracing the tf.function from its Python code is no longer possible, so other calls raise an exception.\n",
    "\n",
    "Compared to the concrete functions in the .signatures dictionary, the restored tf.functions provide a richer, more Pythonic API to the restored model. However, non-Python environments that depend on signatures cannot access this API.\n",
    "\n",
    "### Basic fine-tuning\n",
    "Variable objects are available, and we can backpropagate through the imported functions. That is enough to fine-tune (i.e. retrain) a SavedModel in simple cases."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer = tf.optimizers.SGD(0.05)\n",
    "\n",
    "def train_step():\n",
    "  with tf.GradientTape() as tape:\n",
    "    loss = (10. - imported(tf.constant(2.))) ** 2\n",
    "  variables = tape.watched_variables()\n",
    "  grads = tape.gradient(loss, variables)\n",
    "  optimizer.apply_gradients(zip(grads, variables))\n",
    "  return loss\n",
    "\n",
    "for _ in range(10):\n",
    "  # v converges to 5 and the loss approaches 0\n",
    "  print(\"loss={:.2f} v={:.2f}\".format(train_step(), imported.v.numpy()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### General fine-tuning\n",
    "SavedModels from Keras provide more details than a plain __call__ to address more advanced cases of fine-tuning. TensorFlow Hub recommends providing the following, if applicable, in SavedModels shared for the purpose of fine-tuning:\n",
    "- If the model uses dropout or another technique in which the forward pass differs between training and inference (such as batch normalization), the __call__ method takes an optional, Python-valued training= argument that defaults to False but can be set to True.\n",
    "- Next to the __call__ attribute, there are .variables and .trainable_variables attributes with the corresponding lists of variables. A variable that was originally trainable but is meant to be frozen during fine-tuning is omitted from .trainable_variables.\n",
    "- For the sake of frameworks like Keras that represent weight regularizers as attributes of layers or sub-models, there can also be a .regularization_losses attribute. It holds a list of zero-argument functions whose values are meant to be added to the total loss.\n",
    "\n",
    "Going back to the initial MobileNet example, we can see some of these in action:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loaded = tf.saved_model.load(\"/tmp/mobilenet/1/\")\n",
    "print(\"MobileNet has {} trainable variables: {}, ...\".format(\n",
    "          len(loaded.trainable_variables),\n",
    "          \", \".join([v.name for v in loaded.trainable_variables[:5]])))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "trainable_variable_ids = {id(v) for v in loaded.trainable_variables}\n",
    "non_trainable_variables = [v for v in loaded.variables\n",
    "                           if id(v) not in trainable_variable_ids]\n",
    "print(\"MobileNet also has {} non-trainable variables: {}, ...\".format(\n",
    "          len(non_trainable_variables),\n",
    "          \", \".join([v.name for v in non_trainable_variables[:3]])))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Control flow in SavedModels\n",
    "Anything that can go into a tf.function can go into a SavedModel. With [AutoGraph](https://tensorflow.google.cn/guide/function), this includes conditional logic that depends on Tensors, specified with regular Python control flow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@tf.function(input_signature=[tf.TensorSpec([], tf.int32)])\n",
    "def control_flow(x):\n",
    "  if x < 0:\n",
    "    tf.print(\"Invalid!\")\n",
    "  else:\n",
    "    tf.print(x % 3)\n",
    "\n",
    "to_export = tf.Module()\n",
    "to_export.control_flow = control_flow\n",
    "tf.saved_model.save(to_export, \"/tmp/control_flow\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "imported = tf.saved_model.load(\"/tmp/control_flow\")\n",
    "imported.control_flow(tf.constant(-1))\n",
    "imported.control_flow(tf.constant(2))\n",
    "imported.control_flow(tf.constant(3))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## SavedModels from Estimators\n",
    "Estimators export SavedModels through [tf.Estimator.export_saved_model](https://tensorflow.google.cn/api_docs/python/tf/estimator/Estimator#export_saved_model). See the [Estimator guide](https://tensorflow.google.cn/guide/estimator) for details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_column = tf.feature_column.numeric_column(\"x\")\n",
    "estimator = tf.estimator.LinearClassifier(feature_columns=[input_column])\n",
    "\n",
    "def input_fn():\n",
    "  return tf.data.Dataset.from_tensor_slices(\n",
    "    ({\"x\": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)\n",
    "estimator.train(input_fn)\n",
    "\n",
    "serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n",
    "  tf.feature_column.make_parse_example_spec([input_column]))\n",
    "export_path = estimator.export_saved_model(\n",
    "  \"/tmp/from_estimator/\", serving_input_fn)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This SavedModel accepts serialized tf.train.Example protocol buffers, which are useful for serving. But we can also load it with tf.saved_model.load and run it from Python."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "imported = tf.saved_model.load(export_path)\n",
    "\n",
    "def predict(x):\n",
    "  example = tf.train.Example()\n",
    "  example.features.feature[\"x\"].float_list.value.extend([x])\n",
    "  return imported.signatures[\"predict\"](\n",
    "    examples=tf.constant([example.SerializeToString()]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(predict(1.5))\n",
    "print(predict(3.5))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[tf.estimator.export.build_raw_serving_input_receiver_fn](https://tensorflow.google.cn/api_docs/python/tf/estimator/export/build_raw_serving_input_receiver_fn) allows you to create input functions which take raw tensors rather than [tf.train.Examples](https://tensorflow.google.cn/api_docs/python/tf/train/Example).\n",
    "\n",
    "## Load a SavedModel in C++\n",
    "The C++ version of the SavedModel [loader](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/saved_model/loader.h) provides an API to load a SavedModel from a path, while allowing SessionOptions and RunOptions. You have to specify the tags associated with the graph to be loaded. The loaded version of a SavedModel is referred to as a SavedModelBundle, and contains the MetaGraphDef and the session within which it is loaded.\n",
    "```c++\n",
    "const string export_dir = ...\n",
    "SavedModelBundle bundle;\n",
    "...\n",
    "LoadSavedModel(session_options, run_options, export_dir, {kSavedModelTagTrain},\n",
    "               &bundle);\n",
    "```\n",
    "\n",
    "## Details of the SavedModel command line interface\n",
    "You can use the SavedModel command line interface (CLI) to inspect and execute a SavedModel. For example, you can use the CLI to inspect the model's SignatureDefs. The CLI enables you to quickly confirm that the input tensor dtypes and shapes match the model. Moreover, if you want to test your model, you can use the CLI to do a sanity check by passing in sample inputs in various formats (for example, Python expressions) and then fetching the output.\n",
    "### Installing the SavedModel CLI\n",
    "Broadly speaking, you can install TensorFlow in either of the following two ways:\n",
    "- By installing a pre-built TensorFlow binary.\n",
    "- By building TensorFlow from source code.\n",
    "\n",
    "If you installed TensorFlow through a pre-built TensorFlow binary, then the SavedModel CLI is already installed on your system at path bin/saved_model_cli.\n",
    "\n",
    "If you built TensorFlow from source code, you must run the following additional command to build saved_model_cli:\n",
    "```sh\n",
    "bazel build tensorflow/python/tools:saved_model_cli\n",
    "```\n",
    "\n",
    "### Overview of commands\n",
    "The SavedModel CLI supports the following two commands on a SavedModel:\n",
    "- show: shows the computations available from a SavedModel.\n",
    "- run: runs a computation from a SavedModel.\n",
    "\n",
    "#### The show command\n",
    "A SavedModel contains one or more model variants (technically, [v1.MetaGraphDefs](https://tensorflow.google.cn/api_docs/python/tf/compat/v1/MetaGraphDef)), identified by their tag-sets. To serve a model, you might wonder what kind of SignatureDefs are in each model variant, and what their inputs and outputs are. The show command lets you examine the contents of a SavedModel in hierarchical order. Here is the syntax:\n",
    "```txt\n",
    "usage: saved_model_cli show [-h] --dir DIR [--all]\n",
    "[--tag_set TAG_SET] [--signature_def SIGNATURE_DEF_KEY]\n",
    "```\n",
    "For example, the following command shows all available tag-sets in the SavedModel:\n",
    "```sh\n",
    "saved_model_cli show --dir /tmp/saved_model_dir\n",
    "The given SavedModel contains the following tag-sets:\n",
    "serve\n",
    "serve, gpu\n",
    "```\n",
    "The following command shows all available SignatureDef keys for a tag-set:\n",
    "```sh\n",
    "saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve\n",
    "The given SavedModel `MetaGraphDef` contains `SignatureDefs` with the\n",
    "following keys:\n",
    "SignatureDef key: \"classify_x2_to_y3\"\n",
    "SignatureDef key: \"classify_x_to_y\"\n",
    "SignatureDef key: \"regress_x2_to_y3\"\n",
    "SignatureDef key: \"regress_x_to_y\"\n",
    "SignatureDef key: \"regress_x_to_y2\"\n",
    "SignatureDef key: \"serving_default\"\n",
    "```\n",
    "\n",
    "If there are multiple tags in the tag-set, you must specify all of them, separated by commas. For example:\n",
    "```sh\n",
    "saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve,gpu\n",
    "```\n",
    "\n",
    "To show all inputs and outputs TensorInfo for a specific SignatureDef, pass the SignatureDef key to the --signature_def option. This is very useful when you want to know the tensor key values, and the dtypes and shapes of the input tensors, for executing the computation graph later. For example:\n",
    "```sh\n",
    "saved_model_cli show --dir \\\n",
    "/tmp/saved_model_dir --tag_set serve --signature_def serving_default\n",
    "The given SavedModel SignatureDef contains the following input(s):\n",
    "  inputs['x'] tensor_info:\n",
    "      dtype: DT_FLOAT\n",
    "      shape: (-1, 1)\n",
    "      name: x:0\n",
    "The given SavedModel SignatureDef contains the following output(s):\n",
    "  outputs['y'] tensor_info:\n",
    "      dtype: DT_FLOAT\n",
    "      shape: (-1, 1)\n",
    "      name: y:0\n",
    "Method name is: tensorflow/serving/predict\n",
    "```\n",
    "\n",
    "To show all available information in the SavedModel, use the --all option. For example:\n",
    "```sh\n",
    "saved_model_cli show --dir /tmp/saved_model_dir --all\n",
    "MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:\n",
    "\n",
    "signature_def['classify_x2_to_y3']:\n",
    "  The given SavedModel SignatureDef contains the following input(s):\n",
    "    inputs['inputs'] tensor_info:\n",
    "        dtype: DT_FLOAT\n",
    "        shape: (-1, 1)\n",
    "        name: x2:0\n",
    "  The given SavedModel SignatureDef contains the following output(s):\n",
    "    outputs['scores'] tensor_info:\n",
    "        dtype: DT_FLOAT\n",
    "        shape: (-1, 1)\n",
    "        name: y3:0\n",
    "  Method name is: tensorflow/serving/classify\n",
    "\n",
    "...\n",
    "\n",
    "signature_def['serving_default']:\n",
    "  The given SavedModel SignatureDef contains the following input(s):\n",
    "    inputs['x'] tensor_info:\n",
    "        dtype: DT_FLOAT\n",
    "        shape: (-1, 1)\n",
    "        name: x:0\n",
    "  The given SavedModel SignatureDef contains the following output(s):\n",
    "    outputs['y'] tensor_info:\n",
    "        dtype: DT_FLOAT\n",
    "        shape: (-1, 1)\n",
    "        name: y:0\n",
    "  Method name is: tensorflow/serving/predict\n",
    "```\n",
    "\n",
    "#### The run command\n",
    "Invoke the run command to run a graph computation, passing inputs and then displaying (and optionally saving) the outputs. Here is the syntax:\n",
    "```txt\n",
    "usage: saved_model_cli run [-h] --dir DIR --tag_set TAG_SET --signature_def\n",
    "                           SIGNATURE_DEF_KEY [--inputs INPUTS]\n",
    "                           [--input_exprs INPUT_EXPRS]\n",
    "                           [--input_examples INPUT_EXAMPLES] [--outdir OUTDIR]\n",
    "                           [--overwrite] [--tf_debug]\n",
    "```\n",
    "The run command provides the following three ways to pass inputs to the model:\n",
    "- The --inputs option lets you pass numpy ndarrays in files.\n",
    "- The --input_exprs option lets you pass Python expressions.\n",
    "- The --input_examples option lets you pass tf.train.Example protos.\n",
    "\n",
    "**--inputs**:\n",
    "\n",
    "To pass input data in files, specify the --inputs option, which takes the following general format:\n",
    "```sh\n",
    "--inputs <INPUTS>\n",
    "```\n",
    "where INPUTS is in one of the following formats:\n",
    "- &#60;input_key&#62;=&#60;filename&#62;\n",
    "- &#60;input_key&#62;=&#60;filename&#62;[&#60;variable_name&#62;]\n",
    "\n",
    "You may pass multiple inputs. If you do, use a semicolon to separate each of them.\n",
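    "For instance, passing two inputs could look like the following (a sketch; the key and file names here are illustrative):\n",
    "```sh\n",
    "--inputs \"x=batch.npz[x];y=labels.npy\"\n",
    "```\n",
    "\n",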
    "\n",
    "saved_model_cli uses numpy.load to load the filename. The filename may be in any of the following formats:\n",
    "- .npy\n",
    "- .npz\n",
    "- pickle format\n",
    "\n",
    "A .npy file always contains a numpy ndarray. Therefore, when loading from a .npy file, the content is assigned directly to the specified input tensor. If you specify a variable_name with that .npy file, the variable_name is ignored and a warning is issued.\n",
    "\n",
    "When loading from a .npz (zip) file, you may optionally specify a variable_name to identify the variable within the zip file to load for the input tensor key. If you do not specify a variable_name, the SavedModel CLI checks that only one file is included in the zip file and loads it for the specified input tensor key.\n",
    "\n",
    "When loading from a pickle file, if no variable_name is specified in the square brackets, whatever is inside the pickle file is passed to the specified input tensor key. Otherwise, the SavedModel CLI assumes a dictionary is stored in the pickle file, and the value corresponding to the variable_name is used.\n",
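    "As a sketch of these file formats, the following Python snippet writes one input file of each kind that --inputs could load (the file and key names here are illustrative):\n",
    "```python\n",
    "import os\n",
    "import pickle\n",
    "import tempfile\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "out_dir = tempfile.mkdtemp()\n",
    "\n",
    "# .npy: a single ndarray, assigned directly to the input tensor\n",
    "np.save(os.path.join(out_dir, 'x.npy'), np.ones((1, 2), dtype=np.float32))\n",
    "\n",
    "# .npz: several named arrays; select one with x=batch.npz[x]\n",
    "np.savez(os.path.join(out_dir, 'batch.npz'),\n",
    "         x=np.zeros((1, 2)), y=np.ones((1, 2)))\n",
    "\n",
    "# pickle: a dict mapping variable names to values\n",
    "with open(os.path.join(out_dir, 'inputs.pkl'), 'wb') as f:\n",
    "    pickle.dump({'x': np.arange(4.0)}, f)\n",
    "\n",
    "print(sorted(os.listdir(out_dir)))  # ['batch.npz', 'inputs.pkl', 'x.npy']\n",
    "```\n",
    "\n",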
    "\n",
    "**--input_exprs**\n",
    "\n",
    "To pass inputs through Python expressions, specify the --input_exprs option. This can be useful when you don't have data files lying around, but still want to sanity check the model with some simple inputs that match the dtypes and shapes of the model's SignatureDefs. For example:\n",
    "```sh\n",
    "`<input_key>=[[1],[2],[3]]`\n",
    "```\n",
    "\n",
    "In addition to Python expressions, you may also pass numpy functions. For example:\n",
    "```sh\n",
    "`<input_key>=np.ones((32,32,3))`\n",
    "```\n",
    "Note that the numpy module is already available to you as np.\n",
    "\n",
    "**--input_examples**\n",
    "\n",
    "To pass tf.train.Example as inputs, specify the --input_examples option. For each input key, it takes a list of dictionaries, where each dictionary is an instance of tf.train.Example. The dictionary keys are the features, and the values are the value lists for each feature. For example:\n",
    "```sh\n",
    "`<input_key>=[{\"age\":[22,24],\"education\":[\"BS\",\"MS\"]}]`\n",
    "```\n",
    "\n",
    "**Saving output**\n",
    "\n",
    "By default, the SavedModel CLI writes output to stdout. If you pass a directory to the --outdir option, the outputs are saved as .npy files named after the output tensor keys under the given directory.\n",
    "\n",
    "Use --overwrite to overwrite existing output files."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
