{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "mHDxn9VHjxKn"
      },
      "source": [
        "##### Copyright 2019 The TensorFlow Authors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "cellView": "form",
        "colab": {},
        "colab_type": "code",
        "id": "3x19oys5j89H"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "hFDUpbtvv_3u"
      },
      "source": [
        "# Saving and loading Keras models"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "V94_3U2k9rWV"
      },
      "source": [
        "<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/save_and_serialize\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n",
        "  </td>\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/save_and_serialize.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
        "  </td>\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/save_and_serialize.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
        "  </td>\n",
        "  <td>\n",
        "    <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/save_and_serialize.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "_9W6rw0T8m8V"
      },
      "source": [
        "## Introduction\n",
        "\n",
        "A Keras model consists of multiple components:\n",
        "\n",
        "- An architecture, or configuration, which specifies what layers the model contains, and how they're connected.\n",
        "- A set of weights values (the \"state of the model\").\n",
        "- An optimizer (defined by compiling the model).\n",
        "- A set of losses and metrics (defined by compiling the model or calling `add_loss()` or `add_metric()`).\n",
        "\n",
        "The Keras API makes it possible to save all of these pieces to disk at once, or to selectively save only some of them:\n",
        "\n",
        "- Saving everything into a single archive in the TensorFlow SavedModel format (or in the older Keras H5 format). This is the standard practice.\n",
        "- Saving the architecture / configuration only, typically as a JSON file.\n",
        "- Saving the weights values only. This is generally used when training the model.\n",
        "\n",
        "Let's take a look at each of these options: when would you use one or the other? How do they work?"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "6soUbInX_4vy"
      },
      "source": [
        "## The short answer to saving \u0026 loading\n",
        "\n",
        "The TL;DR:\n",
        "\n",
        "**Saving a Keras model:**\n",
        "\n",
        "```python\n",
        "model = ...  # Get model (Sequential, Functional Model, or Model subclass)\n",
        "model.save('path/to/location')\n",
        "```\n",
        "\n",
        "**Loading the model back:**\n",
        "\n",
        "```python\n",
        "from tensorflow import keras\n",
        "model = keras.models.load_model('path/to/location')\n",
        "```\n",
        "\n",
        "Now, let's look at the details."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "OLoG__M9wklG"
      },
      "source": [
        "## Setup"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "PNVkewAnk8Hw"
      },
      "outputs": [],
      "source": [
        "!pip install -U tf-nightly"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "WG2b0_BaADkT"
      },
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "from tensorflow import keras"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "6dZOzKILruaJ"
      },
      "source": [
        "## Whole-model saving \u0026 loading\n",
        "\n",
        "You can save an entire model to a single artifact. It will include:\n",
        "\n",
        "- The model's architecture/config\n",
        "- The model's weight values (which were learned during training)\n",
        "- The model's compilation information (if `compile()` was called)\n",
        "- The optimizer and its state, if any (this enables you to restart training where you left off)\n",
        "\n",
        "#### APIs\n",
        "\n",
        "- `model.save()` or `tf.keras.models.save_model()`\n",
        "- `tf.keras.models.load_model()`\n",
        "\n",
        "There are two formats you can use to save an entire model to disk: **the TensorFlow SavedModel format**, and **the older Keras H5 format**. The recommended format is SavedModel. It is the default when you use `model.save()`.\n",
        "You can switch to the H5 format by:\n",
        "\n",
        "- Passing `save_format='h5'` to `save()`.\n",
        "- Passing a filename that ends in `.h5` or `.keras` to `save()`."
      ]
    },
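    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "h5FmtSwitchEx"
      },
      "source": [
        "As a minimal sketch (the trivial model and file names below are only for illustration), both of the following calls produce the H5 format:\n",
        "\n",
        "```python\n",
        "from tensorflow import keras\n",
        "\n",
        "# Build a trivial model for illustration.\n",
        "inputs = keras.Input(shape=(4,))\n",
        "outputs = keras.layers.Dense(1)(inputs)\n",
        "model = keras.Model(inputs, outputs)\n",
        "\n",
        "model.save('my_model.h5')                   # inferred from the `.h5` suffix\n",
        "model.save('my_model_2', save_format='h5')  # set explicitly\n",
        "```"
      ]
    },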
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "Lt2jQepyFFtd"
      },
      "source": [
        "### SavedModel format\n",
        "\n",
        "**Example:**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "3wqmi0E9CTEz"
      },
      "outputs": [],
      "source": [
        "def get_model():\n",
        "  # Create a simple model.\n",
        "  inputs = keras.Input(shape=(32,))\n",
        "  outputs = keras.layers.Dense(1)(inputs)\n",
        "  model = keras.Model(inputs, outputs)\n",
        "  model.compile(optimizer='adam', loss='mean_squared_error')\n",
        "  return model\n",
        "\n",
        "model = get_model()\n",
        "\n",
        "# Train the model.\n",
        "test_input = np.random.random((128, 32))\n",
        "test_target = np.random.random((128, 1))\n",
        "model.fit(test_input, test_target)\n",
        "\n",
        "# Calling `save('my_model')` creates a SavedModel folder `my_model`.\n",
        "model.save('my_model')\n",
        "\n",
        "# It can be used to reconstruct the model identically.\n",
        "reconstructed_model = keras.models.load_model('my_model')\n",
        "\n",
        "# Let's check:\n",
        "np.testing.assert_allclose(\n",
        "  model.predict(test_input),\n",
        "  reconstructed_model.predict(test_input))\n",
        "\n",
        "# The reconstructed model is already compiled and has retained the optimizer\n",
        "# state, so training can resume:\n",
        "reconstructed_model.fit(test_input, test_target)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "I5y4up5oCa6n"
      },
      "source": [
        "#### What the SavedModel contains\n",
        "\n",
        "Calling `model.save('my_model')` creates a folder named `my_model`, containing the following:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "n0XIHNb3ClgV"
      },
      "outputs": [],
      "source": [
        "!ls my_model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "oNt0pRRlCtZI"
      },
      "source": [
        "The model architecture and training configuration (including the optimizer, losses, and metrics) are stored in `saved_model.pb`. The weights are saved in the `variables/` directory.\n",
        "\n",
        "For detailed information on the SavedModel format, see the [SavedModel guide (*The SavedModel format on disk*)](https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk).\n",
        "\n",
        "\n",
        "#### How SavedModel handles custom objects\n",
        "\n",
        "When saving the model and its layers, the SavedModel format stores the class name, **call function**, losses, and weights (and the config, if implemented). The call function defines the computation graph of the model/layer.\n",
        "\n",
        "In the absence of the model/layer config, the call function is used to create a model that behaves like the original model, which can be trained, evaluated, and used for inference.\n",
        "\n",
        "Nevertheless, it is always a good practice to define the `get_config` and `from_config` methods when writing a custom model or layer class. This allows you to easily update the computation later if needed. See the section about [Custom objects](save_and_serialize.ipynb#custom-objects) for more information.\n",
        "\n",
        "Below is an example of what happens when loading custom layers from the SavedModel format **without** overwriting the config methods."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "tobiwLoLExHF"
      },
      "outputs": [],
      "source": [
        "class CustomModel(keras.Model):\n",
        "  def __init__(self, hidden_units):\n",
        "    super(CustomModel, self).__init__()\n",
        "    self.dense_layers = [keras.layers.Dense(u) for u in hidden_units]\n",
        "  def call(self, inputs):\n",
        "    x = inputs\n",
        "    for layer in self.dense_layers:\n",
        "      x = layer(x)\n",
        "    return x\n",
        "\n",
        "model = CustomModel([16, 16, 10])\n",
        "# Build the model by calling it\n",
        "input_arr = tf.random.uniform((1, 5))\n",
        "outputs = model(input_arr)\n",
        "model.save('my_model')\n",
        "\n",
        "# Delete the custom-defined model class to ensure that the loader does not have\n",
        "# access to it.\n",
        "del CustomModel\n",
        "\n",
        "loaded = keras.models.load_model('my_model')\n",
        "np.testing.assert_allclose(loaded(input_arr), outputs)\n",
        "\n",
        "print(\"Original model:\", model)\n",
        "print(\"Loaded model:\", loaded)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "1P5AUjv9Jfz4"
      },
      "source": [
        "As seen in the example above, the loader dynamically creates a new model class that acts like the original model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "nQteZrQfRzMd"
      },
      "source": [
        "### Keras H5 format\n",
        "\n",
        "Keras also supports saving a single HDF5 file containing the model's architecture, weights values, and `compile()` information. It is a lightweight alternative to SavedModel.\n",
        "\n",
        "**Example:**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "0h8P7IwaDyHO"
      },
      "outputs": [],
      "source": [
        "model = get_model()\n",
        "\n",
        "# Train the model.\n",
        "test_input = np.random.random((128, 32))\n",
        "test_target = np.random.random((128, 1))\n",
        "model.fit(test_input, test_target)\n",
        "\n",
        "# Calling `save('my_h5_model.h5')` creates an h5 file `my_h5_model.h5`.\n",
        "model.save('my_h5_model.h5')\n",
        "\n",
        "# It can be used to reconstruct the model identically.\n",
        "reconstructed_model = keras.models.load_model('my_h5_model.h5')\n",
        "\n",
        "# Let's check:\n",
        "np.testing.assert_allclose(\n",
        "  model.predict(test_input),\n",
        "  reconstructed_model.predict(test_input))\n",
        "\n",
        "# The reconstructed model is already compiled and has retained the optimizer\n",
        "# state, so training can resume:\n",
        "reconstructed_model.fit(test_input, test_target)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "y7LUzyZVD2kE"
      },
      "source": [
        "\n",
        "#### Limitations\n",
        "\n",
        "Compared to the SavedModel format, there are two things that don't get included in the H5 file:\n",
        "\n",
        "- **External losses \u0026 metrics** added via `model.add_loss()` \u0026 `model.add_metric()` are not saved (unlike SavedModel). If you have such losses \u0026 metrics on your model and you want to resume training, you need to add these losses back yourself after loading the model. Note that this does not apply to losses/metrics created *inside* layers via `self.add_loss()` \u0026 `self.add_metric()`. As long as the layer gets loaded, these losses \u0026 metrics are kept, since they are part of the `call` method of the layer.\n",
        "- The **computation graph of custom objects** such as custom layers is not included in the saved file. At loading time, Keras will need access to the Python classes/functions of these objects in order to reconstruct the model. See [Custom objects](save_and_serialize.ipynb#custom-objects).\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "yqZ3RJQ9r2ZX"
      },
      "source": [
        "## Saving the architecture\n",
        "\n",
        "The model's configuration (or architecture) specifies what layers the model contains, and how these layers are connected*. If you have the configuration of a model, then the model can be created with a freshly initialized state for the weights and no compilation information.\n",
        "\n",
        "*Note this only applies to models defined using the Functional or Sequential APIs, not subclassed models."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "BYsCChEfLabt"
      },
      "source": [
        "### Configuration of a Sequential model or Functional API model\n",
        "\n",
        "These types of models are explicit graphs of layers: their configuration is always available in a structured form.\n",
        "\n",
        "#### APIs\n",
        "\n",
        "- `get_config()` and `from_config()`\n",
        "- `to_json()` and `tf.keras.models.model_from_json()`"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "QCqwWH4nMGno"
      },
      "source": [
        "#### `get_config()` and `from_config()`\n",
        "\n",
        "Calling `config = model.get_config()` will return a Python dict containing the configuration of the model. The same model can then be reconstructed via `Sequential.from_config(config)` (for a `Sequential` model) or `Model.from_config(config)` (for a Functional API model).\n",
        "\n",
        "The same workflow also works for any serializable layer.\n",
        "\n",
        "**Layer example:**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "3nLu0w5jMJxl"
      },
      "outputs": [],
      "source": [
        "layer = keras.layers.Dense(3, activation='relu')\n",
        "layer_config = layer.get_config()\n",
        "new_layer = keras.layers.Dense.from_config(layer_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "ot6XY7BXMLyq"
      },
      "source": [
        "**Sequential model example:**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "iFR7Dp9DMM0i"
      },
      "outputs": [],
      "source": [
        "model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])\n",
        "config = model.get_config()\n",
        "new_model = keras.Sequential.from_config(config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "TAyd4ALBMNGg"
      },
      "source": [
        "**Functional model example:**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "3q8bWStfMOI_"
      },
      "outputs": [],
      "source": [
        "inputs = keras.Input((32,))\n",
        "outputs = keras.layers.Dense(1)(inputs)\n",
        "model = keras.Model(inputs, outputs)\n",
        "config = model.get_config()\n",
        "new_model = keras.Model.from_config(config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "nZ-zLw1YMPwS"
      },
      "source": [
        "#### `to_json()` and `tf.keras.models.model_from_json()`\n",
        "\n",
        "This is similar to `get_config` / `from_config`, except it turns the model into a JSON string, which can then be loaded without the original model class. It is also specific to models; it isn't meant for layers.\n",
        "\n",
        "**Example:**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "mP6EzTY7MWEF"
      },
      "outputs": [],
      "source": [
        "model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])\n",
        "json_config = model.to_json()\n",
        "new_model = keras.models.model_from_json(json_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "-G924qw34J3D"
      },
      "source": [
        "### Custom objects \n",
        "\n",
        "**Models and layers**\n",
        "\n",
        "The architecture of subclassed models and layers is defined in the methods `__init__` and `call`. These are considered Python bytecode, which cannot be serialized into a JSON-compatible config*.\n",
        "\n",
        "In order to save/load a model with custom-defined layers, or a subclassed model, you should overwrite the `get_config` and optionally `from_config` methods. Additionally, you should register the custom object so that Keras is aware of it.\n",
        "\n",
        "*you could try serializing the bytecode (e.g. via `pickle`), but it's completely unsafe and means your model cannot be loaded on a different system.\n",
        "\n",
        "**Custom functions**\n",
        "\n",
        "Custom-defined functions (e.g. activation, loss, or initialization) do not need a `get_config` method. The function name is sufficient for loading as long as it is registered as a custom object."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "wbQH7MQB9Coj"
      },
      "source": [
        "#### Defining the config methods\n",
        "\n",
        "Specifications:\n",
        "\n",
        "* `get_config` should return a JSON-serializable dictionary in order to be compatible with the Keras architecture- and model-saving APIs.\n",
        "* `from_config(config)` (`classmethod`) should return a new layer or model object that is created from the config. The default implementation returns `cls(**config)`.\n",
        "\n",
        "Example:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "jYEYC6E_9zPI"
      },
      "outputs": [],
      "source": [
        "class CustomLayer(keras.layers.Layer):\n",
        "  def __init__(self, a):\n",
        "    super(CustomLayer, self).__init__()\n",
        "    self.var = tf.Variable(a, name='var_a')\n",
        "  def call(self, inputs, training=False):\n",
        "    if training:\n",
        "      return inputs * self.var\n",
        "    else:\n",
        "      return inputs\n",
        "  def get_config(self):\n",
        "    return {'a': self.var.numpy()}\n",
        "\n",
        "  # There's actually no need to define `from_config` here, since returning\n",
        "  # `cls(**config)` is the default behavior.\n",
        "  @classmethod\n",
        "  def from_config(cls, config):\n",
        "    return cls(**config)\n",
        "\n",
        "layer = CustomLayer(5)\n",
        "layer.var.assign(2)\n",
        "\n",
        "serialized_layer = keras.layers.serialize(layer)\n",
        "new_layer = keras.layers.deserialize(serialized_layer, custom_objects={'CustomLayer': CustomLayer})"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "DE1yAqgJ9ED4"
      },
      "source": [
        "#### Registering the custom object\n",
        "\n",
        "Keras keeps a note of which class generated the config. From the example above, `tf.keras.layers.serialize` generates a serialized form of the custom layer:\n",
        "\n",
        "```\n",
        "{'class_name': 'CustomLayer', 'config': {'a': 2}}\n",
        "```\n",
        "\n",
        "Keras keeps a master list of all built-in layer, model, optimizer, and metric classes, which is used to find the correct class to call `from_config`. If the class can't be found, then an error is raised (`ValueError: Unknown layer`). There are a few ways to register custom classes to this list:\n",
        "\n",
        "1. Setting the `custom_objects` argument in the loading function (see the example in the \"Defining the config methods\" section above).\n",
        "2. `tf.keras.utils.custom_object_scope` or `tf.keras.utils.CustomObjectScope`\n",
        "3. `tf.keras.utils.register_keras_serializable`"
      ]
    },
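    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "regKerasSerEx"
      },
      "source": [
        "For instance, option 3 can be sketched as follows (the function `my_activation` and the package name `MyPackage` below are hypothetical):\n",
        "\n",
        "```python\n",
        "import tensorflow as tf\n",
        "from tensorflow import keras\n",
        "\n",
        "# Register the function so Keras can find it by name at loading time.\n",
        "@keras.utils.register_keras_serializable(package='MyPackage')\n",
        "def my_activation(x):\n",
        "  return tf.nn.relu(x) ** 2\n",
        "```\n",
        "\n",
        "Once registered this way, the object can be deserialized without passing `custom_objects` to the loading function."
      ]
    },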
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "4egrFCizO6S3"
      },
      "source": [
        "#### Custom layer and function example"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "tC6NKUGKPCCn"
      },
      "outputs": [],
      "source": [
        "class CustomLayer(keras.layers.Layer):\n",
        "    def __init__(self, units=32, **kwargs):\n",
        "        super(CustomLayer, self).__init__(**kwargs)\n",
        "        self.units = units\n",
        "\n",
        "    def build(self, input_shape):\n",
        "        self.w = self.add_weight(\n",
        "            shape=(input_shape[-1], self.units),\n",
        "            initializer=\"random_normal\",\n",
        "            trainable=True,\n",
        "        )\n",
        "        self.b = self.add_weight(\n",
        "            shape=(self.units,), initializer=\"random_normal\", trainable=True\n",
        "        )\n",
        "\n",
        "    def call(self, inputs):\n",
        "        return tf.matmul(inputs, self.w) + self.b\n",
        "\n",
        "    def get_config(self):\n",
        "        config = super(CustomLayer, self).get_config()\n",
        "        config.update({\"units\": self.units})\n",
        "        return config\n",
        "\n",
        "def custom_activation(x):\n",
        "  return tf.nn.tanh(x) ** 2\n",
        "\n",
        "\n",
        "# Make a model with the CustomLayer and custom_activation\n",
        "inputs = keras.Input((32,))\n",
        "x = CustomLayer(32)(inputs)\n",
        "outputs = keras.layers.Activation(custom_activation)(x)\n",
        "model = keras.Model(inputs, outputs)\n",
        "\n",
        "# Retrieve the config\n",
        "config = model.get_config()\n",
        "\n",
        "# At loading time, register the custom objects with a `custom_object_scope`:\n",
        "custom_objects = {'CustomLayer': CustomLayer,\n",
        "                  'custom_activation': custom_activation}\n",
        "with keras.utils.custom_object_scope(custom_objects):\n",
        "  new_model = keras.Model.from_config(config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "AQriJnZkP-8w"
      },
      "source": [
        "### In-memory model cloning\n",
        "\n",
        "You can also do in-memory cloning of a model via `tf.keras.models.clone_model()`. This is equivalent to getting the config then recreating the model from its config (so it does not preserve compilation information or layer weights values).\n",
        "\n",
        "**Example**:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "BDghBVLTQAd8"
      },
      "outputs": [],
      "source": [
        "with keras.utils.custom_object_scope(custom_objects):\n",
        "  new_model = keras.models.clone_model(model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "wwCxkE6RyyPy"
      },
      "source": [
        "## Saving \u0026 loading only the model's weights values\n",
        "\n",
        "You can choose to only save \u0026 load a model's weights. This can be useful if:\n",
        "\n",
        "- You only need the model for inference: in this case you won't need to restart training, so you don't need the compilation information or optimizer state.\n",
        "- You are doing transfer learning: in this case you will be training a new model reusing the state of a prior model, so you don't need the compilation information of the prior model.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "Ssn25Yc0PilJ"
      },
      "source": [
        "### APIs for in-memory weight transfer\n",
        "\n",
        "Weights can be copied between different objects by using `get_weights` and `set_weights`:\n",
        "\n",
        "* `tf.keras.layers.Layer.get_weights()`: Returns a list of numpy arrays. \n",
        "* `tf.keras.layers.Layer.set_weights()`: Sets the model weights to the values in the `weights` argument. \n",
        "\n",
        "Examples below.\n",
        "\n",
        "\n",
        "***Transferring weights from one layer to another, in memory***\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "Bq29sTYXVk-t"
      },
      "outputs": [],
      "source": [
        "def create_layer():\n",
        "  layer = keras.layers.Dense(64, activation='relu', name='dense_2')\n",
        "  layer.build((None, 784))\n",
        "  return layer\n",
        "\n",
        "layer_1 = create_layer()\n",
        "layer_2 = create_layer()\n",
        "\n",
        "# Copy weights from layer 1 to layer 2\n",
        "layer_2.set_weights(layer_1.get_weights())"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "mQFlw6QUQoTZ"
      },
      "source": [
        "***Transferring weights from one model to another model with a compatible architecture, in memory***"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "tcP1GS9YQovn"
      },
      "outputs": [],
      "source": [
        "# Create a simple functional model\n",
        "inputs = keras.Input(shape=(784,), name='digits')\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_2')(x)\n",
        "outputs = keras.layers.Dense(10, name='predictions')(x)\n",
        "functional_model = keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp')\n",
        "\n",
        "# Define a subclassed model with the same architecture\n",
        "class SubclassedModel(keras.Model):\n",
        "  def __init__(self, output_dim, name=None):\n",
        "    super(SubclassedModel, self).__init__(name=name)\n",
        "    self.output_dim = output_dim\n",
        "    self.dense_1 = keras.layers.Dense(64, activation='relu', name='dense_1')\n",
        "    self.dense_2 = keras.layers.Dense(64, activation='relu', name='dense_2')\n",
        "    self.dense_3 = keras.layers.Dense(output_dim, name='predictions')\n",
        "  def call(self, inputs):\n",
        "    x = self.dense_1(inputs)\n",
        "    x = self.dense_2(x)\n",
        "    x = self.dense_3(x)\n",
        "    return x\n",
        "  def get_config(self):\n",
        "    return {'output_dim': self.output_dim, 'name': self.name}\n",
        "\n",
        "subclassed_model = SubclassedModel(10)\n",
        "# Call the subclassed model once to create the weights.\n",
        "subclassed_model(tf.ones((1, 784)))\n",
        "\n",
        "# Copy weights from functional_model to subclassed_model.\n",
        "subclassed_model.set_weights(functional_model.get_weights())\n",
        "\n",
        "assert len(functional_model.weights) == len(subclassed_model.weights)\n",
        "for a, b in zip(functional_model.weights, subclassed_model.weights):\n",
        "  np.testing.assert_allclose(a.numpy(), b.numpy()) "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "460K5GuHQpCO"
      },
      "source": [
        "***The case of stateless layers***\n",
        "\n",
        "Because stateless layers do not change the order or number of weights, models can have compatible architectures even if there are extra/missing stateless layers."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "apu_HaGTQpnd"
      },
      "outputs": [],
      "source": [
        "inputs = keras.Input(shape=(784,), name='digits')\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_2')(x)\n",
        "outputs = keras.layers.Dense(10, name='predictions')(x)\n",
        "functional_model = keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp')\n",
        "\n",
        "inputs = keras.Input(shape=(784,), name='digits')\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_2')(x)\n",
        "\n",
        "# Add a dropout layer, which does not contain any weights.\n",
        "x = keras.layers.Dropout(.5)(x)\n",
        "outputs = keras.layers.Dense(10, name='predictions')(x)\n",
        "functional_model_with_dropout = keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp')\n",
        "\n",
        "functional_model_with_dropout.set_weights(functional_model.get_weights())"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "opP1KROHwWwd"
      },
      "source": [
        "### APIs for saving weights to disk \u0026 loading them back\n",
        "\n",
        "Weights can be saved to disk by calling `model.save_weights` in the following formats:\n",
        "* TensorFlow Checkpoint \n",
        "* HDF5\n",
        "\n",
        "The default format for `model.save_weights` is TensorFlow checkpoint. There are two ways to specify the save format:\n",
        "\n",
        "1. `save_format` argument: Set the value to `save_format=\"tf\"` or  `save_format=\"h5\"`.\n",
        "2. `path` argument: If the path ends with `.h5` or `.hdf5`, then the HDF5 format is used. Other suffixes will result in a TensorFlow checkpoint unless `save_format` is set.\n",
        "\n",
        "There is also an option of retrieving weights as in-memory numpy arrays. Each API has their pros and cons which are detailed below .\n"
      ]
    },
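    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "saveFormatSketchMd"
      },
      "source": [
        "As a minimal sketch of the two ways to choose a format (the file names here are illustrative, not part of the API):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "saveFormatSketchCode"
      },
      "outputs": [],
      "source": [
        "demo_model = keras.Sequential(\n",
        "    [keras.Input(shape=(784,), name='digits'),\n",
        "     keras.layers.Dense(10, name='predictions')])\n",
        "\n",
        "# 1. Explicit `save_format` argument:\n",
        "demo_model.save_weights('demo_ckpt', save_format='tf')        # TF Checkpoint\n",
        "demo_model.save_weights('demo_weights_h5', save_format='h5')  # HDF5\n",
        "\n",
        "# 2. Format inferred from the path suffix:\n",
        "demo_model.save_weights('demo_weights.h5')  # `.h5`/`.hdf5` suffix -> HDF5\n",
        "demo_model.save_weights('demo_ckpt_2')      # any other suffix -> TF Checkpoint"
      ]
    },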
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "532ycSCl4AjJ"
      },
      "source": [
        "### TF Checkpoint format\n",
        "\n",
        "**Example**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "2EjsoCVv7ddG"
      },
      "outputs": [],
      "source": [
        "# Runnable example \n",
        "sequential_model = keras.Sequential(\n",
        "    [keras.Input(shape=(784,), name='digits'),\n",
        "     keras.layers.Dense(64, activation='relu', name='dense_1'), \n",
        "     keras.layers.Dense(64, activation='relu', name='dense_2'),\n",
        "     keras.layers.Dense(10, name='predictions')])\n",
        "sequential_model.save_weights('ckpt')\n",
        "load_status = sequential_model.load_weights('ckpt')\n",
        "\n",
        "# `assert_consumed` can be used as validation that all variable values have been \n",
        "# restored from the checkpoint. See `tf.train.Checkpoint.restore` for other\n",
        "# methods in the Status object.\n",
        "load_status.assert_consumed()  "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "nvAEkNnH7S-9"
      },
      "source": [
        "#### Format details\n",
        "\n",
        "The TensorFlow Checkpoint format saves and restores the weights using object attribute names. For instance, consider the `tf.keras.layers.Dense` layer. The layer contains two weights: `dense.kernel` and `dense.bias`. When the layer is saved to the `tf` format, the resulting checkpoint contains the keys `\"kernel\"` and `\"bias\"` and their corresponding weight values. For more information see [\"Loading mechanics\" in the Checkpoint guide](https://www.tensorflow.org/guide/checkpoint#loading_mechanics).\n",
        "\n",
        "Note that attribute/graph edge is named after **the name used in parent object, not the name of the variable**. Consider the `CustomLayer` in the example below. The variable `CustomLayer.var` is saved with `\"var\"` as part of key, not `\"var_a\"`.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "vxe-dhSR9Y8X"
      },
      "outputs": [],
      "source": [
        "class CustomLayer(keras.layers.Layer):\n",
        "  def __init__(self, a):\n",
        "    self.var = tf.Variable(a, name='var_a')\n",
        "  \n",
        "layer = CustomLayer(5)\n",
        "layer_ckpt = tf.train.Checkpoint(layer=layer).save('custom_layer')\n",
        "\n",
        "ckpt_reader = tf.train.load_checkpoint(layer_ckpt)\n",
        "\n",
        "ckpt_reader.get_variable_to_dtype_map()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "pPvtq_Nd8jdA"
      },
      "source": [
        "#### Transfer learning example\n",
        "Essentially, as long as two models have the same architecture, they are able to share the same checkpoint. Example:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "b0GQaCbf_dLQ"
      },
      "outputs": [],
      "source": [
        "inputs = keras.Input(shape=(784,), name='digits')\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_2')(x)\n",
        "outputs = keras.layers.Dense(10, name='predictions')(x)\n",
        "functional_model = keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp')\n",
        "\n",
        "# Extract a portion of the functional model defined in the Setup section.\n",
        "# The following lines produce a new model that excludes the final output\n",
        "# layer of the functional model.\n",
        "pretrained = keras.Model(functional_model.inputs, \n",
        "                            functional_model.layers[-1].input,\n",
        "                            name='pretrained_model')\n",
        "# Randomly assign \"trained\" weights.\n",
        "for w in pretrained.weights:\n",
        "  w.assign(tf.random.normal(w.shape))\n",
        "pretrained.save_weights('pretrained_ckpt')\n",
        "pretrained.summary()\n",
        "\n",
        "# Assume this is a separate program where only 'pretrained_ckpt' exists.\n",
        "# Create a new functional model with a different output dimension.\n",
        "inputs = keras.Input(shape=(784,), name='digits')\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_2')(x)\n",
        "outputs = keras.layers.Dense(5, name='predictions')(x)\n",
        "model = keras.Model(inputs=inputs, outputs=outputs, name='new_model')\n",
        "\n",
        "# Load the weights from pretrained_ckpt into model. \n",
        "model.load_weights('pretrained_ckpt')\n",
        "\n",
        "# Check that all of the pretrained weights have been loaded.\n",
        "for a, b in zip(pretrained.weights, model.weights):\n",
        "  np.testing.assert_allclose(a.numpy(), b.numpy())\n",
        "\n",
        "print('\\n','-'*50)\n",
        "model.summary()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "Zdnpw_6PEsAN"
      },
      "outputs": [],
      "source": [
        "# Example 2: Sequential model\n",
        "# Recreate the pretrained model, and load the saved weights.\n",
        "inputs = keras.Input(shape=(784,), name='digits')\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
        "x = keras.layers.Dense(64, activation='relu', name='dense_2')(x)\n",
        "pretrained_model = keras.Model(inputs=inputs, outputs=x, name='pretrained')\n",
        "\n",
        "# Sequential example:\n",
        "model = keras.Sequential(\n",
        "    [pretrained_model, keras.layers.Dense(5, name='predictions')])\n",
        "model.summary()\n",
        "\n",
        "pretrained_model.load_weights('pretrained_ckpt')\n",
        "\n",
        "# Warning! Calling `model.load_weights('pretrained_ckpt')` won't throw an error,\n",
        "# but will *not* work as expected. If you inspect the weights, you'll see that\n",
        "# none of the weights will have loaded. `pretrained_model.load_weights()` is the\n",
        "# correct method to call."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "09aVEG1VEOZe"
      },
      "source": [
        "It is generally recommended to stick to the same API for building models. If you\n",
        "switch between Sequential and Functional, or Functional and subclassed, etc., then always rebuild the pre-trained model and load the pre-trained weights to that model."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "RcgCqEkFGU5j"
      },
      "source": [
        "The next question is, how can weights be saved and loaded to different models if the model architectures are quite different? The solution is to use `tf.train.Checkpoint` to save and restore the exact layers/variables. Example:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "yxnpGl1-GTpF"
      },
      "outputs": [],
      "source": [
        "# Create a subclassed model that essentially uses functional_model's first \n",
        "# and last layers.\n",
        "# First, save the weights of functional_model's first and last dense layers. \n",
        "first_dense = functional_model.layers[1]\n",
        "last_dense = functional_model.layers[-1]\n",
        "ckpt_path = tf.train.Checkpoint(\n",
        "    dense=first_dense,  \n",
        "    kernel=last_dense.kernel, \n",
        "    bias=last_dense.bias).save('ckpt')\n",
        "\n",
        "# Define the subclassed model.\n",
        "class ContrivedModel(keras.Model):\n",
        "  def __init__(self):\n",
        "    super(ContrivedModel, self).__init__()\n",
        "    self.first_dense = keras.layers.Dense(64)\n",
        "    self.kernel = self.add_variable('kernel', shape=(64, 10))\n",
        "    self.bias = self.add_variable('bias', shape=(10,))\n",
        "\n",
        "  def call(self, inputs):\n",
        "    x = self.first_dense(inputs)\n",
        "    return tf.matmul(x, self.kernel) + self.bias\n",
        "\n",
        "model = ContrivedModel()\n",
        "# Call model on inputs to create the variables of the dense layer.\n",
        "_ = model(tf.ones((1, 784)))\n",
        "\n",
        "# Create a Checkpoint with the same structure as before, and load the weights.\n",
        "tf.train.Checkpoint(\n",
        "    dense=model.first_dense,  \n",
        "    kernel=model.kernel, \n",
        "    bias=model.bias).restore(ckpt_path).assert_consumed()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "lIi2UbmzLBn_"
      },
      "source": [
        "### HDF5 format\n",
        "\n",
        "The HDF5 format contains weights grouped by layer names. The weights are lists ordered by concatenating the list of trainable weights to the list of non-trainable weights (same as `layer.weights`). Thus, a model can use a hdf5 checkpoint if it has the same layers and trainable statuses as saved in the checkpoint. \n",
        "\n",
        "**Example**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "GeIaOLxYJoFq"
      },
      "outputs": [],
      "source": [
        "# Runnable example \n",
        "sequential_model = keras.Sequential(\n",
        "    [keras.Input(shape=(784,), name='digits'),\n",
        "     keras.layers.Dense(64, activation='relu', name='dense_1'), \n",
        "     keras.layers.Dense(64, activation='relu', name='dense_2'),\n",
        "     keras.layers.Dense(10, name='predictions')])\n",
        "sequential_model.save_weights('weights.h5')\n",
        "sequential_model.load_weights('weights.h5')"
      ]
    },
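    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "h5InspectSketchMd"
      },
      "source": [
        "To see the layer-name grouping described above, one can inspect the file directly with `h5py` (assuming `h5py` is installed, as it is wherever the HDF5 format works). This is a sketch for illustration; the file layout is an internal detail of the format:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "h5InspectSketchCode"
      },
      "outputs": [],
      "source": [
        "import h5py\n",
        "\n",
        "inspect_model = keras.Sequential(\n",
        "    [keras.Input(shape=(784,), name='digits'),\n",
        "     keras.layers.Dense(64, activation='relu', name='dense_1'),\n",
        "     keras.layers.Dense(10, name='predictions')])\n",
        "inspect_model.save_weights('inspect_weights.h5')\n",
        "\n",
        "with h5py.File('inspect_weights.h5', 'r') as f:\n",
        "  # Top-level groups are keyed by layer name.\n",
        "  print('layers:', list(f.keys()))\n",
        "  # Each group lists that layer's weight names (kernel, then bias).\n",
        "  print('dense_1 weights:', list(f['dense_1'].attrs['weight_names']))"
      ]
    },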
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "ZCUytX4zJ7vr"
      },
      "source": [
        "Note that changing `layer.trainable` may result in a different\n",
        "`layer.weights` ordering when the model contains nested layers."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "iQLrK3H-NCKF"
      },
      "outputs": [],
      "source": [
        "class NestedDenseLayer(keras.layers.Layer):\n",
        "  def __init__(self, units, name=None):\n",
        "    super(NestedDenseLayer, self).__init__(name=name)\n",
        "    self.dense_1 = keras.layers.Dense(units, name='dense_1')\n",
        "    self.dense_2 = keras.layers.Dense(units, name='dense_2')\n",
        "  def call(self, inputs):\n",
        "    return self.dense_2(self.dense_1(inputs))\n",
        "\n",
        "nested_model = keras.Sequential([keras.Input((784,)), NestedDenseLayer(10, 'nested')])\n",
        "variable_names = [v.name for v in nested_model.weights]\n",
        "print('variables: {}'.format(variable_names))\n",
        "\n",
        "print('\\nChanging trainable status of one of the nested layers...')\n",
        "nested_model.get_layer('nested').dense_1.trainable = False\n",
        "\n",
        "variable_names_2 = [v.name for v in nested_model.weights]\n",
        "print('\\nvariables: {}'.format(variable_names_2))\n",
        "print('variable ordering changed:', variable_names != variable_names_2)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "VO0IgR_SJoYc"
      },
      "source": [
        "#### Transfer learning example\n",
        "\n",
        "When loading pretrained weights from HDF5, it is recommended to load the weights into the original checkpointed model, and then extract the desired weights/layers into a new model. Example:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "rKXy9iYHQoz8"
      },
      "outputs": [],
      "source": [
        "def create_functional_model():\n",
        "  inputs = keras.Input(shape=(784,), name='digits')\n",
        "  x = keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)\n",
        "  x = keras.layers.Dense(64, activation='relu', name='dense_2')(x)\n",
        "  outputs = keras.layers.Dense(10, name='predictions')(x)\n",
        "  return keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp')\n",
        "functional_model = create_functional_model()  \n",
        "functional_model.save_weights('pretrained_weights.h5')\n",
        "\n",
        "# In a separate program:\n",
        "pretrained_model = create_functional_model()\n",
        "pretrained_model.load_weights('pretrained_weights.h5')\n",
        "\n",
        "# Create a new model by extracting layers from the original model:\n",
        "extracted_layers = pretrained_model.layers[:-1]\n",
        "extracted_layers.append(keras.layers.Dense(5, name='dense_3'))\n",
        "model = keras.Sequential(extracted_layers)\n",
        "model.summary()"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [],
      "name": "save_and_serialize.ipynb",
      "private_outputs": true,
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
