{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Writing custom layers and models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "tf.keras.backend.clear_session()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Layer class\n",
    "### Layers encapsulate a state (weights) and some computation\n",
    "The main data structure you'll work with is the Layer. A layer encapsulates both a state (the layer's \"weights\") and a transformation from inputs to outputs (a \"call\", the layer's forward pass).\n",
    "\n",
    "Here's a densely-connected layer. It has a state: the variables w and b."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor(\n",
      "[[-0.12447943  0.03716024  0.02213474 -0.11281435]\n",
      " [-0.12447943  0.03716024  0.02213474 -0.11281435]], shape=(2, 4), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "from tensorflow.keras import layers\n",
    "\n",
    "class Linear(layers.Layer):\n",
    "\n",
    "  def __init__(self, units=32, input_dim=32):\n",
    "    super(Linear, self).__init__()\n",
    "    w_init = tf.random_normal_initializer()\n",
    "    self.w = tf.Variable(initial_value=w_init(shape=(input_dim, units),\n",
    "                                              dtype='float32'),\n",
    "                         trainable=True)\n",
    "    b_init = tf.zeros_initializer()\n",
    "    self.b = tf.Variable(initial_value=b_init(shape=(units,),\n",
    "                                              dtype='float32'),\n",
    "                         trainable=True)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return tf.matmul(inputs, self.w) + self.b\n",
    "\n",
    "x = tf.ones((2, 2))\n",
    "linear_layer = Linear(4, 2)\n",
    "y = linear_layer(x)\n",
    "print(y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the weights w and b are automatically tracked by the layer upon being set as layer attributes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Linear(layers.Layer):\n",
    "\n",
    "  def __init__(self, units=32, input_dim=32):\n",
    "    super(Linear, self).__init__()\n",
    "    self.w = self.add_weight(shape=(input_dim, units),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "    self.b = self.add_weight(shape=(units,),\n",
    "                             initializer='zeros',\n",
    "                             trainable=True)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return tf.matmul(inputs, self.w) + self.b\n",
    "\n",
    "x = tf.ones((2, 2))\n",
    "linear_layer = Linear(4, 2)\n",
    "y = linear_layer(x)\n",
    "print(y)"
   ]
  },
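  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a minimal sketch, reusing the `linear_layer` instance from the cell above), the variables created via add_weight show up in both layer.weights and layer.trainable_weights:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Both w and b were created via add_weight, so the layer tracks them:\n",
    "assert len(linear_layer.weights) == 2\n",
    "assert len(linear_layer.trainable_weights) == 2\n",
    "print([v.shape for v in linear_layer.weights])"
   ]
  },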
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Layers can have non-trainable weights\n",
    "Besides trainable weights, you can add non-trainable weights to a layer as well. Such weights are not taken into account during backpropagation when you are training the layer. Here's how to add and use a non-trainable weight:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "[2. 2.]\n",
       "[4. 4.]\n",
       "weights: 1\n",
       "non-trainable weights: 1\n",
       "trainable weights: []\n"
     ]
    }
   ],
   "source": [
    "class ComputeSum(layers.Layer):\n",
    "\n",
    "  def __init__(self, input_dim):\n",
    "    super(ComputeSum, self).__init__()\n",
    "    self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),\n",
    "                             trainable=False)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    self.total.assign_add(tf.reduce_sum(inputs, axis=0))\n",
    "    return self.total\n",
    "\n",
    "x = tf.ones((2, 2))\n",
    "my_sum = ComputeSum(2)\n",
    "y = my_sum(x)\n",
    "print(y.numpy())\n",
    "y = my_sum(x)\n",
    "print(y.numpy())\n",
    "\n",
     "# It's part of layer.weights, but it gets categorized as a non-trainable weight:\n",
     "print('weights:', len(my_sum.weights))\n",
     "print('non-trainable weights:', len(my_sum.non_trainable_weights))\n",
     "\n",
     "# It's not included in the trainable weights:\n",
     "print('trainable weights:', my_sum.trainable_weights)"
   ]
  },
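  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because `total` is non-trainable, it is excluded from gradient computation. A minimal sketch (reusing the `my_sum` instance from the cell above) shows that tf.GradientTape, which only watches trainable variables by default, returns no gradient for it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# GradientTape only watches trainable variables by default, so the\n",
    "# gradient with respect to the non-trainable `total` is None:\n",
    "with tf.GradientTape() as tape:\n",
    "  y = my_sum(tf.ones((2, 2)))\n",
    "grads = tape.gradient(y, my_sum.weights)\n",
    "print(grads)"
   ]
  },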
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Best practice: deferring weight creation until the shape of the inputs is known\n",
     "In the Linear example above, our layer took an input_dim argument that was used to compute the shape of the weights w and b in __init__:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Linear(layers.Layer):\n",
    "\n",
    "  def __init__(self, units=32, input_dim=32):\n",
    "      super(Linear, self).__init__()\n",
    "      self.w = self.add_weight(shape=(input_dim, units),\n",
    "                               initializer='random_normal',\n",
    "                               trainable=True)\n",
    "      self.b = self.add_weight(shape=(units,),\n",
    "                               initializer='zeros',\n",
    "                               trainable=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In many cases, you may not know in advance the size of your inputs, and you would like to lazily create the weights when that value becomes known, some time after instantiating the layer. In the Keras API, we recommend creating layer weights in the build(input_shape) method of your layer. Like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Linear(layers.Layer):\n",
    "\n",
    "  def __init__(self, units=32):\n",
    "    super(Linear, self).__init__()\n",
    "    self.units = units\n",
    "\n",
    "  def build(self, input_shape):\n",
    "    self.w = self.add_weight(shape=(input_shape[-1], self.units),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "    self.b = self.add_weight(shape=(self.units,),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return tf.matmul(inputs, self.w) + self.b\n",
    "\n",
     "# The layer's __call__ method will automatically run build the first time it is called.\n",
     "# You now have a layer that's lazy and easy to use:\n",
     "linear_layer = Linear(32)  # At instantiation, we don't yet know the inputs this will be called on\n",
     "y = linear_layer(x)  # The layer's weights are created dynamically the first time the layer is called"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Layers are recursively composable\n",
     "If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights of the inner layer. \n",
     "\n",
     "We recommend creating such sublayers in the __init__ method (since the sublayers typically have a build method, they will be built when the outer layer gets built)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "weights: 6\n",
       "trainable weights: 6\n"
     ]
    }
   ],
   "source": [
     "# Let's reuse the Linear class with a build method that we defined above.\n",
    "\n",
    "class MLPBlock(layers.Layer):\n",
    "  def __init__(self):\n",
    "    super(MLPBlock, self).__init__()\n",
    "    self.linear_1 = Linear(32)\n",
    "    self.linear_2 = Linear(32)\n",
    "    self.linear_3 = Linear(1)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    x = self.linear_1(inputs)\n",
    "    x = tf.nn.relu(x)\n",
    "    x = self.linear_2(x)\n",
    "    x = tf.nn.relu(x)\n",
    "    return self.linear_3(x)\n",
    "\n",
    "\n",
    "mlp = MLPBlock()\n",
     "y = mlp(tf.ones(shape=(3, 64)))  # The first call to `mlp` will create the weights\n",
     "print('weights:', len(mlp.weights))\n",
     "print('trainable weights:', len(mlp.trainable_weights))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Layers recursively collect losses created during the forward pass\n",
     "When writing the call method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling self.add_loss(value):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
     "# A layer that creates an activity regularization loss\n",
    "class ActivityRegularizationLayer(layers.Layer):\n",
    "\n",
    "  def __init__(self, rate=1e-2):\n",
    "    super(ActivityRegularizationLayer, self).__init__()\n",
    "    self.rate = rate\n",
    "\n",
    "  def call(self, inputs):\n",
    "    self.add_loss(self.rate * tf.reduce_sum(inputs))\n",
    "    return inputs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "These losses (including those created by any inner layer) can be retrieved via layer.losses. This property is reset at the start of every __call__ to the top-level layer, so that layer.losses always contains the loss values created during the last forward pass."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "class OuterLayer(layers.Layer):\n",
    "\n",
    "  def __init__(self):\n",
    "    super(OuterLayer, self).__init__()\n",
    "    self.activity_reg = ActivityRegularizationLayer(1e-2)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return self.activity_reg(inputs)\n",
    "\n",
    "\n",
    "layer = OuterLayer()\n",
     "assert len(layer.losses) == 0  # No losses yet since the layer has never been called\n",
     "_ = layer(tf.zeros((1, 1)))\n",
     "assert len(layer.losses) == 1  # We created one loss value\n",
     "\n",
     "# `layer.losses` gets reset at the start of each __call__\n",
     "_ = layer(tf.zeros((1, 1)))\n",
     "assert len(layer.losses) == 1  # This is the loss created during the call above"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In addition, the losses property also contains regularization losses created for the weights of any inner layer:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[<tf.Tensor: shape=(), dtype=float32, numpy=0.001971811>]\n"
     ]
    }
   ],
   "source": [
    "class OuterLayer(layers.Layer):\n",
    "\n",
    "  def __init__(self):\n",
    "    super(OuterLayer, self).__init__()\n",
    "    self.dense = layers.Dense(32, kernel_regularizer=tf.keras.regularizers.l2(1e-3))\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return self.dense(inputs)\n",
    "\n",
    "\n",
    "layer = OuterLayer()\n",
    "_ = layer(tf.zeros((1, 1)))\n",
    "\n",
    "print(layer.losses)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# These losses should be taken into account when writing training loops, like this\n",
     "# (this sketch assumes `model` and `train_dataset` are already defined):\n",
     "\n",
     "# Instantiate an optimizer and a loss function.\n",
     "optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)\n",
     "loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n",
     "\n",
     "# Iterate over the batches of a dataset.\n",
     "for x_batch_train, y_batch_train in train_dataset:\n",
     "  with tf.GradientTape() as tape:\n",
     "    logits = model(x_batch_train)  # Logits for this minibatch\n",
     "    # Loss value for this minibatch\n",
     "    loss_value = loss_fn(y_batch_train, logits)\n",
     "    # Add the extra losses created during this forward pass\n",
     "    loss_value += sum(model.losses)\n",
     "\n",
     "  grads = tape.gradient(loss_value, model.trainable_weights)\n",
     "  optimizer.apply_gradients(zip(grads, model.trainable_weights))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "For a detailed guide about writing training loops, see the [guide to training and evaluation](https://tensorflow.google.cn/guide/keras/train_and_evaluate)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### You can optionally enable serialization on your layers\n",
     "If you need your custom layers to be serializable as part of a Functional model, you can optionally implement a get_config method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Linear(layers.Layer):\n",
    "\n",
    "  def __init__(self, units=32):\n",
    "    super(Linear, self).__init__()\n",
    "    self.units = units\n",
    "\n",
    "  def build(self, input_shape):\n",
    "    self.w = self.add_weight(shape=(input_shape[-1], self.units),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "    self.b = self.add_weight(shape=(self.units,),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return tf.matmul(inputs, self.w) + self.b\n",
    "\n",
    "  def get_config(self):\n",
    "    return {'units': self.units}\n",
    "\n",
    "\n",
     "# Now you can recreate the layer from its config:\n",
    "layer = Linear(64)\n",
    "config = layer.get_config()\n",
    "print(config)\n",
    "new_layer = Linear.from_config(config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Note that the __init__ method of the base Layer class takes some keyword arguments, in particular a name and a dtype.\n",
     "\n",
     "It's good practice to pass these arguments to the parent class in __init__ and to include them in the layer config:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Linear(layers.Layer):\n",
    "\n",
    "  def __init__(self, units=32, **kwargs):\n",
    "    super(Linear, self).__init__(**kwargs)\n",
    "    self.units = units\n",
    "\n",
    "  def build(self, input_shape):\n",
    "    self.w = self.add_weight(shape=(input_shape[-1], self.units),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "    self.b = self.add_weight(shape=(self.units,),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return tf.matmul(inputs, self.w) + self.b\n",
    "\n",
    "  def get_config(self):\n",
    "    config = super(Linear, self).get_config()\n",
    "    config.update({'units': self.units})\n",
    "    return config\n",
    "\n",
    "\n",
    "layer = Linear(64)\n",
    "config = layer.get_config()\n",
    "print(config)\n",
    "new_layer = Linear.from_config(config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "If you need more flexibility when deserializing the layer from its config, you can also override the from_config class method. This is the base implementation of from_config:\n",
     "\n",
     "To learn more about serialization and saving, see the [Guide to Saving and Serializing Models](https://tensorflow.google.cn/guide/keras/save_and_serialize)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Base implementation of from_config (defined as a classmethod on the Layer class):\n",
     "def from_config(cls, config):\n",
     "  return cls(**config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Privileged training argument in the call method\n",
     "Some layers, in particular the BatchNormalization layer and the Dropout layer, have different behaviors during training and inference.\n",
     "\n",
     "For such layers, it is standard practice to expose a training (boolean) argument in the call method. By exposing this argument in call, you enable the built-in training and evaluation loops (such as fit) to correctly use the layer in training and inference."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CustomDropout(layers.Layer):\n",
    "\n",
    "  def __init__(self, rate, **kwargs):\n",
    "    super(CustomDropout, self).__init__(**kwargs)\n",
    "    self.rate = rate\n",
    "\n",
    "  def call(self, inputs, training=None):\n",
    "    if training:\n",
     "      return tf.nn.dropout(inputs, rate=self.rate)\n",
    "    return inputs"
   ]
  },
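  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick usage sketch (reusing the `CustomDropout` layer defined above): with training=False the layer acts as the identity, while with training=True each entry is either zeroed or scaled up by 1 / (1 - rate):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = tf.ones((2, 4))\n",
    "dropout = CustomDropout(rate=0.5)\n",
    "\n",
    "# Inference: inputs pass through unchanged.\n",
    "print(dropout(x, training=False).numpy())\n",
    "\n",
    "# Training: each entry is either dropped to 0 or scaled to 1 / (1 - 0.5) = 2.\n",
    "print(dropout(x, training=True).numpy())"
   ]
  },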
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Building Models\n",
     "### The Model class\n",
     "In general, you will use the Layer class to define inner computation blocks, and will use the Model class to define the outer model, the object you will train. \n",
     "\n",
     "For instance, in a ResNet50 model, you would have several ResNet blocks subclassing Layer, and a single Model encompassing the entire ResNet50 network.\n",
     "\n",
     "The Model class has the same API as Layer, with the following differences:\n",
     "1. It exposes built-in training, evaluation, and prediction loops (model.fit(), model.evaluate(), model.predict()).\n",
     "2. It exposes the list of its inner layers, via the model.layers property.\n",
     "3. It exposes saving and serialization APIs.\n",
     "\n",
     "Effectively, the Layer class corresponds to what we refer to in the literature as a \"layer\" (as in \"convolution layer\" or \"recurrent layer\") or as a \"block\" (as in \"ResNet block\" or \"Inception block\"). Meanwhile, the Model class corresponds to what is referred to in the literature as a \"model\" (as in \"deep learning model\") or as a \"network\" (as in \"deep neural network\").\n",
     "\n",
     "For instance, we could use the block-composition pattern above to build a small ResNet-style Model that we could train with fit(), and that we could save with save_weights:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Note: `ResNetBlock`, `num_classes`, `dataset`, and `filepath` are placeholders\n",
     "# assumed to be defined elsewhere; this cell is an illustrative sketch.\n",
     "class ResNet(tf.keras.Model):\n",
     "\n",
     "    def __init__(self):\n",
     "        super(ResNet, self).__init__()\n",
     "        self.block_1 = ResNetBlock()\n",
     "        self.block_2 = ResNetBlock()\n",
     "        self.global_pool = layers.GlobalAveragePooling2D()\n",
     "        self.classifier = layers.Dense(num_classes)\n",
    "\n",
    "    def call(self, inputs):\n",
    "        x = self.block_1(inputs)\n",
    "        x = self.block_2(x)\n",
    "        x = self.global_pool(x)\n",
    "        return self.classifier(x)\n",
    "\n",
    "\n",
    "resnet = ResNet()\n",
    "dataset = ...\n",
    "resnet.fit(dataset, epochs=10)\n",
    "resnet.save_weights(filepath)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Putting it all together: an end-to-end example\n",
     "Here's what you've learned so far: \n",
     "1. A Layer encapsulates a state (created in __init__ or build) and some computation (defined in call).\n",
     "2. Layers can be recursively nested to create new, bigger computation blocks.\n",
     "3. Layers can create and track losses (typically regularization losses).\n",
     "4. The outer container, the thing you want to train, is a Model. A Model is just like a Layer, but with added training and serialization utilities.\n",
     "\n",
     "Let's put all of these things together into an end-to-end example:\n",
     "\n",
     "We're going to implement a Variational AutoEncoder (VAE) and train it on MNIST digits. Our VAE will be a subclass of Model, built as a nested composition of layers that subclass Layer. It will feature a regularization loss (KL divergence)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Start of epoch 0\n",
       "step 0: mean loss = tf.Tensor(0.3369199, shape=(), dtype=float32)\n",
       "step 100: mean loss = tf.Tensor(0.1264607, shape=(), dtype=float32)\n",
       "step 200: mean loss = tf.Tensor(0.099742204, shape=(), dtype=float32)\n",
       "step 300: mean loss = tf.Tensor(0.08947288, shape=(), dtype=float32)\n",
       "step 400: mean loss = tf.Tensor(0.084490895, shape=(), dtype=float32)\n",
       "step 500: mean loss = tf.Tensor(0.08107367, shape=(), dtype=float32)\n",
       "step 600: mean loss = tf.Tensor(0.07891624, shape=(), dtype=float32)\n",
       "step 700: mean loss = tf.Tensor(0.07729613, shape=(), dtype=float32)\n",
       "step 800: mean loss = tf.Tensor(0.07609133, shape=(), dtype=float32)\n",
       "step 900: mean loss = tf.Tensor(0.07506813, shape=(), dtype=float32)\n",
       "Start of epoch 1\n",
       "step 0: mean loss = tf.Tensor(0.074772224, shape=(), dtype=float32)\n",
       "step 100: mean loss = tf.Tensor(0.074108146, shape=(), dtype=float32)\n",
       "step 200: mean loss = tf.Tensor(0.0735916, shape=(), dtype=float32)\n",
       "step 300: mean loss = tf.Tensor(0.073125035, shape=(), dtype=float32)\n",
       "step 400: mean loss = tf.Tensor(0.072780505, shape=(), dtype=float32)\n",
       "step 500: mean loss = tf.Tensor(0.07237691, shape=(), dtype=float32)\n",
       "step 600: mean loss = tf.Tensor(0.07207258, shape=(), dtype=float32)\n",
       "step 700: mean loss = tf.Tensor(0.07178082, shape=(), dtype=float32)\n",
       "step 800: mean loss = tf.Tensor(0.07153615, shape=(), dtype=float32)\n",
       "step 900: mean loss = tf.Tensor(0.071272604, shape=(), dtype=float32)\n",
       "Start of epoch 2\n",
       "step 0: mean loss = tf.Tensor(0.0712009, shape=(), dtype=float32)\n",
       "step 100: mean loss = tf.Tensor(0.071029775, shape=(), dtype=float32)\n",
       "step 200: mean loss = tf.Tensor(0.070887916, shape=(), dtype=float32)\n",
       "step 300: mean loss = tf.Tensor(0.070735194, shape=(), dtype=float32)\n",
       "step 400: mean loss = tf.Tensor(0.07063067, shape=(), dtype=float32)\n",
       "step 500: mean loss = tf.Tensor(0.07048302, shape=(), dtype=float32)\n",
       "step 600: mean loss = tf.Tensor(0.07036756, shape=(), dtype=float32)\n",
       "step 700: mean loss = tf.Tensor(0.07024381, shape=(), dtype=float32)\n",
       "step 800: mean loss = tf.Tensor(0.07013905, shape=(), dtype=float32)\n",
       "step 900: mean loss = tf.Tensor(0.07000895, shape=(), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "class Sampling(layers.Layer):\n",
     "  \"\"\"Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.\"\"\"\n",
    "\n",
    "  def call(self, inputs):\n",
    "    z_mean, z_log_var = inputs\n",
    "    batch = tf.shape(z_mean)[0]\n",
    "    dim = tf.shape(z_mean)[1]\n",
    "    epsilon = tf.keras.backend.random_normal(shape=(batch, dim))\n",
    "    return z_mean + tf.exp(0.5 * z_log_var) * epsilon\n",
    "\n",
    "\n",
    "class Encoder(layers.Layer):\n",
     "  \"\"\"Maps MNIST digits to a triplet (z_mean, z_log_var, z).\"\"\"\n",
    "\n",
    "  def __init__(self,\n",
    "               latent_dim=32,\n",
    "               intermediate_dim=64,\n",
    "               name='encoder',\n",
    "               **kwargs):\n",
    "    super(Encoder, self).__init__(name=name, **kwargs)\n",
    "    self.dense_proj = layers.Dense(intermediate_dim, activation='relu')\n",
    "    self.dense_mean = layers.Dense(latent_dim)\n",
    "    self.dense_log_var = layers.Dense(latent_dim)\n",
    "    self.sampling = Sampling()\n",
    "\n",
    "  def call(self, inputs):\n",
    "    x = self.dense_proj(inputs)\n",
    "    z_mean = self.dense_mean(x)\n",
    "    z_log_var = self.dense_log_var(x)\n",
    "    z = self.sampling((z_mean, z_log_var))\n",
    "    return z_mean, z_log_var, z\n",
    "\n",
    "\n",
    "class Decoder(layers.Layer):\n",
     "  \"\"\"Converts z, the encoded digit vector, back into a readable digit.\"\"\"\n",
    "\n",
    "  def __init__(self,\n",
    "               original_dim,\n",
    "               intermediate_dim=64,\n",
    "               name='decoder',\n",
    "               **kwargs):\n",
    "    super(Decoder, self).__init__(name=name, **kwargs)\n",
    "    self.dense_proj = layers.Dense(intermediate_dim, activation='relu')\n",
    "    self.dense_output = layers.Dense(original_dim, activation='sigmoid')\n",
    "\n",
    "  def call(self, inputs):\n",
    "    x = self.dense_proj(inputs)\n",
    "    return self.dense_output(x)\n",
    "\n",
    "\n",
    "class VariationalAutoEncoder(tf.keras.Model):\n",
     "  \"\"\"Combines the encoder and decoder into an end-to-end model for training.\"\"\"\n",
    "\n",
    "  def __init__(self,\n",
    "               original_dim,\n",
    "               intermediate_dim=64,\n",
    "               latent_dim=32,\n",
    "               name='autoencoder',\n",
    "               **kwargs):\n",
    "    super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)\n",
    "    self.original_dim = original_dim\n",
    "    self.encoder = Encoder(latent_dim=latent_dim,\n",
    "                           intermediate_dim=intermediate_dim)\n",
    "    self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    z_mean, z_log_var, z = self.encoder(inputs)\n",
    "    reconstructed = self.decoder(z)\n",
     "    # Add KL divergence regularization loss.\n",
    "    kl_loss = - 0.5 * tf.reduce_mean(\n",
    "        z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)\n",
    "    self.add_loss(kl_loss)\n",
    "    return reconstructed\n",
    "\n",
    "original_dim = 784\n",
    "vae = VariationalAutoEncoder(original_dim, 64, 32)\n",
    "\n",
    "optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)\n",
    "mse_loss_fn = tf.keras.losses.MeanSquaredError()\n",
    "\n",
    "loss_metric = tf.keras.metrics.Mean()\n",
    "\n",
    "(x_train, _), _ = tf.keras.datasets.mnist.load_data()\n",
    "x_train = x_train.reshape(60000, 784).astype('float32') / 255\n",
    "\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices(x_train)\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)\n",
    "\n",
    "epochs = 3\n",
    "\n",
     "# Iterate over epochs.\n",
     "for epoch in range(epochs):\n",
     "  print('Start of epoch %d' % (epoch,))\n",
     "\n",
     "  # Iterate over the batches of the dataset.\n",
    "  for step, x_batch_train in enumerate(train_dataset):\n",
    "    with tf.GradientTape() as tape:\n",
    "      reconstructed = vae(x_batch_train)\n",
     "      # Compute reconstruction loss\n",
     "      loss = mse_loss_fn(x_batch_train, reconstructed)\n",
     "      loss += sum(vae.losses)  # Add KLD regularization loss\n",
    "\n",
    "    grads = tape.gradient(loss, vae.trainable_weights)\n",
    "    optimizer.apply_gradients(zip(grads, vae.trainable_weights))\n",
    "\n",
    "    loss_metric(loss)\n",
    "\n",
    "    if step % 100 == 0:\n",
     "      print('step %s: mean loss = %s' % (step, loss_metric.result()))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Note that since the VAE is subclassing Model, it features built-in training loops. So you could also have trained it like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "vae = VariationalAutoEncoder(784, 64, 32)\n",
    "\n",
    "optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)\n",
    "\n",
    "vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())\n",
    "vae.fit(x_train, x_train, epochs=3, batch_size=64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### The Functional API\n",
     "You can also build models using [the Functional API](https://tensorflow.google.cn/guide/keras/functional). Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match.\n",
     "\n",
     "For instance, the Functional API example below reuses the same Sampling layer we defined in the example above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "original_dim = 784\n",
    "intermediate_dim = 64\n",
    "latent_dim = 32\n",
    "\n",
     "# Define encoder model.\n",
    "original_inputs = tf.keras.Input(shape=(original_dim,), name='encoder_input')\n",
    "x = layers.Dense(intermediate_dim, activation='relu')(original_inputs)\n",
    "z_mean = layers.Dense(latent_dim, name='z_mean')(x)\n",
    "z_log_var = layers.Dense(latent_dim, name='z_log_var')(x)\n",
    "z = Sampling()((z_mean, z_log_var))\n",
    "encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name='encoder')\n",
    "\n",
     "# Define decoder model.\n",
    "latent_inputs = tf.keras.Input(shape=(latent_dim,), name='z_sampling')\n",
    "x = layers.Dense(intermediate_dim, activation='relu')(latent_inputs)\n",
    "outputs = layers.Dense(original_dim, activation='sigmoid')(x)\n",
    "decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name='decoder')\n",
    "\n",
     "# Define VAE model.\n",
    "outputs = decoder(z)\n",
    "vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name='vae')\n",
    "\n",
     "# Add KL divergence regularization loss.\n",
    "kl_loss = - 0.5 * tf.reduce_mean(\n",
    "    z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)\n",
    "vae.add_loss(kl_loss)\n",
    "\n",
     "# Train the model.\n",
    "optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)\n",
    "vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())\n",
    "vae.fit(x_train, x_train, epochs=3, batch_size=64)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
