{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# The Keras Functional API\n",
     "## Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "from tensorflow import keras\n",
    "from tensorflow.keras import layers\n",
    "\n",
     "tf.keras.backend.clear_session()  # For easy reset of notebook state"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Introduction\n",
     "The Keras functional API is a way to create models that are more flexible than the tf.keras.Sequential API. The functional API can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs.\n",
     "\n",
     "The main idea is that a deep learning model is usually a directed acyclic graph (DAG) of layers. The functional API is therefore a way to build graphs of layers.\n",
     "\n",
     "Consider the following model:\n",
    "```text\n",
    "(input: 784-dimensional vectors)\n",
    "       ↧\n",
    "[Dense (64 units, relu activation)]\n",
    "       ↧\n",
    "[Dense (64 units, relu activation)]\n",
    "       ↧\n",
    "[Dense (10 units, softmax activation)]\n",
    "       ↧\n",
    "(output: logits of a probability distribution over 10 classes)\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"mnist_model\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "input_1 (InputLayer)         [(None, 784)]             0         \n",
      "_________________________________________________________________\n",
      "dense (Dense)                (None, 64)                50240     \n",
      "_________________________________________________________________\n",
      "dense_1 (Dense)              (None, 64)                4160      \n",
      "_________________________________________________________________\n",
      "dense_2 (Dense)              (None, 10)                650       \n",
      "=================================================================\n",
      "Total params: 55,050\n",
      "Trainable params: 55,050\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
     "# This is a basic graph with three layers. To build this model using the functional API,\n",
     "# start by creating an input node. The shape of the data is set to a 784-dimensional vector.\n",
     "# Only the shape of each sample is specified; the batch size is always omitted.\n",
     "# The returned `inputs` contains information about the shape and dtype of the input data you feed to the model:\n",
     "inputs = keras.Input(shape=(784,))\n",
     "\n",
     "# Calling a layer on this inputs object creates a new node in the graph of layers:\n",
     "dense = layers.Dense(64, activation='relu')\n",
     "x = dense(inputs)\n",
     "\n",
     "# The \"layer call\" action is like drawing an arrow from \"inputs\" to the layer you created.\n",
     "# You \"pass\" the inputs to the dense layer and get x back. Add a few more layers to the graph:\n",
     "x = layers.Dense(64, activation='relu')(x)\n",
     "outputs = layers.Dense(10)(x)\n",
     "\n",
     "# At this point, you can create a Model by specifying its inputs and outputs in the graph of layers:\n",
     "model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')\n",
     "\n",
     "# Check out the model summary:\n",
     "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Training, evaluation, and inference\n",
     "Training, evaluation, and inference work exactly the same way for models built with the functional API as for Sequential models.\n",
     "\n",
     "Here, load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), then evaluate the model on the test data:\n",
     "\n",
     "> For more details on training and evaluation, see the [train and evaluate](https://tensorflow.google.cn/guide/keras/train_and_evaluate) guide."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 48000 samples, validate on 12000 samples\n",
      "Epoch 1/5\n",
      "48000/48000 [==============================] - 3s 59us/sample - loss: 0.3543 - accuracy: 0.8993 - val_loss: 0.1887 - val_accuracy: 0.9458\n",
      "Epoch 2/5\n",
      "48000/48000 [==============================] - 2s 33us/sample - loss: 0.1625 - accuracy: 0.9522 - val_loss: 0.1437 - val_accuracy: 0.9563\n",
      "Epoch 3/5\n",
      "48000/48000 [==============================] - 2s 33us/sample - loss: 0.1168 - accuracy: 0.9651 - val_loss: 0.1220 - val_accuracy: 0.9639\n",
      "Epoch 4/5\n",
      "48000/48000 [==============================] - 2s 33us/sample - loss: 0.0928 - accuracy: 0.9721 - val_loss: 0.1059 - val_accuracy: 0.9679\n",
      "Epoch 5/5\n",
      "48000/48000 [==============================] - 2s 35us/sample - loss: 0.0785 - accuracy: 0.9766 - val_loss: 0.1142 - val_accuracy: 0.9687\n",
      "10000/10000 - 0s - loss: 0.0992 - accuracy: 0.9719\n",
       "Test loss: 0.09916581261539832\n",
       "Test accuracy: 0.9719\n"
     ]
    }
   ],
   "source": [
    "(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n",
    "\n",
    "x_train = x_train.reshape(60000, 784).astype('float32') / 255\n",
    "x_test = x_test.reshape(10000, 784).astype('float32') / 255\n",
    "\n",
    "model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
    "              optimizer=keras.optimizers.RMSprop(),\n",
    "              metrics=['accuracy'])\n",
    "\n",
    "history = model.fit(x_train, y_train,\n",
    "                    batch_size=64,\n",
    "                    epochs=5,\n",
    "                    validation_split=0.2)\n",
    "\n",
    "test_scores = model.evaluate(x_test, y_test, verbose=2)\n",
     "print('Test loss:', test_scores[0])\n",
     "print('Test accuracy:', test_scores[1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Save and serialize\n",
     "Saving and serialization work the same way for models built with the functional API as for Sequential models. The standard way to save a functional model is to call model.save() to save the entire model as a single file. You can later recreate the same model from this file, even if the code that built the model is no longer available.\n",
     "\n",
     "This saved file includes the:\n",
     "1. model architecture\n",
     "2. model weight values\n",
     "3. model training config\n",
     "4. optimizer and its state\n",
     "\n",
     "> For more details, see the [save and serialize](https://tensorflow.google.cn/guide/keras/save_and_serialize) guide."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.save('path_to_my_model')\n",
    "del model\n",
    "model = keras.models.load_model('path_to_my_model')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Use the same graph of layers to define multiple models\n",
     "In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means a single graph of layers can be used to generate multiple models.\n",
     "\n",
     "In the example below, the same stack of layers is used to instantiate two models: an encoder model that turns image inputs into 16-dimensional vectors, and an end-to-end autoencoder model for training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"encoder\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "img (InputLayer)             [(None, 28, 28, 1)]       0         \n",
      "_________________________________________________________________\n",
      "conv2d (Conv2D)              (None, 26, 26, 16)        160       \n",
      "_________________________________________________________________\n",
      "conv2d_1 (Conv2D)            (None, 24, 24, 32)        4640      \n",
      "_________________________________________________________________\n",
      "max_pooling2d (MaxPooling2D) (None, 8, 8, 32)          0         \n",
      "_________________________________________________________________\n",
      "conv2d_2 (Conv2D)            (None, 6, 6, 32)          9248      \n",
      "_________________________________________________________________\n",
      "conv2d_3 (Conv2D)            (None, 4, 4, 16)          4624      \n",
      "_________________________________________________________________\n",
      "global_max_pooling2d (Global (None, 16)                0         \n",
      "=================================================================\n",
      "Total params: 18,672\n",
      "Trainable params: 18,672\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n",
      "Model: \"autoencoder\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "img (InputLayer)             [(None, 28, 28, 1)]       0         \n",
      "_________________________________________________________________\n",
      "conv2d (Conv2D)              (None, 26, 26, 16)        160       \n",
      "_________________________________________________________________\n",
      "conv2d_1 (Conv2D)            (None, 24, 24, 32)        4640      \n",
      "_________________________________________________________________\n",
      "max_pooling2d (MaxPooling2D) (None, 8, 8, 32)          0         \n",
      "_________________________________________________________________\n",
      "conv2d_2 (Conv2D)            (None, 6, 6, 32)          9248      \n",
      "_________________________________________________________________\n",
      "conv2d_3 (Conv2D)            (None, 4, 4, 16)          4624      \n",
      "_________________________________________________________________\n",
      "global_max_pooling2d (Global (None, 16)                0         \n",
      "_________________________________________________________________\n",
      "reshape (Reshape)            (None, 4, 4, 1)           0         \n",
      "_________________________________________________________________\n",
      "conv2d_transpose (Conv2DTran (None, 6, 6, 16)          160       \n",
      "_________________________________________________________________\n",
      "conv2d_transpose_1 (Conv2DTr (None, 8, 8, 32)          4640      \n",
      "_________________________________________________________________\n",
      "up_sampling2d (UpSampling2D) (None, 24, 24, 32)        0         \n",
      "_________________________________________________________________\n",
      "conv2d_transpose_2 (Conv2DTr (None, 26, 26, 16)        4624      \n",
      "_________________________________________________________________\n",
      "conv2d_transpose_3 (Conv2DTr (None, 28, 28, 1)         145       \n",
      "=================================================================\n",
      "Total params: 28,241\n",
      "Trainable params: 28,241\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "encoder_input = keras.Input(shape=(28, 28, 1), name='img')\n",
    "x = layers.Conv2D(16, 3, activation='relu')(encoder_input)\n",
    "x = layers.Conv2D(32, 3, activation='relu')(x)\n",
    "x = layers.MaxPooling2D(3)(x)\n",
    "x = layers.Conv2D(32, 3, activation='relu')(x)\n",
    "x = layers.Conv2D(16, 3, activation='relu')(x)\n",
    "encoder_output = layers.GlobalMaxPooling2D()(x)\n",
    "\n",
    "encoder = keras.Model(encoder_input, encoder_output, name='encoder')\n",
    "encoder.summary()\n",
    "\n",
    "x = layers.Reshape((4, 4, 1))(encoder_output)\n",
    "x = layers.Conv2DTranspose(16, 3, activation='relu')(x)\n",
    "x = layers.Conv2DTranspose(32, 3, activation='relu')(x)\n",
    "x = layers.UpSampling2D(3)(x)\n",
    "x = layers.Conv2DTranspose(16, 3, activation='relu')(x)\n",
    "decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)\n",
    "\n",
    "autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')\n",
    "autoencoder.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## All models are callable, just like layers\n",
     "You can treat any model as if it were a layer by calling it on an Input or on the output of another layer.\n",
     "\n",
     "By calling a model you aren't just reusing its architecture, you're also reusing its weights.\n",
     "\n",
     "To see this in action, here's a different take on the autoencoder example that creates an encoder model and a decoder model, then chains them in two calls to obtain the autoencoder model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')\n",
    "x = layers.Conv2D(16, 3, activation='relu')(encoder_input)\n",
    "x = layers.Conv2D(32, 3, activation='relu')(x)\n",
    "x = layers.MaxPooling2D(3)(x)\n",
    "x = layers.Conv2D(32, 3, activation='relu')(x)\n",
    "x = layers.Conv2D(16, 3, activation='relu')(x)\n",
    "encoder_output = layers.GlobalMaxPooling2D()(x)\n",
    "\n",
    "encoder = keras.Model(encoder_input, encoder_output, name='encoder')\n",
    "encoder.summary()\n",
    "\n",
    "decoder_input = keras.Input(shape=(16,), name='encoded_img')\n",
    "x = layers.Reshape((4, 4, 1))(decoder_input)\n",
    "x = layers.Conv2DTranspose(16, 3, activation='relu')(x)\n",
    "x = layers.Conv2DTranspose(32, 3, activation='relu')(x)\n",
    "x = layers.UpSampling2D(3)(x)\n",
    "x = layers.Conv2DTranspose(16, 3, activation='relu')(x)\n",
    "decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)\n",
    "\n",
    "decoder = keras.Model(decoder_input, decoder_output, name='decoder')\n",
    "decoder.summary()\n",
    "\n",
    "autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')\n",
    "encoded_img = encoder(autoencoder_input)\n",
    "decoded_img = decoder(encoded_img)\n",
    "autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')\n",
    "autoencoder.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "As you can see, models can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is ensembling. For example, here's how to combine a set of models into a single model that averages their predictions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_model():\n",
    "  inputs = keras.Input(shape=(128,))\n",
    "  outputs = layers.Dense(1)(inputs)\n",
    "  return keras.Model(inputs, outputs)\n",
    "\n",
    "model1 = get_model()\n",
    "model2 = get_model()\n",
    "model3 = get_model()\n",
    "\n",
    "inputs = keras.Input(shape=(128,))\n",
    "y1 = model1(inputs)\n",
    "y2 = model2(inputs)\n",
    "y3 = model3(inputs)\n",
    "outputs = layers.average([y1, y2, y3])\n",
    "ensemble_model = keras.Model(inputs=inputs, outputs=outputs)"
   ]
  },
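  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick usage sketch (assumed example, not from the original guide): the\n",
    "# ensemble's output should equal the mean of the three sub-model outputs,\n",
    "# since layers.average computes an element-wise mean.\n",
    "x = tf.ones((2, 128))\n",
    "manual_avg = (model1(x) + model2(x) + model3(x)) / 3\n",
    "print(np.allclose(ensemble_model(x), manual_avg, atol=1e-5))  # True"
   ]
  },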
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Manipulate complex graph topologies\n",
     "### Models with multiple inputs and outputs\n",
     "The functional API makes it easy to manipulate multiple inputs and outputs, which cannot be handled with the Sequential API.\n",
     "\n",
     "For example, if you're building a system for ranking custom issue tickets by priority and routing them to the correct department, the model will have three inputs:\n",
     "1. the title of the ticket (text input)\n",
     "2. the text body of the ticket (text input)\n",
     "3. any tags added by the user (categorical input)\n",
     "\n",
     "It will have two outputs:\n",
     "1. the priority score between 0 and 1\n",
     "2. the department that should handle the ticket"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work.\n",
      "Train on 1280 samples\n",
      "Epoch 1/2\n",
      "1280/1280 [==============================] - 5s 4ms/sample - loss: 1.3257 - priority_loss: 0.7071 - dep_loss: 3.0931\n",
      "Epoch 2/2\n",
      "1280/1280 [==============================] - 1s 598us/sample - loss: 1.3158 - priority_loss: 0.6988 - dep_loss: 3.0853\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x1c3e2b5d708>"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "num_tags = 12  # Number of unique issue tags\n",
     "num_words = 10000  # Size of vocabulary obtained when preprocessing text data\n",
     "num_dep = 4  # Number of departments\n",
     "\n",
     "title_input = keras.Input(shape=(None,), name='title')  # Variable-length sequence of ints\n",
     "body_input = keras.Input(shape=(None,), name='body')  # Variable-length sequence of ints\n",
     "tags_input = keras.Input(shape=(num_tags,), name='tags')  # Binary vectors of size num_tags\n",
     "\n",
     "# Embed each word in the title into a 64-dimensional vector\n",
     "title_features = layers.Embedding(num_words, 64)(title_input)\n",
     "# Embed each word in the body into a 64-dimensional vector\n",
     "body_features = layers.Embedding(num_words, 64)(body_input)\n",
     "\n",
     "# Reduce the sequence of embedded words in the title into a single 128-dimensional vector\n",
     "title_features = layers.LSTM(128)(title_features)\n",
     "# Reduce the sequence of embedded words in the body into a single 128-dimensional vector\n",
     "body_features = layers.LSTM(128)(body_features)\n",
     "\n",
     "# Merge all available features into a single large vector via concatenation\n",
     "x = layers.concatenate([title_features, body_features, tags_input])\n",
     "\n",
     "# Stick a logistic regression for priority prediction on top of the features\n",
     "priority_pred = layers.Dense(1, name='priority')(x)\n",
     "# Stick a department classifier on top of the features\n",
     "dep_pred = layers.Dense(num_dep, name='dep')(x)\n",
     "\n",
     "# Instantiate an end-to-end model predicting both priority and department\n",
     "model = keras.Model(inputs=[title_input, body_input, tags_input], outputs=[priority_pred, dep_pred])\n",
     "\n",
     "# Plot the model (requires pydot and graphviz to be installed)\n",
     "keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)\n",
     "\n",
     "# When compiling this model, you can assign different losses to each output.\n",
     "# You can even assign different weights to each loss, to modulate their\n",
     "# contribution to the total training loss.\n",
     "model.compile(optimizer=keras.optimizers.RMSprop(1e-3),\n",
     "              loss={'priority': keras.losses.BinaryCrossentropy(from_logits=True),\n",
     "                    'dep': keras.losses.CategoricalCrossentropy(from_logits=True)},\n",
     "              loss_weights=[1., 0.2])\n",
     "\n",
     "# Train the model by passing NumPy arrays of inputs and targets:\n",
     "title_data = np.random.randint(num_words, size=(1280, 10))\n",
     "body_data = np.random.randint(num_words, size=(1280, 10))\n",
     "tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')\n",
     "\n",
     "priority_targets = np.random.random(size=(1280, 1))\n",
     "dept_targets = np.random.randint(2, size=(1280, num_dep))\n",
     "\n",
     "model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},\n",
     "          {'priority': priority_targets, 'dep': dept_targets},\n",
     "          epochs=2,\n",
     "          batch_size=32)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### A toy ResNet model\n",
     "In addition to models with multiple inputs and outputs, the functional API also makes it easy to manipulate non-linear connectivity topologies, that is, models whose layers are not connected sequentially.\n",
     "\n",
     "The Sequential API cannot handle this. A common use case is residual connections ([residual connections](https://mp.weixin.qq.com/s?__biz=MzA3NDIyMjM1NA==&mid=2649029645&idx=1&sn=75b494ec181fee3e8756bb0fa119e7ce&chksm=87134270b064cb66aea66e73b4a6dc283d5750cfa9d331015424f075ba117e38f857d2f25d07&token=1097604967&lang=zh_CN#rd)). To demonstrate, let's build a toy [ResNet model](https://blog.csdn.net/u013709270/article/details/78838875) for CIFAR10:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"toy_resnet\"\n",
      "__________________________________________________________________________________________________\n",
      "Layer (type)                    Output Shape         Param #     Connected to                     \n",
      "==================================================================================================\n",
      "img (InputLayer)                [(None, 32, 32, 3)]  0                                            \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_11 (Conv2D)              (None, 30, 30, 32)   896         img[0][0]                        \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_12 (Conv2D)              (None, 28, 28, 64)   18496       conv2d_11[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_2 (MaxPooling2D)  (None, 9, 9, 64)     0           conv2d_12[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_13 (Conv2D)              (None, 9, 9, 64)     36928       max_pooling2d_2[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_14 (Conv2D)              (None, 9, 9, 64)     36928       conv2d_13[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "add_2 (Add)                     (None, 9, 9, 64)     0           conv2d_14[0][0]                  \n",
      "                                                                 max_pooling2d_2[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_15 (Conv2D)              (None, 9, 9, 64)     36928       add_2[0][0]                      \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_16 (Conv2D)              (None, 9, 9, 64)     36928       conv2d_15[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "add_3 (Add)                     (None, 9, 9, 64)     0           conv2d_16[0][0]                  \n",
      "                                                                 add_2[0][0]                      \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_17 (Conv2D)              (None, 7, 7, 64)     36928       add_3[0][0]                      \n",
      "__________________________________________________________________________________________________\n",
      "global_average_pooling2d_1 (Glo (None, 64)           0           conv2d_17[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "dense_5 (Dense)                 (None, 256)          16640       global_average_pooling2d_1[0][0] \n",
      "__________________________________________________________________________________________________\n",
      "dropout_1 (Dropout)             (None, 256)          0           dense_5[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "dense_6 (Dense)                 (None, 10)           2570        dropout_1[0][0]                  \n",
      "==================================================================================================\n",
      "Total params: 223,242\n",
      "Trainable params: 223,242\n",
      "Non-trainable params: 0\n",
      "__________________________________________________________________________________________________\n",
      "Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz\n"
     ]
    }
   ],
   "source": [
    "inputs = keras.Input(shape=(32, 32, 3), name='img')\n",
    "x = layers.Conv2D(32, 3, activation='relu')(inputs)\n",
    "x = layers.Conv2D(64, 3, activation='relu')(x)\n",
    "block_1_output = layers.MaxPooling2D(3)(x)\n",
    "\n",
    "x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)\n",
    "x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)\n",
    "block_2_output = layers.add([x, block_1_output])\n",
    "\n",
    "x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)\n",
    "x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)\n",
    "block_3_output = layers.add([x, block_2_output])\n",
    "\n",
    "x = layers.Conv2D(64, 3, activation='relu')(block_3_output)\n",
    "x = layers.GlobalAveragePooling2D()(x)\n",
    "x = layers.Dense(256, activation='relu')(x)\n",
    "x = layers.Dropout(0.5)(x)\n",
    "outputs = layers.Dense(10)(x)\n",
    "\n",
    "model = keras.Model(inputs, outputs, name='toy_resnet')\n",
    "model.summary()\n",
    "\n",
    "(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()\n",
    "\n",
    "x_train = x_train.astype('float32') / 255.\n",
    "x_test = x_test.astype('float32') / 255.\n",
    "y_train = keras.utils.to_categorical(y_train, 10)\n",
    "y_test = keras.utils.to_categorical(y_test, 10)\n",
    "\n",
    "model.compile(optimizer=keras.optimizers.RMSprop(1e-3),\n",
    "              loss=keras.losses.CategoricalCrossentropy(from_logits=True),\n",
    "              metrics=['acc'])\n",
    "\n",
    "model.fit(x_train, y_train,\n",
    "          batch_size=64,\n",
    "          epochs=1,\n",
    "          validation_split=0.2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Shared layers\n",
     "Another good use for the functional API is models that use shared layers.\n",
     "\n",
     "Shared layers are layer instances that are reused multiple times in the same model: they learn features that correspond to multiple paths in the graph of layers. Shared layers are often used to encode inputs from similar spaces (say, two different pieces of text with similar vocabularies). They enable sharing of information across these different inputs, and make it possible to train such a model on less data.\n",
     "\n",
     "To share a layer in the functional API, call the same layer instance multiple times. For instance, here's an Embedding layer shared across two different text inputs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "shared_embedding = layers.Embedding(1000, 128)\n",
    "\n",
    "text_input_a = keras.Input(shape=(None,), dtype='int32')\n",
    "\n",
    "text_input_b = keras.Input(shape=(None,), dtype='int32')\n",
    "\n",
    "encoded_input_a = shared_embedding(text_input_a)\n",
    "encoded_input_b = shared_embedding(text_input_b)"
   ]
  },
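  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A small check (assumed example, not from the original guide): because both\n",
    "# branches call the same layer instance, a model over both inputs contains a\n",
    "# single 1000 x 128 embedding matrix rather than two separate ones.\n",
    "shared_model = keras.Model([text_input_a, text_input_b],\n",
    "                           [encoded_input_a, encoded_input_b])\n",
    "print(len(shared_embedding.weights))  # 1: one weight matrix shared by both calls"
   ]
  },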
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Extract and reuse nodes in the graph of layers\n",
     "Because the graph of layers you are manipulating is a static data structure, it can be accessed and inspected. This is how you are able to plot functional models as images, for example.\n",
     "\n",
     "This also means that you can access the activations of intermediate layers (\"nodes\" in the graph) and reuse them elsewhere, which is very useful for something like feature extraction.\n",
     "\n",
     "Let's look at an example. This is a VGG19 model with weights pretrained on ImageNet:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "vgg19 = tf.keras.applications.VGG19()\n",
     "\n",
     "# These are the intermediate activations of the model, obtained by querying the graph data structure:\n",
     "features_list = [layer.output for layer in vgg19.layers]\n",
     "\n",
     "# Use these features to create a new feature-extraction model that returns the values of the intermediate layer activations:\n",
     "feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)\n",
     "\n",
     "img = np.random.random((1, 224, 224, 3)).astype('float32')\n",
     "extracted_features = feat_extraction_model(img)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Extend the API using custom layers\n",
     "tf.keras includes a wide range of built-in layers, for example:\n",
     "1. Convolutional layers: Conv1D, Conv2D, Conv3D, Conv2DTranspose\n",
     "2. Pooling layers: MaxPooling1D, MaxPooling2D, MaxPooling3D, AveragePooling1D\n",
     "3. RNN layers: GRU, LSTM, ConvLSTM2D\n",
     "4. Others: BatchNormalization, Dropout, Embedding, and so on\n",
     "\n",
     "If you don't find what you need, you can extend the API by creating your own layers. All layers subclass the Layer class and implement:\n",
     "1. a call method, which specifies the computation done by the layer\n",
     "2. a build method, which creates the weights of the layer (this is just a style convention; you can also create weights in __init__)\n",
     "\n",
     "To learn more about creating layers from scratch, read the [custom layers and models](https://tensorflow.google.cn/guide/keras/custom_layers_and_models) guide."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CustomDense(layers.Layer):\n",
    "  def __init__(self, units=32):\n",
    "    super(CustomDense, self).__init__()\n",
    "    self.units = units\n",
    "\n",
    "  def build(self, input_shape):\n",
    "    self.w = self.add_weight(shape=(input_shape[-1], self.units),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "    self.b = self.add_weight(shape=(self.units,),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return tf.matmul(inputs, self.w) + self.b\n",
    "\n",
    "\n",
    "inputs = keras.Input((4,))\n",
    "outputs = CustomDense(10)(inputs)\n",
    "\n",
    "model = keras.Model(inputs, outputs)\n",
    "\n",
     "# To support serialization in your custom layer, define a get_config method\n",
     "# that returns the constructor arguments of the layer instance:\n",
    "class CustomDense(layers.Layer):\n",
    "\n",
    "  def __init__(self, units=32):\n",
    "    super(CustomDense, self).__init__()\n",
    "    self.units = units\n",
    "\n",
    "  def build(self, input_shape):\n",
    "    self.w = self.add_weight(shape=(input_shape[-1], self.units),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "    self.b = self.add_weight(shape=(self.units,),\n",
    "                             initializer='random_normal',\n",
    "                             trainable=True)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    return tf.matmul(inputs, self.w) + self.b\n",
    "\n",
    "  def get_config(self):\n",
    "    return {'units': self.units}\n",
    "\n",
    "\n",
    "inputs = keras.Input((4,))\n",
    "outputs = CustomDense(10)(inputs)\n",
    "\n",
    "model = keras.Model(inputs, outputs)\n",
    "config = model.get_config()\n",
    "\n",
    "new_model = keras.Model.from_config(\n",
    "    config, custom_objects={'CustomDense': CustomDense})"
   ]
  },
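  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick sanity check (assumed follow-up, not from the original guide): the\n",
    "# model rebuilt from its config has the same architecture and is immediately\n",
    "# callable, mapping a batch of 4-dimensional inputs to 10-dimensional outputs.\n",
    "print(new_model(tf.ones((2, 4))).shape)  # (2, 10)"
   ]
  },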
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Considerations when using the functional API\n",
     "When should you use the Keras functional API to create a new model, and when should you subclass the Model class directly? In general, the functional API is higher-level, easier, and safer, and has a number of features that subclassed models do not support. However, model subclassing provides greater flexibility when building models that are not easily expressible as directed acyclic graphs of layers. For example, you could not implement a Tree-RNN with the functional API; you would have to subclass Model directly.\n",
     "\n",
     "For an in-depth look at the differences between the functional API and model subclassing, read [What are Symbolic and Imperative APIs in TensorFlow 2.0?](https://blog.tensorflow.org/2019/01/what-are-symbolic-and-imperative-apis.html).\n",
     "### Functional API strengths\n",
     "The following properties also hold for Sequential models (which are also data structures), but not for subclassed models (which are Python bytecode, not data structures).\n",
     "\n",
     "**Less verbose**: there is no `super(MyClass, self).__init__(...)`, no `def call(self, ...)`, etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "inputs = keras.Input(shape=(32,))\n",
    "x = layers.Dense(64, activation='relu')(inputs)\n",
    "outputs = layers.Dense(10)(x)\n",
    "mlp = keras.Model(inputs, outputs)\n",
    "\n",
     "# The subclassed version:\n",
    "class MLP(keras.Model):\n",
    "\n",
    "  def __init__(self, **kwargs):\n",
    "    super(MLP, self).__init__(**kwargs)\n",
    "    self.dense_1 = layers.Dense(64, activation='relu')\n",
    "    self.dense_2 = layers.Dense(10)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    x = self.dense_1(inputs)\n",
    "    return self.dense_2(x)\n",
    "\n",
    "mlp = MLP()\n",
    "_ = mlp(tf.zeros((1, 32)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Model validation at definition time**: in the functional API, the input specification (shape and dtype) is created in advance (via Input). Every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it raises a helpful error message if not. This guarantees that any model you can build with the functional API will run. All debugging (other than convergence-related debugging) happens statically during model construction, not at execution time. This is similar to type checking in a compiler.\n",
     "\n",
     "**A functional model is plottable and inspectable**: you can plot the model as a graph, and you can easily access intermediate nodes in this graph. For example, to extract and reuse the activations of intermediate layers (as shown in a previous example):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "features_list = [layer.output for layer in vgg19.layers]\n",
    "feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)"
   ]
  },
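  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch (assumed example, not from the original guide) of the\n",
    "# definition-time validation described above: Conv2D expects 4-D input, so\n",
    "# calling it on a 2-D Input raises an informative error while the graph is\n",
    "# being built, before any data is seen.\n",
    "try:\n",
    "  layers.Conv2D(32, 3)(keras.Input(shape=(32,)))\n",
    "except ValueError as e:\n",
    "  print('Caught at model-definition time:', e)"
   ]
  },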
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**A functional model can be serialized or cloned**: because a functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file, allowing you to recreate the exact same model without having access to any of the original code. For details, see the [saving and serialization guide](https://tensorflow.google.cn/guide/keras/save_and_serialize)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Functional API weaknesses\n",
     "**It does not support dynamic architectures**: the functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all: for instance, recursive networks or Tree-RNNs do not follow this assumption and cannot be implemented in the functional API.\n",
     "\n",
     "**Sometimes you have to write everything from scratch**: when writing advanced architectures, you may want to do things that are outside the scope of \"defining a DAG of layers\": for instance, you may need model subclassing to expose multiple custom training and inference methods on your model instance."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Mix-and-match different API styles\n",
     "Choosing between the functional API and model subclassing isn't a binary decision that restricts you to one category of models. All models in the tf.keras API can interact with each other, whether they're Sequential models, functional models, or subclassed models written from scratch.\n",
     "\n",
     "You can always use a functional or Sequential model as part of a subclassed model or layer:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "units = 32\n",
    "timesteps = 10\n",
    "input_dim = 5\n",
    "\n",
    "inputs = keras.Input((None, units))\n",
    "x = layers.GlobalAveragePooling1D()(inputs)\n",
    "outputs = layers.Dense(1)(x)\n",
    "model = keras.Model(inputs, outputs)\n",
    "\n",
    "\n",
    "class CustomRNN(layers.Layer):\n",
    "  def __init__(self):\n",
    "    super(CustomRNN, self).__init__()\n",
    "    self.units = units\n",
    "    self.projection_1 = layers.Dense(units=units, activation='tanh')\n",
    "    self.projection_2 = layers.Dense(units=units, activation='tanh')\n",
    "    self.classifier = model\n",
    "\n",
    "  def call(self, inputs):\n",
    "    outputs = []\n",
    "    state = tf.zeros(shape=(inputs.shape[0], self.units))\n",
    "    for t in range(inputs.shape[1]):\n",
    "      x = inputs[:, t, :]\n",
    "      h = self.projection_1(x)\n",
    "      y = h + self.projection_2(state)\n",
    "      state = y\n",
    "      outputs.append(y)\n",
    "    features = tf.stack(outputs, axis=1)\n",
    "    print(features.shape)\n",
    "    return self.classifier(features)\n",
    "\n",
    "\n",
    "rnn_model = CustomRNN()\n",
    "_ = rnn_model(tf.zeros((1, timesteps, input_dim)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "You can use any subclassed layer or model in the functional API, as long as it implements a call method that follows one of the following patterns:\n",
     "1. call(self, inputs, **kwargs), where inputs is a tensor or a nested structure of tensors (e.g. a list of tensors), and **kwargs are non-tensor (non-input) arguments.\n",
     "2. call(self, inputs, training=None, **kwargs), where training is a boolean indicating whether the layer should run in training mode or inference mode.\n",
     "3. call(self, inputs, mask=None, **kwargs), where mask is a boolean mask tensor.\n",
     "4. call(self, inputs, training=None, mask=None, **kwargs), to have both masking and training-specific behavior at the same time.\n",
     "\n",
     "Additionally, if you implement the get_config method on your custom layer or model, the functional models you create will still be serializable and cloneable.\n",
     "\n",
     "Here's a quick example of a custom RNN, written from scratch, being used in a functional model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "units = 32\n",
    "timesteps = 10\n",
    "input_dim = 5\n",
    "batch_size = 16\n",
    "\n",
    "\n",
    "class CustomRNN(layers.Layer):\n",
    "  def __init__(self):\n",
    "    super(CustomRNN, self).__init__()\n",
    "    self.units = units\n",
    "    self.projection_1 = layers.Dense(units=units, activation='tanh')\n",
    "    self.projection_2 = layers.Dense(units=units, activation='tanh')\n",
    "    self.classifier = layers.Dense(1)\n",
    "\n",
    "  def call(self, inputs):\n",
    "    outputs = []\n",
    "    state = tf.zeros(shape=(inputs.shape[0], self.units))\n",
    "    for t in range(inputs.shape[1]):\n",
    "      x = inputs[:, t, :]\n",
    "      h = self.projection_1(x)\n",
    "      y = h + self.projection_2(state)\n",
    "      state = y\n",
    "      outputs.append(y)\n",
    "    features = tf.stack(outputs, axis=1)\n",
    "    return self.classifier(features)\n",
    "\n",
    "\n",
    "inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))\n",
    "x = layers.Conv1D(32, 3)(inputs)\n",
    "outputs = CustomRNN()(x)\n",
    "\n",
    "model = keras.Model(inputs, outputs)\n",
    "\n",
    "rnn_model = CustomRNN()\n",
    "_ = rnn_model(tf.zeros((1, 10, 5)))"
   ]
  }
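  ,
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A toy layer (assumed example, not from the original guide) following call\n",
    "# pattern 2 above: its behavior switches on the `training` flag, much like\n",
    "# Dropout does.\n",
    "class AddNoise(layers.Layer):\n",
    "  def call(self, inputs, training=None):\n",
    "    if training:\n",
    "      return inputs + tf.random.normal(tf.shape(inputs))\n",
    "    return inputs\n",
    "\n",
    "layer = AddNoise()\n",
    "x = tf.ones((2, 4))\n",
    "print(tf.reduce_all(layer(x, training=False) == x).numpy())  # True (unchanged in inference mode)"
   ]
  }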
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
