{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Recurrent Neural Networks (RNN) with Keras\n",
    "Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language.\n",
    "\n",
    "Schematically, an RNN layer uses a for loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.\n",
    "\n",
    "The Keras RNN API is designed with a focus on:\n",
    "1. Ease of use: the built-in tf.keras.layers.RNN, tf.keras.layers.LSTM and tf.keras.layers.GRU layers enable you to quickly build recurrent models without having to make difficult configuration choices.\n",
    "2. Ease of customization: you can also define your own RNN cell layer (the inner part of the for loop) with custom behavior, and use it with the generic tf.keras.layers.RNN layer (the for loop itself). This allows you to quickly prototype different research ideas in a flexible way with minimal code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import collections\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "from tensorflow.keras import layers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Build a simple model\n",
    "There are three built-in RNN layers in Keras:\n",
    "1. [tf.keras.layers.SimpleRNN](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/SimpleRNN): a fully-connected RNN where the output from the previous timestep is fed to the next timestep.\n",
    "2. [tf.keras.layers.GRU](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/GRU): first proposed in \"Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation\".\n",
    "3. [tf.keras.layers.LSTM](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/LSTM): first proposed in \"Long Short-Term Memory\".\n",
    "\n",
    "In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU.\n",
    "\n",
    "The example below shows a Sequential model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using an LSTM layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "embedding (Embedding)        (None, None, 64)          64000     \n",
      "_________________________________________________________________\n",
      "lstm (LSTM)                  (None, 128)               98816     \n",
      "_________________________________________________________________\n",
      "dense (Dense)                (None, 10)                1290      \n",
      "=================================================================\n",
      "Total params: 164,106\n",
      "Trainable params: 164,106\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model = tf.keras.Sequential()\n",
    "# Add an Embedding layer expecting an input vocab of size 1000, with an output embedding dimension of 64\n",
    "model.add(layers.Embedding(input_dim=1000, output_dim=64))\n",
    "\n",
    "# Add an LSTM layer with 128 internal units\n",
    "model.add(layers.LSTM(128))\n",
    "\n",
    "# Add a Dense layer with 10 units\n",
    "model.add(layers.Dense(10))\n",
    "\n",
    "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Outputs and states\n",
    "By default, the output of an RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is (batch_size, units), where units corresponds to the units argument passed to the layer's constructor.\n",
    "\n",
    "An RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample) if you set return_sequences=True. The shape of this output is (batch_size, timesteps, units)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_1\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "embedding_1 (Embedding)      (None, None, 64)          64000     \n",
      "_________________________________________________________________\n",
      "gru (GRU)                    (None, None, 256)         247296    \n",
      "_________________________________________________________________\n",
      "simple_rnn (SimpleRNN)       (None, 128)               49280     \n",
      "_________________________________________________________________\n",
      "dense_1 (Dense)              (None, 10)                1290      \n",
      "=================================================================\n",
      "Total params: 361,866\n",
      "Trainable params: 361,866\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model = tf.keras.Sequential()\n",
    "model.add(layers.Embedding(input_dim=1000, output_dim=64))\n",
    "\n",
    "# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)\n",
    "model.add(layers.GRU(256, return_sequences=True))\n",
    "\n",
    "# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)\n",
    "model.add(layers.SimpleRNN(128))\n",
    "\n",
    "model.add(layers.Dense(10))\n",
    "\n",
    "model.summary() "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In addition, an RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or to initialize another RNN. This setting is commonly used in encoder-decoder sequence-to-sequence models, where the encoder's final state is used as the initial state of the decoder. To configure an RNN layer to return its internal state, set return_state=True when creating the layer.\n",
    "\n",
    "Note that LSTM has 2 state tensors, but GRU only has 1. To configure the initial state of the layer, just call the layer with the additional keyword argument initial_state.\n",
    "\n",
    "Note that the shape of the state needs to match the unit size of the layer, as in the example below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"model\"\n",
      "__________________________________________________________________________________________________\n",
      "Layer (type)                    Output Shape         Param #     Connected to                     \n",
      "==================================================================================================\n",
      "input_1 (InputLayer)            [(None, None)]       0                                            \n",
      "__________________________________________________________________________________________________\n",
      "input_2 (InputLayer)            [(None, None)]       0                                            \n",
      "__________________________________________________________________________________________________\n",
      "embedding_2 (Embedding)         (None, None, 64)     64000       input_1[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "embedding_3 (Embedding)         (None, None, 64)     128000      input_2[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "encoder (LSTM)                  [(None, 64), (None,  33024       embedding_2[0][0]                \n",
      "__________________________________________________________________________________________________\n",
      "decoder (LSTM)                  (None, 64)           33024       embedding_3[0][0]                \n",
      "                                                                 encoder[0][1]                    \n",
      "                                                                 encoder[0][2]                    \n",
      "__________________________________________________________________________________________________\n",
      "dense_2 (Dense)                 (None, 10)           650         decoder[0][0]                    \n",
      "==================================================================================================\n",
      "Total params: 258,698\n",
      "Trainable params: 258,698\n",
      "Non-trainable params: 0\n",
      "__________________________________________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "encoder_vocab = 1000\n",
    "decoder_vocab = 2000\n",
    "\n",
    "encoder_input = layers.Input(shape=(None, ))\n",
    "encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(encoder_input)\n",
    "\n",
    "# Return states in addition to the output\n",
    "output, state_h, state_c = layers.LSTM(\n",
    "    64, return_state=True, name='encoder')(encoder_embedded)\n",
    "encoder_state = [state_h, state_c]\n",
    "\n",
    "decoder_input = layers.Input(shape=(None, ))\n",
    "decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(decoder_input)\n",
    "\n",
    "# Pass the 2 states to a new LSTM layer as its initial state\n",
    "decoder_output = layers.LSTM(\n",
    "    64, name='decoder')(decoder_embedded, initial_state=encoder_state)\n",
    "output = layers.Dense(10)(decoder_output)\n",
    "\n",
    "model = tf.keras.Model([encoder_input, decoder_input], output)\n",
    "model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## RNN layers and RNN cells\n",
    "In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, the RNN cell only processes a single timestep.\n",
    "\n",
    "The cell is the inside of the for loop of an RNN layer. Wrapping a cell inside a tf.keras.layers.RNN layer gives you a layer capable of processing batches of sequences, e.g. RNN(LSTMCell(10)).\n",
    "\n",
    "Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in an RNN layer. However, using the built-in GRU and LSTM layers enables the use of CuDNN, and you may see better performance.\n",
    "\n",
    "There are three built-in RNN cells, each of them corresponding to the matching RNN layer:\n",
    "1. [tf.keras.layers.SimpleRNNCell](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/SimpleRNNCell) corresponds to the SimpleRNN layer.\n",
    "2. [tf.keras.layers.GRUCell](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/GRUCell) corresponds to the GRU layer.\n",
    "3. [tf.keras.layers.LSTMCell](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/LSTMCell) corresponds to the LSTM layer.\n",
    "\n",
    "The cell abstraction, together with the generic [tf.keras.layers.RNN](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/RNN) class, makes it very easy to implement custom RNN architectures for your research."
   ]
  },
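  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of the equivalence described above (not part of the original guide): wrapping an LSTMCell in the generic RNN layer yields the same output shape as the built-in LSTM layer, although only the built-in layer can run on the fused CuDNN kernel."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# RNN(LSTMCell(units)) is mathematically equivalent to LSTM(units),\n",
    "# but only the built-in LSTM layer can use the CuDNN kernel.\n",
    "x = tf.random.normal((2, 5, 8))  # (batch, timesteps, features)\n",
    "\n",
    "wrapped = layers.RNN(layers.LSTMCell(16))\n",
    "builtin = layers.LSTM(16)\n",
    "\n",
    "print(wrapped(x).shape)  # (2, 16)\n",
    "print(builtin(x).shape)  # (2, 16)"
   ]
  },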
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Cross-batch statefulness\n",
    "When processing very long sequences (possibly infinite), you may want to use the pattern of cross-batch statefulness.\n",
    "\n",
    "Normally, the internal state of an RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample.\n",
    "\n",
    "If you have very long sequences though, it is useful to break them into shorter sequences, and to feed these shorter sequences sequentially into an RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it's only seeing one sub-sequence at a time.\n",
    "\n",
    "You can do this by setting stateful=True in the constructor. If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.\n",
    "```txt\n",
    "s1 = [t0, t1, ... t100]\n",
    "s2 = [t101, ... t201]\n",
    "...\n",
    "s16 = [t1501, ... t1547]\n",
    "```\n",
    "Then you would process it via:\n",
    "```txt\n",
    "lstm_layer = layers.LSTM(64, stateful=True)\n",
    "for s in sub_sequences:\n",
    "  output = lstm_layer(s)\n",
    "```\n",
    "When you want to clear the state, you can use layer.reset_states().\n",
    "> In this setup, sample i in a given batch is assumed to be the continuation of sample i in the previous batch. This means that all batches should contain the same number of samples (the batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100], the next batch should contain [sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200].\n",
    "\n",
    "Here is a complete example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "\n",
    "lstm_layer = layers.LSTM(64, stateful=True)\n",
    "output = lstm_layer(paragraph1)\n",
    "output = lstm_layer(paragraph2)\n",
    "output = lstm_layer(paragraph3)\n",
    "\n",
    "# reset_states() will reset the cached state to the original initial_state. If no initial_state was provided, the zero-state will be used by default.\n",
    "lstm_layer.reset_states()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### RNN state reuse\n",
    "The recorded states of an RNN layer are not included in layer.weights(). If you would like to reuse the state of an RNN layer, you can retrieve it via layer.states and use it as the initial state of a new layer through the Keras functional API, like new_layer(inputs, initial_state=layer.states), or through model subclassing.\n",
    "\n",
    "Note that a Sequential model cannot be used in this case, since it only supports layers with a single input and output; the extra input of initial states makes it impossible to use here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n",
    "\n",
    "lstm_layer = layers.LSTM(64, stateful=True)\n",
    "output = lstm_layer(paragraph1)\n",
    "output = lstm_layer(paragraph2)\n",
    "\n",
    "existing_state = lstm_layer.states\n",
    "\n",
    "new_lstm_layer = layers.LSTM(64)\n",
    "new_output = new_lstm_layer(paragraph3, initial_state=existing_state)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Bidirectional RNNs\n",
    "For sequences other than time series (e.g. text), it is often the case that an RNN model performs better if it not only processes the sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only the words that come before it.\n",
    "\n",
    "Keras provides an easy API for building such bidirectional RNNs: the [tf.keras.layers.Bidirectional](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/Bidirectional) wrapper."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_2\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "bidirectional (Bidirectional (None, 5, 128)            38400     \n",
      "_________________________________________________________________\n",
      "bidirectional_1 (Bidirection (None, 64)                41216     \n",
      "_________________________________________________________________\n",
      "dense_3 (Dense)              (None, 10)                650       \n",
      "=================================================================\n",
      "Total params: 80,266\n",
      "Trainable params: 80,266\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "model = tf.keras.Sequential()\n",
    "\n",
    "model.add(layers.Bidirectional(layers.LSTM(64, return_sequences=True), \n",
    "                               input_shape=(5, 10)))\n",
    "model.add(layers.Bidirectional(layers.LSTM(32)))\n",
    "model.add(layers.Dense(10))\n",
    "\n",
    "model.summary()"
   ]
  },
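  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustrative sketch (not part of the original guide): by default the Bidirectional wrapper concatenates the forward and backward outputs (merge_mode='concat'), which is why the summary above shows 128 = 2 × 64 features; passing merge_mode='sum' keeps the unit count instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = tf.random.normal((2, 5, 10))\n",
    "\n",
    "# Default merge_mode='concat' doubles the feature dimension.\n",
    "concat_bi = layers.Bidirectional(layers.LSTM(64))\n",
    "# merge_mode='sum' adds the two outputs elementwise instead.\n",
    "sum_bi = layers.Bidirectional(layers.LSTM(64), merge_mode='sum')\n",
    "\n",
    "print(concat_bi(x).shape)  # (2, 128)\n",
    "print(sum_bi(x).shape)  # (2, 64)"
   ]
  },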
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, Bidirectional will copy the RNN layer passed in, and flip the go_backwards field of the newly copied layer, so that it will process the inputs in reverse order.\n",
    "\n",
    "The output of the Bidirectional RNN will be, by default, the concatenation of the forward layer output and the backward layer output (which is why the summary above shows 128 = 2 × 64 features). If you need a different merging behavior, e.g. summation, change the merge_mode parameter in the Bidirectional wrapper constructor. For more details about Bidirectional, please check the [API docs](https://tensorflow.google.cn/versions/r2.0/api_docs/python/tf/keras/layers/Bidirectional).\n",
    "\n",
    "## Performance optimization and CuDNN kernels in TensorFlow 2\n",
    "In TensorFlow 2, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.\n",
    "\n",
    "Since the CuDNN kernel is built with certain assumptions, the layer will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers. E.g.:\n",
    "1. Changing the activation function from tanh to something else.\n",
    "2. Changing the recurrent_activation function from sigmoid to something else.\n",
    "3. Using recurrent_dropout > 0.\n",
    "4. Setting unroll to True, which forces LSTM/GRU to decompose the inner tf.while_loop into an unrolled for loop.\n",
    "5. Setting use_bias to False.\n",
    "6. Using masking when the input data is not strictly right-padded (if the mask corresponds to strictly right-padded data, CuDNN can still be used; this is the most common case).\n",
    "\n",
    "For the detailed list of constraints, please see the documentation for the [LSTM](https://tensorflow.google.cn/versions/r2.0/api_docs/python/tf/keras/layers/LSTM) and [GRU](https://tensorflow.google.cn/versions/r2.0/api_docs/python/tf/keras/layers/GRU) layers.\n",
    "\n",
    "### Using CuDNN kernels when available\n",
    "Let's build a simple LSTM model to demonstrate the performance difference.\n",
    "\n",
    "We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 60000 samples, validate on 10000 samples\n",
      "Epoch 1/5\n",
      "60000/60000 [==============================] - 19s 314us/sample - loss: 0.9553 - accuracy: 0.7002 - val_loss: 0.5384 - val_accuracy: 0.8317\n",
      "Epoch 2/5\n",
      "60000/60000 [==============================] - 16s 265us/sample - loss: 0.3747 - accuracy: 0.8889 - val_loss: 0.4105 - val_accuracy: 0.8668\n",
      "Epoch 3/5\n",
      "60000/60000 [==============================] - 15s 254us/sample - loss: 0.2538 - accuracy: 0.9244 - val_loss: 0.2263 - val_accuracy: 0.9285\n",
      "Epoch 4/5\n",
      "60000/60000 [==============================] - 15s 252us/sample - loss: 0.2051 - accuracy: 0.9377 - val_loss: 0.2066 - val_accuracy: 0.9336\n",
      "Epoch 5/5\n",
      "60000/60000 [==============================] - 15s 257us/sample - loss: 0.1778 - accuracy: 0.9467 - val_loss: 0.1497 - val_accuracy: 0.9512\n",
      "Train on 60000 samples, validate on 10000 samples\n",
      "60000/60000 [==============================] - 17s 281us/sample - loss: 0.1547 - accuracy: 0.9526 - val_loss: 0.1578 - val_accuracy: 0.9507\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x256fe0612c8>"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "batch_size = 64\n",
    "# Each MNIST image batch is a tensor of shape (batch_size, 28, 28). Each input sequence is of size (28, 28) (height is treated like time).\n",
    "input_dim = 28\n",
    "\n",
    "units = 64\n",
    "output_size = 10  # labels are from 0 to 9\n",
    "\n",
    "# Build the RNN model\n",
    "def build_model(allow_cudnn_kernel=True):\n",
    "  # CuDNN is only available at the layer level, and not at the cell level.\n",
    "  # This means \"LSTM(units)\" will use the CuDNN kernel, while RNN(LSTMCell(units)) will run on the non-CuDNN kernel.\n",
    "  if allow_cudnn_kernel:\n",
    "    # An LSTM layer with default options uses CuDNN\n",
    "    lstm_layer = tf.keras.layers.LSTM(units, input_shape=(None, input_dim))\n",
    "  else:\n",
    "    # Wrapping an LSTMCell in an RNN layer will not use CuDNN\n",
    "    lstm_layer = tf.keras.layers.RNN(\n",
    "        tf.keras.layers.LSTMCell(units),\n",
    "        input_shape=(None, input_dim))\n",
    "  model = tf.keras.models.Sequential([\n",
    "      lstm_layer,\n",
    "      tf.keras.layers.BatchNormalization(),\n",
    "      tf.keras.layers.Dense(output_size)]\n",
    "  )\n",
    "  return model\n",
    "\n",
    "\"\"\"Load the MNIST dataset\"\"\"\n",
    "mnist = tf.keras.datasets.mnist\n",
    "\n",
    "(x_train, y_train), (x_test, y_test) = mnist.load_data()\n",
    "x_train, x_test = x_train / 255.0, x_test / 255.0\n",
    "sample, sample_label = x_train[0], y_train[0]\n",
    "\n",
    "\"\"\"Create a model instance and compile it\"\"\"\n",
    "\"\"\"\n",
    "We choose sparse_categorical_crossentropy as the loss function for the model.\n",
    "The output of the model has shape [batch_size, 10].\n",
    "The target for the model is an integer vector, each of the integers is in the range of 0 to 9.\n",
    "\"\"\"\n",
    "model = build_model(allow_cudnn_kernel=True)\n",
    "\n",
    "model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), \n",
    "              optimizer='sgd',\n",
    "              metrics=['accuracy'])\n",
    "model.fit(x_train, y_train,\n",
    "          validation_data=(x_test, y_test),\n",
    "          batch_size=batch_size,\n",
    "          epochs=5)\n",
    "\n",
    "\"\"\"Build a new model without the CuDNN kernel\"\"\"\n",
    "slow_model = build_model(allow_cudnn_kernel=False)\n",
    "slow_model.set_weights(model.get_weights())\n",
    "slow_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), \n",
    "                   optimizer='sgd', \n",
    "                   metrics=['accuracy'])\n",
    "slow_model.fit(x_train, y_train, \n",
    "               validation_data=(x_test, y_test), \n",
    "               batch_size=batch_size,\n",
    "               epochs=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## RNNs with list/dict inputs, or nested inputs\n",
    "Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be: [batch, timestep, {\"video\": [height, width, channel], \"audio\": [frequency]}].\n",
    "\n",
    "In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be: [batch, timestep, {\"location\": [x, y], \"pressure\": [force]}].\n",
    "\n",
    "The following code gives an example of how to build a custom RNN cell that accepts such structured inputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 6400 samples\n",
      "1024/6400 [===>..........................] - ETA: 2:29 - loss: 0.6439 - rnn_1_loss: 0.2394 - rnn_1_1_loss: 0.4045 - rnn_1_accuracy: 0.0869 - rnn_1_1_accuracy: 0.0318"
     ]
    }
   ],
   "source": [
    "# Define a custom cell that supports nested input/output\n",
    "NestedInput = collections.namedtuple('NestedInput', ['feature1', 'feature2'])\n",
    "NestedState = collections.namedtuple('NestedState', ['state1', 'state2'])\n",
    "\n",
    "class NestedCell(tf.keras.layers.Layer):\n",
    "\n",
    "  def __init__(self, unit_1, unit_2, unit_3, **kwargs):\n",
    "    self.unit_1 = unit_1\n",
    "    self.unit_2 = unit_2\n",
    "    self.unit_3 = unit_3\n",
    "    self.state_size = NestedState(state1=unit_1, \n",
    "                                  state2=tf.TensorShape([unit_2, unit_3]))\n",
    "    self.output_size = (unit_1, tf.TensorShape([unit_2, unit_3]))\n",
    "    super(NestedCell, self).__init__(**kwargs)\n",
    "\n",
    "  def build(self, input_shapes):\n",
    "    # expect input_shapes to contain 2 items: [(batch, i1), (batch, i2, i3)]\n",
    "    input_1 = input_shapes.feature1[1]\n",
    "    input_2, input_3 = input_shapes.feature2[1:]\n",
    "\n",
    "    self.kernel_1 = self.add_weight(\n",
    "        shape=(input_1, self.unit_1), initializer='uniform', name='kernel_1')\n",
    "    self.kernel_2_3 = self.add_weight(\n",
    "        shape=(input_2, input_3, self.unit_2, self.unit_3),\n",
    "        initializer='uniform',\n",
    "        name='kernel_2_3')\n",
    "\n",
    "  def call(self, inputs, states):\n",
    "    # inputs should be in shape [(batch, input_1), (batch, input_2, input_3)]\n",
    "    # states should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]\n",
    "    input_1, input_2 = tf.nest.flatten(inputs)\n",
    "    s1, s2 = states\n",
    "\n",
    "    output_1 = tf.matmul(input_1, self.kernel_1)\n",
    "    output_2_3 = tf.einsum('bij,ijkl->bkl', input_2, self.kernel_2_3)\n",
    "    state_1 = s1 + output_1\n",
    "    state_2_3 = s2 + output_2_3\n",
    "\n",
    "    output = [output_1, output_2_3]\n",
    "    new_states = NestedState(state1=state_1, state2=state_2_3)\n",
    "\n",
    "    return output, new_states\n",
    "\n",
    "# Build an RNN model with nested input/output\n",
    "unit_1 = 10\n",
    "unit_2 = 20\n",
    "unit_3 = 30\n",
    "\n",
    "input_1 = 32\n",
    "input_2 = 64\n",
    "input_3 = 32\n",
    "batch_size = 64\n",
    "num_batch = 100\n",
    "timestep = 50\n",
    "\n",
    "cell = NestedCell(unit_1, unit_2, unit_3)\n",
    "rnn = tf.keras.layers.RNN(cell)\n",
    "\n",
    "inp_1 = tf.keras.Input((None, input_1))\n",
    "inp_2 = tf.keras.Input((None, input_2, input_3))\n",
    "\n",
    "outputs = rnn(NestedInput(feature1=inp_1, feature2=inp_2))\n",
    "\n",
    "model = tf.keras.models.Model([inp_1, inp_2], outputs)\n",
    "\n",
    "model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])\n",
    "\n",
    "# Train the model with randomly generated data\n",
    "input_1_data = np.random.random((batch_size * num_batch, timestep, input_1))\n",
    "input_2_data = np.random.random((batch_size * num_batch, timestep, input_2, input_3))\n",
    "target_1_data = np.random.random((batch_size * num_batch, unit_1))\n",
    "target_2_data = np.random.random((batch_size * num_batch, unit_2, unit_3))\n",
    "input_data = [input_1_data, input_2_data]\n",
    "target_data = [target_1_data, target_2_data]\n",
    "\n",
    "model.fit(input_data, target_data, batch_size=batch_size)"
   ]
  },
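  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For contrast with the nested example above, here is a minimal custom cell sketch (the class name MinimalRNNCell is hypothetical): a cell only needs to define state_size, output_size, and a call method returning (output, new_states) for tf.keras.layers.RNN to iterate it over a sequence."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MinimalRNNCell(tf.keras.layers.Layer):\n",
    "  # A hypothetical minimal cell: new_state = tanh(inputs @ w + state @ u)\n",
    "\n",
    "  def __init__(self, units, **kwargs):\n",
    "    self.units = units\n",
    "    self.state_size = units\n",
    "    self.output_size = units\n",
    "    super(MinimalRNNCell, self).__init__(**kwargs)\n",
    "\n",
    "  def build(self, input_shape):\n",
    "    self.w = self.add_weight(shape=(input_shape[-1], self.units),\n",
    "                             initializer='uniform', name='w')\n",
    "    self.u = self.add_weight(shape=(self.units, self.units),\n",
    "                             initializer='uniform', name='u')\n",
    "\n",
    "  def call(self, inputs, states):\n",
    "    prev_state = states[0]\n",
    "    output = tf.tanh(tf.matmul(inputs, self.w) + tf.matmul(prev_state, self.u))\n",
    "    return output, [output]\n",
    "\n",
    "minimal_rnn = tf.keras.layers.RNN(MinimalRNNCell(4))\n",
    "print(minimal_rnn(tf.random.normal((2, 5, 3))).shape)  # (2, 4)"
   ]
  },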
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the Keras tf.keras.layers.RNN layer, you are only expected to define the math logic for an individual step within the sequence, and the tf.keras.layers.RNN layer will handle the sequence iteration for you. It's an incredibly powerful way to quickly prototype new kinds of RNNs (e.g. an LSTM variant).\n",
    "\n",
    "For more details, please visit the [API docs](https://tensorflow.google.cn/versions/r2.0/api_docs/python/tf/keras/layers/RNN)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6rc1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
