{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/chenyu313/TensorFlow-note/blob/main/1_%E4%BD%BF%E7%94%A8RNN%E7%94%9F%E6%88%90%E6%96%87%E6%9C%AC.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lFeOfkgKKnvK"
      },
      "source": [
        "## Text generation with an RNN\n",
        "将使用Andrej Karpathy的《循环神经网络的不可思议的有效性》中的莎士比亚（Shakespeare）作品数据集。给定来自该数据的字符序列(“Shakespear”)，训练一个模型来预测序列中的下一个字符(“e”)。通过反复调用模型可以生成更长的文本序列。\n",
        "\n",
        "虽然有些句子合乎语法，但大多数都没有意义。模型没有学习单词的意思，但是考虑:\n",
        "* 该模型是基于字符的。当训练开始时，模型不知道如何拼写英语单词，甚至不知道单词是文本的一个单位。\n",
        "* 输出的结构类似于一个积木块，文本块通常以说话人的名字开头，所有的大写字母与数据集相似。\n",
        "* 如下所示，该模型在小批量文本(每个文本100个字符)上进行训练，并且仍然能够生成具有连贯结构的更长的文本序列。"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WlHE1xMFKnvL"
      },
      "source": [
        "### 配置\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "SNY7nI5aKnvL"
      },
      "outputs": [],
      "source": [
        "import tensorflow as tf\n",
        "\n",
        "import numpy as np\n",
        "import os\n",
        "import time"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IZCfXEQ3KnvM"
      },
      "source": [
        "### 下载莎士比亚数据集\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "iv-sQudYKnvM",
        "outputId": "739f9398-1193-4ed2-8acc-fa78f01748a9"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt\n",
            "1115394/1115394 [==============================] - 0s 0us/step\n"
          ]
        }
      ],
      "source": [
        "path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Qp6zyX6-KnvN"
      },
      "source": [
        "### 阅读数据\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "uO12vOSBKnvN",
        "outputId": "c3e9c5d6-832e-42bd-c548-c801454efcbb"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Length of text: 1115394 characters\n"
          ]
        }
      ],
      "source": [
        "# Read, then decode for py2 compat.\n",
        "text = open(path_to_file, 'rb').read().decode(encoding='utf-8')\n",
        "# length of text is the number of characters in it\n",
        "print(f'Length of text: {len(text)} characters')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "UMmydQNmKnvN",
        "outputId": "2391d690-cc9c-4fd9-9721-7fcb9a6babb4"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "First Citizen:\n",
            "Before we proceed any further, hear me speak.\n",
            "\n",
            "All:\n",
            "Speak, speak.\n",
            "\n",
            "First Citizen:\n",
            "You are all resolved rather to die than to famish?\n",
            "\n",
            "All:\n",
            "Resolved. resolved.\n",
            "\n",
            "First Citizen:\n",
            "First, you know Caius Marcius is chief enemy to the people.\n",
            "\n"
          ]
        }
      ],
      "source": [
        "# 查看前250个字符\n",
        "print(text[:250])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "bUruL2xlKnvO",
        "outputId": "8d62ff33-4f67-40ac-f0ca-f1881a9a6eff"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "65 unique characters\n"
          ]
        }
      ],
      "source": [
        "# 文件中唯一的字符（将text装入set中，相当于计数文本中一共有多少个词（去重））\n",
        "vocab = sorted(set(text))\n",
        "print(f'{len(vocab)} unique characters')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I-EAuosGKnvO"
      },
      "source": [
        "### 处理数据\n",
        "\n",
        "#### 向量化文本\n",
        "在训练之前，您需要将字符串转换为数字表示形式。\n",
        " tf.keras.layers.StringLookup层可以将每个字符转换为数字ID。它只需要首先将文本拆分为标记。"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "pdulX6G6KnvO",
        "outputId": "e5f9fd93-133c-4d7b-f399-50865c4fe3bc"
      },
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<tf.RaggedTensor [[b'a', b'b', b'c', b'd', b'e', b'f', b'g'], [b'x', b'y', b'z']]>"
            ]
          },
          "metadata": {},
          "execution_count": 7
        }
      ],
      "source": [
        "example_texts = ['abcdefg', 'xyz']\n",
        "\n",
        "chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8')\n",
        "chars"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "iCpM1Im0KnvO"
      },
      "outputs": [],
      "source": [
        "# 现在创建tf.keras.layers.StringLookup层:\n",
        "ids_from_chars = tf.keras.layers.StringLookup(\n",
        "    vocabulary=list(vocab), mask_token=None)"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "它将tokens转换为字符id:"
      ],
      "metadata": {
        "id": "xRfULd3tLuSP"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "ids = ids_from_chars(chars)\n",
        "ids"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "MH7lagfBL0gT",
        "outputId": "06de7c4c-1ec3-4e8a-beba-51bc728ad6ff"
      },
      "execution_count": 9,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<tf.RaggedTensor [[40, 41, 42, 43, 44, 45, 46], [63, 64, 65]]>"
            ]
          },
          "metadata": {},
          "execution_count": 9
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "由于本教程的目标是生成文本，因此颠倒这种表示并从中恢复人类可读的字符串也很重要。为此，你可以使用tf.keras.layers.StringLookup(…,invert=True)。"
      ],
      "metadata": {
        "id": "sTSMhIOJMC0h"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "chars_from_ids = tf.keras.layers.StringLookup(\n",
        "    vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None)"
      ],
      "metadata": {
        "id": "al1drW66MGgM"
      },
      "execution_count": 10,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# 从id转变为字符\n",
        "chars = chars_from_ids(ids)\n",
        "chars"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "gn4TYpQ3Mduq",
        "outputId": "7d904d98-32cf-45f3-f0bb-23e05b13189e"
      },
      "execution_count": 11,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<tf.RaggedTensor [[b'a', b'b', b'c', b'd', b'e', b'f', b'g'], [b'x', b'y', b'z']]>"
            ]
          },
          "metadata": {},
          "execution_count": 11
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "你可以使用tf.strings.reduce_join将字符连接回字符串。"
      ],
      "metadata": {
        "id": "U5numVzhMp9d"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "tf.strings.reduce_join(chars, axis=-1).numpy()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "aRP9ew84MtrC",
        "outputId": "bd2bbdbd-f17e-4638-db10-3045573b5159"
      },
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "array([b'abcdefg', b'xyz'], dtype=object)"
            ]
          },
          "metadata": {},
          "execution_count": 12
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "def text_from_ids(ids):\n",
        "  return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)"
      ],
      "metadata": {
        "id": "f1x5KYMyMuvc"
      },
      "execution_count": 13,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### 预测任务\n",
        "给定一个字符，或一个字符序列，下一个最可能的字符是什么?这就是你训练模型执行的任务。模型的输入将是一个字符序列，您训练模型来预测输出——在每个时间步上的以下字符。\n",
        "\n",
        "由于rnn维持一种依赖于先前看到的元素的内部状态，给定到目前为止计算的所有字符，下一个字符是什么?"
      ],
      "metadata": {
        "id": "GENg1titM8b-"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### 创建训练实例和目标\n",
        "接下来将文本划分为示例序列。每个输入序列将包含来自文本的seq_length字符。\n",
        "\n",
        "对于每个输入序列，对应的目标包含相同长度的文本，只是向右移动了一个字符。\n",
        "\n",
        "所以将文本分成seq_length+1的块。例如，假设seq_length为4，我们的文本为“Hello”。输入序列是“Hell”，目标序列是“ello”。\n",
        "\n",
        "要做到这一点，首先使用tf.data.Dataset.from_tensor_slices函数将文本向量转换为字符索引流。"
      ],
      "metadata": {
        "id": "RUWyBc5qNd0J"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# 将字符转化为ids\n",
        "all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8'))\n",
        "all_ids"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "FrLGiL-8M3US",
        "outputId": "9824973b-5f31-4bca-9af9-ccb5dfa51734"
      },
      "execution_count": 14,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<tf.Tensor: shape=(1115394,), dtype=int64, numpy=array([19, 48, 57, ..., 46,  9,  1])>"
            ]
          },
          "metadata": {},
          "execution_count": 14
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)"
      ],
      "metadata": {
        "id": "jox8ywFnOQvv"
      },
      "execution_count": 15,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "for ids in ids_dataset.take(10):\n",
        "    print(chars_from_ids(ids).numpy().decode('utf-8'))"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "JNSj82liOUY0",
        "outputId": "83e56ed2-6551-422c-84cc-8ccf9e554b1c"
      },
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "F\n",
            "i\n",
            "r\n",
            "s\n",
            "t\n",
            " \n",
            "C\n",
            "i\n",
            "t\n",
            "i\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "seq_length = 100"
      ],
      "metadata": {
        "id": "nQpNtTswPBgy"
      },
      "execution_count": 17,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "批处理方法允许您轻松地将这些单个字符转换为所需大小的序列。"
      ],
      "metadata": {
        "id": "t10FFyoKPEyr"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "sequences = ids_dataset.batch(seq_length+1, drop_remainder=True)\n",
        "\n",
        "for seq in sequences.take(1):\n",
        "  print(chars_from_ids(seq))"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "bO1upgMPPGOZ",
        "outputId": "d5de7927-79c8-4d6c-b97c-95a5eadbf13b"
      },
      "execution_count": 18,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "tf.Tensor(\n",
            "[b'F' b'i' b'r' b's' b't' b' ' b'C' b'i' b't' b'i' b'z' b'e' b'n' b':'\n",
            " b'\\n' b'B' b'e' b'f' b'o' b'r' b'e' b' ' b'w' b'e' b' ' b'p' b'r' b'o'\n",
            " b'c' b'e' b'e' b'd' b' ' b'a' b'n' b'y' b' ' b'f' b'u' b'r' b't' b'h'\n",
            " b'e' b'r' b',' b' ' b'h' b'e' b'a' b'r' b' ' b'm' b'e' b' ' b's' b'p'\n",
            " b'e' b'a' b'k' b'.' b'\\n' b'\\n' b'A' b'l' b'l' b':' b'\\n' b'S' b'p' b'e'\n",
            " b'a' b'k' b',' b' ' b's' b'p' b'e' b'a' b'k' b'.' b'\\n' b'\\n' b'F' b'i'\n",
            " b'r' b's' b't' b' ' b'C' b'i' b't' b'i' b'z' b'e' b'n' b':' b'\\n' b'Y'\n",
            " b'o' b'u' b' '], shape=(101,), dtype=string)\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "将tokens连接回字符串，则更容易看到这是在做什么:"
      ],
      "metadata": {
        "id": "Vf-F2GN9P-ME"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "for seq in sequences.take(5):\n",
        "  print(text_from_ids(seq).numpy())"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ptZ1fpoCQAMQ",
        "outputId": "506b0244-d963-4cad-a850-18eb8715c0da"
      },
      "execution_count": 19,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "b'First Citizen:\\nBefore we proceed any further, hear me speak.\\n\\nAll:\\nSpeak, speak.\\n\\nFirst Citizen:\\nYou '\n",
            "b'are all resolved rather to die than to famish?\\n\\nAll:\\nResolved. resolved.\\n\\nFirst Citizen:\\nFirst, you k'\n",
            "b\"now Caius Marcius is chief enemy to the people.\\n\\nAll:\\nWe know't, we know't.\\n\\nFirst Citizen:\\nLet us ki\"\n",
            "b\"ll him, and we'll have corn at our own price.\\nIs't a verdict?\\n\\nAll:\\nNo more talking on't; let it be d\"\n",
            "b'one: away, away!\\n\\nSecond Citizen:\\nOne word, good citizens.\\n\\nFirst Citizen:\\nWe are accounted poor citi'\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "对于训练，你需要一个(输入，标签)对的数据集。其中input和label是序列。在每个时间步，输入是当前字符，标签是下一个字符。\n",
        "\n",
        "下面是一个函数，它将一个序列作为输入，复制并移动它以使每个时间步的输入和标签对齐:"
      ],
      "metadata": {
        "id": "i8iBmQIYQY2i"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def split_input_target(sequence):\n",
        "    input_text = sequence[:-1]\n",
        "    target_text = sequence[1:]\n",
        "    return input_text, target_text"
      ],
      "metadata": {
        "id": "7EhEwLBPQhzI"
      },
      "execution_count": 20,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# 相当于给你“Tensorflo\"，让你预测“ensorflow\"\n",
        "split_input_target(list(\"Tensorflow\"))"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "b5PX9mzkQoHW",
        "outputId": "a7f0be77-ce15-4af4-e576-99870312d98f"
      },
      "execution_count": 21,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "(['T', 'e', 'n', 's', 'o', 'r', 'f', 'l', 'o'],\n",
              " ['e', 'n', 's', 'o', 'r', 'f', 'l', 'o', 'w'])"
            ]
          },
          "metadata": {},
          "execution_count": 21
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "dataset = sequences.map(split_input_target)\n",
        "\n",
        "for input_example, target_example in dataset.take(1):\n",
        "    print(\"Input :\", text_from_ids(input_example).numpy())\n",
        "    print(\"Target:\", text_from_ids(target_example).numpy()) #这里后面是一个空格"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yXOSktztReTB",
        "outputId": "b6badb42-d11f-4239-b8b1-370ea39c0838"
      },
      "execution_count": 22,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Input : b'First Citizen:\\nBefore we proceed any further, hear me speak.\\n\\nAll:\\nSpeak, speak.\\n\\nFirst Citizen:\\nYou'\n",
            "Target: b'irst Citizen:\\nBefore we proceed any further, hear me speak.\\n\\nAll:\\nSpeak, speak.\\n\\nFirst Citizen:\\nYou '\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### 创建训练批次\n",
        "使用用了tf.data将文本分割成可管理的序列的数据。但是，在将这些数据输入模型之前，您需要对数据进行洗牌并将其打包成批。"
      ],
      "metadata": {
        "id": "IeaadDjhR_Ng"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Batch size\n",
        "BATCH_SIZE = 64\n",
        "\n",
        "# 打乱数据集的缓冲区大小\n",
        "# (TF数据被设计用于处理可能无限的序列，因此它不会试图打乱内存中的整个序列。相反，它维护一个缓冲区，在其中对元素进行洗牌).\n",
        "BUFFER_SIZE = 10000\n",
        "\n",
        "dataset = (\n",
        "    dataset\n",
        "    .shuffle(BUFFER_SIZE)\n",
        "    .batch(BATCH_SIZE, drop_remainder=True)\n",
        "    .prefetch(tf.data.experimental.AUTOTUNE))\n",
        "\n",
        "dataset"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "5NKesOSTR7SY",
        "outputId": "64bb6f40-e1b6-41a0-ba73-4c9c6eb6efa4"
      },
      "execution_count": 23,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<_PrefetchDataset element_spec=(TensorSpec(shape=(64, 100), dtype=tf.int64, name=None), TensorSpec(shape=(64, 100), dtype=tf.int64, name=None))>"
            ]
          },
          "metadata": {},
          "execution_count": 23
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 建模\n",
        "这个模型有三层:\n",
        "* tf.keras.layers.Embedding: 输入层。一个可训练的查找表，将每个字符id映射到具有embedding_dim维度的向量;\n",
        "* tf.keras.layers.GRU: 一种大小单位=rnn_units的RNN(你也可以在这里使用LSTM层)。\n",
        "* tf.keras.layers.Dense: 输出层，输出为vocab_size。它为词汇表中的每个字符输出一个logit。根据模型，这些是每个字符的对数似然。"
      ],
      "metadata": {
        "id": "a47I2Cf9Sv7j"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# StringLookup层中词汇表的长度\n",
        "vocab_size = len(ids_from_chars.get_vocabulary())\n",
        "\n",
        "# 嵌入维数\n",
        "embedding_dim = 256\n",
        "\n",
        "# RNN单元的数量\n",
        "rnn_units = 1024"
      ],
      "metadata": {
        "id": "FYWdBunVSWba"
      },
      "execution_count": 24,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "class MyModel(tf.keras.Model):\n",
        "  def __init__(self, vocab_size, embedding_dim, rnn_units):\n",
        "    super().__init__(self)\n",
        "    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n",
        "    self.gru = tf.keras.layers.GRU(rnn_units,\n",
        "                    return_sequences=True,\n",
        "                    return_state=True)\n",
        "    self.dense = tf.keras.layers.Dense(vocab_size)\n",
        "\n",
        "  def call(self, inputs, states=None, return_state=False, training=False):\n",
        "    x = inputs\n",
        "    x = self.embedding(x, training=training)\n",
        "    if states is None:\n",
        "      states = self.gru.get_initial_state(x)\n",
        "    x, states = self.gru(x, initial_state=states, training=training)\n",
        "    x = self.dense(x, training=training)\n",
        "\n",
        "    if return_state:\n",
        "      return x, states\n",
        "    else:\n",
        "      return x"
      ],
      "metadata": {
        "id": "t7WkCbtFVrme"
      },
      "execution_count": 25,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "model = MyModel(\n",
        "    vocab_size=vocab_size,\n",
        "    embedding_dim=embedding_dim,\n",
        "    rnn_units=rnn_units)"
      ],
      "metadata": {
        "id": "qlcGmDFOW0y_"
      },
      "execution_count": 26,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "对于每个字符，模型查找嵌入，以嵌入作为输入运行GRU一个时间步，并应用密集层生成预测下一个字符的对数似然的logits:\n",
        "\n",
        "![image.png]()"
      ],
      "metadata": {
        "id": "am60wa5dXBFF"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "现在运行模型，看看它的行为是否符合预期。\n",
        "\n",
        "首先检查输出的形状:"
      ],
      "metadata": {
        "id": "u_RlqVB3XlDQ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "for input_example_batch, target_example_batch in dataset.take(1):\n",
        "    example_batch_predictions = model(input_example_batch)\n",
        "    print(example_batch_predictions.shape, \"# (batch_size, sequence_length, vocab_size)\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "e1jloLFOXJqe",
        "outputId": "cb9751ff-fa54-43be-cff0-5b71ab5ae5e5"
      },
      "execution_count": 27,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "(64, 100, 66) # (batch_size, sequence_length, vocab_size)\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "model.summary()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "z68smF2JX4sR",
        "outputId": "cf9aab56-37c6-49d6-da12-59dcb9ae8f04"
      },
      "execution_count": 28,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Model: \"my_model\"\n",
            "_________________________________________________________________\n",
            " Layer (type)                Output Shape              Param #   \n",
            "=================================================================\n",
            " embedding (Embedding)       multiple                  16896     \n",
            "                                                                 \n",
            " gru (GRU)                   multiple                  3938304   \n",
            "                                                                 \n",
            " dense (Dense)               multiple                  67650     \n",
            "                                                                 \n",
            "=================================================================\n",
            "Total params: 4,022,850\n",
            "Trainable params: 4,022,850\n",
            "Non-trainable params: 0\n",
            "_________________________________________________________________\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "为了从模型中获得实际的预测，您需要从输出分布中采样，以获得实际的字符索引。这种分布是由字符词汇表上的logits定义的。\n",
        "\n",
        "注意:从这个分布中抽样很重要，因为取分布的argmax很容易使模型陷入循环。"
      ],
      "metadata": {
        "id": "uC3eG-uNYEIq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# 处理一个例子尝试一下\n",
        "sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)\n",
        "sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy()"
      ],
      "metadata": {
        "id": "_2iqVAHRX5FH"
      },
      "execution_count": 29,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# 在每个时间步，这将为我们提供下一个字符索引的预测:\n",
        "sampled_indices"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Fd8krn22YPmP",
        "outputId": "12e0b3b5-a310-41b8-c5bc-9f91b864bb02"
      },
      "execution_count": 30,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "array([16, 44, 19, 33, 21,  6, 61, 38, 20, 24, 55,  0, 35, 32, 51, 29,  5,\n",
              "       19, 41, 57, 21, 56, 63, 56, 47, 55, 53, 64, 53, 64, 57, 33, 48, 12,\n",
              "        2, 21,  0, 10, 23, 39, 10,  8,  8, 33, 64, 63, 62, 37, 34,  8, 11,\n",
              "       35, 64, 43, 48, 18, 32, 60, 58, 14, 44, 51,  3, 25, 40, 47, 33, 22,\n",
              "       58, 57, 42, 57, 37, 29,  5, 60, 21, 28, 10, 43, 25, 30, 59, 23, 13,\n",
              "       40, 40, 61, 50,  2, 12,  9,  2, 38, 42, 20, 36, 21, 24,  3])"
            ]
          },
          "metadata": {},
          "execution_count": 30
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "解码这些，看看这个未经训练的模型预测的文本:"
      ],
      "metadata": {
        "id": "wHxWC7PpYz14"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "print(\"Input:\\n\", text_from_ids(input_example_batch[0]).numpy())\n",
        "print()\n",
        "print(\"Next Char Predictions:\\n\", text_from_ids(sampled_indices).numpy())"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dy93CnP_Y1AO",
        "outputId": "6c8dd367-7482-4f3c-b79b-8f5595b1b9f4"
      },
      "execution_count": 31,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Input:\n",
            " b\"\\nBut stay thee, 'tis the fruits of love I mean.\\n\\nLADY GREY:\\nThe fruits of love I mean, my loving lie\"\n",
            "\n",
            "Next Char Predictions:\n",
            " b\"CeFTH'vYGKp[UNK]VSlP&FbrHqxqhpnynyrTi; H[UNK]3JZ3--TyxwXU-:VydiESusAel!LahTIsrcrXP&uHO3dLQtJ?aavk ;. YcGWHK!\"\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 训练模型\n",
        "在这一点上，这个问题可以被视为一个标准的分类问题。给定之前的RNN状态，以及这个时间步长的输入，预测下一个字符的类别。\n",
        "\n"
      ],
      "metadata": {
        "id": "hHJBf3OfZTWS"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# 稀疏交叉分类损失\n",
        "loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)"
      ],
      "metadata": {
        "id": "vUyEJTnSZtC9"
      },
      "execution_count": 32,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "example_batch_mean_loss = loss(target_example_batch, example_batch_predictions)\n",
        "print(\"Prediction shape: \", example_batch_predictions.shape, \" # (batch_size, sequence_length, vocab_size)\")\n",
        "print(\"Mean loss:        \", example_batch_mean_loss)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wJ5TSvQdZxPO",
        "outputId": "bd9df822-082d-4aa2-a005-ea3588f88f99"
      },
      "execution_count": 33,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Prediction shape:  (64, 100, 66)  # (batch_size, sequence_length, vocab_size)\n",
            "Mean loss:         tf.Tensor(4.1890483, shape=(), dtype=float32)\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "新初始化的模型不应该太确定自己，输出逻辑值应该都有相似的大小。为了证实这一点，您可以检查平均损失的指数是否近似等于词汇量大小。更高的损失意味着模型确定其错误的答案，并且初始化很糟糕:"
      ],
      "metadata": {
        "id": "kcXfr2VqaKz-"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "tf.exp(example_batch_mean_loss).numpy()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "bmzzZl4dehjT",
        "outputId": "c881713d-dd1b-45cc-94d1-28697b69988a"
      },
      "execution_count": 34,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "65.959984"
            ]
          },
          "metadata": {},
          "execution_count": 34
        }
      ]
    },
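    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (an illustrative aside, not part of the original tutorial): the untrained model's mean loss should be close to ln(66) ≈ 4.19, the cross-entropy of a uniform distribution over all 66 output classes."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Cross-entropy of a uniform prediction over the 66 output classes;\n",
        "# this should roughly match the untrained model's mean loss above.\n",
        "np.log(66)"
      ]
    },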
    {
      "cell_type": "markdown",
      "source": [
        "使用tf.keras.Model.compile方法配置训练过程。使用带有默认参数和损失函数的tf.keras.optimizers.Adam。"
      ],
      "metadata": {
        "id": "NWefKfrAerP5"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "model.compile(optimizer='adam', loss=loss)"
      ],
      "metadata": {
        "id": "0qc5EdEpeh5s"
      },
      "execution_count": 35,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### 配置检查点\n",
        "使用tf.keras.callback . modelcheckpoint来确保在训练期间保存检查点:"
      ],
      "metadata": {
        "id": "BHTVaBg-ewOW"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# 将保存检查点的目录\n",
        "checkpoint_dir = './training_checkpoints'\n",
        "# 检查点文件的名称\n",
        "checkpoint_prefix = os.path.join(checkpoint_dir, \"ckpt_{epoch}\")\n",
        "\n",
        "checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(\n",
        "    filepath=checkpoint_prefix,\n",
        "    save_weights_only=True)"
      ],
      "metadata": {
        "id": "ERCEZGOSesqO"
      },
      "execution_count": 36,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "#### 开始训练\n"
      ],
      "metadata": {
        "id": "J2DFv9QrfBSR"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "EPOCHS = 20\n",
        "\n",
        "history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "n9PGiSLvfFvD",
        "outputId": "8d98bc36-45e5-4301-c9a9-322663250138"
      },
      "execution_count": 37,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Epoch 1/20\n",
            "172/172 [==============================] - 15s 58ms/step - loss: 2.7277\n",
            "Epoch 2/20\n",
            "172/172 [==============================] - 11s 54ms/step - loss: 1.9996\n",
            "Epoch 3/20\n",
            "172/172 [==============================] - 11s 55ms/step - loss: 1.7157\n",
            "Epoch 4/20\n",
            "172/172 [==============================] - 14s 56ms/step - loss: 1.5527\n",
            "Epoch 5/20\n",
            "172/172 [==============================] - 12s 55ms/step - loss: 1.4528\n",
            "Epoch 6/20\n",
            "172/172 [==============================] - 12s 56ms/step - loss: 1.3842\n",
            "Epoch 7/20\n",
            "172/172 [==============================] - 12s 56ms/step - loss: 1.3310\n",
            "Epoch 8/20\n",
            "172/172 [==============================] - 12s 56ms/step - loss: 1.2848\n",
            "Epoch 9/20\n",
            "172/172 [==============================] - 12s 57ms/step - loss: 1.2432\n",
            "Epoch 10/20\n",
            "172/172 [==============================] - 13s 58ms/step - loss: 1.2035\n",
            "Epoch 11/20\n",
            "172/172 [==============================] - 12s 57ms/step - loss: 1.1628\n",
            "Epoch 12/20\n",
            "172/172 [==============================] - 12s 57ms/step - loss: 1.1216\n",
            "Epoch 13/20\n",
            "172/172 [==============================] - 13s 57ms/step - loss: 1.0770\n",
            "Epoch 14/20\n",
            "172/172 [==============================] - 12s 57ms/step - loss: 1.0313\n",
            "Epoch 15/20\n",
            "172/172 [==============================] - 12s 58ms/step - loss: 0.9818\n",
            "Epoch 16/20\n",
            "172/172 [==============================] - 12s 58ms/step - loss: 0.9311\n",
            "Epoch 17/20\n",
            "172/172 [==============================] - 12s 57ms/step - loss: 0.8779\n",
            "Epoch 18/20\n",
            "172/172 [==============================] - 13s 57ms/step - loss: 0.8260\n",
            "Epoch 19/20\n",
            "172/172 [==============================] - 13s 59ms/step - loss: 0.7738\n",
            "Epoch 20/20\n",
            "172/172 [==============================] - 13s 58ms/step - loss: 0.7250\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 生成文本\n",
        "使用该模型生成文本的最简单方法是在循环中运行它，并在执行时跟踪模型的内部状态。\n",
        "![image.png]()"
      ],
      "metadata": {
        "id": "hTzrp1e5gqjK"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "每次调用模型时，都会传入一些文本和一个内部状态。该模型返回下一个字符及其新状态的预测。将预测和状态传回以继续生成文本。\n",
        "\n",
        "下面做一个单步预测:"
      ],
      "metadata": {
        "id": "gZEDSFWegyTP"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "class OneStep(tf.keras.Model):\n",
        "  # 单步预测\n",
        "  def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0):\n",
        "    super().__init__()\n",
        "    self.temperature = temperature\n",
        "    self.model = model\n",
        "    self.chars_from_ids = chars_from_ids\n",
        "    self.ids_from_chars = ids_from_chars\n",
        "\n",
        "    # 创建一个掩码以防止“[UNK]”生成。\n",
        "    skip_ids = self.ids_from_chars(['[UNK]'])[:, None]\n",
        "    sparse_mask = tf.SparseTensor(\n",
        "        # 在每个不好的索引上加一个-inf。\n",
        "        values=[-float('inf')]*len(skip_ids),\n",
        "        indices=skip_ids,\n",
        "        # 将形状与词汇相匹配\n",
        "        dense_shape=[len(ids_from_chars.get_vocabulary())])\n",
        "    self.prediction_mask = tf.sparse.to_dense(sparse_mask)\n",
        "\n",
        "  @tf.function\n",
        "  def generate_one_step(self, inputs, states=None):\n",
        "    # 将字符串转换为令牌id。\n",
        "    input_chars = tf.strings.unicode_split(inputs, 'UTF-8')\n",
        "    input_ids = self.ids_from_chars(input_chars).to_tensor()\n",
        "\n",
        "    # 运行模型\n",
        "    # predicted_logits.shape is [batch, char, next_char_logits]\n",
        "    predicted_logits, states = self.model(inputs=input_ids, states=states,\n",
        "                        return_state=True)\n",
        "    # 只使用最后的预测。\n",
        "    predicted_logits = predicted_logits[:, -1, :]\n",
        "    predicted_logits = predicted_logits/self.temperature\n",
        "    # 应用预测掩码:防止“[UNK]”生成。\n",
        "    predicted_logits = predicted_logits + self.prediction_mask\n",
        "\n",
        "    # 对输出日志进行采样以生成token id。\n",
        "    predicted_ids = tf.random.categorical(predicted_logits, num_samples=1)\n",
        "    predicted_ids = tf.squeeze(predicted_ids, axis=-1)\n",
        "\n",
        "    # 将令牌id转换为字符\n",
        "    predicted_chars = self.chars_from_ids(predicted_ids)\n",
        "\n",
        "    # 返回字符和模型状态。\n",
        "    return predicted_chars, states"
      ],
      "metadata": {
        "id": "9ZZiOvfafIDs"
      },
      "execution_count": 38,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "one_step_model = OneStep(model, chars_from_ids, ids_from_chars)"
      ],
      "metadata": {
        "id": "LQN_76AQiDCn"
      },
      "execution_count": 39,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "在循环中运行它以生成一些文本。查看生成的文本，您将看到该模型知道何时大写、撰写段落并模仿莎士比亚式的写作词汇。由于训练次数少，它还没有学会形成连贯的句子。"
      ],
      "metadata": {
        "id": "VEB4FVA3iKxh"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "start = time.time()\n",
        "states = None\n",
        "next_char = tf.constant(['ROMEO:'])\n",
        "result = [next_char]\n",
        "\n",
        "for n in range(1000):\n",
        "  next_char, states = one_step_model.generate_one_step(next_char, states=states)\n",
        "  result.append(next_char)\n",
        "\n",
        "result = tf.strings.join(result)\n",
        "end = time.time()\n",
        "print(result[0].numpy().decode('utf-8'), '\\n\\n' + '_'*80)\n",
        "print('\\nRun time:', end - start)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "M9BBTr4UiLTi",
        "outputId": "845b659c-ab96-4666-882e-5ac38a4590e4"
      },
      "execution_count": 44,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "ROMEO:\n",
            "The eldest wake a worse than I must wrong\n",
            "Henceforthy I am out--but for it.\n",
            "\n",
            "ADGID:\n",
            "He hath a harver seas' that apply-tain it and\n",
            "By revenue and death shall breed ones.\n",
            "But such a case is the dost of these sad stirring!\n",
            "But when he pitites calls out of it.\n",
            "Mistake me not, I must have spoken from such grace,\n",
            "To give me from Dull of Buckingham to thee,\n",
            "The next give way that subfuble from the virgin post\n",
            "Should, said that hath a pedlar: though it be dissiden.\n",
            "\n",
            "KING RICHARD III:\n",
            "If, Tybalt, trunce! But, speak!\n",
            "I am a wish, was return to you.\n",
            "\n",
            "ANGELO:\n",
            "See where we would entreat brings the ungent our teddoman,\n",
            "or like a counterfeit bawd. Give me the banks\n",
            "Which 'twas he entertain'd thy poverty,\n",
            "And many slaughter with your heart wept nor less hand,\n",
            "Though 'tis thus can he lies.\n",
            "\n",
            "SEBASTIAN:\n",
            "No; for I am ready to lendy heart.\n",
            "Thus plucks high thoughts, not took her past\n",
            "And chamber appear'd. Come, sirrah yourself\n",
            "I tell yea, and I will pardon me and\n",
            "Of this vice buck and leave their walls.\n",
            "\n",
            " \n",
            "\n",
            "________________________________________________________________________________\n",
            "\n",
            "Run time: 4.375721454620361\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "你能做的最简单的事情就是延长训练时间(试试EPOCHS = 30)。\n",
        "\n",
        "您还可以尝试使用不同的起始字符串，尝试添加另一个RNN层以提高模型的准确性，或者调整温度参数以生成或多或少随机的预测。\n",
        "\n",
        "如果您希望模型更快地生成文本，您可以做的最简单的事情就是批量生成文本。在下面的例子中，模型生成5个输出的时间与上面生成1个输出的时间大致相同。"
      ],
      "metadata": {
        "id": "6z-_G5kai2K-"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "start = time.time()\n",
        "states = None\n",
        "next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:'])\n",
        "result = [next_char]\n",
        "\n",
        "for n in range(1000):\n",
        "  next_char, states = one_step_model.generate_one_step(next_char, states=states)\n",
        "  result.append(next_char)\n",
        "\n",
        "result = tf.strings.join(result)\n",
        "end = time.time()\n",
        "print(result, '\\n\\n' + '_'*80)\n",
        "print('\\nRun time:', end - start)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "onIs4gjbjCKS",
        "outputId": "88c6168f-c9e8-467b-dcb1-a66c7a6aea55"
      },
      "execution_count": 48,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "tf.Tensor(\n",
            "[b\"ROMEO:\\nThese lands all taunts in below his hands\\nWhich he shall find like galland, like a nuked\\nYou will revenge it still upon him.--Low in the curses,\\nWhose hand shed want her kindness and his last,\\nAnd 'twere to do myself to but myssilt end:\\nRemember judgment, scarce any journain of mine,\\nWhen, ere they, in the axeling part not this cofflows in the flinty\\nAs if her eye-bell ere the city I have in him.\\nBut I shall love My lord,\\nMust get upon my brother.\\n\\nMIRANDA:\\nO my poor Margaret, and perish!\\nThus, if thou say the King of Buckingham, battlew must\\nForsook her; but on the way\\nI was from the visage of ten thousand,\\nNothing pups sworms, and express come neaven\\nmuch to death, and her burpher blush in earth.\\n\\nFRIAR LAURENCE:\\nSo chose this odds be, let no great Apollo's ward.\\nWhat, is there some sword, and do contest the earth?\\nWhy, this shall see, the title will be tempted to oxenswife's angers\\nNearved fruitful visition well,\\nThat thought to beat do stirs of them; rule,\\nWho duck bring fashion o\"\n",
            " b\"ROMEO:\\nO that guess, daughter, be thy father.\\n\\nYORK:\\nI am able to be of great amaim\\nDise that are not being charged. If you\\nhave ever made my cousin Hereford, heigh! fear be.\\nIt said 'shall'? flay't with the marriage--\\n\\nSAMPSON:\\nDraw, Bolingbroke: you cannot guess.\\n\\nGLOUCESTER:\\nWhat news abroad, I will; we will commend\\nThe flesh of mercy. Sirron, post that ha!\\n\\nQUEEN MARGARET:\\nPeace, it stands yourself than we did keep to-caite,\\nAm I mine honesty welcome here.\\n\\nSICINIUS:\\nStarf, I have my purpose doth.\\n\\nAUFIDIUS:\\nEarl of justice!\\nHe'er had better witnesses. My father's sad\\nstars, starts, another, speaking, spoke unto you;\\nCome, you are but reweven. Heaven be so featled, call not\\nThe state that came the lieful story have got\\nThe unjustle; thou, sooththing!\\nFor this I think, is content to ned: and he leads\\nTo enter O no one discharge. These eyes\\nOf Brown and true look'd Edward, king,\\nWhen stepal timely hands are thus.\\nTo turn these both fathers 'gainst the sea for one of you;\\nFor being abbotion\"\n",
            " b\"ROMEO:\\nYes, till I have pardon'd you abyord;\\nFor through their daughter, they possess god worthy mercy?\\nTake her gone; and if I will be mine enemies.\\nKnow that over-much to do alive?\\n\\nISABELLA:\\nAy, no longest tear; I am sorry be thy\\nwitisme, the old lady, ladies, leads o' the return:\\nSay, you love peace!\\n\\nMENENIUS:\\n'Tis thought to strike to be your\\npart.\\n\\nHENRY BOLINGBROKE:\\nLook, on that vice, Plantagenet!\\nDown knew of justice, set us, ay, and due too.\\n\\nISABELLA:\\nPlease thee, like such a gentler, sir, I am\\nToo fond to tremble; from the reck; and thankfulness\\nAs I he is the poison of ten daughter.\\nThese letter'd flowers be gracious in pardon.\\n\\nLedisg's young prince, Belkethand on Hereford's sake.\\n\\nQUEEN MARGARET:\\nAy, his apleament, not these greeting to my bont\\nThat pressage the blockness removed: must\\nHe hath beasts one that runs steal'd lies, and they rather\\nThan Romeo seek her true lips to\\nHer judge and baideth still; unger augne right\\nVinner, to their best part that his back.\\nThe law that\"\n",
            " b\"ROMEO:\\nHe prasses, never consul, peace it in the field,\\nMaster, your friends are straised with sigh?--\\nBagot to make alike to arm, and charge\\nThe scandamor brether; those beheld as march.\\nIt dare they bless smothed with it their elyess.\\n\\nCORIOLANUS:\\nLet us revenge you with before: so she divideth haste,\\nAnd every offect imperies strangely.\\n\\nGRUMIO:\\nFallen upon a bank, would ill-for coasen is\\nmy life and lad their parest help King in death,\\nAnd such a fellow happy trip.\\n\\nMIAN ALY:\\nDare you in babe,\\nwill get you to the maid so in my humble with\\nher privilege of grief day would, with such nest\\nShall be extinct with dift. Clarence, and yet I come\\nA labour with your eyes, report my neck,\\nOr I'll kiss me: 'tis well in his passage.\\n\\nSICINIUS:\\nLet's follow: we shall not cet a bank of in\\nWhich nature makes the word.\\n\\nANTONIO:\\nWhy dost thou slity too children?\\nLet me speak say.\\n\\nPETRUCHIO:\\nSome, thou never shall, my lord deed day.\\n\\nHENRY BOLINGBROKE:\\nMany son, I fear thee, sleep, and sufford!\\nSo III:\\n\"\n",
            " b\"ROMEO:\\nStrike the dotiful occaportake he\\nmake legs that beto the Truth, redou to nigh--\\nLady with the heavy rabbind mated!\\nMy daughter here one word their complexion: this is he myself.\\n\\nWARWICK:\\nAnd Buckingham, and yet yourselves:\\nFarewell: O, what, Shall we to die\\nTill Juliot but a sort. What's within; which no lament\\nWith all my faults may chance to seek their infrimented like\\nBefore we could thrive in her eyes,\\nThe saluness are from the strimes that had much\\ndeserved our fortune in the seat o' the cest.\\n\\nGLOUCESTER:\\nMortal, my Lord, here will I remember,\\nA chear bedrage left unto us.\\n\\nCOMINIUS:\\nSoft! I fear, I fear,--\\n\\nQUEEN ELIZABETH:\\nThis is the father, A dozen timorous\\nFrozen friends and to my soul in their eyes.\\nI have a noble scratch that came with you.\\n\\nAUTOLYCUS:\\nI must become to kill him.\\n\\nSICINIUS:\\nThe hence will eaten up your loves!\\nPut thy toolmer--bed I had ruth, not\\nto we or their lords and brought her publicly,\\nhaving no ear. 'tis not for our husband. O,\\nIt is excelse my fa\"], shape=(5,), dtype=string) \n",
            "\n",
            "________________________________________________________________________________\n",
            "\n",
            "Run time: 3.524188756942749\n"
          ]
        }
      ]
    },
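    {
      "cell_type": "markdown",
      "source": [
        "The `temperature` argument to `OneStep` divides the logits before sampling: values below 1.0 make the output more predictable, values above 1.0 make it more random (for example, `OneStep(model, chars_from_ids, ids_from_chars, temperature=0.5)` samples more conservatively). As a minimal sketch of why (a NumPy-only illustration, not part of the original tutorial), dividing logits by the temperature sharpens or flattens the resulting softmax distribution:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import numpy as np\n",
        "\n",
        "def softmax(x):\n",
        "    e = np.exp(x - x.max())\n",
        "    return e / e.sum()\n",
        "\n",
        "logits = np.array([2.0, 1.0, 0.0])\n",
        "p_normal = softmax(logits)      # temperature = 1.0\n",
        "p_cold = softmax(logits / 0.5)  # low temperature: sharper, less random\n",
        "p_hot = softmax(logits / 2.0)   # high temperature: flatter, more random\n",
        "\n",
        "# The most likely character gets more probability mass at low\n",
        "# temperature, and less at high temperature.\n",
        "print(p_cold[0], p_normal[0], p_hot[0])"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },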
    {
      "cell_type": "markdown",
      "source": [
        "### 导出生成器\n",
        "这个单步模型可以很容易地保存和恢复，tf.saved_model允许您在任何地方使用它。"
      ],
      "metadata": {
        "id": "9Av1N8E-jdiM"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "tf.saved_model.save(one_step_model, 'one_step')\n",
        "one_step_reloaded = tf.saved_model.load('one_step')"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "bQCZUDPKjsmi",
        "outputId": "e97cd148-7cc2-442d-dbee-14ca4af7be9c"
      },
      "execution_count": 49,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "WARNING:tensorflow:Skipping full serialization of Keras layer <__main__.OneStep object at 0x7fa9ecd92c20>, because it is not built.\n",
            "WARNING:tensorflow:Model's `__init__()` arguments contain non-serializable objects. Please implement a `get_config()` method in the subclassed Model for proper saving and loading. Defaulting to empty config.\n",
            "WARNING:tensorflow:Model's `__init__()` arguments contain non-serializable objects. Please implement a `get_config()` method in the subclassed Model for proper saving and loading. Defaulting to empty config.\n",
            "WARNING:absl:Found untraced functions such as gru_cell_layer_call_fn, gru_cell_layer_call_and_return_conditional_losses while saving (showing 2 of 2). These functions will not be directly callable after loading.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "states = None\n",
        "next_char = tf.constant(['ROMEO:'])\n",
        "result = [next_char]\n",
        "\n",
        "for n in range(100):\n",
        "  next_char, states = one_step_reloaded.generate_one_step(next_char, states=states)\n",
        "  result.append(next_char)\n",
        "\n",
        "print(tf.strings.join(result)[0].numpy().decode(\"utf-8\"))"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "66DsE1mbj5In",
        "outputId": "fcadb5e8-bb15-4419-ab7a-70c2a754d1e8"
      },
      "execution_count": 50,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "ROMEO:\n",
            "These lips must behold me again,\n",
            "When mightish taughs her ears to stalve him; for her well her sain\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 高级：定制训练\n",
        "上面的训练程序很简单，但并没有给你太多的控制权。它使用教师强制来防止错误的预测反馈到模型中，因此模型永远不会学会从错误中恢复过来。\n",
        "\n",
        "现在您已经看到了如何手动运行模型，接下来您将实现训练循环。例如，如果您希望实现课程学习以帮助稳定模型的开环输出，那么这提供了一个起点。\n",
        "\n",
        "自定义训练循环中最重要的部分是训练阶跃函数。\n",
        "\n",
        "使用tf.GradientTape跟踪梯度。\n",
        "\n",
        "基本程序如下：\n",
        "* 执行模型并计算在tf.GradientTape下的损耗。\n",
        "* 计算更新并使用优化器将其应用于模型。"
      ],
      "metadata": {
        "id": "nJUUPr34kJyO"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "class CustomTraining(MyModel):\n",
        "  @tf.function\n",
        "  def train_step(self, inputs):\n",
        "      inputs, labels = inputs\n",
        "      with tf.GradientTape() as tape:\n",
        "          predictions = self(inputs, training=True)\n",
        "          loss = self.loss(labels, predictions)\n",
        "      grads = tape.gradient(loss, model.trainable_variables)\n",
        "      self.optimizer.apply_gradients(zip(grads, model.trainable_variables))\n",
        "\n",
        "      return {'loss': loss}"
      ],
      "metadata": {
        "id": "_DrXk1xDkxBB"
      },
      "execution_count": 51,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "model.compile(optimizer = tf.keras.optimizers.Adam(),\n",
        "       loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))"
      ],
      "metadata": {
        "id": "DkM44Ju_k5QK"
      },
      "execution_count": 53,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "model.fit(dataset, epochs=1)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "GmZlIKkRk8DU",
        "outputId": "f3360f20-a580-4184-ae2d-a5fa45dd4534"
      },
      "execution_count": 54,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "172/172 [==============================] - 14s 59ms/step - loss: 0.6559\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<keras.callbacks.History at 0x7fa9daf4d930>"
            ]
          },
          "metadata": {},
          "execution_count": 54
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "EPOCHS = 10\n",
        "\n",
        "mean = tf.metrics.Mean()\n",
        "\n",
        "for epoch in range(EPOCHS):\n",
        "    start = time.time()\n",
        "\n",
        "    mean.reset_states()\n",
        "    for (batch_n, (inp, target)) in enumerate(dataset):\n",
        "        logs = model.train_step([inp, target])\n",
        "        mean.update_state(logs['loss'])\n",
        "\n",
        "        if batch_n % 50 == 0:\n",
        "            template = f\"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}\"\n",
        "            print(template)\n",
        "\n",
        "    # saving (checkpoint) the model every 5 epochs\n",
        "    if (epoch + 1) % 5 == 0:\n",
        "        model.save_weights(checkpoint_prefix.format(epoch=epoch))\n",
        "\n",
        "    print()\n",
        "    print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}')\n",
        "    print(f'Time taken for 1 epoch {time.time() - start:.2f} sec')\n",
        "    print(\"_\"*80)\n",
        "\n",
        "model.save_weights(checkpoint_prefix.format(epoch=epoch))"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "3BvSwuj2lL9k",
        "outputId": "c1a502fe-98cf-40ae-8e44-b3457ca20fd9"
      },
      "execution_count": 55,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "WARNING:tensorflow:5 out of the last 5 calls to <function _BaseOptimizer._update_step_xla at 0x7fa9dac65cf0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.\n",
            "WARNING:tensorflow:6 out of the last 6 calls to <function _BaseOptimizer._update_step_xla at 0x7fa9dac65cf0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Epoch 1 Batch 0 Loss 0.6552\n",
            "Epoch 1 Batch 50 Loss 0.6324\n",
            "Epoch 1 Batch 100 Loss 0.6237\n",
            "Epoch 1 Batch 150 Loss 0.6226\n",
            "\n",
            "Epoch 1 Loss: 0.6299\n",
            "Time taken for 1 epoch 15.64 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 2 Batch 0 Loss 0.6232\n",
            "Epoch 2 Batch 50 Loss 0.6093\n",
            "Epoch 2 Batch 100 Loss 0.6023\n",
            "Epoch 2 Batch 150 Loss 0.6003\n",
            "\n",
            "Epoch 2 Loss: 0.6065\n",
            "Time taken for 1 epoch 12.80 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 3 Batch 0 Loss 0.6002\n",
            "Epoch 3 Batch 50 Loss 0.5901\n",
            "Epoch 3 Batch 100 Loss 0.5839\n",
            "Epoch 3 Batch 150 Loss 0.5815\n",
            "\n",
            "Epoch 3 Loss: 0.5872\n",
            "Time taken for 1 epoch 13.45 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 4 Batch 0 Loss 0.5813\n",
            "Epoch 4 Batch 50 Loss 0.5736\n",
            "Epoch 4 Batch 100 Loss 0.5682\n",
            "Epoch 4 Batch 150 Loss 0.5660\n",
            "\n",
            "Epoch 4 Loss: 0.5709\n",
            "Time taken for 1 epoch 20.47 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 5 Batch 0 Loss 0.5654\n",
            "Epoch 5 Batch 50 Loss 0.5587\n",
            "Epoch 5 Batch 100 Loss 0.5543\n",
            "Epoch 5 Batch 150 Loss 0.5520\n",
            "\n",
            "Epoch 5 Loss: 0.5564\n",
            "Time taken for 1 epoch 13.61 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 6 Batch 0 Loss 0.5516\n",
            "Epoch 6 Batch 50 Loss 0.5459\n",
            "Epoch 6 Batch 100 Loss 0.5420\n",
            "Epoch 6 Batch 150 Loss 0.5400\n",
            "\n",
            "Epoch 6 Loss: 0.5439\n",
            "Time taken for 1 epoch 13.04 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 7 Batch 0 Loss 0.5396\n",
            "Epoch 7 Batch 50 Loss 0.5348\n",
            "Epoch 7 Batch 100 Loss 0.5315\n",
            "Epoch 7 Batch 150 Loss 0.5297\n",
            "\n",
            "Epoch 7 Loss: 0.5330\n",
            "Time taken for 1 epoch 13.03 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 8 Batch 0 Loss 0.5293\n",
            "Epoch 8 Batch 50 Loss 0.5250\n",
            "Epoch 8 Batch 100 Loss 0.5219\n",
            "Epoch 8 Batch 150 Loss 0.5203\n",
            "\n",
            "Epoch 8 Loss: 0.5233\n",
            "Time taken for 1 epoch 13.18 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 9 Batch 0 Loss 0.5199\n",
            "Epoch 9 Batch 50 Loss 0.5161\n",
            "Epoch 9 Batch 100 Loss 0.5134\n",
            "Epoch 9 Batch 150 Loss 0.5119\n",
            "\n",
            "Epoch 9 Loss: 0.5146\n",
            "Time taken for 1 epoch 13.22 sec\n",
            "________________________________________________________________________________\n",
            "Epoch 10 Batch 0 Loss 0.5116\n",
            "Epoch 10 Batch 50 Loss 0.5084\n",
            "Epoch 10 Batch 100 Loss 0.5060\n",
            "Epoch 10 Batch 150 Loss 0.5047\n",
            "\n",
            "Epoch 10 Loss: 0.5071\n",
            "Time taken for 1 epoch 16.24 sec\n",
            "________________________________________________________________________________\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "参考： https://www.tensorflow.org/text/tutorials/text_generation"
      ],
      "metadata": {
        "id": "PPmiP1ngosKC"
      }
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "tensorflow",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.16"
    },
    "orig_nbformat": 4,
    "colab": {
      "provenance": [],
      "gpuType": "T4",
      "include_colab_link": true
    },
    "accelerator": "GPU",
    "gpuClass": "standard"
  },
  "nbformat": 4,
  "nbformat_minor": 0
}