{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 循环神经网络的从零开始实现\n",
    "<a target=\"_blank\" href=\"https://colab.research.google.com/github/Charmve/computer-vision-in-action/blob/main/notebooks/chapter04_recurrent-neural-networks/rnn-scratch.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" align=\"center\"></a> <br> <a target=\"_blank\" href=\"https://nbviewer.jupyter.org/format/slides/github/Charmve/computer-vision-in-action/blob/main/notebooks/chapter04_recurrent-neural-networks/rnn-scratch.ipynb\"><img src=\"https://img.shields.io/badge/-View%20on%20Binder-yellow.svg?logo=jupyter\" align=\"center\"></a>\n",
    "\n",
    "\n",
    "在本节中，我们将从零开始实现一个基于字符级循环神经网络的语言模型，并在周杰伦专辑歌词数据集上训练一个模型来进行歌词创作。首先，我们读取周杰伦专辑歌词数据集。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "1"
    }
   },
   "outputs": [],
   "source": [
    "import L0CV\n",
    "import math\n",
    "from mxnet import autograd, nd\n",
    "from mxnet.gluon import loss as gloss\n",
    "import time\n",
    "\n",
    "(corpus_indices, char_to_idx, idx_to_char,\n",
    " vocab_size) = L0CV.load_data_jay_lyrics()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## one-hot向量\n",
    "\n",
    "为了将词表示成向量输入到神经网络，一个简单的办法是使用one-hot向量。假设词典中不同字符的数量为$N$（即词典大小`vocab_size`），每个字符已经同一个从0到$N-1$的连续整数值索引一一对应。如果一个字符的索引是整数$i$, 那么我们创建一个全0的长为$N$的向量，并将其位置为$i$的元素设成1。该向量就是对原字符的one-hot向量。下面分别展示了索引为0和2的one-hot向量，向量长度等于词典大小。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "2"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\n",
       "[[1. 0. 0. ... 0. 0. 0.]\n",
       " [0. 0. 1. ... 0. 0. 0.]]\n",
       "<NDArray 2x1027 @cpu(0)>"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "nd.one_hot(nd.array([0, 2]), vocab_size)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们每次采样的小批量的形状是(批量大小, 时间步数)。下面的函数将这样的小批量变换成数个可以输入进网络的形状为(批量大小, 词典大小)的矩阵，矩阵个数等于时间步数。也就是说，时间步$t$的输入为$\\boldsymbol{X}_t \\in \\mathbb{R}^{n \\times d}$，其中$n$为批量大小，$d$为输入个数，即one-hot向量长度（词典大小）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "3"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(5, (2, 1027))"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def to_onehot(X, size):  # 本函数已保存在d2lzh包中方便以后使用\n",
    "    return [nd.one_hot(x, size) for x in X.T]\n",
    "\n",
    "X = nd.arange(10).reshape((2, 5))\n",
    "inputs = to_onehot(X, vocab_size)\n",
    "len(inputs), inputs[0].shape"
   ]
  },
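  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (an illustrative sketch added here, not part of the original pipeline): the $t$-th matrix returned by `to_onehot` encodes column $t$ of `X`, so taking the argmax along axis 1 should recover the original integer indices of that time step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each one-hot matrix corresponds to one time step, i.e., one column of X,\n",
    "# so the argmax along axis 1 recovers that column, here X[:, 0] = [0, 5]\n",
    "inputs[0].argmax(axis=1)"
   ]
  },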
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 初始化模型参数\n",
    "\n",
    "接下来，我们初始化模型参数。隐藏单元个数 `num_hiddens`是一个超参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "4"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "will use gpu(0)\n"
     ]
    }
   ],
   "source": [
    "num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size\n",
    "ctx = d2l.try_gpu()\n",
    "print('will use', ctx)\n",
    "\n",
    "def get_params():\n",
    "    def _one(shape):\n",
    "        return nd.random.normal(scale=0.01, shape=shape, ctx=ctx)\n",
    "\n",
    "    # 隐藏层参数\n",
    "    W_xh = _one((num_inputs, num_hiddens))\n",
    "    W_hh = _one((num_hiddens, num_hiddens))\n",
    "    b_h = nd.zeros(num_hiddens, ctx=ctx)\n",
    "    # 输出层参数\n",
    "    W_hq = _one((num_hiddens, num_outputs))\n",
    "    b_q = nd.zeros(num_outputs, ctx=ctx)\n",
    "    # 附上梯度\n",
    "    params = [W_xh, W_hh, b_h, W_hq, b_q]\n",
    "    for param in params:\n",
    "        param.attach_grad()\n",
    "    return params"
   ]
  },
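  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the shapes concrete (a minimal check added for illustration), we can list the shape of each parameter returned by `get_params`; the order is `W_xh`, `W_hh`, `b_h`, `W_hq`, `b_q`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative check: (inputs -> hidden), (hidden -> hidden), hidden bias,\n",
    "# (hidden -> output), output bias\n",
    "[p.shape for p in get_params()]"
   ]
  },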
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 定义模型\n",
    "\n",
    "我们根据循环神经网络的计算表达式实现该模型。首先定义`init_rnn_state`函数来返回初始化的隐藏状态。它返回由一个形状为(批量大小, 隐藏单元个数)的值为0的`NDArray`组成的元组。使用元组是为了更便于处理隐藏状态含有多个`NDArray`的情况。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "5"
    }
   },
   "outputs": [],
   "source": [
    "def init_rnn_state(batch_size, num_hiddens, ctx):\n",
    "    return (nd.zeros(shape=(batch_size, num_hiddens), ctx=ctx), )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面的`rnn`函数定义了在一个时间步里如何计算隐藏状态和输出。这里的激活函数使用了tanh函数。[“多层感知机”](../chapter_deep-learning-basics/mlp.ipynb)一节中介绍过，当元素在实数域上均匀分布时，tanh函数值的均值为0。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "6"
    }
   },
   "outputs": [],
   "source": [
    "def rnn(inputs, state, params):\n",
    "    # inputs和outputs皆为num_steps个形状为(batch_size, vocab_size)的矩阵\n",
    "    W_xh, W_hh, b_h, W_hq, b_q = params\n",
    "    H, = state\n",
    "    outputs = []\n",
    "    for X in inputs:\n",
    "        H = nd.tanh(nd.dot(X, W_xh) + nd.dot(H, W_hh) + b_h)\n",
    "        Y = nd.dot(H, W_hq) + b_q\n",
    "        outputs.append(Y)\n",
    "    return outputs, (H,)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "做个简单的测试来观察输出结果的个数（时间步数），以及第一个时间步的输出层输出的形状和隐藏状态的形状。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "7"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(5, (2, 1027), (2, 256))"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "state = init_rnn_state(X.shape[0], num_hiddens, ctx)\n",
    "inputs = to_onehot(X.as_in_context(ctx), vocab_size)\n",
    "params = get_params()\n",
    "outputs, state_new = rnn(inputs, state, params)\n",
    "len(outputs), outputs[0].shape, state_new[0].shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 定义预测函数\n",
    "\n",
    "以下函数基于前缀`prefix`（含有数个字符的字符串）来预测接下来的`num_chars`个字符。这个函数稍显复杂，其中我们将循环神经单元`rnn`设置成了函数参数，这样在后面小节介绍其他循环神经网络时能重复使用这个函数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "8"
    }
   },
   "outputs": [],
   "source": [
    "# 本函数已保存在d2lzh包中方便以后使用\n",
    "def predict_rnn(prefix, num_chars, rnn, params, init_rnn_state,\n",
    "                num_hiddens, vocab_size, ctx, idx_to_char, char_to_idx):\n",
    "    state = init_rnn_state(1, num_hiddens, ctx)\n",
    "    output = [char_to_idx[prefix[0]]]\n",
    "    for t in range(num_chars + len(prefix) - 1):\n",
    "        # 将上一时间步的输出作为当前时间步的输入\n",
    "        X = to_onehot(nd.array([output[-1]], ctx=ctx), vocab_size)\n",
    "        # 计算输出和更新隐藏状态\n",
    "        (Y, state) = rnn(X, state, params)\n",
    "        # 下一个时间步的输入是prefix里的字符或者当前的最佳预测字符\n",
    "        if t < len(prefix) - 1:\n",
    "            output.append(char_to_idx[prefix[t + 1]])\n",
    "        else:\n",
    "            output.append(int(Y[0].argmax(axis=1).asscalar()))\n",
    "    return ''.join([idx_to_char[i] for i in output])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们先测试一下`predict_rnn`函数。我们将根据前缀“分开”创作长度为10个字符（不考虑前缀长度）的一段歌词。因为模型参数为随机值，所以预测结果也是随机的。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "9"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'分开句?彷清旧饿多护笔嘟'"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "predict_rnn('分开', 10, rnn, params, init_rnn_state, num_hiddens, vocab_size,\n",
    "            ctx, idx_to_char, char_to_idx)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 裁剪梯度\n",
    "\n",
    "循环神经网络中较容易出现梯度衰减或梯度爆炸。我们会在[“通过时间反向传播”](bptt.ipynb)一节中解释原因。为了应对梯度爆炸，我们可以裁剪梯度（clip gradient）。假设我们把所有模型参数梯度的元素拼接成一个向量 $\\boldsymbol{g}$，并设裁剪的阈值是$\\theta$。裁剪后的梯度\n",
    "\n",
    "$$ \\min\\left(\\frac{\\theta}{\\|\\boldsymbol{g}\\|}, 1\\right)\\boldsymbol{g}$$\n",
    "\n",
    "的$L_2$范数不超过$\\theta$。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "10"
    }
   },
   "outputs": [],
   "source": [
    "# 本函数已保存在d2lzh包中方便以后使用\n",
    "def grad_clipping(params, theta, ctx):\n",
    "    norm = nd.array([0], ctx)\n",
    "    for param in params:\n",
    "        norm += (param.grad ** 2).sum()\n",
    "    norm = norm.sqrt().asscalar()\n",
    "    if norm > theta:\n",
    "        for param in params:\n",
    "            param.grad[:] *= theta / norm"
   ]
  },
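  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numerical check (an illustrative sketch added here, using a throwaway parameter rather than the model's), we fill a gradient with large values, clip with a small `theta`, and verify that the clipped norm is at most `theta`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Throwaway parameter whose gradient will be all 100s, with L2 norm 400\n",
    "p = nd.ones((4, 4), ctx=ctx)\n",
    "p.attach_grad()\n",
    "with autograd.record():\n",
    "    l = (p * 100).sum()\n",
    "l.backward()\n",
    "grad_clipping([p], 1e-2, ctx)  # rescales the gradient to norm 1e-2\n",
    "(p.grad ** 2).sum().sqrt()"
   ]
  },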
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 困惑度\n",
    "\n",
    "我们通常使用困惑度（perplexity）来评价语言模型的好坏。回忆一下[“softmax回归”](../chapter_deep-learning-basics/softmax-regression.ipynb)一节中交叉熵损失函数的定义。困惑度是对交叉熵损失函数做指数运算后得到的值。特别地，\n",
    "\n",
    "* 最佳情况下，模型总是把标签类别的概率预测为1，此时困惑度为1；\n",
    "* 最坏情况下，模型总是把标签类别的概率预测为0，此时困惑度为正无穷；\n",
    "* 基线情况下，模型总是预测所有类别的概率都相同，此时困惑度为类别个数。\n",
    "\n",
    "显然，任何一个有效模型的困惑度必须小于类别个数。在本例中，困惑度必须小于词典大小`vocab_size`。\n",
    "\n",
    "## 定义模型训练函数\n",
    "\n",
    "与之前章节的模型训练函数相比，这里的模型训练函数有以下几点不同：\n",
    "\n",
    "1. 使用困惑度评价模型。\n",
    "2. 在迭代模型参数前裁剪梯度。\n",
    "3. 对时序数据采用不同采样方法将导致隐藏状态初始化的不同。相关讨论可参考[“语言模型数据集（周杰伦专辑歌词）”](lang-model-dataset.ipynb)一节。\n",
    "\n",
    "另外，考虑到后面将介绍的其他循环神经网络，为了更通用，这里的函数实现更长一些。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "11"
    }
   },
   "outputs": [],
   "source": [
    "# 本函数已保存在d2lzh包中方便以后使用\n",
    "def train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens,\n",
    "                          vocab_size, ctx, corpus_indices, idx_to_char,\n",
    "                          char_to_idx, is_random_iter, num_epochs, num_steps,\n",
    "                          lr, clipping_theta, batch_size, pred_period,\n",
    "                          pred_len, prefixes):\n",
    "    if is_random_iter:\n",
    "        data_iter_fn = d2l.data_iter_random\n",
    "    else:\n",
    "        data_iter_fn = d2l.data_iter_consecutive\n",
    "    params = get_params()\n",
    "    loss = gloss.SoftmaxCrossEntropyLoss()\n",
    "\n",
    "    for epoch in range(num_epochs):\n",
    "        if not is_random_iter:  # 如使用相邻采样，在epoch开始时初始化隐藏状态\n",
    "            state = init_rnn_state(batch_size, num_hiddens, ctx)\n",
    "        l_sum, n, start = 0.0, 0, time.time()\n",
    "        data_iter = data_iter_fn(corpus_indices, batch_size, num_steps, ctx)\n",
    "        for X, Y in data_iter:\n",
    "            if is_random_iter:  # 如使用随机采样，在每个小批量更新前初始化隐藏状态\n",
    "                state = init_rnn_state(batch_size, num_hiddens, ctx)\n",
    "            else:  # 否则需要使用detach函数从计算图分离隐藏状态\n",
    "                for s in state:\n",
    "                    s.detach()\n",
    "            with autograd.record():\n",
    "                inputs = to_onehot(X, vocab_size)\n",
    "                # outputs有num_steps个形状为(batch_size, vocab_size)的矩阵\n",
    "                (outputs, state) = rnn(inputs, state, params)\n",
    "                # 连结之后形状为(num_steps * batch_size, vocab_size)\n",
    "                outputs = nd.concat(*outputs, dim=0)\n",
    "                # Y的形状是(batch_size, num_steps)，转置后再变成长度为\n",
    "                # batch * num_steps 的向量，这样跟输出的行一一对应\n",
    "                y = Y.T.reshape((-1,))\n",
    "                # 使用交叉熵损失计算平均分类误差\n",
    "                l = loss(outputs, y).mean()\n",
    "            l.backward()\n",
    "            grad_clipping(params, clipping_theta, ctx)  # 裁剪梯度\n",
    "            d2l.sgd(params, lr, 1)  # 因为误差已经取过均值，梯度不用再做平均\n",
    "            l_sum += l.asscalar() * y.size\n",
    "            n += y.size\n",
    "\n",
    "        if (epoch + 1) % pred_period == 0:\n",
    "            print('epoch %d, perplexity %f, time %.2f sec' % (\n",
    "                epoch + 1, math.exp(l_sum / n), time.time() - start))\n",
    "            for prefix in prefixes:\n",
    "                print(' -', predict_rnn(\n",
    "                    prefix, pred_len, rnn, params, init_rnn_state,\n",
    "                    num_hiddens, vocab_size, ctx, idx_to_char, char_to_idx))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 训练模型并创作歌词\n",
    "\n",
    "现在我们可以训练模型了。首先，设置模型超参数。我们将根据前缀“分开”和“不分开”分别创作长度为50个字符（不考虑前缀长度）的一段歌词。我们每过50个迭代周期便根据当前训练的模型创作一段歌词。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "12"
    }
   },
   "outputs": [],
   "source": [
    "num_epochs, num_steps, batch_size, lr, clipping_theta = 250, 35, 32, 1e2, 1e-2\n",
    "pred_period, pred_len, prefixes = 50, 50, ['分开', '不分开']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面采用随机采样训练模型并创作歌词。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "13"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 50, perplexity 70.015369, time 0.22 sec\n",
      " - 分开 我想要这想你 我知想这想你 我知想这想你 我知想这想你 我知想这想你 我知想这想你 我知想这想你 \n",
      " - 不分开 我想要再想你 我知想这想你 我知想这想你 我知想这想你 我知想这想你 我知想这想你 我知想这想你 \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 100, perplexity 9.628899, time 0.23 sec\n",
      " - 分开 有一定人 在小村外 在一场空 你有不知 你一定空 不有不知 你一定空 不一样容 你一定空 不人样元\n",
      " - 不分开吗 我不能看想 我不 我不 我不能再想你 不知我遇 你是一场悲剧 又知后  你在的让快人不 一只令酒\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 150, perplexity 2.877221, time 0.22 sec\n",
      " - 分开 在天心不过口 哼哼的客栈 还后过人的太暖 我想 你静 我不要再想 我不 我不 我不要 爱情走的太快\n",
      " - 不分开吗 我叫你爸 你打我妈 这样了吗的嘛用 我说店够二 三两银够不够 景色入秋 漫天黄潮太我 说分不觉球\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 200, perplexity 1.577534, time 0.22 sec\n",
      " - 分开对落每白墙许远 在感来 穿梭时间的画面的钟 从反方向开始移动 回到当初爱你的时空 停格内容不忠 所有\n",
      " - 不分开吗把的胖女巫 用拉丁文念咒语啦啦呜 她养的黑猫笑起来像哭 啦啦啦呜 一人之枪 在底夜空 过去不同 你\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 250, perplexity 1.293438, time 0.22 sec\n",
      " - 分开不想 藤说还底心里 单多壁以居球 漂亮抢靠我 出话让人难喝 心伤透 娘子她人在江南等我 泪不休 语沉\n",
      " - 不分开吗 然后将过去 慢慢温习 让我爱上你 那场悲剧 是你完美演出的一场戏 宁愿心碎哭泣 再狠狠忘记 你爱\n"
     ]
    }
   ],
   "source": [
    "train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens,\n",
    "                      vocab_size, ctx, corpus_indices, idx_to_char,\n",
    "                      char_to_idx, True, num_epochs, num_steps, lr,\n",
    "                      clipping_theta, batch_size, pred_period, pred_len,\n",
    "                      prefixes)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来采用相邻采样训练模型并创作歌词。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "attributes": {
     "classes": [],
     "id": "",
     "n": "19"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 50, perplexity 61.477279, time 0.22 sec\n",
      " - 分开 我想要的可爱女人 坏坏的让我疯狂的可爱女人 坏坏的让我疯狂的可爱女人 坏坏的让我疯狂的可爱女人 坏\n",
      " - 不分开 我想我的可爱女人 坏坏的让我疯狂的可爱女人 坏坏的让我疯狂的可爱女人 坏坏的让我疯狂的可爱女人 坏\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 100, perplexity 7.224846, time 0.22 sec\n",
      " - 分开 我说想这生活 不知不觉 我跟经这节奏 后知后觉 如果用双截棍 一北哈兮 快使用双截棍 一哼哈兮 快\n",
      " - 不分开只 就样经抽 你后的脚 有果我纵 你有一直 我一定空 如自己纵 你一没空 不一己痛 你一定痛 不一己\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 150, perplexity 2.090715, time 0.22 sec\n",
      " - 分开 我说的这样 别没什么走 仙里在怕落 我对 他子  谁常依忆 我该儿这生  没有你在我有多难熬多烦恼\n",
      " - 不分开步 你已经离开我 不知不觉 我跟了这节奏 后知后觉 又过了一个秋 后知后觉 我该好好生活 我该好好生\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 200, perplexity 1.316393, time 0.22 sec\n",
      " - 分开 我说啊这叹 别没开要不够 景色入秋 漫天黄沙凉过 塞北的客栈人多 牧草有没有 我马儿有些瘦 天涯尽\n",
      " - 不分开觉 你已经离开我 不知不觉 我跟了这节奏 后知后觉 又过了一个秋 后知后觉 我该好好生活 我该好好生\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 250, perplexity 1.164650, time 0.22 sec\n",
      " - 分开 我叫的话爱  你有着对我有 难想了这样在 对天在最不舍的一片丘 宁狼安老斑鸠 再狠狠忘记 几天都没\n",
      " - 不分开觉 你已经离开我 不知不觉 我跟了这节奏 后知后觉 后知了一个秋 后知后觉 我该好好生活 我该好好生\n"
     ]
    }
   ],
   "source": [
    "train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens,\n",
    "                      vocab_size, ctx, corpus_indices, idx_to_char,\n",
    "                      char_to_idx, False, num_epochs, num_steps, lr,\n",
    "                      clipping_theta, batch_size, pred_period, pred_len,\n",
    "                      prefixes)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 小结\n",
    "\n",
    "* 可以用基于字符级循环神经网络的语言模型来生成文本序列，例如创作歌词。\n",
    "* 当训练循环神经网络时，为了应对梯度爆炸，可以裁剪梯度。\n",
    "* 困惑度是对交叉熵损失函数做指数运算后得到的值。\n",
    "\n",
    "\n",
    "## 练习\n",
    "\n",
    "* 调调超参数，观察并分析对运行时间、困惑度以及创作歌词的结果造成的影响。\n",
    "* 不裁剪梯度，运行本节中的代码，结果会怎样？\n",
    "* 将`pred_period`变量设为1，观察未充分训练的模型（困惑度高）是如何创作歌词的。你获得了什么启发？\n",
    "* 将相邻采样改为不从计算图分离隐藏状态，运行时间有没有变化？\n",
    "* 将本节中使用的激活函数替换成ReLU，重复本节的实验。\n",
    "\n",
    "\n",
    "\n",
    "## 扫码直达[讨论区](https://discuss.gluon.ai/t/topic/989)\n",
    "\n",
    "![](../img/qr_rnn-scratch.svg)"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
