{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "AyYaroWfELFo"
   },
   "source": "# RNN前向传播计算过程"
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "fXWmy3GdELFr",
    "ExecuteTime": {
     "end_time": "2025-10-13T05:40:18.775761Z",
     "start_time": "2025-10-13T05:40:18.087606Z"
    }
   },
   "source": [
    "import numpy as np\n",
    "from rnn_utils import *\n",
    "from public_tests import *"
   ],
   "outputs": [],
   "execution_count": 3
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "03kGDM_sELFv"
   },
   "source": "## 1 - 基本RNN的前向传播"
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "ZBxzkRzsELFv"
   },
   "source": [
    "<img src=\"images/RNN.png\" style=\"width:500;height:300px;\">\n",
    "<caption><center><font color='purple'><b>Figure 1</b>: 基本RNN </center></caption>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "jFNRtAP_ELFw"
   },
   "source": [
    "### 输入 $x$ 的维度\n",
    "\n",
    "#### 包含 $n_x$ 个单元的输入\n",
    "* 对于单个输入示例的单个时间步长，$x^{(i) \\langle t \\rangle }$ 是一个一维输入向量\n",
    "* 以语言为例，词汇量为 5000 的语言可以进行独热编码，生成一个包含 5000 个单元的向量。因此，$x^{(i)\\langle t \\rangle}$ 的形状为 (5000,)\n",
    "* 此处使用符号 $n_x$ 表示单个训练示例单个时间步长的单元数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "EnYGy4L-ELFx"
   },
   "source": [
    "#### 大小为 $T_{x}$ 的时间步长\n",
    "* 循环神经网络有多个时间步长，你可以用 $t$ 来索引它们。\n",
    "* 由多个时间步长 $T_x$ 组成的训练样本 $x^{(i)}$。 $T_{x}$ 表示最长序列中的时间步长数。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Azzhk7jCELFx"
   },
   "source": [
    "#### 大小为 $m$ 的批次\n",
    "* 假设我们有一些小批次，每个批次包含 20 个训练样本\n",
    "* 为了利用向量化的优势，你需要堆叠 20 列 $x^{(i)}$ 个样本\n",
    "* 例如，这个张量的形状为 (5000,20,10)\n",
    "* 你将使用 $m$ 表示训练样本的数量\n",
    "* 因此，小批次的形状为 $(n_x,m,T_x)$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "qNR7VozOELFy"
   },
   "source": [
    "#### 形状为 $(n_{x},m,T_{x})$ 的三维张量\n",
    "* 形状为 $(n_x,m,T_x)$ 的三维张量 $x$ 表示输入到 RNN 的输入 $x$。\n",
    "\n",
    "#### 为每个时间步取一个二维切片：$x^{\\langle t \\rangle}$\n",
    "* 在每个时间步，你将使用一个小批量的训练样本（而不仅仅是单个样本）。\n",
    "* 因此，对于每个时间步 $t$，你将使用一个形状为 $(n_x,m)$ 的二维切片。\n",
    "* 这个二维切片被称为 $x^{\\langle t \\rangle}$。代码中的变量名为 `xt`。"
   ]
  },
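  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The slicing described above can be sketched directly in NumPy. This is only a sanity check of the shapes; the sizes below are illustrative, not the assignment's data:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "n_x, m, T_x = 4, 3, 5          # illustrative dimensions\n",
    "x = np.zeros((n_x, m, T_x))    # 3D input tensor\n",
    "xt = x[:, :, 2]                # 2D slice for time step t = 2\n",
    "print(xt.shape)                # (4, 3)\n",
    "```"
   ]
  },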
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "VOzhhTj4ELFy"
   },
   "source": [
    "### 隐藏状态 $a$ 的定义\n",
    "\n",
    "* 从一个时间步传递到 RNN 的激活值 $a^{\\langle t \\rangle}$ 被称为“隐藏状态”。\n",
    "\n",
    "### 隐藏状态 $a$ 的维度\n",
    "\n",
    "* 与输入张量 $x$ 类似，单个训练样本的隐藏状态是一个长度为 $n_{a}$ 的向量。\n",
    "* 如果包含一个包含 $m$ 个训练样本的小批量，则小批量的形状为 $(n_{a},m)$。\n",
    "* 如果包含时间步长维度，则隐藏状态的形状为 $(n_{a}, m, T_x)$。\n",
    "* 你将循环遍历索引为 $t$ 的时间步长，并使用三维张量的二维切片。\n",
    "* 此二维切片称为 $a^{\\langle t \\rangle}$。\n",
    "* 在代码中，使用的变量名称是 `a_prev` 或 `a_next`，具体取决于所实现的函数。\n",
    "* 此二维切片的形状为 $(n_{a}, m)$。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "67vYjIRTELFz"
   },
   "source": [
    "### 预测维度 $\\hat{y}$\n",
    "* 与输入和隐藏状态类似，$\\hat{y}$ 是一个三维张量，形状为 $(n_{y}, m, T_{y})$\n",
    "* $n_{y}$：表示预测的向量中的单元数\n",
    "* $m$：小批量中的样本数\n",
    "* $T_{y}$：预测的时间步数\n",
    "* 对于单个时间步 $t$，二维切片 $\\hat{y}^{\\langle t \\rangle}$ 的形状为 $(n_{y}, m)$\n",
    "* 代码中的变量名称如下：\n",
    "- `y_pred`: $\\hat{y}$\n",
    "- `yt_pred`: $\\hat{y}^{\\langle t \\rangle}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "9ZrlQ4X8ELFz"
   },
   "source": [
    "以下是实现 RNN 的方法：\n",
    "\n",
    "### 步骤：\n",
    "1. 实现 RNN 一个时间步所需的计算。\n",
    "2. 实现一个循环，遍历 $T_x$ 个时间步，以便一次处理所有输入。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "6oXWAKeTELF0"
   },
   "source": [
    "<a name='1-1'></a>\n",
    "### 1.1 - RNN 单元\n",
    "\n",
    "你可以将循环神经网络视为单个单元的重复使用。首先，你将实现单个时间步长的计算。下图描述了 RNN 单元单个时间步长的操作：\n",
    "\n",
    "<img src=\"images/rnn_step_forward_figure2_v3a.png\" style=\"width:700px;height:300px;\">\n",
    "<caption><center><b>图 2</b>：基本 RNN 单元。输入 $x^{\\langle t \\rangle}$（当前输入）和 $a^{\\langle t - 1\\rangle}$（包含过去信息的先前隐藏状态），输出 $a^{\\langle t \\rangle}$，该输出将传递给下一个 RNN 单元，并用于预测 $\\hat{y}^{\\langle t \\rangle}$\n",
    "</center></caption>\n",
    "\n",
    "**`RNN 单元` 与 `RNN_cell_forward`**：\n",
    "* 注意，RNN 单元输出隐藏状态 $a^{\\langle t \\rangle}$。\n",
    "* `RNN cell` 在图中显示为实线内框\n",
    "* 你将要实现的函数 `rnn_cell_forward` 还会计算预测 $\\hat{y}^{\\langle t \\rangle}$\n",
    "* `RNN_cell_forward` 在图中显示为虚线外框"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "HhFLBKwbELF0"
   },
   "source": [
    "### 练习 1 - rnn_cell_forward\n",
    "\n",
    "实现图 2 中描述的 RNN 单元。\n",
    "\n",
    "**说明**:\n",
    "1. 使用 tanh 激活函数计算隐藏状态：$a^{\\langle t \\rangle} = \\tanh(W_{aa} a^{\\langle t-1 \\rangle} + W_{ax} x^{\\langle t \\rangle} + b_a)$\n",
    "2. 使用新的隐藏状态 $a^{\\langle t \\rangle}$ 计算预测值 $\\hat{y}^{\\langle t \\rangle} = softmax(W_{ya} a^{\\langle t \\rangle} + b_y)$。 （已提供函数 `softmax`）\n",
    "3. 将 $(a^{\\langle t \\rangle}, a^{\\langle t-1 \\rangle}, x^{\\langle t \\rangle}, 参数)$ 存储在 `cache` 中\n",
    "4. 返回 $a^{\\langle t \\rangle}$ 、 $\\hat{y}^{\\langle t \\rangle}$ 和 `cache`\n",
    "\n",
    "#### 额外提示\n",
    "* 激活函数：np.tanh\n",
    "* 有现有的 `softmax` 函数提供使用。它位于文件 'rnn_utils.py' 中，并且已导入。\n",
    "* 矩阵乘法：np.dot"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "fxI-F0HWELF1",
    "ExecuteTime": {
     "end_time": "2025-10-13T05:39:27.969473Z",
     "start_time": "2025-10-13T05:39:27.927651Z"
    }
   },
   "source": [
    "def rnn_cell_forward(xt, a_prev, parameters):\n",
    "    \"\"\"\n",
    "    实现 RNN 单元的单步前向传播，如图 (2) 所示。\n",
    "\n",
    "    参数：\n",
    "    xt - 时间步长 \"t\" 的输入数据，numpy 数组，形状为 (n_x, m)。\n",
    "    a_prev - 时间步长 \"t-1\" 的隐藏状态，numpy 数组，形状为 (n_a, m)。\n",
    "    parameters - 包含以下内容的 Python 字典：\n",
    "        Wax - 与输入相乘的权重矩阵，numpy 数组，形状为 (n_a, n_x)。\n",
    "        Waa - 与隐藏状态相乘的权重矩阵，numpy 数组，形状为 (n_a, n_a)。\n",
    "        Wya - 关联隐藏状态和输出的权重矩阵，numpy 数组，形状为 (n_y, n_a)。\n",
    "        ba - 偏差，numpy 数组，形状为 (n_a, 1)。\n",
    "        by - 关联隐藏状态和输出的偏差，numpy 数组，形状为(n_y, 1)\n",
    "\n",
    "    返回：\n",
    "    a_next - 下一个隐藏状态，形状为 (n_a, m)\n",
    "    yt_pred - 时间步 \"t\" 的预测，形状为 (n_y, m) 的 NumPy 数组\n",
    "    cache - 反向传播所需值的元组，包含 (a_next, a_prev, xt, parameters)\n",
    "    \"\"\"\n",
    "\n",
    "    # todo: 1. 从“parameters”中提取权重+偏置\n",
    "    Wax = parameters[\"Wax\"]\n",
    "    Waa = parameters[\"Waa\"]\n",
    "    Wya = parameters[\"Wya\"]\n",
    "    ba = parameters[\"ba\"]\n",
    "    by = parameters[\"by\"]\n",
    "    # todo: 2. 使用上述公式计算下一个激活状态\n",
    "    a_next = np.tanh(np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba)\n",
    "\n",
    "    # todo: 3. 使用公式计算当前单元的输出\n",
    "    yt_pred = softmax(np.dot(Wya, a_next) + by)\n",
    "\n",
    "    # 将反向传播所需的值存储在缓存中\n",
    "    cache = (a_next, a_prev, xt, parameters)\n",
    "\n",
    "    return a_next, yt_pred, cache"
   ],
   "outputs": [],
   "execution_count": 1
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "V03ZGazVELF4",
    "ExecuteTime": {
     "end_time": "2025-10-13T05:40:24.956470Z",
     "start_time": "2025-10-13T05:40:24.884653Z"
    }
   },
   "source": [
    "np.random.seed(1)\n",
    "xt_tmp = np.random.randn(3, 10)\n",
    "a_prev_tmp = np.random.randn(5, 10)\n",
    "parameters_tmp = {}\n",
    "parameters_tmp['Waa'] = np.random.randn(5, 5)\n",
    "parameters_tmp['Wax'] = np.random.randn(5, 3)\n",
    "parameters_tmp['Wya'] = np.random.randn(2, 5)\n",
    "parameters_tmp['ba'] = np.random.randn(5, 1)\n",
    "parameters_tmp['by'] = np.random.randn(2, 1)\n",
    "\n",
    "a_next_tmp, yt_pred_tmp, cache_tmp = rnn_cell_forward(xt_tmp, a_prev_tmp, parameters_tmp)\n",
    "print(\"a_next[4] = \\n\", a_next_tmp[4])\n",
    "print(\"a_next.shape = \\n\", a_next_tmp.shape)\n",
    "print(\"yt_pred[1] =\\n\", yt_pred_tmp[1])\n",
    "print(\"yt_pred.shape = \\n\", yt_pred_tmp.shape)\n",
    "\n",
    "# UNIT TESTS\n",
    "rnn_cell_forward_tests(rnn_cell_forward)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "a_next[4] = \n",
      " [ 0.59584544  0.18141802  0.61311866  0.99808218  0.85016201  0.99980978\n",
      " -0.18887155  0.99815551  0.6531151   0.82872037]\n",
      "a_next.shape = \n",
      " (5, 10)\n",
      "yt_pred[1] =\n",
      " [0.9888161  0.01682021 0.21140899 0.36817467 0.98988387 0.88945212\n",
      " 0.36920224 0.9966312  0.9982559  0.17746526]\n",
      "yt_pred.shape = \n",
      " (2, 10)\n",
      "\u001B[92mAll tests passed\n"
     ]
    }
   ],
   "execution_count": 4
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "c_6KJp0lELF7"
   },
   "source": [
    "**Expected Output**: \n",
    "```Python\n",
    "a_next[4] = \n",
    " [ 0.59584544  0.18141802  0.61311866  0.99808218  0.85016201  0.99980978\n",
    " -0.18887155  0.99815551  0.6531151   0.82872037]\n",
    "a_next.shape = \n",
    " (5, 10)\n",
    "yt_pred[1] =\n",
    " [ 0.9888161   0.01682021  0.21140899  0.36817467  0.98988387  0.88945212\n",
    "  0.36920224  0.9966312   0.9982559   0.17746526]\n",
    "yt_pred.shape = \n",
    " (2, 10)\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "rjz378-tELF7"
   },
   "source": [
    "### 1.2 - RNN 前向传递\n",
    "\n",
    "- 循环神经网络 (RNN) 是刚刚构建的 RNN 单元的重复。\n",
    "- 如果你的输入数据序列长度为 10 个时间步，那么你将重复使用 RNN 单元 10 次。\n",
    "- 每个单元在每个时间步接受两个输入：\n",
    "- $a^{\\langle t-1 \\rangle}$：来自前一个单元的隐藏状态\n",
    "- $x^{\\langle t \\rangle}$：当前时间步的输入数据\n",
    "- 它在每个时间步有两个输出：\n",
    "- 隐藏状态 ($a^{\\langle t \\rangle}$)\n",
    "- 预测 ($y^{\\langle t \\rangle}$)\n",
    "- 权重和偏差 $(W_{aa}, W_{ax}, b_{a}, W_{ay}, b_{y})$ 在每个时间步重复使用。\n",
    "- 它们在调用 `rnn_cell_forward` 函数时保存在 `parameters` 字典中。\n",
    "\n",
    "<img src=\"images/rnn_forward_sequence_figure3_v3a.png\" style=\"width:800px;height:180px;\">\n",
    "<caption><center><b>图 3</b>：基本 RNN。输入序列 $x = (x^{\\langle 1 \\rangle}, x^{\\langle 2 \\rangle}, ..., x^{\\langle T_x \\rangle})$ 沿 $T_x$ 个时间步进行。网络输出 $y = (y^{\\langle 1 \\rangle}, y^{\\langle 2 \\rangle}, ..., y^{\\langle T_x \\rangle})$。</center></caption>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "r104TTL-ELF8"
   },
   "source": [
    "### Exercise 2 - rnn_forward（RNN 前向传播）\n",
    "\n",
    "**指示说明：**\n",
    "\n",
    "- 创建一个三维的全零数组 $a$，形状为 $(n_a, m, T_x)$，用于存储 RNN 在所有时间步计算得到的隐藏状态（hidden states）。\n",
    "- 创建一个三维的全零数组 $\\hat{y}$，形状为 $(n_y, m, T_x)$，用于存储每个时间步的预测值。\n",
    "  - 注意：此处 $T_y = T_x$（即预测序列与输入序列具有相同的时间步数）。\n",
    "- 初始化二维隐藏状态 `a_next`，将其设为初始隐藏状态 $a_0$。\n",
    "- 对于每一个时间步 $t$：\n",
    "\n",
    "  - 取出单个时间步的输入片段 $x^{\\langle t \\rangle}$，即从 $x$ 中截取出对应时间步 $t$ 的二维切片：\n",
    "    - $x^{\\langle t \\rangle}$ 的形状为 $(n_x, m)$\n",
    "    - 整个输入 $x$ 的形状为 $(n_x, m, T_x)$\n",
    "\n",
    "  - 调用 `rnn_cell_forward`，更新当前时间步的二维隐藏状态 $a^{\\langle t \\rangle}$（变量名 `a_next`）、预测值 $\\hat{y}^{\\langle t \\rangle}$（变量名 `yt_pred`），并获得对应的缓存 `cache`。\n",
    "    - $a^{\\langle t \\rangle}$ 的形状为 $(n_a, m)$\n",
    "\n",
    "  - 将当前的二维隐藏状态存入三维张量 $a$ 的第 $t$ 个位置。\n",
    "    - $a$ 的形状为 $(n_a, m, T_x)$\n",
    "\n",
    "  - 将当前的二维预测值 $\\hat{y}^{\\langle t \\rangle}$（变量名 `yt_pred`）存入三维张量 $\\hat{y}_{pred}$ 的第 $t$ 个位置。\n",
    "    - $\\hat{y}^{\\langle t \\rangle}$ 的形状为 $(n_y, m)$\n",
    "    - $\\hat{y}$ 的形状为 $(n_y, m, T_x)$\n",
    "\n",
    "  - 将本时间步的缓存 `cache` 添加进缓存列表 `caches`。\n",
    "\n",
    "- 最后返回三维张量 $a$、$\\hat{y}$ 以及缓存列表 `caches`。\n"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "VmeprGJpELF9",
    "ExecuteTime": {
     "end_time": "2025-10-13T06:50:10.806261Z",
     "start_time": "2025-10-13T06:50:10.767812Z"
    }
   },
   "source": [
    "def rnn_forward(x, a0, parameters):\n",
    "    \"\"\"\n",
    "    实现循环神经网络的前向传播。\n",
    "\n",
    "    参数：\n",
    "    x - 每个时间步的输入数据，形状为 (n_x, m, T_x)。\n",
    "    a0 - 初始隐藏状态，形状为 (n_a, m)。\n",
    "    parameters - 包含以下内容的 Python 字典：\n",
    "        Waa - 权重矩阵，用于乘以隐藏状态，numpy 数组，形状为 (n_a, n_a)。\n",
    "        Wax - 权重矩阵，用于乘以输入，numpy 数组，形状为 (n_a, n_x)。\n",
    "        Wya - 权重矩阵，用于关联隐藏状态和输出，numpy 数组，形状为 (n_y, n_a)。\n",
    "        ba - 偏差，numpy 数组，形状为 (n_a, 1)。\n",
    "        by - 偏差，用于关联隐藏状态和输出，numpy 数组，形状为 (n_y, 1)。\n",
    "\n",
    "    返回：\n",
    "    a - 每个时间步的隐藏状态时间步长，numpy 数组，形状为 (n_a, m, T_x)\n",
    "    y_pred - 每个时间步长的预测值，numpy 数组，形状为 (n_y, m, T_x)\n",
    "    caches - 反向传播所需值的元组，包含 (caches 列表, x)\n",
    "    \"\"\"\n",
    "\n",
    "    # 初始化“caches”，其中包含所有 caches 的列表\n",
    "    caches = []\n",
    "\n",
    "    # 从 x 和 parameters[\"Wya\"] 的形状中获取维度\n",
    "    n_x, m, T_x = x.shape\n",
    "    n_y, n_a = parameters[\"Wya\"].shape\n",
    "\n",
    "    ### 从此处开始代码###\n",
    "\n",
    "    # 用零初始化“a”和“y_pred”\n",
    "    a = np.zeros((n_a, m, T_x))  # 每个时间步的隐藏状态都将存在这里\n",
    "    y_pred = np.zeros((n_y, m, T_x))\n",
    "\n",
    "    #初始化 a_next\n",
    "    a_next = a0\n",
    "\n",
    "    # TODO: 循环遍历所有时间步\n",
    "    for t in range(T_x):\n",
    "        # 1. 获取当前时间步的输入数据xt（形状：(n_x, m)）\n",
    "        xt = x[:, :, t]\n",
    "\n",
    "        # 2. 调用单步RNN函数计算当前时间步的隐藏状态和预测\n",
    "        a_next, yt_pred, cache = rnn_cell_forward(xt, a_next, parameters)\n",
    "\n",
    "        # 3. 将当前时间步的隐藏状态存入a数组\n",
    "        a[:, :, t] = a_next\n",
    "\n",
    "        # 4. 将当前时间步的预测结果存入y_pred数组\n",
    "        y_pred[:, :, t] = yt_pred\n",
    "\n",
    "        # 5. 将当前时间步的缓存加入caches列表\n",
    "        caches.append(cache)\n",
    "    # 将反向传播所需的值存储在缓存中\n",
    "    caches = (caches, x)\n",
    "    return a, y_pred, caches"
   ],
   "outputs": [],
   "execution_count": 5
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "jEPrd77rELF_",
    "ExecuteTime": {
     "end_time": "2025-10-13T06:50:13.596644Z",
     "start_time": "2025-10-13T06:50:13.564807Z"
    }
   },
   "source": [
    "np.random.seed(1)\n",
    "x_tmp = np.random.randn(3, 10, 4)\n",
    "a0_tmp = np.random.randn(5, 10)\n",
    "parameters_tmp = {}\n",
    "parameters_tmp['Waa'] = np.random.randn(5, 5)\n",
    "parameters_tmp['Wax'] = np.random.randn(5, 3)\n",
    "parameters_tmp['Wya'] = np.random.randn(2, 5)\n",
    "parameters_tmp['ba'] = np.random.randn(5, 1)\n",
    "parameters_tmp['by'] = np.random.randn(2, 1)\n",
    "\n",
    "a_tmp, y_pred_tmp, caches_tmp = rnn_forward(x_tmp, a0_tmp, parameters_tmp)\n",
    "print(\"a[4][1] = \\n\", a_tmp[4][1])\n",
    "print(\"a.shape = \\n\", a_tmp.shape)\n",
    "print(\"y_pred[1][3] =\\n\", y_pred_tmp[1][3])\n",
    "print(\"y_pred.shape = \\n\", y_pred_tmp.shape)\n",
    "print(\"caches[1][1][3] =\\n\", caches_tmp[1][1][3])\n",
    "print(\"len(caches) = \\n\", len(caches_tmp))\n",
    "\n",
    "#UNIT TEST    \n",
    "rnn_forward_test(rnn_forward)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "a[4][1] = \n",
      " [-0.99999375  0.77911235 -0.99861469 -0.99833267]\n",
      "a.shape = \n",
      " (5, 10, 4)\n",
      "y_pred[1][3] =\n",
      " [0.79560373 0.86224861 0.11118257 0.81515947]\n",
      "y_pred.shape = \n",
      " (2, 10, 4)\n",
      "caches[1][1][3] =\n",
      " [-1.1425182  -0.34934272 -0.20889423  0.58662319]\n",
      "len(caches) = \n",
      " 2\n",
      "\u001B[92mAll tests passed\n"
     ]
    }
   ],
   "execution_count": 6
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "R135qjynELGC"
   },
   "source": [
    "**Expected Output**:\n",
    "\n",
    "```Python\n",
    "a[4][1] = \n",
    " [-0.99999375  0.77911235 -0.99861469 -0.99833267]\n",
    "a.shape = \n",
    " (5, 10, 4)\n",
    "y_pred[1][3] =\n",
    " [ 0.79560373  0.86224861  0.11118257  0.81515947]\n",
    "y_pred.shape = \n",
    " (2, 10, 4)\n",
    "caches[1][1][3] =\n",
    " [-1.1425182  -0.34934272 -0.20889423  0.58662319]\n",
    "len(caches) = \n",
    " 2\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "x2QbsWFzELGD"
   },
   "source": [
    "## 2 - Long Short-Term Memory (LSTM) 网络\n",
    "\n",
    "下图展示了 LSTM 单元的操作：\n",
    "\n",
    "<img src=\"images/LSTM_figure4_v3a.png\" style=\"width:500;height:400px;\">\n",
    "<caption><center><b>图 4</b>：LSTM 单元。它会在每个时间步长跟踪并更新“单元状态”，即记忆变量 $c^{\\langle t \\rangle}$，该变量可能与 $a^{\\langle t \\rangle}$ 不同。\n",
    "注意，$softmax^{}$ 包含一个全连接层和一个 softmax 层。</center></caption>\n",
    "\n",
    "与上面的 RNN 示例类似，你将首先实现单个时间步长的 LSTM 单元。然后，你将在“for 循环”中迭代调用它，使其以 $T_x$ 个时间步长处理输入。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "fyUcxGCJELGD"
   },
   "source": [
    "### 门和状态概述\n",
    "\n",
    "#### 遗忘门 $\\mathbf{\\Gamma}_{f}$\n",
    "\n",
    "* 假设你正在阅读一段文本中的单词，并计划使用 LSTM 来跟踪语法结构，例如主语是单数（“puppy”）还是复数（“puppies”）。\n",
    "* 如果主语的状态发生变化（从单数词变为复数词），先前状态的记忆就会变得过时，因此你会“忘记”该过时状态。\n",
    "* “遗忘门”是一个包含 0 到 1 之间值的张量。\n",
    "* 如果遗忘门中某个单元的值接近于 0，LSTM 就会“忘记”先前单元状态对应单元中存储的状态。\n",
    "* 如果遗忘门中某个单元的值接近于 1，LSTM 会基本记住存储状态中对应的值。\n",
    "\n",
    "##### 方程\n",
    "\n",
    "$$\\mathbf{\\Gamma}_f^{\\langle t \\rangle} = \\sigma(\\mathbf{W}_f[\\mathbf{a}^{\\langle t-1 \\rangle}, \\mathbf{x}^{\\langle t \\rangle}] + \\mathbf{b}_f)\\tag{1} $$\n",
    "\n",
    "##### 等式的解释：\n",
    "\n",
    "* $\\mathbf{W_{f}}$ 包含控制遗忘门行为的权重。\n",
    "* 前一个时间步的隐藏状态 $[a^{\\langle t-1 \\rangle}$ 和当前时间步的输入 $x^{\\langle t \\rangle}]$ 连接在一起并乘以 $\\mathbf{W_{f}}$。\n",
    "* 使用 S 型函数使每个门张量 $\\mathbf{\\Gamma}_f^{\\langle t \\rangle}$ 的值介于 0 到 1 之间。\n",
    "* 遗忘门 $\\mathbf{\\Gamma}_f^{\\langle t \\rangle}$ 的维度与前一个cell状态 $c^{\\langle t-1 \\rangle}$ 相同。\n",
    "* 这意味着两者可以逐个元素相乘。\n",
    "* 将张量 $\\mathbf{\\Gamma}_f^{\\langle t \\rangle} * \\mathbf{c}^{\\langle t-1 \\rangle}$ 相乘就像在前一个cell状态上应用了一个掩码。\n",
    "* 如果 $\\mathbf{\\Gamma}_f^{\\langle t \\rangle}$ 中的单个值为 0 或接近于 0，则乘积接近于 0。\n",
    "* 这使得 $\\mathbf{c}^{\\langle t-1 \\rangle}$ 中相应单元存储的信息不会被记住用于下一个时间步。\n",
    "* 类似地，如果一个值接近于 1，则乘积接近于前一个单元状态的原始值。\n",
    "* LSTM 会保留 $\\mathbf{c}^{\\langle t-1 \\rangle}$ 相应单元的信息，以便在下一个时间步使用。\n",
    "\n",
    "##### 代码中的变量名称\n",
    "代码中的变量名称与方程式类似，但略有不同。\n",
    "* `Wf`: 遗忘门权重 $\\mathbf{W}_{f}$\n",
    "* `bf`: 遗忘门偏置 $\\mathbf{b}_{f}$\n",
    "* `ft`: 遗忘门 $\\Gamma_f^{\\langle t \\rangle}$"
   ]
  },
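  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The masking effect described above can be seen with a tiny made-up example (the gate and cell-state numbers below are purely illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "gamma_f = np.array([[0.0], [0.5], [1.0]])   # made-up forget gate values\n",
    "c_prev = np.array([[2.0], [2.0], [2.0]])    # made-up previous cell state\n",
    "print(gamma_f * c_prev)                     # forgets, halves, and keeps the three units\n",
    "```"
   ]
  },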
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "HHeiRiqKELGE"
   },
   "source": [
    "#### 候选值 $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$\n",
    "* 候选值是一个张量，包含当前时间步的信息，这些信息**可能**存储在当前单元状态 $\\mathbf{c}^{\\langle t \\rangle}$ 中。\n",
    "* 候选值中哪些部分会被传递取决于更新（输入）门。\n",
    "* 候选值是一个张量，其值范围从 -1 到 1。\n",
    "* 波浪号“~”用于区分候选值 $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$ 和单元状态 $\\mathbf{c}^{\\langle t \\rangle}$。\n",
    "\n",
    "##### 方程\n",
    "$$\\mathbf{\\tilde{c}}^{\\langle t \\rangle} = \\tanh\\left( \\mathbf{W}_{c} [\\mathbf{a}^{\\langle t - 1 \\rangle}, \\mathbf{x}^{\\langle t \\rangle}] + \\mathbf{b}_{c} \\right) \\tag{3}$$\n",
    "\n",
    "##### 方程式解释\n",
    "* *tanh* 函数的值为 -1 到 1 之间的值。\n",
    "\n",
    "##### 代码中的变量名\n",
    "* `cct`：候选值 $\\mathbf{\\tilde{c}}^{\\langle t \\rangle}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "xewEj_FuELGF"
   },
   "source": [
    "#### 输入门 $\\mathbf{\\Gamma}_{i}$\n",
    "\n",
    "* 使用输入门来决定将候选张量 $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$ 的哪些部分添加到单元状态 $c^{\\langle t \\rangle}$。\n",
    "* 输入门决定将“候选”张量 $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$ 的哪些部分传递到单元状态 $\\mathbf{c}^{\\langle t \\rangle}$。\n",
    "* 输入门是一个包含 0 到 1 之间值的张量。\n",
    "* 当输入门中的某个单元接近于 1 时，它允许候选值 $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$ 的值传递到隐藏状态 $\\mathbf{c}^{\\langle t \\rangle}$。\n",
    "* 当输入门中的某个单元接近于 0 时，它会阻止候选值中相应的值传递到隐藏状态。\n",
    "* 注意，为了遵循文献中的惯例，我们使用下标“i”而不是“u”。\n",
    "\n",
    "##### 公式\n",
    "\n",
    "$$\\mathbf{\\Gamma}_i^{\\langle t \\rangle} = \\sigma(\\mathbf{W}_i[a^{\\langle t-1 \\rangle}, \\mathbf{x}^{\\langle t \\rangle}] + \\mathbf{b}_i)\\tag{2} $$\n",
    "\n",
    "##### 公式解释\n",
    "\n",
    "* 与遗忘门类似，这里 $\\mathbf{\\Gamma}_i^{\\langle t \\rangle}$，S 型函数产生的值介于 0 到 1 之间。\n",
    "* 输入门与候选元素逐个相乘，该乘积（$\\mathbf{\\Gamma}_{i}^{\\langle t \\rangle} * \\tilde{c}^{\\langle t \\rangle}$）用于确定cell状态 $\\mathbf{c}^{\\langle t \\rangle}$。\n",
    "\n",
    "##### 代码中的变量名称（请注意，它们与方程式不同）\n",
    "在代码中，你将使用学术文献中找到的变量名称。这些变量不使用“u”表示“更新”。\n",
    "* `Wi` 是输入门权重 $\\mathbf{W}_i$（不是“Wu”）\n",
    "* `bi` 是输入门偏置 $\\mathbf{b}_i$（不是“bu”）\n",
    "* `it` 是输入门 $\\mathbf{\\Gamma}_i^{\\langle t \\rangle}$（不是“ut”）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "yvxVv83-ELGF"
   },
   "source": [
    "#### 单元状态 $\\mathbf{c}^{\\langle t \\rangle}$\n",
    "\n",
    "* 单元状态是传递到未来时间步的“记忆”。\n",
    "* 新的单元状态 $\\mathbf{c}^{\\langle t \\rangle}$ 是先前单元状态和候选值的组合。\n",
    "\n",
    "##### 方程\n",
    "\n",
    "$$ \\mathbf{c}^{\\langle t \\rangle} = \\mathbf{\\Gamma}_f^{\\langle t \\rangle}* \\mathbf{c}^{\\langle t-1 \\rangle} + \\mathbf{\\Gamma}_{i}^{\\langle t \\rangle} *\\mathbf{\\tilde{c}}^{\\langle t \\rangle} \\tag{4} $$\n",
    "\n",
    "##### 等式的解释\n",
    "* 之前的cell状态 $\\mathbf{c}^{\\langle t-1 \\rangle}$ 由遗忘门调整（加权） $\\mathbf{\\Gamma}_{f}^{\\langle t \\rangle}$\n",
    "* 和候选值 $\\tilde{\\mathbf{c}}^{\\langle t \\rangle}$，由更新门 $\\mathbf{\\Gamma}_{i}^{\\langle t \\rangle}$ 调整（加权）。\n",
    "\n",
    "##### 代码中的变量名称和形状\n",
    "* `c`：cell状态，包含所有时间步，$\\mathbf{c}$ 形状 $(n_{a}, m, T_x)$\n",
    "* `c_next`：新的（下一个）cell状态，$\\mathbf{c}^{\\langle t \\rangle}$ 形状 $(n_{a}, m)$\n",
    "* `c_prev`：前一个cell状态，$\\mathbf{c}^{\\langle t-1 \\rangle}$，形状 $(n_{a}, m)$"
   ]
  },
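  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Equation (4) can be checked element-wise with toy values (the numbers below are illustrative, not the assignment's data):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "ft = np.array([0.9, 0.0])      # made-up forget gate\n",
    "c_prev = np.array([1.0, 5.0])  # made-up previous cell state\n",
    "it = np.array([0.1, 1.0])      # made-up input gate\n",
    "cct = np.array([-1.0, 0.5])    # made-up candidate value\n",
    "c_next = ft * c_prev + it * cct\n",
    "print(c_next)                  # first unit: 0.9*1.0 + 0.1*(-1.0); second unit: 0.0*5.0 + 1.0*0.5\n",
    "```\n",
    "\n",
    "The first unit mostly keeps the old memory and subtracts a little of the candidate; the second unit fully forgets the old memory and takes the candidate instead."
   ]
  },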
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "XHVgaJxiELGG"
   },
   "source": [
    "#### 输出门 $\\mathbf{\\Gamma}_{o}$\n",
    "\n",
    "* 输出门决定将什么作为时间步长的预测（输出）发送。\n",
    "* 输出门与其他门类似，其值的范围为 0 到 1。\n",
    "\n",
    "##### 公式\n",
    "\n",
    "$$ \\mathbf{\\Gamma}_o^{\\langle t \\rangle}= \\sigma(\\mathbf{W}_o[\\mathbf{a}^{\\langle t-1 \\rangle}, \\mathbf{x}^{\\langle t \\rangle}] + \\mathbf{b}_{o})\\tag{5}$$\n",
    "\n",
    "##### 公式解释\n",
    "* 输出门由前一个隐藏状态 $\\mathbf{a}^{\\langle t-1 \\rangle}$ 和当前输入 $\\mathbf{x}^{\\langle t \\rangle}$ 决定。\n",
    "* S 型函数使门的值范围为 0 到 1。\n",
    "\n",
    "##### 变量代码中的名称\n",
    "* `Wo`：输出门权重，$\\mathbf{W_o}$\n",
    "* `bo`：输出门偏置，$\\mathbf{b_o}$\n",
    "* `ot`：输出门，$\\mathbf{\\Gamma}_{o}^{\\langle t \\rangle}$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "fWkiZ0M-ELGG"
   },
   "source": [
    "#### 隐藏状态 $\\mathbf{a}^{\\langle t \\rangle}$\n",
    "\n",
    "* 隐藏状态会传递到 LSTM 单元的下一个时间步。\n",
    "* 它用于确定下一个时间步的三个门控（$\\mathbf{\\Gamma}_{f}, \\mathbf{\\Gamma}_{u}, \\mathbf{\\Gamma}_{o}$）。\n",
    "* 隐藏状态也用于预测 $y^{\\langle t \\rangle}$。\n",
    "\n",
    "##### 公式\n",
    "\n",
    "$$ \\mathbf{a}^{\\langle t \\rangle} = \\mathbf{\\Gamma}_o^{\\langle t \\rangle} * \\tanh(\\mathbf{c}^{\\langle t \\rangle})\\tag{6} $$\n",
    "\n",
    "##### 公式解释\n",
    "* 隐藏状态 $\\mathbf{a}^{\\langle t \\rangle}$ 由单元状态 $\\mathbf{c}^{\\langle t \\rangle}$ 和输出门 $\\mathbf{\\Gamma}_{o}$ 共同决定。\n",
    "* 单元状态通过 `tanh` 函数传递，将值缩放到 -1 到 1 之间。\n",
    "* 输出门的作用类似于“掩码”，它要么保留 $\\tanh(\\mathbf{c}^{\\langle t \\rangle})$ 的值，要么阻止这些值包含在隐藏状态 $\\mathbf{a}^{\\langle t \\rangle}$ 中。\n",
    "\n",
    "##### 代码中的变量名称和形状\n",
    "* `a`：隐藏状态，包括时间步长。$\\mathbf{a}$ 的形状为 $(n_{a}, m, T_{x})$。\n",
    "* `a_prev`：上一时间步长的隐藏状态。$\\mathbf{a}^{\\langle t-1 \\rangle}$ 的形状为 $(n_{a}, m)$。\n",
    "* `a_next`：下一时间步长的隐藏状态。 $\\mathbf{a}^{\\langle t \\rangle}$ 的形状为 $(n_{a}, m)$"
   ]
  },
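  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Equation (6) in miniature, again with made-up numbers: an output-gate unit near 1 passes the tanh of the cell state through, and a unit near 0 blocks it.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "ot = np.array([1.0, 0.0])      # made-up output gate\n",
    "c_next = np.array([0.5, 2.0])  # made-up cell state\n",
    "a_next = ot * np.tanh(c_next)\n",
    "print(a_next)                  # first unit is tanh(0.5), second unit is masked to 0\n",
    "```"
   ]
  },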
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "7OYaNPNPELGH"
   },
   "source": [
    "#### 预测 $\\mathbf{y}^{\\langle t \\rangle}_{pred}$\n",
    "* 本用例中的预测是分类，因此你将使用 softmax。\n",
    "\n",
    "公式如下：\n",
    "$$\\mathbf{y}^{\\langle t \\rangle}_{pred} = \\textrm{softmax}(\\mathbf{W}_{y} \\mathbf{a}^{\\langle t \\rangle} + \\mathbf{b}_{y})$$\n",
    "\n",
    "##### 代码中的变量名称和形状\n",
    "* `y_pred`：预测，包含所有时间步长。$\\mathbf{y}_{pred}$ 的形状为 $(n_{y}, m, T_{x})$。注意，本例中为 $(T_{y} = T_{x})$。\n",
    "* `yt_pred`：当前时间步长 $t$ 的预测。$\\mathbf{y}^{\\langle t \\rangle}_{pred}$ 的形状为 $(n_{y}, m)$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "G49sqmnoELGI"
   },
   "source": [
    "### 2.1 - LSTM 单元\n",
    "\n",
    "### 练习 3 - lstm_cell_forward\n",
    "\n",
    "实现图中所示的 LSTM 单元。\n",
    "\n",
    "**说明**：\n",
    "1. 将隐藏状态 $a^{\\langle t-1 \\rangle}$ 和输入 $x^{\\langle t \\rangle}$ 连接成一个矩阵：\n",
    "\n",
    "$$concat = \\begin{bmatrix} a^{\\langle t-1 \\rangle} \\\\ x^{\\langle t \\rangle} \\end{bmatrix}$$\n",
    "\n",
    "2. 计算门控、隐藏状态和单元状态的所有公式（1 到 6）。\n",
    "3. 计算预测值 $y^{\\langle t \\rangle}$。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "RZ1Uq6pmELGI"
   },
   "source": [
    "#### 其他提示\n",
    "* 函数 `sigmoid()` 和 `softmax` 是从 `rnn_utils.py` 导入的。\n",
    "* 请注意，变量名 `Wi` 和 `bi` 分别表示**更新**门的权重和偏差。此函数中没有名为 \"Wu\" 或 \"bu\" 的变量。"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "JU3tUxvmELGJ",
    "ExecuteTime": {
     "end_time": "2025-10-13T06:52:41.792172Z",
     "start_time": "2025-10-13T06:52:41.749627Z"
    }
   },
   "source": [
    "def lstm_cell_forward(xt, a_prev, c_prev, parameters):\n",
    "    \"\"\"\n",
    "    实现如图 所示的 LSTM 单元的单步前向传播。\n",
    "\n",
    "    参数：\n",
    "    xt - 时间步 \"t\" 的输入数据，numpy 数组，形状为 (n_x, m)。\n",
    "    a_prev - 时间步 \"t-1\" 的隐藏状态，numpy 数组，形状为 (n_a, m)。\n",
    "    c_prev - 时间步 \"t-1\" 的记忆状态，numpy 数组，形状为 (n_a, m)。\n",
    "    parameters - 包含以下内容的 Python 字典：\n",
    "        Wf - 遗忘门的权重矩阵，numpy 数组，形状为 (n_a, n_a + n_x)。\n",
    "        bf - 遗忘门的偏差，numpy 数组，形状为 (n_a, 1)。\n",
    "        Wi - 更新门的权重矩阵，numpy 数组，形状为 (n_a, n_a + n_x)。\n",
    "        bi - 更新门的偏差。 numpy 数组，形状为 (n_a, 1)\n",
    "        Wc - 第一个 tanh 函数的权重矩阵，numpy 数组，形状为 (n_a, n_a + n_x)\n",
    "        bc - 第一个 tanh 函数的偏差，numpy 数组，形状为 (n_a, 1)\n",
    "        Wo - 输出门的权重矩阵，numpy 数组，形状为 (n_a, n_a + n_x)\n",
    "        bo - 输出门的偏差，numpy 数组，形状为 (n_a, 1)\n",
    "        Wy - 隐藏状态与输出之间的权重矩阵，numpy 数组，形状为 (n_y, n_a)\n",
    "        by - 隐藏状态与输出之间的偏差，numpy 数组，形状为 (n_y, 1)\n",
    "\n",
    "    返回：\n",
    "    a_next - 下一个隐藏状态，形状为 (n_a, m)\n",
    "    c_next - 下一个记忆状态，形状为 (n_a, m)\n",
    "    yt_pred - 时间步的预测\"t\", numpy 数组，形状为 (n_y, m)\n",
    "    cache - 反向传播所需值的元组，包含 (a_next, c_next, a_prev, c_prev, xt, parameters)\n",
    "\n",
    "    注意：ft/it/ot 分别代表遗忘/更新/输出门，cct 代表候选值（波浪号 c），\n",
    "    c 代表单元状态（记忆）\n",
    "    \"\"\"\n",
    "\n",
    "    # 从 \"parameters\" 中检索参数\n",
    "    Wf = parameters[\"Wf\"] # 遗忘门权重\n",
    "    bf = parameters[\"bf\"]\n",
    "    Wi = parameters[\"Wi\"] # 更新门权重（注意变量名）\n",
    "    bi = parameters[\"bi\"] # （注意变量名）\n",
    "    Wc = parameters[\"Wc\"] # 候选值权重\n",
    "    bc = parameters[\"bc\"]\n",
    "    Wo = parameters[\"Wo\"] # 输出门权重\n",
    "    bo = parameters[\"bo\"]\n",
    "    Wy = parameters[\"Wy\"] # 预测权重\n",
    "    by = parameters[\"by\"]\n",
    "\n",
    "    # 从 xt 和 Wy 的形状中获取维度\n",
    "    n_x, m = xt.shape\n",
    "    n_y, n_a = Wy.shape\n",
    "\n",
    "    ### 从此处开始代码###\n",
    "    # todo：1. 连接 a_prev 和 xt\n",
    "    concat = np.concatenate((a_prev, xt), axis=0)  # axis=0表示按行拼接（特征维度）\n",
    "\n",
    "    # todo：2. 使用公式计算 ft、it、cct、c_next、ot、a_next 的值\n",
    "    ft = sigmoid(np.dot(Wf, concat) + bf)\n",
    "\n",
    "    # 3. 计算更新门（it）：决定更新多少新信息\n",
    "    # 公式：it = sigmoid(Wi · concat + bi)\n",
    "    it = sigmoid(np.dot(Wi, concat) + bi)\n",
    "\n",
    "    # 4. 计算候选记忆状态（cct）：新的候选记忆信息\n",
    "    # 公式：cct = tanh(Wc · concat + bc)\n",
    "    cct = np.tanh(np.dot(Wc, concat) + bc)\n",
    "\n",
    "    # 5. 计算新记忆状态（c_next）：结合遗忘门和更新门的结果\n",
    "    # 公式：c_next = ft * c_prev + it * cct （*表示元素-wise乘法）\n",
    "    c_next = ft * c_prev + it * cct\n",
    "\n",
    "    # 6. 计算输出门（ot）：决定从新记忆状态中输出多少信息到隐藏状态\n",
    "    # 公式：ot = sigmoid(Wo · concat + bo)\n",
    "    ot = sigmoid(np.dot(Wo, concat) + bo)\n",
    "\n",
    "    # 7. 计算新隐藏状态（a_next）：输出门控制后的记忆状态\n",
    "    # 公式：a_next = ot * tanh(c_next)\n",
    "    a_next = ot * np.tanh(c_next)\n",
    "\n",
    "\n",
    "    # todo: 3. 计算 LSTM 单元的预测值（约 1 行）\n",
    "    yt_pred = softmax(np.dot(Wy, a_next) + by)\n",
    "\n",
    "    ### 代码至此结束 ###\n",
    "\n",
    "    # 将反向传播所需的值存储在缓存中\n",
    "    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)\n",
    "\n",
    "    return a_next, c_next, yt_pred, cache"
   ],
   "outputs": [],
   "execution_count": 7
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "h9ssBEoxELGN",
    "scrolled": true,
    "ExecuteTime": {
     "end_time": "2025-10-13T06:53:00.696257Z",
     "start_time": "2025-10-13T06:53:00.646048Z"
    }
   },
   "source": [
    "np.random.seed(1)\n",
    "xt_tmp = np.random.randn(3, 10)\n",
    "a_prev_tmp = np.random.randn(5, 10)\n",
    "c_prev_tmp = np.random.randn(5, 10)\n",
    "parameters_tmp = {}\n",
    "parameters_tmp['Wf'] = np.random.randn(5, 5 + 3)\n",
    "parameters_tmp['bf'] = np.random.randn(5, 1)\n",
    "parameters_tmp['Wi'] = np.random.randn(5, 5 + 3)\n",
    "parameters_tmp['bi'] = np.random.randn(5, 1)\n",
    "parameters_tmp['Wo'] = np.random.randn(5, 5 + 3)\n",
    "parameters_tmp['bo'] = np.random.randn(5, 1)\n",
    "parameters_tmp['Wc'] = np.random.randn(5, 5 + 3)\n",
    "parameters_tmp['bc'] = np.random.randn(5, 1)\n",
    "parameters_tmp['Wy'] = np.random.randn(2, 5)\n",
    "parameters_tmp['by'] = np.random.randn(2, 1)\n",
    "\n",
    "a_next_tmp, c_next_tmp, yt_tmp, cache_tmp = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, parameters_tmp)\n",
    "\n",
    "print(\"a_next[4] = \\n\", a_next_tmp[4])\n",
    "print(\"a_next.shape = \", a_next_tmp.shape)\n",
    "print(\"c_next[2] = \\n\", c_next_tmp[2])\n",
    "print(\"c_next.shape = \", c_next_tmp.shape)\n",
    "print(\"yt[1] =\", yt_tmp[1])\n",
    "print(\"yt.shape = \", yt_tmp.shape)\n",
    "print(\"cache[1][3] =\\n\", cache_tmp[1][3])\n",
    "print(\"len(cache) = \", len(cache_tmp))\n",
    "\n",
    "# UNIT TEST\n",
    "lstm_cell_forward_test(lstm_cell_forward)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "a_next[4] = \n",
      " [-0.66408471  0.0036921   0.02088357  0.22834167 -0.85575339  0.00138482\n",
      "  0.76566531  0.34631421 -0.00215674  0.43827275]\n",
      "a_next.shape =  (5, 10)\n",
      "c_next[2] = \n",
      " [ 0.63267805  1.00570849  0.35504474  0.20690913 -1.64566718  0.11832942\n",
      "  0.76449811 -0.0981561  -0.74348425 -0.26810932]\n",
      "c_next.shape =  (5, 10)\n",
      "yt[1] = [0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381\n",
      " 0.00943007 0.12666353 0.39380172 0.07828381]\n",
      "yt.shape =  (2, 10)\n",
      "cache[1][3] =\n",
      " [-0.16263996  1.03729328  0.72938082 -0.54101719  0.02752074 -0.30821874\n",
      "  0.07651101 -1.03752894  1.41219977 -0.37647422]\n",
      "len(cache) =  10\n",
      "\u001B[92mAll tests passed\n"
     ]
    }
   ],
   "execution_count": 8
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "mjSDYoQoELGP"
   },
   "source": [
    "**Expected Output**:\n",
    "\n",
    "```Python\n",
    "a_next[4] = \n",
    " [-0.66408471  0.0036921   0.02088357  0.22834167 -0.85575339  0.00138482\n",
    "  0.76566531  0.34631421 -0.00215674  0.43827275]\n",
    "a_next.shape =  (5, 10)\n",
    "c_next[2] = \n",
    " [ 0.63267805  1.00570849  0.35504474  0.20690913 -1.64566718  0.11832942\n",
    "  0.76449811 -0.0981561  -0.74348425 -0.26810932]\n",
    "c_next.shape =  (5, 10)\n",
    "yt[1] = [ 0.79913913  0.15986619  0.22412122  0.15606108  0.97057211  0.31146381\n",
    "  0.00943007  0.12666353  0.39380172  0.07828381]\n",
    "yt.shape =  (2, 10)\n",
    "cache[1][3] =\n",
    " [-0.16263996  1.03729328  0.72938082 -0.54101719  0.02752074 -0.30821874\n",
    "  0.07651101 -1.03752894  1.41219977 -0.37647422]\n",
    "len(cache) =  10\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "tb-4WWn4ELGQ"
   },
   "source": [
    "### 2.2 - LSTM 的前向传递\n",
    "\n",
    "现在您已经实现了 LSTM 的一个步骤，您可以使用 for 循环对其进行迭代，以处理 $T_x$ 个输入序列。\n",
    "\n",
    "<img src=\"images/LSTM_rnn.png\" style=\"width:500;height:300px;\">\n",
    "<caption><center><b>图 5</b>：多个时间步长的 LSTM。</center></caption>\n",
    "\n",
    "<a name='ex-4'></a>\n",
    "### 练习 4 - lstm_forward\n",
    "\n",
    "实现 `lstm_forward()` 函数，以在 $T_x$ 个时间步长上运行 LSTM。\n",
    "\n",
    "**说明**\n",
    "* 根据变量 `x` 和 `parameters` 的形状获取维度 $n_x, n_a, n_y, m, T_x$\n",
    "* 初始化三维张量 $a$、$c$ 和 $y$\n",
    "- $a$：隐藏状态，形状为 $(n_{a}, m, T_{x})$\n",
    "- $c$：单元状态，形状为 $(n_{a}, m, T_{x})$\n",
    "- $y$：预测，形状为 $(n_{y}, m, T_{x})$（注意，本例中 $T_{y} = T_{x}$）\n",
    "- **注意** 将一个变量设置为另一个变量相等是一种“引用复制”。换句话说，不要使用 `c = a'，否则这两个变量将指向同一个底层变量。\n",
    "* 初始化二维张量 $a^{\\langle t \\rangle}$\n",
    "- $a^{\\langle t \\rangle}$ 存储时间步 $t$ 的隐藏状态。变量名为 `a_next`。\n",
    "- $a^{\\langle 0 \\rangle}$ 是时间步 0 的初始隐藏状态，在调用函数时传入。变量名为 `a0`。\n",
    "- $a^{\\langle t \\rangle}$ 和 $a^{\\langle 0 \\rangle}$ 表示单个时间步，因此它们的形状均为 $(n_{a}, m)$。\n",
    "- 通过将 $a^{\\langle t \\rangle}$ 设置为传入函数的初始隐藏状态 ($a^{\\langle 0 \\rangle}$) 来初始化 $a^{\\langle t \\rangle}$。\n",
    "* 用零初始化 $c^{\\langle t \\rangle}$。\n",
    "- 变量名为 `c_next`\n",
    "- $c^{\\langle t \\rangle}$ 表示单个时间步，因此其形状为 $(n_{a}, m)$\n",
    "- **注意**：将 `c_next` 创建为独立的变量，并在内存中拥有自己的位置。不要将其初始化为三维张量 $c$ 的切片。换句话说，**不要**执行 `c_next = c[:,:,0]`。\n",
    "* 对于每个时间步，执行以下操作：\n",
    "- 从三维张量 $x$ 中，获取时间步 $t$ 的二维切片 $x^{\\langle t \\rangle}$\n",
    "- 调用之前定义的 `lstm_cell_forward` 函数，获取隐藏状态、单元状态、预测和缓存\n",
    "- 将隐藏状态、单元状态和预测（二维张量）存储在三维张量中\n",
    "- 将缓存附加到缓存列表中"
   ]
  },
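  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick illustration of the reference-copy note above (a minimal sketch, separate from the graded exercise): with NumPy, `c = a` makes `c` an alias of `a`, so writing into one modifies the other, whereas allocating a fresh array keeps them independent:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "a = np.zeros((2, 2))\n",
    "c = a                  # copy by reference: c and a share the same memory\n",
    "c[0, 0] = 5.0\n",
    "print(a[0, 0])         # 5.0 -- modifying c also changed a\n",
    "\n",
    "a = np.zeros((2, 2))\n",
    "c = np.zeros(a.shape)  # independent array with its own memory\n",
    "c[0, 0] = 5.0\n",
    "print(a[0, 0])         # 0.0 -- a is unaffected\n",
    "```"
   ]
  },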
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "XMmJrPSdELGQ",
    "ExecuteTime": {
     "end_time": "2025-10-13T06:55:08.472929Z",
     "start_time": "2025-10-13T06:55:08.432090Z"
    }
   },
   "source": [
    "def lstm_forward(x, a0, parameters):\n",
    "    \"\"\"\n",
    "    使用图所示的 LSTM 单元实现循环神经网络的前向传播。\n",
    "\n",
    "    参数：\n",
    "    x - 每个时间步的输入数据，形状为 (n_x, m, T_x)。\n",
    "    a0 - 初始隐藏状态，形状为 (n_a, m)。\n",
    "    parameters - 包含以下内容的 Python 字典：\n",
    "        Wf - 遗忘门的权重矩阵，形状为 (n_a, n_a + n_x) 的 NumPy 数组。\n",
    "        bf - 遗忘门的偏差，形状为 (n_a, 1) 的 NumPy 数组。\n",
    "        Wi - 更新门的权重矩阵，形状为 (n_a, n_a + n_x) 的 NumPy 数组。\n",
    "        bi - 更新门的偏差，形状为 (n_a, 1) 的 NumPy 数组。\n",
    "        Wc - 第一个“tanh”函数的权重矩阵，形状为 (n_a, n_a + n_x) 的 NumPy 数组。\n",
    "        bc -- 第一个 tanh 函数的偏差，numpy 数组，形状为 (n_a, 1)\n",
    "        Wo -- 输出门的权重矩阵，numpy 数组，形状为 (n_a, n_a + n_x)\n",
    "        bo -- 输出门的偏差，numpy 数组，形状为 (n_a, 1)\n",
    "        Wy -- 隐藏状态与输出之间的权重矩阵，numpy 数组，形状为 (n_y, n_a)\n",
    "        by -- 隐藏状态与输出之间的偏差，numpy 数组，形状为 (n_y, 1)\n",
    "\n",
    "    返回：\n",
    "    a -- 每个时间步的隐藏状态，numpy 数组，形状为 (n_a, m, T_x)\n",
    "    y -- 每个时间步的预测值，numpy 数组，形状为 (n_y, m, T_x)\n",
    "    c -- 单元状态的值，numpy 数组，形状为 (n_a, m, T_x)\n",
    "    caches -- 反向传播所需值的元组包含（所有缓存列表，x）\n",
    "    \"\"\"\n",
    "\n",
    "    # 初始化“caches”，它将跟踪所有缓存的列表\n",
    "    caches = []\n",
    "\n",
    "    ### 从此处开始代码 ###\n",
    "    Wy = parameters['Wy'] # 将 parameters['Wy'] 保存到局部变量Wy中\n",
    "    # 从 x 和 parameters['Wy'] 的形状中获取维度\n",
    "    n_x, m, T_x = x.shape  # n_x:输入特征数；m:样本数；T_x:时间步数\n",
    "    n_y, n_a = Wy.shape\n",
    "\n",
    "    # 用零初始化 \"a\", \"c\" 和 \"y\"\n",
    "    a = np.zeros((n_a, m, T_x))  # 存储所有时间步的隐藏状态\n",
    "    c = np.zeros((n_a, m, T_x))  # 存储所有时间步的记忆状态\n",
    "    y = np.zeros((n_y, m, T_x))  # 存储所有时间步的预测结果\n",
    "\n",
    "    # 初始化 a_next 和c_next\n",
    "    a_next = a0          # 初始隐藏状态（n_a, m）\n",
    "    c_next = np.zeros((n_a, m))  # 初始记忆状态通常设为全0（n_a, m）\n",
    "\n",
    "    # 循环遍历所有时间步\n",
    "    for t in range(T_x):\n",
    "        # 1. 获取当前时间步的输入数据xt（形状：(n_x, m)）\n",
    "        xt = x[:, :, t]\n",
    "\n",
    "        # 2. 调用单步LSTM函数计算当前时间步的结果\n",
    "        a_next, c_next, yt_pred, cache = lstm_cell_forward(xt, a_next, c_next, parameters)\n",
    "\n",
    "        # 3. 将当前时间步的隐藏状态存入a数组\n",
    "        a[:, :, t] = a_next\n",
    "\n",
    "        # 4. 将当前时间步的记忆状态存入c数组\n",
    "        c[:, :, t] = c_next\n",
    "\n",
    "        # 5. 将当前时间步的预测结果存入y数组\n",
    "        y[:, :, t] = yt_pred\n",
    "\n",
    "        # 6. 将当前时间步的缓存加入caches列表\n",
    "        caches.append(cache)\n",
    "    ### 代码至此结束 ###\n",
    "\n",
    "    # 将反向传播所需的值存储在缓存中\n",
    "    caches = (caches, x)\n",
    "\n",
    "    return a, y, c, caches"
   ],
   "outputs": [],
   "execution_count": 9
  },
  {
   "cell_type": "code",
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "JehC5gwdELGS",
    "ExecuteTime": {
     "end_time": "2025-10-13T06:55:12.722742Z",
     "start_time": "2025-10-13T06:55:12.649683Z"
    }
   },
   "source": [
    "np.random.seed(1)\n",
    "x_tmp = np.random.randn(3, 10, 7)\n",
    "a0_tmp = np.random.randn(5, 10)\n",
    "parameters_tmp = {}\n",
    "parameters_tmp['Wf'] = np.random.randn(5, 5 + 3)\n",
    "parameters_tmp['bf'] = np.random.randn(5, 1)\n",
    "parameters_tmp['Wi'] = np.random.randn(5, 5 + 3)\n",
    "parameters_tmp['bi'] = np.random.randn(5, 1)\n",
    "parameters_tmp['Wo'] = np.random.randn(5, 5 + 3)\n",
    "parameters_tmp['bo'] = np.random.randn(5, 1)\n",
    "parameters_tmp['Wc'] = np.random.randn(5, 5 + 3)\n",
    "parameters_tmp['bc'] = np.random.randn(5, 1)\n",
    "parameters_tmp['Wy'] = np.random.randn(2, 5)\n",
    "parameters_tmp['by'] = np.random.randn(2, 1)\n",
    "\n",
    "a_tmp, y_tmp, c_tmp, caches_tmp = lstm_forward(x_tmp, a0_tmp, parameters_tmp)\n",
    "print(\"a[4][3][6] = \", a_tmp[4][3][6])\n",
    "print(\"a.shape = \", a_tmp.shape)\n",
    "print(\"y[1][4][3] =\", y_tmp[1][4][3])\n",
    "print(\"y.shape = \", y_tmp.shape)\n",
    "print(\"caches[1][1][1] =\\n\", caches_tmp[1][1][1])\n",
    "print(\"c[1][2][1]\", c_tmp[1][2][1])\n",
    "print(\"len(caches) = \", len(caches_tmp))\n",
    "\n",
    "# UNIT TEST    \n",
    "lstm_forward_test(lstm_forward)"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "a[4][3][6] =  0.17211776753291663\n",
      "a.shape =  (5, 10, 7)\n",
      "y[1][4][3] = 0.9508734618501101\n",
      "y.shape =  (2, 10, 7)\n",
      "caches[1][1][1] =\n",
      " [ 0.82797464  0.23009474  0.76201118 -0.22232814 -0.20075807  0.18656139\n",
      "  0.41005165]\n",
      "c[1][2][1] -0.8555449167181983\n",
      "len(caches) =  2\n",
      "\u001B[92mAll tests passed\n"
     ]
    }
   ],
   "execution_count": 10
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "TAETfQVFELGV"
   },
   "source": [
    "**Expected Output**:\n",
    "\n",
    "```Python\n",
    "a[4][3][6] =  0.172117767533\n",
    "a.shape =  (5, 10, 7)\n",
    "y[1][4][3] = 0.95087346185\n",
    "y.shape =  (2, 10, 7)\n",
    "caches[1][1][1] =\n",
    " [ 0.82797464  0.23009474  0.76201118 -0.22232814 -0.20075807  0.18656139\n",
    "  0.41005165]\n",
    "c[1][2][1] -0.855544916718\n",
    "len(caches) =  2\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "CLgW871YELGW"
   },
   "source": [
    "### 恭喜！\n",
    "\n",
    "现在已经为基本 RNN 和 LSTM 实现了前向传递。使用深度学习框架时，实现前向传递足以构建性能卓越的系统。框架会处理剩下的事情。\n",
    "\n",
    "应该记住：\n",
    "\n",
    "* LSTM 与 RNN 类似，它们都使用隐藏状态来传递信息，但 LSTM 还使用单元状态（类似于长期记忆）来帮助处理梯度消失问题。\n",
    "* LSTM 单元由一个单元状态（或长期记忆）、一个隐藏状态（或短期记忆）以及三个不断更新其输入相关性的门组成：\n",
    "* 一个<b>遗忘</b>门，它决定哪些输入单元应该被记住并传递。它是一个值在 0 到 1 之间的张量。\n",
    "* 如果一个单元的值接近于 0，LSTM 会“忘记”前一个单元状态中存储的状态。\n",
    "* 如果值接近 1，LSTM 通常会记住相应的值。\n",
    "* 一个<b>更新</b>门，同样是一个包含 0 到 1 之间值的张量。它决定丢弃哪些信息，以及添加哪些新信息。\n",
    "* 当更新门中的单元接近 1 时，其候选值将传递到隐藏状态。\n",
    "* 当更新门中的单元接近 0 时，它将被阻止传递到隐藏状态。\n",
    "* 还有一个<b>输出</b>门，它决定将什么作为时间步的输出发送。\n"
   ]
  },
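  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The three gates above can be sketched directly from the LSTM equations. Below is a minimal, self-contained NumPy illustration (not part of the graded exercise; the sizes and weights are arbitrary for demonstration) of how the forget, update, and output gates combine the previous hidden state and the current input:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1 / (1 + np.exp(-z))\n",
    "\n",
    "n_a, n_x, m = 2, 3, 1              # illustrative sizes\n",
    "rng = np.random.default_rng(0)\n",
    "concat = np.concatenate([rng.standard_normal((n_a, m)),   # a_prev stacked on top\n",
    "                         rng.standard_normal((n_x, m))])  # xt below\n",
    "Wf, Wi, Wo, Wc = (rng.standard_normal((n_a, n_a + n_x)) for _ in range(4))\n",
    "bf = bi = bo = bc = np.zeros((n_a, 1))\n",
    "\n",
    "ft = sigmoid(Wf @ concat + bf)     # forget gate, values in (0, 1)\n",
    "it = sigmoid(Wi @ concat + bi)     # update gate, values in (0, 1)\n",
    "ot = sigmoid(Wo @ concat + bo)     # output gate, values in (0, 1)\n",
    "cct = np.tanh(Wc @ concat + bc)    # candidate cell state\n",
    "\n",
    "c_prev = np.zeros((n_a, m))\n",
    "c_next = ft * c_prev + it * cct    # keep some old memory, add some new\n",
    "a_next = ot * np.tanh(c_next)      # the output gate filters the cell state\n",
    "```"
   ]
  },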
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": ""
  }
 ],
 "metadata": {
  "coursera": {
   "schema_names": [
    "DLSC5W1-A1"
   ]
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
