{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 5. Recurrent Neural Networks (RNN)\n",
    "## 5.1 Basic Concepts\n",
    "A recurrent neural network (RNN) is a neural network for processing sequential data. It has a form of memory that lets it capture dynamic information in a time series. The defining feature of an RNN is the cycle in its connections, around which information is passed from one time step to the next.\n",
    "## 5.2 Key Techniques\n",
    "The key techniques of an RNN are its hidden state and its recurrent structure.\n",
    "![rnn-network](../images/5-rnn-network.webp)<br/>\n",
    "Hidden state: at every time step, the RNN updates its hidden state from the current input and the previous step's hidden state. This is how the network remembers information from earlier in the sequence.\n",
    "Recurrent structure: the network contains a cycle, so information circulates through it, allowing the dynamics of the sequence to be captured.\n",
    "![rnn-network](../images/5-rnn-network2.webp)<br/>\n",
    "The forward pass of an RNN can be written as:\n",
    "![rnn-network](../images/5-rnn-math.webp)<br/>\n",
    "$$h_t = \\tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h), \\qquad y_t = W_{hy} h_t + b_y$$\n",
    "where $h_t$ is the hidden state at time step $t$, $x_t$ is the input at step $t$, $y_t$ is the output at step $t$, $W_{xh}$, $W_{hh}$, and $W_{hy}$ are weight matrices, $b_h$ and $b_y$ are bias vectors, and $\\tanh$ is the activation function.\n",
    "## 5.3 Applications\n",
    "RNNs are widely used in natural language processing, speech recognition, time-series forecasting, and related fields. Well-known deep learning models such as the LSTM and the GRU are built on the RNN.\n",
    "## 5.4 Strengths\n",
    "The main strengths of an RNN are that it can process sequences of variable length and capture the dynamic information within them.\n",
    "## 5.5 Weaknesses\n",
    "The main weakness is that training can run into vanishing or exploding gradients, especially on long sequences. As a consequence, a plain RNN struggles to learn long-range dependencies.\n",
    "## 5.6 Case Study\n",
    "The LSTM and GRU are two improved RNN architectures; they introduce gating mechanisms to mitigate the RNN's vanishing-gradient and long-range-dependency problems.\n",
    "## 5.7 Implementation by Hand\n",
    "Below is a simple Python implementation of an RNN:"
   ]
  },
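  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before the full class, a single step of the recurrence above can be sketched on its own; the sizes and random values here are arbitrary and purely illustrative:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "input_size, hidden_size, output_size = 3, 5, 2  # arbitrary sizes\n",
    "\n",
    "# randomly initialized parameters, scaled down as is conventional\n",
    "Wxh = rng.standard_normal((hidden_size, input_size)) * 0.01\n",
    "Whh = rng.standard_normal((hidden_size, hidden_size)) * 0.01\n",
    "Why = rng.standard_normal((output_size, hidden_size)) * 0.01\n",
    "bh = np.zeros((hidden_size, 1))\n",
    "by = np.zeros((output_size, 1))\n",
    "\n",
    "x_t = rng.standard_normal((input_size, 1))  # input at time step t\n",
    "h_prev = np.zeros((hidden_size, 1))         # previous hidden state h_{t-1}\n",
    "\n",
    "h_t = np.tanh(np.dot(Wxh, x_t) + np.dot(Whh, h_prev) + bh)  # hidden state update\n",
    "y_t = np.dot(Why, h_t) + by                                 # output at step t\n",
    "print(h_t.shape, y_t.shape)"
   ]
  },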
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "class RNN:\n",
    "    def __init__(self, input_size, hidden_size, output_size):\n",
    "        self.hidden_size = hidden_size\n",
    "        self.Wxh = np.random.randn(hidden_size, input_size) * 0.01\n",
    "        self.Whh = np.random.randn(hidden_size, hidden_size) * 0.01\n",
    "        self.Why = np.random.randn(output_size, hidden_size) * 0.01\n",
    "        self.bh = np.zeros((hidden_size, 1))\n",
    "        self.by = np.zeros((output_size, 1))\n",
    "\n",
    "    def forward(self, inputs):\n",
    "        # cache the hidden states for backpropagation; hs[-1] is the initial state\n",
    "        self.hs = {-1: np.zeros((self.hidden_size, 1))}\n",
    "        ys = []\n",
    "        for t, x in enumerate(inputs):\n",
    "            self.hs[t] = np.tanh(np.dot(self.Wxh, x) + np.dot(self.Whh, self.hs[t - 1]) + self.bh)\n",
    "            y = np.dot(self.Why, self.hs[t]) + self.by\n",
    "            ys.append(y)\n",
    "        return ys, self.hs[len(inputs) - 1]\n",
    "\n",
    "    def backward(self, inputs, ys, targets):\n",
    "        dWxh, dWhh, dWhy = np.zeros_like(self.Wxh), np.zeros_like(self.Whh), np.zeros_like(self.Why)\n",
    "        dbh, dby = np.zeros_like(self.bh), np.zeros_like(self.by)\n",
    "        dhnext = np.zeros((self.hidden_size, 1))\n",
    "        for t in reversed(range(len(inputs))):\n",
    "            # softmax cross-entropy gradient with respect to the raw output ys[t]\n",
    "            p = np.exp(ys[t] - np.max(ys[t]))\n",
    "            dy = p / np.sum(p)\n",
    "            dy[targets[t]] -= 1\n",
    "            dWhy += np.dot(dy, self.hs[t].T)\n",
    "            dby += dy\n",
    "            dh = np.dot(self.Why.T, dy) + dhnext\n",
    "            dhraw = (1 - self.hs[t] * self.hs[t]) * dh  # derivative of tanh\n",
    "            dbh += dhraw\n",
    "            dWxh += np.dot(dhraw, inputs[t].T)\n",
    "            dWhh += np.dot(dhraw, self.hs[t - 1].T)\n",
    "            dhnext = np.dot(self.Whh.T, dhraw)\n",
    "        return dWxh, dWhh, dWhy, dbh, dby"
   ]
  },
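  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an aside, the vanishing-gradient problem from Section 5.5 can be observed with the recurrence alone: propagating a gradient backward multiplies it by the tanh derivative and by the transpose of the recurrent weight matrix at every step, so over a long sequence its norm collapses. The sketch below uses a simplified recurrence, and the sizes, weight scale, and step count are arbitrary:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "hidden_size, steps = 8, 50  # arbitrary illustrative values\n",
    "\n",
    "Whh = rng.standard_normal((hidden_size, hidden_size)) * 0.01\n",
    "\n",
    "# forward: run a simplified recurrence h_t = tanh(Whh h_{t-1} + x_t), keeping every state\n",
    "h, hs = np.zeros((hidden_size, 1)), []\n",
    "for _ in range(steps):\n",
    "    x = rng.standard_normal((hidden_size, 1))\n",
    "    h = np.tanh(np.dot(Whh, h) + x)\n",
    "    hs.append(h)\n",
    "\n",
    "# backward: push one gradient from the last step back through time, tracking its norm\n",
    "dh = np.ones((hidden_size, 1))\n",
    "norms = []\n",
    "for h_t in reversed(hs):\n",
    "    dh = np.dot(Whh.T, (1 - h_t * h_t) * dh)\n",
    "    norms.append(np.linalg.norm(dh))\n",
    "\n",
    "print(norms[0], norms[-1])  # the norm shrinks by many orders of magnitude"
   ]
  },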
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This code first defines an RNN class, then implements the RNN's forward pass in the forward method and its backward pass (backpropagation through time) in the backward method."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "notes",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
