{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5d6514d7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# <img src=\"./picture/2.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>  # this is the markdown snippet for embedding a picture"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "234f8d21",
   "metadata": {},
   "source": [
    "# 1. Sequence Models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fa879ca4",
   "metadata": {},
   "source": [
    "<img src=\"./picture/1.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae91bc30",
   "metadata": {},
   "source": [
    "This lecture introduces the different kinds of sequence models. One takeaway: x and y do not have to correspond one-to-one, and sometimes you can even produce the desired y without any x at all.\n",
    "The course will walk through these different sequence models. In short: study hard, at least there is some direction now."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e3db893",
   "metadata": {},
   "source": [
    "# 2. Mathematical Notation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e04abffd",
   "metadata": {},
   "source": [
    "### 2.1 Notation Conventions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e0cbbfe5",
   "metadata": {},
   "source": [
    "<img src=\"./picture/2.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "74c29494",
   "metadata": {},
   "source": [
    "The task above is to recognize whether each word is part of a name. Here $x$ denotes the input sequence and $y$ the label sequence (not the training and test sets). $x^{<t>}$ denotes the $t$-th word,\n",
    "$x^{(i)<t>}$ the $t$-th word of the $i$-th training example, and $T_x^{(i)}$ the total number of words in the $i$-th example."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8677bf63",
   "metadata": {},
   "source": [
    "### 2.2 Representing a Word"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eab1f260",
   "metadata": {},
   "source": [
    "<img src=\"./picture/3.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76fa721c",
   "metadata": {},
   "source": [
    "First you need a dictionary (vocabulary); then every word in the sentence is simply encoded as a one-hot vector over that vocabulary. I suspect\n",
    "there are other methods that improve on this representation."
   ]
  },
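  {
   "cell_type": "markdown",
   "id": "c0de0001",
   "metadata": {},
   "source": [
    "A minimal sketch of the one-hot encoding above, assuming a tiny toy vocabulary (a real dictionary would hold on the order of 10,000 words):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0002",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# hypothetical toy vocabulary; index positions define the encoding\n",
    "vocab = [\"a\", \"and\", \"harry\", \"potter\", \"zulu\"]\n",
    "word_to_index = {w: i for i, w in enumerate(vocab)}\n",
    "\n",
    "def one_hot(word):\n",
    "    \"\"\"Return a |V|-dimensional vector with a 1 at the word's index.\"\"\"\n",
    "    v = np.zeros(len(vocab))\n",
    "    v[word_to_index[word]] = 1.0\n",
    "    return v\n",
    "\n",
    "print(one_hot(\"harry\"))  # [0. 0. 1. 0. 0.]\n"
   ]
  },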
  {
   "cell_type": "markdown",
   "id": "986c5b29",
   "metadata": {},
   "source": [
    "# 3. Recurrent Neural Networks"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "50a52cfa",
   "metadata": {},
   "source": [
    "### 3.1 Why a Standard Network No Longer Works"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e0a56176",
   "metadata": {},
   "source": [
    "<img src=\"./picture/4.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ffa8cab5",
   "metadata": {},
   "source": [
    "A standard fully connected network no longer works here: first, the input and output lengths can differ from example to example; second, features learned at one position are not shared with other positions; and finally, the amount of computation becomes enormous."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9b04a978",
   "metadata": {},
   "source": [
    "### 3.2 Structure"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c4c38ca",
   "metadata": {},
   "source": [
    "<img src=\"./picture/5.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "263a6570",
   "metadata": {},
   "source": [
    "The figure above shows the basic structure of a recurrent neural network:\n",
    "\n",
    "1. $x^{<1>}$ and $\\hat{y}^{<1>}$ are the input and output vectors.\n",
    "\n",
    "2. $a^{<1>}$ is the activation passed along the time dimension (think of it as the network's running state for now); the same parameters $W_{aa}$ are reused at every time step.\n",
    "\n",
    "3. The input $x$ is multiplied by a weight matrix $W_{ax}$, and after the hidden layer the output has its own weight matrix $W_{ya}$.\n",
    "\n",
    "4. This means that to predict $\\hat{y}^{<3>}$, the horizontal connections let the network also use the information from $x^{<1>}$ and $x^{<2>}$.\n",
    "\n",
    "5. A drawback of this network is that it only uses information from earlier in the sequence, never from later words: the third word sees the first\n",
    "two words but nothing that comes after it. This gets improved later on."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f2eaa2e4",
   "metadata": {},
   "source": [
    "### 3.3 How the Structure Works"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54971fea",
   "metadata": {},
   "source": [
    "<img src=\"./picture/6.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4ad0d0d",
   "metadata": {},
   "source": [
    "The slide above derives each of the variables and walks through forward propagation; tanh is the usual choice of activation for the hidden state."
   ]
  },
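  {
   "cell_type": "markdown",
   "id": "c0de0003",
   "metadata": {},
   "source": [
    "The forward step described above can be sketched with NumPy; the dimensions and random weights below are placeholders, not values from the course:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0004",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n_a, n_x, n_y = 4, 3, 2  # hidden, input and output sizes (chosen arbitrarily)\n",
    "\n",
    "W_aa = rng.standard_normal((n_a, n_a))\n",
    "W_ax = rng.standard_normal((n_a, n_x))\n",
    "W_ya = rng.standard_normal((n_y, n_a))\n",
    "b_a = np.zeros(n_a)\n",
    "b_y = np.zeros(n_y)\n",
    "\n",
    "def rnn_step(a_prev, x_t):\n",
    "    \"\"\"One forward step: a<t> = tanh(Waa a<t-1> + Wax x<t> + ba).\"\"\"\n",
    "    a_t = np.tanh(W_aa @ a_prev + W_ax @ x_t + b_a)\n",
    "    z = W_ya @ a_t + b_y\n",
    "    y_hat = np.exp(z) / np.exp(z).sum()  # softmax over the output\n",
    "    return a_t, y_hat\n",
    "\n",
    "a, y = rnn_step(np.zeros(n_a), rng.standard_normal(n_x))\n",
    "print(y.sum())  # the output probabilities sum to 1\n"
   ]
  },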
  {
   "cell_type": "markdown",
   "id": "6307d7b4",
   "metadata": {},
   "source": [
    "<img src=\"./picture/7.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a8b0307",
   "metadata": {},
   "source": [
    "The equations above are then simplified by stacking the weight matrices into a single matrix, which I find quite elegant."
   ]
  },
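  {
   "cell_type": "markdown",
   "id": "c0de0005",
   "metadata": {},
   "source": [
    "The simplification can be checked numerically: stacking $W_{aa}$ and $W_{ax}$ side by side and concatenating $a^{<t-1>}$ with $x^{<t>}$ gives exactly the same result as the unsimplified form (dimensions below are arbitrary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0006",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "n_a, n_x = 4, 3\n",
    "W_aa = rng.standard_normal((n_a, n_a))\n",
    "W_ax = rng.standard_normal((n_a, n_x))\n",
    "a_prev = rng.standard_normal(n_a)\n",
    "x_t = rng.standard_normal(n_x)\n",
    "\n",
    "# W_a = [W_aa | W_ax], the two matrices stacked horizontally\n",
    "W_a = np.hstack([W_aa, W_ax])\n",
    "stacked = W_a @ np.concatenate([a_prev, x_t])\n",
    "separate = W_aa @ a_prev + W_ax @ x_t\n",
    "\n",
    "print(np.allclose(stacked, separate))  # True\n"
   ]
  },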
  {
   "cell_type": "markdown",
   "id": "c8fa3bef",
   "metadata": {},
   "source": [
    "# 4. Backpropagation Through Time"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f96b1d58",
   "metadata": {},
   "source": [
    "<img src=\"./picture/8.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1a585870",
   "metadata": {},
   "source": [
    "In the figure, the red arrows mark the backward pass and the green arrows the forward pass."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9e5d85b7",
   "metadata": {},
   "source": [
    "# 5. Different Types of RNNs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b83459a9",
   "metadata": {},
   "source": [
    "### 5.1 Many-to-One"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8140ff97",
   "metadata": {},
   "source": [
    "<img src=\"./picture/110.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cfa140aa",
   "metadata": {},
   "source": [
    "Unlike the basic RNN, here a whole sentence has just one output, so the network simply emits a single answer at the final time step."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f372d51a",
   "metadata": {},
   "source": [
    "### 5.2 One-to-Many"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d480fc46",
   "metadata": {},
   "source": [
    "<img src=\"./picture/10.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8c7c836",
   "metadata": {},
   "source": [
    "The above is the one-to-many architecture, used for example in music generation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c236903c",
   "metadata": {},
   "source": [
    "### 5.3 Many-to-Many with Different Input and Output Lengths"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5500b6e1",
   "metadata": {},
   "source": [
    "<img src=\"./picture/11.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a6040b9",
   "metadata": {},
   "source": [
    "Here an encoder first reads the whole input and a decoder then produces the output; I suspect this encoder-decoder variant is the most widely used."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eed464de",
   "metadata": {},
   "source": [
    "# 6. Language Models and Sequence Generation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ecec6d75",
   "metadata": {},
   "source": [
    "<img src=\"./picture/12.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a513953f",
   "metadata": {},
   "source": [
    "The slide shows how a sentence is converted for training. In the training set you mark where each sentence ends with an end-of-sentence token, `<EOS>`.\n",
    "\n",
    "And if a word does not appear in your vocabulary, it is replaced by the unknown-word token `<UNK>`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "adfad95a",
   "metadata": {},
   "source": [
    "<img src=\"./picture/13.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "29da0aec",
   "metadata": {},
   "source": [
    "The slide above walks through the RNN computation for a language model; at its core it just predicts the probability of the next word, nothing more."
   ]
  },
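  {
   "cell_type": "markdown",
   "id": "c0de0007",
   "metadata": {},
   "source": [
    "The \"predict the next word\" view can be made concrete with a toy calculation: a language model scores a sentence as the product of the per-step conditional probabilities it predicts. The numbers below are made up, not from a trained model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0008",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# hypothetical per-step conditionals P(y<t> | y<1>..y<t-1>) for one sentence\n",
    "# (toy numbers standing in for a trained model's softmax outputs)\n",
    "step_probs = [0.2, 0.5, 0.1, 0.9]  # the last entry is P(<EOS> | ...)\n",
    "\n",
    "# the model scores the whole sentence as the product of the conditionals\n",
    "sentence_prob = float(np.prod(step_probs))\n",
    "print(sentence_prob)\n"
   ]
  },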
  {
   "cell_type": "markdown",
   "id": "befe8e41",
   "metadata": {},
   "source": [
    "# 7. Vanishing Gradients in RNNs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bcf5a382",
   "metadata": {},
   "source": [
    "### 7.1 Why Gradients Vanish"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc241e39",
   "metadata": {},
   "source": [
    "<img src=\"./picture/14.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7196c1e0",
   "metadata": {},
   "source": [
    "Vanishing (and exploding) gradients here are much like what we saw in deep feedforward networks: an output such as the third word is mostly influenced by nearby values,\n",
    "because the gradient shrinks as it propagates back through many time steps, so long-range dependencies are hard to learn."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "117fa45c",
   "metadata": {},
   "source": [
    "### 7.2 The GRU Unit"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "36bdf9de",
   "metadata": {},
   "source": [
    " <img src=\"./picture/16.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1591e73",
   "metadata": {},
   "source": [
    "The figure above first reviews the basic RNN unit: $a^{<t-1>}$ is the activation passed in from the previous step; it goes through the\n",
    "linear layer given by the formula on the right and a tanh activation, producing $a^{<t>}$, which can then be fed into an output layer,\n",
    "typically a softmax or a sigmoid."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96c54e14",
   "metadata": {},
   "source": [
    " <img src=\"./picture/15.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f853bd45",
   "metadata": {},
   "source": [
    "The figure above shows the basic idea of the GRU: a memory cell carries information forward from earlier in the sequence. The full details\n",
    "felt a bit involved to me."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a4c7d0cf",
   "metadata": {},
   "source": [
    "First there is a memory cell. At the word \"cat\", for example, the update gate is 1, so the cell takes in new information; between \"cat\" and \"was\"\n",
    "the gate stays 0, because nothing needs updating there, so the cell value is carried through unchanged. Note that $c^{<t>}$ and the candidate $\\tilde{c}^{<t>}$\n",
    "are different quantities with different parameters, and the gate uses a sigmoid activation. Finally, note that\n",
    "this only shows the forward pass."
   ]
  },
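  {
   "cell_type": "markdown",
   "id": "c0de0009",
   "metadata": {},
   "source": [
    "A sketch of the simplified GRU update described above, with random placeholder weights and the forward pass only:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de000a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "rng = np.random.default_rng(2)\n",
    "n_c, n_x = 4, 3  # cell and input sizes (chosen arbitrarily)\n",
    "W_u = rng.standard_normal((n_c, n_c + n_x))\n",
    "W_c = rng.standard_normal((n_c, n_c + n_x))\n",
    "\n",
    "def gru_step(c_prev, x_t):\n",
    "    \"\"\"Simplified GRU: the gate decides between keeping c<t-1> and taking the candidate.\"\"\"\n",
    "    concat = np.concatenate([c_prev, x_t])\n",
    "    gamma_u = sigmoid(W_u @ concat)  # update gate, values in (0, 1)\n",
    "    c_tilde = np.tanh(W_c @ concat)  # candidate memory cell\n",
    "    c_t = gamma_u * c_tilde + (1 - gamma_u) * c_prev\n",
    "    return c_t\n",
    "\n",
    "c = gru_step(np.zeros(n_c), rng.standard_normal(n_x))\n"
   ]
  },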
  {
   "cell_type": "markdown",
   "id": "2e968e80",
   "metadata": {},
   "source": [
    " <img src=\"./picture/17.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bcb72842",
   "metadata": {},
   "source": [
    "The above is the full GRU: the memory-cell update adds a relevance gate, which captures how relevant the previous cell value is when\n",
    "computing the new candidate; it too uses a sigmoid activation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1676e111",
   "metadata": {},
   "source": [
    "### 7.3 LSTM (Long Short-Term Memory)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80ffec5c",
   "metadata": {},
   "source": [
    "The LSTM is a fairly general-purpose method for preventing vanishing gradients."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "226b023e",
   "metadata": {},
   "source": [
    " <img src=\"./picture/18.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0b9581a",
   "metadata": {},
   "source": [
    "Personally I find the LSTM a bit easier to understand:\n",
    "\n",
    "1. There are three gates: an update gate, a forget gate, and an output gate, all computed with sigmoid activations.\n",
    "\n",
    "2. The forget gate decides whether to keep the previous memory cell, while the update gate decides whether to take in the new candidate cell, which has its own separate formula.\n",
    "\n",
    "3. Finally, the output gate links the memory cell $c^{<t>}$ to the activation $a^{<t>}$."
   ]
  },
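  {
   "cell_type": "markdown",
   "id": "c0de000b",
   "metadata": {},
   "source": [
    "The three gates can be sketched in the same style (random placeholder weights, forward pass only; dimensions and biases are simplified away):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de000c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "rng = np.random.default_rng(3)\n",
    "n_a, n_x = 4, 3\n",
    "shape = (n_a, n_a + n_x)\n",
    "W_f, W_u, W_o, W_c = (rng.standard_normal(shape) for _ in range(4))\n",
    "\n",
    "def lstm_step(a_prev, c_prev, x_t):\n",
    "    concat = np.concatenate([a_prev, x_t])\n",
    "    gamma_f = sigmoid(W_f @ concat)  # forget gate: how much of c<t-1> to keep\n",
    "    gamma_u = sigmoid(W_u @ concat)  # update gate: how much candidate to take in\n",
    "    gamma_o = sigmoid(W_o @ concat)  # output gate: how much of c<t> to expose as a<t>\n",
    "    c_tilde = np.tanh(W_c @ concat)  # candidate memory cell\n",
    "    c_t = gamma_f * c_prev + gamma_u * c_tilde\n",
    "    a_t = gamma_o * np.tanh(c_t)\n",
    "    return a_t, c_t\n",
    "\n",
    "a, c = lstm_step(np.zeros(n_a), np.zeros(n_a), rng.standard_normal(n_x))\n"
   ]
  },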
  {
   "cell_type": "markdown",
   "id": "e5e90739",
   "metadata": {},
   "source": [
    "# 8. Bidirectional RNNs (BRNN)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "830f2b9f",
   "metadata": {},
   "source": [
    " <img src=\"./picture/19.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4a2642f8",
   "metadata": {},
   "source": [
    "The figure above introduces the basic structure of a BRNN:\n",
    "\n",
    "1. Forward propagation now runs in both directions: one chain of activations goes left to right, from $a^{<1>}$ to $a^{<4>}$, and a second,\n",
    "separate chain goes right to left, from $a^{<4>}$ back to $a^{<1>}$.\n",
    "\n",
    "2. In plain terms, information from later in the sequence is also passed along, so each output sees both directions; logically it is very easy to understand."
   ]
  },
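  {
   "cell_type": "markdown",
   "id": "c0de000d",
   "metadata": {},
   "source": [
    "The key point, that each output sees both directions, can be sketched by pairing a left-to-right pass with a right-to-left pass. The toy \"activations\" below just accumulate everything each pass has seen so far, standing in for real RNN steps:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de000e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "T = 4\n",
    "x = [float(t) for t in range(1, T + 1)]  # toy inputs x<1>..x<4> = 1..4\n",
    "\n",
    "# stand-in \"RNN\" passes: each activation accumulates what the pass has seen\n",
    "a_fwd, a_bwd = [], [0.0] * T\n",
    "acc = 0.0\n",
    "for t in range(T):            # left-to-right pass\n",
    "    acc += x[t]\n",
    "    a_fwd.append(acc)\n",
    "acc = 0.0\n",
    "for t in reversed(range(T)):  # right-to-left pass\n",
    "    acc += x[t]\n",
    "    a_bwd[t] = acc\n",
    "\n",
    "# y<t> is computed from both activations at step t\n",
    "features = [np.array([a_fwd[t], a_bwd[t]]) for t in range(T)]\n",
    "print(features[1])  # [3. 9.]: step 2 sees x1+x2 from the left, x2+x3+x4 from the right\n"
   ]
  },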
  {
   "cell_type": "markdown",
   "id": "d64eae03",
   "metadata": {},
   "source": [
    "# 9. Deep RNNs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1c5be23",
   "metadata": {},
   "source": [
    "In one sentence: use the building blocks above to assemble your own deep recurrent network; it is that simple."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c535133c",
   "metadata": {},
   "source": [
    " <img src=\"./picture/20.png\" alt=\"Drawing\" style=\"width: 500px;\" align=\"left\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1039d4d4",
   "metadata": {},
   "source": [
    "The above is what stacking RNN layers looks like:\n",
    "\n",
    "1. First mind the notation: $a^{[l]<t>}$ denotes the activation of layer $l$ at time step $t$.\n",
    "\n",
    "2. The slide also shows how $a^{[2]<3>}$ is computed: it has two inputs (just follow the arrows), which are combined and passed through the activation function.\n",
    "\n",
    "3. The first layer is $a^{[1]}$, and further layers are stacked on top of it.\n",
    "\n",
    "4. Each unit here can be an $LSTM$ or a $GRU$, and the whole network can also be made bidirectional.\n",
    "\n",
    "5. The output at each step can pass through a few additional (non-recurrent) layers before producing the final answer.\n",
    "\n",
    "6. Compared with networks for image processing this structure is enormous; now I can see why NLP is considered hard."
   ]
  }
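,
  {
   "cell_type": "markdown",
   "id": "c0de000f",
   "metadata": {},
   "source": [
    "The stacking can be sketched by feeding each layer's activation sequence to the layer above (random placeholder weights, forward pass only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0010",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(4)\n",
    "n_x, n_a, n_layers, T = 3, 4, 3, 5\n",
    "\n",
    "# one (W_aa, W_in) pair per layer; layer 1 reads x<t>, deeper layers read the layer below\n",
    "params = []\n",
    "for l in range(n_layers):\n",
    "    in_dim = n_x if l == 0 else n_a\n",
    "    params.append((rng.standard_normal((n_a, n_a)),\n",
    "                   rng.standard_normal((n_a, in_dim))))\n",
    "\n",
    "xs = [rng.standard_normal(n_x) for _ in range(T)]\n",
    "inputs = xs\n",
    "for W_aa, W_in in params:\n",
    "    a = np.zeros(n_a)\n",
    "    outputs = []\n",
    "    for x_t in inputs:\n",
    "        a = np.tanh(W_aa @ a + W_in @ x_t)  # a[l]<t> from a[l]<t-1> and a[l-1]<t>\n",
    "        outputs.append(a)\n",
    "    inputs = outputs  # this layer's activations feed the next layer\n",
    "\n",
    "print(len(inputs), inputs[0].shape)  # 5 (4,)\n"
   ]
  }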
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
