{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# RNN: Recurrent Neural Networks\n",
    "\n",
    "Based on the video [徒手实现循环神经网络--RNN的代码详解](https://www.bilibili.com/video/BV1R4421Q7tQ).\n",
    "\n",
    "It is recommended to first watch [飞天侠客's introduction to RNNs](https://www.bilibili.com/video/BV1NCgVzoEG9).\n",
    "\n",
    "![](./images/RNN%20循环神经网络.png)\n",
    "![](./images/RNN%20循环神经网络的解释.png)\n",
    "![](./images/RNN%20循环神经网络的解释2.png)"
   ],
   "id": "b3914a7c2c51d705"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "An RNN feeds the hidden state produced at one time step back in at the next time step; this loop is what lets the network retain the order information in a sequence.\n",
    "\n",
    "When I first studied the figure above, I fell into a misconception: there is only one yellow \"neuron\", so is the network just feeding its output back into that single neuron over and over? Wouldn't that work poorly?\n",
    "\n",
    "In fact, the yellow circle represents an entire hidden layer, not a single neuron. It could just as well be replaced by several fully connected hidden layers, or any other module.\n",
    "\n",
    "Also, in 飞天侠客's video, when the RNN emits an output (the next character), that output passes through one more layer. The code later in 唐一旦's video does the same thing; the figure above simply does not show that layer."
   ],
   "id": "ebfd698a5ee0ac4d"
  },
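  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "To make the \"one circle is a whole layer\" point concrete, here is a minimal sketch (my addition, not from the videos) of the recurrence h_t = ReLU(W[x_t; h_{t-1}]): the hidden width H is a free choice, because the circle stands for a full linear layer, not a single neuron.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "I, H = 2, 3                    # input features, hidden features (H can be anything)\n",
    "W = torch.nn.Linear(I + H, H)  # the whole \"yellow circle\": one linear layer\n",
    "\n",
    "xs = torch.randn(4, 1, I)      # a sequence of 4 inputs\n",
    "h = torch.zeros(1, H)          # initial hidden state\n",
    "for x in xs:                   # the same layer W is reused at every time step\n",
    "    h = F.relu(W(torch.cat((x, h), dim=-1)))\n",
    "print(h.shape)                 # torch.Size([1, 3])\n",
    "```"
   ],
   "id": "a1f2e3d4c5b6a798"
  },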
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T12:20:42.677947Z",
     "start_time": "2025-08-31T12:20:28.322738Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import string\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "import matplotlib.pyplot as plt\n",
    "from datasets import load_dataset\n",
    "\n",
    "# warm-up: a simple character-to-index mapping (superseded by CharTokenizer below)\n",
    "char2indx = {s: i for i, s in enumerate(string.ascii_lowercase)}\n",
    "\n",
    "torch.manual_seed(12046)\n",
    "\n",
    "# https://huggingface.co/datasets/code-search-net/code_search_net/tree/main/data\n",
    "datasets = load_dataset('json', data_files='./datasets/python/final/jsonl/train/*.jsonl.gz')\n",
    "datasets = datasets['train'].filter(lambda x: 'apache/spark' in x['repo'])"
   ],
   "id": "e233bb4e8dd53c11",
   "outputs": [],
   "execution_count": 1
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T12:54:15.781636Z",
     "start_time": "2025-08-31T12:54:15.773733Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class RNNCell(nn.Module):\n",
    "\n",
    "    def __init__(self, input_size, hidden_size):\n",
    "        super().__init__()\n",
    "        self.input_size = input_size\n",
    "        self.hidden_size = hidden_size\n",
    "        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)  # a single linear layer (only one is defined here)\n",
    "\n",
    "    def forward(self, input, hidden=None):\n",
    "        # input: (1, I), where I is the number of input features\n",
    "        # hidden: (1, H), where H is the size of the hidden state\n",
    "        if hidden is None:\n",
    "            hidden = self.init_hidden()\n",
    "\n",
    "        combined = torch.cat((input, hidden), dim=-1)  # (1, I + H)\n",
    "        hidden = F.relu(self.i2h(combined))  # (1, H)\n",
    "        return hidden\n",
    "\n",
    "    def init_hidden(self):\n",
    "        return torch.zeros((1, self.hidden_size), device='cuda')\n"
   ],
   "id": "4b08a762803d9683",
   "outputs": [],
   "execution_count": 2
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "![](images/RNNCell.png)",
   "id": "12e3f06637ace00b"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T12:57:10.947991Z",
     "start_time": "2025-08-31T12:57:10.930592Z"
    }
   },
   "cell_type": "code",
   "source": [
    "r_model = RNNCell(2, 3).to('cuda')\n",
    "data = torch.randn(4, 1, 2, device='cuda')  # 4 tokens, batch size 1, 2 features each; dummy data just to exercise the model's input shapes\n",
    "\n",
    "hidden = None\n",
    "\n",
    "for i in range(data.shape[0]):\n",
    "    hidden = r_model(data[i], hidden)\n",
    "    print(hidden)"
   ],
   "id": "6a45cd0dbced6c91",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0.0000, 0.0000, 0.1674]], device='cuda:0', grad_fn=<ReluBackward0>)\n",
      "tensor([[0.0000, 0.0000, 0.0652]], device='cuda:0', grad_fn=<ReluBackward0>)\n",
      "tensor([[0.1610, 0.3545, 0.4305]], device='cuda:0', grad_fn=<ReluBackward0>)\n",
      "tensor([[0.0000, 0.3773, 0.5489]], device='cuda:0', grad_fn=<ReluBackward0>)\n"
     ]
    }
   ],
   "execution_count": 9
  },
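  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "PyTorch also ships a built-in `nn.RNNCell` that performs essentially the same step (it keeps separate input-to-hidden and hidden-to-hidden weight matrices, and defaults to tanh; `nonlinearity='relu'` matches the ReLU used here). A sketch of the same unrolling loop with the built-in cell, as a cross-check rather than part of the video's code:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "cell = nn.RNNCell(input_size=2, hidden_size=3, nonlinearity='relu')\n",
    "x = torch.randn(4, 1, 2)     # 4 time steps, batch of 1, 2 features\n",
    "h = torch.zeros(1, 3)\n",
    "for t in range(x.shape[0]):  # same shape contract as the hand-written RNNCell\n",
    "    h = cell(x[t], h)\n",
    "print(h.shape)               # torch.Size([1, 3])\n",
    "```"
   ],
   "id": "b2c3d4e5f6a7b8c9"
  },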
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:15:31.437750Z",
     "start_time": "2025-08-31T13:15:31.431192Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class CharRNN(nn.Module):  # CharRNN turns the hidden state into a character prediction\n",
    "\n",
    "    def __init__(self, vs):  # vs: vocabulary size (number of character indices)\n",
    "        super().__init__()\n",
    "\n",
    "        self.emb = nn.Embedding(vs, 30)  # 30 embedding features per character\n",
    "        self.rnn = RNNCell(30, 50)  # 30 input features, 50 hidden features\n",
    "        self.lm = nn.Linear(50, vs)  # map the 50 hidden features to per-character logits\n",
    "\n",
    "    def forward(self, x, hidden=None):\n",
    "        # x: (1)\n",
    "        # hidden: (1, 50)\n",
    "        embeddings = self.emb(x)  # (1, 30)\n",
    "        hidden = self.rnn(embeddings, hidden)  # (1, 50)\n",
    "        out = self.lm(hidden)  # (1, vs)\n",
    "        return out, hidden"
   ],
   "id": "f2c880bcd2b858db",
   "outputs": [],
   "execution_count": 10
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:15:43.500973Z",
     "start_time": "2025-08-31T13:15:43.496533Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class CharTokenizer:\n",
    "\n",
    "    def __init__(self, data, end_ind=0):\n",
    "        # data: list[str]\n",
    "        # collect every distinct character in the corpus\n",
    "        chars = sorted(list(set(''.join(data))))\n",
    "        self.char2ind = {s: i + 1 for i, s in enumerate(chars)}  # index 0 is reserved for the end token\n",
    "        self.char2ind['<|e|>'] = end_ind\n",
    "        self.ind2char = {v: k for k, v in self.char2ind.items()}\n",
    "        self.end_ind = end_ind\n",
    "\n",
    "    def encode(self, x):\n",
    "        # x: str\n",
    "        return [self.char2ind[i] for i in x]\n",
    "\n",
    "    def decode(self, x):\n",
    "        # x: int or list[int]\n",
    "        if isinstance(x, int):\n",
    "            return self.ind2char[x]\n",
    "        return [self.ind2char[i] for i in x]"
   ],
   "id": "d0795a60ab04480a",
   "outputs": [],
   "execution_count": 11
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:16:42.776679Z",
     "start_time": "2025-08-31T13:16:42.749534Z"
    }
   },
   "cell_type": "code",
   "source": [
    "tokenizer = CharTokenizer(datasets['original_string'])\n",
    "test_str = 'def f(x):'\n",
    "tokens = tokenizer.encode(test_str)\n",
    "tokens"
   ],
   "id": "2ad71d7f15381ad7",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[70, 71, 72, 2, 72, 10, 90, 11, 28]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 13
  },
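  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "A quick sanity check on the tokenizer design: decoding an encoding should reproduce the original characters. The sketch below is self-contained and rebuilds the same mapping logic on a toy corpus instead of calling `CharTokenizer` on the dataset:\n",
    "\n",
    "```python\n",
    "corpus = ['def f(x):', 'return x']\n",
    "chars = sorted(set(''.join(corpus)))\n",
    "char2ind = {s: i + 1 for i, s in enumerate(chars)}  # index 0 reserved for '<|e|>'\n",
    "ind2char = {v: k for k, v in char2ind.items()}\n",
    "\n",
    "s = 'def f(x):'\n",
    "encoded = [char2ind[c] for c in s]\n",
    "decoded = ''.join(ind2char[i] for i in encoded)\n",
    "assert decoded == s  # the round trip recovers the original string\n",
    "```"
   ],
   "id": "c3d4e5f6a7b8c9d0"
  },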
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:18:37.068879Z",
     "start_time": "2025-08-31T13:18:37.063458Z"
    }
   },
   "cell_type": "code",
   "source": [
    "c_model = CharRNN(len(tokenizer.char2ind)).to('cuda')\n",
    "c_model  # in large language models this final layer is called the language modeling head"
   ],
   "id": "71dcb96fbfa386b0",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "CharRNN(\n",
       "  (emb): Embedding(98, 30)\n",
       "  (rnn): RNNCell(\n",
       "    (i2h): Linear(in_features=80, out_features=50, bias=True)\n",
       "  )\n",
       "  (lm): Linear(in_features=50, out_features=98, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 15
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:19:03.883050Z",
     "start_time": "2025-08-31T13:19:03.787929Z"
    }
   },
   "cell_type": "code",
   "source": [
    "inputs = torch.tensor(tokenizer.encode('d')).to('cuda')\n",
    "out, hidden = c_model(inputs)\n",
    "out.shape, hidden.shape"
   ],
   "id": "2d336054739b763b",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([1, 98]), torch.Size([1, 50]))"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 16
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:21:57.552814Z",
     "start_time": "2025-08-31T13:21:57.547924Z"
    }
   },
   "cell_type": "code",
   "source": [
    "@torch.no_grad()\n",
    "def generate(model, idx, tokenizer, max_new_tokens=300):\n",
    "    # idx: (1)\n",
    "    out = idx.tolist()\n",
    "    hidden = None\n",
    "    model.eval()\n",
    "    for _ in range(max_new_tokens):\n",
    "        logits, hidden = model(idx, hidden)\n",
    "        probs = F.softmax(logits, dim=-1)  # (1, vocab_size)\n",
    "        # sample the next character: multinomial draws an index with probability proportional to probs\n",
    "        ix = torch.multinomial(probs, num_samples=1)  # (1, 1)\n",
    "        out.append(ix.item())\n",
    "        idx = ix.squeeze(0)\n",
    "        if out[-1] == tokenizer.end_ind:  # stop once the end token is produced\n",
    "            break\n",
    "    model.train()\n",
    "    return out"
   ],
   "id": "e5080d0c7e164ae2",
   "outputs": [],
   "execution_count": 17
  },
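  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Because `torch.multinomial` samples, each call to `generate` produces different text. A common deterministic alternative is greedy decoding with `argmax`, which always picks the single most probable character. A sketch of just the selection step (toy logits, not the model's real output):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "logits = torch.randn(1, 98)  # stand-in for the model output\n",
    "probs = F.softmax(logits, dim=-1)\n",
    "\n",
    "ix_sampled = torch.multinomial(probs, num_samples=1)  # stochastic, proportional to probs\n",
    "ix_greedy = probs.argmax(dim=-1, keepdim=True)        # deterministic, the best index\n",
    "print(ix_sampled.shape, ix_greedy.shape)              # both torch.Size([1, 1])\n",
    "```"
   ],
   "id": "d4e5f6a7b8c9d0e1"
  },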
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:45:21.962253Z",
     "start_time": "2025-08-31T13:45:21.607885Z"
    }
   },
   "cell_type": "code",
   "source": [
    "inputs = torch.tensor(tokenizer.encode('d'), device='cuda')\n",
    "print(''.join(tokenizer.decode(generate(c_model, inputs, tokenizer))))"
   ],
   "id": "cc80571803f6807b",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "d]0f.\n",
      "           b         seter the n|d ._ustrithe setib fam poamhat if ret py Sx :Ist = 2utung(mon cal. = arewhe the deyt and inionthe thin(sectise the alect:, Uinn joliond))\n",
      "     Sp iolsemum =          `hetrrat it chams uor         efuinnevartea vextarnf in balionde))\n",
      "           on (setoe =onCHape\n"
     ]
    }
   ],
   "execution_count": 26
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:26:30.769248Z",
     "start_time": "2025-08-31T13:26:30.766082Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def process(text, tokenizer):\n",
    "    # text: str\n",
    "    enc = tokenizer.encode(text)\n",
    "    inputs = enc\n",
    "    labels = enc[1:] + [tokenizer.end_ind]  # next-character targets: shift left by one, append the end token\n",
    "    return torch.tensor(inputs, device='cuda'), torch.tensor(labels, device='cuda')"
   ],
   "id": "f61a87c6a68e7224",
   "outputs": [],
   "execution_count": 23
  },
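  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The labels are simply the inputs shifted left by one position, with the end token appended, so at every step the model learns to predict the next character. A self-contained illustration with toy indices (not the real tokenizer):\n",
    "\n",
    "```python\n",
    "end_ind = 0\n",
    "enc = [5, 9, 2, 7]            # toy encoding of a 4-character string\n",
    "inputs = enc\n",
    "labels = enc[1:] + [end_ind]  # shift left by one, append the end token\n",
    "print(list(zip(inputs, labels)))  # [(5, 9), (9, 2), (2, 7), (7, 0)]\n",
    "```"
   ],
   "id": "e5f6a7b8c9d0e1f2"
  },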
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-31T13:45:01.356972Z",
     "start_time": "2025-08-31T13:38:49.966414Z"
    }
   },
   "cell_type": "code",
   "source": [
    "lossi = []\n",
    "epochs = 1\n",
    "optimizer = optim.Adam(c_model.parameters(), lr=0.001)\n",
    "for e in range(epochs):\n",
    "    for data in datasets:\n",
    "        inputs, labels = process(data['original_string'], tokenizer)\n",
    "        hidden = None\n",
    "        _loss = 0.0\n",
    "        lens = len(inputs)\n",
    "        # feed the characters one at a time, carrying the hidden state across steps,\n",
    "        # and average the per-step cross-entropy over the whole sequence\n",
    "        for i in range(lens):\n",
    "            logits, hidden = c_model(inputs[i].unsqueeze(0), hidden)\n",
    "            _loss += F.cross_entropy(logits, labels[i].unsqueeze(0)) / lens\n",
    "        lossi.append(_loss.item())\n",
    "        optimizer.zero_grad()\n",
    "        _loss.backward()\n",
    "        optimizer.step()\n"
   ],
   "id": "267672d8d5d8a8a2",
   "outputs": [],
   "execution_count": 24
  },
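  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "`matplotlib` was imported at the top but never used; a natural follow-up (my addition, not from the video) is to plot the recorded losses and check that training is converging:",
   "id": "f6a7b8c9d0e1f2a3"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "plt.plot(lossi)  # one loss value was recorded per training sequence\n",
    "plt.xlabel('iteration')\n",
    "plt.ylabel('mean cross-entropy per character')\n",
    "plt.show()"
   ],
   "id": "a7b8c9d0e1f2a3b4",
   "outputs": [],
   "execution_count": null
  },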
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": "",
   "id": "ef83c07b48f477c5"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
