{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# LSTM: Structure and Code\n",
    "\n",
    "From the video [Implementing a Long Short-Term Memory Network from Scratch](https://www.bilibili.com/video/BV1pE421P7Ej).\n",
    "\n",
    "LSTM (Long Short-Term Memory) is a special kind of recurrent neural network (RNN) that can learn long-range dependencies. The key lies in its internal structure: three gates (input, forget, and output) and a cell state. Together, these let an LSTM retain important long-term information and ignore irrelevant information while processing sequential data.\n",
    "\n",
    "The core components of an LSTM:\n",
    "\n",
    "+ Input gate: decides which information to write into memory.\n",
    "+ Forget gate: decides which information to discard.\n",
    "+ Output gate: decides which information to emit.\n",
    "+ Cell state: the carrier of memory, preserving important long-term information.\n",
    "\n",
    "In a vanilla RNN, $h_t$ serves both as the output and as the hidden state passed to the next step.\n",
    "\n",
    "In an LSTM, the cell state $C_t$ is the main memory carrier, while $h_t$ is the part \"excerpted\" from $C_t$ for output and for propagation.\n",
    "\n",
    "Compared with an RNN, one time step of an LSTM uses the current input $x_t$, the previous hidden state $h_{t-1}$, and the previous cell state $C_{t-1}$, and produces this step's hidden state and cell state; the hidden state is what feeds the language-modeling head.\n",
    "\n",
    "Both of these updates involve the current input and the previous hidden state, and both are controlled by gates.\n",
    "\n",
    "The figures below illustrate the short-term memory problem of a plain RNN:\n",
    "\n",
    "![](./images/RNN%20短期记忆1.png)\n",
    "![](./images/RNN%20短期记忆2.png)\n"
   ],
   "id": "c5643665158c3474"
  },
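  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The gates listed above can be written as the standard LSTM update equations (the conventional formulation, which the implementation in this notebook follows; $W_*$ and $b_*$ are each gate's learned weights and biases, $\\sigma$ is the sigmoid function, and $[x_t, h_{t-1}]$ denotes concatenation of the current input with the previous hidden state):\n",
    "\n",
    "$$f_t = \\sigma(W_f\\,[x_t, h_{t-1}] + b_f) \\qquad \\text{(forget gate)}$$\n",
    "$$i_t = \\sigma(W_i\\,[x_t, h_{t-1}] + b_i) \\qquad \\text{(input gate)}$$\n",
    "$$\\tilde{C}_t = \\tanh(W_C\\,[x_t, h_{t-1}] + b_C) \\qquad \\text{(candidate cell state)}$$\n",
    "$$C_t = f_t \\odot C_{t-1} + i_t \\odot \\tilde{C}_t$$\n",
    "$$o_t = \\sigma(W_o\\,[x_t, h_{t-1}] + b_o) \\qquad \\text{(output gate)}$$\n",
    "$$h_t = o_t \\odot \\tanh(C_t)$$\n"
   ],
   "id": "3f2a1c9d8e7b6a50"
  },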
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T07:55:41.425008Z",
     "start_time": "2025-09-02T07:55:41.416326Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "from torch.utils.data import DataLoader\n",
    "from datasets import load_dataset\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "\n",
    "\n",
    "torch.manual_seed(12046)"
   ],
   "id": "881b7bfdd9ce8826",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch._C.Generator at 0x2165825a6d0>"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 2
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T07:55:41.575268Z",
     "start_time": "2025-09-02T07:55:41.491288Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# hyperparameters\n",
    "learning_rate = 1e-3\n",
    "eval_iters = 10\n",
    "batch_size = 1000\n",
    "sequence_len = 64\n",
    "# use the GPU for computation if one is available\n",
    "device = 'cuda' if torch.cuda.is_available() else 'cpu'"
   ],
   "id": "d55d473408f7715b",
   "outputs": [],
   "execution_count": 3
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T07:55:43.352491Z",
     "start_time": "2025-09-02T07:55:41.580236Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# load the python snippets (jsonl.gz files) and keep only functions from the apache/spark repo\n",
    "datasets = load_dataset('json', data_files='./datasets/python/final/jsonl/train/*.jsonl.gz')\n",
    "datasets = datasets['train'].filter(lambda x: 'apache/spark' in x['repo'])"
   ],
   "id": "8a55e5724d6ef169",
   "outputs": [],
   "execution_count": 4
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T07:55:43.407393Z",
     "start_time": "2025-09-02T07:55:43.367877Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class CharTokenizer:\n",
    "\n",
    "    def __init__(self, data, end_ind=0):\n",
    "        # data: list[str]\n",
    "        # collect all distinct characters; index 0 is reserved for the end token\n",
    "        chars = sorted(list(set(''.join(data))))\n",
    "        self.char2ind = {s: i + 1 for i, s in enumerate(chars)}\n",
    "        self.char2ind['<|e|>'] = end_ind\n",
    "        self.ind2char = {v: k for k, v in self.char2ind.items()}\n",
    "        self.end_ind = end_ind\n",
    "\n",
    "    def encode(self, x):\n",
    "        # x: str\n",
    "        return [self.char2ind[i] for i in x]\n",
    "\n",
    "    def decode(self, x):\n",
    "        # x: int or list[int]\n",
    "        if isinstance(x, int):\n",
    "            return self.ind2char[x]\n",
    "        return [self.ind2char[i] for i in x]\n",
    "\n",
    "tokenizer = CharTokenizer(datasets['original_string'])\n",
    "test_str = 'def f(x):'\n",
    "re = tokenizer.encode(test_str)\n",
    "print(re)\n",
    "''.join(tokenizer.decode(range(len(tokenizer.char2ind))))"
   ],
   "id": "5d3b97865997bc5c",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[70, 71, 72, 2, 72, 10, 90, 11, 28]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'<|e|>\\n !\"#$%&\\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~ö'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 5
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:26:00.362017Z",
     "start_time": "2025-09-02T08:26:00.282107Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def process(data, tokenizer, sequence_len=sequence_len):\n",
    "    text = data['original_string']\n",
    "    # text is list[str]\n",
    "    inputs, labels = [], []\n",
    "    for t in text:\n",
    "        enc = tokenizer.encode(t)\n",
    "        enc += [tokenizer.end_ind]\n",
    "        # pad short snippets so texts shorter than sequence_len + 1 are not silently dropped\n",
    "        if len(enc) < sequence_len + 1:\n",
    "            enc += [tokenizer.end_ind] * (sequence_len + 1 - len(enc))\n",
    "        for i in range(len(enc) - sequence_len):\n",
    "            inputs.append(enc[i: i + sequence_len])\n",
    "            labels.append(enc[i + 1: i + 1 + sequence_len])\n",
    "    return {'inputs': inputs, 'labels': labels}\n",
    "\n",
    "# split the data into train and test sets\n",
    "tokenized = datasets.train_test_split(test_size=0.1, seed=1024, shuffle=True)\n",
    "\n",
    "f = lambda x: process(x, tokenizer)\n",
    "tokenized = tokenized.map(f, batched=True, remove_columns=datasets.column_names)\n",
    "tokenized.set_format(type='torch', device=device)\n"
   ],
   "id": "b6b41cc38084e463",
   "outputs": [],
   "execution_count": 11
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:26:02.209474Z",
     "start_time": "2025-09-02T08:26:01.826348Z"
    }
   },
   "cell_type": "code",
   "source": [
    "train_loader = DataLoader(tokenized['train'], batch_size=batch_size, shuffle=True)\n",
    "test_loader = DataLoader(tokenized['test'], batch_size=batch_size, shuffle=True)\n",
    "next(iter(train_loader))"
   ],
   "id": "a98d6d87385d0431",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'inputs': tensor([[71, 94,  2,  ...,  2,  2,  2],\n",
       "         [70, 85, 86,  ...,  4,  4,  4],\n",
       "         [86, 74,  2,  ..., 81, 87, 82],\n",
       "         ...,\n",
       "         [82, 71, 10,  ..., 54, 91, 82],\n",
       "         [71, 69, 81,  ...,  2,  2,  2],\n",
       "         [84, 75, 80,  ..., 86,  2, 75]], device='cuda:0'),\n",
       " 'labels': tensor([[94,  2,  2,  ...,  2,  2,  2],\n",
       "         [85, 86, 84,  ...,  4,  4,  1],\n",
       "         [74,  2, 80,  ..., 87, 82, 75],\n",
       "         ...,\n",
       "         [71, 10, 21,  ..., 91, 82, 71],\n",
       "         [69, 81, 84,  ...,  2,  2,  2],\n",
       "         [75, 80, 73,  ...,  2, 75, 85]], device='cuda:0')}"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 12
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:26:03.801038Z",
     "start_time": "2025-09-02T08:26:03.797171Z"
    }
   },
   "cell_type": "code",
   "source": [
    "@torch.no_grad()\n",
    "def generate(model, context, tokenizer, max_new_tokens=300):\n",
    "    # context: (1, T)\n",
    "    out = context.tolist()[0]\n",
    "    model.eval()\n",
    "    for _ in range(max_new_tokens):\n",
    "        # optionally truncate the context to sequence_len so generation matches training:\n",
    "        # logits = model(context[:, -sequence_len:])\n",
    "        logits = model(context)            # (1, T, vs)\n",
    "        probs = F.softmax(logits[:, -1, :], dim=-1)  # (1, vs)\n",
    "        # sample the next token from the predicted distribution\n",
    "        ix = torch.multinomial(probs, num_samples=1)  # (1, 1)\n",
    "        # append the sampled token to the context\n",
    "        context = torch.concat((context, ix), dim=-1)\n",
    "        out.append(ix.item())\n",
    "        if out[-1] == tokenizer.end_ind:\n",
    "            break\n",
    "    model.train()\n",
    "    return out"
   ],
   "id": "3bb164fae5ab1f3a",
   "outputs": [],
   "execution_count": 13
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:26:06.827681Z",
     "start_time": "2025-09-02T08:26:06.823179Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def estimate_loss(model):\n",
    "    re = {}\n",
    "    # switch the model to evaluation mode\n",
    "    model.eval()\n",
    "    re['train'] = _loss(model, train_loader)\n",
    "    re['test'] = _loss(model, test_loader)\n",
    "    # switch the model back to training mode\n",
    "    model.train()\n",
    "    return re\n",
    "\n",
    "@torch.no_grad()\n",
    "def _loss(model, data_loader):\n",
    "    \"\"\"\n",
    "    Estimate the average loss of the model on the given data loader.\n",
    "    \"\"\"\n",
    "    loss = []\n",
    "    data_iter = iter(data_loader)\n",
    "    # estimate the loss on several randomly drawn batches\n",
    "    for k in range(eval_iters):\n",
    "        data = next(data_iter, None)\n",
    "        if data is None:\n",
    "            data_iter = iter(data_loader)\n",
    "            data = next(data_iter, None)\n",
    "        inputs, labels = data['inputs'], data['labels']  # (B, T)\n",
    "        logits = model(inputs)                           # (B, T, vs)\n",
    "        # cross_entropy expects the class dimension second, hence the transpose; see the PyTorch docs\n",
    "        loss.append(F.cross_entropy(logits.transpose(-2, -1), labels).item())\n",
    "    return torch.tensor(loss).mean().item()"
   ],
   "id": "f4e319c48869b58d",
   "outputs": [],
   "execution_count": 15
  },
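  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "A note on reading the loss values: the per-character cross-entropy $\\mathcal{L}$ corresponds to a perplexity of $e^{\\mathcal{L}}$, roughly the number of characters the model is still \"choosing between\" at each step. A uniform guess over the 98-character vocabulary gives $\\mathcal{L} = \\ln 98 \\approx 4.58$, so a loss near that value means the model has learned essentially nothing yet.\n"
   ],
   "id": "9c4e7d2b5a1f8e30"
  },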
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:26:09.811717Z",
     "start_time": "2025-09-02T08:26:09.806203Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def train_model(model, optimizer, epochs=10):\n",
    "    # record the per-batch training loss\n",
    "    lossi = []\n",
    "    for epoch in range(epochs):\n",
    "        for i, data in enumerate(train_loader, 0):\n",
    "            inputs, labels = data['inputs'], data['labels']  # (B, T)\n",
    "            optimizer.zero_grad()\n",
    "            logits = model(inputs)                           # (B, T, vs)\n",
    "            loss = F.cross_entropy(logits.transpose(-2, -1), labels)\n",
    "            lossi.append(loss.item())\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "        # evaluate the model and print the results\n",
    "        stats = estimate_loss(model)\n",
    "        train_loss = f'train loss {stats[\"train\"]:.4f}'\n",
    "        test_loss = f'test loss {stats[\"test\"]:.4f}'\n",
    "        print(f'epoch {epoch:>2}: {train_loss}, {test_loss}')\n",
    "    return lossi"
   ],
   "id": "76be022dc7d32e1e",
   "outputs": [],
   "execution_count": 17
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "![](./images/LSTM.png)",
   "id": "fea102a72a64fd9"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:26:12.098574Z",
     "start_time": "2025-09-02T08:26:12.093575Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class LSTMCell(nn.Module):\n",
    "\n",
    "    def __init__(self, input_size, hidden_size):\n",
    "        super().__init__()\n",
    "        self.input_size = input_size\n",
    "        self.hidden_size = hidden_size\n",
    "\n",
    "        combined_size = input_size + hidden_size\n",
    "\n",
    "        self.forget_gate = nn.Linear(combined_size, hidden_size)\n",
    "        self.in_gate = nn.Linear(combined_size, hidden_size)\n",
    "        self.new_cell_state = nn.Linear(combined_size, hidden_size)\n",
    "        self.out_gate = nn.Linear(combined_size, hidden_size)\n",
    "\n",
    "    def forward(self, input, state=None):\n",
    "        # input: (B, I)\n",
    "        # state: ((B, H), (B, H))\n",
    "        B = input.shape[0]\n",
    "        if state is None:\n",
    "            state = self.init_state(B, input.device)\n",
    "        hs, cs = state\n",
    "        combined = torch.concat((input, hs), dim=-1)  # (B, I + H)\n",
    "\n",
    "        # cell-state update\n",
    "        ingate = torch.sigmoid(self.in_gate(combined))\n",
    "        forgetgate = torch.sigmoid(self.forget_gate(combined))\n",
    "        ncs = torch.tanh(self.new_cell_state(combined))\n",
    "\n",
    "        cs = cs * forgetgate + ingate * ncs\n",
    "\n",
    "        # hidden-state update\n",
    "        outgate = torch.sigmoid(self.out_gate(combined))\n",
    "        hs = torch.tanh(cs) * outgate\n",
    "        return hs, cs\n",
    "\n",
    "    def init_state(self, B, device):\n",
    "        hs = torch.zeros((B, self.hidden_size), device=device)\n",
    "        cs = torch.zeros((B, self.hidden_size), device=device)\n",
    "        return hs, cs"
   ],
   "id": "3a8e6236d941b8fd",
   "outputs": [],
   "execution_count": 18
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:26:14.835251Z",
     "start_time": "2025-09-02T08:26:14.827347Z"
    }
   },
   "cell_type": "code",
   "source": [
    "l_cell = LSTMCell(3, 4)\n",
    "x = torch.randn(5, 3)\n",
    "a, b = l_cell(x)\n",
    "a.shape, b.shape"
   ],
   "id": "28912072c8dcadbd",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([5, 4]), torch.Size([5, 4]))"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 19
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:26:16.570361Z",
     "start_time": "2025-09-02T08:26:16.565857Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class LSTM(nn.Module):\n",
    "\n",
    "    def __init__(self, input_size, hidden_size):\n",
    "        super().__init__()\n",
    "        self.cell = LSTMCell(input_size, hidden_size)\n",
    "\n",
    "    def forward(self, input, state=None):\n",
    "        # input:  (B, T, C)\n",
    "        # state:  ((B, H), (B, H))\n",
    "        # out:    (B, T, H)\n",
    "        B, T, C = input.shape\n",
    "        re = []\n",
    "        for i in range(T):\n",
    "            state = self.cell(input[:, i, :], state)\n",
    "            re.append(state[0])\n",
    "        return torch.stack(re, dim=1)  # (B, T, H)"
   ],
   "id": "ab105187c4e040bc",
   "outputs": [],
   "execution_count": 20
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:27:26.496225Z",
     "start_time": "2025-09-02T08:27:26.467460Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def test_lstm():\n",
    "    '''\n",
    "    Check our LSTM implementation against PyTorch's nn.LSTM.\n",
    "    '''\n",
    "    # randomly choose the model configuration\n",
    "    B, T, input_size, hidden_size, num_layers = torch.randint(1, 20, (5,)).tolist()\n",
    "    ref_model = nn.LSTM(input_size, hidden_size, num_layers=num_layers, batch_first=True)\n",
    "    # random inputs and initial states\n",
    "    inputs = torch.randn(B, T, input_size)\n",
    "    hs, cs = torch.randn((2 * num_layers, B, hidden_size)).chunk(2, 0)\n",
    "    re = inputs\n",
    "    # copy the reference model's parameters layer by layer\n",
    "    for layer_index in range(num_layers):\n",
    "        l = ref_model.all_weights[layer_index]\n",
    "        if layer_index == 0:\n",
    "            model = LSTM(input_size, hidden_size)\n",
    "        else:\n",
    "            model = LSTM(hidden_size, hidden_size)\n",
    "        i, f, c, o = torch.cat((l[0], l[1]), dim=1).chunk(4, 0)\n",
    "        ib, fb, cb, ob = (l[2] + l[3]).chunk(4, 0)\n",
    "        # load the parameters into our implementation\n",
    "        model.cell.in_gate.weight = nn.Parameter(i)\n",
    "        model.cell.in_gate.bias = nn.Parameter(ib)\n",
    "        model.cell.forget_gate.weight = nn.Parameter(f)\n",
    "        model.cell.forget_gate.bias = nn.Parameter(fb)\n",
    "        model.cell.new_cell_state.weight = nn.Parameter(c)\n",
    "        model.cell.new_cell_state.bias = nn.Parameter(cb)\n",
    "        model.cell.out_gate.weight = nn.Parameter(o)\n",
    "        model.cell.out_gate.bias = nn.Parameter(ob)\n",
    "        # compute this layer's hidden states\n",
    "        re = model(re, (hs[layer_index], cs[layer_index]))\n",
    "    ref_re, _ = ref_model(inputs, (hs, cs))\n",
    "    # verify that the last layer's hidden states match the reference\n",
    "    out = torch.all(torch.abs(re - ref_re) < 1e-4)\n",
    "    return out, (B, T, input_size, hidden_size, num_layers)\n",
    "\n",
    "test_lstm()"
   ],
   "id": "fd4e4e51791738e1",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor(True), (12, 8, 15, 11, 6))"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 22
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:53:06.314940Z",
     "start_time": "2025-09-02T08:53:06.310537Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class CharLSTM(nn.Module):  # character-level language model\n",
    "\n",
    "    def __init__(self, vs):\n",
    "        super().__init__()\n",
    "\n",
    "        self.emb_size = 256\n",
    "        self.hidden_size = 128\n",
    "        self.emb = nn.Embedding(vs, self.emb_size)\n",
    "        self.dp = nn.Dropout(0.4)\n",
    "        self.lstm1 = LSTM(self.emb_size, self.hidden_size)\n",
    "        self.ln1 = nn.LayerNorm(self.hidden_size)  # two learnable parameters: gamma (weight) scales, beta (bias) shifts\n",
    "        self.lstm2 = LSTM(self.hidden_size, self.hidden_size)\n",
    "        self.ln2 = nn.LayerNorm(self.hidden_size)\n",
    "        self.lstm3 = LSTM(self.hidden_size, self.hidden_size)\n",
    "        self.ln3 = nn.LayerNorm(self.hidden_size)\n",
    "        self.lm = nn.Linear(self.hidden_size, vs)\n",
    "\n",
    "    def forward(self, x):\n",
    "        # x: (B, T)\n",
    "        embeddings = self.emb(x)\n",
    "        h = self.ln1(self.dp(self.lstm1(embeddings)))  # (B, T, H)\n",
    "        h = self.ln2(self.dp(self.lstm2(h)))\n",
    "        h = self.ln3(self.dp(self.lstm3(h)))\n",
    "        output = self.lm(h)\n",
    "        return output"
   ],
   "id": "98ace9ed494f56d6",
   "outputs": [],
   "execution_count": 27
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:53:07.443147Z",
     "start_time": "2025-09-02T08:53:07.415671Z"
    }
   },
   "cell_type": "code",
   "source": [
    "c_model = CharLSTM(len(tokenizer.char2ind)).to(device)\n",
    "c_model"
   ],
   "id": "e5943dbb4702e0db",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "CharLSTM(\n",
       "  (emb): Embedding(98, 256)\n",
       "  (dp): Dropout(p=0.4, inplace=False)\n",
       "  (lstm1): LSTM(\n",
       "    (cell): LSTMCell(\n",
       "      (forget_gate): Linear(in_features=384, out_features=128, bias=True)\n",
       "      (in_gate): Linear(in_features=384, out_features=128, bias=True)\n",
       "      (new_cell_state): Linear(in_features=384, out_features=128, bias=True)\n",
       "      (out_gate): Linear(in_features=384, out_features=128, bias=True)\n",
       "    )\n",
       "  )\n",
       "  (ln1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
       "  (lstm2): LSTM(\n",
       "    (cell): LSTMCell(\n",
       "      (forget_gate): Linear(in_features=256, out_features=128, bias=True)\n",
       "      (in_gate): Linear(in_features=256, out_features=128, bias=True)\n",
       "      (new_cell_state): Linear(in_features=256, out_features=128, bias=True)\n",
       "      (out_gate): Linear(in_features=256, out_features=128, bias=True)\n",
       "    )\n",
       "  )\n",
       "  (ln2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
       "  (lstm3): LSTM(\n",
       "    (cell): LSTMCell(\n",
       "      (forget_gate): Linear(in_features=256, out_features=128, bias=True)\n",
       "      (in_gate): Linear(in_features=256, out_features=128, bias=True)\n",
       "      (new_cell_state): Linear(in_features=256, out_features=128, bias=True)\n",
       "      (out_gate): Linear(in_features=256, out_features=128, bias=True)\n",
       "    )\n",
       "  )\n",
       "  (ln3): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
       "  (lm): Linear(in_features=128, out_features=98, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 28
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T08:53:13.332449Z",
     "start_time": "2025-09-02T08:53:08.958849Z"
    }
   },
   "cell_type": "code",
   "source": [
    "context = torch.tensor(tokenizer.encode('def'), device=device).unsqueeze(0)\n",
    "print(''.join(tokenizer.decode(generate(c_model, context, tokenizer))))"
   ],
   "id": "831da266f1db3208",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "def*ZKV\n",
      "oo|(\"YA{ByE G|uw=3<1'L$?Q9NN[{/Q=CK|AM:iKcam;+Q3m<sA!gW`$ö8Nx!q9T3\"yMm5Za)'c~5\\rm&B\"T{r\n",
      "c\"tM=^Dax1z#<|e|>\n"
     ]
    }
   ],
   "execution_count": 29
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T09:03:30.596013Z",
     "start_time": "2025-09-02T09:03:28.390093Z"
    }
   },
   "cell_type": "code",
   "source": "estimate_loss(c_model)",
   "id": "1a62aa69a0d9c5b9",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'train': 4.643476486206055, 'test': 4.641425132751465}"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 30
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T09:46:14.576181Z",
     "start_time": "2025-09-02T09:03:42.141182Z"
    }
   },
   "cell_type": "code",
   "source": "l = train_model(c_model, optim.Adam(c_model.parameters(), lr=learning_rate))",
   "id": "ef9d841fb8b7659e",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch  0: train loss 1.2523, test loss 1.4218\n",
      "epoch  1: train loss 1.1358, test loss 1.3298\n",
      "epoch  2: train loss 1.0366, test loss 1.2401\n",
      "epoch  3: train loss 0.9966, test loss 1.2153\n",
      "epoch  4: train loss 0.9600, test loss 1.2028\n",
      "epoch  5: train loss 0.9417, test loss 1.1896\n",
      "epoch  6: train loss 0.9280, test loss 1.1952\n",
      "epoch  7: train loss 0.9188, test loss 1.1848\n",
      "epoch  8: train loss 0.8910, test loss 1.1795\n",
      "epoch  9: train loss 0.8898, test loss 1.1726\n"
     ]
    }
   ],
   "execution_count": 31
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-02T12:43:03.067598Z",
     "start_time": "2025-09-02T12:42:33.899065Z"
    }
   },
   "cell_type": "code",
   "source": [
    "context = torch.tensor(tokenizer.encode('def'), device=device).unsqueeze(0)\n",
    "print(''.join(tokenizer.decode(generate(c_model, context, tokenizer))))"
   ],
   "id": "ffe4f1e45769acca",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "def quote=Value, size=None, partitionFunc=None, UDFType=None, options=None):\n",
      "        \"\"\"\n",
      "        Create a DataFrame is not None:\n",
      "            if not self._sc.startSize():\n",
      "                return self._jdf.sample()\n",
      "        jrdd = self.mapPartitionsWithIndex()\n",
      "                return starts[0] = np.memory.a\n"
     ]
    }
   ],
   "execution_count": 32
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
