{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "75569c76",
   "metadata": {},
   "source": [
    "d2l uses character-level tokens. Since I am working with someone else's tokenizer, I will train the RNN on word-level tokens instead.\n",
    "\n",
    "After all, doing something more than the book asks is what makes the learning effective.\n",
    "\n",
    "First, a look at the brief information-theory overview d2l provides. Its terminology is, how to put it, loose enough that the value of measure theory becomes obvious.\n",
    "\n",
    "In short, information entropy is introduced here mostly as an algebraic construction, but one aligned with reality, which is why it works so well.\n",
    "\n",
    "Joint entropy and mutual information are covered only qualitatively. KL divergence is excellent: it not only gives a simple measure of the difference between two distributions, but also explains numerically why cross-entropy is an effective loss. (The cross-entropy formula alone does not reveal what it is expressing.)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "092418c0",
   "metadata": {},
   "source": [
    "The book also skimps on explaining this formula: $-\\log P(w_i \\mid w_1,w_2,\\dots,w_{i-1})$ is the cross-entropy obtained when the \"true\" distribution is the one-hot $Q : \\begin{cases} Q(w_i \\mid w_1,w_2,\\dots,w_{i-1}) = 1 \\\\ Q(\\text{otherwise}) = 0 \\end{cases}$, which is why it looks so simple.\n",
    "\n",
    "A language model trained under this loss does feel a little suspect, though..."
   ]
  },
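  {
   "cell_type": "markdown",
   "id": "ce1check",
   "metadata": {},
   "source": [
    "A quick sanity check of that claim (my own sketch, not from the book): when the \"true\" distribution $Q$ puts all its mass on the observed token, the cross-entropy $H(Q, P) = -\\sum_i Q(i) \\log P(i)$ collapses to a single term, $-\\log P(w_i)$.\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "# Model's predicted distribution over a toy 4-token vocabulary\n",
    "P = [0.1, 0.6, 0.2, 0.1]\n",
    "# The observed next token has index 1, so Q is one-hot at position 1\n",
    "Q = [0.0, 1.0, 0.0, 0.0]\n",
    "\n",
    "# Cross-entropy H(Q, P); the q > 0 guard skips the zero-probability terms\n",
    "cross_entropy = -sum(q * math.log(p) for q, p in zip(Q, P) if q > 0)\n",
    "\n",
    "# With a one-hot Q this is exactly -log P(w_i)\n",
    "assert abs(cross_entropy - (-math.log(P[1]))) < 1e-12\n",
    "```"
   ]
  },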
  {
   "cell_type": "code",
   "execution_count": 54,
   "id": "ae6e4362",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([123, 256]) torch.Size([1, 256])\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "import json\n",
    "import torch.nn.functional as F\n",
    "\n",
    "\n",
    "# Load the vocabulary from file\n",
    "with open(\"vocab.json\", \"r\", encoding=\"utf-8\") as f:\n",
    "    loaded_vocab = json.load(f)\n",
    "\n",
    "context_len = 200\n",
    "num_hiddens = 256\n",
    "# Shape check: with an unbatched input of shape (seq_len, input_size), nn.RNN\n",
    "# returns (seq_len, hidden_size) outputs and a (num_layers, hidden_size) final state\n",
    "model = nn.RNN(input_size=context_len, hidden_size=num_hiddens, num_layers=1, nonlinearity='relu')\n",
    "\n",
    "X = torch.zeros([123, context_len])\n",
    "print(model(X)[0].shape, model(X)[1].shape)\n",
    "# So the first return value really is just the per-step hidden states"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "c190a35e",
   "metadata": {},
   "outputs": [],
   "source": [
    "class RNNmodel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        # batch_first=True would not change the layout of hx, so keep batch_first=False\n",
    "        self.hidden = nn.RNN(input_size=len(loaded_vocab), hidden_size=num_hiddens, num_layers=1, nonlinearity='relu', batch_first=False)\n",
    "        self.linear = nn.Linear(in_features=num_hiddens, out_features=len(loaded_vocab))\n",
    "\n",
    "    def forward(self, x, ht):  # multi-step inference, so ht must be carried across calls\n",
    "        # x: (batch, seq_len) token ids -> one-hot (seq_len, batch, vocab_size)\n",
    "        X = nn.functional.one_hot(x.T.long(), len(loaded_vocab))\n",
    "        X = X.to(torch.float32)\n",
    "        Y, ht = self.hidden(X, ht)\n",
    "        # Project the hidden states back to vocabulary logits\n",
    "        output = self.linear(Y)\n",
    "        return output, ht"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e84233cf",
   "metadata": {},
   "source": [
    "An untrained RNN language model turns into a super-repeater whose cycle length gradually collapses to 1. Try swapping the initial string for something like time, in, or travell. (Strictly speaking the vocabulary has no traveler, though it does have traveller.)\n",
    "\n",
    "```text\n",
    "travell whatever through whatever ##ept third beauty whatever whatever whatever whatever ##ept third beauty whatever whatever whatever whatever ##ept third\n",
    "traveller alone alone ##ny ##agne ##ounding ##ny ##agne ##ounding ##ny ##agne ##ounding ##ny ##agne ##ounding ##ny ##agne ##ounding ##ny ##agne ##ounding\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 105,
   "id": "5a1bc171",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2, 354, 3]\n",
      "traveller\n",
      "traveller stop\n",
      "traveller stop altogether\n",
      "traveller stop altogether altogether\n",
      "traveller stop altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n",
      "traveller stop altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether altogether\n"
     ]
    }
   ],
   "source": [
    "from tokenizers import BertWordPieceTokenizer\n",
    "import numpy as np\n",
    "\n",
    "device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else \"cpu\"\n",
    "net = RNNmodel().to(device=device)\n",
    "tokenizer = BertWordPieceTokenizer(vocab=loaded_vocab)\n",
    "\n",
    "text = \"traveller\"\n",
    "ht = None\n",
    "X = torch.zeros([1, 3], dtype=torch.long)\n",
    "\n",
    "ids = tokenizer.encode(sequence=text).ids\n",
    "print(ids)\n",
    "print(tokenizer.decode(ids))\n",
    "# If you look closely at ids, there are actually three tokens: besides the word\n",
    "# itself, \"[CLS]\" and \"[SEP]\", BERT's BOS and EOS markers\n",
    "\n",
    "X[0] = torch.tensor(ids, dtype=torch.long)\n",
    "\n",
    "X = X.to(device=device)\n",
    "for i in range(20):\n",
    "    output, ht = net(X, ht)\n",
    "\n",
    "    # output has shape (seq_len, batch, vocab); greedily pick the last step's argmax\n",
    "    newid = torch.argmax(F.log_softmax(output[-1][0], dim=0))\n",
    "    text = text + ' ' + tokenizer.decode([int(newid)])\n",
    "    print(text)\n",
    "    newids = tokenizer.encode(text).ids\n",
    "    X = torch.zeros([1, len(newids)], dtype=torch.long).to(device=device)\n",
    "    X[0] = torch.tensor(newids, dtype=torch.long)\n"
   ]
  },
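  {
   "cell_type": "markdown",
   "id": "samplenote",
   "metadata": {},
   "source": [
    "One reason the untrained model loops: greedy argmax decoding always picks the same continuation once the hidden state settles. A common remedy (my own sketch, not part of the book's recipe, and the helper name is made up) is to sample from the softmax instead, optionally sharpened by a temperature:\n",
    "\n",
    "```python\n",
    "import math\n",
    "import random\n",
    "\n",
    "def sample_from_logits(logits, temperature=1.0, rng=random):\n",
    "    # Softmax with temperature; lower temperature -> closer to argmax\n",
    "    scaled = [l / temperature for l in logits]\n",
    "    m = max(scaled)  # subtract the max for numerical stability\n",
    "    exps = [math.exp(s - m) for s in scaled]\n",
    "    total = sum(exps)\n",
    "    probs = [e / total for e in exps]\n",
    "    # Draw a token id from the distribution instead of taking the argmax\n",
    "    return rng.choices(range(len(probs)), weights=probs, k=1)[0]\n",
    "```\n",
    "\n",
    "Replacing the `torch.argmax(...)` line with a draw like this on `output[-1][0].tolist()` breaks the fixed cycles, at the cost of non-deterministic output."
   ]
  },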
  {
   "cell_type": "code",
   "execution_count": 113,
   "id": "635f2186",
   "metadata": {},
   "outputs": [],
   "source": [
    "def seq_data_iter_random(corpus, batch_size, num_steps):  #@save\n",
    "    # Partition the sequence starting from a random offset\n",
    "    corpus = corpus[np.random.randint(0, num_steps - 1):]\n",
    "    # Subtract 1 because we need room for the labels\n",
    "    num_subseqs = (len(corpus) - 1) // num_steps\n",
    "    # Starting indices of the subsequences of length num_steps\n",
    "    initial_indices = list(range(0, num_subseqs * num_steps, num_steps))\n",
    "    # With random sampling, subsequences from two adjacent minibatches\n",
    "    # are not necessarily adjacent in the original sequence\n",
    "    np.random.shuffle(initial_indices)\n",
    "\n",
    "    def data(pos):\n",
    "        # Return the subsequence of length num_steps starting at pos\n",
    "        return corpus[pos: pos + num_steps]\n",
    "\n",
    "    num_batches = num_subseqs // batch_size\n",
    "    for i in range(0, batch_size * num_batches, batch_size):\n",
    "        # initial_indices holds the shuffled starting indices of the subsequences\n",
    "        initial_indices_per_batch = initial_indices[i: i + batch_size]\n",
    "        X = [data(j) for j in initial_indices_per_batch]\n",
    "        Y = [data(j + 1) for j in initial_indices_per_batch]\n",
    "        yield torch.tensor(X), torch.tensor(Y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 114,
   "id": "4c976835",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "X:  tensor([[ 6,  7,  8,  9, 10],\n",
      "        [11, 12, 13, 14, 15]]) \n",
      "Y: tensor([[ 7,  8,  9, 10, 11],\n",
      "        [12, 13, 14, 15, 16]])\n",
      "X:  tensor([[ 1,  2,  3,  4,  5],\n",
      "        [21, 22, 23, 24, 25]]) \n",
      "Y: tensor([[ 2,  3,  4,  5,  6],\n",
      "        [22, 23, 24, 25, 26]])\n",
      "X:  tensor([[16, 17, 18, 19, 20],\n",
      "        [26, 27, 28, 29, 30]]) \n",
      "Y: tensor([[17, 18, 19, 20, 21],\n",
      "        [27, 28, 29, 30, 31]])\n"
     ]
    }
   ],
   "source": [
    "my_seq = list(range(35))\n",
    "for X, Y in seq_data_iter_random(my_seq, batch_size=2, num_steps=5):\n",
    "    print('X: ', X, '\\nY:', Y)"
   ]
  },
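  {
   "cell_type": "markdown",
   "id": "shiftcheck",
   "metadata": {},
   "source": [
    "The invariant to notice in the output above (a check of my own, in plain Python): each label sequence `Y` is its input `X` shifted by one step, so the target at position $t$ is the token at position $t+1$.\n",
    "\n",
    "```python\n",
    "corpus = list(range(35))\n",
    "num_steps = 5\n",
    "\n",
    "# One subsequence starting at an arbitrary position, as the iterator does\n",
    "pos = 6\n",
    "X = corpus[pos: pos + num_steps]\n",
    "Y = corpus[pos + 1: pos + 1 + num_steps]\n",
    "\n",
    "# The label at step t is the input token at step t + 1\n",
    "assert Y[:-1] == X[1:]\n",
    "```"
   ]
  },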
  {
   "cell_type": "code",
   "execution_count": 117,
   "id": "b1033aee",
   "metadata": {},
   "outputs": [],
   "source": [
    "def seq_data_iter_sequential(corpus, batch_size, num_steps):  #@save\n",
    "    # Partition the sequence starting from a random offset\n",
    "    offset = np.random.randint(0, num_steps)\n",
    "    num_tokens = ((len(corpus) - offset - 1) // batch_size) * batch_size\n",
    "    Xs = torch.tensor(corpus[offset: offset + num_tokens])\n",
    "    Ys = torch.tensor(corpus[offset + 1: offset + 1 + num_tokens])\n",
    "    Xs, Ys = Xs.reshape(batch_size, -1), Ys.reshape(batch_size, -1)\n",
    "    num_batches = Xs.shape[1] // num_steps\n",
    "    for i in range(0, num_steps * num_batches, num_steps):\n",
    "        X = Xs[:, i: i + num_steps]\n",
    "        Y = Ys[:, i: i + num_steps]\n",
    "        yield X, Y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 118,
   "id": "cf2fd236",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "X:  tensor([[ 0,  1,  2,  3,  4],\n",
      "        [17, 18, 19, 20, 21]]) \n",
      "Y: tensor([[ 1,  2,  3,  4,  5],\n",
      "        [18, 19, 20, 21, 22]])\n",
      "X:  tensor([[ 5,  6,  7,  8,  9],\n",
      "        [22, 23, 24, 25, 26]]) \n",
      "Y: tensor([[ 6,  7,  8,  9, 10],\n",
      "        [23, 24, 25, 26, 27]])\n",
      "X:  tensor([[10, 11, 12, 13, 14],\n",
      "        [27, 28, 29, 30, 31]]) \n",
      "Y: tensor([[11, 12, 13, 14, 15],\n",
      "        [28, 29, 30, 31, 32]])\n"
     ]
    }
   ],
   "source": [
    "for X, Y in seq_data_iter_sequential(my_seq, batch_size=2, num_steps=5):\n",
    "    print('X: ', X, '\\nY:', Y)"
   ]
  },
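  {
   "cell_type": "markdown",
   "id": "seqadjcheck",
   "metadata": {},
   "source": [
    "Unlike random sampling, sequential partitioning keeps each row contiguous across batches: row $r$ of batch $b+1$ starts exactly where row $r$ of batch $b$ left off, which is what lets the hidden state be carried over. A plain-Python re-derivation with a fixed offset (my own check, not the function above):\n",
    "\n",
    "```python\n",
    "corpus = list(range(35))\n",
    "batch_size, num_steps, offset = 2, 5, 0\n",
    "\n",
    "num_tokens = ((len(corpus) - offset - 1) // batch_size) * batch_size\n",
    "row_len = num_tokens // batch_size\n",
    "# Split the corpus into batch_size long contiguous rows\n",
    "rows = [corpus[offset + r * row_len: offset + (r + 1) * row_len] for r in range(batch_size)]\n",
    "\n",
    "# Cut every row into num_steps-sized chunks; chunk b of each row forms batch b\n",
    "num_batches = row_len // num_steps\n",
    "batches = [[row[b * num_steps: (b + 1) * num_steps] for row in rows] for b in range(num_batches)]\n",
    "\n",
    "# Row r of batch b + 1 continues row r of batch b\n",
    "for b in range(num_batches - 1):\n",
    "    for r in range(batch_size):\n",
    "        assert batches[b + 1][r][0] == batches[b][r][-1] + 1\n",
    "```"
   ]
  },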
  {
   "cell_type": "markdown",
   "id": "abe0ff96",
   "metadata": {},
   "source": [
    "Honestly, I am too lazy to write more, ha. Random truncation also shows up in path tracing (Russian-roulette termination)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
