{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 自然语言处理"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1 语言表示与语言模型\n",
    "### 1.1 词嵌入(word embedding)\n",
    "图像分类中使用one-hot编码表示不同的类，但在自然语言中，字典(字表)很大如果使用one-hot编码会造成很大的数据稀疏性，并且该编码无法表达单词的语义相似性。自然语言中存在**语义鸿沟**问题，如“麦克风”和“话筒”，无法从字面看出两个单词其实在表达同样的东西，所以引入了基于神经网络的分布式表示，词嵌入(word embedding)或词向量(word vectors)\n",
    "\n",
    "通过一些训练文本，训练词向量模型，得到单词的较低维度的分布式表示，优点如下：\n",
    "1. 词向量的夹角越小，表达的语义更加接近(一定程度上解决了语义鸿沟问题)\n",
    "2. 高维向量中元素不是0或1的形式，数据更加稠密\n",
    "\n",
    "注意：word2vec是谷歌在2013年提出的训练词向量并获取词嵌入的**工具**"
   ]
  },
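  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The \"smaller angle = closer meaning\" point above can be sketched with cosine similarity. This is a minimal illustration using hand-made vectors, not trained embeddings; the numbers are invented for the example:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def cos_sim(a, b):\n",
    "    # cosine of the angle between two word vectors\n",
    "    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n",
    "\n",
    "mic1 = np.array([0.9, 0.1, 0.8])   # hypothetical vector for 'microphone'\n",
    "mic2 = np.array([0.8, 0.2, 0.9])   # hypothetical vector for 'mic'\n",
    "cat  = np.array([0.1, 0.9, 0.0])   # hypothetical vector for 'cat'\n",
    "\n",
    "print(cos_sim(mic1, mic2))  # close to 1: similar meaning\n",
    "print(cos_sim(mic1, cat))   # much smaller: unrelated meaning\n",
    "```"
   ]
  },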
  {
   "cell_type": "code",
   "execution_count": 416,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import torch\n",
    "import numpy as np\n",
    "from torch import nn, optim\n",
    "import torch.nn.functional as F\n",
    "import matplotlib.pyplot as plt\n",
    "from torch.autograd import Variable\n",
    "from torch.utils.data import DataLoader\n",
    "from torchvision import transforms, datasets, models\n",
    "\n",
    "# 优先使用GPU\n",
    "use_cuda = torch.cuda.is_available()\n",
    "device = torch.device('cuda' if use_cuda else 'cpu')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 417,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.测试单词的向量类型: torch.LongTensor\n",
      "2.词嵌入的结果: tensor([ 1.1618,  0.0866,  0.3452, -0.4999,  0.3040], grad_fn=<EmbeddingBackward>) torch.Size([5])\n"
     ]
    }
   ],
   "source": [
    "# 词嵌入，将one-hot编码的单词变为高维度的向量表示\n",
    "word_to_ix = {'hello':0, 'world':1, 'python':2, 'AI':3}\n",
    "embeds = nn.Embedding(3, 5)     # Embedding(m,n) m:字典，所有单词数目 n:嵌入维度\n",
    "test_idx = torch.tensor(word_to_ix['python'])   # 转换为tensor\n",
    "print('1.测试单词的向量类型:', test_idx.type())\n",
    "test_embed = embeds(test_idx)   # 进行词嵌入\n",
    "print('2.词嵌入的结果:', test_embed, test_embed.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结:\n",
    "- `nn.Embedding(m,n)`是将one-hot编码嵌入到相对较低的维度，用float型数据表示，通常词典长度远大于嵌入的维度，但嵌入的维度仍可以看做是高维度，该语句只是通过线性变换将稀疏的数据转换为稠密的低维数据，所以线性变换的权重对应的参数也要进行参数更新，即**词向量也要进行参数的更新**"
   ]
  },
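  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The lookup-as-linear-map view can be checked with plain numpy: multiplying a one-hot row vector by the weight matrix just selects one row, which is exactly what an embedding lookup does. A small sketch (the weights are random, mirroring an untrained `nn.Embedding`):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "vocab_size, embed_dim = 4, 5\n",
    "W = np.random.randn(vocab_size, embed_dim)  # embedding weight [vocab_size, embed_dim]\n",
    "\n",
    "idx = 2                                     # word index, e.g. 'python'\n",
    "one_hot = np.zeros(vocab_size)\n",
    "one_hot[idx] = 1.0\n",
    "\n",
    "# one-hot times W is identical to indexing row idx of W\n",
    "assert np.allclose(one_hot @ W, W[idx])\n",
    "```"
   ]
  },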
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.2 语言建模 (Language Model)\n",
    "(1) **N-Gram模型**\n",
    "\n",
    "一句话$T$由$w_1,w_2,...w_n$组成，通过前面的词推断后面的词，用条件概率将词联系到一起，公式如下:\n",
    "\n",
    "<center>$P(T)=P(w_1)P(w_2|w_1)...P(w_n|w_{n-1}w_{n-2}...w_2w_1)$</center>\n",
    "\n",
    "存在问题：预测一个词需要前面所有的词来计算\n",
    "\n",
    "解决方法：引入马尔科夫假设，某个单词只有前面几个词有关系，并不是前面的所有词，一般可以认为距离接近的词之间的联系比距离较远的词联系紧密，引入马尔科夫假设后的N-Gram模型有:\n",
    "1. 一元模型(unigram model)：单词间是独立的概率\n",
    "2. 二元模型(bigram model)：前一个个单词推断后一个单词\n",
    "3. 三元模型(trigram model)：前两个单词推断后一个单词\n",
    "\n",
    "(2) **词袋模型 (Continuous Bag-of-words model,CBOW)**\n",
    "\n",
    "一句话中的第$n$个单词，可以由它前面几个和后面几个单词推断出来，CBOW是通过上下文来预测中间目标词的模型，该模型可以看作N-Gram模型的加强版\n",
    "\n",
    "<img src=\"./image/cbow.jpg\" width=\"50%\" height=\"50%\">\n",
    "\n",
    "(3) **Skip-Gram model (SG)**\n",
    "\n",
    "与CBOW相反，是通过中间词来预测上下文\n",
    "<img src=\"./image/skip-gram.jpg\" width=\"50%\" height=\"50%\">\n",
    "注意：语言模型可以通过分层softmax(hierarchical softmax)和负采样(negative sampling)来优化"
   ]
  },
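  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The bigram case above can be made concrete by estimating $P(w_n \\mid w_{n-1})$ from counts. This is a hand-rolled sketch on an invented three-sentence corpus, not the news data used below:\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "corpus = ['the cat sat', 'the dog sat', 'the cat ran']\n",
    "pairs = []\n",
    "for sent in corpus:\n",
    "    words = sent.split()\n",
    "    pairs += list(zip(words[:-1], words[1:]))   # adjacent word pairs\n",
    "\n",
    "pair_counts = Counter(pairs)\n",
    "first_counts = Counter(p[0] for p in pairs)\n",
    "\n",
    "def bigram_prob(prev, word):\n",
    "    # maximum-likelihood estimate P(word | prev) = count(prev, word) / count(prev)\n",
    "    return pair_counts[(prev, word)] / first_counts[prev]\n",
    "\n",
    "print(bigram_prob('the', 'cat'))   # 2 of the 3 words after 'the' are 'cat'\n",
    "print(bigram_prob('cat', 'sat'))\n",
    "```"
   ]
  },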
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (1) N-Gram"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 418,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.字典的长度: 621\n",
      "2.训练文本的长度: 1413\n",
      "3.样本字符串: Jack Ma founder and chairman of Chinese e-commerce giant Alibaba will leave his position as company chairman on Sept 10\n"
     ]
    }
   ],
   "source": [
    "# 1.导入news数据，从China Daily上找的3篇\n",
    "import re  # 导入正则匹配来切分字符串\n",
    "\n",
    "def read_file(file_path):\n",
    "    \"\"\"file_path: 读入文本数据的文件路径\n",
    "       vocabulary：返回训练文本的字典\n",
    "       file_idx：返回训练文本对应的字典序号\n",
    "    \"\"\"\n",
    "    assert os.path.exists(file_path), 'file is not exist!'\n",
    "    with open(file_path) as file:\n",
    "        file_str = file.read().strip()   # 读入为字符格式\n",
    "        split_rule = r'[\\s\\.\\,]+'        # 匹配至少一个空格、逗号或句号\n",
    "        file_list = re.split(split_rule, file_str) # 按规则分割文本字符串->list          \n",
    "        file_set = set(file_list)        # 单词去重\n",
    "        # 创建字典 word:index\n",
    "        vocabulary = {word:i for i, word in enumerate(file_set)}  \n",
    "        file_idx = [vocabulary[word] for word in file_list]\n",
    "#         print(vocabulary)\n",
    "#         print(file_list)\n",
    "#         print(file_idx)\n",
    "        return vocabulary, file_idx\n",
    "\n",
    "vocab, data = read_file('./data/news.txt')\n",
    "idx_to_word = {idx:word for word, idx in vocab.items()}   # 创建index:word的字典\n",
    "print('1.字典的长度:', len(vocab))            # 将所有的词都作为字典了\n",
    "print('2.训练文本的长度:', len(data))\n",
    "samples_str = ' '.join([idx_to_word[index] for index in data[:20]])\n",
    "print('3.样本字符串:', samples_str)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "以下数据生成器程序参考:[stikbuf/Language_Modeling](https://github.com/stikbuf/Language_Modeling/blob/master/Keras_Character_Aware_Neural_Language_Models.ipynb)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 419,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2.创建N-Gram的数据集 x(n-i),...,x(n-2),x(n-1)=>x(n)，前i个单词推断当前单词\n",
    "# 数据生成的思想，先将[0,1,...,len(datasets)-1]索引随机，选出batch_size个索引\n",
    "# 通过该索引使用datasets[i:i+num_infer+1]读取所需数据，但注意索引溢出\n",
    "def create_data(datasets, num_infer, batch_size):\n",
    "    \"\"\"datasets:数据集list形式\n",
    "       num_infer:需要参考的词数量\n",
    "       X:所需的推测数据 [batch, num_infer] numpy数组形式\n",
    "       y:目标数据 [batch, 1]\n",
    "    \"\"\"\n",
    "    while True:\n",
    "        rnd_idx = list(range(len(datasets) - num_infer - 1)) # 保证索引不溢出\n",
    "        np.random.shuffle(rnd_idx)    # 打乱顺序，随机取出一个batch的数据\n",
    "        batch_start = 0               # 按batch_size大小读取数据的索引\n",
    "        X, Y = [], []\n",
    "        while batch_start + batch_size < len(rnd_idx):    # 保证索引不溢出\n",
    "            # 取batch_size个索引\n",
    "            batch_idx = rnd_idx[batch_start:batch_start+batch_size] \n",
    "            temp_data = np.array([datasets[i:i+num_infer+1] for i in batch_idx])\n",
    "#             print(temp_data)\n",
    "\n",
    "            X = temp_data[:, :-1]       # 除去最后一列的所有数据\n",
    "            Y = temp_data[:, -1:]       # 最后一列数据 \n",
    "            batch_start += batch_size   # 方便读取下一个batch索引\n",
    "            yield X, Y\n",
    "        \n",
    "gen = create_data(data, 2, 3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 420,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.训练数据X:\n",
      " [[443 445]\n",
      " [ 20  48]\n",
      " [ 58  28]]\n",
      "2.目标数据y:\n",
      " [[317]\n",
      " [230]\n",
      " [525]]\n"
     ]
    }
   ],
   "source": [
    "x, y = next(gen)\n",
    "print('1.training data X:\\n', x)\n",
    "print('2.target data y:\\n', y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 421,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 3.创建模型\n",
    "class NgramModel(nn.Module):\n",
    "    def __init__(self, vocab_size, context_size, n_dim, hidden_dim=128):\n",
    "        super().__init__()\n",
    "        self.n_word = vocab_size                            # 训练数据的字典大小\n",
    "        self.context_size = context_size\n",
    "        self.n_dim = n_dim\n",
    "        # 嵌入后的维度 [batch, context_size] -> [batch, context_size, n_dim]\n",
    "        self.embedding = nn.Embedding(self.n_word, n_dim)   # 嵌入维度为n_dim\n",
    "        # get [batch, hidden_dim]\n",
    "        self.linear1 = nn.Linear(context_size*n_dim, hidden_dim)\n",
    "        # get [batch, self.n_word] 与字典索引对应\n",
    "        self.linear2 = nn.Linear(hidden_dim, self.n_word)   # 输出字典的维度\n",
    "    def forward(self, x):\n",
    "#         print('x:', x.shape)\n",
    "        embeds = self.embedding(x)\n",
    "         # get [batch,context_size*n_dim]\n",
    "#         print('embeds:', embeds.shape)\n",
    "        embeds = embeds.view(-1, self.context_size*self.n_dim)\n",
    "        out = self.linear1(embeds)       # get [batch, hidden_dim]\n",
    "        out = F.relu(out)\n",
    "        out = self.linear2(out)          # get [batch, n_word]\n",
    "        log_prob = F.log_softmax(out, 1) # 沿第dim=1轴计算，损失函数有softmax，可略\n",
    "        return log_prob    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.输入的数据类型: torch.LongTensor\n",
      "2.模型输出: torch.Size([5, 621]) \n",
      " tensor([[-6.2034, -6.3276, -6.7680,  ..., -6.2702, -6.7856, -6.1416],\n",
      "        [-6.3747, -6.7633, -6.2730,  ..., -6.7733, -6.7820, -6.3854],\n",
      "        [-5.9291, -6.6725, -6.2418,  ..., -6.2424, -6.6727, -6.8264],\n",
      "        [-6.3766, -6.2838, -6.1931,  ..., -6.5251, -6.5047, -6.6470],\n",
      "        [-6.4934, -6.3022, -6.3352,  ..., -6.5213, -6.0210, -7.1104]],\n",
      "       grad_fn=<LogSoftmaxBackward>)\n"
     ]
    }
   ],
   "source": [
    "# 创建模型并进行测试\n",
    "context = 2  # 参考前2各词\n",
    "inputs = np.random.randint(0,len(vocab), [5,2])\n",
    "inputs = torch.from_numpy(inputs).long()   # 必须将数据转换为LongTensor才行\n",
    "print('1.输入的数据类型:', inputs.type())\n",
    "NGram_model = NgramModel(len(vocab), context, 128, hidden_dim=256)\n",
    "outputs = NGram_model(inputs)\n",
    "print('2.模型输出:', outputs.shape, '\\n', outputs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "embedding.weight torch.Size([621, 128])\n",
      "linear1.weight torch.Size([256, 256])\n",
      "linear1.bias torch.Size([256])\n",
      "linear2.weight torch.Size([621, 256])\n",
      "linear2.bias torch.Size([621])\n"
     ]
    }
   ],
   "source": [
    "# 更加模型参数可以看出embedding的参数也要训练\n",
    "for name, m in NGram_model.named_parameters():\n",
    "    print(name, m.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 4.定义损失函数，优化函数\n",
    "criterion = nn.CrossEntropyLoss()  # 网络输出vocab维度，目标vocab维度，类似于分类\n",
    "optimizer = optim.SGD(NGram_model.parameters(), lr=1e-2)#, weight_decay=1e-5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 5.训练网络\n",
    "context = 2              # 参考的单词数目\n",
    "epochs = 50000\n",
    "batch_size = 32\n",
    "def train(model, batch_size, epochs):\n",
    "    model.to(device)     # 优先使用GPU\n",
    "    gen = create_data(data, context, batch_size)  # 创建数据生成器\n",
    "    train_loss = 0\n",
    "    for epoch in range(epochs):\n",
    "        x, y = next(gen)  # 获取训练数据\n",
    "        x, y = map(lambda x:torch.LongTensor(x).to(device), [x, y])\n",
    "        y = y.flatten()\n",
    "        prediction = model(x)\n",
    "#         print(prediction.shape)\n",
    "#         print(y.shape)\n",
    "        loss = criterion(prediction, y)\n",
    "#         train_loss += loss.item()    # 叠加\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        if epoch % 2000 == 0:\n",
    "            print('epoch {} loss {:.4f}'.format(epoch, loss.item()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 0 loss 6.5259\n",
      "epoch 2000 loss 4.1270\n",
      "epoch 4000 loss 2.4980\n",
      "epoch 6000 loss 1.0840\n",
      "epoch 8000 loss 0.3508\n",
      "epoch 10000 loss 0.2623\n",
      "epoch 12000 loss 0.1866\n",
      "epoch 14000 loss 0.2305\n",
      "epoch 16000 loss 0.1297\n",
      "epoch 18000 loss 0.2986\n",
      "epoch 20000 loss 0.2003\n",
      "epoch 22000 loss 0.1110\n",
      "epoch 24000 loss 0.1851\n",
      "epoch 26000 loss 0.0806\n",
      "epoch 28000 loss 0.1409\n",
      "epoch 30000 loss 0.1640\n",
      "epoch 32000 loss 0.1360\n",
      "epoch 34000 loss 0.0835\n",
      "epoch 36000 loss 0.1297\n",
      "epoch 38000 loss 0.3887\n",
      "epoch 40000 loss 0.2297\n",
      "epoch 42000 loss 0.0859\n",
      "epoch 44000 loss 0.2147\n",
      "epoch 46000 loss 0.0837\n",
      "epoch 48000 loss 0.0971\n"
     ]
    }
   ],
   "source": [
    "# 调用训练函数，训练网络\n",
    "train(NGram_model, batch_size, epochs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "------------------------------------------------------------\n",
      "1.测试的输入单词 ['easier', 'and']  输出单词 ['more']\n",
      "2.预测的单词为  more\n",
      "------------------------------------------------------------\n",
      "1.测试的输入单词 ['petrochemical', 'factory']  输出单词 ['with']\n",
      "2.预测的单词为  with\n",
      "------------------------------------------------------------\n",
      "1.测试的输入单词 ['Tesla', 'announced']  输出单词 ['it']\n",
      "2.预测的单词为  it\n",
      "------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "gen = create_data(data, context, batch_size=1)\n",
    "idx_to_word = {idx:word for word, idx in vocab.items()}   # 创建index:word的字典\n",
    "\n",
    "def test(num_samples=1):\n",
    "    print('-'*60)\n",
    "    for i in range(num_samples):\n",
    "        word, label = next(gen)\n",
    "        # print(word)\n",
    "        # print(label)\n",
    "        # [[220 483]] numpy数组数据\n",
    "        inputs_idx = [word[0,i] for i in range(word.shape[1])] \n",
    "        # print(inputs_idx)\n",
    "\n",
    "        print('1.测试的输入单词 {}'.format([idx_to_word[idx] for idx in inputs_idx]), end='')\n",
    "        print('  输出单词 {}'.format([idx_to_word[label[0,0]]]))\n",
    "\n",
    "        word = torch.from_numpy(word).long().to(device)\n",
    "        prediction = NGram_model(word)\n",
    "        pred_label_idx = prediction.max(1)[1].item()\n",
    "        # print(pred_label_idx)\n",
    "        print('2.预测的单词为 ', idx_to_word[pred_label_idx])\n",
    "        print('-'*60)\n",
    "test(3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[1. 2.]\n",
      " [3. 4.]] <class 'numpy.ndarray'>\n",
      "tensor([[1, 2],\n",
      "        [3, 4]]) torch.LongTensor\n",
      "tensor([[1., 2.],\n",
      "        [3., 4.]], dtype=torch.float64) torch.DoubleTensor\n"
     ]
    }
   ],
   "source": [
    "num_np = np.array([[1,2],[3,4]], dtype=np.float64)\n",
    "print(num_np, type(num_np))\n",
    "num_torch1 = torch.LongTensor(num_np)   # 可以直接控制转换的类型\n",
    "print(num_torch1, num_torch1.type())\n",
    "num_torch2 = torch.from_numpy(num_np)   # 转换的tensor类型由numpy数据类型决定\n",
    "print(num_torch2, num_torch2.type())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结:\n",
    "- `CrossEntropyLoss()`在分类任务中标签非one-hot编码下使用，输出维度[batch, out_dim]，目标维度[batch]仅是一维的数组，且里面的元素属于[0,num_classes-1]之间\n",
    "- `torch.Tensor(numpy.ndarray)`和`torch.from_numpy(numpy.ndarray)`都可以将numpy数据转换为tensor，但前者直接控制转换的类型，后者是根据numpy的类型而定\n",
    "- 根据模型的参数，看出embedding层的权重$w$的维度为 **[vocab_size, embedding_dim]**，单词$x$的维度为 **[vocab_size,1]**，通过$x^Tw$可以得到[1,embedding_dim]大小的嵌入向量，或$w^Tx$得到[embedding_dim,1]大小的嵌入向量，注意其中权重$w$的参数**需要训练**\n",
    "- 这里采用了传统的神经网络方法，**NgramModel模型的输出维度为 [batch_size, vocab_size]**，当字典较小时，可以输出字典大小的维度，取值最大的一维索引号可以作为输出单词的索引，进而得到单词，。但当字典很大时如果输出字典大小的维度，会增大计算消耗，所以有**分层softmax(Hierarchical Softmax)**和**负采样(Negative Sampling)**两种方法来处理该问题\n",
    "    \n",
    "- N-Gram模型的原理图如下：\n",
    "\n",
    "参考：[word2vec(CBOW/Skip-gram)](https://www.cnblogs.com/Determined22/p/5804455.html)\n",
    "<img src=\"./image/N-Gram.jpg\" width=\"60%\" height=\"60%\">\n",
    "<center>N-Gram模型的原理图</center>\n",
    "(1) 其中$w_{t-i}, \\ i=1,2,...,n-1$表示单词$w_t$的上下文单词context，这$n-1$个单词，每个单词都经过嵌入矩阵$C$得到一个$m$维度的嵌入向量，这些向量**首位相接**得到长度为$(n-1)*m$的向量$x$，然后经过权重$H_{h\\times(n-1)m}$得到隐层(维度为$h$)，并使用ReLU激活，这对应着程序的linear1部分，\n",
    "\n",
    "(2) 之后再进过权重$U_{V\\times h}$得到维度为$1\\times V$的输出$z$，公式如下所示，$V$表示字典大小，这对应着linear2部分\n",
    "$$z=U\\cdot ReLU(H\\cdot x+d)+b \\tag{1}$$\n",
    "$$\\hat y_t=P(w_t \\mid w_{t-n-1},w_{t-n-2},\\cdots, w_{t-1})=softmax(z_t)=\\frac{e^{z_t}}{\\sum_{k=1}^V e^{z_k}} \\tag{2}$$\n",
    "(3) 因为输出$z$为$1\\times V$的，所以对它使用softmax可以找到$V$个数中最大的一个，即预测词的索引号，**$t$表示目标词的序号**，$\\hat y_t$表示预测词表中第$t$个词$w_t$的概率，该值越大说明预测的越准确，优化的目的使它变大，损失函数优化方向是变小，所以对该值取负对数即可，目标词$w_t$对应的损失如下：\n",
    "$$L=-\\log \\hat y_t \\tag{3}$$\n",
    "模型的损失是将所有中心词的损失求和即可，设$D$表示整个语料，(为了直观常数部分先省略)模型的损失函数如下：\n",
    "$$L=-\\sum_{w_{t-(n-1)}^t \\in D} logsoftmax(z_t) \\tag{4}$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "上述的N-Gram语言模型在训练的同时也得到了词向量，2013年谷歌开源的word2vec工具中包含的CBOW和Skip-gram是以得到词向量为目标的模型，所有没有过多的考虑语言本身的词序信息，下面介绍没有使用优化加速策略的CBOW和Skip-gram模型，输出维度为字典大小"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (2) CBOW"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. 定义生成器产生context词 [w(n-2),w(n-1), w(n+1), w(n+2)]与中心目标词 w(n)\n",
    "# 已知 字典vocab，字典键值对互换idx_to_word，训练数据data\n",
    "def gen_cbow(datasets, batch_size, context):\n",
    "    \"\"\"datasets:读入的数据集\n",
    "       batch_size:每次生成的批次\n",
    "       context:中心词对应的上下文(单边的)，cbow使用两边的2*context来作为上下文\n",
    "    \"\"\"\n",
    "    while True:\n",
    "        # 列表切片[a,b]中b无法取，所以len(datasets)-context-1才行\n",
    "        rnd_idx = list(range(context, len(datasets) - context - 1)) # 保证索引不溢出\n",
    "        np.random.shuffle(rnd_idx)    # 打乱顺序，随机取出一个batch的数据\n",
    "        batch_start = 0               # 按batch_size大小读取数据的索引\n",
    "        X, Y = [], []\n",
    "        while batch_start + batch_size < len(rnd_idx):    # 保证索引不溢出\n",
    "            # 取batch_size个索引\n",
    "            batch_idx = rnd_idx[batch_start:batch_start+batch_size] \n",
    "            temp_data = np.array([datasets[i-context:i+context+1] for i in batch_idx])\n",
    "#             print(temp_data)\n",
    "            X = np.delete(temp_data, context, axis=1)  # 删除context列\n",
    "            Y = temp_data[:, context:context+1]       # 获取context列数据 \n",
    "            batch_start += batch_size   # 方便读取下一个batch索引\n",
    "            yield X, Y\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.生成的上下文词one-hot索引:\n",
      " [[297  10 144 443]\n",
      " [559 108 617 348]\n",
      " [186 138  68 273]]\n",
      "2.生成的中心词one-hot索引:\n",
      " [[123]\n",
      " [ 33]\n",
      " [144]]\n"
     ]
    }
   ],
   "source": [
    "gen = gen_cbow(data, batch_size=3, context=2)\n",
    "x,y = next(gen)\n",
    "print('1.generated context word indices:\\n', x)\n",
    "print('2.generated center word indices:\\n',y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2.定义模型\n",
    "class CBOW(nn.Module):\n",
    "    def __init__(self, vocab_size, embedding_dim):\n",
    "        \"\"\"vocab_size 字典大小\n",
    "           embedding_dim：词向量嵌入的维度\n",
    "           hidden_dim：全连接层维度大小       \n",
    "        \"\"\"\n",
    "        super().__init__()\n",
    "        # 将词表的维度嵌入n_dim内\n",
    "        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.linear = nn.Linear(embedding_dim, vocab_size)\n",
    "\n",
    "    def forward(self, x):\n",
    "        # get [batch,2*context, embedding_dim]，context表示上下文\n",
    "        embeds = self.embedding(x)  \n",
    "        # 将嵌入向量在context维度上求平均(与N-Gram的区别没有使用隐层)\n",
    "        bow = torch.mean(embeds, 1)       # get [batch, embedding_dim]\n",
    "        logits = self.linear(bow)        # get [batch, vocab_size]\n",
    "        return logits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.输入的数据类型: torch.LongTensor\n",
      "2.模型输出: torch.Size([1, 621])\n"
     ]
    }
   ],
   "source": [
    "# 创建模型并测试\n",
    "test_context = 2  # 参考前2各词\n",
    "inputs = np.random.randint(0,len(vocab), [1,2*test_context])\n",
    "inputs = torch.from_numpy(inputs).long()   # 必须将数据转换为LongTensor才行\n",
    "print('1.输入的数据类型:', inputs.type())\n",
    "test_model = CBOW(len(vocab), 128)\n",
    "outputs = test_model(inputs)\n",
    "print('2.模型输出:', outputs.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "batches/epoch: 45.0  max_epochs: 667.0\n"
     ]
    }
   ],
   "source": [
    "# 3.定义损失函数及优化方法\n",
    "context = 5              # 参考的单词数目\n",
    "batches = 30000          # 训练的总批次数\n",
    "batch_size = 32\n",
    "batches_per_epoch = np.ceil(len(data)/batch_size)\n",
    "max_epochs = np.ceil(batches/batches_per_epoch)\n",
    "print('batches/epoch: {}  max_epochs: {}'.format(batches_per_epoch, max_epochs))\n",
    "\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "cbow_model = CBOW(len(vocab), 100)\n",
    "# parameters = nn.utils.clip_grad_norm_(cbow_model.parameters(), max_norm=1)\n",
    "optimizer = optim.SGD(cbow_model.parameters(), lr=1e-1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 0 loss 6.4722\n",
      "epoch 2000 loss 4.1838\n",
      "epoch 4000 loss 3.1784\n",
      "epoch 6000 loss 2.7150\n",
      "epoch 8000 loss 1.7723\n",
      "epoch 10000 loss 1.3701\n",
      "epoch 12000 loss 0.9754\n",
      "epoch 14000 loss 0.7765\n",
      "epoch 16000 loss 0.5984\n",
      "epoch 18000 loss 0.4629\n",
      "epoch 20000 loss 0.3784\n",
      "epoch 22000 loss 0.3291\n",
      "epoch 24000 loss 0.2514\n",
      "epoch 26000 loss 0.2022\n",
      "epoch 28000 loss 0.1979\n",
      "epoch 29999 loss 0.1662\n"
     ]
    }
   ],
   "source": [
    "# 4.训练网络\n",
    "def train(dataloader, model, batch_size, batches):\n",
    "    \"\"\"dataloader：生成批次数据的生成器\n",
    "       model：需要训练的模型\n",
    "       batch_size：批次大小\n",
    "       epochs：训练数据的轮数    \n",
    "    \"\"\"\n",
    "    model.to(device)     # 优先使用GPU\n",
    "    train_loss = 0\n",
    "    for epoch in range(batches):\n",
    "        x, y = next(dataloader)  # 获取训练数据\n",
    "        x, y = map(lambda x:torch.LongTensor(x).to(device), [x, y])\n",
    "        y = y.flatten()          # 把y变为一维数组\n",
    "        prediction = model(x)\n",
    "        loss = criterion(prediction, y)\n",
    "        \n",
    "#         train_loss += loss.item()    # 叠加\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        if epoch % 2000 == 0 or epoch == batches-1:\n",
    "            print('epoch {} loss {:.4f}'.format(epoch, loss.item()))\n",
    "\n",
    "data_gen = gen_cbow(data, batch_size=batch_size, context=context) # 创建数据生成器\n",
    "# cbow_model = CBOW(len(vocab), 256, context, hidden_dim=512)  # 创建模型\n",
    "train(data_gen, cbow_model, batch_size, batches)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "------------------------------------------------------------\n",
      "1.上下文: ['run', 'between', 'the', 'West', 'Kowloon', 'Futian', 'eijing', 'authorities', 'released', 'a']\n",
      "  中心词: ['and']\n",
      "2.预测的单词为  and\n",
      "------------------------------------------------------------\n",
      "1.上下文: ['Chinese', 'tourists', 'as', 'its', 'breezy', 'as', '58', 'percent', 'of', 'Chinese']\n",
      "  中心词: ['summer']\n",
      "2.预测的单词为  summer\n",
      "------------------------------------------------------------\n",
      "1.上下文: ['a', 'good', 'time', 'Today', 'and', 'tourism', 'is', 'not', 'based', 'on']\n",
      "  中心词: [\"tomorrow's\"]\n",
      "2.预测的单词为  tomorrow's\n",
      "------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "# 5.测试模型，这里使用数据生成器\n",
    "def test(dataloader, model, context, num_samples=1):\n",
    "    \"\"\"dataloader：生成批次数据的生成器\n",
    "       model：需要测试的模型\n",
    "       context：单侧的上下文单词数\n",
    "       num_samples：测试的样本数\n",
    "    \"\"\"\n",
    "    print('-'*60)\n",
    "    for i in range(num_samples):\n",
    "        word, label = next(dataloader)\n",
    "#         print(word)\n",
    "#         print(label)\n",
    "        inputs_idx = [word[0,i] for i in range(word.shape[1]) ] \n",
    "#         print(inputs_idx)\n",
    "\n",
    "        print('1.上下文: {}'.format([idx_to_word[idx] for idx in inputs_idx]))\n",
    "        print('  中心词: {}'.format([idx_to_word[label[0,0]]]))\n",
    "\n",
    "        word = torch.from_numpy(word).long().to(device)\n",
    "        model.to(device)\n",
    "        prediction = model(word)\n",
    "#         print(prediction.shape)\n",
    "        pred_label_idx = prediction.max(1)[1].item()\n",
    "#         print(pred_label_idx)\n",
    "        print('2.预测的单词为 ', idx_to_word[pred_label_idx])\n",
    "        print('-'*60)\n",
    "\n",
    "data_gen = gen_cbow(data, batch_size=1, context=context)\n",
    "test(data_gen, cbow_model, context, 3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结：\n",
    "- **先创建模型再定义优化器**，如果之后不小心又创建了一遍模型，才开始训练模型，则模型无法更新参数，原因是优化器是针对先创建的模型model1的，但传入训练时的模型是model2，两个模型是不同的，即使名称相同，但`id`查看后是不同的模型，在训练是model2进行前向计算，但优化器无法更新model2的参数，所以损失函数会保持不变\n",
    "- 使用Adam优化的太快，设置很小的学习率才能缓慢的优化，而相同学习率下SGD优化很慢，证明Adam优化的能力更强\n",
    "- 这里的CBOW和N-Gram没有本质的区别\n",
    "\n",
    "\n",
    "> (1) 相同点：\n",
    "N-Gram是选择中心词前面的$c$个context，而CBOW选择前后共$2c$个context做训练集，并且中心词都只有一个，模型中都使用了传统的神经网络，用全连接层输出字典大小维度，然后取softmax概率，最大概率的一维索引判定为预测词的索引，进而得到预测词\n",
    "\n",
    ">(2) 不同的：N-Gram使用的神经网络包含隐层，而**CBOW没有隐层**，而是直接将$2c$个context在context的维度上**求和**，然后直接通过一个全连接层输出字典大小的维度，然后进过softmax得到预测词\n",
    "\n",
    "\n",
    "- pytorch中CrossEntropyLoss损失函数的公式如下：\n",
    "$$\\text{loss}(x, class) = -\\log\\left(\\frac{\\exp(x[class])}{\\sum_j \\exp(x[j])}\\right)\n",
    "                   = -x[class] + \\log\\left(\\sum_j \\exp(x[j])\\right)$$\n",
    "带权类别的公式如下：\n",
    "$$\\text{loss}(x, class) = weight[class] \\left(-x[class] + \\log\\left(\\sum_j \\exp(x[j])\\right)\\right)$$\n",
    "所以网络的输出$x$只要维度大小是所有的类别数即可，不需要使用og_softmax函数对输出做softmax处理，因为在该交叉熵损失函数中，已经对输出进行了softmax处理了，所有维度已经归一化为0~1的概率值\n",
    "\n",
    "二分类的BCELoss损失函数如下:\n",
    "$$\\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad\n",
    "    l_n = - w_n \\left[ y_n \\cdot \\log x_n + (1 - y_n) \\cdot \\log (1 - x_n) \\right]\n",
    "$$"
   ]
  },
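  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The CrossEntropyLoss identity above (-x[class] plus log-sum-exp) can be checked numerically. A minimal numpy sketch with invented logits:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "x = np.array([2.0, 0.5, -1.0])   # raw logits for 3 classes (made up)\n",
    "cls = 0                          # target class index\n",
    "\n",
    "# form 1: negative log of the softmax probability of the target class\n",
    "loss1 = -np.log(np.exp(x[cls]) / np.exp(x).sum())\n",
    "# form 2: -x[class] + log(sum_j exp(x[j]))\n",
    "loss2 = -x[cls] + np.log(np.exp(x).sum())\n",
    "\n",
    "assert np.isclose(loss1, loss2)\n",
    "print(loss1)\n",
    "```"
   ]
  },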
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- CBOW神经网络模型如下图所示：\n",
    "\n",
    "参考：[word2vec(CBOW/Skip-gram)](https://www.cnblogs.com/Determined22/p/5804455.html)\n",
    "\n",
    "<img src=\"./image/cbow_dnn.jpg\" width=\"60%\" height=\"60%\">\n",
    "(1) 中心词$w_t$的上下文单词为$w_{t-m},...,w_{t-1},w_{t+1},...,w_{t+m}$，one-hot表示为$x_{t+j}$，经过嵌入后得到词向量$v_{t+j}$，公式如下：\n",
    "$$v_{t+j} = V_{n\\times V} x_{t+j}, \\quad  j\\in \\lbrace -m,...-1,1,...,m\\rbrace \\tag{1}$$\n",
    "(2) 将上下文的词向量求平均得到投影(projection)向量$\\hat v_t$\n",
    "$$\\hat v_t=\\frac{1}{2m}\\sum_jv_{t+j},\\quad  j\\in \\lbrace -m,...-1,1,...,m\\rbrace \\tag{2}$$\n",
    "(3) 再次经过转换输出与字典大小维度的向量$z$，并使用softmax输出各个维度的概率：\n",
    "$$z=U\\hat v_t \\tag{3}$$\n",
    "$$\\hat y_i=P(w_i\\mid w_{t-m},...,w_{t-1},w_{t+1},...,w_{t+m})$$\n",
    "$$\\hat y_i=softmax(z_i)=softmax(u_i^T\\hat v_t)\\tag{4}$$\n",
    "(4) 中心词$w_t$对应的损失为:\n",
    "$$L=-\\log \\hat y_t=-log\\frac{e^{u_t^T\\hat v_t}}{\\sum_{k=1}^Ve^{u_k^T\\hat v_t}}=-z_t+log\\sum_{k=1}^Ve^{z_k}\\tag{5}$$\n",
    "模型对应的损失将语料中所有可能的中心词的损失求和\n",
    "$$L=-\\sum_{w_{t-m}^{t+m}\\in D} \\log \\hat y_t \\tag{6}$$\n",
    "**注意：$U$的参数每次迭代都更新，但$V$每次只更新中心词的上下文对应的部分**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (3) Skip-Gram"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 422,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1.生成数据，skip-gram由中心词预测上下文，\n",
    "# 所以输入：x(t)  预测：x(t-m)...,x(t-1),x(t+1)...,x(t-m)\n",
    "# <<<<已知 字典vocab，字典键值对互换idx_to_word，训练数据data>>>\n",
    "# cbow和 skip—gram的X和y数据是相反的，所以可以使用同一个数据生成器gen_cbow()\n",
    "# 为了直观将该函数包装一下\n",
    "def gen_skipgram(datasets, batch_size, context):\n",
    "    gen=  gen_cbow(datasets, batch_size, context)\n",
    "    while True:\n",
    "        x, y = next(gen)\n",
    "        yield y, x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 423,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.输入中心词:\n",
      " [[429]\n",
      " [597]\n",
      " [577]]\n",
      "2.上下文:\n",
      " [[435 416  10 314]\n",
      " [305 187  10 523]\n",
      " [383  56 183 186]]\n"
     ]
    }
   ],
   "source": [
    "gen_data = gen_skipgram(data, batch_size=3, context=2)\n",
    "x, y =next(gen_data)\n",
    "print('1.输入中心词:\\n', x)\n",
    "print('2.上下文:\\n', y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 445,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2.构建模型\n",
    "class SkipGram(nn.Module):\n",
    "    def __init__(self, vocab_size, context, embedding_dim):\n",
    "        \"\"\"vocab_size 字典大小\n",
    "           context:单侧的上下文大小\n",
    "           embedding_dim：词向量嵌入的维度\n",
    "           hidden_dim：全连接层维度大小       \n",
    "        \"\"\"\n",
    "        super().__init__()\n",
    "        # 将词表的维度嵌入n_dim内\n",
    "        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.linear = nn.Linear(embedding_dim, vocab_size)\n",
    "        \n",
    "        for m in self.parameters():\n",
    "            nn.init.karming_normal_()\n",
    "\n",
    "    def forward(self, x):\n",
    "        # x: get [batch, 2*context]\n",
    "        # embeds: get [batch,1, embedding_dim]，context表示上下文\n",
    "#         print('x',x.shape)\n",
    "        embeds = self.embedding(x)  \n",
    "#         print('embeds',embeds.shape)\n",
    "        out = self.linear(embeds)       # get [batch, 1, vocab_size]\n",
    "        out = out[:,0,:]                # get [batch, vocab_size]\n",
    "        logits = F.log_softmax(out, dim=1)\n",
    "        return logits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 446,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.输入的数据类型: torch.LongTensor\n",
      "2.模型输出: torch.Size([1, 621])\n"
     ]
    }
   ],
   "source": [
    "# 创建模型并测试\n",
    "test_context = 2  # 参考前2各词\n",
    "inputs = np.random.randint(0,len(vocab), [1,2*test_context])\n",
    "inputs = torch.from_numpy(inputs).long()   # 必须将数据转换为LongTensor才行\n",
    "print('1.输入的数据类型:', inputs.type())\n",
    "test_model = SkipGram(len(vocab), test_context, 128)\n",
    "outputs = test_model(inputs)\n",
    "print('2.模型输出:', outputs.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 515,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "embedding.weight torch.Size([621, 128])\n",
      "linear.weight torch.Size([621, 128])\n",
      "linear.bias torch.Size([621])\n"
     ]
    }
   ],
   "source": [
    "for m in test_model.named_parameters():\n",
    "    print(m[0], m[1].shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 507,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "batches/epoch: 45.0  max_epochs: 1778.0\n"
     ]
    }
   ],
   "source": [
    "# 3.定义损失函数及优化方法\n",
    "context = 1              # 参考的单词数目\n",
    "batches = 80000          # 训练的总批次数\n",
    "batch_size = 32\n",
    "batches_per_epoch = np.ceil(len(data)/batch_size)\n",
    "max_epochs = np.ceil(batches/batches_per_epoch)\n",
    "print('batches/epoch: {}  max_epochs: {}'.format(batches_per_epoch, max_epochs))\n",
    "\n",
    "criterion = nn.NLLLoss()    # 模型输出已是log_softmax，配合NLLLoss等价于交叉熵\n",
    "skipgram_model = SkipGram(len(vocab), context, 128)\n",
    "optimizer = optim.Adam(skipgram_model.parameters(), lr=1e-4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 508,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 0 loss 13.1804\n",
      "epoch 2000 loss 9.5221\n",
      "epoch 4000 loss 7.9127\n",
      "epoch 6000 loss 6.0929\n",
      "epoch 8000 loss 4.7584\n",
      "epoch 10000 loss 3.8265\n",
      "epoch 12000 loss 4.2760\n",
      "epoch 14000 loss 4.4726\n",
      "epoch 16000 loss 3.7470\n",
      "epoch 18000 loss 3.9840\n",
      "epoch 20000 loss 3.8772\n",
      "epoch 22000 loss 4.1451\n",
      "epoch 24000 loss 3.5803\n",
      "epoch 26000 loss 4.1551\n",
      "epoch 28000 loss 2.9056\n",
      "epoch 30000 loss 4.1047\n",
      "epoch 32000 loss 4.2547\n",
      "epoch 34000 loss 3.5480\n",
      "epoch 36000 loss 2.9102\n",
      "epoch 38000 loss 3.4955\n",
      "epoch 40000 loss 4.0368\n",
      "epoch 42000 loss 3.2990\n",
      "epoch 44000 loss 3.9054\n",
      "epoch 46000 loss 3.6118\n",
      "epoch 48000 loss 4.3859\n",
      "epoch 50000 loss 3.4964\n",
      "epoch 52000 loss 3.7660\n",
      "epoch 54000 loss 4.4865\n",
      "epoch 56000 loss 3.4058\n",
      "epoch 58000 loss 3.7130\n",
      "epoch 60000 loss 3.7271\n",
      "epoch 62000 loss 3.4512\n",
      "epoch 64000 loss 4.5993\n",
      "epoch 66000 loss 4.2181\n",
      "epoch 68000 loss 3.3303\n",
      "epoch 70000 loss 3.8687\n",
      "epoch 72000 loss 3.4631\n",
      "epoch 74000 loss 4.0942\n",
      "epoch 76000 loss 4.2376\n",
      "epoch 78000 loss 4.3446\n",
      "epoch 79999 loss 3.8101\n"
     ]
    }
   ],
   "source": [
    "# 训练网络\n",
    "def train(dataloader, model, batch_size, batches):\n",
    "    \"\"\"dataloader：生成批次数据的生成器\n",
    "       model：需要训练的模型\n",
    "       batch_size：批次大小\n",
    "       batches：训练的总批次数(每次迭代处理一个批次)\n",
    "    \"\"\"\n",
    "    model.to(device)     # 优先使用GPU    \n",
    "    for epoch in range(batches):\n",
    "        x, y = next(dataloader)  # 获取训练数据\n",
    "        x, y = map(lambda x:torch.LongTensor(x).to(device), [x, y])\n",
    "        prediction = model(x)\n",
    "        total_loss = 0\n",
    "        for i in range(y.size(1)):   # 遍历上下文的每个位置，损失相加\n",
    "            loss = criterion(prediction, y[:,i].flatten())\n",
    "            total_loss += loss\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        total_loss.backward()\n",
    "        optimizer.step()\n",
    "        if epoch % 2000 == 0 or epoch == batches-1:\n",
    "            print('epoch {} loss {:.4f}'.format(epoch, total_loss.item()))\n",
    "\n",
    "data_gen = gen_skipgram(data, batch_size=batch_size, context=context) # 创建数据生成器\n",
    "train(data_gen, skipgram_model, batch_size, batches)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 513,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "------------------------------------------------------------\n",
      "1.中心词: ['Alibaba']\n",
      "  上下文: ['of', 'and']\n",
      "2.预测的单词为  ['and', 'of']\n",
      "------------------------------------------------------------\n",
      "1.中心词: ['the']\n",
      "  上下文: ['enter', 'Chinese']\n",
      "2.预测的单词为  ['to', 'in']\n",
      "------------------------------------------------------------\n",
      "1.中心词: ['have']\n",
      "  上下文: ['companies', 'unveiled']\n",
      "2.预测的单词为  ['should', 'a']\n",
      "------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "# 5.测试模型，这里使用数据生成器\n",
    "def test(dataloader, model, context, num_samples=1):\n",
    "    \"\"\"dataloader：生成批次数据的生成器\n",
    "       model：需要测试的模型\n",
    "       context：单侧的上下文单词数\n",
    "       num_samples：测试的样本数\n",
    "    \"\"\"\n",
    "    print('-'*60)\n",
    "    for i in range(num_samples):\n",
    "        word, label = next(dataloader)\n",
    "        target_idx = [label[0,i] for i in range(label.shape[1])]\n",
    "        print('1.中心词: {}'.format([idx_to_word[word[0,0]]]))\n",
    "        print('  上下文: {}'.format([idx_to_word[idx] for idx in target_idx]))\n",
    "       \n",
    "        word = torch.from_numpy(word).long().to(device)\n",
    "        model.to(device)\n",
    "        prediction = model(word)\n",
    "        sort_out = torch.sort(prediction, dim=1)               # 升序排序\n",
    "        pred_label_idx = sort_out[1][:,-2*context:].flatten()  # 取概率最大的2*context个词\n",
    "        pred_label_idx = list(pred_label_idx.cpu())\n",
    "        print('2.预测的单词为 ', [idx_to_word[idx.item()] for idx in pred_label_idx])\n",
    "        print('-'*60)\n",
    "\n",
    "data_gen = gen_skipgram(data, batch_size=1, context=context)\n",
    "test(data_gen, skipgram_model, context, 3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "小结：\n",
    "- 参考网页\n",
    "  - https://github.com/fanglanting/skip-gram-pytorch\n",
    "  - https://github.com/theeluwin/pytorch-sgns\n",
    "  - https://towardsdatascience.com/implementing-word2vec-in-pytorch-skip-gram-model-e6bae040d2fb\n",
    "  - https://nbviewer.jupyter.org/github/DSKSD/DeepNLP-models-Pytorch/blob/master/notebooks/02.Skip-gram-Negative-Sampling.ipynb\n",
    "- skip-gram模型如下\n",
    "<img src=\"./image/skip-gram-dnn.jpg\" width=\"50%\" height=\"50%\">\n",
    "(1) 中心词$w_t$的one-hot编码为$x$(仅第$k$位为1)，经过嵌入矩阵$W_{V\\times N}$得到隐层向量$h$，隐藏层输出只与权重矩阵第$k$行相关\n",
    "$$h=x^TW_{V\\times N}=W_k\\tag{1}$$\n",
    "(2) 经过输出矩阵$W^{'}$得到上下文$y_{t-m},...,y_{t-1},y_{t+1},...,y_{t+m}$，各个上下文的维度都是字典大小，而且使用同一个$W^{'}$，一个中心词生成$2m$个向量，以**多标签分类**的思想来看待中心词预测上下文，就是一个中心词对应$2m$个上下文的标签\n",
    "$$y_i=W^{'T}h, \\quad i \\in \\lbrace t-m,...,t-1,t+1,...,t+m \\rbrace \\tag{2}$$\n",
    "(3) 单个中心词的损失函数如下，这里$C$表示上下文，即$\\lbrace t-m,...,t-1,t+1,...,t+m \\rbrace$，$D$表示整个字典，该公式表示为了使中心词预测每个上下文准确，所以要把所有的上下文的损失相加\n",
    "$$L=-\\sum_{i \\in C} log(\\frac{e^{y_i}}{\\sum_{j\\in D}e^{y_j}}) \\tag{3}$$\n",
    "(4) 模型的损失将所有中心词的损失相加，$center$表示中心词，公式如下所示：\n",
    "$$L=-\\sum_{center \\in D}\\sum_{i \\in C} log(\\frac{e^{y_{i,center}}}{\\sum_{j\\in D}e^{y_{j,center}}}) \\tag{4}$$"
   ]
  },
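  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面用一个最小的数值例子演示式(3)的计算过程(示意性代码，字典大小、上下文标签等均为随机假设值，与上文训练好的模型无关)："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 式(3)的最小示例：中心词隐层向量h经输出矩阵W'得到各词得分，\n",
    "# 对2m个上下文标签的log_softmax取负并求和，即单个中心词的损失\n",
    "V, N, m = 10, 4, 2                          # 字典大小、嵌入维度、单侧上下文(假设值)\n",
    "h = torch.randn(1, N)                       # 中心词的隐层向量 h=W_k\n",
    "W_out = torch.randn(N, V)                   # 输出矩阵 W'\n",
    "log_prob = F.log_softmax(h @ W_out, dim=1)  # get [1, V]\n",
    "context_idx = torch.tensor([1, 3, 5, 7])    # 2m个上下文标签(假设值)\n",
    "loss = -log_prob[0, context_idx].sum()      # 对应式(3)\n",
    "print(loss.item())"
   ]
  },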
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.3 CBOW和Skip-Gram的算法加速"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "word2vec是谷歌提出的训练词向量的工具，它同样基于CBOW和Skip-Gram两种语言模型，但通过分层softmax和负采样来构建这两种模型，从而减少了计算消耗，并实现很好的性能，下面介绍这两种方法\n",
    "\n",
    "### CBOW\n",
    "参考：[基于Hierarchical Softmax的模型概述](https://www.cnblogs.com/pinard/p/7243513.html)\n",
    "\n",
    "(1) 使用Hierarchical Softmax的方法\n",
    "与传统神经网络方法不同点：\n",
    "1. 从输入层到隐藏层的映射没有采用神经网络的线性变换加激活函数的方法(N-Gram例子中使用该方法)，而是直接对输入的上下文词向量求**平均**得到一个词向量\n",
    "2. 从隐藏层到输出的softmax层输出维度变化，DNN的方法中输出的维度是字典大小，而分层softmax使用**霍夫曼树**来作为隐藏层到输出softmax层的映射，如下图所示:\n",
    "\n",
    "<img src=\"./image/huffman_tree.jpg\" width=\"50%\" height=\"50%\">\n",
    "<center>霍夫曼树表示输出映射</center>\n",
    "    \n",
    "隐藏层到输出层的softmax不是一次完成，而是沿着霍夫曼树进行，二叉树左子树为负类(用1表示)，右子树为正类(用0表示)，从根节点出发，在某个节点处哪个子类对应的概率大就沿着哪个子树走，直到走到叶子节点。叶子节点的个数与词汇表大小对应，而内部节点处子类对应的概率与模型参数有关，所以通过训练可以优化参数，最后优化的对数似然函数$L$如下，具体细节参考上述网页\n",
    "$$L=log\\prod_{j=2}^{l_w}(P(d_j^w|x_w,\\theta_{j-1}^w))=\\sum_{j=2}^{l_w}((1-d_j^w)log[\\sigma(x_w^T\\theta_{j-1}^w)]+d_j^w log[1-\\sigma(x_w^T\\theta_{j-1}^w)]) $$\n",
    "其中$P(d_j^w|x_w,\\theta_{j-1}^w)$代表$w$经过霍夫曼树的某一个节点$j$的逻辑回归概率，表达式如下：\n",
    "$$\\begin{equation}\n",
    "P(d_j^w|x_w,\\theta_{j-1}^w)=\\left\\{\n",
    "    \\begin{array}{lr}\n",
    "     \\sigma(x_w^T\\theta_{j-1}^w)   & d_j^w=0 \\\\\n",
    "     1-\\sigma(x_w^T\\theta_{j-1}^w) & d_j^w=1\n",
    "     \\end{array}\n",
    "\\right.\n",
    "\\end{equation}$$\n",
    "- 优化损失函数是为了求最小值，使用梯度下降法，在极大似然中为使似然概率最大使用梯度上升法\n",
    "\n",
    "\n",
    "---\n",
    "\n",
    "参考：[基于Negative Sampling的模型概述](http://www.cnblogs.com/pinard/p/7249903.html)\n",
    "\n",
    "(2) 使用负采样Negative Sampling方法\n",
    "\n",
    "在训练样本中，中心词$w$周围的上下文有$2c$个词，记为$context(w)$，这些词都是有相关联的，所以这$2c$个词是中心词的正例，然后进行**负采样**，随机选择$neg$个单词$w_i,i=1,2,...neg$作为负例。最终得到样本$(context(w),w_i)$，其中$i=1,2,...neg$，使用一个正例和$neg$个负例进行**二元逻辑回归**，训练模型更新参数\n",
    "\n",
    "设$w_0$表示正例，则正、负例满足下面公式："
   ]
  },
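  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "沿霍夫曼树路径连乘逻辑回归概率得到单词概率的过程，可以用如下示意代码表示(路径长度、参数均为随机假设值)："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 分层softmax示意：单词w的概率是霍夫曼路径上各内部节点逻辑回归概率的连乘\n",
    "x_w = torch.randn(8)                # 上下文词向量的平均(假设值)\n",
    "thetas = torch.randn(3, 8)          # 路径上3个内部节点的参数theta(假设值)\n",
    "d = torch.tensor([0., 1., 0.])      # 霍夫曼编码d_j：0走正类，1走负类(假设值)\n",
    "sig = torch.sigmoid(thetas @ x_w)   # 各节点判为正类的概率sigma(x^T theta)\n",
    "step_prob = torch.where(d == 0, sig, 1 - sig)\n",
    "p_word = step_prob.prod()           # P(w)=路径上各步概率之积\n",
    "print(p_word.item())"
   ]
  },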
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$P(context(w_0),w_i)=\\sigma(x_{w_0}^T\\theta^{w_i}), \\quad y_i=1,i=0 \\tag{1}$$\n",
    "$$P(context(w_0),w_i)=1-\\sigma(x_{w_0}^T\\theta^{w_i}),\\quad  y_i=0,i=1,2,...,neg \\tag{2}$$\n",
    "\n",
    "注：这里始终用$x_{w_0}^T$而不是$x_{w_i}^T$，因为对CBOW来说输入的上下文向量是固定的，变化的只是候选词；每个候选词$w_i$都有自己的输出参数$\\theta^{w_i}$，逐个与$x_{w_0}$计算逻辑回归概率\n",
    "\n",
    "模型的目的是分类准确，所以正样本对应(1)的概率越大越好，而负样本对应(2)的概率越大越好，所以得到似然函数为:\n",
    "$$\\prod_{i=0}^{neg}\\sigma(x_{w_0}^T\\theta^{w_i})^{y_i}(1-\\sigma(x_{w_0}^T\\theta^{w_i}))^{1-y_i} \\tag{3}$$\n",
    "对其求对数得：\n",
    "$$\\sum_{i=0}^{neg} y_ilog(\\sigma(x_{w_0}^T\\theta^{w_i})) + (1-y_i)log((1-\\sigma(x_{w_0}^T\\theta^{w_i}))) \\tag{4}$$\n",
    "\n",
    "**注意：上面的$w_0$表示正样本，$x_{w_0}$代表正样本的词向量，由其上下文的$2c$个单词平均后得到，$\\theta^{w_i}$表示候选词$w_i$在输出层对应的模型参数(与词向量本身不同)**，正负样本共$neg+1$个单词一起进行拟合优化"
   ]
  },
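  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "式(4)的对数似然可以用logsigmoid直接写出，下面是一个示意性例子(维度、负例个数均为假设值)："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 负采样示意：一个正例(y_0=1)与neg个负例(y_i=0)做二元逻辑回归\n",
    "dim, neg = 8, 5\n",
    "x_w0 = torch.randn(dim)              # 上下文2c个词向量的平均(假设值)\n",
    "theta = torch.randn(neg + 1, dim)    # 各候选词的参数theta^{w_i}(假设值)\n",
    "y = torch.zeros(neg + 1); y[0] = 1.  # 第0个为正例，其余为负例\n",
    "score = theta @ x_w0\n",
    "# logsigmoid(s)=log(sigma(s))；logsigmoid(-s)=log(1-sigma(s))\n",
    "log_lik = (y * F.logsigmoid(score) + (1 - y) * F.logsigmoid(-score)).sum()\n",
    "print(log_lik.item())   # 训练时最大化该对数似然(即最小化其相反数)"
   ]
  },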
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "负采样方法：\n",
    "\n",
    "将一条线段分成$V$份，每份对应词汇表的一个词，根据词频率的大小来决定线段的长度，每个词$w$的线段长度用下面的公式计算：\n",
    "$$len(w)=\\frac{count(w)}{\\sum_{u\\in vocab} count(u)} \\tag{5}$$\n",
    "在word2vec中分子和分母都取3/4次幂：\n",
    "$$len(w)=\\frac{count(w)^{3/4}}{\\sum_{u\\in vocab} count(u)^{3/4}} \\tag{6}$$\n",
    "采样前，将长度为1的线段分成$M$等份，使$M>>V$，这样频率高的单词会对应多个$1/M$等份；采样时从$M$个位置随机取出$neg$个位置，采样到的位置所属线段对应的词就是负样本，word2vec中$M$取$10^8$，示意图如下：\n",
    "\n",
    "<img src=\"./image/negative_sampling.jpg\" width=\"70%\" height=\"70%\">"
   ]
  },
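  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "式(6)描述的负采样表可以按如下方式构建(示意性代码，词频为假设值，$M$取小值便于演示)："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 负采样表示意：按词频的3/4次幂把M个等份分给各词，再均匀抽取位置\n",
    "counts = np.array([100., 10., 5., 1.])     # 各词词频(假设值)\n",
    "M = 1000                                    # 演示用小值，word2vec中取1e8\n",
    "prob = counts**0.75 / (counts**0.75).sum()  # 式(6)的len(w)\n",
    "table = np.repeat(np.arange(len(counts)), np.round(prob * M).astype(int))\n",
    "neg_samples = table[np.random.randint(0, len(table), size=5)]\n",
    "print(neg_samples)   # 高频词被采到的概率更大"
   ]
  },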
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 算法待补充"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
