{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d1962762-538f-4db9-83c8-0ff3fee4c8fe",
   "metadata": {},
   "source": [
    "# Reproducing the word2vec Algorithm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2bac0857-205a-41d1-9721-832b549b7b4e",
   "metadata": {},
   "source": [
    "## Introduction to Word Vectors\n",
    "\n",
    "In natural language processing, a word vector (word embedding) is a way of representing words: every word is mapped to a point in an N-dimensional space, i.e. a vector in a high-dimensional space. This turns computation over natural language into computation over vectors.\n",
    "\n",
    "In the word-vector task shown in **Figure 1**, each word (queen, king, etc.) is first converted into a high-dimensional vector that, to some extent, captures the word’s semantics. Computing distances between these vectors then reveals how the words are related, so the computer can operate on natural language the way it operates on numbers.\n",
    "\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/00ba55f7304e4f97942165cf1deb946ced404a19325d40969b7c220e30cf527e\" width=\"800\" ></center>\n",
    "<center>Figure 1: Word-vector computation</center>\n",
    "<br>\n",
    "\n",
    "Most word-vector models therefore need to answer two questions:\n",
    "\n",
    "1. **How do we convert a word into a vector?**\n",
    "\n",
    "Words are discrete signals: “banana”, “orange”, and “fruit” are, to us, three separate words.\n",
    "\n",
    "How do we turn each discrete word into a vector?\n",
    "\n",
    "2. **How do we make the vectors carry semantic information?**\n",
    "\n",
    "For example, we know that in most contexts “banana” is more similar to “orange” than to “sentence”, while the similarity between “banana” and “food” or “fruit” probably lies somewhere between those two.\n",
    "\n",
    "So how do we give word vectors this kind of semantic information?\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "65ca987e-379c-498e-adba-7b3cffbe4e9b",
   "metadata": {},
   "source": [
    "## How to Convert a Word into a Vector\n",
    "\n",
    "Words are discrete signals, e.g. “我” (I), “爱” (love), “人工智能” (artificial intelligence). How do we turn each discrete word into a vector? A common approach is to maintain a lookup table like the one in **Figure 2**. Each row stores the vector for one particular word, and the first element of each row identifies the word itself, so that we can map between words and vectors (for instance, the vector for “我” is [0.3, 0.5, 0.7, 0.9, -0.2, 0.03]). Given any word or group of words, we can convert them into vectors by looking them up in this table; this lookup-and-replace process is called an embedding lookup.\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/2ccf57a0c4584ca5b5000da85bc48c4cbe1588f0e3f548c6b2271c26e2e7df14\" width=\"800\" ></center>\n",
    "<center>Figure 2: Word-vector lookup table</center>\n",
    "<br>\n",
    "\n",
    "The same mapping could be implemented with a plain dictionary, and if computational efficiency were not a concern a dictionary would be a perfectly good choice. Neural-network training, however, requires a lot of compute and usually relies on dedicated hardware such as GPUs, where all supported operations work on tensors. In practice, therefore, the embedding lookup has to be expressed as tensor computation, as shown in **Figure 3**.\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/0da8ca9f87364701a95a5d1f51736dd08bd1e6321a3f4c30bfb2ee2d2385698c\" width=\"800\" ></center>\n",
    "<center>Figure 3: Embedding lookup as tensor computation</center>\n",
    "<br>\n",
    "\n",
    "For the segmented sentence “我, 爱, 人工, 智能”, converting the embedding lookup into tensor computation works as follows:\n",
    "\n",
    "1. Using a dictionary, map each word in the sentence to an ID (usually a non-negative integer). This word-to-ID mapping can be defined however we like (in **Figure 3**, 我 => 1, 人工 => 2, 爱 => 3, ...).\n",
    "\n",
    "2. Convert each ID into a fixed-length vector. Suppose the vocabulary contains 5000 words; then the word “我” can be represented by a 5000-dimensional vector. Since the ID of “我” is 1, the element at that position is 1 and all other elements are 0 ([1, 0, 0, …, 0]); likewise, for the word “人工” the element at position 2 is 1 and all others are 0. In this way every word is represented by a vector with exactly one element equal to 1 and the rest equal to 0, which is why the process is called one-hot encoding.\n",
    "\n",
    "3. After one-hot encoding, the sentence “我, 爱, 人工, 智能” becomes a tensor $V$ of shape 4×5000: 4 rows and 5000 columns, whose rows are, from top to bottom, the one-hot encodings of “我”, “爱”, “人工”, and “智能”. Finally, we multiply $V$ by a dense tensor $W$ of shape 5000×128 (5000 is the vocabulary size, 128 the word-vector size). The product is a 4×128 tensor, which completes the conversion of the words into vectors."
   ]
  },
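  {
   "cell_type": "markdown",
   "id": "c3a9d2f1-5b8e-4c1a-9f0d-2e7b6a41d803",
   "metadata": {},
   "source": [
    "The lookup-as-matrix-product described above can be verified with a few lines of NumPy. This is a sketch with toy sizes (a 5-word vocabulary and 3-dimensional embeddings instead of 5000 and 128); multiplying a one-hot row by $W$ simply selects the corresponding row of $W$:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "vocab_size, embed_size = 5, 3\n",
    "W = np.arange(vocab_size * embed_size, dtype=np.float32).reshape(vocab_size, embed_size)\n",
    "\n",
    "ids = np.array([1, 3])                  # IDs of the words in a 2-word sentence\n",
    "V = np.zeros((len(ids), vocab_size), dtype=np.float32)\n",
    "V[np.arange(len(ids)), ids] = 1.0       # one-hot encode each ID\n",
    "\n",
    "H = V @ W                               # (2, 5) x (5, 3) -> (2, 3)\n",
    "assert np.allclose(H, W[ids])           # same result as directly indexing rows of W\n",
    "```\n",
    "\n",
    "This is why real implementations skip the one-hot multiplication entirely and fetch rows of $W$ by ID."
   ]
  },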
  {
   "cell_type": "markdown",
   "id": "8abb95fb-4a42-4f8c-a723-41f26f2804c9",
   "metadata": {},
   "source": [
    "## How to Make the Vectors Carry Semantic Information\n",
    "\n",
    "Once every word has a vector representation, the next question arises: in most contexts “banana” is more similar to “orange” than to “sentence”, and the similarity between “banana” and “food” or “fruit” probably lies somewhere in between. How do we make the stored word vectors reflect this kind of semantic information?\n",
    "\n",
    "A useful trick from natural language processing research: it is widely accepted that a word’s meaning can be understood from the contexts it appears in. For example:\n",
    "\n",
    " >“苹果手机质量不错，就是价格有点贵。” (The Apple phone is good quality, but a bit pricey.)\n",
    " >\n",
    " >“这个苹果很好吃，非常脆。” (This apple is delicious and very crisp.)\n",
    " >\n",
    " >“菠萝质量也还行，但是不如苹果支持的APP多。” (The Pineapple is decent too, but it supports fewer apps than the Apple.)\n",
    "\n",
    "From the context we can infer that the first “苹果” (apple) refers to the Apple phone, the second to the fruit, and the third sentence’s “菠萝” (pineapple) most likely also refers to a phone. Describing a word or element by its context is a common and effective practice in natural language processing, and we can train word vectors the same way, so that they acquire the ability to represent semantic information.\n",
    "\n",
    "The classic word2vec algorithm, proposed by Mikolov in 2013, learns semantic information exactly this way, from context. word2vec comes in two classic model variants, CBOW (Continuous Bag-of-Words) and Skip-gram, shown in **Figure 4**:\n",
    "\n",
    "- **CBOW**: predicts the center word from the word vectors of its context.\n",
    "- **Skip-gram**: predicts the context from the center word.\n",
    "\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/87b90136eef04d7285803e567a5f7f6d0e40bb552deb40a19a7540c4e6aa20a3\" width=\"700\" ></center>\n",
    "<center>Figure 4: Semantic learning with CBOW and Skip-gram</center>\n",
    "<br>\n",
    "\n",
    "Given the sentence “Pineapples are spiked and yellow”, the two models make predictions as follows:\n",
    "\n",
    "- In **CBOW**, a center word is chosen in the sentence and the remaining words serve as its context. In the CBOW half of **Figure 4**, “spiked” is the center word and “Pineapples, are, and, yellow” are its context. During training, the context word vectors are used to predict the center word, so the center word’s semantics flow into the context word vectors (“spiked → pineapple”), which is how semantic information gets learned.\n",
    "\n",
    "- In **Skip-gram**, a center word is likewise chosen and the remaining words serve as its context. In the Skip-gram half of **Figure 4**, “spiked” is again the center word and “Pineapples, are, and, yellow” the context. The difference is that during training the center word’s vector is used to predict the context, so the semantics defined by the context flow into the center word’s representation (“pineapple → spiked”), again achieving semantic learning.\n",
    "\n",
    "------\n",
    "**Note:**\n",
    "\n",
    "In general, CBOW trains faster and more stably than Skip-gram, because CBOW averages over the context, so each training step sees more samples. Skip-gram, on the other hand, handles rare words better, because it does not average rare words away with the rest of the context.\n",
    "\n",
    "------"
   ]
  },
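  {
   "cell_type": "markdown",
   "id": "f4b81a6e-7d2c-4e95-b0a3-9c5d1e2f6b47",
   "metadata": {},
   "source": [
    "The two directions of prediction can be made concrete by listing the training examples each model builds from the sentence above. A small sketch, assuming the whole sentence forms a single window with 'spiked' as the center word (the actual pipeline below samples window sizes randomly):\n",
    "\n",
    "```python\n",
    "sentence = ['Pineapples', 'are', 'spiked', 'and', 'yellow']\n",
    "center_idx = 2\n",
    "center = sentence[center_idx]\n",
    "context = [w for i, w in enumerate(sentence) if i != center_idx]\n",
    "\n",
    "# CBOW: one example mapping the whole context to the center word\n",
    "cbow_example = (context, center)\n",
    "\n",
    "# Skip-gram: one example per context word, mapping the center word to it\n",
    "skipgram_examples = [(center, w) for w in context]\n",
    "\n",
    "print(cbow_example)\n",
    "print(skipgram_examples)\n",
    "```"
   ]
  },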
  {
   "cell_type": "markdown",
   "id": "6fa7dfa8-b122-489c-94e6-687a097f4d0a",
   "metadata": {},
   "source": [
    "## Implementing CBOW and Skip-gram\n",
    "\n",
    "We again use the sentence “Pineapples are spiked and yellow” to describe the CBOW and Skip-gram implementations.\n",
    "\n",
    "As shown in **Figure 5**, CBOW is a neural network with three layers:\n",
    "\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/72397490c0ba499692cff31484431c57bc9d20f7ef344454868e12d628ec5bd3\" width=\"400\" ></center>\n",
    "<center>Figure 5: The CBOW network</center>\n",
    "<br>\n",
    "\n",
    "* **Input layer:** a one-hot tensor of shape C×V, where C is the number of context words (usually even; assume 4) and V is the vocabulary size (assume 5000). Each row is the one-hot vector of one context word, e.g. “Pineapples, are, and, yellow”.\n",
    "* **Hidden layer:** a parameter tensor W1 of shape V×N, usually called the word embedding, where N is the word-vector length (assume 128). Multiplying the input tensor by W1 gives a tensor of shape C×N. Because the center word is inferred from all context words jointly, the C context rows are summed into a single 1×N vector, a latent representation of the whole context.\n",
    "* **Output layer:** another parameter tensor, of shape N×V. Multiplying the hidden layer’s 1×N vector by it yields a 1×V vector: the score of every candidate word for being the center word given the context. Normalizing these scores with softmax gives the predicted probability distribution over center words:\n",
    "\n",
    "$$\\mathrm{softmax}(O_i)= \\frac{\\exp(O_i)}{\\sum_j \\exp(O_j)}$$\n",
    "\n",
    "As shown in **Figure 6**, Skip-gram is likewise a neural network with three layers:\n",
    "\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/a572953b845d4c91bdf6b7b475e7b4437bee69bd60024eb2b8c46f56adf2bdef\" width=\"400\" ></center>\n",
    "<center>Figure 6: The Skip-gram network</center>\n",
    "<br>\n",
    "\n",
    "- **Input layer**: receives a one-hot tensor $V \\in R^{1 \\times \\text{vocab\\_size}}$ holding the one-hot representation of the current sentence’s center word.\n",
    "- **Hidden layer**: multiplies $V$ by a word-embedding tensor $W_1 \\in R^{\\text{vocab\\_size} \\times \\text{embed\\_size}}$, producing a tensor of shape $R^{1 \\times \\text{embed\\_size}}$: the word vector of the current center word.\n",
    "- **Output layer**: multiplies the hidden-layer result by another word-embedding tensor $W_2 \\in R^{\\text{embed\\_size} \\times \\text{vocab\\_size}}$, producing a tensor of shape $R^{1 \\times \\text{vocab\\_size}}$. After a softmax, this tensor gives the prediction of the context from the current center word, and that softmax output is what the word-vector model is trained on.\n",
    "\n",
    "In practice, a sliding window (usually of odd length) scans the current sentence from left to right. Each window is treated as a miniature sentence: its middle word is the center word, and the remaining words are that center word’s context.\n",
    "\n",
    "   \n",
    "### The Ideal Skip-gram Implementation\n",
    "\n",
    "A neural-network implementation of Skip-gram takes two different input tensors:\n",
    "\n",
    "- A center-word tensor, call it center_words $V$: typically a one-hot tensor of shape [batch_size, vocab_size] in which, for each center word in the mini-batch, the position of its ID is 1 and all other positions are 0.\n",
    "\n",
    "- A target-word tensor, where the target words are the context words to be predicted; call it target_words $T$: typically an integer tensor of shape [batch_size, 1] whose elements are values in [0, vocab_size-1], the target words’ IDs.\n",
    "\n",
    "Ideally, Skip-gram can be implemented very simply: treat each target word to be predicted as a label and build the network as one large-scale classification task, as follows:\n",
    "\n",
    "1. Declare a tensor of shape [vocab_size, embedding_size] as the word embedding to be learned, denoted $W_0$. For a given input $V$, compute the matrix product $H=V×W_0$, a tensor of shape [batch_size, embedding_size]. $H$ is exactly the result of the word-vector table lookup.\n",
    "1. Declare a second learnable parameter $W_1$ of shape [embedding_size, vocab_size]. Multiply $H$ by $W_1$ to obtain $O=H×W_1$, a tensor of shape [batch_size, vocab_size]: for each center word in the mini-batch, the scores of the predicted target words.\n",
    "1. Apply softmax to each center word’s predictions to normalize them, which completes the network.\n",
    "\n",
    "   \n",
    "### The Practical Skip-gram Implementation\n",
    "\n",
    "In reality, vocab_size is usually very large (hundreds of thousands or even millions), so $W_0$ and $W_1$ are very large too. For $W_0$ the matrix product is never actually computed: the relevant rows of $W_0$ are simply fetched by ID. For $W_1$, however, a huge matrix multiplication remains, which is very slow and consumes a great deal of memory (or GPU memory). To mitigate this, negative sampling is commonly used to approximate the multi-class task. With negative sampling, the redefined $W_0$ and $W_1$ are both tensors of shape [vocab_size, embedding_size].\n",
    "\n",
    "Suppose we have a center word $c$ and a positive context word $t_p$. The ideal Skip-gram implementation maximizes the probability of predicting $t_p$ from $c$: under softmax, it maximizes the predicted probability of $t_p$ while minimizing that of every other word in the vocabulary, and it is slow precisely because every word in the vocabulary must be scored. Alternatively, we can randomly pick a few representative words from the vocabulary and minimize only their predicted probabilities, as an approximation of minimizing the probability of the rest of the vocabulary. For example, fix a center word (say “人工”) and a positive target word (say “智能”), then randomly sample a few negative target words from the vocabulary (say “日本”, “喝茶”). The Skip-gram model thereby becomes a binary classification task: maximize the predicted probability of the positive target word and minimize that of the negative ones. This is what produces the speedup, and the technique is called negative sampling.\n",
    "\n",
    "In this implementation the model takes three input tensors:\n",
    "\n",
    "- A center-word tensor, call it center_words $V$: an integer tensor of shape [batch_size] holding the ID of each center word in the mini-batch (the one-hot matrix product is replaced by an embedding lookup).\n",
    "\n",
    "- A target-word tensor, call it target_words $T$: likewise an integer tensor of shape [batch_size] holding the ID of each target word in the mini-batch.\n",
    "\n",
    "- A target-word label tensor, call it labels $L$: a tensor of shape [batch_size, 1] whose elements are 0 or 1 (0: negative sample, 1: positive sample).\n",
    "\n",
    "Training then proceeds as follows:\n",
    "1. Use $V$ to look up rows of $W_0$ and $T$ to look up rows of $W_1$, giving two tensors of shape [batch_size, embedding_size], denoted $H_1$ and $H_2$.\n",
    "2. Take the row-wise dot product of the two, giving a tensor of shape [batch_size]: $O = [O_i = \\sum_j H_1[i,j] × H_2[i,j]]_{i=1}^{batch\\_size}$.\n",
    "3. Apply the sigmoid function to $O$ to squash each dot product into a probability between 0 and 1, and train the model against the labels $L$.\n",
    "\n",
    "After training, $W_0$ is generally kept as the final word embedding: $W_0$ supplies each word’s vector, and the similarity between two words can be computed as the dot product of their vectors.\n"
   ]
  },
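  {
   "cell_type": "markdown",
   "id": "8e2d5c90-1a4f-47b6-b8e1-6f3a0d9c2571",
   "metadata": {},
   "source": [
    "The three training steps above reduce to a sigmoid over a row-wise dot product. A minimal NumPy sketch with toy shapes, where H1 and H2 stand in for the rows looked up from $W_0$ and $W_1$:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "batch_size, embed_size = 4, 8\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# stand-ins for the embeddings looked up from W0 (center) and W1 (target)\n",
    "H1 = rng.normal(size=(batch_size, embed_size))\n",
    "H2 = rng.normal(size=(batch_size, embed_size))\n",
    "\n",
    "O = np.sum(H1 * H2, axis=1)         # per-pair dot product, shape [batch_size]\n",
    "pred = 1.0 / (1.0 + np.exp(-O))     # sigmoid: probability that each pair is positive\n",
    "\n",
    "assert pred.shape == (batch_size,)\n",
    "assert np.all((pred > 0) & (pred < 1))\n",
    "```\n",
    "\n",
    "Training then pushes pred toward 1 for positive pairs and toward 0 for sampled negative pairs via the binary cross-entropy loss."
   ]
  },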
  {
   "cell_type": "markdown",
   "id": "bc2a980d-d62f-4ad7-b1ed-a45e80d9455d",
   "metadata": {},
   "source": [
    "## Code Implementation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "b267f93f-ad83-480f-bcd2-2a4435bec23b",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:01.754537Z",
     "iopub.status.busy": "2024-07-16T13:25:01.753983Z",
     "iopub.status.idle": "2024-07-16T13:25:02.564911Z",
     "shell.execute_reply": "2024-07-16T13:25:02.562854Z",
     "shell.execute_reply.started": "2024-07-16T13:25:01.754497Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/usr/bin/zsh: /home/arcment/miniconda3/envs/paddle/lib/libtinfo.so.6: no version information available (required by /usr/bin/zsh)\n",
      "\u001b[33mWARNING: Package(s) not found: paddlepaddle\u001b[0m\u001b[33m\n",
      "\u001b[0mName: jieba\n",
      "Version: 0.42.1\n",
      "Summary: Chinese Words Segmentation Utilities\n",
      "Home-page: https://github.com/fxsjy/jieba\n",
      "Author: Sun, Junyi\n",
      "Author-email: ccnusjy@gmail.com\n",
      "License: MIT\n",
      "Location: /home/arcment/miniconda3/envs/paddle/lib/python3.9/site-packages\n",
      "Requires: \n",
      "Required-by: paddlenlp\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip show paddlepaddle jieba"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "e787ab40-9946-4045-82ff-dab646450003",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:02.568116Z",
     "iopub.status.busy": "2024-07-16T13:25:02.567817Z",
     "iopub.status.idle": "2024-07-16T13:25:02.599441Z",
     "shell.execute_reply": "2024-07-16T13:25:02.598951Z",
     "shell.execute_reply.started": "2024-07-16T13:25:02.568089Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "自学本科可以考研吗？,首先说取得自考本科学历后是有资格考研的，考研对自考学历的要求如下：自考本科有什么用处么？,自考本科是我国基本高等教育制度之一，自考文凭效力与个人的努力程度相关，成绩合格后由主考学校和高等教育自学考试委员会联合颁发大学毕业证书，国家承认学历，符合条件者由主考大学授予学士学位。自考本科优势：1.就业本科学历比专科学历找工作的优势显而易见，专科学历，无形之中将丧失许多理想的工作机会。当然，高学历并不必然能事业成功，许多没有学历的人一样创业很成功，但当今社会通常学历越高工作机会越多，上升空间越大，发展速度越快。2.工资定级我国国家机关和事业单位基本都是按照学历定工资，本科工资比专科工资高一档次，较规范的企业也是按学历定工资，如在苏州、上海、深圳等地外资企业或国内知名企业上班，上岗工资本科工资比专科工资高500元以上是正常的，而且本科以上的奖金和提升机会都比专科相对多一些，当然也有部分企业部分岗位，尤其是一些未成型的企业，并不以学历定岗，只考虑为其创造了多少效益。3.人事改革许多单位（尤其是国家机关和事业单位）提拔干部、竞选领导基本条件都是本科以上学历，即使自己完全可以胜任\n"
     ]
    }
   ],
   "source": [
    "# Read the corpus\n",
    "with open(\"data/corpus1.txt\", \"r\") as f:\n",
    "    corpus = f.read().replace(\"\\n\", '').replace(' ', '')\n",
    "\n",
    "# Print the first 500 characters for a quick look at the corpus\n",
    "print(corpus[:500])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "515d6971-be63-4d76-92cf-093889e754c2",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:02.600741Z",
     "iopub.status.busy": "2024-07-16T13:25:02.600560Z",
     "iopub.status.idle": "2024-07-16T13:25:05.965907Z",
     "shell.execute_reply": "2024-07-16T13:25:05.965509Z",
     "shell.execute_reply.started": "2024-07-16T13:25:02.600726Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Building prefix dict from the default dictionary ...\n",
      "Loading model from cache /tmp/jieba.cache\n",
      "Loading model cost 0.374 seconds.\n",
      "Prefix dict has been built successfully.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['自学', '本科', '可以', '考研', '吗', '？', ',', '首先', '说', '取得', '自考', '本科学历', '后', '是', '有', '资格', '考研', '的', '，', '考研', '对', '自考', '学历', '的', '要求', '如下', '：', '自考', '本科', '有', '什么', '用处', '么', '？', ',', '自考', '本科', '是', '我国', '基本', '高等教育', '制度', '之一', '，', '自考', '文凭', '效力', '与', '个人', '的']\n"
     ]
    }
   ],
   "source": [
    "# Preprocess the corpus: segment it into words with jieba\n",
    "import jieba\n",
    "def data_preprocess(corpus):\n",
    "    corpus = corpus.strip()\n",
    "    corpus = jieba.lcut(corpus)\n",
    "    return corpus\n",
    "\n",
    "corpus = data_preprocess(corpus)\n",
    "print(corpus[:50])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "fde53134-f4e6-457d-a1f7-120da0b2f13e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:05.966551Z",
     "iopub.status.busy": "2024-07-16T13:25:05.966397Z",
     "iopub.status.idle": "2024-07-16T13:25:06.143385Z",
     "shell.execute_reply": "2024-07-16T13:25:06.142977Z",
     "shell.execute_reply.started": "2024-07-16T13:25:05.966538Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "there are totally 34209 different words in the corpus\n",
      "word ，, its id 0, its word freq 73249\n",
      "word 的, its id 1, its word freq 52380\n",
      "word 。, its id 2, its word freq 29568\n",
      "word 、, its id 3, its word freq 22482\n",
      "word 是, its id 4, its word freq 16029\n",
      "word ,, its id 5, its word freq 15837\n",
      "word 自考, its id 6, its word freq 13352\n",
      "word 可以, its id 7, its word freq 11946\n",
      "word 你, its id 8, its word freq 9458\n",
      "word 考试, its id 9, its word freq 8941\n",
      "word 有, its id 10, its word freq 8864\n",
      "word 专业, its id 11, its word freq 8857\n",
      "word 本科, its id 12, its word freq 8494\n",
      "word 在, its id 13, its word freq 8361\n",
      "word 和, its id 14, its word freq 7931\n",
      "word 了, its id 15, its word freq 5900\n",
      "word 学历, its id 16, its word freq 5509\n",
      "word ：, its id 17, its word freq 5352\n",
      "word 报考, its id 18, its word freq 5273\n",
      "word 专升本, its id 19, its word freq 5147\n",
      "word 都, its id 20, its word freq 5081\n",
      "word 也, its id 21, its word freq 4988\n",
      "word ）, its id 22, its word freq 4825\n",
      "word （, its id 23, its word freq 4762\n",
      "word 报名, its id 24, its word freq 4375\n",
      "word 学校, its id 25, its word freq 4365\n",
      "word 就, its id 26, its word freq 4356\n",
      "word 不, its id 27, its word freq 4304\n",
      "word ？, its id 28, its word freq 4026\n",
      "word 专科, its id 29, its word freq 3819\n",
      "word 吗, its id 30, its word freq 3736\n",
      "word 要, its id 31, its word freq 3537\n",
      "word 教育, its id 32, its word freq 3393\n",
      "word 毕业, its id 33, its word freq 3253\n",
      "word 考, its id 34, its word freq 3173\n",
      "word 没有, its id 35, its word freq 3108\n",
      "word 年, its id 36, its word freq 3021\n",
      "word 需要, its id 37, its word freq 3003\n",
      "word ；, its id 38, its word freq 2951\n",
      "word 学习, its id 39, its word freq 2915\n",
      "word 什么, its id 40, its word freq 2879\n",
      "word 自学, its id 41, its word freq 2825\n",
      "word 时间, its id 42, its word freq 2825\n",
      "word 与, its id 43, its word freq 2783\n",
      "word 我, its id 44, its word freq 2776\n",
      "word 到, its id 45, its word freq 2728\n",
      "word 工作, its id 46, its word freq 2701\n",
      "word 大学, its id 47, its word freq 2665\n",
      "word 如果, its id 48, its word freq 2602\n",
      "word 自己, its id 49, its word freq 2592\n"
     ]
    }
   ],
   "source": [
    "# Build the vocabulary: count each word's frequency and assign each word an integer id by frequency\n",
    "def build_dict(corpus):\n",
    "    # First count how many times each distinct word occurs, using a dict\n",
    "    word_freq_dict = dict()\n",
    "    for word in corpus:\n",
    "        if word not in word_freq_dict:\n",
    "            word_freq_dict[word] = 0\n",
    "        word_freq_dict[word] += 1\n",
    "\n",
    "    # Sort the words by frequency, most frequent first\n",
    "    word_freq_dict = sorted(word_freq_dict.items(), key = lambda x:x[1], reverse = True)\n",
    "\n",
    "    # Build 3 dictionaries:\n",
    "    # word2id_dict: word -> id\n",
    "    # word2id_freq: id -> frequency\n",
    "    # id2word_dict: id -> word\n",
    "    word2id_dict = dict()\n",
    "    word2id_freq = dict()\n",
    "    id2word_dict = dict()\n",
    "\n",
    "    # Walk through the words from most to least frequent, assigning each a unique id\n",
    "    for word, freq in word_freq_dict:\n",
    "        curr_id = len(word2id_dict)\n",
    "        word2id_dict[word] = curr_id\n",
    "        word2id_freq[word2id_dict[word]] = freq\n",
    "        id2word_dict[curr_id] = word\n",
    "\n",
    "    return word2id_freq, word2id_dict, id2word_dict\n",
    "\n",
    "word2id_freq, word2id_dict, id2word_dict = build_dict(corpus)\n",
    "vocab_size = len(word2id_freq)\n",
    "\n",
    "print(\"there are totally %d different words in the corpus\" % vocab_size)\n",
    "for _, (word, word_id) in zip(range(50), word2id_dict.items()):\n",
    "    print(\"word %s, its id %d, its word freq %d\" % (word, word_id, word2id_freq[word_id]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "fdc7e718-6487-4e18-997c-f70695f8f475",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:06.144126Z",
     "iopub.status.busy": "2024-07-16T13:25:06.143968Z",
     "iopub.status.idle": "2024-07-16T13:25:06.214250Z",
     "shell.execute_reply": "2024-07-16T13:25:06.213867Z",
     "shell.execute_reply.started": "2024-07-16T13:25:06.144114Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1071029 tokens in the corpus\n",
      "[41, 12, 7, 153, 30, 28, 5, 473, 163, 170, 6, 183, 66, 4, 10, 321, 153, 1, 0, 153, 101, 6, 16, 1, 61, 750, 17, 6, 12, 10, 40, 1972, 574, 28, 5, 6, 12, 4, 494, 250, 127, 895, 556, 0, 6, 192, 1722, 43, 274, 1]\n"
     ]
    }
   ],
   "source": [
    "# Convert the corpus to a sequence of ids\n",
    "def convert_corpus_to_id(corpus, word2id_dict):\n",
    "    # Replace every word in the corpus with its id so the network can process it\n",
    "    corpus = [word2id_dict[word] for word in corpus]\n",
    "    return corpus\n",
    "\n",
    "corpus = convert_corpus_to_id(corpus, word2id_dict)\n",
    "\n",
    "print(\"%d tokens in the corpus\" % len(corpus))\n",
    "print(corpus[:50])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "9701f1e2-c58f-4212-99d2-313af395a24e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:06.215040Z",
     "iopub.status.busy": "2024-07-16T13:25:06.214884Z",
     "iopub.status.idle": "2024-07-16T13:25:06.495638Z",
     "shell.execute_reply": "2024-07-16T13:25:06.495201Z",
     "shell.execute_reply.started": "2024-07-16T13:25:06.215027Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "440833 tokens in the corpus\n",
      "[473, 66, 321, 750, 40, 1972, 574, 12, 127, 895, 192, 1722, 120, 14, 127, 1210, 1561, 514, 55, 3085, 689, 311, 197, 577, 6819, 7608, 2554, 671, 1308, 349, 2341, 2082, 6267, 671, 113, 1598, 118, 4846, 855, 5437, 299, 1436, 2792, 1699, 5830, 266, 2256, 1436, 948, 52]\n"
     ]
    }
   ],
   "source": [
    "import random\n",
    "import math\n",
    "\n",
    "# Apply subsampling to the corpus to improve training\n",
    "def subsampling(corpus, word2id_freq):\n",
    "\n",
    "    # discard decides whether a token is dropped; it is random, so repeated calls differ.\n",
    "    # The more frequent a word is, the more likely it is to be discarded.\n",
    "    def discard(word_id):\n",
    "        return random.uniform(0, 1) < 1 - math.sqrt(1e-4 / word2id_freq[word_id] * len(corpus))\n",
    "\n",
    "    corpus = [word for word in corpus if not discard(word)]\n",
    "    return corpus\n",
    "\n",
    "corpus = subsampling(corpus, word2id_freq)\n",
    "\n",
    "print(\"%d tokens in the corpus\" % len(corpus))\n",
    "print(corpus[:50])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a90aa45-5447-4a83-87fb-f5167fc5c97d",
   "metadata": {},
   "source": [
    "**Negative sampling**  \n",
    "With the corpus preprocessed, we need to build the training data. As described above, a sliding window scans the corpus from left to right; within each window, the center word must predict its context, which yields the training examples.\n",
    "\n",
    "In practice the vocabulary is often large (50,000, 100,000, or more), and matrix operations such as softmax over such a large vocabulary consume enormous resources, so negative sampling is used to approximate the softmax result:\n",
    "\n",
    "* Given a center word and a context word it should predict, treat that context word as a positive sample.\n",
    "* Randomly sample a number of words from the vocabulary as negative samples.\n",
    "* This turns a large-scale classification problem into a binary classification problem, which is how the speedup is obtained."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "1f321c75-2a11-40f6-b86a-fb839b13439c",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:06.496482Z",
     "iopub.status.busy": "2024-07-16T13:25:06.496259Z",
     "iopub.status.idle": "2024-07-16T13:25:09.840803Z",
     "shell.execute_reply": "2024-07-16T13:25:09.840176Z",
     "shell.execute_reply.started": "2024-07-16T13:25:06.496468Z"
    }
   },
   "outputs": [],
   "source": [
    "# Build the dataset for training.\n",
    "# max_window_size is the maximum window size; the program scans the corpus left to right with it.\n",
    "# negative_sample_num is how many negative samples to draw per positive sample;\n",
    "# larger values generally make training more stable but slower.\n",
    "def build_data(corpus, word2id_dict, word2id_freq, max_window_size = 2, negative_sample_num = 4):\n",
    "\n",
    "    # Store the processed examples in a list\n",
    "    dataset = []\n",
    "\n",
    "    # Enumerate every center-word position from left to right\n",
    "    for center_word_idx in range(len(corpus)):\n",
    "        # Sample a window_size up to max_window_size; this makes training more stable\n",
    "        window_size = random.randint(1, max_window_size)\n",
    "        # The current center word is the one at center_word_idx\n",
    "        center_word = corpus[center_word_idx]\n",
    "\n",
    "        # All words within window_size on either side of the center word count as positive samples\n",
    "        positive_word_range = (max(0, center_word_idx - window_size), min(len(corpus) - 1, center_word_idx + window_size))\n",
    "        positive_word_candidates = [corpus[idx] for idx in range(positive_word_range[0], positive_word_range[1]+1) if idx != center_word_idx]\n",
    "\n",
    "        # For every positive sample, draw negative_sample_num negative samples for training\n",
    "        for positive_word in positive_word_candidates:\n",
    "            # First add the triple (center word, positive sample, label=1) to the dataset;\n",
    "            # label=1 marks a positive sample\n",
    "            dataset.append((center_word, positive_word, 1))\n",
    "\n",
    "            # Negative sampling\n",
    "            i = 0\n",
    "            while i < negative_sample_num:\n",
    "                negative_word_candidate = random.randint(0, vocab_size-1)\n",
    "\n",
    "                if negative_word_candidate not in positive_word_candidates:\n",
    "                    # Add the triple (center word, negative sample, label=0) to the dataset;\n",
    "                    # label=0 marks a negative sample\n",
    "                    dataset.append((center_word, negative_word_candidate, 0))\n",
    "                    i += 1\n",
    "\n",
    "    return dataset\n",
    "\n",
    "data = build_data(corpus, word2id_dict, word2id_freq)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b5e5b40c-8462-419b-a366-f481836582ae",
   "metadata": {},
   "source": [
    "With the training data ready, we use Dataset and DataLoader to build a data reader that assembles the training data into mini-batches ready to be fed into the network:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "0ba7cab7-9728-45a5-bf70-7810776a7795",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:09.841552Z",
     "iopub.status.busy": "2024-07-16T13:25:09.841423Z",
     "iopub.status.idle": "2024-07-16T13:25:11.147160Z",
     "shell.execute_reply": "2024-07-16T13:25:11.146667Z",
     "shell.execute_reply.started": "2024-07-16T13:25:09.841539Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Tensor(shape=[128], dtype=int64, place=Place(gpu_pinned), stop_gradient=True,\n",
      "       [10   , 137  , 497  , 1746 , 325  , 67   , 228  , 2529 , 3328 , 478  ,\n",
      "        21585, 1975 , 884  , 1031 , 73   , 4205 , 1147 , 23696, 72   , 121  ,\n",
      "        3952 , 506  , 1    , 358  , 7642 , 116  , 24   , 365  , 2928 , 5451 ,\n",
      "        542  , 245  , 196  , 367  , 1679 , 148  , 9132 , 1698 , 784  , 356  ,\n",
      "        5    , 608  , 1299 , 2839 , 495  , 264  , 330  , 26494, 14   , 1047 ,\n",
      "        6186 , 3813 , 505  , 2272 , 123  , 478  , 12372, 27166, 2148 , 756  ,\n",
      "        1233 , 6533 , 17716, 23   , 17530, 13922, 1376 , 2    , 665  , 2148 ,\n",
      "        299  , 1228 , 2750 , 753  , 981  , 1560 , 98   , 4136 , 27471, 1899 ,\n",
      "        3259 , 134  , 1105 , 202  , 2683 , 1343 , 5918 , 88   , 5584 , 1212 ,\n",
      "        2131 , 7675 , 1022 , 2276 , 4062 , 4124 , 15564, 8    , 1303 , 7112 ,\n",
      "        107  , 635  , 768  , 695  , 149  , 248  , 656  , 673  , 1335 , 96   ,\n",
      "        401  , 78   , 2194 , 1307 , 1624 , 3379 , 4447 , 2364 , 4395 , 0    ,\n",
      "        1083 , 3332 , 2497 , 188  , 1663 , 32728, 896  , 414  ]) Tensor(shape=[128], dtype=int64, place=Place(gpu_pinned), stop_gradient=True,\n",
      "       [7073 , 31251, 28621, 409  , 18796, 10368, 11751, 12473, 10449, 3881 ,\n",
      "        11327, 16754, 29176, 9959 , 392  , 374  , 25894, 33693, 20154, 28312,\n",
      "        3045 , 41   , 21335, 25747, 14643, 8387 , 22066, 4863 , 21515, 25198,\n",
      "        13954, 3512 , 1873 , 296  , 17048, 26265, 5841 , 132  , 1771 , 29919,\n",
      "        5007 , 24597, 26737, 23169, 10900, 17695, 1300 , 10432, 9937 , 15281,\n",
      "        13580, 112  , 3027 , 2332 , 22213, 19384, 3032 , 9545 , 29467, 20353,\n",
      "        215  , 19603, 27740, 19303, 25757, 19064, 32942, 20163, 25765, 32168,\n",
      "        24070, 32646, 22861, 29770, 23312, 28382, 3910 , 27136, 1311 , 2466 ,\n",
      "        10312, 23594, 11340, 20586, 19126, 2004 , 24630, 92   , 25913, 1505 ,\n",
      "        4587 , 1572 , 30287, 751  , 17477, 29145, 1212 , 1545 , 431  , 4078 ,\n",
      "        16732, 3001 , 3752 , 28584, 12274, 28062, 15794, 9728 , 25896, 2959 ,\n",
      "        21809, 27109, 7325 , 5414 , 25172, 24612, 13970, 15538, 25091, 26932,\n",
      "        6922 , 6570 , 6969 , 26738, 121  , 4046 , 7140 , 921  ]) Tensor(shape=[128], dtype=int64, place=Place(gpu_pinned), stop_gradient=True,\n",
      "       [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0,\n",
      "        0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0,\n",
      "        0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
      "        0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1,\n",
      "        1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,\n",
      "        0, 0, 1, 0, 1, 0, 0, 1])\n"
     ]
    }
   ],
   "source": [
    "from paddle.io import Dataset, DataLoader\n",
    "\n",
    "class MyDataset(Dataset):\n",
    "    def __init__(self, data):\n",
    "        self._data = data\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self._data)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        example = self._data[idx]\n",
    "        center_word = example[0]\n",
    "        target_word = example[1]\n",
    "        label = example[2]\n",
    "\n",
    "        return center_word, target_word, label\n",
    "\n",
    "dataset = MyDataset(data)\n",
    "dataloader = DataLoader(dataset, batch_size=128, shuffle=True)\n",
    "\n",
    "for _, (center_words, target_words, label) in zip(range(1), dataloader):\n",
    "    print(center_words, target_words, label)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "37d1c9c3-1984-4470-b3b3-b4388a9c9804",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:11.148068Z",
     "iopub.status.busy": "2024-07-16T13:25:11.147883Z",
     "iopub.status.idle": "2024-07-16T13:25:11.152058Z",
     "shell.execute_reply": "2024-07-16T13:25:11.151766Z",
     "shell.execute_reply.started": "2024-07-16T13:25:11.148055Z"
    }
   },
   "outputs": [],
   "source": [
    "#定义skip-gram训练网络结构\n",
    "\n",
    "import paddle.nn as nn\n",
    "from paddle.nn import Embedding\n",
    "\n",
    "\n",
    "class SkipGram(nn.Layer):\n",
    "    def __init__(self, vocab_size, embedding_size, init_scale=0.1):\n",
    "        #vocab_size定义了这个skipgram这个模型的词表大小\n",
    "        #embedding_size定义了词向量的维度是多少\n",
    "        super(SkipGram, self).__init__()\n",
    "        self.vocab_size = vocab_size\n",
    "        self.embedding_size = embedding_size\n",
    "\n",
    "        #构造一个词向量参数，这个参数的大小为：[self.vocab_size, self.embedding_size]\n",
    "        self.embedding = Embedding(vocab_size, embedding_size, sparse=True)\n",
    "\n",
    "        #构造另外一个词向量参数\n",
    "        self.embedding_out = Embedding(vocab_size, embedding_size, sparse=True)\n",
    "\n",
    "    #定义网络的前向计算逻辑\n",
    "    #center_words是一个tensor（mini-batch），表示中心词\n",
    "    #target_words是一个tensor（mini-batch），表示目标词\n",
    "    #label是一个tensor（mini-batch），表示这个词是正样本还是负样本（用0或1表示）\n",
    "    #用于在训练中计算这个tensor中对应词的同义词，用于观察模型的训练效果\n",
    "    def forward(self, center_words, target_words, label):\n",
    "        #首先，通过embedding_para（self.embedding）参数，将mini-batch中的词转换为词向量\n",
    "        #这里center_words和eval_words_emb查询的是一个相同的参数\n",
    "        #而target_words_emb查询的是另一个参数\n",
    "        center_words_emb = self.embedding(center_words)\n",
    "        target_words_emb = self.embedding_out(target_words)\n",
    "\n",
    "        #我们通过点乘的方式计算中心词到目标词的输出概率，并通过sigmoid函数估计这个词是正样本还是负样本的概率。\n",
    "        word_sim = paddle.multiply(center_words_emb, target_words_emb)\n",
    "        word_sim = paddle.sum(word_sim, axis = -1)\n",
    "        word_sim = paddle.reshape(word_sim, shape=[-1])\n",
    "        pred =  paddle.nn.functional.sigmoid(word_sim)\n",
    "\n",
    "        #通过估计的输出概率定义损失函数，注意我们使用的是sigmoid_cross_entropy_with_logits函数\n",
    "        #将sigmoid计算和cross entropy合并成一步计算可以更好的优化，所以输入的是word_sim，而不是pred\n",
    "        \n",
    "        loss = paddle.nn.functional.binary_cross_entropy_with_logits(word_sim, label)\n",
    "        loss = paddle.mean(loss)\n",
    "\n",
    "        #Return the forward results; Paddle derives the backward pass automatically\n",
    "        return pred, loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "581e788a-e3e2-445f-b096-192ea8beef8e",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:11.152632Z",
     "iopub.status.busy": "2024-07-16T13:25:11.152475Z",
     "iopub.status.idle": "2024-07-16T13:25:11.155362Z",
     "shell.execute_reply": "2024-07-16T13:25:11.155097Z",
     "shell.execute_reply.started": "2024-07-16T13:25:11.152620Z"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "#Define a function that uses the learned word embeddings to find similar words\n",
    "#query_token is the word to query, k is the number of most-similar words to return,\n",
    "#and embed is the learned word-embedding parameter\n",
    "#Similarity between words is measured by cosine similarity:\n",
    "#x is the embedding of the query word, and the parameter matrix W holds the embeddings of all words;\n",
    "#their cosine scores against x are sorted, and the indices of the top k go into indices\n",
    "def get_similar_tokens(query_token, k, embed):\n",
    "    W = embed.numpy()\n",
    "    x = W[word2id_dict[query_token]]\n",
    "    cos = np.dot(W, x) / np.sqrt(np.sum(W * W, axis=1) * np.sum(x * x) + 1e-9)\n",
    "    flat = cos.flatten()\n",
    "    indices = np.argpartition(flat, -k)[-k:]\n",
    "    indices = indices[np.argsort(-flat[indices])]\n",
    "    for i in indices:\n",
    "        print('for word %s, the similar word is %s' % (query_token, str(id2word_dict[i])))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "8b7a9fbb-aa4f-439a-8a30-a4108c72cdf5",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-07-16T13:25:11.156001Z",
     "iopub.status.busy": "2024-07-16T13:25:11.155761Z",
     "iopub.status.idle": "2024-07-16T13:31:43.891258Z",
     "shell.execute_reply": "2024-07-16T13:31:43.890664Z",
     "shell.execute_reply.started": "2024-07-16T13:25:11.155989Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "W0716 21:25:11.158685 67261 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.3, Runtime API Version: 11.8\n",
      "W0716 21:25:11.159274 67261 gpu_resources.cc:91] device: 0, cuDNN Version: 8.9.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 1, step 1000, loss 0.693\n",
      "epoch 1, step 2000, loss 0.687\n",
      "epoch 1, step 3000, loss 0.654\n",
      "epoch 1, step 4000, loss 0.586\n",
      "epoch 1, step 5000, loss 0.531\n",
      "epoch 1, step 6000, loss 0.466\n",
      "epoch 1, step 7000, loss 0.376\n",
      "epoch 1, step 8000, loss 0.397\n",
      "epoch 1, step 9000, loss 0.380\n",
      "epoch 1, step 10000, loss 0.322\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is ）\n",
      "for word 考试, the similar word is 4\n",
      "for word 考试, the similar word is 自考\n",
      "for word 考试, the similar word is ？\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 学历\n",
      "for word 本科, the similar word is ,\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 3\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is ？\n",
      "for word 学校, the similar word is 没有\n",
      "for word 学校, the similar word is 可以\n",
      "for word 学校, the similar word is 《\n",
      "epoch 1, step 11000, loss 0.253\n",
      "epoch 1, step 12000, loss 0.270\n",
      "epoch 1, step 13000, loss 0.326\n",
      "epoch 1, step 14000, loss 0.346\n",
      "epoch 1, step 15000, loss 0.284\n",
      "epoch 1, step 16000, loss 0.266\n",
      "epoch 1, step 17000, loss 0.268\n",
      "epoch 1, step 18000, loss 0.348\n",
      "epoch 1, step 19000, loss 0.217\n",
      "epoch 1, step 20000, loss 0.283\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 4\n",
      "for word 考试, the similar word is 自考\n",
      "for word 考试, the similar word is 毕业证书\n",
      "for word 考试, the similar word is 以上\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 学历\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 不\n",
      "for word 本科, the similar word is 和\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 没有\n",
      "for word 学校, the similar word is ？\n",
      "for word 学校, the similar word is 可以\n",
      "for word 学校, the similar word is 研究生\n",
      "epoch 1, step 21000, loss 0.247\n",
      "epoch 1, step 22000, loss 0.224\n",
      "epoch 1, step 23000, loss 0.304\n",
      "epoch 1, step 24000, loss 0.220\n",
      "epoch 1, step 25000, loss 0.351\n",
      "epoch 1, step 26000, loss 0.231\n",
      "epoch 1, step 27000, loss 0.290\n",
      "epoch 1, step 28000, loss 0.304\n",
      "epoch 1, step 29000, loss 0.267\n",
      "epoch 1, step 30000, loss 0.268\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 自考\n",
      "for word 考试, the similar word is 现在\n",
      "for word 考试, the similar word is 4\n",
      "for word 考试, the similar word is 考生\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 学历\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is ？\n",
      "for word 本科, the similar word is 有\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 很\n",
      "for word 学校, the similar word is 毕业\n",
      "for word 学校, the similar word is 如果\n",
      "for word 学校, the similar word is 通过\n",
      "epoch 1, step 31000, loss 0.430\n",
      "epoch 1, step 32000, loss 0.228\n",
      "epoch 1, step 33000, loss 0.272\n",
      "epoch 1, step 34000, loss 0.358\n",
      "epoch 1, step 35000, loss 0.218\n",
      "epoch 1, step 36000, loss 0.247\n",
      "epoch 1, step 37000, loss 0.188\n",
      "epoch 1, step 38000, loss 0.378\n",
      "epoch 1, step 39000, loss 0.202\n",
      "epoch 1, step 40000, loss 0.202\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 现在\n",
      "for word 考试, the similar word is 自考\n",
      "for word 考试, the similar word is 4\n",
      "for word 考试, the similar word is 后\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 不\n",
      "for word 本科, the similar word is 学历\n",
      "for word 本科, the similar word is 有\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is ？\n",
      "for word 学校, the similar word is 可以\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is 专升本\n",
      "epoch 1, step 41000, loss 0.293\n",
      "epoch 1, step 42000, loss 0.195\n",
      "epoch 1, step 43000, loss 0.259\n",
      "epoch 1, step 44000, loss 0.315\n",
      "epoch 1, step 45000, loss 0.309\n",
      "epoch 1, step 46000, loss 0.265\n",
      "epoch 1, step 47000, loss 0.312\n",
      "epoch 1, step 48000, loss 0.302\n",
      "epoch 1, step 49000, loss 0.261\n",
      "epoch 1, step 50000, loss 0.281\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 自考\n",
      "for word 考试, the similar word is 现在\n",
      "for word 考试, the similar word is ；\n",
      "for word 考试, the similar word is 上\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 不\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 学历\n",
      "for word 本科, the similar word is 吗\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is ？\n",
      "for word 学校, the similar word is 可以\n",
      "for word 学校, the similar word is 专升本\n",
      "epoch 1, step 51000, loss 0.403\n",
      "epoch 1 model is saved.\n",
      "epoch 2, step 1000, loss 0.358\n",
      "epoch 2, step 2000, loss 0.261\n",
      "epoch 2, step 3000, loss 0.261\n",
      "epoch 2, step 4000, loss 0.199\n",
      "epoch 2, step 5000, loss 0.197\n",
      "epoch 2, step 6000, loss 0.398\n",
      "epoch 2, step 7000, loss 0.304\n",
      "epoch 2, step 8000, loss 0.248\n",
      "epoch 2, step 9000, loss 0.158\n",
      "epoch 2, step 10000, loss 0.253\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 现在\n",
      "for word 考试, the similar word is 安排\n",
      "for word 考试, the similar word is 自考\n",
      "for word 考试, the similar word is 报考\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 不\n",
      "for word 本科, the similar word is ？\n",
      "for word 本科, the similar word is 是\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is 毕业\n",
      "for word 学校, the similar word is 没有\n",
      "for word 学校, the similar word is ？\n",
      "epoch 2, step 11000, loss 0.188\n",
      "epoch 2, step 12000, loss 0.280\n",
      "epoch 2, step 13000, loss 0.347\n",
      "epoch 2, step 14000, loss 0.269\n",
      "epoch 2, step 15000, loss 0.328\n",
      "epoch 2, step 16000, loss 0.251\n",
      "epoch 2, step 17000, loss 0.292\n",
      "epoch 2, step 18000, loss 0.264\n",
      "epoch 2, step 19000, loss 0.389\n",
      "epoch 2, step 20000, loss 0.193\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 安排\n",
      "for word 考试, the similar word is 每次\n",
      "for word 考试, the similar word is 现在\n",
      "for word 考试, the similar word is 3\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 不\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is ？\n",
      "for word 本科, the similar word is ,\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is 毕业\n",
      "for word 学校, the similar word is 去\n",
      "for word 学校, the similar word is 什么\n",
      "epoch 2, step 21000, loss 0.223\n",
      "epoch 2, step 22000, loss 0.141\n",
      "epoch 2, step 23000, loss 0.311\n",
      "epoch 2, step 24000, loss 0.376\n",
      "epoch 2, step 25000, loss 0.211\n",
      "epoch 2, step 26000, loss 0.302\n",
      "epoch 2, step 27000, loss 0.310\n",
      "epoch 2, step 28000, loss 0.338\n",
      "epoch 2, step 29000, loss 0.320\n",
      "epoch 2, step 30000, loss 0.430\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 每次\n",
      "for word 考试, the similar word is 安排\n",
      "for word 考试, the similar word is 现在\n",
      "for word 考试, the similar word is 在\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 是\n",
      "for word 本科, the similar word is 吗\n",
      "for word 本科, the similar word is 不\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is 过\n",
      "for word 学校, the similar word is 去\n",
      "for word 学校, the similar word is 到\n",
      "epoch 2, step 31000, loss 0.415\n",
      "epoch 2, step 32000, loss 0.208\n",
      "epoch 2, step 33000, loss 0.279\n",
      "epoch 2, step 34000, loss 0.235\n",
      "epoch 2, step 35000, loss 0.286\n",
      "epoch 2, step 36000, loss 0.243\n",
      "epoch 2, step 37000, loss 0.269\n",
      "epoch 2, step 38000, loss 0.301\n",
      "epoch 2, step 39000, loss 0.389\n",
      "epoch 2, step 40000, loss 0.267\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 现在\n",
      "for word 考试, the similar word is 安排\n",
      "for word 考试, the similar word is 每次\n",
      "for word 考试, the similar word is 在\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 吗\n",
      "for word 本科, the similar word is ,\n",
      "for word 本科, the similar word is 学历\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is ？\n",
      "for word 学校, the similar word is 可以\n",
      "for word 学校, the similar word is ,\n",
      "epoch 2, step 41000, loss 0.362\n",
      "epoch 2, step 42000, loss 0.466\n",
      "epoch 2, step 43000, loss 0.221\n",
      "epoch 2, step 44000, loss 0.312\n",
      "epoch 2, step 45000, loss 0.228\n",
      "epoch 2, step 46000, loss 0.195\n",
      "epoch 2, step 47000, loss 0.176\n",
      "epoch 2, step 48000, loss 0.245\n",
      "epoch 2, step 49000, loss 0.294\n",
      "epoch 2, step 50000, loss 0.227\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 安排\n",
      "for word 考试, the similar word is 每次\n",
      "for word 考试, the similar word is 现在\n",
      "for word 考试, the similar word is 毕业\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 再\n",
      "for word 本科, the similar word is 大专\n",
      "for word 本科, the similar word is 是\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is 去\n",
      "for word 学校, the similar word is 跟\n",
      "for word 学校, the similar word is ？\n",
      "epoch 2, step 51000, loss 0.241\n",
      "epoch 2 model is saved.\n",
      "epoch 3, step 1000, loss 0.237\n",
      "epoch 3, step 2000, loss 0.244\n",
      "epoch 3, step 3000, loss 0.327\n",
      "epoch 3, step 4000, loss 0.165\n",
      "epoch 3, step 5000, loss 0.242\n",
      "epoch 3, step 6000, loss 0.312\n",
      "epoch 3, step 7000, loss 0.351\n",
      "epoch 3, step 8000, loss 0.311\n",
      "epoch 3, step 9000, loss 0.325\n",
      "epoch 3, step 10000, loss 0.360\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 安排\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 每次\n",
      "for word 考试, the similar word is 现在\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 大专\n",
      "for word 本科, the similar word is ,\n",
      "for word 本科, the similar word is 成考\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is 去\n",
      "for word 学校, the similar word is 也\n",
      "for word 学校, the similar word is 就\n",
      "epoch 3, step 11000, loss 0.458\n",
      "epoch 3, step 12000, loss 0.312\n",
      "epoch 3, step 13000, loss 0.366\n",
      "epoch 3, step 14000, loss 0.326\n",
      "epoch 3, step 15000, loss 0.369\n",
      "epoch 3, step 16000, loss 0.285\n",
      "epoch 3, step 17000, loss 0.250\n",
      "epoch 3, step 18000, loss 0.232\n",
      "epoch 3, step 19000, loss 0.202\n",
      "epoch 3, step 20000, loss 0.201\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 安排\n",
      "for word 考试, the similar word is 每年\n",
      "for word 考试, the similar word is 每次\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 再\n",
      "for word 本科, the similar word is 大专\n",
      "for word 本科, the similar word is 么\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 吗\n",
      "for word 学校, the similar word is 毕业证\n",
      "for word 学校, the similar word is 现在\n",
      "for word 学校, the similar word is 差不多\n",
      "epoch 3, step 21000, loss 0.386\n",
      "epoch 3, step 22000, loss 0.467\n",
      "epoch 3, step 23000, loss 0.277\n",
      "epoch 3, step 24000, loss 0.258\n",
      "epoch 3, step 25000, loss 0.216\n",
      "epoch 3, step 26000, loss 0.322\n",
      "epoch 3, step 27000, loss 0.399\n",
      "epoch 3, step 28000, loss 0.179\n",
      "epoch 3, step 29000, loss 0.335\n",
      "epoch 3, step 30000, loss 0.407\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 每次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 安排\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 再\n",
      "for word 本科, the similar word is 成考\n",
      "for word 本科, the similar word is 区别\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 毕业证\n",
      "for word 学校, the similar word is 现在\n",
      "for word 学校, the similar word is 不用\n",
      "for word 学校, the similar word is 之前\n",
      "epoch 3, step 31000, loss 0.240\n",
      "epoch 3, step 32000, loss 0.363\n",
      "epoch 3, step 33000, loss 0.269\n",
      "epoch 3, step 34000, loss 0.204\n",
      "epoch 3, step 35000, loss 0.305\n",
      "epoch 3, step 36000, loss 0.251\n",
      "epoch 3, step 37000, loss 0.335\n",
      "epoch 3, step 38000, loss 0.467\n",
      "epoch 3, step 39000, loss 0.181\n",
      "epoch 3, step 40000, loss 0.128\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 每次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 再\n",
      "for word 本科, the similar word is 区别\n",
      "for word 本科, the similar word is 本科生\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 只有\n",
      "for word 学校, the similar word is 之前\n",
      "for word 学校, the similar word is 毕业证\n",
      "for word 学校, the similar word is 大专生\n",
      "epoch 3, step 41000, loss 0.284\n",
      "epoch 3, step 42000, loss 0.162\n",
      "epoch 3, step 43000, loss 0.455\n",
      "epoch 3, step 44000, loss 0.273\n",
      "epoch 3, step 45000, loss 0.265\n",
      "epoch 3, step 46000, loss 0.217\n",
      "epoch 3, step 47000, loss 0.096\n",
      "epoch 3, step 48000, loss 0.205\n",
      "epoch 3, step 49000, loss 0.536\n",
      "epoch 3, step 50000, loss 0.121\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 每次\n",
      "for word 考试, the similar word is 次\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 区别\n",
      "for word 本科, the similar word is 成考\n",
      "for word 本科, the similar word is 再\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 只有\n",
      "for word 学校, the similar word is 大专生\n",
      "for word 学校, the similar word is 之前\n",
      "for word 学校, the similar word is 毕业证\n",
      "epoch 3, step 51000, loss 0.328\n",
      "epoch 3 model is saved.\n",
      "epoch 4, step 1000, loss 0.295\n",
      "epoch 4, step 2000, loss 0.170\n",
      "epoch 4, step 3000, loss 0.400\n",
      "epoch 4, step 4000, loss 0.315\n",
      "epoch 4, step 5000, loss 0.518\n",
      "epoch 4, step 6000, loss 0.316\n",
      "epoch 4, step 7000, loss 0.265\n",
      "epoch 4, step 8000, loss 0.319\n",
      "epoch 4, step 9000, loss 0.319\n",
      "epoch 4, step 10000, loss 0.176\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 次\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 本科学历\n",
      "for word 本科, the similar word is 成考\n",
      "for word 本科, the similar word is 才能\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 只有\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 是否\n",
      "epoch 4, step 11000, loss 0.217\n",
      "epoch 4, step 12000, loss 0.228\n",
      "epoch 4, step 13000, loss 0.242\n",
      "epoch 4, step 14000, loss 0.211\n",
      "epoch 4, step 15000, loss 0.176\n",
      "epoch 4, step 16000, loss 0.359\n",
      "epoch 4, step 17000, loss 0.230\n",
      "epoch 4, step 18000, loss 0.240\n",
      "epoch 4, step 19000, loss 0.421\n",
      "epoch 4, step 20000, loss 0.263\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 次\n",
      "for word 考试, the similar word is 两次\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 成考\n",
      "for word 本科, the similar word is 区别\n",
      "for word 本科, the similar word is 属于\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 大专生\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 只有\n",
      "epoch 4, step 21000, loss 0.251\n",
      "epoch 4, step 22000, loss 0.269\n",
      "epoch 4, step 23000, loss 0.432\n",
      "epoch 4, step 24000, loss 0.299\n",
      "epoch 4, step 25000, loss 0.179\n",
      "epoch 4, step 26000, loss 0.278\n",
      "epoch 4, step 27000, loss 0.216\n",
      "epoch 4, step 28000, loss 0.189\n",
      "epoch 4, step 29000, loss 0.135\n",
      "epoch 4, step 30000, loss 0.223\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 次\n",
      "for word 考试, the similar word is 两次\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 两者\n",
      "for word 本科, the similar word is 第一\n",
      "for word 本科, the similar word is 成考\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 是否\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 大专生\n",
      "epoch 4, step 31000, loss 0.433\n",
      "epoch 4, step 32000, loss 0.269\n",
      "epoch 4, step 33000, loss 0.294\n",
      "epoch 4, step 34000, loss 0.411\n",
      "epoch 4, step 35000, loss 0.220\n",
      "epoch 4, step 36000, loss 0.234\n",
      "epoch 4, step 37000, loss 0.158\n",
      "epoch 4, step 38000, loss 0.363\n",
      "epoch 4, step 39000, loss 0.346\n",
      "epoch 4, step 40000, loss 0.155\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 次\n",
      "for word 考试, the similar word is 两次\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 两者\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 升\n",
      "for word 本科, the similar word is 考个\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 大型\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 是否\n",
      "epoch 4, step 41000, loss 0.400\n",
      "epoch 4, step 42000, loss 0.201\n",
      "epoch 4, step 43000, loss 0.215\n",
      "epoch 4, step 44000, loss 0.166\n",
      "epoch 4, step 45000, loss 0.458\n",
      "epoch 4, step 46000, loss 0.301\n",
      "epoch 4, step 47000, loss 0.484\n",
      "epoch 4, step 48000, loss 0.281\n",
      "epoch 4, step 49000, loss 0.226\n",
      "epoch 4, step 50000, loss 0.254\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 次\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 两者\n",
      "for word 本科, the similar word is 读完\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 是否\n",
      "for word 学校, the similar word is 大专生\n",
      "epoch 4, step 51000, loss 0.285\n",
      "epoch 4 model is saved.\n",
      "epoch 5, step 1000, loss 0.359\n",
      "epoch 5, step 2000, loss 0.188\n",
      "epoch 5, step 3000, loss 0.335\n",
      "epoch 5, step 4000, loss 0.195\n",
      "epoch 5, step 5000, loss 0.268\n",
      "epoch 5, step 6000, loss 0.273\n",
      "epoch 5, step 7000, loss 0.204\n",
      "epoch 5, step 8000, loss 0.242\n",
      "epoch 5, step 9000, loss 0.351\n",
      "epoch 5, step 10000, loss 0.213\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 一周\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 两者\n",
      "for word 本科, the similar word is 升\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 是否\n",
      "for word 学校, the similar word is 唯一\n",
      "epoch 5, step 11000, loss 0.265\n",
      "epoch 5, step 12000, loss 0.142\n",
      "epoch 5, step 13000, loss 0.376\n",
      "epoch 5, step 14000, loss 0.212\n",
      "epoch 5, step 15000, loss 0.272\n",
      "epoch 5, step 16000, loss 0.228\n",
      "epoch 5, step 17000, loss 0.326\n",
      "epoch 5, step 18000, loss 0.229\n",
      "epoch 5, step 19000, loss 0.191\n",
      "epoch 5, step 20000, loss 0.257\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 一周\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 升\n",
      "for word 本科, the similar word is 民办\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 是否\n",
      "for word 学校, the similar word is 唯一\n",
      "epoch 5, step 21000, loss 0.235\n",
      "epoch 5, step 22000, loss 0.335\n",
      "epoch 5, step 23000, loss 0.416\n",
      "epoch 5, step 24000, loss 0.286\n",
      "epoch 5, step 25000, loss 0.287\n",
      "epoch 5, step 26000, loss 0.359\n",
      "epoch 5, step 27000, loss 0.222\n",
      "epoch 5, step 28000, loss 0.260\n",
      "epoch 5, step 29000, loss 0.179\n",
      "epoch 5, step 30000, loss 0.309\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 一周\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 升\n",
      "for word 本科, the similar word is 民办\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 是否\n",
      "epoch 5, step 31000, loss 0.334\n",
      "epoch 5, step 32000, loss 0.173\n",
      "epoch 5, step 33000, loss 0.670\n",
      "epoch 5, step 34000, loss 0.308\n",
      "epoch 5, step 35000, loss 0.262\n",
      "epoch 5, step 36000, loss 0.461\n",
      "epoch 5, step 37000, loss 0.238\n",
      "epoch 5, step 38000, loss 0.270\n",
      "epoch 5, step 39000, loss 0.512\n",
      "epoch 5, step 40000, loss 0.151\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 一周\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 升\n",
      "for word 本科, the similar word is 省内\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 由\n",
      "epoch 5, step 41000, loss 0.410\n",
      "epoch 5, step 42000, loss 0.165\n",
      "epoch 5, step 43000, loss 0.288\n",
      "epoch 5, step 44000, loss 0.184\n",
      "epoch 5, step 45000, loss 0.210\n",
      "epoch 5, step 46000, loss 0.218\n",
      "epoch 5, step 47000, loss 0.411\n",
      "epoch 5, step 48000, loss 0.337\n",
      "epoch 5, step 49000, loss 0.195\n",
      "epoch 5, step 50000, loss 0.248\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 次数\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 省内\n",
      "for word 本科, the similar word is 两者\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 由\n",
      "epoch 5, step 51000, loss 0.259\n",
      "epoch 5 model is saved.\n",
      "epoch 6, step 1000, loss 0.254\n",
      "epoch 6, step 2000, loss 0.432\n",
      "epoch 6, step 3000, loss 0.320\n",
      "epoch 6, step 4000, loss 0.169\n",
      "epoch 6, step 5000, loss 0.163\n",
      "epoch 6, step 6000, loss 0.487\n",
      "epoch 6, step 7000, loss 0.143\n",
      "epoch 6, step 8000, loss 0.213\n",
      "epoch 6, step 9000, loss 0.217\n",
      "epoch 6, step 10000, loss 0.349\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 次数\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 省内\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 民办\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 由\n",
      "epoch 6, step 11000, loss 0.266\n",
      "epoch 6, step 12000, loss 0.393\n",
      "epoch 6, step 13000, loss 0.241\n",
      "epoch 6, step 14000, loss 0.131\n",
      "epoch 6, step 15000, loss 0.265\n",
      "epoch 6, step 16000, loss 0.275\n",
      "epoch 6, step 17000, loss 0.188\n",
      "epoch 6, step 18000, loss 0.192\n",
      "epoch 6, step 19000, loss 0.252\n",
      "epoch 6, step 20000, loss 0.381\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 次数\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 省内\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 两者\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 是否\n",
      "epoch 6, step 21000, loss 0.234\n",
      "epoch 6, step 22000, loss 0.212\n",
      "epoch 6, step 23000, loss 0.300\n",
      "epoch 6, step 24000, loss 0.143\n",
      "epoch 6, step 25000, loss 0.151\n",
      "epoch 6, step 26000, loss 0.454\n",
      "epoch 6, step 27000, loss 0.264\n",
      "epoch 6, step 28000, loss 0.185\n",
      "epoch 6, step 29000, loss 0.229\n",
      "epoch 6, step 30000, loss 0.137\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 次数\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 跨\n",
      "for word 本科, the similar word is 可不可以\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 严格\n",
      "for word 学校, the similar word is 由\n",
      "epoch 6, step 31000, loss 0.244\n",
      "epoch 6, step 32000, loss 0.342\n",
      "epoch 6, step 33000, loss 0.228\n",
      "epoch 6, step 34000, loss 0.287\n",
      "epoch 6, step 35000, loss 0.243\n",
      "epoch 6, step 36000, loss 0.302\n",
      "epoch 6, step 37000, loss 0.202\n",
      "epoch 6, step 38000, loss 0.211\n",
      "epoch 6, step 39000, loss 0.306\n",
      "epoch 6, step 40000, loss 0.384\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 次数\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 省内\n",
      "for word 本科, the similar word is 跨\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 由\n",
      "for word 学校, the similar word is 严格\n",
      "epoch 6, step 41000, loss 0.252\n",
      "epoch 6, step 42000, loss 0.293\n",
      "epoch 6, step 43000, loss 0.185\n",
      "epoch 6, step 44000, loss 0.327\n",
      "epoch 6, step 45000, loss 0.311\n",
      "epoch 6, step 46000, loss 0.175\n",
      "epoch 6, step 47000, loss 0.237\n",
      "epoch 6, step 48000, loss 0.449\n",
      "epoch 6, step 49000, loss 0.308\n",
      "epoch 6, step 50000, loss 0.116\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 可不可以\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 由\n",
      "for word 学校, the similar word is 基本上\n",
      "epoch 6, step 51000, loss 0.220\n",
      "epoch 6 model is saved.\n",
      "epoch 7, step 1000, loss 0.437\n",
      "epoch 7, step 2000, loss 0.213\n",
      "epoch 7, step 3000, loss 0.266\n",
      "epoch 7, step 4000, loss 0.118\n",
      "epoch 7, step 5000, loss 0.198\n",
      "epoch 7, step 6000, loss 0.130\n",
      "epoch 7, step 7000, loss 0.107\n",
      "epoch 7, step 8000, loss 0.292\n",
      "epoch 7, step 9000, loss 0.195\n",
      "epoch 7, step 10000, loss 0.287\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 技校\n",
      "for word 本科, the similar word is 可不可以\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 由\n",
      "for word 学校, the similar word is 基本上\n",
      "epoch 7, step 11000, loss 0.176\n",
      "epoch 7, step 12000, loss 0.207\n",
      "epoch 7, step 13000, loss 0.224\n",
      "epoch 7, step 14000, loss 0.288\n",
      "epoch 7, step 15000, loss 0.228\n",
      "epoch 7, step 16000, loss 0.356\n",
      "epoch 7, step 17000, loss 0.142\n",
      "epoch 7, step 18000, loss 0.106\n",
      "epoch 7, step 19000, loss 0.492\n",
      "epoch 7, step 20000, loss 0.157\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 念\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 由\n",
      "for word 学校, the similar word is 仅\n",
      "epoch 7, step 21000, loss 0.253\n",
      "epoch 7, step 22000, loss 0.237\n",
      "epoch 7, step 23000, loss 0.266\n",
      "epoch 7, step 24000, loss 0.263\n",
      "epoch 7, step 25000, loss 0.345\n",
      "epoch 7, step 26000, loss 0.240\n",
      "epoch 7, step 27000, loss 0.311\n",
      "epoch 7, step 28000, loss 0.274\n",
      "epoch 7, step 29000, loss 0.367\n",
      "epoch 7, step 30000, loss 0.235\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 军校\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 由\n",
      "for word 学校, the similar word is 民办高校\n",
      "epoch 7, step 31000, loss 0.147\n",
      "epoch 7, step 32000, loss 0.240\n",
      "epoch 7, step 33000, loss 0.366\n",
      "epoch 7, step 34000, loss 0.180\n",
      "epoch 7, step 35000, loss 0.169\n",
      "epoch 7, step 36000, loss 0.356\n",
      "epoch 7, step 37000, loss 0.372\n",
      "epoch 7, step 38000, loss 0.198\n",
      "epoch 7, step 39000, loss 0.191\n",
      "epoch 7, step 40000, loss 0.083\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 民办\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 由\n",
      "for word 学校, the similar word is 民办高校\n",
      "epoch 7, step 41000, loss 0.181\n",
      "epoch 7, step 42000, loss 0.252\n",
      "epoch 7, step 43000, loss 0.371\n",
      "epoch 7, step 44000, loss 0.431\n",
      "epoch 7, step 45000, loss 0.169\n",
      "epoch 7, step 46000, loss 0.131\n",
      "epoch 7, step 47000, loss 0.281\n",
      "epoch 7, step 48000, loss 0.183\n",
      "epoch 7, step 49000, loss 0.250\n",
      "epoch 7, step 50000, loss 0.236\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 科目\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 民办\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 由\n",
      "for word 学校, the similar word is 基本上\n",
      "epoch 7, step 51000, loss 0.140\n",
      "epoch 7 model is saved.\n",
      "epoch 8, step 1000, loss 0.210\n",
      "epoch 8, step 2000, loss 0.109\n",
      "epoch 8, step 3000, loss 0.197\n",
      "epoch 8, step 4000, loss 0.203\n",
      "epoch 8, step 5000, loss 0.159\n",
      "epoch 8, step 6000, loss 0.221\n",
      "epoch 8, step 7000, loss 0.302\n",
      "epoch 8, step 8000, loss 0.149\n",
      "epoch 8, step 9000, loss 0.156\n",
      "epoch 8, step 10000, loss 0.139\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 次数\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 民办\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 任何\n",
      "for word 学校, the similar word is 仅\n",
      "epoch 8, step 11000, loss 0.280\n",
      "epoch 8, step 12000, loss 0.182\n",
      "epoch 8, step 13000, loss 0.301\n",
      "epoch 8, step 14000, loss 0.124\n",
      "epoch 8, step 15000, loss 0.081\n",
      "epoch 8, step 16000, loss 0.284\n",
      "epoch 8, step 17000, loss 0.248\n",
      "epoch 8, step 18000, loss 0.390\n",
      "epoch 8, step 19000, loss 0.131\n",
      "epoch 8, step 20000, loss 0.436\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 每年\n",
      "for word 考试, the similar word is 次数\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 军校\n",
      "for word 本科, the similar word is 念\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 任何\n",
      "for word 学校, the similar word is 仅\n",
      "epoch 8, step 21000, loss 0.200\n",
      "epoch 8, step 22000, loss 0.383\n",
      "epoch 8, step 23000, loss 0.384\n",
      "epoch 8, step 24000, loss 0.284\n",
      "epoch 8, step 25000, loss 0.474\n",
      "epoch 8, step 26000, loss 0.192\n",
      "epoch 8, step 27000, loss 0.217\n",
      "epoch 8, step 28000, loss 0.327\n",
      "epoch 8, step 29000, loss 0.287\n",
      "epoch 8, step 30000, loss 0.195\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 次数\n",
      "for word 考试, the similar word is 每年\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 技校\n",
      "for word 本科, the similar word is 江汉\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 任何\n",
      "for word 学校, the similar word is 班级\n",
      "epoch 8, step 31000, loss 0.142\n",
      "epoch 8, step 32000, loss 0.224\n",
      "epoch 8, step 33000, loss 0.174\n",
      "epoch 8, step 34000, loss 0.439\n",
      "epoch 8, step 35000, loss 0.128\n",
      "epoch 8, step 36000, loss 0.283\n",
      "epoch 8, step 37000, loss 0.277\n",
      "epoch 8, step 38000, loss 0.118\n",
      "epoch 8, step 39000, loss 0.104\n",
      "epoch 8, step 40000, loss 0.117\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 次数\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 军校\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 任何\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 民办高校\n",
      "epoch 8, step 41000, loss 0.333\n",
      "epoch 8, step 42000, loss 0.137\n",
      "epoch 8, step 43000, loss 0.128\n",
      "epoch 8, step 44000, loss 0.239\n",
      "epoch 8, step 45000, loss 0.283\n",
      "epoch 8, step 46000, loss 0.192\n",
      "epoch 8, step 47000, loss 0.363\n",
      "epoch 8, step 48000, loss 0.217\n",
      "epoch 8, step 49000, loss 0.243\n",
      "epoch 8, step 50000, loss 0.244\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 每年\n",
      "for word 考试, the similar word is 科目\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 考个\n",
      "for word 本科, the similar word is 民办\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 任何\n",
      "for word 学校, the similar word is 开设\n",
      "epoch 8, step 51000, loss 0.156\n",
      "epoch 8 model is saved.\n",
      "epoch 9, step 1000, loss 0.207\n",
      "epoch 9, step 2000, loss 0.387\n",
      "epoch 9, step 3000, loss 0.235\n",
      "epoch 9, step 4000, loss 0.141\n",
      "epoch 9, step 5000, loss 0.530\n",
      "epoch 9, step 6000, loss 0.150\n",
      "epoch 9, step 7000, loss 0.145\n",
      "epoch 9, step 8000, loss 0.262\n",
      "epoch 9, step 9000, loss 0.281\n",
      "epoch 9, step 10000, loss 0.204\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 考前\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 军校\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 民办\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 任何\n",
      "for word 学校, the similar word is 基本上\n",
      "epoch 9, step 11000, loss 0.272\n",
      "epoch 9, step 12000, loss 0.163\n",
      "epoch 9, step 13000, loss 0.110\n",
      "epoch 9, step 14000, loss 0.213\n",
      "epoch 9, step 15000, loss 0.243\n",
      "epoch 9, step 16000, loss 0.110\n",
      "epoch 9, step 17000, loss 0.233\n",
      "epoch 9, step 18000, loss 0.435\n",
      "epoch 9, step 19000, loss 0.098\n",
      "epoch 9, step 20000, loss 0.213\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 考前\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 本科, the similar word is 军校\n",
      "for word 本科, the similar word is 民办\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 民办高校\n",
      "for word 学校, the similar word is 班级\n",
      "epoch 9, step 21000, loss 0.309\n",
      "epoch 9, step 22000, loss 0.124\n",
      "epoch 9, step 23000, loss 0.578\n",
      "epoch 9, step 24000, loss 0.184\n",
      "epoch 9, step 25000, loss 0.230\n",
      "epoch 9, step 26000, loss 0.210\n",
      "epoch 9, step 27000, loss 0.545\n",
      "epoch 9, step 28000, loss 0.386\n",
      "epoch 9, step 29000, loss 0.139\n",
      "epoch 9, step 30000, loss 0.187\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 考试, the similar word is 考前\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 军校\n",
      "for word 本科, the similar word is 民办\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 班级\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 民办高校\n",
      "epoch 9, step 31000, loss 0.137\n",
      "epoch 9, step 32000, loss 0.103\n",
      "epoch 9, step 33000, loss 0.315\n",
      "epoch 9, step 34000, loss 0.274\n",
      "epoch 9, step 35000, loss 0.147\n",
      "epoch 9, step 36000, loss 0.491\n",
      "epoch 9, step 37000, loss 0.383\n",
      "epoch 9, step 38000, loss 0.231\n",
      "epoch 9, step 39000, loss 0.229\n",
      "epoch 9, step 40000, loss 0.299\n",
      "for word 考试, the similar word is 考试\n",
      "for word 考试, the similar word is 两次\n",
      "for word 考试, the similar word is 统考\n",
      "for word 考试, the similar word is 次数\n",
      "for word 考试, the similar word is 入学考试\n",
      "for word 本科, the similar word is 本科\n",
      "for word 本科, the similar word is 结业证\n",
      "for word 本科, the similar word is 军校\n",
      "for word 本科, the similar word is 念\n",
      "for word 本科, the similar word is 专升本\n",
      "for word 学校, the similar word is 学校\n",
      "for word 学校, the similar word is 唯一\n",
      "for word 学校, the similar word is 校\n",
      "for word 学校, the similar word is 任何\n",
      "for word 学校, the similar word is 班级\n",
      "epoch 9, step 41000, loss 0.316\n",
      "epoch 9, step 42000, loss 0.120\n",
      "epoch 9, step 43000, loss 0.196\n",
      "epoch 9, step 44000, loss 0.148\n",
      "epoch 9, step 45000, loss 0.234\n",
      "epoch 9, step 46000, loss 0.161\n",
      "epoch 9, step 47000, loss 0.189\n"
     ]
    },
    {
     "ename": "KeyboardInterrupt",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[11], line 19\u001b[0m\n\u001b[1;32m     16\u001b[0m step \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m0\u001b[39m\n\u001b[1;32m     17\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m (center_words, target_words, label) \u001b[38;5;129;01min\u001b[39;00m dataloader:\n\u001b[1;32m     18\u001b[0m     \u001b[38;5;66;03m#进行一次前向计算，并得到计算结果\u001b[39;00m\n\u001b[0;32m---> 19\u001b[0m     pred, loss \u001b[38;5;241m=\u001b[39m \u001b[43mskip_gram_model\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcenter_words\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtarget_words\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mlabel\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mastype\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mfloat32\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     21\u001b[0m     \u001b[38;5;66;03m#通过backward函数，让程序自动完成反向计算\u001b[39;00m\n\u001b[1;32m     22\u001b[0m     loss\u001b[38;5;241m.\u001b[39mbackward()\n",
      "File \u001b[0;32m~/miniconda3/envs/paddle/lib/python3.9/site-packages/paddle/fluid/dygraph/layers.py:1012\u001b[0m, in \u001b[0;36mLayer.__call__\u001b[0;34m(self, *inputs, **kwargs)\u001b[0m\n\u001b[1;32m   1003\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[1;32m   1004\u001b[0m     (\u001b[38;5;129;01mnot\u001b[39;00m in_declarative_mode())\n\u001b[1;32m   1005\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (\u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks)\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m   1009\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (\u001b[38;5;129;01mnot\u001b[39;00m in_profiler_mode())\n\u001b[1;32m   1010\u001b[0m ):\n\u001b[1;32m   1011\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_build_once(\u001b[38;5;241m*\u001b[39minputs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[0;32m-> 1012\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mforward\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43minputs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1013\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m   1014\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_dygraph_call_func(\u001b[38;5;241m*\u001b[39minputs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n",
      "Cell \u001b[0;32mIn[9], line 30\u001b[0m, in \u001b[0;36mSkipGram.forward\u001b[0;34m(self, center_words, target_words, label)\u001b[0m\n\u001b[1;32m     26\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, center_words, target_words, label):\n\u001b[1;32m     27\u001b[0m     \u001b[38;5;66;03m#首先，通过embedding_para（self.embedding）参数，将mini-batch中的词转换为词向量\u001b[39;00m\n\u001b[1;32m     28\u001b[0m     \u001b[38;5;66;03m#这里center_words和eval_words_emb查询的是一个相同的参数\u001b[39;00m\n\u001b[1;32m     29\u001b[0m     \u001b[38;5;66;03m#而target_words_emb查询的是另一个参数\u001b[39;00m\n\u001b[0;32m---> 30\u001b[0m     center_words_emb \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43membedding\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcenter_words\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     31\u001b[0m     target_words_emb \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39membedding_out(target_words)\n\u001b[1;32m     33\u001b[0m     \u001b[38;5;66;03m#我们通过点乘的方式计算中心词到目标词的输出概率，并通过sigmoid函数估计这个词是正样本还是负样本的概率。\u001b[39;00m\n",
      "File \u001b[0;32m~/miniconda3/envs/paddle/lib/python3.9/site-packages/paddle/fluid/dygraph/layers.py:1012\u001b[0m, in \u001b[0;36mLayer.__call__\u001b[0;34m(self, *inputs, **kwargs)\u001b[0m\n\u001b[1;32m   1003\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[1;32m   1004\u001b[0m     (\u001b[38;5;129;01mnot\u001b[39;00m in_declarative_mode())\n\u001b[1;32m   1005\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (\u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks)\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m   1009\u001b[0m     \u001b[38;5;129;01mand\u001b[39;00m (\u001b[38;5;129;01mnot\u001b[39;00m in_profiler_mode())\n\u001b[1;32m   1010\u001b[0m ):\n\u001b[1;32m   1011\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_build_once(\u001b[38;5;241m*\u001b[39minputs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[0;32m-> 1012\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mforward\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43minputs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   1013\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m   1014\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_dygraph_call_func(\u001b[38;5;241m*\u001b[39minputs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n",
      "File \u001b[0;32m~/miniconda3/envs/paddle/lib/python3.9/site-packages/paddle/nn/layer/common.py:1517\u001b[0m, in \u001b[0;36mEmbedding.forward\u001b[0;34m(self, x)\u001b[0m\n\u001b[1;32m   1516\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, x):\n\u001b[0;32m-> 1517\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43membedding\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   1518\u001b[0m \u001b[43m        \u001b[49m\u001b[43mx\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1519\u001b[0m \u001b[43m        \u001b[49m\u001b[43mweight\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1520\u001b[0m \u001b[43m        \u001b[49m\u001b[43mpadding_idx\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_padding_idx\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1521\u001b[0m \u001b[43m        \u001b[49m\u001b[43msparse\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_sparse\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1522\u001b[0m \u001b[43m        \u001b[49m\u001b[43mname\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_name\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1523\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/miniconda3/envs/paddle/lib/python3.9/site-packages/paddle/nn/functional/input.py:203\u001b[0m, in \u001b[0;36membedding\u001b[0;34m(x, weight, padding_idx, sparse, name)\u001b[0m\n\u001b[1;32m    199\u001b[0m     \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mpadding_idx must be within [-\u001b[39m\u001b[38;5;132;01m{}\u001b[39;00m\u001b[38;5;124m, \u001b[39m\u001b[38;5;132;01m{}\u001b[39;00m\u001b[38;5;124m)\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;241m.\u001b[39mformat(\n\u001b[1;32m    200\u001b[0m         weight\u001b[38;5;241m.\u001b[39mshape[\u001b[38;5;241m0\u001b[39m], weight\u001b[38;5;241m.\u001b[39mshape[\u001b[38;5;241m0\u001b[39m]))\n\u001b[1;32m    202\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m in_dygraph_mode():\n\u001b[0;32m--> 203\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43m_C_ops\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43membedding\u001b[49m\u001b[43m(\u001b[49m\u001b[43mx\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpadding_idx\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43msparse\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    204\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m _in_legacy_dygraph():\n\u001b[1;32m    205\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m _legacy_C_ops\u001b[38;5;241m.\u001b[39mlookup_table_v2(weight, x, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mis_sparse\u001b[39m\u001b[38;5;124m'\u001b[39m, sparse,\n\u001b[1;32m    206\u001b[0m                                          \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mis_distributed\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m    207\u001b[0m                                          \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mremote_prefetch\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;28;01mFalse\u001b[39;00m,\n\u001b[1;32m    208\u001b[0m                                          \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mpadding_idx\u001b[39m\u001b[38;5;124m'\u001b[39m, padding_idx)\n",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
     ]
    }
   ],
   "source": [
    "import paddle\n",
    "\n",
    "#开始训练，定义一些训练过程中需要使用的超参数\n",
     "epoch_num = 10\n",
    "embedding_size = 50\n",
    "step = 0\n",
    "learning_rate = 0.0003\n",
    "\n",
    "skip_gram_model = SkipGram(vocab_size, embedding_size)\n",
    "\n",
    "#构造训练这个网络的优化器\n",
    "optimizer = paddle.optimizer.Adam(learning_rate=learning_rate, parameters = skip_gram_model.parameters())\n",
    "\n",
     "#以mini-batch为单位，遍历dataloader中的训练数据，并训练网络\n",
    "for epoch in range(epoch_num):\n",
    "    step = 0\n",
    "    for (center_words, target_words, label) in dataloader:\n",
    "        #进行一次前向计算，并得到计算结果\n",
    "        pred, loss = skip_gram_model(center_words, target_words, label.astype('float32'))\n",
    "\n",
    "        #通过backward函数，让程序自动完成反向计算\n",
    "        loss.backward()\n",
    "        #对参数的优化更新\n",
    "        optimizer.step()\n",
    "        optimizer.clear_grad()\n",
    "        \n",
     "        #每经过1000个mini-batch，打印一次当前的loss，看看loss是否在稳定下降\n",
    "        if (step + 1) % 1000 == 0:\n",
    "            print(\"epoch %d, step %d, loss %.3f\" % (epoch+1, step+1, loss.numpy()[0]))\n",
    "\n",
     "        # 每经过10000个mini-batch，打印一次模型对“考试”、“本科”、“学校”这3个词计算的相似词\n",
     "        # 这里我们使用词和词之间的向量点积作为衡量相似度的方法\n",
     "        # 每个词只打印与它最相似的5个词\n",
    "        \n",
    "        if (step + 1) % 10000 == 0:\n",
    "            get_similar_tokens('考试', 5, skip_gram_model.embedding.weight)\n",
    "            get_similar_tokens('本科', 5, skip_gram_model.embedding.weight)\n",
    "            get_similar_tokens('学校', 5, skip_gram_model.embedding.weight)\n",
    "        step += 1\n",
    "            \n",
    "    paddle.save(skip_gram_model.state_dict(), f'./checkpoint/{epoch+1}.pdparams')\n",
    "    print(f'epoch {epoch+1} model is saved.')"
   ]
  },
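  {
   "cell_type": "markdown",
   "id": "3f6b1c2a-7d4e-4b8a-9c1d-5e2a8f7b6c01",
   "metadata": {},
   "source": [
    "训练中断或结束后，可以从 `./checkpoint/` 目录下保存的参数文件恢复模型，再复用 `get_similar_tokens` 验证恢复出的词向量。下面是一个最小示意代码（假设前文的 `SkipGram`、`vocab_size`、`embedding_size`、`get_similar_tokens` 均已定义，且对应的 `.pdparams` 文件已存在）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f6b1c2a-7d4e-4b8a-9c1d-5e2a8f7b6c02",
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "\n",
    "# 示意代码：恢复已保存的模型参数\n",
    "# 假设 SkipGram、vocab_size、embedding_size、get_similar_tokens 与前文一致，\n",
    "# 且 ./checkpoint/1.pdparams 已由前面的训练循环保存\n",
    "skip_gram_model = SkipGram(vocab_size, embedding_size)\n",
    "state_dict = paddle.load('./checkpoint/1.pdparams')\n",
    "skip_gram_model.set_state_dict(state_dict)\n",
    "\n",
    "# 复用训练时的相似词查询函数，检查恢复出的词向量是否合理\n",
    "get_similar_tokens('考试', 5, skip_gram_model.embedding.weight)"
   ]
  },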
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fb09d994-261c-4cb6-a64c-c711842459d6",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "paddle",
   "language": "python",
   "name": "paddle"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
