{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Word Embeddings\n",
    "\n",
    "## 1. One-Hot\n",
    "\n",
    "## 2. Word2Vec\n",
    "\n",
    "### 2.1 CBOW\n",
    "\n",
    "The continuous bag-of-words (CBOW) model predicts a target word from its surrounding context and is one of the two Word2Vec architectures. Its basic workflow (with vocabulary size N and embedding dimension V) is:\n",
    "- Choose a context window size, say 2;\n",
    "- Represent the 4 context words as One-Hot vectors and pass them through an embedding layer (N*V), producing 4 V-dimensional vectors;\n",
    "- Average the 4 output vectors: V = (V1 + V2 + V3 + V4) / 4;\n",
    "- Feed V into a linear layer (V*N) to obtain an N-dimensional vector, the same size as the vocabulary;\n",
    "- Apply a softmax to this vector and compute the loss against the One-Hot encoding of the target word as the label;\n",
    "- Iterate to train the embedding layer and the output linear layer;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# CBOW model implemented with PyTorch\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class CBOW(nn.Module):\n",
    "    def __init__(self, vocab_size:int, embedding_dim:int) -> None:\n",
    "        super(CBOW, self).__init__()\n",
    "        self.embedding  = nn.Embedding(vocab_size, embedding_dim)   # embedding matrix\n",
    "        self.linear     = nn.Linear(embedding_dim, vocab_size)      # output linear layer\n",
    "        \n",
    "    def forward(self, idxs):\n",
    "        embeds = self.embedding(idxs)                 # (window_size, embedding_dim)\n",
    "        embeds = embeds.mean(dim=0).view(1, -1)       # average the context vectors, as described above\n",
    "        out    = self.linear(embeds)\n",
    "        return out                                    # softmax can be folded into the loss function, so it is omitted here"
   ]
  },
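  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The steps above can be sketched as a small training loop. This is only a minimal illustration: the toy sentence, window size, and hyperparameters are made-up assumptions, and the CBOW class is repeated here so the cell runs on its own."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical CBOW training sketch on a toy corpus (illustrative only)\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class CBOW(nn.Module):                                   # same model as the previous cell\n",
    "    def __init__(self, vocab_size:int, embedding_dim:int) -> None:\n",
    "        super(CBOW, self).__init__()\n",
    "        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
    "        self.linear    = nn.Linear(embedding_dim, vocab_size)\n",
    "\n",
    "    def forward(self, idxs):\n",
    "        embeds = self.embedding(idxs).mean(dim=0).view(1, -1)\n",
    "        return self.linear(embeds)\n",
    "\n",
    "raw_text   = 'we are about to study the idea of deep learning'.split()\n",
    "word_to_ix = {w: i for i, w in enumerate(sorted(set(raw_text)))}\n",
    "\n",
    "# (context, target) pairs with a window of 2\n",
    "data = [([raw_text[i-2], raw_text[i-1], raw_text[i+1], raw_text[i+2]], raw_text[i])\n",
    "        for i in range(2, len(raw_text) - 2)]\n",
    "\n",
    "model     = CBOW(len(word_to_ix), embedding_dim=10)\n",
    "loss_fn   = nn.CrossEntropyLoss()                        # applies log-softmax internally\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.05)\n",
    "\n",
    "for epoch in range(50):\n",
    "    for context, target in data:\n",
    "        idxs = torch.tensor([word_to_ix[w] for w in context])\n",
    "        loss = loss_fn(model(idxs), torch.tensor([word_to_ix[target]]))\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()"
   ]
  },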
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 SG\n",
    "\n",
    "SG, short for Skip-Gram, is the skip-word model. Its idea is the reverse of CBOW: it uses the target word to predict the context words. For both CBOW and SG, the goal is the same: to learn the word-vector table (the embeddings) through training.\n",
    "\n",
    "Consider the sentence “We are about to study the idea of deep learning”. Can the word “study” predict its context words? The basic steps of the SG model are:\n",
    "- Choose a context window size, say 2; to predict the context of “study”, the 4 words to predict are “about”, “to”, “the”, and “idea”;\n",
    "- Create two embedding layers: one to embed the target word, the other to hold the vectors of all words in the vocabulary;\n",
    "- On each forward pass, use the first embedding (call it in_embedding) to compute the target word's vector;\n",
    "- Then take the other embedding's weight as the vectors of all vocabulary words;\n",
    "- Compute the dot-product similarity between the target word's vector and all word vectors via matrix multiplication;\n",
    "- Convert the similarities to probabilities with a softmax layer;\n",
    "- Label the true context words of the target as 1 and all other words as 0, compute the loss, and update both embeddings;\n",
    "- Finally, take in_embedding as the output of the skip-gram model;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Skip-Gram model implemented with PyTorch\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from   torch.nn import functional\n",
    "\n",
    "class SkipGram(nn.Module):\n",
    "    def __init__(self, vocab_size:int, embed_size:int)->None:\n",
    "        super(SkipGram, self).__init__()\n",
    "        self.in_embedding  = nn.Embedding(vocab_size, embed_size)\n",
    "        self.out_embedding = nn.Embedding(vocab_size, embed_size)\n",
    "    \n",
    "    def forward(self, target: torch.Tensor)->torch.Tensor:\n",
    "        in_vec  = self.in_embedding(target)           # embed the target word\n",
    "        out_vec = self.out_embedding.weight           # vectors of all vocabulary words\n",
    "        scores  = torch.matmul(in_vec, out_vec.t())   # dot product of the target vector with all word vectors (similarity)\n",
    "        # scores gives, for each word, how likely it is to be a context word of the input word\n",
    "        return functional.log_softmax(scores, dim=1)"
   ]
  },
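  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Skip-Gram steps can likewise be sketched as a small loop. The toy corpus and hyperparameters below are illustrative assumptions, and the SkipGram class is repeated so the cell runs on its own."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical Skip-Gram training sketch on a toy corpus (illustrative only)\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from   torch.nn import functional\n",
    "\n",
    "class SkipGram(nn.Module):                                # same model as the previous cell\n",
    "    def __init__(self, vocab_size:int, embed_size:int) -> None:\n",
    "        super(SkipGram, self).__init__()\n",
    "        self.in_embedding  = nn.Embedding(vocab_size, embed_size)\n",
    "        self.out_embedding = nn.Embedding(vocab_size, embed_size)\n",
    "\n",
    "    def forward(self, target):\n",
    "        scores = torch.matmul(self.in_embedding(target), self.out_embedding.weight.t())\n",
    "        return functional.log_softmax(scores, dim=1)\n",
    "\n",
    "raw_text   = 'we are about to study the idea of deep learning'.split()\n",
    "word_to_ix = {w: i for i, w in enumerate(sorted(set(raw_text)))}\n",
    "\n",
    "# (target, context) pairs with a window of 2\n",
    "pairs = [(word_to_ix[w], word_to_ix[raw_text[j]])\n",
    "         for i, w in enumerate(raw_text)\n",
    "         for j in range(max(0, i - 2), min(len(raw_text), i + 3)) if j != i]\n",
    "\n",
    "model     = SkipGram(len(word_to_ix), embed_size=10)\n",
    "loss_fn   = nn.NLLLoss()                                  # the model already outputs log-probabilities\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.05)\n",
    "\n",
    "for epoch in range(30):\n",
    "    for target, context in pairs:\n",
    "        log_probs = model(torch.tensor([target]))         # shape (1, vocab_size)\n",
    "        loss      = loss_fn(log_probs, torch.tensor([context]))\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "word_vectors = model.in_embedding.weight.detach()         # the learned word vectors"
   ]
  },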
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. GloVe\n",
    "\n",
    "GloVe builds word vectors from a co-occurrence matrix, which lets it make effective use of global corpus statistics. The GloVe website provides several pretrained weights that differ in the number of training tokens and in embedding dimension, e.g. versions with 25-, 50-, 100-, and 200-dimensional embeddings.\n",
    "\n",
    "\n",
    "## 4. Title Classification with Pretrained Word Vectors\n",
    "\n",
    "Because Tencent_AILab_ChineseEmbedding.txt is very large, first download it from https://ai.tencent.com/ailab/nlp/zh/embedding.html and extract it into this directory before running the code below."
   ]
  },
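  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pretrained vector files such as GloVe's are plain text: each line holds a word followed by its vector components. Below is a minimal parsing sketch; the two data lines are made-up stand-ins, not real pretrained vectors, and real files may also carry a header line."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sketch of parsing a word-vector text file (made-up example lines)\n",
    "import numpy as np\n",
    "\n",
    "glove_lines = [\n",
    "    'the 0.12 -0.05 0.33 0.08',\n",
    "    'study -0.21 0.44 0.10 -0.30',\n",
    "]\n",
    "\n",
    "embeddings = {}\n",
    "for line in glove_lines:                 # a real file is read line by line the same way\n",
    "    parts = line.split()\n",
    "    embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)"
   ]
  },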
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "#! 1. Data preparation\n",
    "import pkuseg\n",
    "\n",
    "# Two lists to hold the posts from the two forum boards\n",
    "academy_titles = []\n",
    "job_titles     = []\n",
    "seg            = pkuseg.pkuseg()\n",
    "with open('../Chapter03-05/academy_titles.txt', encoding='utf8') as f:\n",
    "    for l in f:  # read the file line by line\n",
    "        academy_titles.append(list(seg.cut(l.strip())))  # strip removes surrounding whitespace\n",
    "        \n",
    "with open('../Chapter03-05/job_titles.txt', encoding='utf8') as f:\n",
    "    for l in f:  # read the file line by line\n",
    "        job_titles.append(list(seg.cut(l.strip())))       # strip removes surrounding whitespace"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['生物',\n",
       " '光子学',\n",
       " '研究院',\n",
       " '|',\n",
       " '22-23',\n",
       " '考研',\n",
       " '华师',\n",
       " '进',\n",
       " '复试',\n",
       " '人数',\n",
       " '、',\n",
       " '录取',\n",
       " '人数',\n",
       " '及',\n",
       " '...']"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "academy_titles[2]"
   ]
  },
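  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The classification step can be sketched under stated assumptions: the Tencent embedding file is not loaded here, so a randomly initialized (frozen) embedding table stands in for the pretrained vectors, and two tiny made-up title lists stand in for the segmented data above. The design choice is to average a title's word vectors into one feature and train only the linear classifier on top."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical title-classification sketch: average the word vectors of a title,\n",
    "# then classify with a linear layer. Random vectors stand in for the pretrained file.\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "titles = [(['考研', '复试', '人数'], 0),               # label 0: academy board\n",
    "          (['招聘', '工程师'], 1)]                     # label 1: job board\n",
    "vocab = {}\n",
    "for words, _ in titles:\n",
    "    for w in words:\n",
    "        vocab.setdefault(w, len(vocab))\n",
    "\n",
    "embedding  = nn.Embedding(len(vocab), 8)              # stand-in for pretrained vectors\n",
    "classifier = nn.Linear(8, 2)\n",
    "loss_fn    = nn.CrossEntropyLoss()\n",
    "optimizer  = torch.optim.SGD(classifier.parameters(), lr=0.1)\n",
    "\n",
    "for epoch in range(20):\n",
    "    for words, label in titles:\n",
    "        idxs    = torch.tensor([vocab[w] for w in words])\n",
    "        feature = embedding(idxs).detach().mean(dim=0, keepdim=True)  # average; embeddings stay frozen\n",
    "        loss    = loss_fn(classifier(feature), torch.tensor([label]))\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()"
   ]
  },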
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because the full vocabulary is very large, this notebook does not demonstrate the complete pipeline; for the full version see: https://gitee.com/nlp_practice/nlp_practice_source_code/blob/master/Chapter08/8.4%20%E5%AE%9E%E8%B7%B5.ipynb"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
