{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# word2vec in gensim: mathematical principles and a source-code walkthrough\n",
    "## 1 Statistical language models\n",
    "Let $W=w_1^T:=(w_1,...,w_T)$ be a sentence of $T$ words in order.<br>\n",
    "$$p(W)=p(w_1^T)=p(w_1,...,w_T)$$\n",
    "By the chain rule (Bayes' formula), this probability factors into a product of conditional terms:\n",
    "$$p(w_1^T)=p(w_1)p(w_2|w_1)p(w_3|w_1^2)...p(w_T|w_1^{T-1})$$\n",
    "Adding the n-gram assumption with $n=2$, each factor depends only on the immediately preceding word and can be estimated from counts:\n",
    "$$p(w_k|w_1^{k-1})\\approx p(w_k|w_{k-1})\\approx \\frac{count(w_{k-1},w_k)}{count(w_{k-1})} $$\n",
    "Abstracting the n-gram model one step further, the words relevant to the current word are collected into a context:\n",
    "$$Context(w_i)=w_{i-n+1}^{i-1}$$\n",
    "The sentence probability then becomes a product of conditional word probabilities of the form\n",
    "$$p(w|Context(w))$$\n",
    "Following the usual optimization recipe, each factor can be abstracted as a parametric function:\n",
    "$$p(w|Context(w))=F(w,Context(w),\\theta)$$\n",
    "The next step is to construct a sensible $F$; this is the foundation of word2vec."
   ]
  },
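  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a concrete illustration of the bigram (n=2) count estimate above, the conditional probabilities can be read straight off raw counts. A minimal sketch on a toy corpus (the three sentences are invented purely for illustration):\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "corpus = ['the cat sat', 'the cat ran', 'the dog sat']\n",
    "tokens = [s.split() for s in corpus]\n",
    "\n",
    "unigrams = Counter(w for s in tokens for w in s)\n",
    "bigrams = Counter((s[i], s[i + 1]) for s in tokens for i in range(len(s) - 1))\n",
    "\n",
    "# p(w_k | w_{k-1}) = count(w_{k-1}, w_k) / count(w_{k-1})\n",
    "p = bigrams[('the', 'cat')] / unigrams['the']\n",
    "print(p)  # 2/3: two of the three occurrences of 'the' are followed by 'cat'\n",
    "```"
   ]
  },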
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2 The Hierarchical Softmax model\n",
    "In the HS model with the CBOW architecture, a Huffman tree is built from the word frequencies, giving the following node diagram: \n",
    "![node diagram](http://bucket-lz.oss-cn-beijing.aliyuncs.com/%E8%8A%82%E7%82%B9%E5%9B%BE.png?x-oss-process=image/resize,w_400)\n",
    "Here the input consists of the $c$ words on each side of $w$; $x^w$ is obtained by summing their vectors, each of length $m$. $p^w$ denotes the path from the root to $w$, and $l^w$ its length (number of nodes). $d_{j}^w,\\ 2\\leq j \\leq l^w$ is the code (0 or 1) of the $j$-th node on the path (the root carries no code), and $\\theta_j^w,\\ 1\\leq j \\leq l^w-1$ is the vector attached to the $j$-th internal node; these internal-node vectors are only auxiliary parameters."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1 Setup before training\n",
    "word.code — initialized when the Huffman tree is built<br>\n",
    "model.syn1 — initialization of the internal-node (parameter) vectors<br>\n",
    "model.wv.syn0 — initialization of the word vectors<br>\n",
    "\n",
    "model.wv.vocab — the vocabulary, here 6457 words:<br>\n",
    "HS model:\n",
    "        \n",
    "    {'一個數學家的自白' (1402630759312), Vocab: Vocab(code:array([0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1]), count:1, index:841, point:array([6455, 6453, 6451, 6444, 6437, 6409, 6360, 6262, 6176, 5910, 5399, 3026,  992]), sample_int:4294967296)}\n",
    "        \n",
    "NEG model:\n",
    "\n",
    "    {'一個數學家的自白' (1976616932336), Vocab: Vocab(count:1, index:841, sample_int:4294967296)}\n",
    "\n",
    "\n",
    "word.index — index of the current word\n",
    "\n",
    "word.point — indices of the internal nodes on the word's Huffman path<br>\n",
    "model.syn1neg — output weights used by negative sampling<br>\n",
    "model.neg_labels — the fixed label vector [1, 0, 0, ...] used by negative sampling<br>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 Mapping between the gensim code and the derivation<br>\n",
    "<table>\n",
    "<tr><th>Code variable</th><th>Model symbol</th><th>Shape</th></tr>\n",
    "<tr><td>l1</td><td> $x_w^T$</td><td>(200,)</td></tr>\n",
    "<tr><td>l2a</td><td>$\\theta_{1\\cdots(l^{w}-1)}^w$</td><td>(6, 200)</td></tr>\n",
    "<tr><td>fa</td><td>$\\sigma(x_w^T\\theta^w_{1\\cdots (l^{w}-1)})=\\frac{1}{1+e^{-x_w^T\\theta^w_{1\\cdots (l^{w}-1)}}}$</td><td>(6,)</td></tr>\n",
    "<tr><td>ga</td><td>$\\eta\\,(1-\\mathrm{word.code}-\\mathrm{fa})$</td><td>(6,)</td></tr>\n",
    "<tr><td>neu1e</td><td>$e$</td><td>(200,)</td></tr>\n",
    "<tr><td>model.syn1[word.point]</td><td>$\\theta_{1\\cdots(l^w-1)}^w$</td><td>(6, 200)</td></tr>\n",
    "<tr><td>model.wv.syn0[input_word_indices]</td><td>$v(Context(w))$</td><td>(10, 200)</td></tr>\n",
    "</table>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Excerpts from gensim's word2vec.py showing how the weight matrices are created.\n",
    "# update_weights() grows the matrices when new vocabulary is added;\n",
    "# newsyn0 and gained_vocab are computed earlier in the real method.\n",
    "def update_weights(self):\n",
    "    self.wv.syn0 = vstack([self.wv.syn0, newsyn0])\n",
    "    if self.hs:\n",
    "        self.syn1 = vstack([self.syn1, zeros((gained_vocab, self.layer1_size), dtype=REAL)])\n",
    "    if self.negative:\n",
    "        self.syn1neg = vstack([self.syn1neg, zeros((gained_vocab, self.layer1_size), dtype=REAL)])\n",
    "\n",
    "def reset_weights(self):\n",
    "    self.wv.syn0 = empty((len(self.wv.vocab), self.vector_size), dtype=REAL)\n",
    "    # randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once\n",
    "    for i in xrange(len(self.wv.vocab)):\n",
    "        # construct deterministic seed from word AND seed argument\n",
    "        self.wv.syn0[i] = self.seeded_vector(self.wv.index2word[i] + str(self.seed))\n",
    "    # internal-node vectors (HS) and negative-sampling output weights start at zero\n",
    "    if self.hs:\n",
    "        self.syn1 = zeros((len(self.wv.vocab), self.layer1_size), dtype=REAL)\n",
    "    if self.negative:\n",
    "        self.syn1neg = zeros((len(self.wv.vocab), self.layer1_size), dtype=REAL)\n",
    "\n",
    "# the word vectors themselves live in a KeyedVectors instance: self.wv = KeyedVectors()\n",
    "class KeyedVectors(utils.SaveLoad):\n",
    "    # Class to contain vectors and vocab for the Word2Vec training class and other\n",
    "    # w2v methods not directly involved in training, such as most_similar().\n",
    "    def __init__(self):\n",
    "        self.syn0 = []"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3 Deriving the objective function\n",
    "\n",
    "The most obvious information to exploit when vectorizing words is word frequency and each word's context.\n",
    "First a binary tree is built from the word frequencies; then, taking a binary-classification view, each internal node assigns a class to its left and right children; finally, the input for the current word is formed from its context word vectors. The Huffman structure exploits the partial order of relative word frequencies, while forming the input from the context each time exploits the contextual information.\n",
    "\n",
    "$$Label(p_i^w)=1-d_i^w,\\quad i=2,3,...,l^w$$\n",
    "<table>\n",
    "<tr><td>1</td><td style=\"width:100px\">$(-1)^1$</td><td>left child</td><td> negative class </td><td style=\"width:100px\">  $1-\\sigma(x_w^T\\theta)$ </td><td> larger weight</td></tr>\n",
    "<tr><td>0</td><td style=\"width:100px\">$(-1)^0$</td><td>right child</td><td> positive class </td><td style=\"width:100px\">  $\\sigma(x_w^T\\theta)$</td><td> smaller weight</td></tr>\n",
    "</table>\n",
    "\n",
    "The log-likelihood over the corpus $C$ is\n",
    "$$L=\\sum_{w\\in C}\\log p(w|Context(w))$$\n",
    "With the Huffman tree in place, the Hierarchical Softmax conditional probability — the model function $F(w,Context(w),\\theta)$ from above — can be defined as:\n",
    "$$p(w|Context(w))=\\prod_{j=2}^{l^w}p(d_j^w|x_w,\\theta_{j-1}^w)$$\n",
    "where\n",
    "\\begin{equation}\n",
    "p(d_j^w|x_w,\\theta_{j-1}^w)=\\left\\{\n",
    "\\begin{aligned}\n",
    "&\\sigma(x_w^T\\theta_{j-1}^w),&d_j^w=0\\\\\n",
    "&1-\\sigma(x_w^T\\theta_{j-1}^w)=\\sigma(-x_w^T\\theta_{j-1}^w),&d_j^w=1\n",
    "\\end{aligned}\n",
    "\\right.\n",
    "\\end{equation}\n",
    "or, written as a single expression:\n",
    "$$p(d_j^w|x_w,\\theta_{j-1}^w)=[\\sigma(x_w^T\\theta_{j-1}^w)]^{1-d_j^w}\\cdot [1-\\sigma(x_w^T\\theta_{j-1}^w)]^{d_j^w}$$\n",
    "Taking logarithms gives the objective\n",
    "$$L=\\sum_{w\\in C}\\sum_{j=2}^{l^w}L(w,j)$$\n",
    "$$L(w,j)=(1-d_j^w)\\log[\\sigma(x_w^T\\theta_{j-1}^w)]+d_j^w \\log[1-\\sigma(x_w^T\\theta_{j-1}^w)]$$\n",
    "so that\n",
    "\\begin{equation}\n",
    " L(w,j)=\\left\\{\n",
    "\\begin{aligned}\n",
    " & \\log[\\sigma(x_w^T\\theta_{j-1}^w)] ,&d_j^w=0\\\\\n",
    " & \\log[\\sigma(-x_w^T\\theta_{j-1}^w)],&d_j^w=1\n",
    "\\end{aligned}\n",
    "\\right.\n",
    "\\end{equation}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def score_cbow_pair(model, word, l1):\n",
    "    # Scoring is currently only implemented for the hierarchical softmax scheme,\n",
    "    # so word2vec must have been run with hs=1 and negative=0 for this to work.\n",
    "    l2a = model.syn1[word.point]  # 2d matrix, codelen x layer1_size\n",
    "    sgn = (-1.0)**word.code  # ch function, 0 -> 1, 1 -> -1\n",
    "    lprob = -logaddexp(0, -sgn * dot(l1, l2a.T))\n",
    "    return sum(lprob)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The definition of lprob above rests on the identity\n",
    "$$-logaddexp(0,-x)=-\\log(1+e^{-x})=\\log\\frac{1}{1+e^{-x}}=\\log\\sigma(x)$$\n",
    "Some frequently used properties of the sigmoid function:\n",
    "\\begin{equation}\n",
    "1-\\sigma(x)=\\sigma(-x)\\\\\n",
    "\\sigma^{'}(x)=\\sigma(x)[1-\\sigma(x)]\\\\\n",
    "[\\ln(\\sigma(x))]^{'}=1-\\sigma(x)\\\\\n",
    "[\\ln(1-\\sigma(x))]^{'}=-\\sigma(x)\n",
    "\\end{equation}"
   ]
  },
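  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These identities are easy to confirm numerically. A quick check with numpy (the test point x = 0.7 is arbitrary, and the derivative is verified by a central finite difference):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1.0 / (1.0 + np.exp(-x))\n",
    "\n",
    "x = 0.7  # arbitrary test point\n",
    "# 1 - sigma(x) = sigma(-x)\n",
    "assert abs((1 - sigmoid(x)) - sigmoid(-x)) < 1e-12\n",
    "# numerical derivative matches sigma(x) * (1 - sigma(x))\n",
    "h = 1e-6\n",
    "num_grad = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)\n",
    "assert abs(num_grad - sigmoid(x) * (1 - sigmoid(x))) < 1e-8\n",
    "# -logaddexp(0, -x) equals log(sigma(x))\n",
    "assert abs(-np.logaddexp(0, -x) - np.log(sigmoid(x))) < 1e-12\n",
    "print('all identities hold')\n",
    "```"
   ]
  },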
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.4 Optimization method: stochastic gradient ascent<br>\n",
    "Since the log-likelihood is maximized, each parameter is moved along its gradient. The update formulas are as follows:\n",
    "$$\\frac{\\partial L(w,j)}{\\partial \\theta_{j-1}^{w}}=[1-d_j^w-\\sigma(x^T_w\\theta_{j-1}^w)]x_w$$\n",
    "so $\\theta_{j-1}^w$ is updated by\n",
    "$$\\theta_{j-1}^w:=\\theta_{j-1}^w+\\eta[1-d_j^w-\\sigma(x_w^T\\theta_{j-1}^w)]x_w$$\n",
    "Differentiating with respect to the input vector,\n",
    "$$\\frac{\\partial L(w,j)}{\\partial x_{w}}=[1-d_j^w-\\sigma(x^T_w\\theta_{j-1}^w)]\\theta_{j-1}^w$$\n",
    "Since $x_w$ is the sum over the words in $Context(w)$, the question is how to use $\\frac{\\partial L(w,j)}{\\partial x_{w}}$ to update each $v(\\hat{w})$. word2vec takes the simple route and applies the whole gradient to every context word:\n",
    "$$v(\\hat{w}):=v(\\hat{w})+\\eta\\sum_{j=2}^{l^w}\\frac{\\partial L(w,j)}{\\partial x_w},\\quad\\hat{w}\\in Context(w)$$\n",
    "Note: the contribution cannot be averaged here. This is exactly the chain rule as in backpropagation: when $x_w$ is the sum of the context vectors, the intermediate weight is 1, not 1/window size.\n",
    "Only with a mean-pooling style forward pass (i.e. when $x_w$ is the average) would the averaged contribution be appropriate."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.5 Two training algorithms for the HS model\n",
    "Core steps of the CBOW algorithm (by default gensim uses the mean of the context vectors as the input):<br>\n",
    "<ol text-align:center;>\n",
    "<li>$e=0$ </li>\n",
    "<li>$x_w=\\frac{1}{|Context(w)|}\\sum_{u\\in Context(w)}v(u)$</li> // the plain sum can be used instead of the mean; gensim exposes this via model.cbow_mean, which defaults to 1. It is often said that mean and sum give similar results, which is surprising; one counterargument goes: a plain sum can blow up the network — the computed error is already inaccurate, and copying that inaccurate error several times onto the context word vectors makes things even less accurate.\n",
    "<li>For j=2:$l^w$</li>\n",
    "  3.1 $f_j=\\sigma(x_w^T\\theta_{j-1}^w)$<br>\n",
    "  3.2 $g_j=\\eta(1-d_j^w-f_j)$<br>\n",
    "  3.3 $e=e+g_j\\theta^w_{j-1}$<br>\n",
    "  3.4 $\\theta_{j-1}^w:=\\theta_{j-1}^w+g_jx_w$<br>\n",
    "<li>For $u\\in Context(w)$:</li>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&emsp;&emsp;$v(u):=v(u)+e$\n",
    "</ol>\n",
    "\n",
    "Core steps of the Skip-gram algorithm (the main difference is what gets updated: in gensim's skip-gram each inner pass updates a single context word vector $v(u)$, while CBOW updates all the context words at once):<br>\n",
    "For $u\\in Context(w)$:\n",
    "<ol>\n",
    "<li>$e=0$ </li>\n",
    "<li>For j=2:$l^w$</li>\n",
    "  2.1 $f_j=\\sigma(v(u)^T\\theta_{j-1}^w)$<br>\n",
    "  2.2 $g_j=\\eta(1-d_j^w-f_j)$<br>\n",
    "  2.3 $e=e+g_j\\theta^w_{j-1}$<br>\n",
    "  2.4 $\\theta_{j-1}^w:=\\theta_{j-1}^w+g_jv(u)$<br>\n",
    "<li>$v(u):=v(u)+e$</li>\n",
    "</ol>  "
   ]
  },
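  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The four CBOW steps above can be sketched directly with numpy. Everything below — the dimensions, context vectors, Huffman codes and learning rate — is an invented toy example, not gensim's internals; it only mirrors the vectorized form gensim uses, handling all path nodes at once:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "dim, path_len = 4, 3             # toy sizes; gensim uses layer1_size and l^w - 1\n",
    "ctx = rng.normal(size=(2, dim))  # v(u) for the two context words\n",
    "theta = rng.normal(size=(path_len, dim))  # internal-node vectors on the path\n",
    "code = np.array([1, 0, 1])       # d_j^w for j = 2 .. l^w\n",
    "eta = 0.025\n",
    "\n",
    "x_w = ctx.mean(axis=0)                # step 2 (cbow_mean=1)\n",
    "e = np.zeros(dim)                     # step 1\n",
    "f = 1 / (1 + np.exp(-theta @ x_w))    # 3.1, all j at once\n",
    "g = eta * (1 - code - f)              # 3.2\n",
    "e += g @ theta                        # 3.3\n",
    "theta += np.outer(g, x_w)             # 3.4\n",
    "ctx += e                              # step 4: v(u) := v(u) + e\n",
    "```"
   ]
  },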
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "inp = 'f2_10lines.txt'\n",
    "ls = LineSentence(inp)\n",
    "# Word2Vec defaults: sg=0, hs=0, negative=5, cbow_mean=1\n",
    "# i.e. the NEG model with the CBOW algorithm, averaging the context vectors\n",
    "model = Word2Vec(ls, size=200, window=5, min_count=1)\n",
    "# HS model, CBOW algorithm\n",
    "model = Word2Vec(ls, size=200, window=5, min_count=1, hs=1, negative=0)\n",
    "# HS model, skip-gram algorithm\n",
    "model = Word2Vec(ls, size=200, window=5, min_count=1, hs=1, negative=0, sg=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def train_batch_cbow(model, sentences, alpha, work=None, neu1=None, compute_loss=False):\n",
    "    result = 0\n",
    "    for sentence in sentences:\n",
    "        # subsample: keep in-vocab words that pass their sample_int threshold\n",
    "        word_vocabs = [model.wv.vocab[w] for w in sentence if w in model.wv.vocab and\n",
    "                       model.wv.vocab[w].sample_int > model.random.rand() * 2**32]\n",
    "        for pos, word in enumerate(word_vocabs):\n",
    "            # draw a random shrink for the effective window\n",
    "            reduced_window = model.random.randint(model.window)  # `b` in the original word2vec code\n",
    "            start = max(0, pos - model.window + reduced_window)\n",
    "            window_pos = enumerate(word_vocabs[start:(pos + model.window + 1 - reduced_window)], start)\n",
    "            # exclude the position of the current (center) word\n",
    "            word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]\n",
    "            # l1: per-dimension mean of the window words' vectors when cbow_mean=1 (the default); a plain sum is also possible but, as noted above, may destabilize training\n",
    "            l1 = np_sum(model.wv.syn0[word2_indices], axis=0)  # 1 x vector_size\n",
    "            if word2_indices and model.cbow_mean:\n",
    "                l1 /= len(word2_indices)\n",
    "            train_cbow_pair(model, word, word2_indices, l1, alpha, compute_loss=compute_loss)\n",
    "        result += len(word_vocabs)\n",
    "    return result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# CBOW model\n",
    "def train_cbow_pair(model, word, input_word_indices, l1, alpha, learn_vectors=True, learn_hidden=True, compute_loss=False): \n",
    "    neu1e = zeros(l1.shape)  # error accumulator e (its initialization was missing in the original excerpt)\n",
    "    if model.hs:\n",
    "        # vectors of the internal nodes on the word's path; word.point holds their row indices in syn1\n",
    "        l2a = model.syn1[word.point]  # 2d matrix, codelen x layer1_size\n",
    "        prod_term = dot(l1, l2a.T)\n",
    "        fa = expit(prod_term)  # propagate hidden -> output\n",
    "        # word.code holds the Huffman codes d_j (0 or 1) of the nodes on the path\n",
    "        ga = (1. - word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate\n",
    "        # steps 3.3 and 3.4 must not be swapped in the sequential pseudocode, but here ga is\n",
    "        # computed from the old syn1, so the order of the next two updates does not matter\n",
    "        if learn_hidden:\n",
    "            model.syn1[word.point] += outer(ga, l1)  # learn hidden -> output\n",
    "        neu1e += dot(ga, l2a)  # save error\n",
    "        # update the context word vectors\n",
    "        if learn_vectors:\n",
    "            # learn input -> hidden, here for all words in the window separately\n",
    "            if not model.cbow_mean and input_word_indices:\n",
    "                neu1e /= len(input_word_indices)\n",
    "            for i in input_word_indices:\n",
    "                model.wv.syn0[i] += neu1e * model.syn0_lockf[i]\n",
    "    return neu1e\n",
    "\n",
    "\n",
    "# skip-gram model\n",
    "def train_sg_pair(model, word, context_index, alpha, learn_vectors=True, learn_hidden=True, context_vectors=None, context_locks=None, compute_loss=False):\n",
    "    if context_vectors is None:\n",
    "        context_vectors = model.wv.syn0\n",
    "    if context_locks is None:\n",
    "        context_locks = model.syn0_lockf\n",
    "    predict_word = model.wv.vocab[word]  # the target word w being predicted\n",
    "    l1 = context_vectors[context_index]  # input word (NN input/projection layer)\n",
    "    lock_factor = context_locks[context_index]\n",
    "    neu1e = zeros(l1.shape)\n",
    "    if model.hs:\n",
    "        # work on the entire tree at once, to push as much work into numpy's C routines as possible (performance)\n",
    "        l2a = deepcopy(model.syn1[predict_word.point])  # 2d matrix, codelen x layer1_size\n",
    "        prod_term = dot(l1, l2a.T)\n",
    "        fa = expit(prod_term)  # propagate hidden -> output\n",
    "        ga = (1 - predict_word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate\n",
    "        if learn_hidden:\n",
    "            model.syn1[predict_word.point] += outer(ga, l1)  # learn hidden -> output\n",
    "        neu1e += dot(ga, l2a)  # save error\n",
    "    if learn_vectors:\n",
    "        l1 += neu1e * lock_factor  # learn input -> hidden (mutates model.wv.syn0[word2.index], if that is l1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "    # hierarchical-softmax loss component, accumulated inside train_sg_pair\n",
    "    if compute_loss:\n",
    "        sgn = (-1.0)**predict_word.code  # `ch` function, 0 -> 1, 1 -> -1\n",
    "        lprob = -log(expit(-sgn * prod_term))\n",
    "        model.running_training_loss += sum(lprob)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3 The Negative Sampling model\n",
    "### 3.1 Derivation\n",
    "To speed up training and improve the quality of the word vectors, one can keep the binary-classification idea but simplify the tree: imagine removing the right internal nodes of the Huffman tree, keeping only a left spine of internal nodes with every leaf hanging off to the right, and randomizing the depth within some range so that the leaf depths stay roughly balanced. This line of thought leads to a new method: random negative sampling. In the CBOW model, for example, the context $Context(w)$ is used to predict $w$, so $w$ is a positive sample and every other word is a negative sample. A dedicated negative-sampling routine — essentially weighted sampling by word frequency — produces the negative samples. In the paper the authors suggest choosing 5-20 negative words for small datasets and only 2-5 for large ones.<br>\n",
    "Assume a non-empty set of negative samples $NEG(w)\\neq \\emptyset$ has been drawn for $w$. Note the difference between $NEG(w)$ and $Context(w)$: the former play the role of hidden (output-layer) nodes, the latter are actual word nodes. For every $\\tilde{w} \\in D$, define the indicator of the word $\\tilde{w}$:\n",
    "\n",
    "\\begin{equation}\n",
    "L^w(\\tilde{w})=\n",
    "\\left\\{\n",
    "\\begin{aligned}\n",
    "&1,&\\tilde{w}=w,\\\\\n",
    "&0,&\\tilde{w}\\neq w\n",
    "\\end{aligned}\n",
    "\\right.\n",
    "\\end{equation}\n",
    "\n",
    "The conditional probability becomes:\n",
    "$$p(u|Context(w))=[\\sigma(x_w^T\\theta^u)]^{L^w(u)}\\cdot [1-\\sigma(x_w^T\\theta^u)]^{1-L^w(u)}$$\n",
    "with the corresponding objective:\n",
    "\\begin{equation}\n",
    "\\begin{aligned}\n",
    "g(w)=&\\prod_{u\\in\\{w\\}\\cup NEG(w)}p(u|Context(w))\\\\\n",
    "    =&\\sigma(x_w^T\\theta^w)\\prod_{u\\in NEG(w)}[1-\\sigma(x_w^T\\theta^u)]\\\\\n",
    "    =&\\sigma(x_w^T\\theta^w)\\prod_{u\\in NEG(w)}[\\sigma(-x_w^T\\theta^u)]\n",
    "\\end{aligned}\n",
    "\\end{equation}\n",
    "Read together, maximizing $g(w)$ means maximizing the probability of the center word, $\\sigma(x_w^T\\theta^w)$, while simultaneously minimizing the probability of every negative-sample word, $\\sigma(x_w^T\\theta^u)$ for $u\\in NEG(w)$. This is exactly the binary-classification recipe of raising the positive-sample probability while lowering the negative-sample probabilities.\n",
    "The corresponding log-likelihood is:\n",
    "$$L=\\log G=\\log\\prod_{w\\in C}g(w)=\\sum_{w\\in C}\\log g(w)\\\\\n",
    "=\\sum_{w\\in C}\\Big\\{\\log[\\sigma(x_w^T\\theta^w)]+\\sum_{u\\in NEG(w)}\\log[\\sigma(-x_w^T\\theta^u)]\\Big\\}$$\n",
    "and per sample\n",
    "$$L(w,u)=L^w(u)\\cdot \\log[\\sigma(x_w^T\\theta^u)]+[1-L^w(u)]\\cdot \\log[1-\\sigma(x_w^T\\theta^u)]$$\n",
    "Applying stochastic gradient ascent:\n",
    "$$\\frac{\\partial L(w,u)}{\\partial \\theta^u}=[L^w(u)-\\sigma(x_w^T\\theta^u)]x_w$$\n",
    "$$\\frac{\\partial L(w,u)}{\\partial x_w}=[L^w(u)-\\sigma(x_w^T\\theta^u)]\\theta^u$$\n",
    "giving the update formulas:\n",
    "$$\\theta^u:=\\theta^u+\\eta[L^w(u)-\\sigma(x_w^T\\theta^u)]x_w$$\n",
    "$$v(\\hat{w}):=v(\\hat{w})+\\eta\\sum_{u\\in\\{w\\}\\cup NEG(w)}\\frac{\\partial L(w,u)}{\\partial x_w},\\quad\\hat{w} \\in Context(w)$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 Core steps of the NEG-model CBOW algorithm:<br>\n",
    "<ol text-align:center;>\n",
    "<li>$e=0$ </li>\n",
    "<li>$x_w=\\frac{1}{|Context(w)|}\\sum_{u\\in Context(w)}v(u)$ </li>\n",
    "<li>For $u\\in\\{w\\}\\cup NEG(w)$&nbsp;&nbsp;&nbsp;&nbsp;DO</li>\n",
    "  3.1 $f_u=\\sigma(x_w^T\\theta^u)$<br>\n",
    "  3.2 $g_u=\\eta(L^w(u)-f_u)$<br>\n",
    "  3.3 $e=e+g_u\\theta^u$<br>\n",
    "  3.4 $\\theta^u:=\\theta^u+g_ux_w$<br>\n",
    "<li>For $u\\in Context(w)$:</li>\n",
    "&nbsp;&nbsp;&nbsp;&nbsp;&emsp;&emsp;$v(u):=v(u)+e$\n",
    "</ol>"
   ]
  },
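  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These steps can be sketched in vectorized toy form, with invented dimensions and random parameters. The only structural difference from the HS sketch earlier is that the rows of theta now correspond to the set consisting of w and NEG(w), and the labels come from the indicator $L^w(u)$:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "dim, k = 4, 5                          # toy size and number of negative samples\n",
    "x_w = rng.normal(size=dim)             # mean of the context vectors (step 2)\n",
    "theta = rng.normal(size=(k + 1, dim))  # theta^u, one row per u in {w} + NEG(w)\n",
    "labels = np.zeros(k + 1)\n",
    "labels[0] = 1.0                        # L^w(u): only the true word is positive\n",
    "eta = 0.025\n",
    "\n",
    "f = 1 / (1 + np.exp(-theta @ x_w))  # 3.1, all u at once\n",
    "g = eta * (labels - f)              # 3.2\n",
    "e = g @ theta                       # 3.3, accumulated over all u\n",
    "theta += np.outer(g, x_w)           # 3.4\n",
    "# step 4 would then add e to every v(u), u in Context(w)\n",
    "```"
   ]
  },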
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def make_cum_table(self, power=0.75, domain=2**31 - 1):\n",
    "    \"\"\"\n",
    "    Create a cumulative-distribution table using stored vocabulary word counts for\n",
    "    drawing random words in the negative-sampling training routines.\n",
    "\n",
    "    To draw a word index, choose a random integer up to the maximum value in the\n",
    "    table (cum_table[-1]), then find that integer's sorted insertion point\n",
    "    (as if by bisect_left or ndarray.searchsorted()). That insertion point is the\n",
    "    drawn index, which comes up in proportion to the increment at that slot.\n",
    "\n",
    "    Called internally from 'build_vocab()'.\n",
    "    \"\"\"\n",
    "    vocab_size = len(self.wv.index2word)\n",
    "    self.cum_table = zeros(vocab_size, dtype=uint32)\n",
    "    # compute sum of all power (Z in paper)\n",
    "    train_words_pow = 0.0\n",
    "    for word_index in xrange(vocab_size):\n",
    "        train_words_pow += self.wv.vocab[self.wv.index2word[word_index]].count**power\n",
    "    cumulative = 0.0\n",
    "    for word_index in xrange(vocab_size):\n",
    "        cumulative += self.wv.vocab[self.wv.index2word[word_index]].count**power\n",
    "        self.cum_table[word_index] = round(cumulative / train_words_pow * domain)\n",
    "    if len(self.cum_table) > 0:\n",
    "        assert self.cum_table[-1] == domain\n",
    "        \n",
    "# negative-sampling branch of the CBOW training (from train_cbow_pair)\n",
    "if model.negative:\n",
    "    # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)\n",
    "    word_indices = [word.index]\n",
    "    # keep sampling until word_indices holds `negative` extra words (model.negative defaults to 5)\n",
    "    while len(word_indices) < model.negative + 1:\n",
    "        w = model.cum_table.searchsorted(model.random.randint(model.cum_table[-1]))\n",
    "        if w != word.index:\n",
    "            word_indices.append(w)\n",
    "    l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size\n",
    "    prod_term = dot(l1, l2b.T)\n",
    "    fb = expit(prod_term)  # propagate hidden -> output\n",
    "    gb = (model.neg_labels - fb) * alpha  # vector of error gradients multiplied by the learning rate\n",
    "    if learn_hidden:\n",
    "        model.syn1neg[word_indices] += outer(gb, l1)  # learn hidden -> output\n",
    "    neu1e += dot(gb, l2b)  # save error"
   ]
  },
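  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cumulative table and the searchsorted draw can be reproduced standalone to watch the count^0.75 smoothing at work. A sketch with four invented word counts (gensim itself stores uint32 and uses model.random, but the mechanics are the same):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "counts = np.array([50, 10, 4, 1], dtype=float)  # toy word counts, index = word id\n",
    "domain = 2**31 - 1\n",
    "weights = counts**0.75\n",
    "cum_table = np.round(np.cumsum(weights) / weights.sum() * domain).astype(np.int64)\n",
    "assert cum_table[-1] == domain\n",
    "\n",
    "# draw 100000 word ids and compare the empirical frequencies\n",
    "rng = np.random.default_rng(0)\n",
    "draws = cum_table.searchsorted(rng.integers(0, cum_table[-1], size=100000))\n",
    "freq = np.bincount(draws, minlength=len(counts)) / 100000\n",
    "print(freq)  # close to weights / weights.sum()\n",
    "```\n",
    "The 0.75 exponent flattens the distribution: the rarest word here is drawn about 3.5% of the time instead of the 1.5% its raw count would give."
   ]
  },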
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "    # loss component corresponding to negative sampling\n",
    "    if compute_loss:\n",
    "        model.running_training_loss -= sum(log(expit(-1 * prod_term[1:])))  # for the sampled words\n",
    "        model.running_training_loss -= log(expit(prod_term[0]))  # for the output word"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## 4 Evaluating word vectors\n",
    "The practical test is to feed the vectors into another supervised system (task) as features and see how much they improve it.\n",
    "Common tasks for measuring word-embedding quality are: \n",
    "1. the similarity task\n",
    "2. the analogy task, i.e. the famous A - B = C - D\n",
    "\n",
    "① Some use the vectors to find synonyms or related words, judging word relations directly by distance in the vector space.<br>\n",
    "② Quite a bit of early work used word vectors directly as features added to an existing system. Features thrive on diversity: even without knowing exactly what information the vectors encode, they may carry something new and lift performance.<br>\n",
    "③ A large body of neural-network work uses word vectors as initial values: a good initialization may let the network converge to a better local optimum.\n",
    "So there are three kinds of criteria: linguistic properties, use as features, and use as initialization.<br>\n",
    "From these three usages one can derive roughly 8 concrete benchmarks for comparing word vectors. Their results are quite inconsistent, so no single universal metric emerges.\n",
    "\n",
    "The corpus matters far, far more for word-vector quality than the choice of model (worth repeating three times).\n",
    "\n",
    "Beyond 50 dimensions the gains become small. These results depend heavily on the specific task implementations; with more modern network optimization methods the picture might well change."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5 Summary\n",
    "1. Tuning experience:<br>Dimensionality, window size, minimum count, subsampling and so on must all follow the actual corpus. Dimensionality is a tunable hyperparameter; unless the task is as large as machine translation, up to 200 dimensions is usually enough. The window size should not be too large for short texts such as tweets.\n",
    "\n",
    "2. A clarification on the shallow architecture being labeled DL, and on semantics vs. syntax:<br>\n",
    "Word2vec embeddings seem to be slightly better than fastText embeddings at the semantic tasks, while the fastText embeddings do significantly better on the syntactic analogies. Makes sense, since fastText embeddings are trained for understanding morphological nuances, and most of the syntactic analogies are morphology based.\n",
    "The Skip-Gram model predicts the context from the input word. The extra random re-draw of the window size exists to make the model focus more on the words nearest to the current input word.\n",
    "\n",
    "3. CBOW vs. Skip-Gram:<br>Algorithmically the two are very similar. CBOW predicts the target word (e.g. 'mat') from the source context words ('the cat sits on the'), while Skip-Gram does the reverse and predicts the source context from the target word. The motivation for inverting CBOW is that CBOW smooths over a lot of distributional information (it treats an entire context as a single observation), which helps on small datasets; Skip-Gram instead treats each context-target pair as a fresh observation, which works better on large datasets.\n",
    "CBOW averages the 2m context vectors into a single input vector and predicts the center word from it; Skip-gram feeds the 2m context vectors in one at a time.\n",
    "The output probabilities express how likely each dictionary word is to co-occur with the input word.\n",
    "\n",
    "4. High-frequency words:<br>\n",
    "To counter the imbalance in the training set, stop words need to be downsampled. Word2Vec handles frequent words through its subsampling mode: every word encountered in the raw training text has some probability of being deleted, and that deletion probability grows with the word's frequency.\n",
    "Only words whose corpus frequency exceeds roughly 0.26% are subsampled at all (with the default threshold).\n",
    "Subsampling stop words such as 'the', 'of' and 'for' both speeds up training and reduces noise.\n",
    "The formula is:\n",
    "$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\n",
    "where $ t $ is a threshold parameter, typically between 1e-3 and 1e-5,  \n",
    "$f(w_i)$ is the frequency of word $w_i$ in the whole dataset, and  \n",
    "$P(w_i)$ is the probability that the word is deleted.\n",
    "\n",
    "5. In most cases the performance peak on a simple task can serve as the stopping criterion for word-vector training. When possible, validation performance on the target task is the best reference.\n",
    "6. How many dimensions? For tasks probing linguistic properties, larger is better (except for the C&W model), but beyond 50 dimensions the improvement is very small.\n",
    "7. Convolutional networks with max pooling can pick out the most useful text fragments at O(n) complexity, so they have more potential for building text semantics.\n",
    "8. Recurrent convolutional networks do not depend on the window size and outperform CNNs even at the CNN's best window size: the recurrent structure preserves longer context and, compared with a large-window CNN, introduces less noise."
   ]
  },
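  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plugging a few frequencies into the deletion formula above makes its effect concrete. A small sketch (the frequencies are invented; t uses the common 1e-3 default, which gensim exposes as the sample parameter, and the probability is clipped at 0 so rare words are never dropped):\n",
    "```python\n",
    "t = 1e-3  # threshold parameter\n",
    "\n",
    "def discard_prob(freq):\n",
    "    # P(w) = 1 - sqrt(t / f(w)), clipped at 0: only words above the threshold are ever dropped\n",
    "    return max(0.0, 1 - (t / freq)**0.5)\n",
    "\n",
    "for freq in (1e-5, 1e-3, 1e-2, 1e-1):\n",
    "    print(freq, round(discard_prob(freq), 3))\n",
    "```\n",
    "A word making up 10% of the corpus is deleted 90% of the time, while anything at or below the threshold frequency is always kept."
   ]
  },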
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6 Some questions worth thinking about\n",
    "1. There is no strict standard for judging word-vector quality; king - man + woman = queen is only a sanity check.<br>\n",
    "2. Why does the Huffman tree guarantee that the HS probabilities sum to one, $\\sum_{w\\in D}p(w|Context(w))=1$?<br>\n",
    "3. Backpropagation cannot use an averaged contribution here, yet a mean-pooling forward pass can — why?<br>\n",
    "4. Is the hidden-to-output matrix computation the main time sink?<br>\n",
    "\n",
    "\n",
    "- Should the derivative of $L$ with respect to $x_w$ carry a leading $\\sum_{j=2}^{l^w}$? Differentiating $L$ with respect to $x_w$ is not fully symmetric with differentiating with respect to $\\theta_{j-1}^w$. The reason $v(u)$ can be updated directly with the gradient of $L$ with respect to $x_w$ is that $\\frac{\\partial L}{\\partial v(u)}=\\frac{\\partial L}{\\partial x_w}$, because $x_w = \\sum_{u\\in Context(w)}v(u)$; of course, if $x_w$ is the averaged vector, a factor of $1/|Context(w)|$ should appear as well. The explanation in the original article is somewhat forced — it may be conflating the sum over the $l^w$ path nodes with the sum over the context word vectors."
   ]
  },
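  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Question 2 can be checked numerically: because $\\sigma(z)+\\sigma(-z)=1$ at every split, multiplying the sigmoid branch probabilities down any binary tree makes the leaf probabilities sum to 1, regardless of the internal-node vectors. A sketch for a full depth-3 tree with random made-up parameters (heap-style node indexing is an assumption of this sketch, not gensim's layout):\n",
    "```python\n",
    "import itertools\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "depth = 3\n",
    "x = rng.normal(size=4)                      # a fixed context vector x_w\n",
    "theta = rng.normal(size=(2**depth - 1, 4))  # one vector per internal node\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1 / (1 + np.exp(-z))\n",
    "\n",
    "total = 0.0\n",
    "for code in itertools.product([0, 1], repeat=depth):  # each leaf = one code\n",
    "    node, p = 0, 1.0\n",
    "    for d in code:\n",
    "        z = x @ theta[node]\n",
    "        p *= sigmoid(z) if d == 0 else sigmoid(-z)\n",
    "        node = 2 * node + 1 + d  # descend: left child for 0, right for 1\n",
    "    total += p\n",
    "print(total)  # 1.0 up to floating-point error\n",
    "```\n",
    "The same telescoping argument covers Huffman trees of uneven depth: at every internal node the two branch probabilities sum to 1, so the leaf probabilities always form a valid distribution."
   ]
  },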
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7 References\n",
    "1. Efficient Estimation of Word Representations in Vector Space, 2013-01\n",
    "2. Exploiting Similarities among Languages for Machine Translation, 2013-09\n",
    "3. Distributed Representations of Words and Phrases and their Compositionality, 2013-10\n",
    "\n",
    "\n",
    "1. A very detailed CSDN series on the word2vec math [皮果提][1]\n",
    "2. The gensim library and API documentation [gensim tutorials][2] \n",
    "3. The same derivation restated in matrix form [矩阵推导][3]\n",
    "4. Leiphone's translated tutorial: building and training skip-gram in tensorflow [雷锋网][4]\n",
    "5. Annotated skip-gram source on GitHub [知乎github][5]\n",
    "6. On evaluating word vectors [词向量评价][6]\n",
    "7. Blog posts accompanying licstar's PhD thesis [博士论文的博文网站][7]\n",
    "8. The gensim author's blog on making the Python word2vec faster than the C version [gensim作者的博客][8]\n",
    "[1]: http://blog.csdn.net/itplus/article/details/37969817  \"皮果提\" \n",
    "[2]: https://github.com/RaRe-Technologies/gensim/blob/develop/tutorials.md  \"gensim tutorials\" \n",
    "[3]: http://www.cnblogs.com/Determined22/p/5804455.html \"矩阵推导\"\n",
    "[4]: https://www.leiphone.com/news/201706/PamWKpfRFEI42McI.html \"雷锋网\"\n",
    "[5]: https://github.com/NELSONZHAO/zhihu/blob/master/skip_gram/Skip-Gram-Chinese-Corpus.ipynb \"知乎github\"\n",
    "[6]: https://www.zhihu.com/question/37489735?sort=created \"词向量评价\"\n",
    "[7]: http://licstar.net/archives/tag/%E8%AF%8D%E5%90%91%E9%87%8F \"博士论文的博文网站\"\n",
    "[8]: https://rare-technologies.com/word2vec-in-python-part-two-optimizing/ \"gensim作者的博客\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "\n",
    "## 8 Appendix: Doc2Vec\n",
    "By default Doc2Vec (dm=1) trains via the train_document_dm routine."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Doc2Vec(Word2Vec):\n",
    "    def __init__(self, documents=None, dm_mean=None,\n",
    "                 dm=1, dbow_words=0, dm_concat=0, dm_tag_count=1,\n",
    "                 docvecs=None, docvecs_mapfile=None, comment=None, trim_rule=None, **kwargs):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "    def train_document_dbow(model, doc_words, doctag_indexes, alpha, work=None,\n",
    "                            train_words=False, learn_doctags=True, learn_words=True, learn_hidden=True,\n",
    "                            word_vectors=None, word_locks=None, doctag_vectors=None, doctag_locks=None):\n",
    "        \"\"\"\n",
    "        If `train_words` is True, simultaneously train word-to-word (not just doc-to-word)\n",
    "        examples, exactly as per Word2Vec skip-gram training. (Without this option,\n",
    "        word vectors are neither consulted nor updated during DBOW doc vector training.)\n",
    "\n",
    "        Any of `learn_doctags', `learn_words`, and `learn_hidden` may be set False to\n",
    "        prevent learning-updates to those respective model weights, as if using the\n",
    "        (partially-)frozen model to infer other compatible vectors.\n",
    "        \"\"\"\n",
    "        if doctag_vectors is None:\n",
    "            doctag_vectors = model.docvecs.doctag_syn0\n",
    "        if doctag_locks is None:\n",
    "            doctag_locks = model.docvecs.doctag_syn0_lockf\n",
    "\n",
    "        if train_words and learn_words:\n",
    "            train_batch_sg(model, [doc_words], alpha, work)\n",
    "        for doctag_index in doctag_indexes:\n",
    "            for word in doc_words:\n",
    "                train_sg_pair(model, word, doctag_index, alpha, learn_vectors=learn_doctags,\n",
    "                              learn_hidden=learn_hidden, context_vectors=doctag_vectors,\n",
    "                              context_locks=doctag_locks)\n",
    "\n",
    "        return len(doc_words)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "    def train_document_dm(model, doc_words, doctag_indexes, alpha, work=None, neu1=None,\n",
    "                          learn_doctags=True, learn_words=True, learn_hidden=True,\n",
    "                          word_vectors=None, word_locks=None, doctag_vectors=None, doctag_locks=None):\n",
    "        \"\"\"\n",
    "        Called internally from `Doc2Vec.train()` and `Doc2Vec.infer_vector()`. This\n",
    "        method implements the DM model with a projection (input) layer that is\n",
    "        either the sum or mean of the context vectors, depending on the model's\n",
    "        `dm_mean` configuration field.  See `train_document_dm_concat()` for the DM\n",
    "        model with a concatenated input layer.\n",
    "\n",
    "        The document is provided as `doc_words`, a list of word tokens which are looked up\n",
    "        in the model's vocab dictionary, and `doctag_indexes`, which provide indexes\n",
    "        into the doctag_vectors array.\n",
    "        \"\"\"\n",
    "        if word_vectors is None:\n",
    "            word_vectors = model.wv.syn0\n",
    "        if word_locks is None:\n",
    "            word_locks = model.syn0_lockf\n",
    "        if doctag_vectors is None:\n",
    "            doctag_vectors = model.docvecs.doctag_syn0\n",
    "        if doctag_locks is None:\n",
    "            doctag_locks = model.docvecs.doctag_syn0_lockf\n",
    "\n",
    "        word_vocabs = [model.wv.vocab[w] for w in doc_words if w in model.wv.vocab and\n",
    "                       model.wv.vocab[w].sample_int > model.random.rand() * 2**32]\n",
    "\n",
    "        for pos, word in enumerate(word_vocabs):\n",
    "            reduced_window = model.random.randint(model.window)  # `b` in the original doc2vec code\n",
    "            start = max(0, pos - model.window + reduced_window)\n",
    "            window_pos = enumerate(word_vocabs[start:(pos + model.window + 1 - reduced_window)], start)\n",
    "            word2_indexes = [word2.index for pos2, word2 in window_pos if pos2 != pos]\n",
    "            l1 = np_sum(word_vectors[word2_indexes], axis=0) + np_sum(doctag_vectors[doctag_indexes], axis=0)\n",
    "            # the projection averages over both the window words and the doctags\n",
    "            count = len(word2_indexes) + len(doctag_indexes)\n",
    "            if model.cbow_mean and count > 1 :\n",
    "                l1 /= count\n",
    "            neu1e = train_cbow_pair(model, word, word2_indexes, l1, alpha,\n",
    "                                    learn_vectors=False, learn_hidden=learn_hidden)\n",
    "            if not model.cbow_mean and count > 1:\n",
    "                neu1e /= count\n",
    "            if learn_doctags:\n",
    "                for i in doctag_indexes:\n",
    "                    doctag_vectors[i] += neu1e * doctag_locks[i]\n",
    "            if learn_words:\n",
    "                for i in word2_indexes:\n",
    "                    word_vectors[i] += neu1e * word_locks[i]\n",
    "\n",
    "        return len(word_vocabs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "On multithreading: each worker thread dispatches its training batch through\n",
    "    def _do_train_job(self, job, alpha, inits):\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
