{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Negative Sampling\n",
    "\n",
    "Negative sampling is used very widely; for example, the familiar word2vec relies on it.\n",
    "\n",
    "So what is negative sampling? As the name suggests, it means constructing negative samples by some means and letting them take part in training as well.\n",
    "\n",
    "Which raises the question: how does negative sampling affect training? In other words, what effect do the negative samples have on the final loss during training?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Negative sampling in Keras's `skipgrams`\n",
    "\n",
    "The `keras.preprocessing.sequence.skipgrams` function takes a `negative_samples` argument: the number of negative samples to draw for each positive sample. Its negative sampling is implemented like this:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "\n",
    "def skipgrams(sequence, vocabulary_size,\n",
    "              window_size=4, negative_samples=1., shuffle=True,\n",
    "              categorical=False, sampling_table=None, seed=None):\n",
    "    \"\"\"Generates skipgram word pairs.\n",
    "\n",
    "    This function transforms a sequence of word indexes (list of integers)\n",
    "    into tuples of words of the form:\n",
    "\n",
    "    - (word, word in the same window), with label 1 (positive samples).\n",
    "    - (word, random word from the vocabulary), with label 0 (negative samples).\n",
    "\n",
    "    Read more about Skipgram in this gnomic paper by Mikolov et al.:\n",
    "    [Efficient Estimation of Word Representations in\n",
    "    Vector Space](http://arxiv.org/pdf/1301.3781v3.pdf)\n",
    "\n",
    "    # Arguments\n",
    "        sequence: A word sequence (sentence), encoded as a list\n",
    "            of word indices (integers). If using a `sampling_table`,\n",
    "            word indices are expected to match the rank\n",
    "            of the words in a reference dataset (e.g. 10 would encode\n",
    "            the 10-th most frequently occurring token).\n",
    "            Note that index 0 is expected to be a non-word and will be skipped.\n",
    "        vocabulary_size: Int, maximum possible word index + 1\n",
    "        window_size: Int, size of sampling windows (technically half-window).\n",
    "            The window of a word `w_i` will be\n",
    "            `[i - window_size, i + window_size+1]`.\n",
    "        negative_samples: Float >= 0. 0 for no negative (i.e. random) samples.\n",
    "            1 for same number as positive samples.\n",
    "        shuffle: Whether to shuffle the word couples before returning them.\n",
    "        categorical: bool. if False, labels will be\n",
    "            integers (eg. `[0, 1, 1 .. ]`),\n",
    "            if `True`, labels will be categorical, e.g.\n",
    "            `[[1,0],[0,1],[0,1] .. ]`.\n",
    "        sampling_table: 1D array of size `vocabulary_size` where the entry i\n",
    "            encodes the probability to sample a word of rank i.\n",
    "        seed: Random seed.\n",
    "\n",
    "    # Returns\n",
    "        couples, labels: where `couples` are int pairs and\n",
    "            `labels` are either 0 or 1.\n",
    "\n",
    "    # Note\n",
    "        By convention, index 0 in the vocabulary is\n",
    "        a non-word and will be skipped.\n",
    "    \"\"\"\n",
    "    couples = []\n",
    "    labels = []\n",
    "    for i, wi in enumerate(sequence):\n",
    "        if not wi:\n",
    "            continue\n",
    "        if sampling_table is not None:\n",
    "            if sampling_table[wi] < random.random():\n",
    "                continue\n",
    "\n",
    "        window_start = max(0, i - window_size)\n",
    "        window_end = min(len(sequence), i + window_size + 1)\n",
    "        for j in range(window_start, window_end):\n",
    "            if j != i:\n",
    "                wj = sequence[j]\n",
    "                if not wj:\n",
    "                    continue\n",
    "                couples.append([wi, wj])\n",
    "                if categorical:\n",
    "                    labels.append([0, 1])\n",
    "                else:\n",
    "                    labels.append(1)\n",
    "\n",
    "    if negative_samples > 0:\n",
    "        num_negative_samples = int(len(labels) * negative_samples)\n",
    "        words = [c[0] for c in couples]\n",
    "        random.shuffle(words)\n",
    "\n",
    "        couples += [[words[i % len(words)],\n",
    "                     random.randint(1, vocabulary_size - 1)]\n",
    "                    for i in range(num_negative_samples)]\n",
    "        if categorical:\n",
    "            labels += [[1, 0]] * num_negative_samples\n",
    "        else:\n",
    "            labels += [0] * num_negative_samples\n",
    "\n",
    "    if shuffle:\n",
    "        if seed is None:\n",
    "            seed = random.randint(0, int(10e6))  # randint requires int bounds\n",
    "        random.seed(seed)\n",
    "        random.shuffle(couples)\n",
    "        random.seed(seed)\n",
    "        random.shuffle(labels)\n",
    "\n",
    "    return couples, labels\n"
   ]
  },
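  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The negative-sampling step can be run in isolation. The cell below is a minimal, self-contained sketch: it assumes a few toy positive couples over a hypothetical vocabulary of 10 words, and mirrors the `negative_samples` block of the function above.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "random.seed(0)  # fix the seed so the draw is reproducible\n",
    "\n",
    "# toy positive (center, context) couples, as the window loop above would produce\n",
    "couples = [[1, 2], [2, 1], [2, 3], [3, 2]]\n",
    "labels = [1, 1, 1, 1]\n",
    "vocabulary_size = 10\n",
    "negative_samples = 1.  # one negative per positive\n",
    "\n",
    "num_negative_samples = int(len(labels) * negative_samples)\n",
    "words = [c[0] for c in couples]\n",
    "random.shuffle(words)\n",
    "\n",
    "# pair each (shuffled) center word with a random index in [1, vocabulary_size - 1]\n",
    "couples += [[words[i % len(words)],\n",
    "             random.randint(1, vocabulary_size - 1)]\n",
    "            for i in range(num_negative_samples)]\n",
    "labels += [0] * num_negative_samples\n",
    "\n",
    "print(couples)\n",
    "print(labels)  # four 1s followed by four 0s\n"
   ]
  },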
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As you can see, it is quite simple: the function randomly draws the specified number of words as negative samples and labels them `0` (or `[1, 0]` in categorical mode).\n",
    "\n",
    "Back to the earlier question: **how do these negative samples affect the loss?**\n",
    "\n",
    "The answer is simple: **the model's output (e.g. a softmax over the two classes) gives a predicted probability for each sample. Because the negative samples are labeled 0, backpropagating the resulting loss pushes the predicted probabilities of the negative samples toward 0, and conversely pushes those of the positive samples toward 1.**"
   ]
  },
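  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the effect on the loss concrete, here is a minimal numeric sketch using binary cross-entropy (equivalent to a softmax over the two classes): for a label-0 sample the per-sample loss is `-log(1 - p)`, which falls as the predicted probability `p` moves toward 0, while for a label-1 sample it is `-log(p)`, which falls as `p` moves toward 1.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "\n",
    "def bce(y, p):\n",
    "    # binary cross-entropy for one sample with label y and predicted probability p\n",
    "    return -(y * math.log(p) + (1 - y) * math.log(1 - p))\n",
    "\n",
    "\n",
    "# negative sample (label 0): the loss shrinks as p approaches 0\n",
    "for p in (0.9, 0.5, 0.1):\n",
    "    print(p, round(bce(0, p), 4))\n",
    "\n",
    "# positive sample (label 1): the loss shrinks as p approaches 1\n",
    "for p in (0.1, 0.5, 0.9):\n",
    "    print(p, round(bce(1, p), 4))\n"
   ]
  },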
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:tf]",
   "language": "python",
   "name": "conda-env-tf-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
