{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# GloVe Word Embeddings Demo"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This demo was part of a presentation for [this word embeddings workshop](https://www.eventbrite.com/e/practical-ai-for-female-engineers-product-managers-and-designers-tickets-34805104003) and a talk at [the Demystifying AI conference](https://www.eventbrite.com/e/demystifying-deep-learning-ai-tickets-34351888423).  It is not necessary to download the demo to be able to follow along and enjoy the workshop.\n",
    "\n",
    "It is available on GitHub at https://github.com/fastai/word-embeddings-workshop"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Loading our data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import pickle\n",
    "import numpy as np\n",
    "import re\n",
    "import json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "np.set_printoptions(precision=4, suppress=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dataset is available at http://files.fast.ai/models/glove_50_glove_100.tgz\n",
    "To download and extract the files from the command line, you can run:\n",
    "\n",
    "    wget http://files.fast.ai/models/glove_50_glove_100.tgz\n",
    "    tar xvzf glove_50_glove_100.tgz"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You will need to update the paths below to point to where you are storing the data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "vecs = np.load(\"glove_vectors_100d.npy\")\n",
    "vecs50 = np.load(\"glove_vectors_50d.npy\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "with open('words.txt') as f:\n",
    "    content = f.readlines()\n",
    "words = [x.strip() for x in content] "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "wordidx = json.load(open('wordsidx.txt'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What the data looks like"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's see what our data looks like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "400000"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(words)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['the', ',', '.', 'of', 'to', 'and', 'in', 'a', '\"', \"'s\"]"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "words[:10]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['together',\n",
       " 'congress',\n",
       " 'index',\n",
       " 'australia',\n",
       " 'results',\n",
       " 'hard',\n",
       " 'hours',\n",
       " 'land',\n",
       " 'action',\n",
       " 'higher']"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "words[600:610]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`wordidx` allows us to look up a word to find its index:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "dict"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "type(wordidx)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "11853"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "wordidx['feminist']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'feminist'"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "words[11853]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Words as vectors"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The word \"feminist\" is represented by the 100-dimensional vector:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "numpy.ndarray"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "type(vecs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([ 0.296 ,  0.7626, -0.9866,  0.3776,  0.3194,  0.8286, -0.1686,\n",
       "       -1.4558,  0.1965,  0.3854, -0.3348, -0.6503, -0.2528, -0.11  ,\n",
       "       -0.1545,  0.5354, -0.4527, -0.0516,  0.1312,  0.0744,  0.5001,\n",
       "        0.2151,  0.0688,  0.4347,  0.261 , -0.0371,  0.1385, -1.518 ,\n",
       "        0.0641,  0.149 , -0.0314,  0.5038,  0.2839,  0.3457, -0.4411,\n",
       "       -0.3459, -0.2118,  0.5651, -0.088 , -0.0438, -1.2228,  0.6039,\n",
       "       -0.23  ,  0.2287, -0.2695, -0.9398,  0.2376,  0.3302, -0.2422,\n",
       "        0.6359,  0.1347,  0.5542,  0.1432,  0.2861,  0.0216, -0.7437,\n",
       "        0.3508,  0.362 ,  0.5566,  0.3403,  0.3613,  0.5185, -0.5437,\n",
       "       -0.285 ,  1.1831, -0.1192,  0.2473,  0.0614,  0.4436, -0.244 ,\n",
       "        0.2016,  0.5143, -0.4695, -0.0974, -0.9836, -0.3594,  0.3903,\n",
       "       -0.517 , -0.1659, -1.2132, -1.3228,  0.0578,  0.7022,  0.3492,\n",
       "       -0.9103, -0.381 , -0.1545,  0.4467, -0.009 , -0.9838,  1.0114,\n",
       "       -0.227 ,  0.2697,  0.1566,  0.5613,  0.1175, -0.5755, -0.6324,\n",
       "        0.1052,  1.2465], dtype=float32)"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vecs[11853]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This lets us do some useful calculations. For instance, we can see how far apart two words are using a distance metric:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from scipy.spatial.distance import cosine as dist"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Smaller numbers mean two words are closer together; larger numbers mean they are further apart."
   ]
  },
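  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, the cosine distance between vectors $u$ and $v$ is $1 - \\frac{u \\cdot v}{\\|u\\| \\, \\|v\\|}$.  As a quick sanity check, we can compute it by hand with numpy, which should agree with what `dist` returns for the same pair:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "u, v = vecs[wordidx[\"puppy\"]], vecs[wordidx[\"dog\"]]\n",
    "# cosine distance = 1 - (u . v) / (||u|| ||v||)\n",
    "1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))"
   ]
  },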
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The distance between similar words is low:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.27636240676695256"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"puppy\"]], vecs[wordidx[\"dog\"]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.20527545040329642"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"queen\"]], vecs[wordidx[\"princess\"]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "And the distance between unrelated words is high:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.98835787578057777"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"celebrity\"]], vecs[wordidx[\"dusty\"]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.87298516557634254"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"kitten\"]], vecs[wordidx[\"airplane\"]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.96211070091611983"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"avalanche\"]], vecs[wordidx[\"antique\"]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Bias"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is a lot of opportunity for bias:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.50985148631697985"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"man\"]], vecs[wordidx[\"genius\"]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.6897833082810727"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"woman\"]], vecs[wordidx[\"genius\"]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not all pairs follow the stereotype:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.55957489609574407"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"man\"]], vecs[wordidx[\"emotional\"]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.62572056015698596"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dist(vecs[wordidx[\"woman\"]], vecs[wordidx[\"emotional\"]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I just checked the distance between pairs of words because it is a quick and simple way to illustrate the concept.  It is also a very **noisy** approach, and **researchers approach this problem in more systematic ways**."
   ]
  },
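  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a slightly more systematic sketch (loosely inspired by the association tests used in the bias literature, such as WEAT), we can compare a word's average similarity to one set of attribute words versus another.  The `assoc` helper below is our own illustrative function, not a standard implementation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def assoc(word, A, B):\n",
    "    # mean cosine similarity of `word` to set A, minus mean similarity to set B\n",
    "    sim = lambda w: 1 - dist(vecs[wordidx[word]], vecs[wordidx[w]])\n",
    "    return np.mean([sim(a) for a in A]) - np.mean([sim(b) for b in B])\n",
    "\n",
    "# positive means closer to the first (masculine) set\n",
    "assoc(\"genius\", [\"man\", \"he\", \"him\"], [\"woman\", \"she\", \"her\"])"
   ]
  },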
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualizing the words"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will use [Plotly](https://plot.ly/), a Python library for making interactive graphs (note: everything below is done with the free, offline version of Plotly)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "heading_collapsed": true
   },
   "source": [
    "### Methods"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "import plotly\n",
    "import plotly.graph_objs as go    \n",
    "from IPython.display import IFrame"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "def plotly_3d(Y, cat_labels):\n",
    "    # Note: assumes 5 words per category, in the same order as the global my_words list\n",
    "    trace_dict = {}\n",
    "    for i, label in enumerate(cat_labels):\n",
    "        trace_dict[i] = go.Scatter3d(\n",
    "            x=Y[i*5:(i+1)*5, 0],\n",
    "            y=Y[i*5:(i+1)*5, 1],\n",
    "            z=Y[i*5:(i+1)*5, 2],\n",
    "            mode='markers',\n",
    "            marker=dict(\n",
    "                size=8,\n",
    "                line=dict(\n",
    "                    color='rgba('+ str(i*40) + ',' + str(i*40) + ',' + str(i*40) + ', 0.14)',\n",
    "                    width=0.5\n",
    "                ),\n",
    "                opacity=0.8\n",
    "            ),\n",
    "            text = my_words[i*5:(i+1)*5],\n",
    "            name = label\n",
    "        )\n",
    "\n",
    "    data = [item for item in trace_dict.values()]\n",
    "    layout = go.Layout(\n",
    "        margin=dict(\n",
    "            l=0,\n",
    "            r=0,\n",
    "            b=0,\n",
    "            t=0\n",
    "        )\n",
    "    )\n",
    "\n",
    "    plotly.offline.plot({\n",
    "        \"data\": data,\n",
    "        \"layout\": layout\n",
    "    })"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "def plotly_2d(Y, cat_labels):\n",
    "    # Note: assumes 5 words per category, in the same order as the global my_words list\n",
    "    trace_dict = {}\n",
    "    for i, label in enumerate(cat_labels):\n",
    "        trace_dict[i] = go.Scatter(\n",
    "            x=Y[i*5:(i+1)*5, 0],\n",
    "            y=Y[i*5:(i+1)*5, 1],\n",
    "            mode='markers',\n",
    "            marker=dict(\n",
    "                size=8,\n",
    "                line=dict(\n",
    "                    color='rgba('+ str(i*40) + ',' + str(i*40) + ',' + str(i*40) + ', 0.14)',\n",
    "                    width=0.5\n",
    "                ),\n",
    "                opacity=0.8\n",
    "            ),\n",
    "            text = my_words[i*5:(i+1)*5],\n",
    "            name = label\n",
    "        )\n",
    "\n",
    "    data = [item for item in trace_dict.values()]\n",
    "    layout = go.Layout(\n",
    "        margin=dict(\n",
    "            l=0,\n",
    "            r=0,\n",
    "            b=0,\n",
    "            t=0\n",
    "        )\n",
    "    )\n",
    "\n",
    "    plotly.offline.plot({\n",
    "        \"data\": data,\n",
    "        \"layout\": layout\n",
    "    })"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Preparing the Data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's plot words from a few different categories:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "categories = [\n",
    "              \"bugs\", \"music\", \n",
    "              \"pleasant\", \"unpleasant\", \n",
    "              \"science\", \"arts\"\n",
    "             ]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "my_words = [\n",
    "            \"maggot\", \"flea\", \"tarantula\", \"bedbug\", \"mosquito\", \n",
    "            \"violin\", \"cello\", \"flute\", \"harp\", \"mandolin\",\n",
    "            \"joy\", \"love\", \"peace\", \"pleasure\", \"wonderful\",\n",
    "            \"agony\", \"terrible\", \"horrible\", \"nasty\", \"failure\", \n",
    "            \"physics\", \"chemistry\", \"science\", \"technology\", \"engineering\",\n",
    "            \"poetry\", \"art\", \"literature\", \"dance\", \"symphony\",\n",
    "           ]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Again, we need to look up the indices of our words using the wordidx dictionary:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "X = np.array([wordidx[word] for word in my_words])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(30, 100)"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "vecs[X].shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we will stack the embeddings of our chosen words on top of the embeddings of the first 10,000 words in the vocabulary (some of our words will appear in both), giving a single matrix:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(10030, 100)"
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embeddings = np.concatenate((vecs[X], vecs[:10000,:]), axis=0); embeddings.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Viewing the words in 3D"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The words are in 100 dimensions, so we need a way to reduce them to 3 dimensions in order to view them.  Two good options are t-SNE and PCA.  The main idea of both is to find a meaningful way to go from 100 dimensions down to 3, while keeping a similar notion of which points are close to which.\n",
    "\n",
    "You would typically use just one of these (t-SNE or PCA); I've included both in case you're interested."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "heading_collapsed": true
   },
   "source": [
    "#### t-SNE"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "from sklearn import manifold"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "tsne = manifold.TSNE(n_components=3, init='pca', random_state=0)\n",
    "Y = tsne.fit_transform(embeddings)\n",
    "plotly_3d(Y, categories)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {
    "hidden": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "        <iframe\n",
       "            width=\"600\"\n",
       "            height=\"400\"\n",
       "            src=\"temp-plot.html\"\n",
       "            frameborder=\"0\"\n",
       "            allowfullscreen\n",
       "        ></iframe>\n",
       "        "
      ],
      "text/plain": [
       "<IPython.lib.display.IFrame at 0x7f6f87c3c588>"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "IFrame('temp-plot.html', width=600, height=400)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### PCA"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from sklearn import decomposition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "pca = decomposition.PCA(n_components=3).fit(embeddings.T)\n",
    "components = pca.components_\n",
    "plotly_3d(components.T[:len(my_words),:], categories)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "        <iframe\n",
       "            width=\"600\"\n",
       "            height=\"400\"\n",
       "            src=\"temp-plot.html\"\n",
       "            frameborder=\"0\"\n",
       "            allowfullscreen\n",
       "        ></iframe>\n",
       "        "
      ],
      "text/plain": [
       "<IPython.lib.display.IFrame at 0x7f6f87c3ce48>"
      ]
     },
     "execution_count": 65,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "IFrame('temp-plot.html', width=600, height=400)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Nearest Neighbors"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also see what words are close to a given word."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from sklearn.neighbors import NearestNeighbors"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Nearest neighbors is an algorithm that finds the points in a dataset closest to a given point (here, using cosine distance)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "NearestNeighbors(algorithm='brute', leaf_size=30, metric='cosine',\n",
       "         metric_params=None, n_jobs=1, n_neighbors=10, p=2, radius=0.5)"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "neigh = NearestNeighbors(n_neighbors=10, radius=0.5, metric='cosine', algorithm='brute')\n",
    "neigh.fit(vecs) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": true,
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([vecs[wordidx[\"intelligence\"]]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('intelligence', 1.7881393e-07),\n",
       " ('cia', 0.25781947),\n",
       " ('information', 0.27898049),\n",
       " ('security', 0.3036899),\n",
       " ('fbi', 0.30377108),\n",
       " ('military', 0.3065179),\n",
       " ('secret', 0.31066364),\n",
       " ('counterterrorism', 0.32373756),\n",
       " ('pentagon', 0.33488154),\n",
       " ('defense', 0.34354311)]"
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
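  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will repeat this lookup a few times, so it can be handy to wrap it in a small helper (`show_nearest` is our own convenience function, not part of scikit-learn):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def show_nearest(vec, n=10):\n",
    "    # return the n nearest words to a vector, with their cosine distances\n",
    "    distances, indices = neigh.kneighbors([vec], n_neighbors=n)\n",
    "    return [(words[int(ind)], d) for ind, d in zip(indices[0], distances[0])]"
   ]
  },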
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can take this a step further and add two words together.  What is the result?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "new_vec = vecs[wordidx[\"artificial\"]] + vecs[wordidx[\"intelligence\"]]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([ 0.0345, -0.1185,  0.746 ,  0.3256,  0.3256, -1.4699, -0.8715,\n",
       "       -0.9421,  0.0679,  0.922 ,  0.6811, -0.3729,  1.0969,  0.7196,\n",
       "        1.3515,  1.2493,  0.6621,  0.1901, -0.2707, -0.0444, -1.232 ,\n",
       "        0.1744,  0.7577, -0.9177, -1.2184,  0.6959, -0.1966, -0.415 ,\n",
       "       -0.3358,  0.5452,  0.589 , -0.0299, -0.9744, -0.8937,  0.2283,\n",
       "       -0.2092, -1.3795,  1.7811,  0.2269,  0.47  , -0.3045, -0.1573,\n",
       "       -0.478 ,  0.3071,  0.4202, -0.4434,  0.1602,  0.1443, -0.9528,\n",
       "       -0.5565,  0.7537,  0.182 ,  1.4008,  1.8967,  0.595 , -3.0072,\n",
       "        0.6811, -0.2557,  2.0217,  0.7825,  0.4251,  1.3615,  0.5902,\n",
       "       -0.1312,  0.9344, -0.5377, -0.3988, -0.6415,  0.6527,  0.5117,\n",
       "        0.7315,  0.1396,  0.3785, -0.6403, -0.094 ,  0.1076,  0.6197,\n",
       "        0.2537, -1.4346,  1.169 ,  1.6931,  0.1458, -0.5981,  0.8195,\n",
       "       -3.1903,  1.2429,  2.1481,  1.6004,  0.2014, -0.2121,  0.3698,\n",
       "       -0.001 , -0.628 ,  0.2869,  0.3119, -0.1093, -0.6341, -1.7804,\n",
       "        0.5857,  0.3702], dtype=float32)"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "new_vec"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([new_vec])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('intelligence', 0.18831611),\n",
       " ('artificial', 0.25617576),\n",
       " ('information', 0.3256532),\n",
       " ('knowledge', 0.33641893),\n",
       " ('secret', 0.36480361),\n",
       " ('human', 0.36726683),\n",
       " ('biological', 0.37090683),\n",
       " ('using', 0.37736303),\n",
       " ('scientific', 0.38513899),\n",
       " ('communication', 0.38691515)]"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([vecs[wordidx[\"king\"]]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 146,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('king', 0.0),\n",
       " ('prince', 0.23176712),\n",
       " ('queen', 0.24923098),\n",
       " ('son', 0.29791123),\n",
       " ('brother', 0.30142248),\n",
       " ('monarch', 0.30221093),\n",
       " ('throne', 0.30800098),\n",
       " ('kingdom', 0.31885898),\n",
       " ('father', 0.3197971),\n",
       " ('emperor', 0.32871419)]"
      ]
     },
     "execution_count": 146,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 147,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "new_vec = vecs[wordidx[\"king\"]] - vecs[wordidx[\"he\"]] + vecs[wordidx[\"she\"]]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 148,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([new_vec])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 149,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('king', 0.13275802),\n",
       " ('queen', 0.16259885),\n",
       " ('princess', 0.24821734),\n",
       " ('daughter', 0.29121184),\n",
       " ('prince', 0.29464376),\n",
       " ('elizabeth', 0.29630506),\n",
       " ('mother', 0.3091293),\n",
       " ('sister', 0.31979591),\n",
       " ('father', 0.34473372),\n",
       " ('throne', 0.34474838)]"
      ]
     },
     "execution_count": 149,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 150,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "19226"
      ]
     },
     "execution_count": 150,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "wordidx[\"programmer\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 152,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([vecs[wordidx[\"programmer\"]]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Closest words to \"programmer\":"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 153,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('programmer', 0.0),\n",
       " ('programmers', 0.32259798),\n",
       " ('animator', 0.36951029),\n",
       " ('software', 0.38250893),\n",
       " ('computer', 0.40600348),\n",
       " ('technician', 0.41406858),\n",
       " ('engineer', 0.43037564),\n",
       " ('user', 0.43565339),\n",
       " ('translator', 0.43721014),\n",
       " ('linguist', 0.44948018)]"
      ]
     },
     "execution_count": 153,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Feminine version of \"programmer\":"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "new_vec = vecs[wordidx[\"programmer\"]] - vecs[wordidx[\"he\"]] + vecs[wordidx[\"she\"]]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([new_vec])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('programmer', 0.19503421),\n",
       " ('stylist', 0.42715943),\n",
       " ('animator', 0.48206449),\n",
       " ('programmers', 0.48337293),\n",
       " ('choreographer', 0.48626775),\n",
       " ('technician', 0.4862805),\n",
       " ('designer', 0.48710018),\n",
       " ('prodigy', 0.49118328),\n",
       " ('lets', 0.49730021),\n",
       " ('screenwriter', 0.49754214)]"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Masculine version of \"programmer\":"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "new_vec = vecs[wordidx[\"programmer\"]] - vecs[wordidx[\"she\"]] + vecs[wordidx[\"he\"]]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([new_vec])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('programmer', 0.17419636),\n",
       " ('programmers', 0.41335857),\n",
       " ('engineer', 0.46376407),\n",
       " ('compiler', 0.46731704),\n",
       " ('software', 0.4681465),\n",
       " ('animator', 0.48923665),\n",
       " ('computer', 0.50461578),\n",
       " ('mechanic', 0.51500672),\n",
       " ('setup', 0.51882535),\n",
       " ('developer', 0.51953185)]"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([vecs[wordidx[\"doctor\"]]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('doctor', 0.0),\n",
       " ('physician', 0.23267597),\n",
       " ('nurse', 0.24784923),\n",
       " ('dr.', 0.28248072),\n",
       " ('doctors', 0.29191142),\n",
       " ('patient', 0.29258156),\n",
       " ('medical', 0.30040079),\n",
       " ('surgeon', 0.30946612),\n",
       " ('hospital', 0.30990696),\n",
       " ('psychiatrist', 0.3410902)]"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Feminine version of doctor:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "new_vec = vecs[wordidx[\"doctor\"]] - vecs[wordidx[\"he\"]] + vecs[wordidx[\"she\"]]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([new_vec])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('doctor', 0.13456273),\n",
       " ('nurse', 0.22582489),\n",
       " ('mother', 0.27610379),\n",
       " ('woman', 0.29901671),\n",
       " ('pregnant', 0.32096934),\n",
       " ('girl', 0.33241045),\n",
       " ('patient', 0.34357929),\n",
       " ('she', 0.35723114),\n",
       " ('child', 0.36312521),\n",
       " ('herself', 0.363388)]"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Masculine version of doctor:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "new_vec = vecs[wordidx[\"doctor\"]] - vecs[wordidx[\"she\"]] + vecs[wordidx[\"he\"]]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "distances, indices = neigh.kneighbors([new_vec])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('doctor', 0.15277696),\n",
       " ('physician', 0.27226871),\n",
       " ('medical', 0.37674332),\n",
       " ('he', 0.37695646),\n",
       " ('doctors', 0.38290107),\n",
       " ('dr.', 0.38466901),\n",
       " ('surgeon', 0.39124882),\n",
       " ('him', 0.40270936),\n",
       " ('hospital', 0.42226428),\n",
       " ('himself', 0.42476076)]"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(words[int(ind)], dist) for ind, dist in zip(list(indices[0]), list(distances[0]))]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Bias"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Again, just looking at individual words is a **noisy** approach (I'm using it as a simple illustration).  [Researchers from Princeton and the University of Bath](https://www.princeton.edu/~aylinc/papers/caliskan-islam_semantics.pdf) use **small baskets of terms** to represent concepts.  They first confirmed that flowers are more pleasant than insects, and musical instruments are more pleasant than weapons.\n",
    "\n",
     "They then found that European American names are \"more pleasant\" than African American names, as captured by how close the word vectors are (as embedded by GloVe, a word-embedding method from Stanford along the same lines as Word2Vec).\n",
    "\n",
    "    We show for the first time that if AI is to exploit via our language the vast \n",
    "    knowledge that culture has compiled, it will inevitably inherit human-like \n",
    "    prejudices. In other words, if AI learns enough about the properties of language \n",
    "    to be able to understand and produce it, it also acquires cultural associations \n",
    "    that can be offensive, objectionable, or harmful.\n",
    "\n",
     "[Researchers from Boston University and Microsoft Research](https://arxiv.org/pdf/1606.06121.pdf) generated the word pairs most analogous to *he : she*.  They found gender bias in these pairs, and also proposed a way to debias the vectors.\n",
    "\n",
    "Rob Speer, CTO of Luminoso, tested for ethnic bias by finding correlations for a list of positive and negative words:\n",
    "\n",
    "    The tests I implemented for ethnic bias are to take a list of words, such as \n",
    "    “white”, “black”, “Asian”, and “Hispanic”, and find which one has the strongest \n",
    "    correlation with each of a list of positive and negative words, such as “cheap”, \n",
    "    “criminal”, “elegant”, and “genius”. I did this again with a fine-grained version \n",
    "    that lists hundreds of words for ethnicities and nationalities, and thus is more \n",
    "    difficult to get a low score on, and again with what may be the trickiest test of \n",
    "    all, comparing words for different religions and spiritual beliefs."
   ]
  },
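  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch of the baskets-of-terms idea (not the exact statistic from the paper, and using illustrative toy vectors rather than real GloVe rows), you can compare a target word's mean cosine similarity to a \"pleasant\" basket against an \"unpleasant\" basket:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def cosine(u, v):\n",
    "    # cosine similarity between two vectors\n",
    "    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))\n",
    "\n",
    "def basket_score(target, pleasant, unpleasant):\n",
    "    # mean similarity to the pleasant basket minus the unpleasant basket;\n",
    "    # positive means the target leans pleasant\n",
    "    return (np.mean([cosine(target, p) for p in pleasant])\n",
    "            - np.mean([cosine(target, u) for u in unpleasant]))\n",
    "\n",
    "# toy 2-d vectors just to show the mechanics\n",
    "flower = np.array([1.0, 0.1])\n",
    "insect = np.array([0.1, 1.0])\n",
    "pleasant = [np.array([1.0, 0.0]), np.array([0.9, 0.2])]\n",
    "unpleasant = [np.array([0.0, 1.0]), np.array([0.2, 0.9])]\n",
    "\n",
    "basket_score(flower, pleasant, unpleasant), basket_score(insect, pleasant, unpleasant)\n",
    "```\n",
    "\n",
    "With real embeddings you would build each basket from `vecs[wordidx[w]]` for each word `w` in the list."
   ]
  },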
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Ways to address bias**\n",
    "\n",
    "There are a few different approaches:\n",
    "\n",
    "- Debias word embeddings\n",
     "  - [Technique in Bolukbasi et al.](https://arxiv.org/abs/1606.06121)\n",
    "  - [ConceptNet Numberbatch (Rob Speer)](https://blog.conceptnet.io/2017/04/24/conceptnet-numberbatch-17-04-better-less-stereotyped-word-vectors/)\n",
    "- Argument that “awareness is better than blindness”: debiasing should happen at time of action, not at perception. ([Caliskan-Islam, Bryson, Narayanan](https://www.princeton.edu/~aylinc/papers/caliskan-islam_semantics.pdf))\n",
    "\n",
    "Either way, you need to be on the lookout for bias and have a plan to address it!"
   ]
  },
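  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the projection step behind the Bolukbasi et al. debiasing (the full method also chooses which words to debias and equalizes word pairs): remove the component of a word vector that lies along a gender direction such as `he - she`.  The toy vectors below are illustrative, not real GloVe rows:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def remove_component(v, direction):\n",
    "    # subtract the projection of v onto the normalized bias direction\n",
    "    d = direction / np.linalg.norm(direction)\n",
    "    return v - (v @ d) * d\n",
    "\n",
    "he = np.array([1.0, 1.0, 0.0])\n",
    "she = np.array([-1.0, 1.0, 0.0])\n",
    "programmer = np.array([0.5, 0.2, 0.8])\n",
    "\n",
    "gender_direction = he - she\n",
    "debiased = remove_component(programmer, gender_direction)\n",
    "debiased @ gender_direction  # ~0: no gender component left\n",
    "```"
   ]
  },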
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you are interested in the topic of bias in AI, I gave a workshop [you can watch here](https://www.youtube.com/watch?v=25nC0n9ERq4) that covers this material and goes into more depth about bias."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Movie Reviews Sentiment Analysis Demo"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This demo has been adapted (and simplified) from part of Lesson 5 of [Practical Deep Learning for Coders](http://course.fast.ai/index.html)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "heading_collapsed": true
   },
   "source": [
    "## Setup data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "We're going to look at the IMDB dataset, which contains movie reviews from IMDB, along with their sentiment. \n",
    "\n",
    "We will be using [Keras](https://keras.io/), a high-level neural network API. Two of the guiding principles of Keras are **user-friendliness** (it's designed for humans, not machines) and **works with Python**.  Yay for both of these!\n",
    "\n",
     "Keras can run on top of several neural network frameworks, including TensorFlow, Theano, MXNet, and CNTK.  I am using it on top of TensorFlow here.\n",
    "\n",
    "Keras comes with some helpers for the IMDB dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "hidden": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    }
   ],
   "source": [
    "from keras.datasets import imdb\n",
    "from keras.utils.data_utils import get_file\n",
    "idx = imdb.get_word_index()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "import keras.backend as K\n",
    "\n",
     "def limit_mem():\n",
     "    # close the current TF session and start a new one that allocates\n",
     "    # GPU memory as needed, capped at 60% of the device\n",
     "    K.get_session().close()\n",
     "    cfg = K.tf.ConfigProto()\n",
     "    cfg.gpu_options.allow_growth = True\n",
     "    cfg.gpu_options.per_process_gpu_memory_fraction = 0.6\n",
     "    K.set_session(K.tf.Session(config=cfg))\n",
    "    \n",
    "limit_mem()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "This is the word list:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 171,
   "metadata": {
    "hidden": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['the', 'and', 'a', 'of', 'to', 'is', 'br', 'in', 'it', 'i']"
      ]
     },
     "execution_count": 171,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "idx_arr = sorted(idx, key=idx.get)\n",
    "idx_arr[:10]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
     "...and this is the mapping from id to word:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 172,
   "metadata": {
    "collapsed": true,
    "hidden": true,
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "idx2word = {v: k for k, v in idx.items()}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "We download the reviews using code from https://keras.io/datasets/:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 173,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "path = get_file('imdb_full.pkl',\n",
    "                origin='https://s3.amazonaws.com/text-datasets/imdb_full.pkl',\n",
    "                md5_hash='d091312047c43cf9e4e38fef92437263')\n",
     "with open(path, 'rb') as f:\n",
     "    (x_train, labels_train), (x_test, labels_test) = pickle.load(f)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 174,
   "metadata": {
    "hidden": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "25000"
      ]
     },
     "execution_count": 174,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(x_train)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
     "Here's the first review.  As you can see, the words have been replaced by ids, which can be looked up in `idx2word`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 175,
   "metadata": {
    "hidden": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'23022, 309, 6, 3, 1069, 209, 9, 2175, 30, 1, 169, 55, 14, 46, 82, 5869, 41, 393, 110, 138, 14, 5359, 58, 4477, 150, 8, 1, 5032, 5948, 482, 69, 5, 261, 12, 23022, 73935, 2003, 6, 73, 2436, 5, 632, 71, 6, 5359, 1, 25279, 5, 2004, 10471, 1, 5941, 1534, 34, 67, 64, 205, 140, 65, 1232, 63526, 21145, 1, 49265, 4, 1, 223, 901, 29, 3024, 69, 4, 1, 5863, 10, 694, 2, 65, 1534, 51, 10, 216, 1, 387, 8, 60, 3, 1472, 3724, 802, 5, 3521, 177, 1, 393, 10, 1238, 14030, 30, 309, 3, 353, 344, 2989, 143, 130, 5, 7804, 28, 4, 126, 5359, 1472, 2375, 5, 23022, 309, 10, 532, 12, 108, 1470, 4, 58, 556, 101, 12, 23022, 309, 6, 227, 4187, 48, 3, 2237, 12, 9, 215'"
      ]
     },
     "execution_count": 175,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "', '.join(map(str, x_train[0]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "The first word of the first review is 23022. Let's see what that is."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 176,
   "metadata": {
    "hidden": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'bromwell'"
      ]
     },
     "execution_count": 176,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "idx2word[23022]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "Here's the whole review, mapped from ids to words."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 177,
   "metadata": {
    "hidden": true,
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"bromwell high is a cartoon comedy it ran at the same time as some other programs about school life such as teachers my 35 years in the teaching profession lead me to believe that bromwell high's satire is much closer to reality than is teachers the scramble to survive financially the insightful students who can see right through their pathetic teachers' pomp the pettiness of the whole situation all remind me of the schools i knew and their students when i saw the episode in which a student repeatedly tried to burn down the school i immediately recalled at high a classic line inspector i'm here to sack one of your teachers student welcome to bromwell high i expect that many adults of my age think that bromwell high is far fetched what a pity that it isn't\""
      ]
     },
     "execution_count": 177,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "' '.join([idx2word[o] for o in x_train[0]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "The labels are 1 for positive, 0 for negative."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 178,
   "metadata": {
    "hidden": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]"
      ]
     },
     "execution_count": 178,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "labels_train[:10]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
     "Reduce the vocabulary size by mapping all rare words to the maximum index, `vocab_size - 1`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 179,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "vocab_size = 5000\n",
    "\n",
    "trn = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_train]\n",
    "test = [np.array([i if i<vocab_size-1 else vocab_size-1 for i in s]) for s in x_test]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
     "Look at the distribution of sentence lengths, and at the clipped reviews themselves."
   ]
  },
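  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example (a quick sketch, assuming the `trn` list from the cell above):\n",
    "\n",
    "```python\n",
    "lens = np.array([len(s) for s in trn])\n",
    "lens.min(), lens.max(), lens.mean()\n",
    "```"
   ]
  },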
  {
   "cell_type": "code",
   "execution_count": 180,
   "metadata": {
    "hidden": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[array([4999,  309,    6,    3, 1069,  209,    9, 2175,   30,    1,  169,\n",
       "          55,   14,   46,   82, 4999,   41,  393,  110,  138,   14, 4999,\n",
       "          58, 4477,  150,    8,    1, 4999, 4999,  482,   69,    5,  261,\n",
       "          12, 4999, 4999, 2003,    6,   73, 2436,    5,  632,   71,    6,\n",
       "        4999,    1, 4999,    5, 2004, 4999,    1, 4999, 1534,   34,   67,\n",
       "          64,  205,  140,   65, 1232, 4999, 4999,    1, 4999,    4,    1,\n",
       "         223,  901,   29, 3024,   69,    4,    1, 4999,   10,  694,    2,\n",
       "          65, 1534,   51,   10,  216,    1,  387,    8,   60,    3, 1472,\n",
       "        3724,  802,    5, 3521,  177,    1,  393,   10, 1238, 4999,   30,\n",
       "         309,    3,  353,  344, 2989,  143,  130,    5, 4999,   28,    4,\n",
       "         126, 4999, 1472, 2375,    5, 4999,  309,   10,  532,   12,  108,\n",
       "        1470,    4,   58,  556,  101,   12, 4999,  309,    6,  227, 4187,\n",
       "          48,    3, 2237,   12,    9,  215]),\n",
       " array([4999,   39, 4999,   14,  739, 4999, 3428,   44,   74,   32, 1831,\n",
       "          15,  150,   18,  112,    3, 1344,    5,  336,  145,   20,    1,\n",
       "         887,   12,   68,  277, 1189,  403,   34,  119,  282,   36,  167,\n",
       "           5,  393,  154,   39, 2299,   15,    1,  548,   88,   81,  101,\n",
       "           4,    1, 3273,   14,   40,    3,  413, 1200,  134, 4999,   41,\n",
       "         180,  138,   14, 3086,    1,  322,   20, 4930, 4999,  359,    5,\n",
       "        3112, 2128,    1, 4999, 4999,   39, 4999,   45, 3661,   27,  372,\n",
       "           5,  127,   53,   20,    1, 1983,    7,    7,   18,   48,   45,\n",
       "          22,   68,  345,    3, 2131,    5,  409,   20,    1, 1983,   15,\n",
       "           3, 3238,  206,    1, 4999,   22,  277,   66,   36,    3,  341,\n",
       "           1,  719,  729,    3, 3865, 1265,   20,    1, 1510,    3, 1219,\n",
       "           2,  282,   22,  277, 2525,    5,   64,   48,   42,   37,    5,\n",
       "          27, 3273,   12,    6, 4999, 4999, 2034,    7,    7, 3771, 3225,\n",
       "          34, 4186,   34,  378,   14, 4999,  296,    3, 1023,  129,   34,\n",
       "          44,  282,    8,    1,  179,  363, 4999,    5,   94,    3, 2131,\n",
       "          16,    3, 4999, 3005, 4999, 4999,    5,   64,   45,   26,   67,\n",
       "         409,    8,    1, 1983,   15, 3261,  501,  206,    1, 4999,   45,\n",
       "        4999, 2877,   26,   67,   78,   48,   26,  491,   16,    3,  702,\n",
       "        1184,    4,  228,   50, 4505,    1, 4999,   20,  118, 4999,    6,\n",
       "        1373,   20,    1,  887,   16,    3, 4999,   20,   24, 3964,    5,\n",
       "        4999,   24,  172,  844,  118,   26,  188, 1488,  122,    1, 4999,\n",
       "         237,  345,    1, 4999, 4999,   31,    3, 4999,  100,   42,  395,\n",
       "          20,   24, 4999,  118, 4999,  889,   82,  102,  584,    3,  252,\n",
       "          31,    1,  400,    4, 4787, 4999, 1962, 3861,   32, 1230, 3186,\n",
       "          34,  185, 4310,  156, 2325,   38,  341,    2,   38, 4999, 4999,\n",
       "        2231, 4846,    2, 4999, 4999, 2610,   34,   23,  457,  340,    5,\n",
       "           1, 1983,  504, 4355, 4999,  215,  237,   21,  340,    5, 4468,\n",
       "        4999, 4999,   37,   26,  277,  119,   51,  109, 1023,  118,   42,\n",
       "         545,   39, 2814,  513,   39,   27,  553,    7,    7,  134,    1,\n",
       "         116, 2022,  197, 4787,    2, 4999,  283, 1667,    5,  111,   10,\n",
       "         255,  110, 4382,    5,   27,   28,    4, 3771, 4999, 4999,  105,\n",
       "         118, 2597,    5,  109,    3,  209,    9,  284,    3, 4325,  496,\n",
       "        1076,    5,   24, 2761,  154,  138,   14, 4999, 4999,  182, 4999,\n",
       "          39, 4999,   15,    1,  548,    5,  120,   48,   42,   37,  257,\n",
       "         139, 4530,  156, 2325,    9,    1,  372,  248,   39,   20,    1,\n",
       "          82,  505,  228,    3,  376, 2131,   37,   29, 1023,   81,   78,\n",
       "          51,   33,   89,  121,   48,    5,   78,   16,   65,  275,  276,\n",
       "          33,  141,  199,    9,    5,    1, 3273,  302,    4,  769,    9,\n",
       "          37, 4999,  275,    7,    7,   39,  276,   11,   19,   77, 4999,\n",
       "          22,    5,  336,  406]),\n",
       " array([ 527,  117,  113,   31, 4999, 1962, 3861,  115,  902, 4999,  758,\n",
       "          10,   25,  123,  107,    2,  116,  136,    8, 1646, 4999,   23,\n",
       "         330,    5,  597,    1, 4999,   20,  390,    6,    3,  353,   14,\n",
       "          49,   14,  230,    8, 4999, 4999,    1,  190,   20, 4999,    6,\n",
       "          79,  894,  100,  109, 3609,    4,  109,    3, 4999, 3485,   43,\n",
       "          24, 1407,    2,  109, 4999,    1, 2405,    4, 4999, 4999, 4999,\n",
       "        4999,  143,    3, 2405,   26,  557,  286,  160,  712, 4122, 4999,\n",
       "           3,  511,   36,    1,  300, 2793, 4999,  120,    6,  774,  130,\n",
       "          96,   14,    3, 1165, 4999,   34,  491,    5, 4263,    1, 4999,\n",
       "          24,  106,    6,   50, 4999,   71,  641,    1, 1547,  133,    2,\n",
       "           1,  133,  118,    1, 3273, 4999,    3, 4999, 2135,   23,   29,\n",
       "          55, 2236,  165,   15,    1, 2974,  133,    2,    1,  104,  191,\n",
       "        4999,  994,   28, 4999,   11,   17,  211,  125,  254,   55,   10,\n",
       "          64,    9,   60,    6,  176,  397]),\n",
       " array([  11,    6,  711,    1,   88, 2181,   19, 4999,    1, 3225, 4999,\n",
       "         249,   91, 3045,    9,  124,   21,  199,    3,  818,  647,    4,\n",
       "        4999, 1022,  132,   86, 3842, 3558,  517,    3,  818,  647,    4,\n",
       "        4999, 4999,   39, 2743,  517,    3,  818,  647,    4, 4999,   22,\n",
       "        3752,  108,    4,    1,  637,  806, 1032,   18,  128,   11,   19,\n",
       "           6,   52, 3201,    8,    3,   93,  108, 1287,   23,   21,    2,\n",
       "           5, 1595,   12,  122,    8,    3,   62,   41,   46,    4,    1,\n",
       "          88, 4999, 4999, 1063,    4,  923,    6,  368, 1156,   91,   21,\n",
       "           1, 3870,  708,   18,   91,   21,  592,  342,   58,   61, 3303,\n",
       "           6,   12, 3225,  141,   25,  174,  291,  331,    8,    1,  482,\n",
       "          10,  116, 3771,   14,    3,  164,    2,  561,   21,   35,   73,\n",
       "          14,    3,  482]),\n",
       " array([  11,    6,   21,    1,  798, 3771, 3225,   19,    9,   13,   73,\n",
       "         326, 2761,   71,   88,    4,   24,   99,    2,  162,   66,    3,\n",
       "         111,   12,   13, 4999, 2811, 1962, 3861,   90,    1,   17,   56,\n",
       "           6,  138,    3,  774,  464, 1147,  521,   47,   68,   46,  385,\n",
       "          12,   97,   25,   74, 4999,   43,    3,  224,   50,    2,   46,\n",
       "         136,   12,   97,  239,   25,   74,  602,    5,   94,    1,  670,\n",
       "           5,   78,   35,   18,   29,    8,   29,   11,    6,  287,    1,\n",
       "        1863,    5,  848,    2,   64,    9,    1,  113,   13,   49,  441,\n",
       "        3225,  306,  119,    3,   49,  289,  206,   24, 4999, 1383,    5,\n",
       "        2552,    5,    1,  308,  171, 3861,   13,    1,  115,  281,    8,\n",
       "           1,   17,   18, 4999,    2, 4999,  196,  253,   65,  528,   70]),\n",
       " array([  11,  215,    1, 1714, 2160, 1698,  882,    6,    9,    1, 2809,\n",
       "        2137, 2160, 1698,    4, 1133,  705, 2243,   11,    6,    3, 4999,\n",
       "           4,    1,  353,  450,  206,  117, 4999, 1846,   16, 4999,  159,\n",
       "         116,    4,    1,  705,   18,   11,  215,    3,  705, 3110, 4999,\n",
       "          11,    6,   50,    3,  733,  833, 2193,  140,   60, 1698, 1024,\n",
       "           5, 4999,    3, 1192,  427,    2,   24, 4999,    7,    7,   79,\n",
       "        1181, 4602,  446,    2, 4999, 4999,   11,  833,  450,  296,  181,\n",
       "          73,   37,    3, 1633, 4433,  363, 4999,  106,  211,  488,    5,\n",
       "        4999,   24, 3356,    7,    7,   10,  212,  132,   12,   10,   13,\n",
       "         542, 2173,  148,   11,   17,  993,    5, 3333, 3616, 4999,   39,\n",
       "        4999,    9,  418,   50,   37,   10,   13,  146,    3,  229, 1698,\n",
       "          14,   26,   13,  162, 3459,    1, 1726,   36,    3,  837,  412,\n",
       "        1968,    8,   82,  712,    9,  418,  144,    2,   10,   13,  499,\n",
       "           5, 4999,    5,    1,  860,    4,    1,   62,    7,    7,   29,\n",
       "           8,   29,   42,  287,    3,  103,  148,   42,  404,   21, 2550,\n",
       "        2321,  311, 2393,    7,    7,    9, 4999,    3,  690,  690,  155,\n",
       "          36,    7,    7,    1, 4999]),\n",
       " array([ 419,   91,   32,  495,    5, 2973,   94,    3,  547, 1782,  705,\n",
       "           7,    7,    1,   62, 4183,    8,  324, 4999,  134,   22,   89,\n",
       "          57, 1492,    9, 1445,    7,    7,  475,  236,   31, 2160, 1698,\n",
       "           1, 3121, 2442,    8,    1,   19,   67,  303, 1741,    2,   67,\n",
       "         239, 4522,   86,   73,   22,  355,    1,   19,  187,    1, 2023,\n",
       "         111,    6,   52, 1725,    1,   17,  149, 3406, 1643,   22,    2,\n",
       "         128, 4999,   22,  192,    5,  398,   22, 1532,    1,  455,    6,\n",
       "          49,  358,    4, 2687,    5, 2712, 4627, 4999,    4,  833,    2,\n",
       "        4999,    6,   49,    7,    7,   52,  324,  297,   55,  103,   45,\n",
       "          22,   23,  264,    5, 4582,  142,    2,  839,    3, 3014,  343,\n",
       "          62]),\n",
       " array([   8,   11, 4999, 4999, 1984,  705,  445,   20,  280,  684, 4868,\n",
       "        2160, 1698,    3, 4999,  561,    2,  519,  311,  737,  120, 3229,\n",
       "         458, 4999,   31,    1, 4999,   62,    4,    3,  182, 4999,    2,\n",
       "          24, 4999,  449, 4999, 4999,   51, 4999, 1201, 4999,   41,   11,\n",
       "        4999,   62,  187, 4868,  656,  306, 1306,   80,    3, 4999,  733,\n",
       "          12, 4999,    3, 2492, 4999, 1790,    5,  595, 4116, 3932,    7,\n",
       "           7,   22,   63,  141,  567,  883,  131,  792,    2,  103,    1,\n",
       "          19,  147,    7,    7,    1,   86,  119,   26, 1582,   24, 3964,\n",
       "         274,   16, 1560, 4999, 3598,   38,  159,  110,  141,   27, 4999,\n",
       "         122,    2, 1409,    5, 4999,  136, 1538,   42, 4999,    1,  280,\n",
       "         873,    4,   38, 1745,    2, 1749, 4999,  141,   27,  575,   31,\n",
       "           1,   55,  440, 1698, 1744,    5,  159,  779,  866,   38, 4999,\n",
       "          97,   27,    8,  885,   18,    3, 3408,   97,   25,   27,   90,\n",
       "         810,    8,  342,    1, 4999,   39,  371, 2211,  136,    1,   19,\n",
       "          59, 4272,   36,    3,  793,  799,   86,   41,    3, 2027,  602,\n",
       "           7,    7, 1698,    2, 3468, 4999,   14, 4999,   89,  303, 2718,\n",
       "         864,   14,    3,  375,    3,  133,   39,  104, 4999,   65,  646,\n",
       "         235,   25, 1675,  267,    1,  865,  897,    1,  174,    6, 4999,\n",
       "        1698, 1577,   32, 4840,  562, 3585,    2,   21,    3,  989, 4999,\n",
       "        4602,  446,   14, 2244,  911, 4999,   14, 4999,    2, 4999, 4999,\n",
       "        4999, 4999,   23,   29,  401,    7,    7,  115,    4,   29, 4999,\n",
       "        4204, 3243,    8,    1,  945, 2367,    4, 2243, 1560,  446,    6,\n",
       "        2293,    8,  657, 4999,  235,   27,   22,  121,   37,   12,  229,\n",
       "          36, 4999,   47,   25,   74,  447,  150,   51, 4999,  740,  113,\n",
       "        2125,  465,    5, 2098,   15,  369,  685,    5,    3, 4999, 4999,\n",
       "           4,  552,  431,   33,   97,   25, 1922, 4999,   16,   46, 1341,\n",
       "        4999,   56,    6,   12,   49,    2,  164, 2326, 4999,  404, 4999,\n",
       "        2909,   26,   57,  163,  394,    3, 4999,   36,    3, 4999, 1689,\n",
       "        2567,    7,    7,  414,  924, 4999, 4999, 4999,    2, 3970, 2545,\n",
       "        1830, 4999,   36, 2814, 4999, 2584,    7,    7,    1,  311, 4999,\n",
       "         297, 4999, 4999, 2326, 4999, 2160, 1698, 4999, 4999, 4602,  446,\n",
       "        4999, 4999]),\n",
       " array([   1,  311, 4999, 2942,  297,  238, 2160, 1698, 4999, 4999, 3468,\n",
       "        4999, 4999, 4999,  911, 4999, 4602,  446,  305, 4999, 2748, 4999,\n",
       "        4999, 1962, 3368, 4999, 2326, 4999,    7,    7, 4999, 4999,  405,\n",
       "        1698,    3,  756,   43,  361, 1314,  236,    7,    7,   48,    6,\n",
       "           9,   41, 4999,    2,  448,   48,    6,    1,  748, 4578,   28,\n",
       "        4999,   16,    1,   82,    2,  135,    6,    9,  217,    1, 4999,\n",
       "           7,    7,    8,    1, 2481, 4999,  334, 2675,  445,   20,  280,\n",
       "         684,   54,  326, 1698,  378,   14,    3,  737, 1875, 1610,  770,\n",
       "        4868,   54,   28,   34, 4532,  534,  237, 4999,  117,    1, 4999,\n",
       "           2,   44, 4999,   32,  218,  334,    8,    1,  809,    4,    3,\n",
       "         182,  427,  770, 4999, 4999, 4999,   34,   44, 4999,    3, 4999,\n",
       "          41,    1, 4999,    4,   24, 3203, 1937,    5,   54, 2063, 3791,\n",
       "        4999, 4999,   34,  405,    9,    5,   54,   28,    5,  329,   15,\n",
       "         306,    7,    7,   54,   28,    6, 1955, 4011,   18, 1113, 3701,\n",
       "          41,    1, 4999, 2008,    4, 4999,  109, 4999,    2, 3372, 4999,\n",
       "          15,  150,  363,   26,   13,  414, 4999,   31,    3, 3730,  770,\n",
       "        4204, 4999,  740,   32,  318,  236,   34,   44, 4999,    1,  427,\n",
       "          18,   38, 4999,   16,   54,   28, 2669,   12, 4999,    6, 1718,\n",
       "          36, 4419, 1955,   54,   28,  491,    5,  906,    1,  448,   18,\n",
       "           6, 1084,    8,  821,    5,   65,  866, 4999, 4999, 4201,   51,\n",
       "           1, 4446,    6, 4999,   31,   24, 4999, 1458, 4999, 4999,  622,\n",
       "        2114, 4999,   36,   65,  159,  779,  540, 1600,   44,   54,   28,\n",
       "           8,   32,  918, 4999,   12,   44,   61,  147, 2068,   80,    3,\n",
       "        4999,    8,    3, 4999,   51,   26, 1065,    5,   78,   46, 4999,\n",
       "          80, 4204,    2, 4999, 4257, 4999,   46, 4999,   12,   26,  158,\n",
       "        4999,    7,    7,  395,   31, 4999, 4999,   34,  998, 1037,    1,\n",
       "         878,   16,   24, 1135, 1458, 3970, 2545,    2,    1,  595, 4999,\n",
       "         164, 4999,    2,  445,   20,    3,  280,   62,   41,    3, 4999,\n",
       "        4999,  255,   43,   44,   46, 4999,  385,   12,  518,   20,  365,\n",
       "        4999,   37,   98,   49,  151, 3354, 4357, 4999,  124,    9, 1522,\n",
       "          12, 1698,  405,    3,  756,   43,  361, 1314,  236,   14,    1,\n",
       "        4999,   49, 2284, 1610,   34, 2067,  491,    5,  261,   12,   24,\n",
       "         609,   28,  334,    6,    8,  189,  144,    2,  124,  116,   87,\n",
       "           1,   28,  152,   12,   44, 3951,   24,  202,  632,    2,   44,\n",
       "          46, 4330, 2147,  385,   16,    1,  945, 4999,  622,   28, 1745,\n",
       "        4999,   10,   77,  560, 4999,   18, 4999,    1, 4179,    4,   38,\n",
       "         106,   12,   67, 4999,   22,    5,    1, 2023,    7,    7,  187,\n",
       "           1,   19, 1126,   43,    4, 2548,    2,  850,  458,    3,  224,\n",
       "        3608,    2,  724,  463,    3, 4999,  523,  415,    4, 4999,    2,\n",
       "         733,   31, 4999,    9, 4101,    5, 1629,    5,  126,  202, 2416,\n",
       "         541,   27, 4687,    4,   48,   22,  437,   15]),\n",
       " array([  22,  121, 2160, 1698,  555, 4999,   87,    6, 1340, 1202,  306,\n",
       "           8,    1, 2017, 4438,   16,   29,  131,  992, 1287,   26,   44,\n",
       "         221,   11, 2065,   16,  379,    1, 1398,    4,  338,    5, 4999,\n",
       "          60, 4999,   51,    9,  382,   43,   18,    6,  147,    3, 1212,\n",
       "         353,    1, 3284,   26,   44,   90, 4438,   25,   74,  774,  259,\n",
       "        4999,    2,   28,  531, 4880,    1,  311, 4999,  463, 1498,  854,\n",
       "           2,    3, 1602,  285,  763,    6,  790,   24,  115,  154,  807,\n",
       "           7,    7,   11,    6,    3,   52, 2857,   62,   57,  148,    9,\n",
       "         149, 1467,    3, 1432,  452,   39,  256,   12, 3102, 1821,   15,\n",
       "          12,  548,    1, 1117,    4,    1,   19,    6,  445,   20,   32,\n",
       "         776,  417,    4, 4999,   12,  128,   44,  243,    5,   27, 4999,\n",
       "        4999,    8,  309,  393,   10,  329,   32, 4999,   31,    3,  503,\n",
       "         770, 2044, 4999, 2817,   34, 3114, 2990, 2566,    2,  850, 4999,\n",
       "        4419,   14,    3,  956,   10,   13, 1678,   31,    1,   62,  363,\n",
       "          10,  329, 4999, 4689,   12, 2817,  200,   21,  162, 1775,   51,\n",
       "          10,  216,   11,   17,    1, 1468, 1414,   12, 2160, 1698,   35,\n",
       "        2102,  997, 4999,    8,   58,  327,    7,    7, 4999, 4999,  239,\n",
       "         405,   38,  115,  902,  236,   96,   14,    1, 1113, 4999, 4999,\n",
       "          38,  214,   13,    3,  227, 1412,   36,  145,   56,   66,    8,\n",
       "          99,   37,  114,  714, 3903,   47,   68,   57,  208,   56,  607,\n",
       "          80,    1,  367,  118,   10,  194,   56,   13, 4485,  205,   30,\n",
       "          69,    9,  301,    3,   49,  521,    5,  294,   12,  429,    4,\n",
       "         214,    2,   42,   11, 4635,  243,   70, 4999,  214,   12,  163,\n",
       "        4999, 4999,  239,   28,    4,    1,  115, 1504,    4,   11, 2245,\n",
       "          21,    5,   25,   57,   74, 2302,   15,   32, 1806, 1341,   14,\n",
       "           4, 4999,   42, 1045,   12,   47,    6,   30,  219,   28,  252,\n",
       "           8,   11,  179,   34,    6,   37,   11,    2,   42,  626,   96,\n",
       "           7,    7,   11,    6,    3,   49,  462,   19,   12,   10,  542,\n",
       "         383,   27, 2845,    5,   27, 4999,  148,   85,   11,   17,  886,\n",
       "          22,   16,    3,  677,  544,   30,    1,  127])]"
      ]
     },
     "execution_count": 180,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trn[:10]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 181,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "lens = np.array([len(review) for review in trn])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 182,
   "metadata": {
    "hidden": true,
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(2493, 10, 237.71364)"
      ]
     },
     "execution_count": 182,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "(lens.max(), lens.min(), lens.mean())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "Pad (with zero) or truncate each sentence to make consistent length."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 183,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "from keras.preprocessing import sequence"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 184,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "seq_len = 500\n",
    "\n",
    "trn = sequence.pad_sequences(trn, maxlen=seq_len, value=0)\n",
    "test = sequence.pad_sequences(test, maxlen=seq_len, value=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "This results in nice rectangular matrices that can be passed to ML algorithms. Reviews shorter than 500 words are pre-padded with zeros, those greater are truncated."
   ]
  },
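  {
   "cell_type": "markdown",
   "metadata": {
    "hidden": true
   },
   "source": [
    "To make the padding behavior concrete, here is a minimal pure-Python sketch of what `pad_sequences` does with its default settings (pre-padding and pre-truncation). `pad_or_truncate` is a hypothetical helper written for illustration, not part of Keras:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true,
    "hidden": true
   },
   "outputs": [],
   "source": [
    "def pad_or_truncate(seq, maxlen, value=0):\n",
    "    # Keep only the last maxlen elements (pre-truncation)...\n",
    "    seq = list(seq)[-maxlen:]\n",
    "    # ...then left-pad with `value` up to maxlen (pre-padding)\n",
    "    return [value] * (maxlen - len(seq)) + seq\n",
    "\n",
    "pad_or_truncate([7, 8, 9], 5), pad_or_truncate([1, 2, 3, 4, 5, 6], 5)"
   ]
  },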
  {
   "cell_type": "code",
   "execution_count": 185,
   "metadata": {
    "hidden": true,
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(25000, 500)"
      ]
     },
     "execution_count": 185,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trn.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create a model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Single conv layer with max pooling"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*Convolutional neural networks* (abbreviated CNNs) are a powerful type of neural networks that do well with ordered data.  They have traditionally been used primarily for image data, but more recently are showing great results on natural language data.  [Facebook AI recently announced results](https://code.facebook.com/posts/1978007565818999/a-novel-approach-to-neural-machine-translation/) of using a CNN to speed up language translation 9x faster than the current state-of-the-art. \n",
    "\n",
    "We'll use a 1D CNN, since a sequence of words is 1D."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For this workshop, we will treat the CNN as a black box.  If you want to learn more about what is going on inside it, check out [Practical Deep Learning for Coders](http://course.fast.ai/) (the only pre-req is 1 year of coding experience)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 520,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from keras.models import Sequential\n",
    "from keras.layers.embeddings import Embedding\n",
    "from keras.layers.core import Flatten, Dense, Dropout\n",
    "from keras.layers.convolutional import Convolution1D, MaxPooling1D\n",
    "from keras.optimizers import Adam"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 521,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "conv1 = Sequential([\n",
    "    Embedding(vocab_size, 50, input_length=seq_len, dropout=0.4),\n",
    "    Convolution1D(64, 5, border_mode='same', activation='relu'),\n",
    "    Dropout(0.2),\n",
    "    MaxPooling1D(),\n",
    "    Flatten(),\n",
    "    Dense(100, activation='relu'),\n",
    "    Dropout(0.5),\n",
    "    Dense(1, activation='sigmoid')])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 522,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "conv1.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 523,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/jhoward/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\n",
      "  \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. \"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 25000 samples, validate on 25000 samples\n",
      "Epoch 1/4\n",
      "25000/25000 [==============================] - 7s - loss: 0.5144 - acc: 0.7248 - val_loss: 0.4012 - val_acc: 0.8090\n",
      "Epoch 2/4\n",
      "25000/25000 [==============================] - 6s - loss: 0.3460 - acc: 0.8540 - val_loss: 0.2825 - val_acc: 0.8896\n",
      "Epoch 3/4\n",
      "25000/25000 [==============================] - 6s - loss: 0.3123 - acc: 0.8710 - val_loss: 0.2926 - val_acc: 0.8820\n",
      "Epoch 4/4\n",
      "25000/25000 [==============================] - 6s - loss: 0.2947 - acc: 0.8747 - val_loss: 0.2852 - val_acc: 0.8869\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x7f0ac42b21d0>"
      ]
     },
     "execution_count": 523,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conv1.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In deep learning, often as you get closer to the answer, you need to reduce your *learning rate*, which is the step size for how the algorithm changes it's guess each time.  When you are far from the answer, you want to take large steps to get to the right vicinity.  Once you are close to the answer, you want to take small steps so you don't overshoot the answer."
   ]
  },
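  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a toy illustration (not part of the demo model), here is gradient descent on f(x) = x**2: a moderate step size converges toward the minimum at 0, while a too-large step size overshoots and diverges.  `descend` is a hypothetical helper written for this sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def descend(lr, steps, x=10.0):\n",
    "    # Gradient descent on f(x) = x**2, whose gradient is 2x\n",
    "    for _ in range(steps):\n",
    "        x -= lr * 2 * x\n",
    "    return x\n",
    "\n",
    "# Moderate steps home in on the minimum; oversized steps bounce further away each time\n",
    "descend(0.4, 10), descend(1.1, 10)"
   ]
  },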
  {
   "cell_type": "code",
   "execution_count": 524,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "conv1.optimizer.lr=1e-4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 525,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 25000 samples, validate on 25000 samples\n",
      "Epoch 1/4\n",
      "25000/25000 [==============================] - 6s - loss: 0.2739 - acc: 0.8856 - val_loss: 0.3164 - val_acc: 0.8617\n",
      "Epoch 2/4\n",
      "25000/25000 [==============================] - 6s - loss: 0.2711 - acc: 0.8893 - val_loss: 0.2728 - val_acc: 0.8876\n",
      "Epoch 3/4\n",
      "25000/25000 [==============================] - 6s - loss: 0.2612 - acc: 0.8920 - val_loss: 0.3220 - val_acc: 0.8589\n",
      "Epoch 4/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.2448 - acc: 0.8981 - val_loss: 0.3201 - val_acc: 0.8605\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x7f0ac42b2128>"
      ]
     },
     "execution_count": 525,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conv1.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The [Stanford paper](http://ai.stanford.edu/~amaas/papers/wvSent_acl2011.pdf)(2011) that this dataset is from cites a state of the art accuracy (without unlabelled data) of 88.3%.  We have surpassed that!\n",
    "\n",
    "Note that accuracy of 88.9% means an error rate of 11.1% (it's often more helpful to talk about error rates)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using our GloVe word embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We could improve our model by using the GloVe word embeddings from above, since this capture semantic meaning, and have been trained for much longer on a much larger dataset than what we are using here."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We are going to use a version of GloVe where the embeddings have just 50 dimensions (as opposed to 100).  It's the same idea as before."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The glove word ids and imdb word ids use different indexes. So we create a simple function that creates an embedding matrix using the indexes from imdb, and the embeddings from glove (where they exist)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 526,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def create_emb():\n",
    "    n_fact = vecs50.shape[1]\n",
    "    emb = np.zeros((vocab_size, n_fact))\n",
    "\n",
    "    for i in range(1,len(emb)):\n",
    "        word = idx2word[i]\n",
    "        if word and re.match(r\"^[a-zA-Z0-9\\-]*$\", word):\n",
    "            src_idx = wordidx[word]\n",
    "            emb[i] = vecs50[src_idx]\n",
    "        else:\n",
    "            # If we can't find the word in glove, randomly initialize\n",
    "            emb[i] = np.random.normal(scale=0.6, size=(n_fact,))\n",
    "\n",
    "    # This is our \"rare word\" id - we want to randomly initialize\n",
    "    emb[-1] = np.random.normal(scale=0.6, size=(n_fact,))\n",
    "    emb/=3\n",
    "    return emb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 189,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "emb = create_emb()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We pass our embedding matrix to the Embedding constructor, and set it to non-trainable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 190,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "model = Sequential([\n",
    "    Embedding(vocab_size, 50, input_length=seq_len, dropout=0.4, weights=[emb]),\n",
    "    Convolution1D(64, 5, border_mode='same', activation='relu'),\n",
    "    Dropout(0.2),\n",
    "    MaxPooling1D(),\n",
    "    Flatten(),\n",
    "    Dense(100, activation='relu'),\n",
    "    Dropout(0.5),\n",
    "    Dense(1, activation='sigmoid')])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 191,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 192,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/jhoward/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\n",
      "  \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. \"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 25000 samples, validate on 25000 samples\n",
      "Epoch 1/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.5020 - acc: 0.7401 - val_loss: 0.3170 - val_acc: 0.8737\n",
      "Epoch 2/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.3404 - acc: 0.8570 - val_loss: 0.2897 - val_acc: 0.8906\n",
      "Epoch 3/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.3120 - acc: 0.8688 - val_loss: 0.2703 - val_acc: 0.8975\n",
      "Epoch 4/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.2933 - acc: 0.8776 - val_loss: 0.2646 - val_acc: 0.8959\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x7fce8dae3da0>"
      ]
     },
     "execution_count": 192,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We decrease the learning rate now that we are getting closer to the answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 193,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "model.optimizer.lr=1e-4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 194,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 25000 samples, validate on 25000 samples\n",
      "Epoch 1/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.2832 - acc: 0.8812 - val_loss: 0.2674 - val_acc: 0.8976\n",
      "Epoch 2/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.2715 - acc: 0.8866 - val_loss: 0.2696 - val_acc: 0.8949\n",
      "Epoch 3/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.2593 - acc: 0.8921 - val_loss: 0.2652 - val_acc: 0.8975\n",
      "Epoch 4/4\n",
      "25000/25000 [==============================] - 5s - loss: 0.2480 - acc: 0.8986 - val_loss: 0.2590 - val_acc: 0.8962\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x7fce8da40b38>"
      ]
     },
     "execution_count": 194,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our error rate has improved from 11.1% to 10.3%, a 7% improvement \n",
    "\n",
    "(this value was fluctuating, but I typically got that it was between 4-10%)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 195,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.07207207207207197"
      ]
     },
     "execution_count": 195,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "(11.1 - 10.3)/11.1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
