{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# NLP concepts with spaCy\n",
    "\n",
    "By [Allison Parrish](http://www.decontextualize.com/)\n",
    "\n",
    "“Natural Language Processing” is a field at the intersection of computer science, linguistics and artificial intelligence which aims to make the underlying structure of language available to computer programs for analysis and manipulation. It’s a vast and vibrant field with a long history! New research and techniques are being developed constantly.\n",
    "\n",
    "The aim of this notebook is to introduce a few simple concepts and techniques from NLP—just the stuff that’ll help you do creative things quickly, and maybe open the door for you to understand more sophisticated NLP concepts that you might encounter elsewhere.\n",
    "\n",
    "We'll be using a library called [spaCy](https://spacy.io/), which is a good compromise between being very powerful and state-of-the-art and easy for newcomers to understand.\n",
    "\n",
    "(Traditionally, most NLP work in Python was done with a library called [NLTK](http://www.nltk.org/). NLTK is a fantastic library, but it’s also a writhing behemoth: large and slippery and difficult to understand. Also, much of the code in NLTK is decades out of date with contemporary practices in NLP.)\n",
    "\n",
    "This tutorial is written for Python 3.5+. [Here's a Python 2.7 version of the tutorial](https://gist.github.com/aparrish/f21f6abbf2367e8eb23438558207e1c3).\n",
    "\n",
    "## Natural language\n",
    "\n",
    "“Natural language” is a loaded phrase: what makes one stretch of language “natural” while another stretch is not? NLP techniques are opinionated about what language is and how it works; as a consequence, you’ll sometimes find yourself having to conceptualize your text with uncomfortable abstractions in order to make it work with NLP. (This is especially true of poetry, which almost by definition breaks most “conventional” definitions of how language behaves and how it’s structured.)\n",
    "\n",
    "Of course, a computer can never really fully “understand” human language. Even when the text you’re using fits the abstractions of NLP perfectly, the results of NLP analysis are always going to be at least a little bit inaccurate. But often even inaccurate results can be “good enough”—and in any case, inaccurate output from NLP procedures can be an excellent source of the sublime and absurd juxtapositions that we (as poets) are constantly in search of.\n",
    "\n",
    "## English only (sorta)\n",
    "\n",
    "The main assumption that most NLP libraries and techniques make is that the text you want to process will be in English. Historically, most NLP research has been on English specifically; it’s only more recently that serious work has gone into applying these techniques to other languages. The examples in this chapter are all based on English texts, and the tools we’ll use are geared toward English. If you’re interested in working on NLP in other languages, here are a few starting points:\n",
    "\n",
    "* [spaCy has models for various languages](https://spacy.io/models/#available-models), including German, Spanish, Portuguese, French, Italian and Dutch. Note that not all of these models support all of the capabilities of spaCy that we'll talk about in this tutorial. Also note that not all languages have the same ideas about what constitutes a \"part of speech\"!\n",
    "* [Konlpy](https://github.com/konlpy/konlpy), natural language processing in\n",
    "  Python for Korean\n",
    "* [Jieba](https://github.com/fxsjy/jieba), text segmentation and POS tagging in\n",
    "  Python for Chinese\n",
    "* Facebook's [fasttext project](https://fasttext.cc/docs/en/pretrained-vectors.html) makes available word vectors for a large number of languages (~300).\n",
    "\n",
    "## English grammar: a crash course\n",
    "\n",
    "The only thing I believe about English grammar is [this](http://www.writing.upenn.edu/~afilreis/88v/creeley-on-sentence.html):\n",
    "\n",
    "> \"Oh yes, the sentence,\" Creeley once told the critic Burton Hatlen, \"that's\n",
    "> what we call it when we put someone in jail.\"\n",
    "\n",
    "There is no such thing as a sentence, or a phrase, or a part of speech, or even\n",
    "a \"word\"---these are all pareidolic fantasies occasioned by glints of sunlight\n",
    "we see on reflected on the surface of the ocean of language; fantasies that we\n",
    "comfort ourselves with when faced with language's infinite and unknowable\n",
    "variability.\n",
    "\n",
    "Regardless, we may find it occasionally helpful to think about language using\n",
    "these abstractions. The following is a gross oversimplification of both how\n",
    "English grammar works, and how theories of English grammar work in the context\n",
    "of NLP. But it should be enough to get us going!\n",
    "\n",
    "### Sentences and parts of speech\n",
    "\n",
    "English texts can roughly be divided into \"sentences.\" Sentences are themselves\n",
    "composed of individual words, each of which has a function in expressing the\n",
    "meaning of the sentence. The function of a word in a sentence is called its\n",
    "\"part of speech\"—i.e., a word functions as a noun, a verb, an adjective, etc.\n",
    "Here's a sentence, with words marked for their part of speech:\n",
    "\n",
    "    I       really love entrees       from        the        new       cafeteria.\n",
    "    pronoun adverb verb noun (plural) preposition determiner adjective noun\n",
    "\n",
    "Of course, the \"part of speech\" of a word isn't a property of the word itself.\n",
    "We know this because a single \"word\" can function as two different parts of speech:\n",
    "\n",
    "> I love cheese.\n",
    "\n",
    "The word \"love\" here is a verb. But here:\n",
    "\n",
    "> Love is a battlefield.\n",
    "\n",
    "... it's a noun. For this reason (and others), it's difficult for computers to\n",
    "accurately determine the part of speech for a word in a sentence. (It's\n",
    "difficult sometimes even for humans to do this.) But NLP procedures do their\n",
    "best!\n",
    "\n",
    "### Phrases and larger syntactic structures\n",
    "\n",
    "There are several different ways for talking about larger syntactic structures in sentences. The scheme used by spaCy is called a \"dependency grammar.\" We'll talk about the details of this below.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Installing spaCy\n",
    "\n",
    "[Follow the instructions here](https://spacy.io/docs/usage/). To install on Anaconda, you'll need to open a Terminal window (or the equivalent on your operating system) and type\n",
    "\n",
    "    conda install -c conda-forge spacy\n",
    "    \n",
    "This line installs the library. You'll also need to download a language model. For that, type:\n",
    "\n",
    "    python -m spacy download en_core_web_md\n",
    "    \n",
    "(Replace `en` with the language code for your desired language, if there's a model available for it.) The language model contains the statistical information necessary to parse text into sentences and sentences into parts of speech. Note that this download is several hundred megabytes, so it might take a while!\n",
    "\n",
    "If you're not using Anaconda, you can also install with `pip`. When using `pip`, make sure to upgrade to the newest version first, with `pip install --upgrade pip`. (This will ensure that at least *some* of the dependencies are installed as pre-built binaries)\n",
    "\n",
    "    pip install spacy\n",
    "    \n",
    "(If you're not using a virtual environment, try `sudo pip install spacy`.)\n",
    "\n",
    "Currently, spaCy is distributed in source form only, so the installation process involves a bit of compiling. On macOS, you'll need to install [XCode](https://developer.apple.com/xcode/) in order to perform the compilation steps. [Here's a good tutorial for macOS Sierra](http://railsapps.github.io/xcode-command-line-tools.html), though the steps should be similar on other versions.\n",
    "\n",
    "After you've installed spaCy, you'll need to download the data. Run the following on the command line:\n",
    "\n",
    "    !python -m spacy download en_core_web_md"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Basic usage\n",
    "\n",
    "Import `spacy` like any other Python module:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import spacy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a new spaCy object using `spacy.load('en_core_web_md')`. (The name in the parentheses is the same as the name of the model you downloaded above. If you downloaded a different model, you can put its name here instead. You can also just write `'en'` and spaCy will load the best model it has for that language.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "nlp = spacy.load('en_core_web_md')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And then create a `Document` object by calling the spaCy object with the text you want to work with. Below I've included a few sentences from the Universal Declaration of Human Rights:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "doc = nlp(\"All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. Everyone has the right to life, liberty and security of person.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Sentences\n",
    "\n",
    "If you learn nothing else about spaCy (or NLP), then learn at least that it's a good way to get a list of sentences in a text. Once you've created a document object, you can iterate over the sentences it contains using the `.sents` attribute:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "All human beings are born free and equal in dignity and rights.\n",
      "They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.\n",
      "Everyone has the right to life, liberty and security of person.\n"
     ]
    }
   ],
   "source": [
    "for item in doc.sents:\n",
    "    print(item.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `.sents` attribute is a [generator](https://wiki.python.org/moin/Generators), not a list, so while you can use it in a `for` loop or list comprehension, you can't index (or count) it directly. To do this, you'll need to convert it to a list first using the `list()` function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "sentences_as_list = list(doc.sents)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3"
      ]
     },
     "execution_count": 59,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(sentences_as_list)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then you can get a random item from the list:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Everyone has the right to life, liberty and security of person."
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import random\n",
    "random.choice(sentences_as_list)"
   ]
  },
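   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a quick creative application, you can shuffle the sentences to \"remix\" the text. (This is just a sketch using the `sentences_as_list` variable from above; `random.sample` returns a new shuffled list without modifying the original.)"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import random\n",
     "# pick every sentence exactly once, in random order\n",
     "shuffled = random.sample(sentences_as_list, len(sentences_as_list))\n",
     "print(\" \".join(sent.text for sent in shuffled))"
    ]
   },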
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Words\n",
    "\n",
    "Iterating over a document yields each word in the document in turn. Words are represented with spaCy [Token](https://spacy.io/docs/api/token) objects, which have several interesting attributes. The `.text` attribute gives the underlying text of the word, and the `.lemma_` attribute gives the word's \"lemma\" (explained below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "All all\n",
      "human human\n",
      "beings being\n",
      "are be\n",
      "born bear\n",
      "free free\n",
      "and and\n",
      "equal equal\n",
      "in in\n",
      "dignity dignity\n",
      "and and\n",
      "rights right\n",
      ". .\n",
      "They -PRON-\n",
      "are be\n",
      "endowed endow\n",
      "with with\n",
      "reason reason\n",
      "and and\n",
      "conscience conscience\n",
      "and and\n",
      "should should\n",
      "act act\n",
      "towards towards\n",
      "one one\n",
      "another another\n",
      "in in\n",
      "a a\n",
      "spirit spirit\n",
      "of of\n",
      "brotherhood brotherhood\n",
      ". .\n",
      "Everyone everyone\n",
      "has have\n",
      "the the\n",
      "right right\n",
      "to to\n",
      "life life\n",
      ", ,\n",
      "liberty liberty\n",
      "and and\n",
      "security security\n",
      "of of\n",
      "person person\n",
      ". .\n"
     ]
    }
   ],
   "source": [
    "for word in doc:\n",
    "    print(word.text, word.lemma_)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A word's \"lemma\" is its most \"basic\" form, the form without any morphology\n",
    "applied to it. \"Sing,\" \"sang,\" \"singing,\" are all different \"forms\" of the\n",
    "lemma *sing*. Likewise, \"octopi\" is the plural of \"octopus\"; the \"lemma\" of\n",
    "\"octopi\" is *octopus*.\n",
    "\n",
    "\"Lemmatizing\" a text is the process of going through the text and replacing\n",
    "each word with its lemma. This is often done in an attempt to reduce a text\n",
    "to its most \"essential\" meaning, by eliminating pesky things like verb tense\n",
    "and noun number.\n",
    "\n",
    "Individual sentences can also be iterated over to get a list of words:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "They\n",
      "are\n",
      "endowed\n",
      "with\n",
      "reason\n",
      "and\n",
      "conscience\n",
      "and\n",
      "should\n",
      "act\n",
      "towards\n",
      "one\n",
      "another\n",
      "in\n",
      "a\n",
      "spirit\n",
      "of\n",
      "brotherhood\n",
      ".\n"
     ]
    }
   ],
   "source": [
    "sentence = list(doc.sents)[1]\n",
    "for word in sentence:\n",
    "    print(word.text)"
   ]
  },
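   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "To see lemmatization in action on the whole text, here's a quick sketch that rebuilds the text with every word replaced by its lemma. (It uses the `doc` object from above, along with each token's `.whitespace_` attribute, which holds whatever whitespace followed the token in the original text. As in the output above, pronouns come out as `-PRON-`.)"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# join each word's lemma with its original trailing whitespace\n",
     "lemmatized = ''.join(word.lemma_ + word.whitespace_ for word in doc)\n",
     "print(lemmatized)"
    ]
   },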
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Parts of speech\n",
    "\n",
    "The `pos_` attribute gives a general part of speech; the `tag_` attribute gives a more specific designation. [List of meanings here.](https://spacy.io/docs/api/annotation)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "All DET DT\n",
      "human ADJ JJ\n",
      "beings NOUN NNS\n",
      "are VERB VBP\n",
      "born VERB VBN\n",
      "free ADJ JJ\n",
      "and CCONJ CC\n",
      "equal ADJ JJ\n",
      "in ADP IN\n",
      "dignity NOUN NN\n",
      "and CCONJ CC\n",
      "rights NOUN NNS\n",
      ". PUNCT .\n",
      "They PRON PRP\n",
      "are VERB VBP\n",
      "endowed VERB VBN\n",
      "with ADP IN\n",
      "reason NOUN NN\n",
      "and CCONJ CC\n",
      "conscience NOUN NN\n",
      "and CCONJ CC\n",
      "should VERB MD\n",
      "act VERB VB\n",
      "towards ADP IN\n",
      "one NUM CD\n",
      "another DET DT\n",
      "in ADP IN\n",
      "a DET DT\n",
      "spirit NOUN NN\n",
      "of ADP IN\n",
      "brotherhood NOUN NN\n",
      ". PUNCT .\n",
      "Everyone NOUN NN\n",
      "has VERB VBZ\n",
      "the DET DT\n",
      "right NOUN NN\n",
      "to ADP IN\n",
      "life NOUN NN\n",
      ", PUNCT ,\n",
      "liberty NOUN NN\n",
      "and CCONJ CC\n",
      "security NOUN NN\n",
      "of ADP IN\n",
      "person NOUN NN\n",
      ". PUNCT .\n"
     ]
    }
   ],
   "source": [
    "for item in doc:\n",
    "    print(item.text, item.pos_, item.tag_)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Extracting words by part of speech\n",
    "\n",
    "With knowledge of which part of speech each word belongs to, we can make simple code to extract and recombine words by their part of speech. The following code creates a list of all nouns and adjectives in the text:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "nouns = [item.text for item in doc if item.pos_ == 'NOUN']\n",
    "adjectives = [item.text for item in doc if item.pos_ == 'ADJ']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And below, some code to print out random pairings of an adjective from the text with a noun from the text:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "equal rights\n",
      "free dignity\n",
      "equal dignity\n",
      "equal spirit\n",
      "free right\n",
      "human spirit\n",
      "free spirit\n",
      "equal liberty\n",
      "human right\n",
      "free Everyone\n"
     ]
    }
   ],
   "source": [
    "for i in range(10):\n",
    "    print(random.choice(adjectives) + \" \" + random.choice(nouns))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Making a list of verbs works similarly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "verbs = [item.text for item in doc if item.pos_ == 'VERB']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Although in this case, you'll notice the list of verbs is a bit unintuitive. We're getting words like \"should\" and \"are\" and \"has\"—helper verbs that maybe don't fit our idea of what verbs we want to extract."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['are', 'born', 'are', 'endowed', 'should', 'act', 'has']"
      ]
     },
     "execution_count": 67,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "verbs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is because we used the `.pos_` attribute, which only gives us general information about the part of speech. The `.tag_` attribute allows us to be more specific about the kinds of verbs we want. For example, this code gives us only the verbs in past participle form:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "only_past = [item.text for item in doc if item.tag_ == 'VBN']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['born', 'endowed']"
      ]
     },
     "execution_count": 69,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "only_past"
   ]
  },
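   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "If you're curious about the overall distribution of parts of speech in a text, here's a quick sketch using Python's `collections.Counter` on the `doc` object from above:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "from collections import Counter\n",
     "# tally the coarse-grained part-of-speech labels\n",
     "pos_counts = Counter(item.pos_ for item in doc)\n",
     "pos_counts.most_common()"
    ]
   },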
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Larger syntactic units\n",
    "\n",
    "Okay, so we can get individual words by their part of speech. Great! But what if we want larger chunks, based on their syntactic role in the sentence? The easy way is `.noun_chunks`, which is an attribute of a document or a sentence that evaluates to a list of [spans](https://spacy.io/docs/api/span) of noun phrases, regardless of their position in the document:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "All human beings, dignity, rights, They, reason, conscience, a spirit, brotherhood, Everyone, the right, life, liberty, security, person\n"
     ]
    }
   ],
   "source": [
    "noun_chunks = [item.text for item in doc.noun_chunks]\n",
    "print(\", \".join(noun_chunks))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For anything more sophisticated than this, though, we'll need to learn about how spaCy parses sentences into its syntactic components."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Understanding dependency grammars"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![displacy parse](http://static.decontextualize.com/syntax_example.png)\n",
    "\n",
    "[See in \"displacy\", spaCy's syntax visualization tool.](https://demos.explosion.ai/displacy/?text=Everyone%20has%20the%20right%20to%20life%2C%20liberty%20and%20security%20of%20person&model=en&cpu=1&cph=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The spaCy library parses the underlying sentences using a [dependency grammar](https://en.wikipedia.org/wiki/Dependency_grammar). Dependency grammars look different from the kinds of sentence diagramming you may have done in high school, and even from tree-based [phrase structure grammars](https://en.wikipedia.org/wiki/Phrase_structure_grammar) commonly used in descriptive linguistics. The idea of a dependency grammar is that every word in a sentence is a \"dependent\" of some other word, which is that word's \"head.\" Those \"head\" words are in turn dependents of other words. The finite verb in the sentence is the ultimate \"head\" of the sentence, and is not itself dependent on any other word. (The dependents of a particular head are sometimes called its \"children.\")\n",
    "\n",
    "The question of how to know what constitutes a \"head\" and a \"dependent\" is complicated. As a starting point, here's a passage from [Dependency Grammar and Dependency Parsing](http://stp.lingfil.uu.se/~nivre/docs/05133.pdf):\n",
    "\n",
    "> Here are some of the criteria that have been proposed for identifying a syntactic relation between a head H and a dependent D in a construction C (Zwicky, 1985; Hudson, 1990):\n",
    ">\n",
    "> 1. H determines the syntactic category of C and can often replace C.\n",
    "> 2. H determines the semantic category of C; D gives semantic specification.\n",
    "> 3. H is obligatory; D may be optional.\n",
    "> 4. H selects D and determines whether D is obligatory or optional.\n",
    "> 5. The form of D depends on H (agreement or government).\n",
    "> 6. The linear position of D is specified with reference to H.\"\n",
    "\n",
    "Dependents are related to their heads by a *syntactic relation*. The name of the syntactic relation describes the relationship between the head and the dependent. Use the displaCy visualizer (linked above) to see how a particular sentence is parsed, and what the relations between the heads and dependents are.\n",
    "\n",
    "Every token object in a spaCy document or sentence has attributes that tell you what the word's head is, what the dependency relationship is between that word and its head, and a list of that word's children (dependents). The following code prints out each word in the sentence, the tag, the word's head, the word's dependency relation with its head, and the word's children (i.e., dependent words):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Word: Everyone\n",
      "Tag: NN\n",
      "Head: has\n",
      "Dependency relation: nsubj\n",
      "Children: []\n",
      "\n",
      "Word: has\n",
      "Tag: VBZ\n",
      "Head: has\n",
      "Dependency relation: ROOT\n",
      "Children: [Everyone, right, .]\n",
      "\n",
      "Word: the\n",
      "Tag: DT\n",
      "Head: right\n",
      "Dependency relation: det\n",
      "Children: []\n",
      "\n",
      "Word: right\n",
      "Tag: NN\n",
      "Head: has\n",
      "Dependency relation: dobj\n",
      "Children: [the, to]\n",
      "\n",
      "Word: to\n",
      "Tag: IN\n",
      "Head: right\n",
      "Dependency relation: prep\n",
      "Children: [life]\n",
      "\n",
      "Word: life\n",
      "Tag: NN\n",
      "Head: to\n",
      "Dependency relation: pobj\n",
      "Children: [,, liberty]\n",
      "\n",
      "Word: ,\n",
      "Tag: ,\n",
      "Head: life\n",
      "Dependency relation: punct\n",
      "Children: []\n",
      "\n",
      "Word: liberty\n",
      "Tag: NN\n",
      "Head: life\n",
      "Dependency relation: conj\n",
      "Children: [and, security]\n",
      "\n",
      "Word: and\n",
      "Tag: CC\n",
      "Head: liberty\n",
      "Dependency relation: cc\n",
      "Children: []\n",
      "\n",
      "Word: security\n",
      "Tag: NN\n",
      "Head: liberty\n",
      "Dependency relation: conj\n",
      "Children: [of]\n",
      "\n",
      "Word: of\n",
      "Tag: IN\n",
      "Head: security\n",
      "Dependency relation: prep\n",
      "Children: [person]\n",
      "\n",
      "Word: person\n",
      "Tag: NN\n",
      "Head: of\n",
      "Dependency relation: pobj\n",
      "Children: []\n",
      "\n",
      "Word: .\n",
      "Tag: .\n",
      "Head: has\n",
      "Dependency relation: punct\n",
      "Children: []\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for word in list(doc.sents)[2]:\n",
    "    print(\"Word:\", word.text)\n",
    "    print(\"Tag:\", word.tag_)\n",
    "    print(\"Head:\", word.head.text)\n",
    "    print(\"Dependency relation:\", word.dep_)\n",
    "    print(\"Children:\", list(word.children))\n",
    "    print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here's a list of a few dependency relations and what they mean. ([A more complete list can be found here.](http://www.mathcs.emory.edu/~choi/doc/clear-dependency-2012.pdf))\n",
    "\n",
    "* `nsubj`: this word's head is a verb, and this word is itself the subject of the verb\n",
    "* `nsubjpass`: same as above, but for subjects in sentences in the passive voice\n",
    "* `dobj`: this word's head is a verb, and this word is itself the direct object of the verb\n",
    "* `iobj`: same as above, but indirect object\n",
    "* `aux`: this word's head is a verb, and this word is an \"auxiliary\" verb (like \"have\", \"will\", \"be\")\n",
    "* `attr`: this word's head is a copula (like \"to be\"), and this is the description attributed to the subject of the sentence (e.g., in \"This product is a global brand\", `brand` is dependent on `is` with the `attr` dependency relation)\n",
    "* `det`: this word's head is a noun, and this word is a determiner of that noun (like \"the,\" \"this,\" etc.)\n",
    "* `amod`: this word's head is a noun, and this word is an adjective describing that noun\n",
    "* `prep`: this word is a preposition that modifies its head\n",
    "* `pobj`: this word is a dependent (object) of a preposition"
   ]
  },
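   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "One handy consequence of the dependency parse: every spaCy sentence span has a `.root` attribute, which gives you the token whose head is itself—the `ROOT` of the sentence, usually its main verb. A quick sketch, using the `doc` object from above:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# print each sentence alongside its syntactic root\n",
     "for sent in doc.sents:\n",
     "    print(sent.text)\n",
     "    print('  root:', sent.root.text, '(' + sent.root.tag_ + ')')"
    ]
   },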
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using .subtree for extracting syntactic units\n",
    "\n",
    "The `.subtree` attribute evaluates to a generator that can be flatted by passing it to `list()`. This is a list of the word's syntactic dependents—essentially, the \"clause\" that the word belongs to."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This function merges a subtree and returns a string with the text of the words contained in it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def flatten_subtree(st):\n",
    "    return ''.join([w.text_with_ws for w in list(st)]).strip()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With this function in our toolbox, we can write a loop that prints out the subtree for each word in a sentence:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Word: Everyone\n",
      "Flattened subtree:  Everyone\n",
      "\n",
      "Word: has\n",
      "Flattened subtree:  Everyone has the right to life, liberty and security of person.\n",
      "\n",
      "Word: the\n",
      "Flattened subtree:  the\n",
      "\n",
      "Word: right\n",
      "Flattened subtree:  the right to life, liberty and security of person\n",
      "\n",
      "Word: to\n",
      "Flattened subtree:  to life, liberty and security of person\n",
      "\n",
      "Word: life\n",
      "Flattened subtree:  life, liberty and security of person\n",
      "\n",
      "Word: ,\n",
      "Flattened subtree:  ,\n",
      "\n",
      "Word: liberty\n",
      "Flattened subtree:  liberty and security of person\n",
      "\n",
      "Word: and\n",
      "Flattened subtree:  and\n",
      "\n",
      "Word: security\n",
      "Flattened subtree:  security of person\n",
      "\n",
      "Word: of\n",
      "Flattened subtree:  of person\n",
      "\n",
      "Word: person\n",
      "Flattened subtree:  person\n",
      "\n",
      "Word: .\n",
      "Flattened subtree:  .\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for word in list(doc.sents)[2]:\n",
    "    print(\"Word:\", word.text)\n",
    "    print(\"Flattened subtree: \", flatten_subtree(word.subtree))\n",
    "    print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Using the subtree and our knowledge of dependency relation types, we can write code that extracts larger syntactic units based on their relationship with the rest of the sentence. For example, to get all of the noun phrases that are subjects of a verb:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "subjects = []\n",
    "for word in doc:\n",
    "    if word.dep_ in ('nsubj', 'nsubjpass'):\n",
    "        subjects.append(flatten_subtree(word.subtree))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['All human beings', 'They', 'Everyone']"
      ]
     },
     "execution_count": 75,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "subjects"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or every prepositional phrase:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "prep_phrases = []\n",
    "for word in doc:\n",
    "    if word.dep_ == 'prep':\n",
    "        prep_phrases.append(flatten_subtree(word.subtree))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['in dignity and rights',\n",
       " 'with reason and conscience',\n",
       " 'towards one another',\n",
       " 'in a spirit of brotherhood',\n",
       " 'of brotherhood',\n",
       " 'to life, liberty and security of person',\n",
       " 'of person']"
      ]
     },
     "execution_count": 77,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prep_phrases"
   ]
  },
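   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Following the same pattern, we could grab other syntactic units by their dependency relation—for example, every direct object of a verb (tokens whose dependency relation is `dobj`):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# collect the flattened subtree of every direct object in the document\n",
     "direct_objects = []\n",
     "for word in doc:\n",
     "    if word.dep_ == 'dobj':\n",
     "        direct_objects.append(flatten_subtree(word.subtree))\n",
     "direct_objects"
    ]
   },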
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Entity extraction\n",
    "\n",
    "A common task in NLP is taking a text and extracting \"named entities\" from it—basically, proper nouns, or names of companies, products, locations, etc. You can easily access this information using the `.ents` property of a document."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "doc2 = nlp(\"John McCain and I visited the Apple Store in Manhattan.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "John McCain\n",
      "Apple\n",
      "Manhattan\n"
     ]
    }
   ],
   "source": [
    "for item in doc2.ents:\n",
    "    print(item)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Entity objects have a `.label_` attribute that tells you the type of the entity. ([Here's a full list of the built-in entity types.](https://spacy.io/docs/usage/entity-recognition#entity-types))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "John McCain PERSON\n",
      "Apple ORG\n",
      "Manhattan GPE\n"
     ]
    }
   ],
   "source": [
    "for item in doc2.ents:\n",
    "    print(item.text, item.label_)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Loading data from a file\n",
    "\n",
     "You can load data from a file easily with spaCy. [Here are the first few verses from the King James Version of the Bible](http://rwet.decontextualize.com/texts/genesis.txt), for example. (Download the linked file and make sure it's in the same directory as this notebook.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "doc3 = nlp(open(\"genesis.txt\").read())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "From here, we can see what entities were here with us from the very beginning:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "earth LOC\n",
      "earth LOC\n",
      "God PERSON\n",
      "God PERSON\n",
      "Night PERSON\n",
      "the evening and the morning TIME\n",
      "the first day DATE\n",
      "the second day DATE\n",
      "one CARDINAL\n",
      "Earth LOC\n",
      "earth LOC\n",
      "earth LOC\n",
      "the third day DATE\n",
      "the day DATE\n",
      "seasons DATE\n",
      "days DATE\n",
      "years DATE\n",
      "earth LOC\n",
      "two CARDINAL\n",
      "the day DATE\n",
      "the night TIME\n",
      "the day DATE\n",
      "the night TIME\n",
      "the evening and the morning TIME\n",
      "the fourth day DATE\n",
      "the fifth day DATE\n",
      "Behold PERSON\n",
      "earth LOC\n",
      "earth LOC\n",
      "earth LOC\n",
      "the sixth day DATE\n"
     ]
    }
   ],
   "source": [
    "for item in doc3.ents:\n",
    "    print(item.text, item.label_)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make a list of all of the times in the creation of the Earth:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['the evening and the morning',\n",
       " 'the night',\n",
       " 'the night',\n",
       " 'the evening and the morning']"
      ]
     },
     "execution_count": 83,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[item.text for item in doc3.ents if item.label_ == 'TIME']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Approaches to keyword extraction\n",
    "\n",
    "\"Keyword extraction\" is the name for any kind of procedure that attempts to identify a subset of words in a text as being representative of that text's overall meaning. It's a way of computationally answering the questions of what a text is about, and how this text might be different in its contents from other texts. There are a number of ways to perform keyword extraction, some of which are quite sophisticated and depend on a large number of documents to be effective. Others are simple and effective enough that we can implement them in a few lines of code with just the data that we get from spaCy's model and basic analysis of a single document. We'll take a look at a few techniques of the latter kind below.\n",
    "\n",
    "Here are some helpful recent overviews of different keyword extraction techniques (sometimes also called \"automatic terminology recognition\") from a number of different disciplines:\n",
    "\n",
    "* Astrakhantsev, N. “ATR4S: Toolkit with State-of-the-Art Automatic Terms Recognition Methods in Scala.” ArXiv:1611.07804 [Cs], Nov. 2016. arXiv.org, http://arxiv.org/abs/1611.07804.\n",
    "* [Chuang, Jason, et al. “‘Without the Clutter of Unimportant Words’: Descriptive Keyphrases for Text Visualization.” ACM Transactions on Computer-Human Interaction (TOCHI), vol. 19, no. 3, 2012, p. 19.](http://vis.stanford.edu/papers/keyphrases)\n",
    "* [Understanding Keyness](http://www.thegrammarlab.com/?nor-portfolio=understanding-keyness) from the Grammar Lab\n",
    "\n",
    "### Counting words\n",
    "\n",
    "Maybe the most obvious way to extract keywords from a text is to find the words that occur most frequently. This approach might not be very valuable, as we'll see below, but it's helpful at least to know how it's done. Fortunately, Python's `Counter` object, which provides an easy way to count the number of times that particular items occur in a list, will do most of the work for us. [Here's a more detailed tutorial about `Counter`](https://gist.github.com/aparrish/4b096b95bfbd636733b7b9f2636b8cf4), but the basics are easy to understand. First, import `Counter` from Python's built-in `collections` library:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from collections import Counter"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "And then pass a list of strings to `Counter()`, assigning the result to a variable. I'll start with raw word counts:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 85,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "word_counts = Counter([item.text for item in doc3 if item.is_alpha])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "(The `if item.is_alpha` clause in the list comprehension above limits the list to only tokens that are alphabetic, i.e., excluding punctuation and numbers.)\n",
    "\n",
    "The `word_counts` variable contains a `Counter` object, which has a few interesting methods and properties. If you just evaluate it, you get a dictionary-like object that maps tokens to the number of times those tokens occur:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 86,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Counter({'And': 33,\n",
       "         'Be': 2,\n",
       "         'Behold': 1,\n",
       "         'Day': 1,\n",
       "         'Earth': 1,\n",
       "         'God': 32,\n",
       "         'Heaven': 1,\n",
       "         'I': 2,\n",
       "         'In': 1,\n",
       "         'Let': 8,\n",
       "         'Night': 1,\n",
       "         'Seas': 1,\n",
       "         'So': 1,\n",
       "         'Spirit': 1,\n",
       "         'a': 2,\n",
       "         'above': 2,\n",
       "         'abundantly': 2,\n",
       "         'after': 11,\n",
       "         'air': 3,\n",
       "         'all': 2,\n",
       "         'also': 1,\n",
       "         'and': 64,\n",
       "         'appear': 1,\n",
       "         'be': 7,\n",
       "         'bearing': 1,\n",
       "         'beast': 3,\n",
       "         'beginning': 1,\n",
       "         'behold': 1,\n",
       "         'blessed': 2,\n",
       "         'bring': 3,\n",
       "         'brought': 2,\n",
       "         'called': 5,\n",
       "         'cattle': 3,\n",
       "         'created': 5,\n",
       "         'creature': 3,\n",
       "         'creepeth': 3,\n",
       "         'creeping': 2,\n",
       "         'darkness': 4,\n",
       "         'day': 9,\n",
       "         'days': 1,\n",
       "         'deep': 1,\n",
       "         'divide': 3,\n",
       "         'divided': 2,\n",
       "         'dominion': 2,\n",
       "         'dry': 2,\n",
       "         'earth': 20,\n",
       "         'evening': 6,\n",
       "         'every': 12,\n",
       "         'face': 3,\n",
       "         'female': 1,\n",
       "         'fifth': 1,\n",
       "         'fill': 1,\n",
       "         'firmament': 9,\n",
       "         'first': 1,\n",
       "         'fish': 2,\n",
       "         'fly': 1,\n",
       "         'for': 6,\n",
       "         'form': 1,\n",
       "         'forth': 5,\n",
       "         'fourth': 1,\n",
       "         'fowl': 6,\n",
       "         'from': 5,\n",
       "         'fruit': 4,\n",
       "         'fruitful': 2,\n",
       "         'gathered': 1,\n",
       "         'gathering': 1,\n",
       "         'give': 2,\n",
       "         'given': 2,\n",
       "         'good': 7,\n",
       "         'grass': 2,\n",
       "         'great': 2,\n",
       "         'greater': 1,\n",
       "         'green': 1,\n",
       "         'had': 1,\n",
       "         'hath': 1,\n",
       "         'have': 4,\n",
       "         'he': 6,\n",
       "         'heaven': 6,\n",
       "         'herb': 4,\n",
       "         'him': 1,\n",
       "         'his': 9,\n",
       "         'image': 3,\n",
       "         'in': 13,\n",
       "         'is': 4,\n",
       "         'it': 16,\n",
       "         'itself': 2,\n",
       "         'kind': 10,\n",
       "         'land': 2,\n",
       "         'lesser': 1,\n",
       "         'let': 6,\n",
       "         'life': 2,\n",
       "         'light': 10,\n",
       "         'lights': 3,\n",
       "         'likeness': 1,\n",
       "         'living': 3,\n",
       "         'made': 5,\n",
       "         'make': 1,\n",
       "         'male': 1,\n",
       "         'man': 2,\n",
       "         'may': 1,\n",
       "         'meat': 2,\n",
       "         'midst': 1,\n",
       "         'morning': 6,\n",
       "         'moved': 1,\n",
       "         'moveth': 2,\n",
       "         'moving': 1,\n",
       "         'multiply': 3,\n",
       "         'night': 3,\n",
       "         'of': 20,\n",
       "         'one': 1,\n",
       "         'open': 1,\n",
       "         'our': 2,\n",
       "         'over': 10,\n",
       "         'own': 1,\n",
       "         'place': 1,\n",
       "         'replenish': 1,\n",
       "         'rule': 3,\n",
       "         'said': 10,\n",
       "         'saw': 7,\n",
       "         'saying': 1,\n",
       "         'sea': 2,\n",
       "         'seas': 1,\n",
       "         'seasons': 1,\n",
       "         'second': 1,\n",
       "         'seed': 6,\n",
       "         'set': 1,\n",
       "         'shall': 1,\n",
       "         'signs': 1,\n",
       "         'sixth': 1,\n",
       "         'so': 6,\n",
       "         'stars': 1,\n",
       "         'subdue': 1,\n",
       "         'that': 14,\n",
       "         'the': 108,\n",
       "         'their': 2,\n",
       "         'them': 8,\n",
       "         'there': 5,\n",
       "         'thing': 6,\n",
       "         'third': 1,\n",
       "         'to': 11,\n",
       "         'together': 2,\n",
       "         'tree': 4,\n",
       "         'two': 1,\n",
       "         'under': 2,\n",
       "         'unto': 2,\n",
       "         'upon': 10,\n",
       "         'us': 1,\n",
       "         'very': 1,\n",
       "         'void': 1,\n",
       "         'was': 17,\n",
       "         'waters': 11,\n",
       "         'were': 8,\n",
       "         'whales': 1,\n",
       "         'wherein': 1,\n",
       "         'which': 5,\n",
       "         'whose': 2,\n",
       "         'winged': 1,\n",
       "         'without': 1,\n",
       "         'years': 1,\n",
       "         'yielding': 5,\n",
       "         'you': 2})"
      ]
     },
     "execution_count": 86,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "word_counts"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can get the count for a particular token by using square bracket indexing with the `Counter` object:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "9"
      ]
     },
     "execution_count": 87,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "word_counts['firmament']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or you can get the *n* most frequent items using the `.most_common()` method, which takes an integer parameter to limit the list to a certain number of items, sorted from most frequent to least:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('the', 108),\n",
       " ('and', 64),\n",
       " ('And', 33),\n",
       " ('God', 32),\n",
       " ('earth', 20),\n",
       " ('of', 20),\n",
       " ('was', 17),\n",
       " ('it', 16),\n",
       " ('that', 14),\n",
       " ('in', 13)]"
      ]
     },
     "execution_count": 88,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "word_counts.most_common(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This is a list of [tuples](https://docs.python.org/3.5/library/stdtypes.html#typesseq-tuple). (Tuples are just like lists, except you can't change them after you create them.) To get just the list of the ten most common words:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 89,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "the, and, And, God, earth, of, was, it, that, in\n"
     ]
    }
   ],
   "source": [
    "top_ten_words = [item[0] for item in word_counts.most_common(10)]\n",
    "print(\", \".join(top_ten_words))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "You can think of this as a kind of (very simple!) list of keywords—essentially, the words that occur in this document more often than any others.\n",
    "\n",
     "The following expression evaluates to a list of every word in the text along with the proportion of the text that it makes up. (To keep things short, I'm just getting the first 25 items from the list using the list slice syntax `[:25]`.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('In', 0.0012547051442910915),\n",
       " ('the', 0.1355081555834379),\n",
       " ('beginning', 0.0012547051442910915),\n",
       " ('God', 0.04015056461731493),\n",
       " ('created', 0.006273525721455458),\n",
       " ('heaven', 0.0075282308657465494),\n",
       " ('and', 0.08030112923462986),\n",
       " ('earth', 0.025094102885821833),\n",
       " ('And', 0.04140526976160602),\n",
       " ('was', 0.02132998745294856),\n",
       " ('without', 0.0012547051442910915),\n",
       " ('form', 0.0012547051442910915),\n",
       " ('void', 0.0012547051442910915),\n",
       " ('darkness', 0.005018820577164366),\n",
       " ('upon', 0.012547051442910916),\n",
       " ('face', 0.0037641154328732747),\n",
       " ('of', 0.025094102885821833),\n",
       " ('deep', 0.0012547051442910915),\n",
       " ('Spirit', 0.0012547051442910915),\n",
       " ('moved', 0.0012547051442910915),\n",
       " ('waters', 0.013801756587202008),\n",
       " ('said', 0.012547051442910916),\n",
       " ('Let', 0.010037641154328732),\n",
       " ('there', 0.006273525721455458),\n",
       " ('be', 0.00878293601003764)]"
      ]
     },
     "execution_count": 90,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "total_words = sum(word_counts.values())\n",
    "[(item[0], word_counts[item[0]] / total_words) for item in word_counts.items()][:25]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This tells you that, e.g., the text is about 13% made up of the word \"the\" and about 0.5% made up of the word \"darkness.\" Another way of formulating this is in terms of probability: if you pick a random word from this text, it has about a 13% chance of being \"the\" and a 0.5% chance of being \"darkness.\" Using this method of extracting keywords, we're just making a list of the words that are most likely to be drawn at random from all words in that text."
   ]
  },
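   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a quick check, you can compute this proportion for any single word directly, by dividing its count by the total number of words:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# proportion of the text made up by a single word\n",
     "word_counts['darkness'] / total_words"
    ]
   },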
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Word probabilities\n",
    "\n",
     "Of course, this particular way of extracting keywords in a text isn't terribly useful—of the top ten items on the list, at least eight of them (excluding \"God\" and \"earth\") could be expected to occur with similar probabilities in *any* given source text. A potentially more interesting way to formulate the problem is to ask: what words are *uniquely* frequent in this text (and not in any arbitrary English text)?\n",
    "\n",
     "To figure this out, we need data: specifically, an estimate of the probability that a given word will occur in any text written in English. Of course, the corpus of \"text written in English\" is not all computer-readable, is growing all the time, and has a poorly defined boundary (what counts as \"English\"?), so we can never know these probabilities precisely. But with a sufficiently large corpus of English documents, we can at least form a rough idea.\n",
    "\n",
    "Fortunately, spaCy's model includes—for every word in its vocabulary—the word's [log probability](https://en.wikipedia.org/wiki/Log_probability) estimate, based on a large corpus of English texts. You can access a word's log probability estimate in English using the `.prob` attribute of the `Token` object (which is what you get when you iterate over a document or a sentence.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('In', -7.603263854980469),\n",
       " ('the', -3.528766632080078),\n",
       " ('beginning', -9.830488204956055),\n",
       " ('God', -8.62376594543457),\n",
       " ('created', -9.588191986083984),\n",
       " ('the', -3.528766632080078),\n",
       " ('heaven', -11.090792655944824),\n",
       " ('and', -4.113108158111572),\n",
       " ('the', -3.528766632080078),\n",
       " ('earth', -9.99667739868164),\n",
       " ('.', -3.0678977966308594),\n",
       " ('\\n', -6.0506510734558105),\n",
       " ('And', -7.012199401855469),\n",
       " ('the', -3.528766632080078),\n",
       " ('earth', -9.99667739868164),\n",
       " ('was', -5.252320289611816),\n",
       " ('without', -7.694504261016846),\n",
       " ('form', -9.062009811401367),\n",
       " (',', -3.4549596309661865),\n",
       " ('and', -4.113108158111572),\n",
       " ('void', -11.47757625579834),\n",
       " (';', -6.586422920227051),\n",
       " ('and', -4.113108158111572),\n",
       " ('darkness', -11.919983863830566),\n",
       " ('was', -5.252320289611816)]"
      ]
     },
     "execution_count": 91,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[(item.text, item.prob) for item in doc3][:25]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Lower numbers (i.e., numbers that are more negative) indicate rarer words. You can also look up any word's probability through the `.vocab` attribute of the [`Language`](https://spacy.io/api/language) object (which we initially created by calling `spacy.load()`). Indexing into the vocabulary returns a [`Lexeme`](https://spacy.io/api/lexeme) object, which also has a `.prob` attribute:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "water = nlp.vocab['water']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "-8.589462280273438"
      ]
     },
     "execution_count": 93,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "water.prob"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By the way: you can convert a log probability back to a percentage by raising the constant $e$ to the power of the log probability. The constant $e$ is included as part of the `math` package, and the operator to raise a value by a power in Python is `**`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 94,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.00018605610680043203"
      ]
     },
     "execution_count": 94,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from math import e\n",
    "e**water.prob"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This tells us that, according to spaCy, if you pick a word at random from any given English text, the chance of it being \"water\" is about 0.02%."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "A first approximation of our task, then, would be simply to get a list of the *least common words* in the text, as judged by spaCy's word probability estimates. To do this, we first need a list of just the unique words in the text (i.e., a list of all of the words with duplicates removed)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "unique_words = list(set([item.text for item in doc3 if item.is_alpha]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Then, using Python's `sorted()` function, we can sort these according to their probability and take only the fifteen rarest words in the text."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['moveth',\n",
       " 'creepeth',\n",
       " 'firmament',\n",
       " 'Seas',\n",
       " 'fowl',\n",
       " 'yielding',\n",
       " 'subdue',\n",
       " 'abundantly',\n",
       " 'Behold',\n",
       " 'fruitful',\n",
       " 'replenish',\n",
       " 'likeness',\n",
       " 'hath',\n",
       " 'winged',\n",
       " 'dominion']"
      ]
     },
     "execution_count": 96,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[item for item in sorted(unique_words, key=lambda x: nlp.vocab[x].prob)][:15]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> NOTE: If you're looking at that `sorted()` function and wondering things like \"what is `lambda`\" and \"why is this happening to me?\" then you might want to take a look at [this tutorial](https://github.com/ledeprogram/courses/blob/master/databases-2015/01_Python_Beyond_the_Basics.ipynb).\n",
    "\n",
    "### Word weirdness\n",
    "\n",
     "The result of the expression above feels a *bit* more like an accurate summary of the text, but it seems to favor words that are just rare *in general*, and it isn't picking up on words that are relatively common in English but unusually common in our document. For example, according to our probability calculation earlier, about one in every twenty-five words in our text is \"God,\" but the same could not be said of English in general (outside of a few specific genres and contexts, at least). So we need to focus on the *uniqueness* of the probability: is a given word uniquely likely to occur in our document, as opposed to English in general?\n",
    "\n",
     "An easy and intuitive way to calculate this is simply to find the ratio of the word's probability in our document to spaCy's estimate of the word's probability in English. This calculation for a particular word was called that word's \"weirdness\" in [Ahmad, Khurshid, et al. “University of Surrey Participation in TREC8: Weirdness Indexing for Logical Document Extrapolation and Retrieval (WILDER).” TREC, 1999, pp. 1–8.](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.3364), and a similar measure called \"log ratio\" was proposed by [Andrew Hardie here](http://cass.lancs.ac.uk/?p=1133).\n",
    "\n",
     "We'll find each word's \"weirdness\" score by dividing its relative frequency in our source document (Genesis) by spaCy's estimated frequency for that word in English, taking care to convert spaCy's log probability back into a plain probability by raising $e$ to that power. To account for our intuition that our source text, being comparatively small, overrepresents the frequency of its rarest words and underrepresents the frequency of its most common words, we'll *square* the word's relative frequency in our source text before dividing. (Note: I have no actual well-motivated statistical reason for this, but it seems to work okay in practice. [See this tutorial](quick-and-dirty-keywords.ipynb) for a more statistically defensible but slightly more difficult-to-understand approach to this task.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "square_weirdness = [(item, pow(word_counts[item]/total_words, 2) / e**nlp.vocab[item].prob) for item in unique_words]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('that', 0.02680706036866266),\n",
       " ('Let', 0.8735509428958823),\n",
       " ('one', 0.0006328201748930747),\n",
       " ('brought', 0.09958702824984658),\n",
       " ('whales', 0.5626471557219519),\n",
       " ('behold', 0.6741497008189071),\n",
       " ('two', 0.002663514115913552),\n",
       " ('face', 0.06997393346725132),\n",
       " ('said', 0.2234619775500119),\n",
       " ('fish', 0.16378760290969788),\n",
       " ('and', 0.3942243826208631),\n",
       " ('of', 0.04530349263154088),\n",
       " ('may', 0.0034026034243890475),\n",
       " ('after', 0.27245900703261716),\n",
       " ('sixth', 0.4794220601229158),\n",
       " ('he', 0.021358924306546304),\n",
       " ('over', 0.17746281298749195),\n",
       " ('our', 0.010520954094237858),\n",
       " ('creepeth', 1378.6369355482045),\n",
       " ('In', 0.003156013802475997),\n",
       " ('God', 8.966795747370472),\n",
       " ('moved', 0.024726696730063457),\n",
       " ('give', 0.012733081112292561),\n",
       " ('deep', 0.026994624370536725),\n",
       " ('fruitful', 7.9162567496963385),\n",
       " ('lights', 0.49984586472853804),\n",
       " ('his', 0.09135656008221582),\n",
       " ('bearing', 0.21605927589715465),\n",
       " ('fill', 0.039745426646784425),\n",
       " ('dominion', 4.919916289305311),\n",
       " ('you', 0.0004996394704867954),\n",
       " ('fruit', 1.3520109225141252),\n",
       " ('gathered', 0.3085002013775422),\n",
       " ('shall', 0.054595267998189666),\n",
       " ('meat', 0.13847244495885594),\n",
       " ('cattle', 4.6871548255632325),\n",
       " ('under', 0.028177976325588633),\n",
       " ('earth', 13.824364628671661),\n",
       " ('abundantly', 8.90097054615443),\n",
       " ('moving', 0.019859309199980735),\n",
       " ('which', 0.03818308881539306),\n",
       " ('yielding', 66.92894793252185),\n",
       " ('also', 0.0016080464754996862),\n",
       " ('let', 0.13764815593639493),\n",
       " ('female', 0.020164700403006837),\n",
       " ('Earth', 0.04629195700823975),\n",
       " ('Day', 0.057958726655371856),\n",
       " ('together', 0.03574041644714984),\n",
       " ('third', 0.02141640356961613),\n",
       " ('years', 0.0024968588495192182),\n",
       " ('darkness', 3.784308701095608),\n",
       " ('air', 0.16560057447849827),\n",
       " ('multiply', 4.599606040520469),\n",
       " ('stars', 0.07151768838012834),\n",
       " ('herb', 9.75652421298085),\n",
       " ('without', 0.0034575152056405263),\n",
       " ('so', 0.019169840638429997),\n",
       " ('land', 0.09391100407013181),\n",
       " ('signs', 0.06844189871600477),\n",
       " ('seas', 0.8167907688345959),\n",
       " ('Heaven', 0.2829391523057961),\n",
       " ('there', 0.018177849381636316),\n",
       " ('to', 0.009005703597640517),\n",
       " ('greater', 0.048686991556084434),\n",
       " ('evening', 3.5948391505388395),\n",
       " ('own', 0.003263642250178071),\n",
       " ('itself', 0.05366433594966312),\n",
       " ('And', 1.903140025786664),\n",
       " ('forth', 1.832993701230368),\n",
       " ('be', 0.014338676518236792),\n",
       " ('light', 1.2086036632915507),\n",
       " ('creature', 1.0032757001269654),\n",
       " ('form', 0.013572636350095951),\n",
       " ('seasons', 0.06808806708965255),\n",
       " ('unto', 1.8054193456863203),\n",
       " ('divide', 2.023597963776899),\n",
       " ('beast', 0.8930938443423192),\n",
       " ('kind', 0.4016512152850912),\n",
       " ('open', 0.008163141368662901),\n",
       " ('were', 0.07968679297594378),\n",
       " ('from', 0.016039494051943107),\n",
       " ('very', 0.001613611376833971),\n",
       " ('good', 0.05228920680712282),\n",
       " ('Seas', 4.351506748218236),\n",
       " ('Night', 0.12141958289280423),\n",
       " ('wherein', 0.7874493606630398),\n",
       " ('in', 0.02697786181165973),\n",
       " ('waters', 41.096087788725875),\n",
       " ('made', 0.06438114122860954),\n",
       " ('appear', 0.04269433030464307),\n",
       " ('life', 0.012262439764517442),\n",
       " ('is', 0.002173597292640695),\n",
       " ('blessed', 1.6409960209417769),\n",
       " ('fifth', 0.2319826460163109),\n",
       " ('lesser', 0.12685623480036456),\n",
       " ('creeping', 2.822916314440728),\n",
       " ('saying', 0.00405676556744585),\n",
       " ('every', 0.37634519731173177),\n",
       " ('Spirit', 0.202292939560622),\n",
       " ('grass', 0.42737655458111046),\n",
       " ('fly', 0.03824894836063699),\n",
       " ('saw', 0.3688464275976693),\n",
       " ('moveth', 797.4161519821521),\n",
       " ('first', 0.001839994918310743),\n",
       " ('us', 0.0032862261691542485),\n",
       " ('So', 0.00210734165256931),\n",
       " ('called', 0.17576553341915332),\n",
       " ('a', 0.0003205005244031602),\n",
       " ('for', 0.0074608859708165534),\n",
       " ('hath', 1.5649118298751825),\n",
       " ('created', 0.5742780762194335),\n",
       " ('was', 0.0869030299824882),\n",
       " ('second', 0.006667125481231629),\n",
       " ('tree', 0.6569704786734412),\n",
       " ('had', 0.0010003172882553004),\n",
       " ('firmament', 1427.9135176509458),\n",
       " ('gathering', 0.22778899301619082),\n",
       " ('beginning', 0.02926915193317775),\n",
       " ('dry', 0.16788484172345222),\n",
       " ('image', 0.16131152100533389),\n",
       " ('bring', 0.11762105718934171),\n",
       " ('have', 0.004371557039199277),\n",
       " ('divided', 0.999692243899251),\n",
       " ('great', 0.010514915622433427),\n",
       " ('heaven', 3.7158306132295786),\n",
       " ('man', 0.013478364904649463),\n",
       " ('it', 0.03243614687523277),\n",
       " ('male', 0.020940952746017597),\n",
       " ('fourth', 0.12259183752562235),\n",
       " ('winged', 1.3358836460823353),\n",
       " ('all', 0.0023844798825504603),\n",
       " ('subdue', 2.2947302094022484),\n",
       " ('place', 0.0044852492813323664),\n",
       " ('rule', 0.18727781593629286),\n",
       " ('their', 0.004030152477070667),\n",
       " ('I', 0.00027912528782498376),\n",
       " ('midst', 0.7409638082256795),\n",
       " ('living', 0.1132022617865191),\n",
       " ('set', 0.006244380064675447),\n",
       " ('Behold', 2.1448822349835326),\n",
       " ('likeness', 1.8139744318219606),\n",
       " ('Be', 0.1353634540877291),\n",
       " ('morning', 0.7796763525877697),\n",
       " ('thing', 0.06620344043298265),\n",
       " ('them', 0.0505345199383551),\n",
       " ('him', 0.0013897350921763642),\n",
       " ('green', 0.02979740299325094),\n",
       " ('night', 0.07083428793608146),\n",
       " ('fowl', 144.73703716043218),\n",
       " ('replenish', 1.8792503805690608),\n",
       " ('make', 0.0012409174329054945),\n",
       " ('the', 0.625827646261501),\n",
       " ('given', 0.04309329055726339),\n",
       " ('upon', 2.4902194894317793),\n",
       " ('void', 0.15196073683768563),\n",
       " ('seed', 4.636269201140727),\n",
       " ('sea', 0.31249531159764693),\n",
       " ('whose', 0.21270836936190612),\n",
       " ('day', 0.20881363801468666),\n",
       " ('above', 0.04943493272063484),\n",
       " ('days', 0.005891342464886247)]"
      ]
     },
     "execution_count": 98,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "square_weirdness"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The higher the score, the weirder the word (i.e., the more characteristic it is of our source text relative to English in general). Sorting by the score gives us our new list of keywords:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['firmament',\n",
       " 'creepeth',\n",
       " 'moveth',\n",
       " 'fowl',\n",
       " 'yielding',\n",
       " 'waters',\n",
       " 'earth',\n",
       " 'herb',\n",
       " 'God',\n",
       " 'abundantly',\n",
       " 'fruitful',\n",
       " 'dominion',\n",
       " 'cattle',\n",
       " 'seed',\n",
       " 'multiply']"
      ]
     },
     "execution_count": 99,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[item[0] for item in sorted(weirdness, reverse=True, key=lambda x: x[1])][:15]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This list shares many words with the \"just the least probable\" list, but now includes words like \"waters\" and \"God\" that, while moderately probable in English in general, are especially probable in our text. Try it out with your own source text and see what you think!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Counting parsed units"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Another simple way to pull out common words and phrases is to focus only on the stretches of the document that have certain syntactic or semantic characteristics, as determined by spaCy's parser. For example, in the cell below I'm counting the number of times particular nouns appear:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "noun_counts = Counter([item.text for item in doc3 if item.pos_ == 'NOUN'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "... and then getting just the ten most common nouns:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "earth, waters, kind, day, firmament, light, evening, morning, seed, fowl\n"
     ]
    }
   ],
   "source": [
    "top_ten_nouns = [item[0] for item in noun_counts.most_common(10)]\n",
    "print(\", \".join(top_ten_nouns))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here's the same thing with noun chunks:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "God, the earth, it, the waters, his kind, them, the firmament, the evening, the morning, the heaven\n"
     ]
    }
   ],
   "source": [
    "chunk_counts = Counter([item.text for item in doc3.noun_chunks])\n",
    "top_ten_chunks = [item[0] for item in chunk_counts.most_common(10)]\n",
    "print(\", \".join(top_ten_chunks))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or with named entities:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "earth, the day, God, the evening and the morning, the night, Night, the first day, the second day, one, Earth\n"
     ]
    }
   ],
   "source": [
    "entity_counts = Counter([item.text for item in doc3.ents])\n",
    "top_ten_entities = [item[0] for item in entity_counts.most_common(10)]\n",
    "print(\", \".join(top_ten_entities))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or with subjects of sentences:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "God, it, that, evening, earth, which, he, them, seed, waters\n"
     ]
    }
   ],
   "source": [
    "subject_counts = Counter([item.text for item in doc3 if item.dep_ == 'nsubj'])\n",
    "top_ten_subjects = [item[0] for item in subject_counts.most_common(10)]\n",
    "print(\", \".join(top_ten_subjects))"
   ]
  },
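  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same pattern works for any attribute the parser exposes. As one more sketch (assuming the same `doc3` and `Counter` from the cells above), here's a count of the most common verb lemmas, using `.lemma_` so that inflected forms of the same verb are counted together:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# tally the lemma (dictionary form) of every token tagged as a verb\n",
    "verb_counts = Counter([item.lemma_ for item in doc3 if item.pos_ == 'VERB'])\n",
    "top_ten_verbs = [item[0] for item in verb_counts.most_common(10)]\n",
    "print(\", \".join(top_ten_verbs))"
   ]
  },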
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Further reading and resources\n",
    "\n",
    "We've barely scratched the surface of what it's possible to do with spaCy. [There's a great page of tutorials on the official site](https://spacy.io/docs/usage/tutorials) that you should check out!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
