n_chapter,chapter,n_section,section,n_subsection,subsection,text
2,Regular Expressions,,,,,"The dialogue above is from ELIZA, an early natural language processing system ELIZA that could carry on a limited conversation with a user by imitating the responses of a Rogerian psychotherapist (Weizenbaum, 1966). ELIZA is a surprisingly simple program that uses pattern matching to recognize phrases like \""I need X\"" and translate them into suitable outputs like \""What would it mean to you if you got X?\"". This simple technique succeeds in this domain because ELIZA doesn't actually need to know anything to mimic a Rogerian psychotherapist. As Weizenbaum notes, this is one of the few dialogue genres where listeners can act as if they know nothing of the world. Eliza's mimicry of human conversation was remarkably successful: many people who interacted with ELIZA came to believe that it really understood them and their problems, many continued to believe in ELIZA's abilities even after the program's operation was explained to them (Weizenbaum, 1976), and even today such chatbots are a fun diversion."
2,Regular Expressions,,,,,"Of course modern conversational agents are much more than a diversion; they can answer questions, book flights, or find restaurants, functions for which they rely on a much more sophisticated understanding of the user's intent, as we will see in Chapter 24. Nonetheless, the simple pattern-based methods that powered ELIZA and other chatbots play a crucial role in natural language processing."
2,Regular Expressions,,,,,"We'll begin with the most important tool for describing text patterns: the regular expression. Regular expressions can be used to specify strings we might want to extract from a document, from transforming \""I need X\"" in Eliza above, to defining strings like $199 or $24.99 for extracting tables of prices from a document."
2,Regular Expressions,,,,,"We'll then turn to a set of tasks collectively called text normalization, in which regular expressions play an important part. Normalizing text means converting it to a more convenient, standard form. For example, most of what we are going to do with language relies on first separating out or tokenizing words from running text, the task of tokenization. English words are often separated from each other by whitespace, but whitespace is not always sufficient. New York and rock 'n' roll are sometimes treated as large words despite the fact that they contain spaces, while sometimes we'll need to separate I'm into the two words I and am. For processing tweets or texts we'll need to tokenize emoticons like :) or hashtags like #nlproc. 2.1 \u2022 REGULAR EXPRESSIONS 3 Some languages, like Japanese, don't have spaces between words, so word tokenization becomes more difficult."
2,Regular Expressions,,,,,"Another part of text normalization is lemmatization, the task of determining that two words have the same root, despite their surface differences. For example, the words sang, sung, and sings are forms of the verb sing. The word sing is the common lemma of these words, and a lemmatizer maps from all of these to sing. Lemmatization is essential for processing morphologically complex languages like Arabic. Stemming refers to a simpler version of lemmatization in which we mainly just strip suffixes from the end of the word. Text normalization also includes sentence segmentation: breaking up a text into individual sentences, using cues like periods or exclamation points."
2,Regular Expressions,,,,,"Finally, we'll need to compare words and other strings. We'll introduce a metric called edit distance that measures how similar two strings are based on the number of edits (insertions, deletions, substitutions) it takes to change one string into the other. Edit distance is an algorithm with applications throughout language processing, from spelling correction to speech recognition to coreference resolution."
2,Regular Expressions,2.1,Regular Expressions,,,"One of the unsung successes in standardization in computer science has been the regular expression (RE), a language for specifying text search strings. This prac-regular expression tical language is used in every computer language, word processor, and text processing tools like the Unix tools grep or Emacs. Formally, a regular expression is an algebraic notation for characterizing a set of strings. They are particularly useful for searching in texts, when we have a pattern to search for and a corpus of texts to search through. A regular expression search function will search through the corpus, returning all texts that match the pattern. The corpus can be a single document or a collection. For example, the Unix command-line tool grep takes a regular expression and returns every line of the input document that matches the expression."
2,Regular Expressions,2.1,Regular Expressions,,,"A search can be designed to return every match on a line, if there are more than one, or just the first match. In the following examples we generally underline the exact part of the pattern that matches the regular expression and show only the first match. We'll show regular expressions delimited by slashes but note that slashes are not part of the regular expressions."
2,Regular Expressions,2.1,Regular Expressions,,,"Regular expressions come in many variants. We'll be describing extended regular expressions; different regular expression parsers may only recognize subsets of these, or treat some expressions slightly differently. Using an online regular expression tester is a handy way to test out your expressions and explore these variations."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"The simplest kind of regular expression is a sequence of simple characters. To search for woodchuck, we type /woodchuck/. The expression /Buttercup/ matches any string containing the substring Buttercup; grep with that expression would return the line I'm called little Buttercup. The search string can consist of a single character (like /!/) or a sequence of characters (like /urgl/)."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"Regular expressions are case sensitive; lower case /s/ is distinct from upper case /S/ (/s/ matches a lower case s but not an upper case S). This means that the pattern /woodchucks/ will not match the string Woodchucks. We can solve this problem with the use of the square braces [ and ] . The string of characters inside the braces specifies a disjunction of characters to match. For example, Fig. 2.2 shows that the pattern /[wW]/ matches patterns containing either w or W."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"The regular expression /[1234567890]/ specifies any single digit. While such classes of characters as digits or letters are important building blocks in expressions, they can get awkward (e.g., it’s inconvenient to specify /[ABCDEFGHIJKLMNOPQRSTUVWXYZ]/ to mean “any capital letter”). In cases where there is a well-defined sequence associated with a set of characters, the brackets can be used with the dash (-) to specify any one character in a range. The pattern /[2-5]/ specifies any one of the characters 2, 3, 4, or 5. The pattern /[b-g]/ specifies one of the characters b, c, d, e, f, or g. Some other examples are shown in Fig. 2.3."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"The square braces can also be used to specify what a single character cannot be, by use of the caret ˆ. If the caret ˆ is the first symbol after the open square brace [, the resulting pattern is negated. For example, the pattern /[ˆa]/ matches any single character (including special characters) except a. This is only true when the caret is the first symbol after the open square brace. If it occurs anywhere else, it usually stands for a caret; Fig. 2.4 shows some examples."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"How can we talk about optional elements, like an optional s in woodchuck and woodchucks? We can’t use the square brackets, because while they allow us to say “s or S”, they don’t allow us to say “s or nothing”. For this we use the question mark /?/, which means “the preceding character or nothing”, as shown in Fig. 2.5."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"We can think of the question mark as meaning ""zero or one instances of the previous character"". That is, it's a way of specifying how many of something that we want, something that is very important in regular expressions. For example, consider the language of certain sheep, which consists of strings that look like the following: baa! baaa! baaaa! baaaaa! . . ."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"This language consists of strings with a b, followed by at least two a's, followed by an exclamation point. The set of operators that allows us to say things like ""some number of as"" are based on the asterisk or *, commonly called the Kleene * (gen-Kleene * erally pronounced ""cleany star""). The Kleene star means ""zero or more occurrences of the immediately previous character or regular expression"". So /a*/ means ""any string of zero or more as"". This will match a or aaaaaa, but it will also match Off Minor since the string Off Minor has zero a's. So the regular expression for matching one or more a is /aa*/, meaning one a followed by zero or more as. More complex patterns can also be repeated. So /[ab]*/ means ""zero or more a's or b's"" (not ""zero or more right square braces""). This will match strings like aaaa or ababab or bbbb."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"For specifying multiple digits (useful for finding prices) we can extend /[0-9]/, the regular expression for a single digit. An integer (a string of digits) is thus /[0-9][0-9]*/. (Why isn't it just /[0-9]*/?) Sometimes it's annoying to have to write the regular expression for digits twice, so there is a shorter way to specify ""at least one"" of some character. This is the Kleene +, which means ""one or more occurrences of the immediately preceding Kleene + character or regular expression"". Thus, the expression /[0-9]+/ is the normal way to specify ""a sequence of digits"". There are thus two ways to specify the sheep language: /baaa*!/ or /baa+!/."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"One very important special character is the period (/./), a wildcard expression that matches any single character (except a carriage return), as shown in Fig. 2 .6."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"The wildcard is often used together with the Kleene star to mean ""any string of characters"". For example, suppose we want to find any line in which a particular word, for example, aardvark, appears twice. We can specify this with the regular expression /aardvark.*aardvark/."
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"Anchors are special characters that anchor regular expressions to particular places in a string. The most common anchors are the caretˆand the dollar sign $. The caret matches the start of a line. The pattern /ˆThe/ matches the word The only at the start of a line. Thus, the caretˆhas three uses: to match the start of a line, to indicate a negation inside of square brackets, and just to mean a caret. (What are the contexts that allow grep or Python to know which function a given caret is supposed to have?) The dollar sign $ matches the end of a line. So the pattern $ is a useful pattern for matching a space at the end of a line, and /ˆThe dog\.$/ matches a line that contains only the phrase The dog. (We have to use the backslash here since we want the . to mean ""period"" and not the wildcard.)"
2,Regular Expressions,2.1,Regular Expressions,2.1.1,Basic Regular Expressions,"There are also two other anchors: \b matches a word boundary, and \B matches a non-boundary. Thus, /\bthe\b/ matches the word the but not the word other. More technically, a ""word"" for the purposes of a regular expression is defined as any sequence of digits, underscores, or letters; this is based on the definition of ""words"" in programming languages. For example, /\b99\b/ will match the string 99 in There are 99 bottles of beer on the wall (because 99 follows a space) but not 99 in There are 299 bottles of beer on the wall (since 99 follows a number). But it will match 99 in $99 (since 99 follows a dollar sign ($), which is not a digit, underscore, or letter)."
2,Regular Expressions,2.1,Regular Expressions,2.1.2,"Disjunction, Grouping and Precendence","Suppose we need to search for texts about pets; perhaps we are particularly interested in cats and dogs. In such a case, we might want to search for either the string cat or the string dog. Since we can't use the square brackets to search for ""cat or dog"" (why can't we say /[catdog]/?), we need a new operator, the disjunction operator, also disjunction called the pipe symbol |. The pattern /cat|dog/ matches either the string cat or the string dog."
2,Regular Expressions,2.1,Regular Expressions,2.1.2,"Disjunction, Grouping and Precendence","Sometimes we need to use this disjunction operator in the midst of a larger sequence. For example, suppose I want to search for information about pet fish for my cousin David. How can I specify both guppy and guppies? We cannot simply say /guppy|ies/, because that would match only the strings guppy and ies. This is because sequences like guppy take precedence over the disjunction operator |."
2,Regular Expressions,2.1,Regular Expressions,2.1.2,"Disjunction, Grouping and Precendence","precedence To make the disjunction operator apply only to a specific pattern, we need to use the parenthesis operators ( and ). Enclosing a pattern in parentheses makes it act like a single character for the purposes of neighboring operators like the pipe | and the Kleene*. So the pattern /gupp(y|ies)/ would specify that we meant the disjunction only to apply to the suffixes y and ies."
2,Regular Expressions,2.1,Regular Expressions,2.1.2,"Disjunction, Grouping and Precendence","The parenthesis operator ( is also useful when we are using counters like the Kleene*. Unlike the | operator, the Kleene* operator applies by default only to a single character, not to a whole sequence. Suppose we want to match repeated instances of a string. Perhaps we have a line that has column labels of the form Column 1 Column 2 Column 3. The expression /Column [0-9]+ */ will not match any number of columns; instead, it will match a single column followed by any number of spaces! The star here applies only to the space that precedes it, not to the whole sequence. With the parentheses, we could write the expression /(Column [0-9]+ *)*/ to match the word Column, followed by a number and optional spaces, the whole pattern repeated zero or more times."
2,Regular Expressions,2.1,Regular Expressions,2.1.2,"Disjunction, Grouping and Precendence","This idea that one operator may take precedence over another, requiring us to sometimes use parentheses to specify what we mean, is formalized by the operator precedence hierarchy for regular expressions. The following table gives the order of RE operator precedence, from highest precedence to lowest precedence."
2,Regular Expressions,2.1,Regular Expressions,2.1.2,"Disjunction, Grouping and Precendence","Thus, because counters have a higher precedence than sequences,"
2,Regular Expressions,2.1,Regular Expressions,2.1.2,"Disjunction, Grouping and Precendence","/the*/ matches theeeee but not thethe. Because sequences have a higher precedence than disjunction, /the|any/ matches the or any but not thany or theny. Patterns can be ambiguous in another way. Consider the expression /[a-z]*/ when matching against the text once upon a time. Since /[a-z]*/ matches zero or more letters, this expression could match nothing, or just the first letter o, on, onc, or once. In these cases regular expressions always match the largest string they can; we say that patterns are greedy, expanding to cover as much of a string as they can. There are, however, ways to enforce non-greedy matching, using another meaning of the ? qualifier. The operator *? is a Kleene star that matches as little text as possible. The operator +? is a Kleene plus that matches as little text as possible."
2,Regular Expressions,2.1,Regular Expressions,2.1.3,A Simple Example,Suppose we wanted to write a RE to find cases of the English article the. A simple (but incorrect) pattern might be: /the/
2,Regular Expressions,2.1,Regular Expressions,2.1.3,A Simple Example,"One problem is that this pattern will miss the word when it begins a sentence and hence is capitalized (i.e., The). This might lead us to the following pattern: /[tT]he/"
2,Regular Expressions,2.1,Regular Expressions,2.1.3,A Simple Example,"But we will still incorrectly return texts with the embedded in other words (e.g., other or theology). So we need to specify that we want instances with a word boundary on both sides: /\b[tT]he\b/"
2,Regular Expressions,2.1,Regular Expressions,2.1.3,A Simple Example,Suppose we wanted to do this without the use of /\b/. We might want this since /\b/ won't treat underscores and numbers as word boundaries; but we might want to find the in some context where it might also have underlines or numbers nearby (the_ or the25). We need to specify that we want instances in which there are no alphabetic letters on either side of the the: /[^a-zA-Z][tT]he[^a-zA-Z]/
2,Regular Expressions,2.1,Regular Expressions,2.1.3,A Simple Example,"But there is still one more problem with this pattern: it won’t find the word the when it begins a line. This is because the regular expression [ˆa-zA-Z], which we used to avoid embedded instances of the, implies that there must be some single (although non-alphabetic) character before the the. We can avoid this by specifying that before the the we require either the beginning-of-line or a non-alphabetic character, and the same at the end of the line: /(ˆ|[ˆa-zA-Z])[tT]he([ˆa-zA-Z]|$)/"
2,Regular Expressions,2.1,Regular Expressions,2.1.3,A Simple Example,"The process we just went through was based on fixing two kinds of errors: false positives, strings that we incorrectly matched like other or there, and false negafalse positives tives, strings that we incorrectly missed, like The. Addressing these two kinds of false negatives errors comes up again and again in implementing speech and language processing systems. Reducing the overall error rate for an application thus involves two antagonistic efforts:"
2,Regular Expressions,2.1,Regular Expressions,2.1.3,A Simple Example,• Increasing precision (minimizing false positives) • Increasing recall (minimizing false negatives)
2,Regular Expressions,2.1,Regular Expressions,2.1.3,A Simple Example,We'll come back to precision and recall with more precise definitions in Chapter 4.
2,Regular Expressions,2.1,Regular Expressions,2.1.4,More Operators,"Figure 2.8 shows some aliases for common ranges, which can be used mainly to save typing. Besides the Kleene * and Kleene + we can also use explicit numbers as counters, by enclosing them in curly brackets. The regular expression /{3}/ means ""exactly 3 occurrences of the previous character or expression"". So /a\.{24}z/ will match a followed by 24 dots followed by z (but not a followed by 23 or 25 dots followed by a z)."
2,Regular Expressions,2.1,Regular Expressions,2.1.4,More Operators,"A range of numbers can also be specified. So /{n,m}/ specifies from n to m occurrences of the previous char or expression, and /{n,}/ means at least n occurrences of the previous expression. REs for counting are summarized in Fig. 2 .9."
2,Regular Expressions,2.1,Regular Expressions,2.1.4,More Operators,"Finally, certain special characters are referred to by special notation based on the backslash (\) (see Fig. 2 .10). The most common of these are the newline character newline \n and the tab character \t. To refer to characters that are special themselves (like ., *, [, and \) , precede them with a backslash, (i.e., /\./, /\*/, /\[/, and /\\/)."
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,"Let’s try out a more significant example of the power of REs. Suppose we want to build an application to help a user buy a computer on the Web. The user might want “any machine with at least 6 GHz and 500 GB of disk space for less than $1000”. To do this kind of retrieval, we first need to be able to look for expressions like 6GHz or 500 GB or Mac or $999.99. In the rest of this section we’ll work out some simple regular expressions for this task."
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,"First, let’s complete our regular expression for prices. Here’s a regular expression for a dollar sign followed by a string of digits:"
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,/$[0-9]+/
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,"Note that the $ character has a different function here than the end-of-line function we discussed earlier. Most regular expression parsers are smart enough to realize that $ here doesn’t mean end-of-line. (As a thought experiment, think about how regex parsers might figure out the function of $ from the context.)"
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,Now we just need to deal with fractions of dollars. We’ll add a decimal point and two digits afterwards: /$[0-9]+\.[0-9][0-9]/
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,This pattern only allows $199.99 but not $199. We need to make the cents
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,optional and to make sure we're at a word boundary: /(^|\W)$[0-9]+(\.[0-9][0-9])?\b/
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,"One last catch! This pattern allows prices like $199999.99 which would be far too expensive! We need to limit the dollars: /(ˆ|\W)$[0-9]{0,3}(\.[0-9][0-9])?\b/"
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,"How about disk space? We’ll need to allow for optional fractions again (5.5 GB); note the use of ? for making the final s optional, and the of / */ to mean “zero or more spaces” since there might always be extra spaces lying around: /\b[0-9]+(\.[0-9]+)? *(GB|[Gg]igabytes?)\b/"
2,Regular Expressions,2.1,Regular Expressions,2.1.5,A More Complex Example,Modifying this regular expression so that it only matches more than 500 GB is left as an exercise for the reader.
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","An important use of regular expressions is in substitutions. For example, the substitution operator s/regexp1/pattern/ used in Python and in Unix commands like vim or sed allows a string characterized by a regular expression to be replaced by another string: s/colour/color/"
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","It is often useful to be able to refer to a particular subpart of the string matching the first pattern. For example, suppose we wanted to put angle brackets around all integers in a text, for example, changing the 35 boxes to the <35> boxes. We'd like a way to refer to the integer we've found so that we can easily add the brackets. To do this, we put parentheses ( and ) around the first pattern and use the number operator \1 in the second pattern to refer back. Here's how it looks: s/([0-9]+)/<\1>/"
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","The parenthesis and number operators can also specify that a certain string or expression must occur twice in the text. For example, suppose we are looking for the pattern ""the Xer they were, the Xer they will be"", where we want to constrain the two X's to be the same string. We do this by surrounding the first X with the parenthesis operator, and replacing the second X with the number operator \1, as follows: /the (.*)er they were, the \1er they will be/"
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","Here the \1 will be replaced by whatever string matched the first item in parentheses. So this will match the bigger they were, the bigger they will be but not the bigger they were, the faster they will be."
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","This use of parentheses to store a pattern in memory is called a capture group. Every time a capture group is used (i.e., parentheses surround a pattern), the resulting match is stored in a numbered register. If you match two different sets of register parentheses, \\2 means whatever matched the second capture group. Thus /the (.*)er they (.*), the \\1er we \\2/ will match the faster they ran, the faster we ran but not the faster they ran, the faster we ate. Similarly, the third capture group is stored in \\3, the fourth is \\4, and so on."
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","Parentheses thus have a double function in regular expressions; they are used to group terms for specifying the order in which operators should apply, and they are used to capture something in a register. Occasionally we might want to use parentheses for grouping, but don't want to capture the resulting pattern in a register. In that case we use a non-capturing group, which is specified by putting the commands non-capturing group ?: after the open paren, in the form (?: pattern). /(?:some|a few) (people|cats) like some \\1/ will match some cats like some cats but not some cats like some a few."
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","Substitutions and capture groups are very useful in implementing simple chatbots like ELIZA (Weizenbaum, 1966). Recall that ELIZA simulates a Rogerian psychologist by carrying on conversations like the following: User 1 : Men are all alike. ELIZA 1 : IN WHAT WAY User 2 : They're always bugging us about something or other. ELIZA 2 : CAN YOU THINK OF A SPECIFIC EXAMPLE User 3 : Well, my boyfriend made me come here. ELIZA 3 : YOUR BOYFRIEND MADE YOU COME HERE User 4 : He says I'm depressed much of the time. ELIZA 4 : I AM SORRY TO HEAR YOU ARE DEPRESSED"
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","ELIZA works by having a series or cascade of regular expression substitutions each of which matches and changes some part of the input lines. Input lines are 2.2 \u2022 WORDS 11 first uppercased. The first substitutions then change all instances of MY to YOUR, and I'M to YOU ARE, and so on. The next set of substitutions matches and replaces other patterns in the input. Here are some examples:"
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA",s/.* I’M (depressed|sad) .*/I AM SORRY TO HEAR YOU ARE \1/
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA",s/.* I AM (depressed|sad) .*/WHY DO YOU THINK YOU ARE \1/
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA",s/.* all .*/IN WHAT WAY/
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA",s/.* always .*/CAN YOU THINK OF A SPECIFIC EXAMPLE/
2,Regular Expressions,2.1,Regular Expressions,2.1.6,"Substitution, Capture Groups, and ELIZA","Since multiple substitutions can apply to a given input, substitutions are assigned a rank and applied in order. Creating patterns is the topic of Exercise 2.3, and we return to the details of the ELIZA architecture in Chapter 24."
2,Regular Expressions,2.1,Regular Expressions,2.1.7,Lookahead Assertions,"Finally, there will be times when we need to predict the future: look ahead in the text to see if some pattern matches, but not advance the match cursor, so that we can then deal with the pattern if it occurs."
2,Regular Expressions,2.1,Regular Expressions,2.1.7,Lookahead Assertions,"These lookahead assertions make use of the (? syntax that we saw in the previous section for non-capture groups. The operator (?= pattern) is true if pattern occurs, but is zero-width, i.e. the match pointer doesn’t advance. The operator (?! pattern) only returns true if a pattern does not match, but again is zero-width and doesn’t advance the cursor. Negative lookahead is commonly used when we are parsing some complex pattern but want to rule out a special case. For example suppose we want to match, at the beginning of a line, any single word that doesn’t start with “Volcano”. We can use negative lookahead to do this: /ˆ(?!Volcano)[A-Za-z]+/"
2,Regular Expressions,2.2,Words,,,"Before we talk about processing words, we need to decide what counts as a word. Let’s start by looking at one particular corpus (plural corpora), a computer-readable collection of text or speech. For example the Brown corpus is a million-word collection of samples from 500 written English texts from different genres (newspaper, fiction, non-fiction, academic, etc.), assembled at Brown University in 1963-64 (Kučera and Francis, 1967) . How many words are in the following Brown sentence?"
2,Regular Expressions,2.2,Words,,,"He stepped out into the hall, was delighted to encounter a water brother."
2,Regular Expressions,2.2,Words,,,"This sentence has 13 words if we don't count punctuation marks as words, 15 if we count punctuation. Whether we treat period ("".""), comma ("",""), and so on as words depends on the task. Punctuation is critical for finding boundaries of things (commas, periods, colons) and for identifying some aspects of meaning (question marks, exclamation marks, quotation marks). For some tasks, like part-of-speech tagging or parsing or speech synthesis, we sometimes treat punctuation marks as if they were separate words."
2,Regular Expressions,2.2,Words,,,"The Switchboard corpus of American English telephone conversations between strangers was collected in the early 1990s; it contains 2430 conversations averaging 6 minutes each, totaling 240 hours of speech and about 3 million words (Godfrey et al., 1992) . Such corpora of spoken language don't have punctuation but do intro-duce other complications with regard to defining words. Let's look at one utterance from Switchboard; an utterance is the spoken correlate of a sentence: I do uh main-mainly business data processing This utterance has two kinds of disfluencies. The broken-off word main-is disfluency called a fragment. Words like uh and um are called fillers or filled pauses. Should we consider these to be words? Again, it depends on the application. If we are building a speech transcription system, we might want to eventually strip out the disfluencies."
2,Regular Expressions,2.2,Words,,,"But we also sometimes keep disfluencies around. Disfluencies like uh or um are actually helpful in speech recognition in predicting the upcoming word, because they may signal that the speaker is restarting the clause or idea, and so for speech recognition they are treated as regular words. Because people use different disfluencies they can also be a cue to speaker identification. In fact Clark and Fox Tree (2002) showed that uh and um have different meanings. What do you think they are?"
2,Regular Expressions,2.2,Words,,,"Are capitalized tokens like They and uncapitalized tokens like they the same word? These are lumped together in some tasks (speech recognition), while for partof-speech or named-entity tagging, capitalization is a useful feature and is retained."
2,Regular Expressions,2.2,Words,,,"How about inflected forms like cats versus cat? These two words have the same lemma cat but are different wordforms. A lemma is a set of lexical forms having lemma the same stem, the same major part-of-speech, and the same word sense. The wordform is the full inflected or derived form of the word. For morphologically complex wordform languages like Arabic, we often need to deal with lemmatization. For many tasks in English, however, wordforms are sufficient."
2,Regular Expressions,2.2,Words,,,"How many words are there in English? To answer this question we need to distinguish two ways of talking about words. Types are the number of distinct words word type in a corpus; if the set of words in the vocabulary is V , the number of types is the vocabulary size |V |. Tokens are the total number N of running words. If we ignore word token punctuation, the following Brown sentence has 16 tokens and 14 types:"
2,Regular Expressions,2.2,Words,,,"They picnicked by the pool, then lay back on the grass and looked at the stars."
2,Regular Expressions,2.2,Words,,,"When we speak about the number of words in the language, we are generally referring to word types."
2,Regular Expressions,2.2,Words,,,"Tokens = N Types = |V | Shakespeare 884 thousand 31 thousand Brown corpus 1 million 38 thousand Switchboard telephone conversations 2.4 million 20 thousand COCA 440 million 2 million Google n-grams 1 trillion 13 million Figure 2 .11 Rough numbers of types and tokens for some English language corpora. The largest, the Google n-grams corpus, contains 13 million types, but this count only includes types appearing 40 or more times, so the true number would be much larger."
2,Regular Expressions,2.2,Words,,,"Fig. 2 .11 shows the rough numbers of types and tokens computed from some popular English corpora. The larger the corpora we look at, the more word types we find, and in fact this relationship between the number of types |V | and number of tokens N is called Herdan's Law (Herdan, 1960) or Heaps' Law (Heaps, 1978) Herdan's Law Heaps' Law after its discoverers (in linguistics and information retrieval respectively). It is shown in Eq. 2.1, where k and β are positive constants, and 0 < β < 1."
2,Regular Expressions,2.2,Words,,,|V| = kN^β   (2.1)
2,Regular Expressions,2.2,Words,,,"The value of β depends on the corpus size and the genre, but at least for the large corpora in Fig. 2 .11, β ranges from .67 to .75. Roughly then we can say that the vocabulary size for a text goes up significantly faster than the square root of its length in words."
2,Regular Expressions,2.2,Words,,,"Another measure of the number of words in the language is the number of lemmas instead of wordform types. Dictionaries can help in giving lemma counts; dictionary entries or boldface forms are a very rough upper bound on the number of lemmas (since some lemmas have multiple boldface forms). The 1989 edition of the Oxford English Dictionary had 615,000 entries."
2,Regular Expressions,2.3,Corpora,,,"Words don't appear out of nowhere. Any particular piece of text that we study is produced by one or more specific speakers or writers, in a specific dialect of a specific language, at a specific time, in a specific place, for a specific function."
2,Regular Expressions,2.3,Corpora,,,"Perhaps the most important dimension of variation is the language. NLP algorithms are most useful when they apply across many languages. The world has 7097 languages at the time of this writing, according to the online Ethnologue catalog (Simons and Fennig, 2018) . It is important to test algorithms on more than one language, and particularly on languages with different properties; by contrast there is an unfortunate current tendency for NLP algorithms to be developed or tested just on English (Bender, 2019) . Even when algorithms are developed beyond English, they tend to be developed for the official languages of large industrialized nations (Chinese, Spanish, Japanese, German etc.), but we don't want to limit tools to just these few languages. Furthermore, most languages also have multiple varieties, often spoken in different regions or by different social groups. Thus, for example, if we're processing text that uses features of African American English (AAE) or AAE African American Vernacular English (AAVE) -the variations of English used by millions of people in African American communities (King 2020) -we must use NLP tools that function with features of those varieties. Twitter posts might use features often used by speakers of African American English, such as constructions like iont (I don't in Mainstream American English (MAE)), or talmbout corresponding MAE to MAE talking about, both examples that influence word segmentation (Blodgett et al. 2016 , Jones 2015 ."
2,Regular Expressions,2.3,Corpora,,,"It's also quite common for speakers or writers to use multiple languages in a single communicative act, a phenomenon called code switching. Code switching (2.2) Por primera vez veo a @username actually being hateful! it was beautiful:)"
2,Regular Expressions,2.3,Corpora,,,"[For the first time I get to see @username actually being hateful! it was beautiful:) ] (2.3) dost tha or ra-hega ... dont wory ... but dherya rakhe [""he was and will remain a friend ... don't worry ..."
2,Regular Expressions,2.3,Corpora,,,"Another dimension of variation is the genre. The text that our algorithms must process might come from newswire, fiction or non-fiction books, scientific articles, Wikipedia, or religious texts. It might come from spoken genres like telephone conversations, business meetings, police body-worn cameras, medical interviews, or transcripts of television shows or movies. It might come from work situations like doctors' notes, legal text, or parliamentary or congressional proceedings."
2,Regular Expressions,2.3,Corpora,,,"Text also reflects the demographic characteristics of the writer (or speaker): their age, gender, race, socioeconomic class can all influence the linguistic properties of the text we are processing."
2,Regular Expressions,2.3,Corpora,,,"And finally, time matters too. Language changes over time, and for some languages we have good corpora of texts from different historical periods."
2,Regular Expressions,2.3,Corpora,,,"Because language is so situated, when developing computational models for language processing from a corpus, it's important to consider who produced the language, in what context, for what purpose. How can a user of a dataset know all these details? The best way is for the corpus creator to build a datasheet (Gebru et al., 2020) or data statement (Bender and Friedman, 2018) for each corpus. A datasheet specifies properties of a dataset like:"
2,Regular Expressions,2.3,Corpora,,,"Motivation: Why was the corpus collected, by whom, and who funded it? Situation: When and in what situation was the text written/spoken? For example, was there a task? Was the language originally spoken conversation, edited text, social media communication, monologue vs. dialogue? Language variety: What language (including dialect/region) was the corpus in? Speaker demographics: What was, e.g., age or gender of the authors of the text? Collection process: How big is the data? If it is a subsample how was it sampled?"
2,Regular Expressions,2.3,Corpora,,,"Was the data collected with consent? How was the data pre-processed, and what metadata is available? Annotation process: What are the annotations, what are the demographics of the annotators, how were they trained, how was the data annotated? Distribution: Are there copyright or other intellectual property restrictions?"
2,Regular Expressions,2.4,Text Normalization,,,"Before almost any natural language processing of a text, the text has to be normalized. At least three tasks are commonly applied as part of any normalization process:"
2,Regular Expressions,2.4,Text Normalization,,,1. Tokenizing (segmenting) words 2. Normalizing word formats 3. Segmenting sentences In the next sections we walk through each of these tasks.
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"Let's begin with an easy, if somewhat naive version of word tokenization and normalization (and frequency computation) that can be accomplished for English solely in a single UNIX command-line, inspired by Church (1994) . We'll make use of some Unix commands: tr, used to systematically change particular characters in the input; sort, which sorts input lines in alphabetical order; and uniq, which collapses and counts adjacent identical lines."
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"For example let's begin with the 'complete words' of Shakespeare in one file, sh.txt. We can use tr to tokenize the words by changing every sequence of nonalphabetic characters to a newline ('A-Za-z' means alphabetic, the -c option complements to non-alphabet, and the -s option squeezes all sequences into a single character): tr -sc 'A-Za-z' '\n' < sh.txt"
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"The output of this command will be:
THE
SONNETS
by
William
Shakespeare
From
fairest
creatures
We
..."
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"Now that there is one word per line, we can sort the lines, and pass them to uniq -c which will collapse and count them: tr -sc 'A-Za-z' '\n' < sh.txt | sort | uniq -c with the following output:"
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"1945 A
72 AARON
19 ABBESS
25 Aaron
6 Abate
1 Abates
5 Abbess
6 Abbey
3 Abbot
..."
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"Alternatively, we can collapse all the upper case to lower case:"
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"tr -sc 'A-Za-z' '\n' < sh.txt | tr A-Z a-z | sort | uniq -c whose output is 14725 a 97 aaron 1 abaissiez 10 abandon 2 abandoned 2 abase 1 abash 14 abate 3 abated 3 abatement ... Now we can sort again to find the frequent words. The -n option to sort means to sort numerically rather than alphabetically, and the -r option means to sort in reverse order (highest-to-lowest):"
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,tr -sc 'A-Za-z' '\n' < sh.txt | tr A-Z a-z | sort | uniq -c | sort -n -r
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"The results show that the most frequent words in Shakespeare, as in any other corpus, are the short function words like articles, pronouns, prepositions:"
2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,Unix tools of this sort can be very handy in building quick word count statistics for any corpus.
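2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"The same crude tokenization and counting can be done in a few lines of Python; this sketch assumes a plain-text file named sh.txt, following the example above:
import re
from collections import Counter

# Crude tokenization: every maximal run of alphabetic characters is a token, lowercased,
# mirroring tr -sc 'A-Za-z' '\n' < sh.txt | tr A-Z a-z | sort | uniq -c | sort -n -r
with open('sh.txt') as f:
    words = re.findall(r'[a-z]+', f.read().lower())
for word, count in Counter(words).most_common(10):
    print(count, word)"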
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"The simple UNIX tools above were fine for getting rough word statistics but more sophisticated algorithms are generally necessary for tokenization, the task of segmenting running text into words."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"While the Unix command sequence just removed all the numbers and punctuation, for most NLP applications we’ll need to keep these in our tokenization. We often want to break off punctuation as a separate token; commas are a useful piece of information for parsers, periods help indicate sentence boundaries. But we’ll often want to keep the punctuation that occurs word internally, in examples like m.p.h., Ph.D., AT&T, and cap’n. Special characters and numbers will need to be kept in prices ($45.55) and dates (01/02/06); we don’t want to segment that price into separate tokens of “45” and “55”. And there are URLs (http://www.stanford.edu), Twitter hashtags (#nlproc), or email addresses (someone@cs.colorado.edu)."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"Number expressions introduce other complications as well; while commas normally appear at word boundaries, commas are used inside numbers in English, every three digits: 555,500.50. Languages, and hence tokenization requirements, differ on this; many continental European languages like Spanish, French, and German, by contrast, use a comma to mark the decimal point, and spaces (or sometimes periods) where English puts commas, for example, 555 500,50."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"A tokenizer can also be used to expand clitic contractions that are marked by apostrophes, for example, converting what’re to the two tokens what are, and we’re to we are. A clitic is a part of a word that can’t stand on its own, and can only occur when it is attached to another word. Some such contractions occur in other alphabetic languages, including articles and pronouns in French (j’ai, l’homme)."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"Depending on the application, tokenization algorithms may also tokenize multiword expressions like New York or rock ’n’ roll as a single token, which requires a multiword expression dictionary of some sort. Tokenization is thus intimately tied up with named entity recognition, the task of detecting names, dates, and organizations (Chapter 8)."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"One commonly used tokenization standard is known as the Penn Treebank tokenization standard, used for the parsed corpora (treebanks) released by the Linguistic Data Consortium (LDC), the source of many useful datasets. This standard separates out clitics (doesn’t becomes does plus n’t), keeps hyphenated words together, and separates out all punctuation (to save space we’re showing visible spaces ‘ ’ between tokens, although newlines is a more common output):"
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"Input: ""The San Francisco-based restaurant,"" they said, ""doesn't charge $10"". Output: "" The San Francisco-based restaurant , "" they said , "" does n't charge $ 10 "" ."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"In practice, since tokenization needs to be run before any other language processing, it needs to be very fast. The standard method for tokenization is therefore to use deterministic algorithms based on regular expressions compiled into very efficient finite state automata. For example, Fig. 2 .12 shows an example of a basic regular expression that can be used to tokenize with the nltk.regexp tokenize function of the Python-based Natural Language Toolkit (NLTK) (Bird et al. 2009; http://www.nltk.org)."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,">>> text = 'That U.S.A. poster-print costs $12.40...' >>> pattern = r''' (?x) # set flag to allow verbose regexps . ['That', 'U.S.A.', 'costs', '$12.40', '...'] Figure 2 .12 A Python trace of regular expression tokenization in the NLTK Python-based natural language processing toolkit (Bird et al., 2009) , commented for readability; the (?x) verbose flag tells Python to strip comments and whitespace."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"Carefully designed deterministic algorithms can deal with the ambiguities that arise, such as the fact that the apostrophe needs to be tokenized differently when used as a genitive marker (as in the book's cover), a quotative as in 'The other class', she said, or in clitics like they're."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"Word tokenization is more complex in languages like written Chinese, Japanese, and Thai, which do not use spaces to mark potential word-boundaries. In Chinese, for example, words are composed of characters (called hanzi in Chinese). Each hanzi character generally represents a single unit of meaning (called a morpheme) and is pronounceable as a single syllable. Words are about 2.4 characters long on average. But deciding what counts as a word in Chinese is complex. For example, consider the following sentence:"
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"(2.4) 姚明进入总决赛 ""Yao Ming reaches the finals"""
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"As Chen et al. (2017b) point out, this could be treated as 3 words ('Chinese Treebank' segmentation):"
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,(2.5) 姚明 YaoMing 进入 reaches 总决赛 finals or as 5 words ('Peking University' segmentation):
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"(2.6) 姚 Yao 明 Ming 进入 reaches 总 overall 决赛 finals Finally, it is possible in Chinese simply to ignore words altogether and use characters as the basic elements, treating the sentence as a series of 7 characters:"
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,(2.7) 姚 Yao 明 Ming 进 enter 入 enter 总 overall 决 decision 赛 game
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"In fact, for most Chinese NLP tasks it turns out to work better to take characters rather than words as input, since characters are at a reasonable semantic level for most applications, and since most word standards, by contrast, result in a huge vocabulary with large numbers of very rare words (Li et al., 2019b) ."
2,Regular Expressions,2.4,Text Normalization,2.4.2,Word Tokenization,"However, for Japanese and Thai the character is too small a unit, and so algorithms for word segmentation are required. These can also be useful for Chinese word segmentation in the rare situations where word rather than character boundaries are required. The standard segmentation algorithms for these languages use neural sequence models trained via supervised machine learning on hand-segmented training sets; we'll introduce sequence models in Chapter 8 and Chapter 9."
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"There is a third option to tokenizing text. Instead of defining tokens as words (whether delimited by spaces or more complex algorithms), or as characters (as in Chinese), we can use our data to automatically tell us what the tokens should be. This is especially useful in dealing with unknown words, an important problem in language processing. As we will see in the next chapter, NLP algorithms often learn some facts about language from one corpus (a training corpus) and then use these facts to make decisions about a separate test corpus and its language. Thus if our training corpus contains, say the words low, new, newer, but not lower, then if the word lower appears in our test corpus, our system will not know what to do with it."
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"To deal with this unknown word problem, modern tokenizers often automatically induce sets of tokens that include tokens smaller than words, called subwords. subwords Subwords can be arbitrary substrings, or they can be meaning-bearing units like the morphemes -est or -er. (A morpheme is the smallest meaning-bearing unit of a language; for example the word unlikeliest has the morphemes un-, likely, and -est.) In modern tokenization schemes, most tokens are words, but some tokens are frequently occurring morphemes or other subwords like -er. Every unseen word like lower can thus be represented by some sequence of known subword units, such as low and er, or even as a sequence of individual letters if necessary."
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"Most tokenization schemes have two parts: a token learner, and a token segmenter. The token learner takes a raw training corpus (sometimes roughly preseparated into words, for example by whitespace) and induces a vocabulary, a set of tokens. The token segmenter takes a raw test sentence and segments it into the tokens in the vocabulary. Three algorithms are widely used: byte-pair encoding (Sennrich et al., 2016) , unigram language modeling (Kudo, 2018) , and WordPiece (Schuster and Nakajima, 2012) ; there is also a SentencePiece library that includes implementations of the first two of the three (Kudo and Richardson, 2018) ."
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"In this section we introduce the simplest of the three, the byte-pair encoding or BPE algorithm (Sennrich et al., 2016) ; see Fig. 2 .13. The BPE token learner begins BPE with a vocabulary that is just the set of all individual characters. It then examines the training corpus, chooses the two symbols that are most frequently adjacent (say 'A', 'B'), adds a new merged symbol 'AB' to the vocabulary, and replaces every adjacent 'A' 'B' in the corpus with the new 'AB'. It continues to count and merge, creating new longer and longer character strings, until k merges have been done creating k novel tokens; k is thus a parameter of the algorithm. The resulting vocabulary consists of the original set of characters plus k new symbols."
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"The algorithm is usually run inside words (not merging across word boundaries), so the input corpus is first white-space-separated to give a set of strings, each corresponding to the characters of a word, plus a special end-of-word symbol , and its counts. Let's see its operation on the following tiny input corpus of 18 word tokens with counts for each word (the word low appears 5 times, the word newer 6 times, and so on), which would have a starting vocabulary of 11 letters"
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"The BPE algorithm first counts all pairs of adjacent symbols: the most frequent is the pair e r because it occurs in newer (frequency of 6) and wider (frequency of 3) for a total of 9 occurrences 1 . We then merge these symbols, treating er as one symbol, and count again:"
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"Now the most frequent pair is er , which we merge; our system has learned that there should be a token for word-final er, represented as er :"
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"Next, the pair n e (with a total count of 8) gets merged to ne."
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"If we continue, the next merges are:"
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"Once we've learned our vocabulary, the token parser is used to tokenize a test sentence. The token parser just runs on the test data the merges we have learned from the training data, greedily, in the order we learned them. (Thus the frequencies in the test data don't play a role, just the frequencies in the training data). So first we segment each test sentence word into characters. Then we apply the first rule: replace every instance of e r in the test corpus with er, and then the second rule: replace every instance of er in the test corpus with er , and so on. By the end, if the test corpus contained the word n e w e r , it would be tokenized as a full word. But a new (unknown) word like l o w e r would be merged into the two tokens low er ."
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"The token learner part of the BPE algorithm for taking a corpus broken up into individual characters or bytes, and learning a vocabulary by iteratively merging tokens."
2,Regular Expressions,2.4,Text Normalization,2.4.3,Byte-Pair Encoding for Tokenization,"Of course in real algorithms BPE is run with many thousands of merges on a very large input corpus. The result is that most words will be represented as full symbols, and only the very rare words (and unknown words) will have to be represented by their parts."
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming","Word normalization is the task of putting words/tokens in a standard format, choosing a single normal form for words with multiple forms like USA and US or uh-huh and uhhuh. This standardization may be valuable, despite the spelling information that is lost in the normalization process. For information retrieval or information extraction about the US, we might want to see information from documents whether they mention the US or the USA."
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming","Case folding is another kind of normalization. Mapping everything to lowercase means that Woodchuck and woodchuck are represented identically, which is very helpful for generalization in many tasks, such as information retrieval or speech recognition. For sentiment analysis and other text classification tasks, information extraction, and machine translation, by contrast, case can be quite helpful and case folding is generally not done. This is because maintaining the difference between, for example, US the country and us the pronoun can outweigh the advantage in generalization that case folding would have provided for other words. For many natural language processing situations we also want two morphologically different forms of a word to behave similarly. For example in web search, someone may type the string woodchucks but a useful system might want to also return pages that mention woodchuck with no s. This is especially common in morphologically complex languages like Russian, where for example the word Moscow has different endings in the phrases Moscow, of Moscow, to Moscow, and so on."
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming","Lemmatization is the task of determining that two words have the same root, despite their surface differences. The words am, are, and is have the shared lemma be; the words dinner and dinners both have the lemma dinner. Lemmatizing each of these forms to the same lemma will let us find all mentions of words in Russian like Moscow. The lemmatized form of a sentence like He is reading detective stories would thus be He be read detective story."
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming","How is lemmatization done? The most sophisticated methods for lemmatization involve complete morphological parsing of the word. Morphology is the study of the way words are built up from smaller meaning-bearing units called morphemes. Two broad classes of morphemes can be distinguished: stems-the central morpheme of the word, supplying the main meaning-and affixes-adding ""additional"" meanings of various kinds. So, for example, the word fox consists of one morpheme (the morpheme fox) and the word cats consists of two: the morpheme cat and the morpheme -s. A morphological parser takes a word like cats and parses it into the two morphemes cat and s, or parses a Spanish word like amaren ('if in the future they would love') into the morpheme amar 'to love', and the morphological features 3PL and future subjunctive."
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming",The Porter Stemmer
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming","Lemmatization algorithms can be complex. For this reason we sometimes make use of a simpler but cruder method, which mainly consists of chopping off word-final affixes. This naive version of morphological analysis is called stemming. One of stemming the most widely used stemming algorithms is the Porter (1980) . The Porter stemmer stemmer applied to the following paragraph:"
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming","This was not the map we found in Billy Bones's chest, but an accurate copy, complete in all things-names and heights and soundings-with the single exception of the red crosses and the written notes."
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming",produces the following stemmed output:
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming",Thi wa not the map we found in Billi Bone s chest but an accur copi complet in all thing name and height and sound with the singl except of the red cross and the written note
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming","The algorithm is based on series of rewrite rules run in series, as a cascade, in cascade which the output of each pass is fed as input to the next pass; here is a sampling of the rules: ATIONAL → ATE (e.g., relational → relate) ING → if stem contains vowel (e.g., motoring → motor) SSES → SS (e.g., grasses → grass)"
2,Regular Expressions,2.4,Text Normalization,2.4.4,"Word Normalization, Lemmatization and Stemming","Detailed rule lists for the Porter stemmer, as well as code (in Java, Python, etc.) can be found on Martin Porter's homepage; see also the original paper (Porter, 1980) . Simple stemmers can be useful in cases where we need to collapse across different variants of the same lemma. Nonetheless, they do tend to commit errors of both over-and under-generalizing, as shown in the table below (Krovetz, 1993"
2,Regular Expressions,2.4,Text Normalization,2.4.5,Sentence Segmentation,"Sentence segmentation is another important step in text processing. The most useful cues for segmenting a text into sentences are punctuation, like periods, question marks, and exclamation points. Question marks and exclamation points are relatively unambiguous markers of sentence boundaries. Periods, on the other hand, are more ambiguous. The period character ""."" is ambiguous between a sentence boundary marker and a marker of abbreviations like Mr. or Inc. The previous sentence that you just read showed an even more complex case of this ambiguity, in which the final period of Inc. marked both an abbreviation and the sentence boundary marker. For this reason, sentence tokenization and word tokenization may be addressed jointly."
2,Regular Expressions,2.4,Text Normalization,2.4.5,Sentence Segmentation,"In general, sentence tokenization methods work by first deciding (based on rules or machine learning) whether a period is part of the word or is a sentence-boundary marker. An abbreviation dictionary can help determine whether the period is part of a commonly used abbreviation; the dictionaries can be hand-built or machinelearned (Kiss and Strunk, 2006) , as can the final sentence splitter. In the Stanford CoreNLP toolkit (Manning et al., 2014) , for example sentence splitting is rule-based, a deterministic consequence of tokenization; a sentence ends when a sentence-ending punctuation (., !, or ?) is not already grouped with other characters into a token (such as for an abbreviation or number), optionally followed by additional final quotes or brackets."
2,Regular Expressions,2.5,Minimum Edit Distance,,,"Much of natural language processing is concerned with measuring how similar two strings are. For example in spelling correction, the user typed some erroneous string—let’s say graffe–and we want to know what the user meant. The user probably intended a word that is similar to graffe. Among candidate similar words, the word giraffe, which differs by only one letter from graffe, seems intuitively to be more similar than, say grail or graf, which differ in more letters. Another example comes from coreference, the task of deciding whether two strings such as the following refer to the same entity:"
2,Regular Expressions,2.5,Minimum Edit Distance,,,Stanford President Marc Tessier-Lavigne
2,Regular Expressions,2.5,Minimum Edit Distance,,,Stanford University President Marc Tessier-Lavigne
2,Regular Expressions,2.5,Minimum Edit Distance,,,"Again, the fact that these two strings are very similar (differing by only one word) seems like useful evidence for deciding that they might be coreferent. Edit distance gives us a way to quantify both of these intuitions about string similarity. More formally, the minimum edit distance between two strings is defined as the minimum number of editing operations (operations like insertion, deletion, substitution) needed to transform one string into another."
2,Regular Expressions,2.5,Minimum Edit Distance,,,"The gap between intention and execution, for example, is 5 (delete an i, substitute e for n, substitute x for t, insert c, substitute u for n). It's much easier to see this by looking at the most important visualization for string distances, an alignment between the two strings, shown in Fig. 2 .14. Given two sequences, an alignment is a correspondence between substrings of the two sequences. Thus, we say I aligns with the empty string, N with E, and so on. Beneath the aligned strings is another representation; a series of symbols expressing an operation list for converting the top string into the bottom string: d for deletion, s for substitution, i for insertion. Figure 2 .14 Representing the minimum edit distance between two strings as an alignment."
2,Regular Expressions,2.5,Minimum Edit Distance,,,"We can also assign a particular cost or weight to each of these operations. The Levenshtein distance between two sequences is the simplest weighting factor in which each of the three operations has a cost of 1 (Levenshtein, 1966)-we assume that the substitution of a letter for itself, for example, t for t, has zero cost. The Levenshtein distance between intention and execution is 5. Levenshtein also proposed an alternative version of his metric in which each insertion or deletion has a cost of 1 and substitutions are not allowed. (This is equivalent to allowing substitution, but giving each substitution a cost of 2 since any substitution can be represented by one insertion and one deletion). Using this version, the Levenshtein distance between intention and execution is 8."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"How do we find the minimum edit distance? We can think of this as a search task, in which we are searching for the shortest path-a sequence of edits-from one string to another."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"The space of all possible edits is enormous, so we can't search naively. However, lots of distinct edit paths will end up in the same state (string), so rather than recomputing all those paths, we could just remember the shortest path to a state each time we saw it. We can do this by using dynamic programming. Dynamic programming dynamic programming is the name for a class of algorithms, first introduced by Bellman (1957) , that apply a table-driven method to solve problems by combining solutions to sub-problems. Some of the most commonly used algorithms in natural language processing make use of dynamic programming, such as the Viterbi algorithm (Chapter 8) and the CKY algorithm for parsing (Chapter 13) ."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"The intuition of a dynamic programming problem is that a large problem can be solved by properly combining the solutions to various sub-problems. Consider the shortest path of transformed words that represents the minimum edit distance between the strings intention and execution shown in Fig. 2.16."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"Imagine some string (perhaps it is exention) that is in this optimal path (whatever it is). The intuition of dynamic programming is that if exention is in the optimal operation list, then the optimal sequence must also include the optimal path from intention to exention. Why? If there were a shorter path from intention to exention, then we could use it instead, resulting in a shorter overall path, and the optimal sequence wouldn't be optimal, thus leading to a contradiction."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,The minimum edit distance algorithm was named by Wagner and Fischer (1974) but independently discovered by many people (see the Historical Notes section of Chapter 8).
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"Let's first define the minimum edit distance between two strings. Given two strings, the source string X of length n, and target string Y of length m, we'll define D [i, j] as the edit distance between X[1..i] and Y [1.. j], i.e., the first i characters of X and the first j characters of Y . The edit distance between X and Y is thus D [n, m] ."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"We'll use dynamic programming to compute D[n, m] bottom up, combining solutions to subproblems. In the base case, with a source substring of length i but an empty target string, going from i characters to 0 requires i deletes. With a target substring of length j but an empty source going from 0 characters to j characters requires j inserts. Having computed D[i, j] for small i, j we then compute larger D[i, j] based on previously computed smaller values. The value of D[i, j] is computed by taking the minimum of the three possible paths through the matrix which arrive there:"
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"D[i, j] = min D[i − 1, j] + del-cost(source[i]) D[i, j − 1] + ins-cost(target[ j]) D[i − 1, j − 1] + sub-cost(source[i], target[ j])"
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"If we assume the version of Levenshtein distance in which the insertions and deletions each have a cost of 1 (ins-cost(•) = del-cost(•) = 1), and substitutions have a cost of 2 (except substitution of identical letters have zero cost), the computation for D [i, j] becomes:"
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"D[i, j] = min D[i − 1, j] + 1 D[i, j − 1] + 1 D[i − 1, j − 1] + 2; if source[i] = target[ j] 0; if source[i] = target[ j] (2.8)"
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,The algorithm is summarized in Fig. 2.17; Fig. 2.18 shows the results of applying the algorithm to the distance between intention and execution with the version of Levenshtein in Eq. 2.8.
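2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"A minimal Python sketch of the dynamic programming recurrence in Eq. 2.8 (insertions and deletions cost 1, substitutions cost 2, zero cost for identical letters); it is a compact illustration, not the book's pseudocode.

def min_edit_distance(source, target):
    n, m = len(source), len(target)
    # D[i][j] = edit distance between source[:i] and target[:j]
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i                      # i deletions to reach the empty target
    for j in range(1, m + 1):
        D[0][j] = j                      # j insertions from the empty source
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else 2
            D[i][j] = min(D[i - 1][j] + 1,          # deletion
                          D[i][j - 1] + 1,          # insertion
                          D[i - 1][j - 1] + sub)    # substitution (or copy)
    return D[n][m]

print(min_edit_distance('intention', 'execution'))   # 8 with these costs"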
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,Alignment
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"Knowing the minimum edit distance is useful for algorithms like finding potential spelling error corrections. But the edit distance algorithm is important in another way; with a small change, it can also provide the minimum cost alignment between two strings. Aligning two strings is useful throughout speech and language processing. In speech recognition, minimum edit distance alignment is used to compute the word error rate (Chapter 26). Alignment plays a role in machine translation, in which sentences in a parallel corpus (a corpus with a text in two languages) need to be matched to each other."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"The minimum edit distance algorithm, an example of the class of dynamic programming algorithms. The various costs can either be fixed (e.g., ∀x, ins-cost(x) = 1) or can be specific to the letter (to model the fact that some letters are more likely to be inserted than others). We assume that there is no cost for substituting a letter for itself (i.e., sub-cost(x, x) = 0)."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"Figure 2.18 Computation of minimum edit distance between intention and execution with the algorithm of Fig. 2 .17, using Levenshtein distance with cost of 1 for insertions or deletions, 2 for substitutions."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"To extend the edit distance algorithm to produce an alignment, we can start by visualizing an alignment as a path through the edit distance matrix. Figure 2 .19 shows this path with the boldfaced cell. Each boldfaced cell represents an alignment of a pair of letters in the two strings. If two boldfaced cells occur in the same row, there will be an insertion in going from the source to the target; two boldfaced cells in the same column indicate a deletion."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"Figure 2 .19 also shows the intuition of how to compute this alignment path. The computation proceeds in two steps. In the first step, we augment the minimum edit distance algorithm to store backpointers in each cell. The backpointer from a cell points to the previous cell (or cells) that we came from in entering the current cell. We've shown a schematic of these backpointers in Fig. 2.19 . Some cells have multiple backpointers because the minimum extension could have come from multiple previous cells. In the second step, we perform a backtrace. In a backtrace, we start from the last cell (at the final row and column), and follow the pointers back through the dynamic programming matrix. Each complete path between the final cell and the initial cell is a minimum distance alignment. Exercise 2.7 asks you to modify the minimum edit distance algorithm to store the pointers and compute the backtrace to output an alignment."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"Figure 2 .19 When entering a value in each cell, we mark which of the three neighboring cells we came from with up to three arrows. After the table is full we compute an alignment (minimum edit path) by using a backtrace, starting at the 8 in the lower-right corner and following the arrows back. The sequence of bold cells represents one possible minimum cost alignment between the two strings. Diagram design after Gusfield (1997)."
2,Regular Expressions,2.5,Minimum Edit Distance,2.5.1,The Minimum Edit Distance Algorithm,"While we worked our example with simple Levenshtein distance, the algorithm in Fig. 2 .17 allows arbitrary weights on the operations. For spelling correction, for example, substitutions are more likely to happen between letters that are next to each other on the keyboard. The Viterbi algorithm is a probabilistic extension of minimum edit distance. Instead of computing the ""minimum edit distance"" between two strings, Viterbi computes the ""maximum probability alignment"" of one string with another. We'll discuss this more in Chapter 8."
2,Regular Expressions,2.6,Summary,,,"This chapter introduced a fundamental tool in language processing, the regular expression, and showed how to perform basic text normalization tasks including word segmentation and normalization, sentence segmentation, and stemming. We also introduced the important minimum edit distance algorithm for comparing strings. Here's a summary of the main points we covered about these ideas:"
2,Regular Expressions,2.6,Summary,,,• The regular expression language is a powerful tool for pattern-matching.
2,Regular Expressions,2.6,Summary,,,"• Basic operations in regular expressions include concatenation of symbols, disjunction of symbols ([], |, and .), counters (*, +, and {n,m}), anchors (ˆ, $) and precedence operators ((,) )."
2,Regular Expressions,2.6,Summary,,,• Word tokenization and normalization are generally done by cascades of simple regular expression substitutions or finite automata.
2,Regular Expressions,2.6,Summary,,,"• The Porter algorithm is a simple and efficient way to do stemming, stripping off affixes. It does not have high accuracy but may be useful for some tasks."
2,Regular Expressions,2.6,Summary,,,"• The minimum edit distance between two strings is the minimum number of operations it takes to edit one into the other. Minimum edit distance can be computed by dynamic programming, which also results in an alignment of the two strings."
2,Regular Expressions,2.7,Bibliographical and Historical Notes,,,"Kleene 1951; 1956 first defined regular expressions and the finite automaton, based on the McCulloch-Pitts neuron. Ken Thompson was one of the first to build regular expressions compilers into editors for text searching (Thompson, 1968) . His editor ed included a command ""g/regular expression/p"", or Global Regular Expression Print, which later became the Unix grep utility. Text normalization algorithms have been applied since the beginning of the field. One of the earliest widely used stemmers was Lovins (1968) . Stemming was also applied early to the digital humanities, by Packard (1973) , who built an affix-stripping morphological parser for Ancient Greek. Currently a wide variety of code for tokenization and normalization is available, such as the Stanford Tokenizer (http://nlp.stanford.edu/software/tokenizer.shtml) or specialized tokenizers for Twitter (O'Connor et al., 2010) , or for sentiment (http: //sentiment.christopherpotts.net/tokenizing.html). See Palmer (2012) for a survey of text preprocessing. NLTK is an essential tool that offers both useful Python libraries (http://www.nltk.org) and textbook descriptions (Bird et al., 2009 ) of many algorithms including text normalization and corpus interfaces."
2,Regular Expressions,2.7,Bibliographical and Historical Notes,,,"For more on Herdan's law and Heaps' Law, see Herdan (1960 , p. 28), Heaps (1978 , Egghe (2007) and Baayen (2001) ; Yasseri et al. (2012) discuss the relationship with other measures of linguistic complexity. For more on edit distance, see the excellent Gusfield (1997) . Our example measuring the edit distance from 'intention' to 'execution' was adapted from Kruskal (1983) . There are various publicly available packages to compute edit distance, including Unix diff and the NIST sclite program (NIST, 2005) ."
2,Regular Expressions,2.7,Bibliographical and Historical Notes,,,In his autobiography Bellman (1984) explains how he originally came up with the term dynamic programming:
2,Regular Expressions,2.7,Bibliographical and Historical Notes,,,"""...The 1950s were not good years for mathematical research. [the] Secretary of Defense ...had a pathological fear and hatred of the word, research... I decided therefore to use the word, ""programming"". I wanted to get across the idea that this was dynamic, this was multistage... I thought, let's ... take a word that has an absolutely precise meaning, namely dynamic... it's impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to."""
3,N-gram Language Models,,,,,"""You are uniformly charming!"" cried he, with a smile of associating and now and then I bowed and they perceived a chaise and four to wish for. Random sentence generated from a Jane Austen trigram model"
3,N-gram Language Models,,,,,"Predicting is difficult-especially about the future, as the old quip goes. But how about predicting something that seems much easier, like the next few words someone is going to say? What word, for example, is likely to follow"
3,N-gram Language Models,,,,,Please turn your homework ...
3,N-gram Language Models,,,,,"Hopefully, most of you concluded that a very likely word is in, or possibly over, but probably not refrigerator or the. In the following sections we will formalize this intuition by introducing models that assign a probability to each possible next word. The same models will also serve to assign a probability to an entire sentence. Such a model, for example, could predict that the following sequence has a much higher probability of appearing in a text:"
3,N-gram Language Models,,,,,all of a sudden I notice three guys standing on the sidewalk
3,N-gram Language Models,,,,,than does this same set of words in a different order:
3,N-gram Language Models,,,,,on guys all I of notice sidewalk three a sudden standing the
3,N-gram Language Models,,,,,"Why would you want to predict upcoming words, or assign probabilities to sentences? Probabilities are essential in any task in which we have to identify words in noisy, ambiguous input, like speech recognition. For a speech recognizer to realize that you said I will be back soonish and not I will be bassoon dish, it helps to know that back soonish is a much more probable sequence than bassoon dish. For writing tools like spelling correction or grammatical error correction, we need to find and correct errors in writing like Their are two midterms, in which There was mistyped as Their, or Everything has improve, in which improve should have been improved. The phrase There are will be much more probable than Their are, and has improved than has improve, allowing us to help users by detecting and correcting these errors."
3,N-gram Language Models,,,,,Assigning probabilities to sequences of words is also essential in machine translation. Suppose we are translating a Chinese source sentence:
3,N-gram Language Models,,,,,他 向 记者 介绍了 主要 内容 He to reporters introduced main content
3,N-gram Language Models,,,,,As part of the process we might have built the following set of potential rough English translations:
3,N-gram Language Models,,,,,he introduced reporters to the main contents of the statement
3,N-gram Language Models,,,,,he briefed to reporters the main contents of the statement
3,N-gram Language Models,,,,,he briefed reporters on the main contents of the statement
3,N-gram Language Models,,,,,"A probabilistic model of word sequences could suggest that briefed reporters on is a more probable English phrase than briefed to reporters (which has an awkward to after briefed) or introduced reporters to (which uses a verb that is less fluent English in this context), allowing us to correctly select the boldfaced sentence above."
3,N-gram Language Models,,,,,"Probabilities are also important for augmentative and alternative communication systems (Trnka et al. 2007 , Kane et al. 2017 . People often use such AAC devices if they are physically unable to speak or sign but can instead use eye gaze or other specific movements to select words from a menu to be spoken by the system. Word prediction can be used to suggest likely words for the menu."
3,N-gram Language Models,,,,,"Models that assign probabilities to sequences of words are called language models or LMs. In this chapter we introduce the simplest model that assigns probabilities to sentences and sequences of words, the n-gram. An n-gram is a sequence of n words: a 2-gram (which we'll call bigram) is a two-word sequence of words like ""please turn"", ""turn your"", or ""your homework"", and a 3-gram (a trigram) is a three-word sequence of words like ""please turn your"", or ""turn your homework"". We'll see how to use n-gram models to estimate the probability of the last word of an n-gram given the previous words, and also to assign probabilities to entire sequences. In a bit of terminological ambiguity, we usually drop the word ""model"", and use the term n-gram (and bigram, etc.) to mean either the word sequence itself or the predictive model that assigns it a probability. While n-gram models are much simpler than state-of-the art neural language models based on the RNNs and transformers we will introduce in Chapter 9, they are an important foundational tool for understanding the fundamental concepts of language modeling."
3,N-gram Language Models,3.1,N-Grams,,,"Let's begin with the task of computing P(w|h), the probability of a word w given some history h. Suppose the history h is ""its water is so transparent that"" and we want to know the probability that the next word is the:"
3,N-gram Language Models,3.1,N-Grams,,,P(the|its water is so transparent that). (3.1)
3,N-gram Language Models,3.1,N-Grams,,,"One way to estimate this probability is from relative frequency counts: take a very large corpus, count the number of times we see its water is so transparent that, and count the number of times this is followed by the. This would be answering the question ""Out of the times we saw the history h, how many times was it followed by the word w"", as follows:"
3,N-gram Language Models,3.1,N-Grams,,,P(\text{the} \mid \text{its water is so transparent that}) = \frac{C(\text{its water is so transparent that the})}{C(\text{its water is so transparent that})} \quad (3.2)
3,N-gram Language Models,3.1,N-Grams,,,"With a large enough corpus, such as the web, we can compute these counts and estimate the probability from Eq. 3.2. You should pause now, go to the web, and compute this estimate for yourself."
3,N-gram Language Models,3.1,N-Grams,,,"While this method of estimating probabilities directly from counts works fine in many cases, it turns out that even the web isn't big enough to give us good estimates in most cases. This is because language is creative; new sentences are created all the time, and we won't always be able to count entire sentences. Even simple extensions of the example sentence may have counts of zero on the web (such as ""Walden Pond's water is so transparent that the""; well, used to have counts of zero)."
3,N-gram Language Models,3.1,N-Grams,,,"Similarly, if we wanted to know the joint probability of an entire sequence of words like its water is so transparent, we could do it by asking ""out of all possible sequences of five words, how many of them are its water is so transparent?"" We would have to get the count of its water is so transparent and divide by the sum of the counts of all possible five word sequences. That seems rather a lot to estimate! For this reason, we'll need to introduce more clever ways of estimating the probability of a word w given a history h, or the probability of an entire word sequence W . Let's start with a little formalizing of notation. To represent the probability of a particular random variable X i taking on the value ""the"", or P(X i = ""the""), we will use the simplification P(the). We'll represent a sequence of N words either as w 1 . . . w n or w 1:n (so the expression w 1:n−1 means the string w 1 , w 2 , ..., w n−1 ). For the joint probability of each word in a sequence having a particular value P(X = w 1 ,Y = w 2 , Z = w 3 , ...,W = w n ) we'll use P(w 1 , w 2 , ..., w n )."
3,N-gram Language Models,3.1,N-Grams,,,"Now how can we compute probabilities of entire sequences like P(w 1 , w 2 , ..., w n )? One thing we can do is decompose this probability using the chain rule of probability:"
3,N-gram Language Models,3.1,N-Grams,,,P(X_1 \ldots X_n) = P(X_1)P(X_2 \mid X_1)P(X_3 \mid X_{1:2}) \ldots P(X_n \mid X_{1:n-1}) = \prod_{k=1}^{n} P(X_k \mid X_{1:k-1}) \quad (3.3)
3,N-gram Language Models,3.1,N-Grams,,,"Applying the chain rule to words, we get P(w 1:n ) = P(w 1 )P(w 2 |w 1 )P(w 3 |w 1:2 ) . . . P(w n |w 1: n−1 ) = n k=1 P(w k |w 1:k−1 ) (3.4)"
3,N-gram Language Models,3.1,N-Grams,,,"The chain rule shows the link between computing the joint probability of a sequence and computing the conditional probability of a word given previous words. Equation 3.4 suggests that we could estimate the joint probability of an entire sequence of words by multiplying together a number of conditional probabilities. But using the chain rule doesn't really seem to help us! We don't know any way to compute the exact probability of a word given a long sequence of preceding words, P(w n |w n−1 1 ). As we said above, we can't just estimate by counting the number of times every word occurs following every long string, because language is creative and any particular context might have never occurred before!"
3,N-gram Language Models,3.1,N-Grams,,,"The intuition of the n-gram model is that instead of computing the probability of a word given its entire history, we can approximate the history by just the last few words."
3,N-gram Language Models,3.1,N-Grams,,,"The bigram model, for example, approximates the probability of a word given all the previous words P(w n |w 1:n−1 ) by using only the conditional probability of the preceding word P(w n |w n−1 ). In other words, instead of computing the probability"
3,N-gram Language Models,3.1,N-Grams,,,P(the|Walden Pond's water is so transparent that) (3.5)
3,N-gram Language Models,3.1,N-Grams,,,we approximate it with the probability
3,N-gram Language Models,3.1,N-Grams,,,P(the|that) (3.6)
3,N-gram Language Models,3.1,N-Grams,,,"When we use a bigram model to predict the conditional probability of the next word, we are thus making the following approximation:"
3,N-gram Language Models,3.1,N-Grams,,,P(w_n \mid w_{1:n-1}) \approx P(w_n \mid w_{n-1}) \quad (3.7)
3,N-gram Language Models,3.1,N-Grams,,,"The assumption that the probability of a word depends only on the previous word is called a Markov assumption. Markov models are the class of probabilistic models Markov that assume we can predict the probability of some future unit without looking too far into the past. We can generalize the bigram (which looks one word into the past) to the trigram (which looks two words into the past) and thus to the n-gram (which n-gram looks n − 1 words into the past). Thus, the general equation for this n-gram approximation to the conditional probability of the next word in a sequence is"
3,N-gram Language Models,3.1,N-Grams,,,P(w_n \mid w_{1:n-1}) \approx P(w_n \mid w_{n-N+1:n-1}) \quad (3.8)
3,N-gram Language Models,3.1,N-Grams,,,"Given the bigram assumption for the probability of an individual word, we can compute the probability of a complete word sequence by substituting Eq. 3.7 into Eq. 3.4:"
3,N-gram Language Models,3.1,N-Grams,,,P(w_{1:n}) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-1}) \quad (3.9)
3,N-gram Language Models,3.1,N-Grams,,,"How do we estimate these bigram or n-gram probabilities? An intuitive way to estimate probabilities is called maximum likelihood estimation or MLE. We get maximum likelihood estimation the MLE estimate for the parameters of an n-gram model by getting counts from a corpus, and normalizing the counts so that they lie between 0 and 1. 1 normalize For example, to compute a particular bigram probability of a word y given a previous word x, we'll compute the count of the bigram C(xy) and normalize by the sum of all the bigrams that share the same first word x:"
3,N-gram Language Models,3.1,N-Grams,,,P(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n)}{\sum_{w} C(w_{n-1} w)} \quad (3.10)
3,N-gram Language Models,3.1,N-Grams,,,"We can simplify this equation, since the sum of all bigram counts that start with a given word w n−1 must be equal to the unigram count for that word w n−1 (the reader should take a moment to be convinced of this):"
3,N-gram Language Models,3.1,N-Grams,,,P(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n)}{C(w_{n-1})} \quad (3.11)
3,N-gram Language Models,3.1,N-Grams,,,"Let's work through an example using a mini-corpus of three sentences. We'll first need to augment each sentence with a special symbol at the beginning of the sentence, to give us the bigram context of the first word. We'll also need a special end-symbol. 2 I am Sam Sam I am I do not like green eggs and ham Here are the calculations for some of the bigram probabilities from this corpus P(I|) = 2 3 = .67 P(Sam|) = 1 3 = .33 P(am|I) = 2 3 = .67 P(|Sam) = 1 2 = 0.5 P(Sam|am) = 1 2 = .5 P(do|I) = 1 3 = .33 For the general case of MLE n-gram parameter estimation:"
3,N-gram Language Models,3.1,N-Grams,,,P(w_n \mid w_{n-N+1:n-1}) = \frac{C(w_{n-N+1:n-1} w_n)}{C(w_{n-N+1:n-1})} \quad (3.12)
3,N-gram Language Models,3.1,N-Grams,,,"Equation 3.12 (like Eq. 3.11) estimates the n-gram probability by dividing the observed frequency of a particular sequence by the observed frequency of a prefix. This ratio is called a relative frequency. We said above that this use of relative frequencies as a way to estimate probabilities is an example of maximum likelihood estimation or MLE. In MLE, the resulting parameter set maximizes the likelihood of the training set T given the model M (i.e., P(T |M)). For example, suppose the word Chinese occurs 400 times in a corpus of a million words like the Brown corpus. What is the probability that a random word selected from some other text of, say, a million words will be the word Chinese? The MLE of its probability is 400 1000000 or .0004. Now .0004 is not the best possible estimate of the probability of Chinese occurring in all situations; it might turn out that in some other corpus or context Chinese is a very unlikely word. But it is the probability that makes it most likely that Chinese will occur 400 times in a million-word corpus. We present ways to modify the MLE estimates slightly to get better probability estimates in Section 3.5."
3,N-gram Language Models,3.1,N-Grams,,,"Let's move on to some examples from a slightly larger corpus than our 14-word example above. We'll use data from the now-defunct Berkeley Restaurant Project, a dialogue system from the last century that answered questions about a database of restaurants in Berkeley, California (Jurafsky et al., 1994) . Here are some textnormalized sample user queries (a sample of 9332 sentences is on the website):"
3,N-gram Language Models,3.1,N-Grams,,,"can you tell me about any good cantonese restaurants close by
mid priced thai food is what i'm looking for
tell me about chez panisse
can you give me a listing of the kinds of food that are available
i'm looking for a good place to eat breakfast
when is caffe venezia open during the day"
3,N-gram Language Models,3.1,N-Grams,,,"Figure 3 .1 shows the bigram counts from a piece of a bigram grammar from the Berkeley Restaurant Project. Note that the majority of the values are zero. In fact, we have chosen the sample words to cohere with each other; a matrix selected from a random set of seven words would be even more sparse ."
3,N-gram Language Models,3.1,N-Grams,,,"We leave it as Exercise 3.2 to compute the probability of i want chinese food. What kinds of linguistic phenomena are captured in these bigram statistics? Some of the bigram probabilities above encode some facts that we think of as strictly syntactic in nature, like the fact that what comes after eat is usually a noun or an adjective, or that what comes after to is usually a verb. Others might be a fact about the personal assistant task, like the high probability of sentences beginning with the words I. And some might even be cultural rather than linguistic, like the higher probability that people are looking for Chinese versus English food."
3,N-gram Language Models,3.1,N-Grams,,,"Some practical issues: Although for pedagogical purposes we have only described bigram models, in practice it's more common to use trigram models, which condition on the previous two words rather than the previous word, or 4-gram or even 5-gram models, when there is sufficient training data. Note that for these larger n-grams, we'll need to assume extra contexts to the left and right of the sentence end. For example, to compute trigram probabilities at the very beginning of the sentence, we use two pseudo-words for the first trigram (i.e., P(I|)."
3,N-gram Language Models,3.1,N-Grams,,,"We always represent and compute language model probabilities in log format as log probabilities. Since probabilities are (by definition) less than or equal to 1, the more probabilities we multiply together, the smaller the product becomes. Multiplying enough n-grams together would result in numerical underflow. By using log probabilities instead of raw probabilities, we get numbers that are not as small. Adding in log space is equivalent to multiplying in linear space, so we combine log probabilities by adding them. The result of doing all computation and storage in log space is that we only need to convert back into probabilities if we need to report them at the end; then we can just take the exp of the logprob:"
3,N-gram Language Models,3.2,Evaluating Language Models,,,p_1 \times p_2 \times p_3 \times p_4 = \exp(\log p_1 + \log p_2 + \log p_3 + \log p_4) \quad (3.13)
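3,N-gram Language Models,3.2,Evaluating Language Models,,,"A quick numerical check of this identity in Python, with made-up probability values:

import math

probs = [0.1, 0.05, 0.2, 0.01]              # assumed example probabilities
logprob = sum(math.log(p) for p in probs)   # add in log space
print(math.exp(logprob))                    # 1e-05 (up to floating-point rounding)
print(0.1 * 0.05 * 0.2 * 0.01)              # 1e-05, the same product in linear space"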
3,N-gram Language Models,3.2,Evaluating Language Models,,,"The best way to evaluate the performance of a language model is to embed it in an application and measure how much the application improves. Such end-to-end evaluation is called extrinsic evaluation. Extrinsic evaluation is the only way to extrinsic evaluation know if a particular improvement in a component is really going to help the task at hand. Thus, for speech recognition, we can compare the performance of two language models by running the speech recognizer twice, once with each language model, and seeing which gives the more accurate transcription."
3,N-gram Language Models,3.2,Evaluating Language Models,,,"Unfortunately, running big NLP systems end-to-end is often very expensive. Instead, it would be nice to have a metric that can be used to quickly evaluate potential improvements in a language model. An intrinsic evaluation metric is one that mea-intrinsic evaluation sures the quality of a model independent of any application."
3,N-gram Language Models,3.2,Evaluating Language Models,,,"For an intrinsic evaluation of a language model we need a test set. As with many of the statistical models in our field, the probabilities of an n-gram model come from the corpus it is trained on, the training set or training corpus. We can then measure training set the quality of an n-gram model by its performance on some unseen data called the test set or test corpus. We will also sometimes call test sets and other datasets that test set are not in our training sets held out corpora because we hold them out from the held out training data."
3,N-gram Language Models,3.2,Evaluating Language Models,,,"So if we are given a corpus of text and want to compare two different n-gram models, we divide the data into training and test sets, train the parameters of both models on the training set, and then compare how well the two trained models fit the test set."
3,N-gram Language Models,3.2,Evaluating Language Models,,,"But what does it mean to ""fit the test set""? The answer is simple: whichever model assigns a higher probability to the test set-meaning it more accurately predicts the test set-is a better model. Given two probabilistic models, the better model is the one that has a tighter fit to the test data or that better predicts the details of the test data, and hence will assign a higher probability to the test data."
3,N-gram Language Models,3.2,Evaluating Language Models,,,"Since our evaluation metric is based on test set probability, it's important not to let the test sentences into the training set. Suppose we are trying to compute the probability of a particular ""test"" sentence. If our test sentence is part of the training corpus, we will mistakenly assign it an artificially high probability when it occurs in the test set. We call this situation training on the test set. Training on the test set introduces a bias that makes the probabilities all look too high, and causes huge inaccuracies in perplexity, the probability-based metric we introduce below."
3,N-gram Language Models,3.2,Evaluating Language Models,,,"Sometimes we use a particular test set so often that we implicitly tune to its characteristics. We then need a fresh test set that is truly unseen. In such cases, we call the initial test set the development test set or, devset. How do we divide our data into training, development, and test sets? We want our test set to be as large as possible, since a small test set may be accidentally unrepresentative, but we also want as much training data as possible. At the minimum, we would want to pick the smallest test set that gives us enough statistical power to measure a statistically significant difference between two potential models. In practice, we often just divide our data into 80% training, 10% development, and 10% test. Given a large corpus that we want to divide into training and test, test data can either be taken from some continuous sequence of text inside the corpus, or we can remove smaller ""stripes"" of text from randomly selected parts of our corpus and combine them into a test set."
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"In practice we don't use raw probability as our metric for evaluating language models, but a variant called perplexity. The perplexity (sometimes called PP for short) perplexity of a language model on a test set is the inverse probability of the test set, normalized by the number of words. For a test set W = w 1 w 2 . . . w N ,:"
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,PP(W) = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 \ldots w_N)}} \quad (3.14)
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,We can use the chain rule to expand the probability of W :
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,PP(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_1 \ldots w_{i-1})}} \quad (3.15)
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"Thus, if we are computing the perplexity of W with a bigram language model, we get:"
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,PP(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_{i-1})}} \quad (3.16)
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"Note that because of the inverse in Eq. 3.15, the higher the conditional probability of the word sequence, the lower the perplexity. Thus, minimizing perplexity is equivalent to maximizing the test set probability according to the language model. What we generally use for word sequence in Eq. 3.15 or Eq. 3.16 is the entire sequence of words in some test set. Since this sequence will cross many sentence boundaries, we need to include the begin-and end-sentence markers and in the probability computation. We also need to include the end-of-sentence marker (but not the beginning-of-sentence marker ) in the total count of word tokens N."
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"There is another way to think about perplexity: as the weighted average branching factor of a language. The branching factor of a language is the number of possible next words that can follow any word. Consider the task of recognizing the digits in English (zero, one, two,..., nine), given that (both in some training set and in some test set) each of the 10 digits occurs with equal probability P = 1 10 . The perplexity of this mini-language is in fact 10. To see that, imagine a test string of digits of length"
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"N, and assume that in the training set all the digits occurred with equal probability. By Eq. 3.15, the perplexity will be"
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,PP(W) = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \left(\left(\frac{1}{10}\right)^{N}\right)^{-\frac{1}{N}} = \left(\frac{1}{10}\right)^{-1} = 10 \quad (3.17)
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"But suppose that the number zero is really frequent and occurs far more often than other numbers. Let's say that 0 occur 91 times in the training set, and each of the other digits occurred 1 time each. Now we see the following test set: 0 0 0 0 0 3 0 0 0 0. We should expect the perplexity of this test set to be lower since most of the time the next number will be zero, which is very predictable, i.e. has a high probability. Thus, although the branching factor is still 10, the perplexity or weighted branching factor is smaller. We leave this exact calculation as exercise 12."
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,We see in Section 3.8 that perplexity is also closely related to the information-theoretic notion of entropy.
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"Finally, let's look at an example of how perplexity can be used to compare different n-gram models. We trained unigram, bigram, and trigram grammars on 38 million words (including start-of-sentence tokens) from the Wall Street Journal, using a 19,979 word vocabulary. We then computed the perplexity of each of these models on a test set of 1.5 million words with Eq. 3.16. The table below shows the perplexity of a 1.5 million word WSJ test set according to each of these grammars."
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"As we see above, the more information the n-gram gives us about the word sequence, the lower the perplexity (since as Eq. 3.15 showed, perplexity is related inversely to the likelihood of the test sequence according to the model)."
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"Note that in computing perplexities, the n-gram model P must be constructed without any knowledge of the test set or any prior knowledge of the vocabulary of the test set. Any kind of knowledge of the test set can cause the perplexity to be artificially low. The perplexity of two language models is only comparable if they use identical vocabularies."
3,N-gram Language Models,3.2,Evaluating Language Models,3.2.1,Perplexity,"An (intrinsic) improvement in perplexity does not guarantee an (extrinsic) improvement in the performance of a language processing task like speech recognition or machine translation. Nonetheless, because perplexity often correlates with such improvements, it is commonly used as a quick check on an algorithm. But a model's improvement in perplexity should always be confirmed by an end-to-end evaluation of a real task before concluding the evaluation of the model."
3,N-gram Language Models,3.3,Sampling sentences from a language model,,,"One important way to visualize what kind of knowledge a language model embodies is to sample from it. Sampling from a distribution means to choose random points sampling according to their likelihood. Thus sampling from a language model-which represents a distribution over sentences-means to generate some sentences, choosing each sentence according to its likelihood as defined by the model. Thus we are more likely to generate sentences that the model thinks have a high probability and less likely to generate sentences that the model thinks have a low probability."
3,N-gram Language Models,3.3,Sampling sentences from a language model,,,"This technique of visualizing a language model by sampling was first suggested very early on by Shannon (1951) and Miller and Selfridge (1950) It's simplest to visualize how this works for the unigram case. Imagine all the words of the English language covering the probability space between 0 and 1, each word covering an interval proportional to its frequency. Fig. 3 .3 shows a visualization, using a unigram LM computed from the text of this book. We choose a random value between 0 and 1, find that point on the probability line, and print the word whose interval includes this chosen value. We continue choosing random numbers and generating words until we randomly generate the sentence-final token ."
3,N-gram Language Models,3.3,Sampling sentences from a language model,,,"Figure 3 .3 A visualization of the sampling distribution for sampling sentences by repeatedly sampling unigrams. The blue bar represents the frequency of each word. The number line shows the cumulative probabilities. If we choose a random number between 0 and 1, it will fall in an interval corresponding to some word. The expectation for the random number to fall in the larger intervals of one of the frequent words (the, of, a) is much higher than in the smaller interval of one of the rare words (polyphonic)."
3,N-gram Language Models,3.3,Sampling sentences from a language model,,,"We can use the same technique to generate bigrams by first generating a random bigram that starts with (according to its bigram probability). Let's say the second word of that bigram is w. We next choose a random bigram starting with w (again, drawn according to its bigram probability), and so on."
3,N-gram Language Models,3.4,Generalization and Zeros,,,"The n-gram model, like many statistical models, is dependent on the training corpus. One implication of this is that the probabilities often encode specific facts about a given training corpus. Another implication is that n-grams do a better and better job of modeling the training corpus as we increase the value of N."
3,N-gram Language Models,3.4,Generalization and Zeros,,,"We can use the sampling method from the prior section to visualize both of these facts! To give an intuition for the increasing power of higher-order n-grams, Fig. 3 .4 shows random sentences generated from unigram, bigram, trigram, and 4gram models trained on Shakespeare's works."
3,N-gram Language Models,3.4,Generalization and Zeros,,,"The longer the context on which we train the model, the more coherent the sentences. In the unigram sentences, there is no coherent relation between words or any sentence-final punctuation. The bigram sentences have some local word-to-word coherence (especially if we consider that punctuation counts as a word). The tri- .4 Eight sentences randomly generated from four n-grams computed from Shakespeare's works. All characters were mapped to lower-case and punctuation marks were treated as words. Output is hand-corrected for capitalization to improve readability."
3,N-gram Language Models,3.4,Generalization and Zeros,,,"gram and 4-gram sentences are beginning to look a lot like Shakespeare. Indeed, a careful investigation of the 4-gram sentences shows that they look a little too much like Shakespeare. The words It cannot be but so are directly from King John. This is because, not to put the knock on Shakespeare, his oeuvre is not very large as corpora go (N = 884, 647,V = 29, 066), and our n-gram probability matrices are ridiculously sparse. There are V 2 = 844, 000, 000 possible bigrams alone, and the number of possible 4-grams is V 4 = 7 × 10 17 . Thus, once the generator has chosen the first 4-gram (It cannot be but), there are only five possible continuations (that, I, he, thou, and so); indeed, for many 4-grams, there is only one continuation."
3,N-gram Language Models,3.4,Generalization and Zeros,,,"To get an idea of the dependence of a grammar on its training set, let's look at an n-gram grammar trained on a completely different corpus: the Wall Street Journal (WSJ) newspaper. Shakespeare and the Wall Street Journal are both English, so we might expect some overlap between our n-grams for the two genres. Fig. 3 .5 shows sentences generated by unigram, bigram, and trigram grammars trained on 40 million words from WSJ. .5 Three sentences randomly generated from three n-gram models computed from 40 million words of the Wall Street Journal, lower-casing all characters and treating punctuation as words. Output was then hand-corrected for capitalization to improve readability."
3,N-gram Language Models,3.4,Generalization and Zeros,,,"Compare these examples to the pseudo-Shakespeare in Fig. 3 .4. While they both model ""English-like sentences"", there is clearly no overlap in generated sentences, and little overlap even in small phrases. Statistical models are likely to be pretty useless as predictors if the training sets and the test sets are as different as Shakespeare and WSJ."
3,N-gram Language Models,3.4,Generalization and Zeros,,,"How should we deal with this problem when we build n-gram models? One step is to be sure to use a training corpus that has a similar genre to whatever task we are trying to accomplish. To build a language model for translating legal documents, we need a training corpus of legal documents. To build a language model for a question-answering system, we need a training corpus of questions."
3,N-gram Language Models,3.4,Generalization and Zeros,,,"It is equally important to get training data in the appropriate dialect or variety, especially when processing social media posts or spoken transcripts. For example some tweets will use features of African American Language (AAL)-the name for the many variations of language used in African American communities (King, 2020) . Such features include words like finna-an auxiliary verb that marks immediate future tense -that don't occur in other varieties, or spellings like den for then, in tweets like this one (Blodgett and O'Connor, 2017):"
3,N-gram Language Models,3.4,Generalization and Zeros,,,"(3.18) Bored af den my phone finna die!!! while tweets from varieties like Nigerian English have markedly different vocabulary and n-gram patterns from American English (Jurgens et al., 2017):"
3,N-gram Language Models,3.4,Generalization and Zeros,,,"(3.19) @username R u a wizard or wat gan sef: in d mornin -u tweet, afternoon -u tweet, nyt gan u dey tweet. beta get ur IT placement wiv twitter"
3,N-gram Language Models,3.4,Generalization and Zeros,,,"Matching genres and dialects is still not sufficient. Our models may still be subject to the problem of sparsity. For any n-gram that occurred a sufficient number of times, we might have a good estimate of its probability. But because any corpus is limited, some perfectly acceptable English word sequences are bound to be missing from it. That is, we'll have many cases of putative ""zero probability n-grams"" that should really have some non-zero probability. Consider the words that follow the bigram denied the in the WSJ Treebank3 corpus, together with their counts: denied the allegations: 5 denied the speculation: 2 denied the rumors: 1 denied the report: 1"
3,N-gram Language Models,3.4,Generalization and Zeros,,,"But suppose our test set has phrases like: denied the offer denied the loan Our model will incorrectly estimate that the P(offer|denied the) is 0! These zerosthings that don't ever occur in the training set but do occur in zeros the test set-are a problem for two reasons. First, their presence means we are underestimating the probability of all sorts of words that might occur, which will hurt the performance of any application we want to run on this data. Second, if the probability of any word in the test set is 0, the entire probability of the test set is 0. By definition, perplexity is based on the inverse probability of the test set. Thus if some words have zero probability, we can't compute perplexity at all, since we can't divide by 0!"
3,N-gram Language Models,3.4,Generalization and Zeros,3.4.1,Unknown Words,The previous section discussed the problem of words whose bigram probability is zero. But what about words we simply have never seen before?
3,N-gram Language Models,3.4,Generalization and Zeros,3.4.1,Unknown Words,"Sometimes we have a language task in which this can't happen because we know all the words that can occur. In such a closed vocabulary system the test set can only contain words from this lexicon, and there will be no unknown words. This is a reasonable assumption in some domains, such as speech recognition or machine translation, where we have a pronunciation dictionary or a phrase table that are fixed in advance, and so the language model can only use the words in that dictionary or phrase table."
3,N-gram Language Models,3.4,Generalization and Zeros,3.4.1,Unknown Words,"In other cases we have to deal with words we haven't seen before, which we'll call unknown words, or out of vocabulary (OOV) words. is one in which we model these potential unknown words in the test set by adding a pseudo-word called ."
3,N-gram Language Models,3.4,Generalization and Zeros,3.4.1,Unknown Words,There are two common ways to train the probabilities of the unknown word model <UNK>. The first one is to turn the problem back into a closed vocabulary one by choosing a fixed vocabulary in advance:
3,N-gram Language Models,3.4,Generalization and Zeros,3.4.1,Unknown Words,"1. Choose a vocabulary (word list) that is fixed in advance.
2. Convert in the training set any word that is not in this set (any OOV word) to the unknown word token <UNK> in a text normalization step.
3. Estimate the probabilities for <UNK> from its counts just like any other regular word in the training set."
3,N-gram Language Models,3.4,Generalization and Zeros,3.4.1,Unknown Words,"The second alternative, in situations where we don't have a prior vocabulary in advance, is to create such a vocabulary implicitly, replacing words in the training data by based on their frequency. For example we can replace by all words that occur fewer than n times in the training set, where n is some small number, or equivalently select a vocabulary size V in advance (say 50,000) and choose the top V words by frequency and replace the rest by UNK. In either case we then proceed to train the language model as before, treating like a regular word. The exact choice of model does have an effect on metrics like perplexity. A language model can achieve low perplexity by choosing a small vocabulary and assigning the unknown word a high probability. For this reason, perplexities should only be compared across language models with the same vocabularies (Buck et al., 2014)."
3,N-gram Language Models,3.5,Smoothing,,,"What do we do with words that are in our vocabulary (they are not unknown words) but appear in a test set in an unseen context (for example they appear after a word they never appeared after in training)? To keep a language model from assigning zero probability to these unseen events, we'll have to shave off a bit of probability mass from some more frequent events and give it to the events we've never seen. This modification is called smoothing or discounting. In this section and the folsmoothing discounting lowing ones we'll introduce a variety of ways to do smoothing: Laplace (add-one) smoothing, add-k smoothing, stupid backoff, and Kneser-Ney smoothing."
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"The simplest way to do smoothing is to add one to all the n-gram counts, before we normalize them into probabilities. All the counts that used to be zero will now have a count of 1, the counts of 1 will be 2, and so on. This algorithm is called Laplace smoothing. Laplace smoothing does not perform well enough to be used Laplace smoothing in modern n-gram models, but it usefully introduces many of the concepts that we see in other smoothing algorithms, gives a useful baseline, and is also a practical smoothing algorithm for other tasks like text classification (Chapter 4)."
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,Let's start with the application of Laplace smoothing to unigram probabilities. Recall that the unsmoothed maximum likelihood estimate of the unigram probability of the word w i is its count c i normalized by the total number of word tokens N:
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"P(w_i) = \frac{c_i}{N}"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"Laplace smoothing merely adds one to each count (hence its alternate name addone smoothing). Since there are V words in the vocabulary and each one was increadd-one mented, we also need to adjust the denominator to take into account the extra V observations. (What happens to our P values if we don't increase the denominator?)"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"P_{\text{Laplace}}(w_i) = \frac{c_i + 1}{N + V} \quad (3.20)"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"Instead of changing both the numerator and denominator, it is convenient to describe how a smoothing algorithm affects the numerator, by defining an adjusted count c * . This adjusted count is easier to compare directly with the MLE counts and can be turned into a probability like an MLE count by normalizing by N. To define this count, since we are only changing the numerator in addition to adding 1 we'll also need to multiply by a normalization factor N N+V :"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"c_i^* = (c_i + 1)\,\frac{N}{N+V} \quad (3.21)"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"We can now turn c * i into a probability P * i by normalizing by N. A related way to view smoothing is as discounting (lowering) some non-zero discounting counts in order to get the probability mass that will be assigned to the zero counts. Thus, instead of referring to the discounted counts c * , we might describe a smoothing algorithm in terms of a relative discount d c , the ratio of the discounted counts to discount the original counts:"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"d_c = \frac{c^*}{c}"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"Now that we have the intuition for the unigram case, let's smooth our Berkeley Restaurant Project bigrams. Figure 3 .6 shows the add-one smoothed counts for the bigrams in Fig. 3.1 . Figure 3 .7 shows the add-one smoothed probabilities for the bigrams in Fig. 3 .2. Recall that normal bigram probabilities are computed by normalizing each row of counts by the unigram count:"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"P(w_n|w_{n-1}) = \frac{C(w_{n-1}w_n)}{C(w_{n-1})} \quad (3.22)"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"Thus, each of the unigram counts given in the previous section will need to be augmented by V = 1446. The result is the smoothed bigram probabilities in Fig. 3.7."
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,It is often convenient to reconstruct the count matrix so we can see how much a smoothing algorithm has changed the original counts. These adjusted counts can be computed by Eq. 3.24. Figure 3.8 shows the reconstructed counts.
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"Note that add-one smoothing has made a very big change to the counts. C(want to) changed from 609 to 238! We can see this in probability space as well: P(to|want) decreases from .66 in the unsmoothed case to .26 in the smoothed case. Looking at the discount d (the ratio between new and old counts) shows us how strikingly the counts for each prefix word have been reduced; the discount for the bigram want to is .39, while the discount for Chinese food is .10, a factor of 10!"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"P^*_{\text{Laplace}}(w_n|w_{n-1}) = \frac{C(w_{n-1}w_n) + 1}{\sum_w \left(C(w_{n-1}w) + 1\right)} = \frac{C(w_{n-1}w_n) + 1}{C(w_{n-1}) + V} \quad (3.23)"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"c^*(w_{n-1}w_n) = \frac{[C(w_{n-1}w_n) + 1] \times C(w_{n-1})}{C(w_{n-1}) + V} \quad (3.24)"
3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,The sharp change in counts and probabilities occurs because too much probability mass is moved to all the zeros.
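3,N-gram Language Models,3.5,Smoothing,3.5.1,Laplace Smoothing,"The following sketch implements Eq. 3.23 and Eq. 3.24 for a couple of bigrams; the counts are toy values chosen so the discounts come out near the .39 and .10 discussed above, and V = 1446 as in the restaurant example:
def laplace_bigram(bigram_counts, unigram_counts, V):
    # P*_Laplace(w_n | w_{n-1}) = (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + V)   (Eq. 3.23)
    probs, adjusted = {}, {}
    for (w1, w2), c in bigram_counts.items():
        probs[(w1, w2)] = (c + 1) / (unigram_counts[w1] + V)
        # c* = (C(w_{n-1} w_n) + 1) * C(w_{n-1}) / (C(w_{n-1}) + V)         (Eq. 3.24)
        adjusted[(w1, w2)] = (c + 1) * unigram_counts[w1] / (unigram_counts[w1] + V)
    return probs, adjusted

# Illustrative counts (not taken from a real corpus run).
bigrams = {('want', 'to'): 609, ('chinese', 'food'): 82}
unigrams = {'want': 927, 'chinese': 158}
probs, adjusted = laplace_bigram(bigrams, unigrams, V=1446)
for bg, c_star in adjusted.items():
    print(bg, 'P* = %.3f' % probs[bg], 'c* = %.1f' % c_star,
          'discount d = %.2f' % (c_star / bigrams[bg]))"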
3,N-gram Language Models,3.5,Smoothing,3.5.2,Add-k smoothing,"One alternative to add-one smoothing is to move a bit less of the probability mass from the seen to the unseen events. Instead of adding 1 to each count, we add a fractional count k (.5? .05? .01?). This algorithm is therefore called add-k smoothing."
3,N-gram Language Models,3.5,Smoothing,3.5.2,Add-k smoothing,"P^*_{\text{Add-k}}(w_n|w_{n-1}) = \frac{C(w_{n-1}w_n) + k}{C(w_{n-1}) + kV} \quad (3.25)"
3,N-gram Language Models,3.5,Smoothing,3.5.2,Add-k smoothing,"Add-k smoothing requires that we have a method for choosing k; this can be done, for example, by optimizing on a devset. Although add-k is useful for some tasks (including text classification), it turns out that it still doesn't work well for language modeling, generating counts with poor variances and often inappropriate discounts (Gale and Church, 1994) ."
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"The discounting we have been discussing so far can help solve the problem of zero frequency n-grams. But there is an additional source of knowledge we can draw on. If we are trying to compute P(w n |w n−2 w n−1 ) but we have no examples of a particular trigram w n−2 w n−1 w n , we can instead estimate its probability by using the bigram probability P(w n |w n−1 ). Similarly, if we don't have counts to compute P(w n |w n−1 ), we can look to the unigram P(w n )."
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"In other words, sometimes using less context is a good thing, helping to generalize more for contexts that the model hasn't learned much about. There are two ways to use this n-gram ""hierarchy"". In backoff, we use the trigram if the evidence is backoff sufficient, otherwise we use the bigram, otherwise the unigram. In other words, we only ""back off"" to a lower-order n-gram if we have zero evidence for a higher-order n-gram. By contrast, in interpolation, we always mix the probability estimates from interpolation all the n-gram estimators, weighing and combining the trigram, bigram, and unigram counts."
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"In simple linear interpolation, we combine different order n-grams by linearly interpolating them. Thus, we estimate the trigram probability P(w n |w n−2 w n−1 ) by mixing together the unigram, bigram, and trigram probabilities, each weighted by a"
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"\hat{P}(w_n|w_{n-2}w_{n-1}) = \lambda_1 P(w_n) + \lambda_2 P(w_n|w_{n-1}) + \lambda_3 P(w_n|w_{n-2}w_{n-1}) \quad (3.26)"
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"The λ s must sum to 1, making Eq. 3.26 equivalent to a weighted average:"
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"\sum_i \lambda_i = 1 \quad (3.27)"
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"In a slightly more sophisticated version of linear interpolation, each λ weight is computed by conditioning on the context. This way, if we have particularly accurate counts for a particular bigram, we assume that the counts of the trigrams based on this bigram will be more trustworthy, so we can make the λ s for those trigrams higher and thus give that trigram more weight in the interpolation. Equation 3.28 shows the equation for interpolation with context-conditioned weights:"
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"\hat{P}(w_n|w_{n-2}w_{n-1}) = \lambda_1(w_{n-2:n-1})P(w_n) + \lambda_2(w_{n-2:n-1})P(w_n|w_{n-1}) + \lambda_3(w_{n-2:n-1})P(w_n|w_{n-2}w_{n-1}) \quad (3.28)"
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"How are these λ values set? Both the simple interpolation and conditional interpolation λ s are learned from a held-out corpus. A held-out corpus is an additional held-out training corpus that we use to set hyperparameters like these λ values, by choosing the λ values that maximize the likelihood of the held-out corpus. That is, we fix the n-gram probabilities and then search for the λ values that-when plugged into Eq. 3.26-give us the highest probability of the held-out set. There are various ways to find this optimal set of λ s. One way is to use the EM algorithm, an iterative learning algorithm that converges on locally optimal λ s (Jelinek and Mercer, 1980) ."
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"In a backoff n-gram model, if the n-gram we need has zero counts, we approximate it by backing off to the (N-1)-gram. We continue backing off until we reach a history that has some counts."
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"In order for a backoff model to give a correct probability distribution, we have to discount the higher-order n-grams to save some probability mass for the lower discount order n-grams. Just as with add-one smoothing, if the higher-order n-grams aren't discounted and we just used the undiscounted MLE probability, then as soon as we replaced an n-gram which has zero probability with a lower-order n-gram, we would be adding probability mass, and the total probability assigned to all possible strings by the language model would be greater than 1! In addition to this explicit discount factor, we'll need a function α to distribute this probability mass to the lower order n-grams."
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"This kind of backoff with discounting is also called Katz backoff. In Katz backoff we rely on a discounted probability P* if we've seen this n-gram before (i.e., if we have non-zero counts). Otherwise, we recursively back off to the Katz probability for the shorter-history (N-1)-gram. The probability for a backoff n-gram P_{BO} is thus computed as follows:"
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"P BO (w n |w n−N+1:n−1 ) = P * (w n |w n−N+1:n−1 ), if C(w n−N+1:n ) > 0 α(w n−N+1:n−1 )P BO (w n |w n−N+2:n−1 ), otherwise. (3.29)"
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,Katz backoff is often combined with a smoothing method called Good-Turing.
3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,The combined Good-Turing backoff algorithm involves quite detailed computation for estimating the Good-Turing smoothing and the P * and α values.
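3,N-gram Language Models,3.5,Smoothing,3.5.3,Backoff and Interpolation,"The recursive structure of Eq. 3.29 for the bigram case can be sketched as follows; the discounted probabilities and the α weights are assumed to have been computed elsewhere (for example with Good-Turing), and all the argument names are hypothetical:
def katz_bigram(w, w_prev, counts, p_star, alpha, p_star_uni):
    # Eq. 3.29, bigram case: use the discounted bigram estimate P* when the bigram
    # was seen in training; otherwise back off to the (discounted) unigram estimate,
    # scaled by alpha(w_prev). p_star, alpha, and p_star_uni are precomputed dicts.
    if counts.get((w_prev, w), 0) > 0:
        return p_star[(w_prev, w)]
    return alpha[w_prev] * p_star_uni[w]"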
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"One of the most commonly used and best performing n-gram smoothing methods is the interpolated Kneser-Ney algorithm (Kneser and Ney 1995, Chen and Goodman"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,Kneser-Ney 1998).
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,Kneser-Ney has its roots in a method called absolute discounting. Recall that discounting of the counts for frequent n-grams is necessary to save some probability mass for the smoothing algorithm to distribute to the unseen n-grams.
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"To see this, we can use a clever idea from Church and Gale (1991) . Consider an n-gram that has count 4. We need to discount this count by some amount. But how much should we discount it? Church and Gale's clever idea was to look at a held-out corpus and just see what the count is for all those bigrams that had count 4 in the training set. They computed a bigram grammar from 22 million words of AP newswire and then checked the counts of each of these bigrams in another 22 million words. On average, a bigram that occurred 4 times in the first 22 million words occurred 3.23 times in the next 22 million words. Fig. 3 .9 from Church and Gale (1991) shows these counts for bigrams with c from 0 to 9. .9 For all bigrams in 22 million words of AP newswire of count 0, 1, 2,...,9, the counts of these bigrams in a held-out corpus also of 22 million words."
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"Notice in Fig. 3 .9 that except for the held-out counts for 0 and 1, all the other bigram counts in the held-out set could be estimated pretty well by just subtracting 0.75 from the count in the training set! Absolute discounting formalizes this intu-Absolute discounting ition by subtracting a fixed (absolute) discount d from each count. The intuition is that since we have good estimates already for the very high counts, a small discount d won't affect them much. It will mainly modify the smaller counts, for which we don't necessarily trust the estimate anyway, and Fig. 3 .9 suggests that in practice this discount is actually a good one for bigrams with counts 2 through 9. The equation for interpolated absolute discounting applied to bigrams:"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"P_{\text{AbsoluteDiscounting}}(w_i|w_{i-1}) = \frac{C(w_{i-1}w_i) - d}{\sum_v C(w_{i-1}v)} + \lambda(w_{i-1})P(w_i) \quad (3.30)"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"The first term is the discounted bigram, and the second term is the unigram with an interpolation weight λ . We could just set all the d values to .75, or we could keep a separate discount value of 0.5 for the bigrams with counts of 1."
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"Kneser-Ney discounting (Kneser and Ney, 1995) augments absolute discounting with a more sophisticated way to handle the lower-order unigram distribution. Consider the job of predicting the next word in this sentence, assuming we are interpolating a bigram and a unigram model. I can't see without my reading . The word glasses seems much more likely to follow here than, say, the word Kong, so we'd like our unigram model to prefer glasses. But in fact it's Kong that is more common, since Hong Kong is a very frequent word. A standard unigram model will assign Kong a higher probability than glasses. We would like to capture the intuition that although Kong is frequent, it is mainly only frequent in the phrase Hong Kong, that is, after the word Hong. The word glasses has a much wider distribution."
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"In other words, instead of P(w), which answers the question ""How likely is w?"", we'd like to create a unigram model that we might call P CONTINUATION , which answers the question ""How likely is w to appear as a novel continuation?"". How can we estimate this probability of seeing the word w as a novel continuation, in a new unseen context? The Kneser-Ney intuition is to base our estimate of P CONTINUATION on the number of different contexts word w has appeared in, that is, the number of bigram types it completes. Every bigram type was a novel continuation the first time it was seen. We hypothesize that words that have appeared in more contexts in the past are more likely to appear in some new context as well. The number of times a word w appears as a novel continuation can be expressed as:"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"P_{\text{CONTINUATION}}(w) \propto |\{v : C(vw) > 0\}| \quad (3.31)"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"To turn this count into a probability, we normalize by the total number of word bigram types. In summary:"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"P CONTINUATION (w) = |{v : C(vw) > 0}| |{(u , w ) : C(u w ) > 0}| (3.32)"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,An equivalent formulation based on a different metaphor is to use the number of word types seen to precede w (Eq. 3.31 repeated):
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"P_{\text{CONTINUATION}}(w) \propto |\{v : C(vw) > 0\}| \quad (3.33)"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"normalized by the number of words preceding all words, as follows:"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"P_{\text{CONTINUATION}}(w) = \frac{|\{v : C(vw) > 0\}|}{\sum_{w'} |\{v : C(vw') > 0\}|} \quad (3.34)"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,A frequent word (Kong) occurring in only one context (Hong) will have a low continuation probability.
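3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"A minimal sketch of the continuation probability in Eq. 3.34: count the distinct left contexts of each word and normalize by the total number of bigram types; the toy bigram list is illustrative only:
from collections import defaultdict

def continuation_probs(bigrams):
    # bigrams is an iterable of (v, w) pairs observed in training.
    # P_CONTINUATION(w) = |{v : C(vw) > 0}| / sum over w' of |{v : C(vw') > 0}|   (Eq. 3.34)
    contexts = defaultdict(set)
    for v, w in bigrams:
        contexts[w].add(v)
    total_types = sum(len(vs) for vs in contexts.values())
    return {w: len(vs) / total_types for w, vs in contexts.items()}

toy = [('hong', 'kong'), ('hong', 'kong'), ('reading', 'glasses'),
       ('my', 'glasses'), ('new', 'glasses')]
print(continuation_probs(toy))   # 'glasses' gets 3/4, 'kong' only 1/4"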
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,The final equation for Interpolated Kneser-Ney smoothing for bigrams is then:
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"Interpolated Kneser-Ney P KN (w i |w i−1 ) = max(C(w i−1 w i ) − d, 0) C(w i−1 ) + λ (w i−1 )P CONTINUATION (w i ) (3.35)"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,The λ is a normalizing constant that is used to distribute the probability mass we've discounted:
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"λ (w i−1 ) = d v C(w i−1 v) |{w : C(w i−1 w) > 0}| (3.36) The first term, d v C(w i−1 v)"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,", is the normalized discount. The second term, |{w : C(w i−1 w) > 0}|, is the number of word types that can follow w i−1 or, equivalently, the number of word types that we discounted; in other words, the number of times we applied the normalized discount. The general recursive formulation is as follows: The continuation count is the number of unique single word contexts for •. At the termination of the recursion, unigrams are interpolated with the uniform distribution, where the parameter is the empty string:"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,EQUATION
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"P KN (w) = max(c KN (w) − d, 0) w c KN (w ) + λ ( ) 1 V (3.39)"
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"If we want to include an unknown word , it's just included as a regular vocabulary entry with count zero, and hence its probability will be a lambda-weighted uniform distribution λ ( ) V . The best performing version of Kneser-Ney smoothing is called modified Kneser-Ney smoothing, and is due to Chen and Goodman (1998). Rather than use a single modified Kneser-Ney fixed discount d, modified Kneser-Ney uses three different discounts d 1 , d 2 , and d 3+ for n-grams with counts of 1, 2 and three or more, respectively. See Chen and Goodman (1998, p. 19) or Heafield et al. 2013for the details."
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"By using text from the web or other enormous collections, it is possible to build extremely large language models. The Web 1 Trillion 5-gram corpus released by Google includes various large sets of n-grams, including 1-grams through 5-grams from all the five-word sequences that appear in at least 40 distinct books from 1,024,908,267,229 words of text from publicly accessible Web pages in English (Franz and Brants, 2006) . Google has also released Google Books Ngrams corpora with n-grams drawn from their book collections, including another 800 billion tokens of n-grams from Chinese, English, French, German, Hebrew, Italian, Russian, and Spanish (Lin et al., 2012a) . Smaller but more carefully curated n-gram corpora for English include the million most frequent n-grams drawn from the COCA (Corpus of Contemporary American English) 1 billion word corpus of American English (Davies, 2020). COCA is a balanced corpora, meaning that it has roughly equal numbers of words from different genres: web, newspapers, spoken conversation transcripts, fiction, and so on, drawn from the period 1990-2019, and has the context of each n-gram as well as labels for genre and provenance)."
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,Some example 4-grams from the Google Web corpus:
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"4-gram Count serve as the incoming 92 serve as the incubator 99 serve as the independent 794 serve as the index 223 serve as the indication 72 serve as the indicator 120 serve as the indicators Efficiency considerations are important when building language models that use such large sets of n-grams. Rather than store each word as a string, it is generally represented in memory as a 64-bit hash number, with the words themselves stored on disk. Probabilities are generally quantized using only 4-8 bits (instead of 8-byte floats), and n-grams are stored in reverse tries."
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"An n-gram language model can also be shrunk by pruning, for example only storing n-grams with counts greater than some threshold (such as the count threshold of 40 used for the Google n-gram release) or using entropy to prune less-important n-grams (Stolcke, 1998) . Another option is to build approximate language models using techniques like Bloom filters (Talbot and Osborne 2007, Church et al. 2007) ."
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"Finally, efficient language model toolkits like KenLM (Heafield 2011, Heafield et al. 2013) use sorted arrays, efficiently combine probabilities and backoffs in a single value, and use merge sorts to efficiently build the probability tables in a minimal number of passes through a large corpus."
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"Although with these toolkits it is possible to build web-scale language models using full Kneser-Ney smoothing, Brants et al. (2007) show that with very large language models a much simpler algorithm may be sufficient. The algorithm is called stupid backoff. Stupid backoff gives up the idea of trying to make the language stupid backoff model a true probability distribution. There is no discounting of the higher-order probabilities. If a higher-order n-gram has a zero count, we simply backoff to a lower order n-gram, weighed by a fixed (context-independent) weight. This algorithm does not produce a probability distribution, so we'll follow Brants et al. (2007) in referring to it as S:"
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"S(w_i|w_{i-k+1:i-1}) = \begin{cases} \frac{\text{count}(w_{i-k+1:i})}{\text{count}(w_{i-k+1:i-1})} & \text{if count}(w_{i-k+1:i}) > 0 \\ \lambda\, S(w_i|w_{i-k+2:i-1}) & \text{otherwise} \end{cases} \quad (3.40)"
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"The backoff terminates in the unigram, which has probability S(w) = count(w)"
3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,. Brants et al. (2007) find that a value of 0.4 worked well for λ .
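3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"A minimal recursive sketch of Eq. 3.40; ngram_counts is assumed to map word tuples of any length to their counts, and N is the total number of training tokens used for the unigram base case:
def stupid_backoff(words, ngram_counts, N, lam=0.4):
    # words is a tuple (w_{i-k+1}, ..., w_i); the last element is the word being scored.
    if len(words) == 1:
        return ngram_counts.get(words, 0) / N          # S(w) = count(w) / N
    num = ngram_counts.get(words, 0)
    den = ngram_counts.get(words[:-1], 0)
    if num > 0 and den > 0:
        return num / den                               # relative frequency if the n-gram was seen
    return lam * stupid_backoff(words[1:], ngram_counts, N, lam)   # back off, weighted by lambda"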
Advanced:,Perplexity's Relation to Entropy,,,,,"We introduced perplexity in Section 3.2.1 as a way to evaluate n-gram models on a test set. A better n-gram model is one that assigns a higher probability to the test data, and perplexity is a normalized version of the probability of the test set. The perplexity measure actually arises from the information-theoretic concept of cross-entropy, which explains otherwise mysterious properties of perplexity (why the inverse probability, for example?) and its relationship to entropy. Entropy is a Entropy measure of information. Given a random variable X ranging over whatever we are predicting (words, letters, parts of speech, the set of which we'll call χ) and with a particular probability function, call it p(x), the entropy of the random variable X is:"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(X) = -\sum_{x \in \chi} p(x) \log_2 p(x) \quad (3.41)"
Advanced:,Perplexity's Relation to Entropy,,,,,"The log can, in principle, be computed in any base. If we use log base 2, the resulting value of entropy will be measured in bits."
Advanced:,Perplexity's Relation to Entropy,,,,,One intuitive way to think about entropy is as a lower bound on the number of bits it would take to encode a certain decision or piece of information in the optimal coding scheme.
Advanced:,Perplexity's Relation to Entropy,,,,,"Consider an example from the standard information theory textbook Cover and Thomas (1991) . Imagine that we want to place a bet on a horse race but it is too far to go all the way to Yonkers Racetrack, so we'd like to send a short message to the bookie to tell him which of the eight horses to bet on. One way to encode this message is just to use the binary representation of the horse's number as the code; thus, horse 1 would be 001, horse 2 010, horse 3 011, and so on, with horse 8 coded as 000. If we spend the whole day betting and each horse is coded with 3 bits, on average we would be sending 3 bits per race."
Advanced:,Perplexity's Relation to Entropy,,,,,"Can we do better? Suppose that the spread is the actual distribution of the bets placed and that we represent it as the prior probability of each horse as follows: horse 1 gets probability 1/2, horse 2 gets 1/4, horse 3 gets 1/8, horse 4 gets 1/16, and horses 5 through 8 each get 1/64. The entropy of the random variable X that ranges over horses gives us a lower bound on the number of bits and is"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(X) = -\sum_{i=1}^{8} p(i) \log p(i) = -\tfrac{1}{2}\log\tfrac{1}{2} - \tfrac{1}{4}\log\tfrac{1}{4} - \tfrac{1}{8}\log\tfrac{1}{8} - \tfrac{1}{16}\log\tfrac{1}{16} - 4\left(\tfrac{1}{64}\log\tfrac{1}{64}\right) = 2 \text{ bits} \quad (3.42)"
Advanced:,Perplexity's Relation to Entropy,,,,,"A code that averages 2 bits per race can be built with short encodings for more probable horses, and longer encodings for less probable horses. For example, we could encode the most likely horse with the code 0, and the remaining horses as 10, then 110, 1110, 111100, 111101, 111110, and 111111."
Advanced:,Perplexity's Relation to Entropy,,,,,"What if the horses are equally likely? We saw above that if we used an equallength binary code for the horse numbers, each horse took 3 bits to code, so the average was 3. Is the entropy the same? In this case each horse would have a probability of 1 8 . The entropy of the choice of horses is then"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(X) = -\sum_{i=1}^{8} \tfrac{1}{8}\log\tfrac{1}{8} = -\log\tfrac{1}{8} = 3 \text{ bits} \quad (3.43)"
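Advanced:,Perplexity's Relation to Entropy,,,,,"A quick check of Eq. 3.42 and Eq. 3.43 in code, using the two horse distributions just described:
import math

def entropy(probs):
    # H(X) = - sum over x of p(x) * log2 p(x)   (Eq. 3.41)
    return -sum(p * math.log2(p) for p in probs if p > 0)

skewed = [1/2, 1/4, 1/8, 1/16] + [1/64] * 4
uniform = [1/8] * 8
print(entropy(skewed))    # 2.0 bits
print(entropy(uniform))   # 3.0 bits"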
Advanced:,Perplexity's Relation to Entropy,,,,,"Until now we have been computing the entropy of a single variable. But most of what we will use entropy for involves sequences. For a grammar, for example, we will be computing the entropy of some sequence of words W = {w 1 , w 2 , . . . , w n }. One way to do this is to have a variable that ranges over sequences of words. For example we can compute the entropy of a random variable that ranges over all finite sequences of words of length n in some language L as follows:"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(w 1 , w 2 , . . . , w n ) = − w 1 : n ∈L p(w 1 : n ) log p(w 1 : n ) (3.44)"
Advanced:,Perplexity's Relation to Entropy,,,,,"We could define the entropy rate (we could also think of this as the per-word entropy) as the entropy of this sequence divided by the number of words:"
Advanced:,Perplexity's Relation to Entropy,,,,,"\frac{1}{n} H(w_{1:n}) = -\frac{1}{n} \sum_{w_{1:n} \in L} p(w_{1:n}) \log p(w_{1:n}) \quad (3.45)"
Advanced:,Perplexity's Relation to Entropy,,,,,"But to measure the true entropy of a language, we need to consider sequences of infinite length. If we think of a language as a stochastic process L that produces a sequence of words, and allow W to represent the sequence of words w 1 , . . . , w n , then L's entropy rate H(L) is defined as"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(L) = lim n→∞ 1 n H(w 1 , w 2 , . . . , w n ) = − lim n→∞ 1 n W ∈L p(w 1 , . . . , w n ) log p(w 1 , . . . , w n ) (3.46)"
Advanced:,Perplexity's Relation to Entropy,,,,,"The Shannon-McMillan-Breiman theorem (Algoet and Cover 1988, Cover and Thomas 1991) states that if the language is regular in certain ways (to be exact, if it is both stationary and ergodic),"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(L) = \lim_{n\to\infty} -\frac{1}{n} \log p(w_1 w_2 \ldots w_n) \quad (3.47)"
Advanced:,Perplexity's Relation to Entropy,,,,,"That is, we can take a single sequence that is long enough instead of summing over all possible sequences. The intuition of the Shannon-McMillan-Breiman theorem is that a long-enough sequence of words will contain in it many other shorter sequences and that each of these shorter sequences will reoccur in the longer sequence according to their probabilities."
Advanced:,Perplexity's Relation to Entropy,,,,,"A stochastic process is said to be stationary if the probabilities it assigns to a Stationary sequence are invariant with respect to shifts in the time index. In other words, the probability distribution for words at time t is the same as the probability distribution at time t + 1. Markov models, and hence n-grams, are stationary. For example, in a bigram, P i is dependent only on P i−1 . So if we shift our time index by x, P i+x is still dependent on P i+x−1 . But natural language is not stationary, since as we show in Chapter 12, the probability of upcoming words can be dependent on events that were arbitrarily distant and time dependent. Thus, our statistical models only give an approximation to the correct distributions and entropies of natural language. To summarize, by making some incorrect but convenient simplifying assumptions, we can compute the entropy of some stochastic process by taking a very long sample of the output and computing its average log probability. Now we are ready to introduce cross-entropy. The cross-entropy is useful when cross-entropy we don't know the actual probability distribution p that generated some data. It allows us to use some m, which is a model of p (i.e., an approximation to p). The cross-entropy of m on p is defined by"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(p, m) = lim n→∞ − 1 n W ∈L p(w 1 , . . . , w n ) log m(w 1 , . . . , w n ) (3.48)"
Advanced:,Perplexity's Relation to Entropy,,,,,"That is, we draw sequences according to the probability distribution p, but sum the log of their probabilities according to m."
Advanced:,Perplexity's Relation to Entropy,,,,,"Again, following the Shannon-McMillan-Breiman theorem, for a stationary ergodic process:"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(p, m) = lim n→∞ − 1 n log m(w 1 w 2 . . . w n ) (3.49)"
Advanced:,Perplexity's Relation to Entropy,,,,,"This means that, as for entropy, we can estimate the cross-entropy of a model m on some distribution p by taking a single sequence that is long enough instead of summing over all possible sequences."
Advanced:,Perplexity's Relation to Entropy,,,,,"What makes the cross-entropy useful is that the cross-entropy H(p, m) is an upper bound on the entropy H(p). For any model m:"
Advanced:,Perplexity's Relation to Entropy,,,,,"H(p) ≤ H(p, m) (3.50)"
Advanced:,Perplexity's Relation to Entropy,,,,,"This means that we can use some simplified model m to help estimate the true entropy of a sequence of symbols drawn according to probability p. The more accurate m is, the closer the cross-entropy H(p, m) will be to the true entropy H(p). Thus, the difference between H(p, m) and H(p) is a measure of how accurate a model is. Between two models m 1 and m 2 , the more accurate model will be the one with the lower cross-entropy. (The cross-entropy can never be lower than the true entropy, so a model cannot err by underestimating the true entropy.)"
Advanced:,Perplexity's Relation to Entropy,,,,,"We are finally ready to see the relation between perplexity and cross-entropy as we saw it in Eq. 3.49. Cross-entropy is defined in the limit as the length of the observed word sequence goes to infinity. We will need an approximation to crossentropy, relying on a (sufficiently long) sequence of fixed length. This approximation to the cross-entropy of a model"
Advanced:,Perplexity's Relation to Entropy,,,,,"M = P(w_i|w_{i-N+1:i-1}) on a sequence of words W is:
H(W) = -\frac{1}{N} \log P(w_1 w_2 \ldots w_N) \quad (3.51)"
Advanced:,Perplexity's Relation to Entropy,,,,,The perplexity of a model P on a sequence of words W is now formally defined as perplexity 2 raised to the power of this cross-entropy:
Advanced:,Perplexity's Relation to Entropy,,,,,"\text{Perplexity}(W) = 2^{H(W)} = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 \ldots w_N)}} = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i|w_1 \ldots w_{i-1})}} \quad (3.52)"
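Advanced:,Perplexity's Relation to Entropy,,,,,"A small sketch of Eq. 3.51 and Eq. 3.52: given a model's conditional probability for each word of a test sequence (hypothetical numbers below), compute the cross-entropy estimate and then perplexity as 2 raised to that value:
import math

def cross_entropy_and_perplexity(word_probs):
    # word_probs[i] is the model's P(w_i | w_1 ... w_{i-1}) for each test word.
    N = len(word_probs)
    H = -sum(math.log2(p) for p in word_probs) / N     # Eq. 3.51 (log base 2)
    return H, 2 ** H                                   # Eq. 3.52

probs = [0.1, 0.2, 0.05, 0.1]        # hypothetical conditional probabilities
H, ppl = cross_entropy_and_perplexity(probs)
print('H(W) = %.3f bits, perplexity = %.1f' % (H, ppl))"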
Advanced:,Perplexity's Relation to Entropy,3.9,Summary,,,"This chapter introduced language modeling and the n-gram, one of the most widely used tools in language processing."
Advanced:,Perplexity's Relation to Entropy,3.9,Summary,,,"• Language models offer a way to assign a probability to a sentence or other sequence of words, and to predict a word from preceding words. • n-grams are Markov models that estimate words from a fixed window of previous words. n-gram probabilities can be estimated by counting in a corpus and normalizing (the maximum likelihood estimate). • n-gram language models are evaluated extrinsically in some task, or intrinsically using perplexity. • The perplexity of a test set according to a language model is the geometric mean of the inverse test set probability computed by the model. • Smoothing algorithms provide a more sophisticated way to estimate the probability of n-grams. Commonly used smoothing algorithms for n-grams rely on lower-order n-gram counts through backoff or interpolation."
Advanced:,Perplexity's Relation to Entropy,3.9,Summary,,,• Both backoff and interpolation require discounting to create a probability distribution. • Kneser-Ney smoothing makes use of the probability of a word being a novel continuation. The interpolated Kneser-Ney smoothing algorithm mixes a discounted probability with a lower-order continuation probability.
Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,"The underlying mathematics of the n-gram was first proposed by Markov (1913) , who used what are now called Markov chains (bigrams and trigrams) to predict whether an upcoming letter in Pushkin's Eugene Onegin would be a vowel or a consonant. Markov classified 20,000 letters as V or C and computed the bigram and trigram probability that a given letter would be a vowel given the previous one or two letters. Shannon (1948) applied n-grams to compute approximations to English word sequences. Based on Shannon's work, Markov models were commonly used in engineering, linguistic, and psychological work on modeling word sequences by the 1950s. In a series of extremely influential papers starting with Chomsky (1956) and including Chomsky (1957) and Miller and Chomsky (1963) , Noam Chomsky argued that ""finite-state Markov processes"", while a possibly useful engineering heuristic, were incapable of being a complete cognitive model of human grammatical knowledge. These arguments led many linguists and computational linguists to ignore work in statistical modeling for decades. The resurgence of n-gram models came from Jelinek and colleagues at the IBM Thomas J. Watson Research Center, who were influenced by Shannon, and Baker at CMU, who was influenced by the work of Baum and colleagues. Independently these two labs successfully used n-grams in their speech recognition systems (Baker 1975b , Jelinek 1976 , Baker 1975a , Bahl et al. 1983 , Jelinek 1990 )."
Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,Add-one smoothing derives from Laplace's 1812 law of succession and was first applied as an engineering solution to the zero frequency problem by Jeffreys (1948) based on an earlier Add-K suggestion by Johnson (1932). Problems with the add-one algorithm are summarized in Gale and Church (1994).
Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,"A wide variety of different language modeling and smoothing techniques were proposed in the 80s and 90s, including Good-Turing discounting-first applied to the n-gram smoothing at IBM by Katz (Nádas 1984, Church and Gale 1991)-Witten-Bell discounting (Witten and Bell, 1991) , and varieties of class-based ngram models that used information about word classes."
Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,"Starting in the late 1990s, Chen and Goodman performed a number of carefully controlled experiments comparing different discounting algorithms, cache models, class-based models, and other language model parameters (Chen and Goodman 1999, Goodman 2006, inter alia) . They showed the advantages of Modified Interpolated Kneser-Ney, which became the standard baseline for n-gram language modeling, especially because they showed that caches and class-based models provided only minor additional improvement. These papers are recommended for any reader with further interest in n-gram language modeling. SRILM (Stolcke, 2002) and KenLM (Heafield 2011 , Heafield et al. 2013 are publicly available toolkits for building n-gram language models."
Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,"Modern language modeling is more commonly done with neural network language models, which solve the major problems with n-grams: the number of parameters increases exponentially as the n-gram order increases, and n-grams have no way to generalize from training to test set. Neural language models instead project words into a continuous space in which words with similar contexts have similar representations. We'll introduce both feedforward language models (Bengio et al. 2006 , Schwenk 2007 in Chapter 7, and recurrent language models (Mikolov, 2012) in Chapter 9."
4,Naive Bayes and Sentiment Classification,,,,,"Classification lies at the heart of both human and machine intelligence. Deciding what letter, word, or image has been presented to our senses, recognizing faces or voices, sorting mail, assigning grades to homeworks; these are all examples of assigning a category to an input. The potential challenges of this task are highlighted by the fabulist Jorge Luis Borges 1964, who imagined classifying animals into:"
4,Naive Bayes and Sentiment Classification,,,,,"(a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel's hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance."
4,Naive Bayes and Sentiment Classification,,,,,"Many language processing tasks involve classification, although luckily our classes are much easier to define than those of Borges. In this chapter we introduce the naive Bayes algorithm and apply it to text categorization, the task of assigning a label or text categorization category to an entire text or document."
4,Naive Bayes and Sentiment Classification,,,,,"We focus on one common text categorization task, sentiment analysis, the ex-sentiment analysis traction of sentiment, the positive or negative orientation that a writer expresses toward some object. A review of a movie, book, or product on the web expresses the author's sentiment toward the product, while an editorial or political text expresses sentiment toward a candidate or political action. Extracting consumer or public sentiment is thus relevant for fields from marketing to politics. The simplest version of sentiment analysis is a binary classification task, and the words of the review provide excellent cues. Consider, for example, the following phrases extracted from positive and negative reviews of movies and restaurants. Words like great, richly, awesome, and pathetic, and awful and ridiculously are very informative cues: + ...zany characters and richly applied satire, and some great plot twists − It was pathetic. The worst part about it was the boxing scenes... + ...awesome caramel sauce and sweet toasty almonds. I love this place! − ...awful pizza and ridiculously overpriced... Spam detection is another important commercial application, the binary classpam detection sification task of assigning an email to one of the two classes spam or not-spam. Many lexical and other features can be used to perform this classification. For example you might quite reasonably be suspicious of an email containing phrases like ""online pharmaceutical"" or ""WITHOUT ANY COST"" or ""Dear Winner""."
4,Naive Bayes and Sentiment Classification,,,,,"Another thing we might want to know about a text is the language it's written in. Texts on social media, for example, can be in any number of languages and we'll need to apply different processing. The task of language id is thus the first language id step in most language processing pipelines. Related text classification tasks like authorship attributiondetermining a text's author-are also relevant to the digital authorship attribution humanities, social sciences, and forensic linguistics."
4,Naive Bayes and Sentiment Classification,,,,,"Finally, one of the oldest tasks in text classification is assigning a library subject category or topic label to a text. Deciding whether a research paper concerns epidemiology or instead, perhaps, embryology, is an important component of information retrieval. Various sets of subject categories exist, such as the MeSH (Medical Subject Headings) thesaurus. In fact, as we will see, subject category classification is the task for which the naive Bayes algorithm was invented in 1961."
4,Naive Bayes and Sentiment Classification,,,,,"Classification is essential for tasks below the level of the document as well. We've already seen period disambiguation (deciding if a period is the end of a sentence or part of a word), and word tokenization (deciding if a character should be a word boundary). Even language modeling can be viewed as classification: each word can be thought of as a class, and so predicting the next word is classifying the context-so-far into a class for each next word. A part-of-speech tagger (Chapter 8) classifies each occurrence of a word in a sentence as, e.g., a noun or a verb."
4,Naive Bayes and Sentiment Classification,,,,,"The goal of classification is to take a single observation, extract some useful features, and thereby classify the observation into one of a set of discrete classes. One method for classifying text is to use handwritten rules. There are many areas of language processing where handwritten rule-based classifiers constitute a state-ofthe-art system, or at least part of it."
4,Naive Bayes and Sentiment Classification,,,,,"Rules can be fragile, however, as situations or data change over time, and for some tasks humans aren't necessarily good at coming up with the rules. Most cases of classification in language processing are instead done via supervised machine learning, and this will be the subject of the remainder of this chapter. In supervised supervised machine learning learning, we have a data set of input observations, each associated with some correct output (a 'supervision signal'). The goal of the algorithm is to learn how to map from a new observation to a correct output. Formally, the task of supervised classification is to take an input x and a fixed set of output classes Y = y 1 , y 2 , ..., y M and return a predicted class y ∈ Y . For text classification, we'll sometimes talk about c (for ""class"") instead of y as our output variable, and d (for ""document"") instead of x as our input variable. In the supervised situation we have a training set of N documents that have each been hand-labeled with a class:"
4,Naive Bayes and Sentiment Classification,,,,,"(d 1 , c 1 ), ...., (d N , c N )."
4,Naive Bayes and Sentiment Classification,,,,,Our goal is to learn a classifier that is capable of mapping from a new document d to its correct class c ∈ C. A probabilistic classifier additionally will tell us the probability of the observation being in the class. This full distribution over the classes can be useful information for downstream decisions; avoiding making discrete decisions early on can be useful when combining systems.
4,Naive Bayes and Sentiment Classification,,,,,"Many kinds of machine learning algorithms are used to build classifiers. This chapter introduces naive Bayes; the following one introduces logistic regression. These exemplify two ways of doing classification. Generative classifiers like naive Bayes build a model of how a class could generate some input data. Given an observation, they return the class most likely to have generated the observation. Discriminative classifiers like logistic regression instead learn what features from the input are most useful to discriminate between the different possible classes. While discriminative systems are often more accurate and hence more commonly used, generative classifiers still have a role."
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"In this section we introduce the multinomial naive Bayes classifier, so called be- cause it is a Bayesian classifier that makes a simplifying (naive) assumption about how the features interact. The intuition of the classifier is shown in Fig. 4 .1. We represent a text document as if it were a bag-of-words, that is, an unordered set of words with their position bag-of-words ignored, keeping only their frequency in the document. In the example in the figure, instead of representing the word order in all the phrases like ""I love this movie"" and ""I would recommend it"", we simply note that the word I occurred 5 times in the entire excerpt, the word it 6 times, the words love, recommend, and movie once, and so on."
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,Figure 4.1 Intuition of the multinomial naive Bayes classifier applied to a movie review. The position of the words is ignored (the bag of words assumption) and we make use of the frequency of each word.
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"Naive Bayes is a probabilistic classifier, meaning that for a document d, out of all classes c ∈ C the classifier returns the classĉ which has the maximum posterior probability given the document. In Eq. 4.1 we use the hat notationˆto mean ""our estimate of the correct class""."
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"This idea of Bayesian inference has been known since the work of Bayes (1763), and was first applied to text classification by Mosteller and Wallace (1964) . The intuition of Bayesian classification is to use Bayes' rule to transform Eq. 4.1 into other probabilities that have some useful properties. Bayes' rule is presented in Eq. 4.2; it gives us a way to break down any conditional probability P(x|y) into three other probabilities:"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"P(x|y) = \frac{P(y|x)P(x)}{P(y)} \quad (4.2)"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,We can then substitute Eq. 4.2 into Eq. 4.1 to get Eq. 4.3:
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"\hat{c} = \operatorname{argmax}_{c \in C} P(c|d) = \operatorname{argmax}_{c \in C} \frac{P(d|c)P(c)}{P(d)} \quad (4.3)"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"We can conveniently simplify Eq. 4.3 by dropping the denominator P(d). This is possible because we will be computing \frac{P(d|c)P(c)}{P(d)} for each possible class. But P(d) doesn't change for each class; we are always asking about the most likely class for the same document d, which must have the same probability P(d). Thus, we can choose the class that maximizes this simpler formula:"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"\hat{c} = \operatorname{argmax}_{c \in C} P(c|d) = \operatorname{argmax}_{c \in C} P(d|c)P(c) \quad (4.4)"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"We call Naive Bayes a generative model because we can read Eq. 4.4 as stating a kind of implicit assumption about how a document is generated: first a class is sampled from P(c), and then the words are generated by sampling from P(d|c). (In fact we could imagine generating artificial documents, or at least their word counts, by following this process). We'll say more about this intuition of generative models in Chapter 5."
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"To return to classification: we compute the most probable class ĉ given some document d by choosing the class which has the highest product of two probabilities: the prior probability of the class P(c) and the likelihood of the document P(d|c):"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"\hat{c} = \operatorname{argmax}_{c \in C} \overbrace{P(d|c)}^{\text{likelihood}} \; \overbrace{P(c)}^{\text{prior}} \quad (4.5)"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"Without loss of generalization, we can represent a document d as a set of features"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"f 1 , f 2 , ..., f n :ĉ = argmax c∈C likelihood P( f 1 , f 2 , ...., f n |c) prior P(c) (4.6)"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"Unfortunately, Eq. 4.6 is still too hard to compute directly: without some simplifying assumptions, estimating the probability of every possible combination of features (for example, every possible set of words and positions) would require huge numbers of parameters and impossibly large training sets. Naive Bayes classifiers therefore make two simplifying assumptions."
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"The first is the bag of words assumption discussed intuitively above: we assume position doesn't matter, and that the word ""love"" has the same effect on classification whether it occurs as the 1st, 20th, or last word in the document. Thus we assume that the features f 1 , f 2 , ..., f n only encode word identity and not position."
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,The second is commonly called the naive Bayes assumption: this is the conditional independence assumption that the probabilities P(f_i|c) are independent given the class c and hence can be 'naively' multiplied as follows:
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"P( f 1 , f 2 , ...., f n |c) = P( f 1 |c) • P( f 2 |c) • ... • P( f n |c) (4.7)"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,The final equation for the class chosen by a naive Bayes classifier is thus:
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"c_{NB} = \operatorname{argmax}_{c \in C} P(c) \prod_{f \in F} P(f|c) \quad (4.8)"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"To apply the naive Bayes classifier to text, we need to consider word positions, by simply walking an index through every word position in the document:"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"positions ← all word positions in test document
c_{NB} = \operatorname{argmax}_{c \in C} P(c) \prod_{i \in \text{positions}} P(w_i|c) \quad (4.9)"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"Naive Bayes calculations, like calculations for language modeling, are done in log"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"space, to avoid underflow and increase speed. Thus Eq. 4.9 is generally instead"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,expressed as:
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"By considering features in log space, Eq. 4.10 computes the predicted class as a lin-"
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,ear function of input features. Classifiers that use a linear combination of the inputs
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,to make a classification decision —like naive Bayes and also logistic regression—
4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,are called linear classifiers.
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,How can we learn the probabilities P(c) and P( f i |c)? Let's first consider the maximum likelihood estimate. We'll simply use the frequencies in the data. For the class prior P(c) we ask what percentage of the documents in our training set are in each class c. Let N c be the number of documents in our training data with class c and N doc be the total number of documents. Then:
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"\hat{P}(c) = \frac{N_c}{N_{doc}} \quad (4.11)"
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"To learn the probability P( f i |c), we'll assume a feature is just the existence of a word in the document's bag of words, and so we'll want P(w i |c), which we compute as the fraction of times the word w i appears among all words in all documents of topic c. We first concatenate all documents with category c into one big ""category c"" text. Then we use the frequency of w i in this concatenated document to give a maximum likelihood estimate of the probability:"
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"P(w i |c) = count(w i , c) w∈V count(w, c) (4.12)"
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"Here the vocabulary V consists of the union of all the word types in all classes, not just the words in one class c."
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"There is a problem, however, with maximum likelihood training. Imagine we are trying to estimate the likelihood of the word ""fantastic"" given class positive, but suppose there are no training documents that both contain the word ""fantastic"" and are classified as positive. Perhaps the word ""fantastic"" happens to occur (sarcastically?) in the class negative. In such a case the probability for this feature will be zero:P (""fantastic""|positive) = count(""fantastic"", positive)"
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"w∈V count(w, positive) = 0 (4.13)"
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"But since naive Bayes naively multiplies all the feature likelihoods together, zero probabilities in the likelihood term for any class will cause the probability of the class to be zero, no matter the other evidence! The simplest solution is the add-one (Laplace) smoothing introduced in Chapter 3. While Laplace smoothing is usually replaced by more sophisticated smoothing algorithms in language modeling, it is commonly used in naive Bayes text categorization:
\hat{P}(w_i|c) = \frac{count(w_i, c) + 1}{\sum_{w \in V} (count(w, c) + 1)} = \frac{count(w_i, c) + 1}{\left(\sum_{w \in V} count(w, c)\right) + |V|} \quad (4.14)"
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"Note once again that it is crucial that the vocabulary V consists of the union of all the word types in all classes, not just the words in one class c (try to convince yourself why this must be true; see the exercise at the end of the chapter). What do we do about words that occur in our test data but are not in our vocabulary at all because they did not occur in any training document in any class? The solution for such unknown words is to ignore them: remove them from the test document and not include any probability for them at all."
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"Finally, some systems choose to completely ignore another class of words: stop words, very frequent words like the and a. This can be done by sorting the vocabulary by frequency in the training set and defining the top 10-100 vocabulary entries as stop words, or alternatively by using one of the many predefined stop word lists available online. Then each instance of these stop words is simply removed from both training and test documents as if it had never occurred. In most text classification applications, however, using a stop word list doesn't improve performance, and so it is more common to make use of the entire vocabulary and not use a stop word list. Fig. 4.2 shows the final algorithm."
4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"function TRAIN NAIVE BAYES(D, C) returns log P(c) and log P(w|c)
  for each class c ∈ C                              # Calculate P(c) terms
    N_doc ← number of documents in D
    N_c ← number of documents from D in class c
    logprior[c] ← log (N_c / N_doc)
    V ← vocabulary of D
    bigdoc[c] ← append(d) for d ∈ D with class c
    for each word w in V                            # Calculate P(w|c) terms
      count(w,c) ← # of occurrences of w in bigdoc[c]
      loglikelihood[w,c] ← log ( (count(w,c) + 1) / Σ_{w' in V} (count(w',c) + 1) )
  return logprior, loglikelihood, V

function TEST NAIVE BAYES(testdoc, logprior, loglikelihood, C, V) returns best c
  for each class c ∈ C
    sum[c] ← logprior[c]
    for each position i in testdoc
      word ← testdoc[i]
      if word ∈ V
        sum[c] ← sum[c] + loglikelihood[word,c]
  return argmax_c sum[c]"
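4,Naive Bayes and Sentiment Classification,4.2,Training the Naive Bayes Classifier,,,"The following is a compact Python rendering of the pseudocode in Fig. 4.2, assuming documents are given as lists of tokens with a parallel list of class labels; the function and variable names are our own, not part of any standard library.
import math
from collections import Counter

def train_naive_bayes(docs, labels):
    # docs: list of token lists; labels: parallel list of class labels
    classes = set(labels)
    vocab = {w for doc in docs for w in doc}
    n_doc = len(docs)
    logprior, loglikelihood = {}, {}
    for c in classes:
        class_docs = [d for d, y in zip(docs, labels) if y == c]
        logprior[c] = math.log(len(class_docs) / n_doc)
        counts = Counter(w for d in class_docs for w in d)   # counts over bigdoc[c]
        denom = sum(counts[w] + 1 for w in vocab)            # add-one smoothing (Eq. 4.14)
        for w in vocab:
            loglikelihood[(w, c)] = math.log((counts[w] + 1) / denom)
    return logprior, loglikelihood, vocab

def test_naive_bayes(testdoc, logprior, loglikelihood, vocab):
    scores = {}
    for c in logprior:
        scores[c] = logprior[c]
        for w in testdoc:
            if w in vocab:                                   # ignore unknown words
                scores[c] += loglikelihood[(w, c)]
    return max(scores, key=scores.get)"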
4,Naive Bayes and Sentiment Classification,4.3,Worked Example,,,"Let's walk through an example of training and testing naive Bayes with add-one smoothing. We'll use a sentiment analysis domain with the two classes positive (+) and negative (-), and a miniature training set of five documents (three negative, two positive) plus one test document, simplified from actual movie reviews. The priors from Eq. 4.11 are P(−) = 3/5 and P(+) = 2/5. The word with doesn't occur in the training set, so we drop it completely (as mentioned above, we don't use unknown word models for naive Bayes). The likelihoods from the training set for the remaining three words ""predictable"", ""no"", and ""fun"" are as follows, from Eq. 4.14 (computing the probabilities for the remainder of the words in the training set is left as an exercise for the reader):
P(""predictable""|−) = (1 + 1)/(14 + 20)    P(""predictable""|+) = (0 + 1)/(9 + 20)
P(""no""|−) = (1 + 1)/(14 + 20)             P(""no""|+) = (0 + 1)/(9 + 20)
P(""fun""|−) = (0 + 1)/(14 + 20)            P(""fun""|+) = (1 + 1)/(9 + 20)"
4,Naive Bayes and Sentiment Classification,4.3,Worked Example,,,"For the test sentence S = ""predictable with no fun"", after removing the word 'with', the chosen class, via Eq. 4.9, is therefore computed as follows:"
4,Naive Bayes and Sentiment Classification,4.3,Worked Example,,,"P(−)P(S|−) = \frac{3}{5} \times \frac{2 \times 2 \times 1}{34^3} = 6.1 \times 10^{-5}
P(+)P(S|+) = \frac{2}{5} \times \frac{1 \times 1 \times 2}{29^3} = 3.2 \times 10^{-5}"
4,Naive Bayes and Sentiment Classification,4.3,Worked Example,,,The model thus predicts the class negative for the test sentence.
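4,Naive Bayes and Sentiment Classification,4.3,Worked Example,,,"As a quick sanity check on the arithmetic above, a few lines of Python reproduce these numbers (the counts 14, 9, and |V| = 20 come from the worked example):
# Priors times smoothed likelihoods for S = 'predictable no fun'
neg = (3/5) * ((1+1)/(14+20)) * ((1+1)/(14+20)) * ((0+1)/(14+20))
pos = (2/5) * ((0+1)/(9+20)) * ((0+1)/(9+20)) * ((1+1)/(9+20))
print(f'{neg:.2e} {pos:.2e}')                  # 6.10e-05 3.28e-05
print('negative' if neg > pos else 'positive') # negative"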
4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"While standard naive Bayes text classification can work well for sentiment analysis, some small changes are generally employed that improve performance. First, for sentiment classification and a number of other text classification tasks, whether a word occurs or not seems to matter more than its frequency. Thus it often improves performance to clip the word counts in each document at 1 (see the end of the chapter for pointers to these results). This variant is called binary multinomial naive Bayes or binary NB. The variant uses the same Eq. 4.10 except that for each document we remove all duplicate words before concatenating them into the single big document. Fig. 4.3 shows an example in which a set of four documents (shortened and text-normalized for this example) are remapped to binary, with the modified counts shown in the table on the right. The example is worked without add-1 smoothing to make the differences clearer. Note that the resulting counts need not be 1; the word great has a count of 2 even for binary NB, because it appears in multiple documents."
4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"The four original documents of Fig. 4.3 are:
− it was pathetic the worst part was the boxing scenes
− no plot twists or great scenes
+ and satire and great plot twists
+ great scenes great film
After per-document binarization, duplicate words within each document are removed before the counts are taken. A second important addition commonly made when doing text classification for sentiment is to deal with negation. Consider the difference between I really like this movie (positive) and I didn't like this movie (negative). The negation expressed by didn't completely alters the inferences we draw from the predicate like. Similarly, negation can modify a negative word to produce a positive review (don't dismiss this film, doesn't let us get bored)."
4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"A very simple baseline that is commonly used in sentiment analysis to deal with negation is the following: during text normalization, prepend the prefix NOT to every word after a token of logical negation (n't, not, no, never) until the next punctuation mark. Thus the phrase didn't like this movie , but I becomes didn't NOT_like NOT_this NOT_movie , but I. Newly formed 'words' like NOT_like and NOT_recommend will thus occur more often in negative documents and act as cues for negative sentiment, while words like NOT_bored and NOT_dismiss will acquire positive associations. We will return in Chapter 16 to the use of parsing to deal more accurately with the scope relationship between these negation words and the predicates they modify, but this simple baseline works quite well in practice."
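4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"A minimal sketch of this negation baseline in Python, operating on an already-tokenized input; the exact negation and punctuation sets here are illustrative assumptions:
PUNCT = {'.', ',', '!', '?', ';', ':'}
NEGATIONS = {'not', 'no', 'never'}

def mark_negation(tokens):
    # Prepend NOT_ to every token after a logical negation (n't, not, no, never),
    # until the next punctuation mark.
    out, negating = [], False
    for tok in tokens:
        low = tok.lower()
        if tok in PUNCT:
            negating = False
            out.append(tok)
        elif negating:
            out.append('NOT_' + tok)
        else:
            out.append(tok)
            if low in NEGATIONS or low.endswith('n\'t'):
                negating = True
    return out

tokens = ['didn\'t', 'like', 'this', 'movie', ',', 'but', 'I']
print(' '.join(mark_negation(tokens)))
# didn't NOT_like NOT_this NOT_movie , but I"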
4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"Finally, in some situations we might have insufficient labeled training data to train accurate naive Bayes classifiers using all words in the training set to estimate positive and negative sentiment. In such cases we can instead derive the positive and negative word features from sentiment lexicons, lists of words that are pre-annotated with positive or negative sentiment. Four popular lexicons are the General Inquirer (Stone et al., 1966), LIWC (Pennebaker et al., 2007), the opinion lexicon"
4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"of Hu and Liu (2004a), and the MPQA Subjectivity Lexicon (Wilson et al., 2005). For example the MPQA subjectivity lexicon has 6885 words, 2718 positive and 4912 negative, each marked for whether it is strongly or weakly biased. Some samples of positive and negative words from the MPQA lexicon include:
+ : admirable, beautiful, confident, dazzling, ecstatic, favor, glee, great
− : awful, bad, bias, catastrophe, cheat, deny, envious, foul, harsh, hate
A common way to use lexicons in a naive Bayes classifier is to add a feature that is counted whenever a word from that lexicon occurs. Thus we might add a feature called 'this word occurs in the positive lexicon', and treat all instances of words in the lexicon as counts for that one feature, instead of counting each word separately. Similarly, we might add a second feature, 'this word occurs in the negative lexicon', counted for all instances of words in the negative lexicon. If we have lots of training data, and if the test data matches the training data, using just two features won't work as well as using all the words. But when training data is sparse or not representative of the test set, using dense lexicon features instead of sparse individual-word features may generalize better."
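4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"A minimal sketch of turning a document into just these two lexicon features; the tiny word sets below are placeholders standing in for a real lexicon such as MPQA:
# Placeholder lexicons; a real system would load e.g. the MPQA Subjectivity Lexicon.
POSITIVE = {'admirable', 'beautiful', 'great', 'glee', 'dazzling'}
NEGATIVE = {'awful', 'bad', 'catastrophe', 'harsh', 'hate'}

def lexicon_features(tokens):
    # Two dense features: counts of tokens found in each lexicon.
    return {
        'in_positive_lexicon': sum(1 for t in tokens if t.lower() in POSITIVE),
        'in_negative_lexicon': sum(1 for t in tokens if t.lower() in NEGATIVE),
    }

print(lexicon_features(['a', 'great', 'but', 'harsh', 'and', 'awful', 'film']))
# {'in_positive_lexicon': 1, 'in_negative_lexicon': 2}"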
4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"We'll return to this use of lexicons in Chapter 20, showing how these lexicons can be learned automatically, and how they can be applied to many other tasks beyond sentiment classification."
4,Naive Bayes and Sentiment Classification,4.5,Naive Bayes for Other Text Classification Tasks,,,In the previous section we pointed out that naive Bayes doesn't require that our classifier use all the words in the training data as features. In fact features in naive Bayes can express any property of the input text we want.
4,Naive Bayes and Sentiment Classification,4.5,Naive Bayes for Other Text Classification Tasks,,,"Consider the task of spam detection, deciding if a particular piece of email is an example of spam (unsolicited bulk email), and one of the first applications of naive Bayes to text classification (Sahami et al., 1998). A common solution here, rather than using all the words as individual features, is to predefine likely sets of words or phrases as features, combined with features that are not purely linguistic. For example the open-source SpamAssassin tool predefines features like the phrase ""one hundred percent guaranteed"", or the feature mentions millions of dollars, which is a regular expression that matches suspiciously large sums of money. But it also includes features like HTML has a low ratio of text to image area, that aren't purely linguistic and might require some sophisticated computation, or totally non-linguistic features about, say, the path that the email took to arrive. More sample SpamAssassin features:"
4,Naive Bayes and Sentiment Classification,4.5,Naive Bayes for Other Text Classification Tasks,,,"• Email subject line is all capital letters • Contains phrases of urgency like ""urgent reply"" • Email subject line contains ""online pharmaceutical"""
4,Naive Bayes and Sentiment Classification,4.5,Naive Bayes for Other Text Classification Tasks,,,"• HTML has unbalanced ""head"" tags • Claims you can be removed from the list
For other tasks, like language id (determining what language a given piece of text is written in), the most effective naive Bayes features are not words at all, but character n-grams, 2-grams ('zw'), 3-grams ('nya', ' Vo'), or 4-grams ('ie z', 'thei'), or, even simpler, byte n-grams, where instead of using the multibyte Unicode character representations called codepoints, we just pretend everything is a string of raw bytes. Because spaces count as a byte, byte n-grams can model statistics about the beginning or ending of words. A widely used naive Bayes system, langid.py (Lui and Baldwin, 2012), begins with all possible n-grams of lengths 1-4, using feature selection to winnow down to the most informative 7000 final features. Language ID systems are trained on multilingual text, such as Wikipedia (Wikipedia text in 68 different languages was used in (Lui and Baldwin, 2011)), or newswire. To make sure that this multilingual text correctly reflects different regions, dialects, and socioeconomic classes, systems also add Twitter text in many languages geotagged to many regions (important for getting world English dialects from countries with large Anglophone populations like Nigeria or India), Bible and Quran translations, slang websites like Urban Dictionary, corpora of African American Vernacular English (Blodgett et al., 2016), and so on (Jurgens et al., 2017)."
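4,Naive Bayes and Sentiment Classification,4.5,Naive Bayes for Other Text Classification Tasks,,,"A minimal sketch of extracting byte n-gram features of lengths 1-4 for language ID (a helper of our own; feature selection and the classifier itself are omitted):
from collections import Counter

def byte_ngrams(text, n_min=1, n_max=4):
    # Treat the text as raw UTF-8 bytes; spaces are bytes too, so these n-grams
    # can capture statistics about word beginnings and endings.
    data = text.encode('utf-8')
    feats = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(data) - n + 1):
            feats[data[i:i + n]] += 1
    return feats

feats = byte_ngrams('ich weiss')
print(feats[b'ss'], feats[b' we'])   # 1 1"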
4,Naive Bayes and Sentiment Classification,4.6,Naive Bayes as a Language Model,,,"As we saw in the previous section, naive Bayes classifiers can use any sort of feature: dictionaries, URLs, email addresses, network features, phrases, and so on. But if, as in the previous section, we use only individual word features, and we use all of the words in the text (not a subset), then naive Bayes has an important similarity to language modeling. Specifically, a naive Bayes model can be viewed as a set of class-specific unigram language models, in which the model for each class instantiates a unigram language model."
4,Naive Bayes and Sentiment Classification,4.6,Naive Bayes as a Language Model,,,"Since the likelihood features from the naive Bayes model assign a probability to each word P(word|c), the model also assigns a probability to each sentence:"
4,Naive Bayes and Sentiment Classification,4.6,Naive Bayes as a Language Model,,,"P(s|c) = \prod_{i \in positions} P(w_i|c) \quad (4.15)"
4,Naive Bayes and Sentiment Classification,4.6,Naive Bayes as a Language Model,,,Thus consider a naive Bayes model with the classes positive (+) and negative (-) and the following model parameters:
4,Naive Bayes and Sentiment Classification,4.6,Naive Bayes as a Language Model,,,"w      P(w|+)  P(w|-)
I      0.1     0.2
love   0.1     0.001
this   0.01    0.01
fun    0.05    0.005
film   0.1     0.1
...    ...     ..."
4,Naive Bayes and Sentiment Classification,4.6,Naive Bayes as a Language Model,,,"Each of the two columns above instantiates a language model that can assign a probability to the sentence ""I love this fun film"":
P(""I love this fun film""|+) = 0.1 × 0.1 × 0.01 × 0.05 × 0.1 = 0.0000005
P(""I love this fun film""|−) = 0.2 × 0.001 × 0.01 × 0.005 × 0.1 = 0.0000000010"
4,Naive Bayes and Sentiment Classification,4.6,Naive Bayes as a Language Model,,,"As it happens, the positive model assigns a higher probability to the sentence: P(s|pos) > P(s|neg). Note that this is just the likelihood part of the naive Bayes model; once we multiply in the prior a full naive Bayes model might well make a different classification decision."
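4,Naive Bayes and Sentiment Classification,4.6,Naive Bayes as a Language Model,,,"A quick check of the two likelihoods above in Python (a sketch; the probabilities are the made-up parameters from the table):
import math

# P(w|c) for each word of 'I love this fun film', read off the table above
p_pos = math.prod([0.1, 0.1, 0.01, 0.05, 0.1])
p_neg = math.prod([0.2, 0.001, 0.01, 0.005, 0.1])
print(f'{p_pos:.1e} {p_neg:.1e}')   # 5.0e-07 1.0e-09
print(p_pos > p_neg)                # True: the + model assigns the higher likelihood"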
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"To introduce the methods for evaluating text classification, let's first consider some simple binary detection tasks. For example, in spam detection, our goal is to label every text as being in the spam category (""positive"") or not in the spam category (""negative""). For each item (email document) we therefore need to know whether our system called it spam or not. We also need to know whether the email is actually spam or not, i.e. the human-defined labels for each document that we are trying to match. We will refer to these human labels as the gold labels."
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"Or imagine you're the CEO of the Delicious Pie Company and you need to know what people are saying about your pies on social media, so you build a system that detects tweets concerning Delicious Pie. Here the positive class is tweets about Delicious Pie and the negative class is all other tweets."
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"In both cases, we need a metric for knowing how well our spam detector (or pie-tweet-detector) is doing. To evaluate any system for detecting things, we start by building a confusion matrix like the one shown in Fig. 4.4. A confusion matrix"
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"is a table for visualizing how an algorithm performs with respect to the human gold labels, using two dimensions (system output and gold labels), and each cell labeling a set of possible outcomes. In the spam detection case, for example, true positives are documents that are indeed spam (indicated by human-created gold labels) that our system correctly said were spam. False negatives are documents that are indeed spam but our system incorrectly labeled as non-spam."
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"To the bottom right of the table is the equation for accuracy, which asks what percentage of all the observations (for the spam or pie examples that means all emails or tweets) our system labeled correctly. Although accuracy might seem a natural metric, we generally don't use it for text classification tasks. That's because accuracy doesn't work well when the classes are unbalanced (as indeed they are with spam, which is a large majority of email, or with tweets, which are mainly not about pie). To make this more explicit, imagine that we looked at a million tweets, and let's say that only 100 of them are discussing their love (or hatred) for our pie,"
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"while the other 999,900 are tweets about something completely unrelated. Imagine a simple classifier that stupidly classified every tweet as ""not about pie"". This classifier would have 999,900 true negatives and only 100 false negatives for an accuracy of 999,900/1,000,000 or 99.99%! What an amazing accuracy level! Surely we should be happy with this classifier? But of course this fabulous 'no pie' classifier would be completely useless, since it wouldn't find a single one of the customer comments we are looking for. In other words, accuracy is not a good metric when the goal is to discover something that is rare, or at least not completely balanced in frequency, which is a very common situation in the world."
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"That's why instead of accuracy we generally turn to two other metrics shown in Fig. 4.4: precision and recall. Precision measures the percentage of the items that the system detected (i.e., labeled as positive) that are in fact positive according to the human gold labels. Recall measures the percentage of items actually present in the input that were correctly identified by the system. Precision and recall will help solve the problem with the useless ""nothing is pie"" classifier. This classifier, despite having a fabulous accuracy of 99.99%, has a terrible recall of 0 (since there are no true positives, and 100 false negatives, the recall is 0/100). You should convince yourself that the precision at finding relevant tweets is equally problematic. Thus precision and recall, unlike accuracy, emphasize true positives: finding the things that we are supposed to be looking for."
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"There are many ways to define a single metric that incorporates aspects of both precision and recall. The simplest of these combinations is the F-measure (van Rijsbergen, 1975), defined as:"
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"F_\beta = \frac{(\beta^2 + 1)PR}{\beta^2 P + R}"
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"The β parameter differentially weights the importance of recall and precision, based perhaps on the needs of an application. Values of β > 1 favor recall, while values of β < 1 favor precision. When β = 1, precision and recall are equally balanced; this is the most frequently used metric, and is called F_{β=1} or just F_1:"
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"F_1 = \frac{2PR}{P + R} \quad (4.16)"
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,F-measure comes from a weighted harmonic mean of precision and recall. The harmonic mean of a set of numbers is the reciprocal of the arithmetic mean of reciprocals:
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"HarmonicMean(a_1, a_2, a_3, a_4, ..., a_n) = \frac{n}{\frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3} + \cdots + \frac{1}{a_n}} \quad (4.17)"
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,and hence F-measure is
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"F = \frac{1}{\alpha \frac{1}{P} + (1 - \alpha) \frac{1}{R}} \quad \text{or, with } \beta^2 = \frac{1 - \alpha}{\alpha}: \quad F = \frac{(\beta^2 + 1)PR}{\beta^2 P + R} \quad (4.18)"
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,Harmonic mean is used because it is a conservative metric; the harmonic mean of two values is closer to the minimum of the two values than the arithmetic mean is. Thus it weighs the lower of the two numbers more heavily.
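4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",,,"A small Python sketch computing precision, recall, and F_β from confusion-matrix counts (helper functions of our own):
def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def f_measure(p, r, beta=1.0):
    # Eq. 4.16 / 4.18: weighted harmonic mean of precision and recall
    if p == 0.0 and r == 0.0:
        return 0.0
    return (beta**2 + 1) * p * r / (beta**2 * p + r)

# The useless 'no pie' classifier from the text: 0 true positives, 100 false negatives.
p, r = precision(0, 0), recall(0, 100)
print(p, r, f_measure(p, r))   # 0.0 0.0 0.0"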
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",4.7.1,Evaluating with more than two classes,"Up to now we have been describing text classification tasks with only two classes. But lots of classification tasks in language processing have more than two classes. For sentiment analysis we generally have 3 classes (positive, negative, neutral) and even more classes are common for tasks like part-of-speech tagging, word sense disambiguation, semantic role labeling, emotion detection, and so on. Luckily the naive Bayes algorithm is already a multi-class classification algorithm. But we'll need to slightly modify our definitions of precision and recall. Consider the sample confusion matrix for a hypothetical 3-way one-of email categorization decision (urgent, normal, spam) shown in Fig. 4.5. The matrix shows, for example, that the system mistakenly labeled one spam document as urgent, and we have shown how to compute a distinct precision and recall value for each class. In order to derive a single metric that tells us how well the system is doing, we can combine these values in two ways. In macroaveraging, we compute the performance for each class, and then average over classes. In microaveraging, we collect the decisions for all classes into a single confusion matrix, and then compute precision and recall from that table. Fig. 4.6 shows the confusion matrix for each class separately, and shows the computation of microaveraged and macroaveraged precision."
4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",4.7.1,Evaluating with more than two classes,"As the figure shows, a microaverage is dominated by the more frequent class (in this case spam), since the counts are pooled. The macroaverage better reflects the statistics of the smaller classes, and so is more appropriate when performance on all the classes is equally important."
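4,Naive Bayes and Sentiment Classification,4.7,"Evaluation: Precision, Recall, F-measure",4.7.1,Evaluating with more than two classes,"A sketch of the two ways of averaging, given per-class true-positive and false-positive counts; the counts below are invented for illustration, not the ones in Fig. 4.6:
# Hypothetical per-class counts of true positives and false positives.
per_class = {
    'urgent': {'tp': 8, 'fp': 11},
    'normal': {'tp': 60, 'fp': 55},
    'spam':   {'tp': 200, 'fp': 33},
}

# Macroaverage: compute precision per class, then average the precisions.
macro_p = sum(c['tp'] / (c['tp'] + c['fp']) for c in per_class.values()) / len(per_class)

# Microaverage: pool the counts into one table, then compute precision once.
tp = sum(c['tp'] for c in per_class.values())
fp = sum(c['fp'] for c in per_class.values())
micro_p = tp / (tp + fp)

print(f'macro {macro_p:.2f}  micro {micro_p:.2f}')   # the microaverage is pulled toward the large spam class"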
4,Naive Bayes and Sentiment Classification,4.8,Test sets and Cross-validation,,,"The training and testing procedure for text classification follows what we saw with language modeling (Section 3.2): we use the training set to train the model, then use the development test set (also called a devset) to perhaps tune some parameters, and in general decide what the best model is. Once we come up with what we think is the best model, we run it on the (hitherto unseen) test set to report its performance."
4,Naive Bayes and Sentiment Classification,4.8,Test sets and Cross-validation,,,"While the use of a devset avoids overfitting the test set, having a fixed training set, devset, and test set creates another problem: in order to save lots of data for training, the test set (or devset) might not be large enough to be representative. Wouldn't it be better if we could somehow use all our data for training and still use all our data for test? We can do this by cross-validation."
4,Naive Bayes and Sentiment Classification,4.8,Test sets and Cross-validation,,,"In cross-validation, we choose a number k, and partition our data into k disjoint subsets called folds. Now we choose one of those k folds as a test set, train our classifier on the remaining k − 1 folds, and then compute the error rate on the test set. Then we repeat with another fold as the test set, again training on the other k − 1 folds. We do this sampling process k times and average the test set error rate from these k runs to get an average error rate. If we choose k = 10, we would train 10 different models (each on 90% of our data), test the model 10 times, and average these 10 values. This is called 10-fold cross-validation."
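4,Naive Bayes and Sentiment Classification,4.8,Test sets and Cross-validation,,,"A minimal sketch of k-fold cross-validation in plain Python; train_and_eval is a stand-in for any function that trains a classifier on the training folds and returns its error rate on the held-out fold:
def k_fold_cross_validation(data, k, train_and_eval):
    # Partition the data into k folds; each fold serves once as the test set.
    folds = [data[i::k] for i in range(k)]
    error_rates = []
    for i in range(k):
        test_fold = folds[i]
        train_folds = [x for j, f in enumerate(folds) if j != i for x in f]
        error_rates.append(train_and_eval(train_folds, test_fold))
    return sum(error_rates) / k

# Toy usage with a fake evaluator that just reports the size of the test fold.
avg = k_fold_cross_validation(list(range(100)), 10, lambda train, test: len(test))
print(avg)   # 10.0"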
4,Naive Bayes and Sentiment Classification,4.8,Test sets and Cross-validation,,,"The only problem with cross-validation is that because all the data is used for testing, we need the whole corpus to be blind; we can't examine any of the data to suggest possible features and in general see what's going on, because we'd be peeking at the test set, and such cheating would cause us to overestimate the performance of our system. However, looking at the corpus to understand what's going on is important in designing NLP systems! What to do? For this reason, it is common to create a fixed training set and test set, then do 10-fold cross-validation inside the training set, but compute error rate the normal way in the test set, as shown in Fig. 4.7."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"In building systems we often need to compare the performance of two systems. How can we know if the new system we just built is better than our old one? Or better than some other system described in the literature? This is the domain of statistical hypothesis testing, and in this section we introduce tests for statistical significance for NLP classifiers, drawing especially on the work of Dror et al. (2020). Suppose we are comparing the performance of two classifiers on some metric M, such as F_1 or accuracy. Perhaps we want to know if our logistic regression sentiment classifier A (Chapter 5) gets a higher F_1 score than our naive Bayes sentiment classifier B on a particular test set x. Let's call M(A, x) the score that system A gets on test set x, and δ(x) the performance difference between A and B on x:"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"δ(x) = M(A, x) − M(B, x)   (4.19)"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"We would like to know if δ(x) > 0, meaning that our logistic regression classifier has a higher F_1 than our naive Bayes classifier on x. δ(x) is called the effect size;"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"a bigger δ means that A seems to be way better than B; a small δ means A seems to be only a little better. Why don't we just check if δ(x) is positive? Suppose we do, and we find that the F_1 score of A is higher than B's by .04. Can we be certain that A is better? We cannot! That's because A might just be accidentally better than B on this particular x. We need something more: we want to know if A's superiority over B is likely to hold again if we checked another test set x′, or under some other set of circumstances."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"In the paradigm of statistical hypothesis testing, we test this by formalizing two hypotheses."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"H_0: δ(x) ≤ 0
H_1: δ(x) > 0   (4.20)"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"The hypothesis H_0, called the null hypothesis, supposes that δ(x) is actually negative or zero, meaning that A is not better than B. We would like to know if we can confidently rule out this hypothesis, and instead support H_1, that A is better."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"We do this by creating a random variable X ranging over all test sets. Now we ask how likely it is, if the null hypothesis H_0 were correct, that among these test sets we would encounter the value of δ(x) that we found. We formalize this likelihood as the p-value: the probability, assuming the null hypothesis H_0 is true, of seeing the δ(x) that we saw or one even greater:"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"P(δ(X) ≥ δ(x) | H_0 is true)   (4.21)"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"So in our example, this p-value is the probability that we would see δ (x) assuming A is not better than B. If δ (x) is huge (let's say A has a very respectable F 1 of .9 and B has a terrible F 1 of only .2 on x), we might be surprised, since that would be extremely unlikely to occur if H 0 were in fact true, and so the p-value would be low (unlikely to have such a large δ if A is in fact not better than B). But if δ (x) is very small, it might be less surprising to us even if H 0 were true and A is not really better than B, and so the p-value would be higher."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"A very small p-value means that the difference we observed is very unlikely under the null hypothesis, and we can reject the null hypothesis. What counts as very small? It is common to use values like .05 or .01 as the thresholds. A value of .01 means that if the p-value (the probability of observing the δ we saw assuming H_0 is true) is less than .01, we reject the null hypothesis and assume that A is indeed better than B. We say that a result (e.g., ""A is better than B"") is statistically significant if the δ we saw has a probability that is below the threshold and we therefore reject this null hypothesis. How do we compute this probability we need for the p-value? In NLP we generally don't use simple parametric tests like t-tests or ANOVAs that you might be familiar with. Parametric tests make assumptions about the distributions of the test statistic (such as normality) that don't generally hold in our cases. So in NLP we usually use non-parametric tests based on sampling: we artificially create many versions of the experimental setup. For example, if we had lots of different test sets x′ we could just measure all the δ(x′) for all the x′. That gives us a distribution. Now we set a threshold (like .01) and if we see in this distribution that 99% or more of those deltas are smaller than the delta we observed, i.e., that p-value(x), the probability of seeing a δ(x) as big as the one we saw, is less than .01, then we can reject the null hypothesis and agree that δ(x) was a sufficiently surprising difference and A is really a better algorithm than B."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,,,"There are two common non-parametric tests used in NLP: approximate randomization (Noreen, 1989) and the bootstrap test. We will describe the bootstrap test below, showing the paired version of the test, which again is most common in NLP. Paired tests are those in which we compare two sets of observations that are aligned: each observation in one set can be paired with an observation in another. This happens naturally when we are comparing the performance of two systems on the same test set; we can pair the performance of system A on an individual observation x_i with the performance of system B on the same x_i."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"The bootstrap test (Efron and Tibshirani, 1993) can apply to any metric: from precision, recall, or F1 to the BLEU metric used in machine translation. The word bootstrapping refers to repeatedly drawing large numbers of smaller samples with replacement (called bootstrap samples) from an original larger sample. The intuition of the bootstrap test is that we can create many virtual test sets from an observed test set by repeatedly sampling from it. The method only makes the assumption that the sample is representative of the population."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"Consider a tiny text classification example with a test set x of 10 documents. The first row of Fig. 4.8 shows the results of two classifiers (A and B) on this test set, with each document labeled by one of the four possibilities (A and B both right, both wrong, A right and B wrong, A wrong and B right); a slash through a letter means that that classifier got the answer wrong. On the first document both A and B get the correct class, while on the second document A got it right but B got it wrong. If we assume for simplicity that our metric is accuracy, A has an accuracy of .70 and B of .50, so δ(x) is .20. Now we create a large number b (perhaps 10^5) of virtual test sets x^(i), each of size n = 10. Fig. 4.8 shows a couple of examples. To create each virtual test set x^(i), we repeatedly (n = 10 times) select a cell from row x with replacement. For example, to create the first cell of the first virtual test set x^(1), if we happened to randomly select the second cell of the x row, we would copy its value into our new cell, and move on to create the second cell of x^(1), each time sampling (randomly choosing) from the original x with replacement. Now that we have the b test sets, providing a sampling distribution, we can do statistics on how often A has an accidental advantage. There are various ways to compute this advantage; here we follow the version laid out in Berg-Kirkpatrick et al. (2012). Assuming H_0 (A isn't better than B), we would expect that δ(X), estimated over many test sets, would be zero; a much higher value would be surprising, since H_0 specifically assumes A isn't better than B. To measure exactly how surprising our observed δ(x) is, we would in other circumstances compute the p-value by counting over many test sets how often δ(x^(i)) exceeds the expected zero value by δ(x) or more:"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"[Fig. 4.8: The original test set x and two of the b bootstrap test sets x^(1) and x^(2), each created by sampling 10 cells from x with replacement. The final columns give each system's accuracy and the difference on that set: x has A = .70, B = .50, δ = .20; x^(1) has .60, .60, .00; x^(2) has .60, .70, −.10.]"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"\text{p-value}(x) = \frac{1}{b} \sum_{i=1}^{b} \mathbb{1}\left( \delta(x^{(i)}) - \delta(x) \geq 0 \right)"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"(We use the notation 1(x) to mean ""1 if x is true, and 0 otherwise"".) However, although it's generally true that the expected value of δ(X) over many test sets (again assuming A isn't better than B) is 0, this isn't true for the bootstrapped test sets we created. That's because we didn't draw these samples from a distribution with 0 mean; we happened to create them from the original test set x, which happens to be biased (by .20) in favor of A. So to measure how surprising our observed δ(x) is, we actually compute the p-value by counting over many test sets how often δ(x^(i)) exceeds the expected value of δ(x) by δ(x) or more:"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"\text{p-value}(x) = \frac{1}{b} \sum_{i=1}^{b} \mathbb{1}\left( \delta(x^{(i)}) - \delta(x) \geq \delta(x) \right) = \frac{1}{b} \sum_{i=1}^{b} \mathbb{1}\left( \delta(x^{(i)}) \geq 2\delta(x) \right) \quad (4.22)"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"So if for example we have 10,000 test sets x (i) and a threshold of .01, and in only 47 of the test sets do we find that δ (x (i) ) ≥ 2δ (x), the resulting p-value of .0047 is smaller than .01, indicating δ (x) is indeed sufficiently surprising, and we can reject the null hypothesis and conclude A is better than B."
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"function BOOTSTRAP(test set x, num of samples b) returns p-value(x)
  Calculate δ(x)                    # how much better does algorithm A do than B on x
  s ← 0
  for i = 1 to b do
    for j = 1 to n do               # Draw a bootstrap sample x^(i) of size n
      Select a member of x at random and add it to x^(i)
    Calculate δ(x^(i))              # how much better does algorithm A do than B on x^(i)
    s ← s + 1 if δ(x^(i)) ≥ 2δ(x)
  p-value(x) ≈ s/b                  # on what % of the b samples did algorithm A beat expectations?
  return p-value(x)                 # if very few did, our observed δ is probably not accidental"
4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"The full algorithm for the bootstrap is shown in Fig. 4.9. It is given a test set x and a number of samples b, and counts the percentage of the b bootstrap test sets in which δ(x^(i)) ≥ 2δ(x). This percentage then acts as a one-sided empirical p-value."
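4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,"A runnable Python version of the paired bootstrap is sketched below, under the simplifying assumptions that the metric is accuracy and that each test item is scored 0/1 for both systems; the function and variable names are our own:
import random

def paired_bootstrap(a_correct, b_correct, b_samples=10000, seed=0):
    # a_correct, b_correct: parallel 0/1 lists, one entry per test item, for systems A and B.
    rng = random.Random(seed)
    n = len(a_correct)
    delta_x = (sum(a_correct) - sum(b_correct)) / n      # observed accuracy difference
    s = 0
    for _ in range(b_samples):
        idx = [rng.randrange(n) for _ in range(n)]       # sample n items with replacement
        delta_i = sum(a_correct[j] - b_correct[j] for j in idx) / n
        if delta_i >= 2 * delta_x:                       # Eq. 4.22
            s += 1
    return s / b_samples

# A 10-document test set matching the accuracies in the example above (A = .70, B = .50);
# the exact per-document pattern here is invented.
a = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0]
b = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
print(paired_bootstrap(a, b))   # well above .05 for such a tiny test set, so we cannot reject H_0"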
4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"It is important to avoid harms that may result from classifiers, harms that exist both for naive Bayes classifiers and for the other classification algorithms we introduce in later chapters."
4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"One class of harms is representational harms (Crawford 2017, Blodgett et al. 2020), harms caused by a system that demeans a social group, for example by perpetuating negative stereotypes about them. For example Kiritchenko and Mohammad (2018) examined the performance of 200 sentiment analysis systems on pairs of sentences that were identical except for containing either a common African American first name (like Shaniqua) or a common European American first name (like Stephanie), chosen from the Caliskan et al. (2017) study discussed in Chapter 6. They found that most systems assigned lower sentiment and more negative emotion to sentences with African American names, reflecting and perpetuating stereotypes that associate African Americans with negative emotions (Popp et al., 2003). In other tasks classifiers may lead to both representational harms and other harms, such as censorship. For example the important text classification task of toxicity detection is the task of detecting hate speech, abuse, harassment, or other kinds of toxic language. While the goal of such classifiers is to help reduce societal harm, toxicity classifiers can themselves cause harms. For example, researchers have shown that some widely used toxicity classifiers incorrectly flag as being toxic sentences that are non-toxic but simply mention minority identities like women (Park et al., 2018), blind people (Hutchinson et al., 2020) or gay people (Dixon et al., 2018), or simply use linguistic features characteristic of varieties like African-American Vernacular English (Sap et al. 2019, Davidson et al. 2019). Such false positive errors, if employed by toxicity detection systems without human oversight, could lead to the censoring of discourse by or about these groups."
4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"These model problems can be caused by biases or other problems in the training data; in general, machine learning systems replicate and even amplify the biases in their training data. But these problems can also be caused by the labels (for example due to biases in the human labelers), by the resources used (like lexicons, or model components like pretrained embeddings), or even by model architecture (like what the model is trained to optimize). While the mitigation of these biases (for example by carefully considering the training data sources) is an important area of research, we currently don't have general solutions. For this reason it's important, when introducing any NLP model, to study these kinds of factors and make them clear. One way to do this is by releasing a model card (Mitchell et al., 2019) for each version of a model. A model card documents a machine learning model with information like:"
4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"• training algorithms and parameters • training data sources, motivation, and preprocessing • evaluation data sources, motivation, and preprocessing • intended use and users • model performance across different demographic or other groups and environmental situations"
4,Naive Bayes and Sentiment Classification,4.11,Summary,,,This chapter introduced the naive Bayes model for classification and applied it to the text categorization task of sentiment analysis.
4,Naive Bayes and Sentiment Classification,4.11,Summary,,,• Many language processing tasks can be viewed as tasks of classification.
4,Naive Bayes and Sentiment Classification,4.11,Summary,,,"• Text categorization, in which an entire text is assigned a class from a finite set, includes such tasks as sentiment analysis, spam detection, language identification, and authorship attribution. • Sentiment analysis classifies a text as reflecting the positive or negative orientation (sentiment) that a writer expresses toward some object. • Naive Bayes is a generative model that makes the bag of words assumption (position doesn't matter) and the conditional independence assumption (words are conditionally independent of each other given the class). • Naive Bayes with binarized features seems to work better for many text classification tasks. • Classifiers are evaluated based on precision and recall."
4,Naive Bayes and Sentiment Classification,4.11,Summary,,,"• Classifiers are trained using distinct training, dev, and test sets, including the use of cross-validation in the training set. • Statistical significance tests should be used to determine whether we can be confident that one version of a classifier is better than another. • Designers of classifiers should carefully consider harms that may be caused by the model, including its training data and other components, and report model characteristics in a model card."
4,Naive Bayes and Sentiment Classification,4.12,Bibliographical and Historical Notes,,,"Multinomial naive Bayes text classification was proposed by Maron (1961) at the RAND Corporation for the task of assigning subject categories to journal abstracts. His model introduced most of the features of the modern form presented here, approximating the classification task with one-of categorization, and implementing add-δ smoothing and information-based feature selection."
4,Naive Bayes and Sentiment Classification,4.12,Bibliographical and Historical Notes,,,"The conditional independence assumptions of naive Bayes and the idea of Bayesian analysis of text seems to have arisen multiple times. The same year as Maron's paper, Minsky (1961) proposed a naive Bayes classifier for vision and other artificial intelligence problems, and Bayesian techniques were also applied to the text classification task of authorship attribution by Mosteller and Wallace (1963) . It had long been known that Alexander Hamilton, John Jay, and James Madison wrote the anonymously-published Federalist papers in 1787-1788 to persuade New York to ratify the United States Constitution. Yet although some of the 85 essays were clearly attributable to one author or another, the authorship of 12 were in dispute between Hamilton and Madison. Mosteller and Wallace (1963) trained a Bayesian probabilistic model of the writing of Hamilton and another model on the writings of Madison, then computed the maximum-likelihood author for each of the disputed essays. Naive Bayes was first applied to spam detection in Heckerman et al. (1998) ."
4,Naive Bayes and Sentiment Classification,4.12,Bibliographical and Historical Notes,,,"Metsis et al. 2006, Pang et al. 2002, and Wang and Manning 2012 show that using boolean attributes with multinomial naive Bayes works better than full counts. Binary multinomial naive Bayes is sometimes confused with another variant of naive Bayes that also uses a binary representation of whether a term occurs in a document: Multivariate Bernoulli naive Bayes. The Bernoulli variant instead estimates P(w|c) as the fraction of documents that contain a term, and includes a probability for whether a term is not in a document. McCallum and Nigam (1998) and Wang and Manning (2012) show that the multivariate Bernoulli variant of naive Bayes doesn't work as well as the multinomial algorithm for sentiment or other text tasks."
4,Naive Bayes and Sentiment Classification,4.12,Bibliographical and Historical Notes,,,"There are a variety of sources covering the many kinds of text classification tasks. For sentiment analysis see Pang and Lee (2008) and Liu and Zhang (2012). Stamatatos (2009) surveys authorship attribution algorithms. On language identification see Jauhiainen et al. (2018); Jaech et al. (2016) is an important early neural system. The task of newswire indexing was often used as a test case for text classification algorithms, based on the Reuters-21578 collection of newswire articles."
4,Naive Bayes and Sentiment Classification,4.12,Bibliographical and Historical Notes,,,"See Manning et al. (2008) and Aggarwal and Zhai (2012) on text classification; classification in general is covered in machine learning textbooks (Hastie et al. 2001, Witten and Frank 2005, Bishop 2006, Murphy 2012)."
4,Naive Bayes and Sentiment Classification,4.12,Bibliographical and Historical Notes,,,"Non-parametric methods for computing statistical significance were used first in NLP in the MUC competition (Chinchor et al., 1993), and even earlier in speech recognition (Gillick and Cox 1989, Bisani and Ney 2004). Our description of the bootstrap draws on the description in Berg-Kirkpatrick et al. (2012). Recent work has focused on issues including multiple test sets and multiple metrics (Søgaard et al. 2014, Dror et al. 2017)."
4,Naive Bayes and Sentiment Classification,4.12,Bibliographical and Historical Notes,,,"Feature selection is a method of removing features that are unlikely to generalize well. Features are generally ranked by how informative they are about the classification decision. A very common metric, information gain, tells us how many bits of information the presence of the word gives us for guessing the class. Other feature selection metrics include χ^2, pointwise mutual information, and GINI index; see Yang and Pedersen (1997) for a comparison and Guyon and Elisseeff (2003) for an introduction to feature selection."
5,Logistic Regression,,,,,Detective stories are as littered with clues as texts are with words. Yet for the poor reader it can be challenging to know how to weigh the author's clues in order to make the crucial classification task: deciding whodunnit.
5,Logistic Regression,,,,,In this chapter we introduce an algorithm that is admirably suited for discovering the link between features or cues and some particular outcome: logistic regression.
5,Logistic Regression,,,,,"Indeed, logistic regression is one of the most important analytic tools in the social and natural sciences. In natural language processing, logistic regression is the baseline supervised machine learning algorithm for classification, and also has a very close relationship with neural networks. As we will see in Chapter 7, a neural network can be viewed as a series of logistic regression classifiers stacked on top of each other. Thus the classification and machine learning techniques introduced here will play an important role throughout the book."
5,Logistic Regression,,,,,"Logistic regression can be used to classify an observation into one of two classes (like 'positive sentiment' and 'negative sentiment'), or into one of many classes. Because the mathematics for the two-class case is simpler, we'll describe this special case of logistic regression first in the next few sections, and then briefly summarize the use of multinomial logistic regression for more than two classes in Section 5.6."
5,Logistic Regression,,,,,We'll introduce the mathematics of logistic regression in the next few sections. But let's begin with some high-level issues.
5,Logistic Regression,,,,,Generative and Discriminative Classifiers: The most important difference between naive Bayes and logistic regression is that logistic regression is a discriminative classifier while naive Bayes is a generative classifier.
5,Logistic Regression,,,,,"These are two very different frameworks for how to build a machine learning model. Consider a visual metaphor: imagine we're trying to distinguish dog images from cat images. A generative model would have the goal of understanding what dogs look like and what cats look like. You might literally ask such a model to 'generate', i.e., draw, a dog. Given a test image, the system then asks whether it's the cat model or the dog model that better fits (is less surprised by) the image, and chooses that as its label."
5,Logistic Regression,,,,,"A discriminative model, by contrast, is only trying to learn to distinguish the classes (perhaps without learning much about them). So maybe all the dogs in the training data are wearing collars and the cats aren't. If that one feature neatly separates the classes, the model is satisfied. If you ask such a model what it knows about cats all it can say is that they don't wear collars."
5,Logistic Regression,,,,,"More formally, recall that naive Bayes assigns a class c to a document d not by directly computing P(c|d) but by computing a likelihood and a prior:
\hat{c} = \argmax_{c \in C} P(d|c)\,P(c) \quad (5.1)"
5,Logistic Regression,,,,,"A generative model like naive Bayes makes use of this likelihood term, which expresses how to generate the features of a document if we knew it was of class c. By contrast a discriminative model in this text categorization scenario attempts to directly compute P(c|d). Perhaps it will learn to assign a high weight to document features that directly improve its ability to discriminate between possible classes, even if it couldn't generate an example of one of the classes."
5,Logistic Regression,,,,,"Components of a probabilistic machine learning classifier: Like naive Bayes, logistic regression is a probabilistic classifier that makes use of supervised machine learning. Machine learning classifiers require a training corpus of m input/output pairs (x^(i), y^(i)). (We'll use superscripts in parentheses to refer to individual instances in the training set; for sentiment classification each instance might be an individual document to be classified.) A machine learning system for classification then has four components:"
5,Logistic Regression,,,,,"1. A feature representation of the input. For each input observation x^(i), this will be a vector of features [x_1, x_2, ..., x_n]. We will generally refer to feature i for input x^(j) as x^(j)_i, sometimes simplified as x_i, but we will also see the notation f_i, f_i(x), or, for multiclass classification, f_i(c, x)."
5,Logistic Regression,,,,,"2. A classification function that computes ŷ, the estimated class, via p(y|x). In the next section we will introduce the sigmoid and softmax tools for classification."
5,Logistic Regression,,,,,"3. An objective function for learning, usually involving minimizing error on training examples. We will introduce the cross-entropy loss function."
5,Logistic Regression,,,,,4. An algorithm for optimizing the objective function. We introduce the stochastic gradient descent algorithm.
5,Logistic Regression,,,,,Logistic regression has two phases:
5,Logistic Regression,,,,,training: we train the system (specifically the weights w and b) using stochastic gradient descent and the cross-entropy loss.
5,Logistic Regression,,,,,test: Given a test example x we compute p(y|x) and return the higher probability label y = 1 or y = 0.
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"The goal of binary logistic regression is to train a classifier that can make a binary decision about the class of a new input observation. Here we introduce the sigmoid classifier that will help us make this decision. Consider a single input observation x, which we will represent by a vector of features [x_1, x_2, ..., x_n] (we'll show sample features in the next subsection). The classifier output y can be 1 (meaning the observation is a member of the class) or 0 (the observation is not a member of the class). We want to know the probability P(y = 1|x) that this observation is a member of the class. So perhaps the decision is ""positive sentiment"" versus ""negative sentiment"", the features represent counts of words in a document, P(y = 1|x) is the probability that the document has positive sentiment, and P(y = 0|x) is the probability that the document has negative sentiment. Logistic regression solves this task by learning, from a training set, a vector of weights and a bias term. Each weight w_i is a real number, and is associated with one of the input features x_i. The weight w_i represents how important that input feature is to the classification decision, and can be positive (providing evidence that the instance being classified belongs in the positive class) or negative (providing evidence that the instance being classified belongs in the negative class). Thus we might expect in a sentiment task the word awesome to have a high positive weight, and abysmal to have a very negative weight. The bias term, also called the intercept, is another real number that's added to the weighted inputs."
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"To make a decision on a test instance-after we've learned the weights in training-the classifier first multiplies each x i by its weight w i , sums up the weighted features, and adds the bias term b. The resulting single number z expresses the weighted sum of the evidence for the class."
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"z = \left( \sum_{i=1}^{n} w_i x_i \right) + b \quad (5.2)"
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"In the rest of the book we'll represent such sums using the dot product notation from linear algebra. The dot product of two vectors a and b, written as a · b, is the sum of the products of the corresponding elements of each vector. Thus the following is an equivalent formulation of Eq. 5.2:"
5,Logistic Regression,5.1,Classification: the Sigmoid,,,z = w • x + b (5.3)
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"But note that nothing in Eq. 5.3 forces z to be a legal probability, that is, to lie between 0 and 1. In fact, since weights are real-valued, the output might even be negative; z ranges from −∞ to ∞. To create a probability, we'll pass z through the sigmoid function, σ(z). The sigmoid function (named because it looks like an s) is also called the logistic function, and gives logistic regression its name. The sigmoid has the following equation,"
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"\sigma(z) = \frac{1}{1 + e^{-z}} = \frac{1}{1 + \exp(-z)} \quad (5.4)"
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"(For the rest of the book, we'll use the notation exp(x) to mean e x .) The sigmoid has a number of advantages; it takes a real-valued number and maps it into the range [0, 1], which is just what we want for a probability. Because it is nearly linear around 0 but flattens toward the ends, it tends to squash outlier values toward 0 or 1. And it's differentiable, which as we'll see in Section 5.8 will be handy for learning."
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"We're almost there. If we apply the sigmoid to the sum of the weighted features, we get a number between 0 and 1. To make it a probability, we just need to make sure that the two cases, p(y = 1) and p(y = 0), sum to 1. We can do this as follows:"
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"P(y = 1) = \sigma(w \cdot x + b) = \frac{1}{1 + \exp(-(w \cdot x + b))}
P(y = 0) = 1 - \sigma(w \cdot x + b) = 1 - \frac{1}{1 + \exp(-(w \cdot x + b))} = \frac{\exp(-(w \cdot x + b))}{1 + \exp(-(w \cdot x + b))} \quad (5.5)"
5,Logistic Regression,5.1,Classification: the Sigmoid,,,The sigmoid function has the property
5,Logistic Regression,5.1,Classification: the Sigmoid,,,1 - \sigma(x) = \sigma(-x) \qquad (5.6)
5,Logistic Regression,5.1,Classification: the Sigmoid,,,so we could also have expressed P(y = 0) as σ (−(w • x + b)).
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"Now we have an algorithm that given an instance x computes the probability P(y = 1|x). How do we make a decision? For a test instance x, we say yes if the probability P(y = 1|x) is more than .5, and no otherwise. We call .5 the decision boundary:"
5,Logistic Regression,5.1,Classification: the Sigmoid,,,"decision(x) = 1 if P(y = 1|x) > 0.5, and 0 otherwise"
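5,Logistic Regression,5.1,Classification: the Sigmoid,,,"As a minimal sketch of Eq. 5.2-5.5 and the 0.5 decision boundary (not from the text; the function names are ours), the classifier can be written in a few lines of Python with NumPy:

```python
import numpy as np

def sigmoid(z):
    # Map a real-valued score z into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(w, x, b):
    # P(y = 1 | x) = sigmoid(w . x + b), as in Eq. 5.5.
    return sigmoid(np.dot(w, x) + b)

def decide(w, x, b, threshold=0.5):
    # Decision rule with a 0.5 decision boundary.
    return 1 if predict_proba(w, x, b) > threshold else 0
```
"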
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"Let's have an example. Suppose we are doing binary sentiment classification on movie review text, and we would like to know whether to assign the sentiment class + or − to a review document doc. We'll represent each input observation by the 6 features x_1 ... x_6 shown in the following table, along with their values for a sample review (the feature vector used below in Eq. 5.7):"
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"Var   Definition                                   Value in the example
x_1   count(positive lexicon words ∈ doc)          3
x_2   count(negative lexicon words ∈ doc)          2
x_3   1 if ""no"" ∈ doc, 0 otherwise               1
x_4   count(1st and 2nd person pronouns ∈ doc)     3
x_5   1 if ""!"" ∈ doc, 0 otherwise                0
x_6   log(word count of doc)                       ln(66) = 4.19"
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"Let's assume for the moment that we've already learned a real-valued weight for each of these features, and that the 6 weights corresponding to the 6 features are [2.5, −5.0, −1.2, 0.5, 2.0, 0.7], while b = 0.1. (We'll discuss in the next section how the weights are learned.) The weight w_1, for example, indicates how important a feature the number of positive lexicon words (great, nice, enjoyable, etc.) is to a positive sentiment decision, while w_2 tells us the importance of negative lexicon words. Note that w_1 = 2.5 is positive, while w_2 = −5.0, meaning that negative words are negatively associated with a positive sentiment decision, and are about twice as important as positive words."
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"Given these 6 features and the input review x, P(+|x) and P(−|x) can be computed using Eq. 5.5:"
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"p(+|x) = P(y = 1|x) = \sigma(w \cdot x + b) = \sigma([2.5, -5.0, -1.2, 0.5, 2.0, 0.7] \cdot [3, 2, 1, 3, 0, 4.19] + 0.1) = \sigma(0.833) = 0.70, \qquad p(-|x) = P(y = 0|x) = 1 - \sigma(w \cdot x + b) = 0.30 \qquad (5.7)"
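5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"The numbers in Eq. 5.7 are easy to check. The following sketch (ours, not from the text) simply replays the computation with NumPy:

```python
import numpy as np

# Feature vector and weights from the worked example in Eq. 5.7.
x = np.array([3, 2, 1, 3, 0, 4.19])
w = np.array([2.5, -5.0, -1.2, 0.5, 2.0, 0.7])
b = 0.1

z = np.dot(w, x) + b              # 0.833
p_pos = 1.0 / (1.0 + np.exp(-z))  # ~0.70
p_neg = 1.0 - p_pos               # ~0.30
print(round(z, 3), round(p_pos, 2), round(p_neg, 2))
```
"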
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"Logistic regression is commonly applied to all sorts of NLP tasks, and any property of the input can be a feature. Consider the task of period disambiguation: deciding if a period is the end of a sentence or part of a word, by classifying each period into one of two classes EOS (end-of-sentence) and not-EOS. We might use features like x 1 below expressing that the current word is lower case (perhaps with a positive weight), or that the current word is in our abbreviations dictionary (""Prof."") (perhaps with a negative weight). A feature can also express a quite complex combination of properties. For example a period following an upper case word is likely to be an EOS, but if the word itself is St. and the previous word is capitalized, then the period is likely part of a shortening of the word street."
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"x_1 = 1 if Case(w_i) = Lower, 0 otherwise
x_2 = 1 if w_i ∈ AcronymDict, 0 otherwise
x_3 = 1 if w_i = ""St."" and Case(w_{i−1}) = Cap, 0 otherwise"
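5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"As an illustration only (the text gives no implementation; the helper name and the acronym_dict argument are ours), these three binary features could be extracted like this:

```python
def period_features(w_i, w_prev, acronym_dict):
    # Binary features x1-x3 for classifying a period as EOS vs. not-EOS.
    x1 = 1 if w_i.islower() else 0                           # Case(w_i) = Lower
    x2 = 1 if w_i in acronym_dict else 0                     # w_i in abbreviation dictionary
    x3 = 1 if w_i == 'St.' and w_prev[:1].isupper() else 0   # 'St.' after a capitalized word
    return [x1, x2, x3]

print(period_features('St.', 'Church', {'Prof.', 'Dr.'}))    # [0, 0, 1]
```
"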
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,Designing features: Features are generally designed by examining the training set with an eye to linguistic intuitions and the linguistic literature on the domain. A careful error analysis on the training set or devset of an early version of a system often provides insights into features.
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"For some tasks it is especially helpful to build complex features that are combinations of more primitive features. We saw such a feature for period disambiguation above, where a period on the word St. was less likely to be the end of the sentence if the previous word was capitalized. For logistic regression and naive Bayes these combination features or feature interactions have to be designed by hand."
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"For many tasks (especially when feature values can reference specific words) we'll need large numbers of features. Often these are created automatically via feature templates, abstract specifications of features. For example a bigram template for period disambiguation might create a feature for every pair of words that occurs before a period in the training set. Thus the feature space is sparse, since we only have to create a feature if that n-gram exists in that position in the training set. The feature is generally created as a hash from the string description. A user description of a feature, like ""bigram(American breakfast)"", is hashed into a unique integer i that becomes the feature number f_i."
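5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"One way to realize this hashing scheme (a sketch under our own assumptions about the hash function and the feature-space size, not a prescription from the text) is:

```python
import hashlib

def feature_id(description, n_features=2**20):
    # Hash a string feature description into a fixed-size feature space.
    # Collisions are possible but rare when n_features is large.
    digest = hashlib.md5(description.encode('utf8')).hexdigest()
    return int(digest, 16) % n_features

i = feature_id('bigram(American breakfast)')  # feature number f_i for this template instance
```
"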
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"In order to avoid the extensive human effort of feature design, recent research in NLP has focused on representation learning: ways to learn features automatically in an unsupervised way from the input. We'll introduce methods for representation learning in Chapter 6 and Chapter 7."
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"Choosing a classifier Logistic regression has a number of advantages over naive Bayes. Naive Bayes has overly strong conditional independence assumptions. Consider two features which are strongly correlated; in fact, imagine that we just add the same feature f_1 twice. Naive Bayes will treat both copies of f_1 as if they were separate, multiplying them both in, overestimating the evidence. By contrast, logistic regression is much more robust to correlated features; if two features f_1 and f_2 are perfectly correlated, logistic regression will simply assign part of the weight to w_1 and part to w_2. Thus when there are many correlated features, logistic regression will assign a more accurate probability than naive Bayes. So logistic regression generally works better on larger documents or datasets and is a common default."
5,Logistic Regression,5.1,Classification: the Sigmoid,5.1.1,Example: Sentiment Classification,"Despite its less accurate probabilities, naive Bayes still often makes the correct classification decision. Moreover, naive Bayes can work extremely well (sometimes even better than logistic regression) on very small datasets (Ng and Jordan, 2002) or short documents (Wang and Manning, 2012). Furthermore, naive Bayes is easy to implement and very fast to train (there's no optimization step). So it's still a reasonable approach to use in some situations."
5,Logistic Regression,5.2,Learning in Logistic Regression,,,"How are the parameters of the model, the weights w and bias b, learned? Logistic regression is an instance of supervised classification in which we know the correct label y (either 0 or 1) for each observation x. What the system produces via Eq. 5.5 is ŷ, the system's estimate of the true y. We want to learn parameters (meaning w and b) that make ŷ for each training observation as close as possible to the true y."
5,Logistic Regression,5.2,Learning in Logistic Regression,,,"This requires two components that we foreshadowed in the introduction to the chapter. The first is a metric for how close the current label (ŷ) is to the true gold label y. Rather than measure similarity, we usually talk about the opposite of this: the distance between the system output and the gold output, and we call this distance the loss function or the cost function. In the next section we'll introduce the loss function that is commonly used for logistic regression and also for neural networks,"
5,Logistic Regression,5.2,Learning in Logistic Regression,,,the cross-entropy loss.
5,Logistic Regression,5.2,Learning in Logistic Regression,,,The second thing we need is an optimization algorithm for iteratively updating the weights so as to minimize this loss function. The standard algorithm for this is gradient descent; we'll introduce the stochastic gradient descent algorithm in the following section.
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"We need a loss function that expresses, for an observation x, how close the classifier output (ŷ = σ(w · x + b)) is to the correct output (y, which is 0 or 1). We'll call this:"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"L(\hat{y}, y) = \text{how much } \hat{y} \text{ differs from the true } y \qquad (5.8)"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"We do this via a loss function that prefers the correct class labels of the training examples to be more likely. This is called conditional maximum likelihood estimation: we choose the parameters w, b that maximize the log probability of the true y labels in the training data given the observations x. The resulting loss function is the negative log likelihood loss, generally called the cross-entropy loss."
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"Let's derive this loss function, applied to a single observation x. We'd like to learn weights that maximize the probability of the correct label p(y|x). Since there are only two discrete outcomes (1 or 0), this is a Bernoulli distribution, and we can express the probability p(y|x) that our classifier produces for one observation as the following (keeping in mind that if y = 1, Eq. 5.9 simplifies to ŷ; if y = 0, Eq. 5.9 simplifies to 1 − ŷ):"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,p(y|x) = \hat{y}^{\,y} (1 - \hat{y})^{1-y} \qquad (5.9)
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"Now we take the log of both sides. This will turn out to be handy mathematically, and doesn't hurt us; whatever values maximize a probability will also maximize the log of the probability:"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"\log p(y|x) = \log \left[ \hat{y}^{\,y} (1 - \hat{y})^{1-y} \right] = y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \qquad (5.10)"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"Eq. 5.10 describes a log likelihood that should be maximized. In order to turn this into a loss function (something that we need to minimize), we'll just flip the sign on Eq. 5.10. The result is the cross-entropy loss L_CE:"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"L_{CE}(\hat{y}, y) = -\log p(y|x) = -\left[ y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \right] \qquad (5.11)"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"Finally, we can plug in the definition of ŷ = σ(w · x + b):"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"L_{CE}(\hat{y}, y) = -\left[ y \log \sigma(w \cdot x + b) + (1 - y) \log \bigl(1 - \sigma(w \cdot x + b)\bigr) \right] \qquad (5.12)"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"Let's see if this loss function does the right thing for our example from Fig. 5.2. We want the loss to be smaller if the model's estimate is close to correct, and bigger if the model is confused. So first let's suppose the correct gold label for the sentiment example in Fig. 5.2 is positive, i.e., y = 1. In this case our model is doing well, since from Eq. 5.7 it indeed gave the example a higher probability of being positive (.70) than negative (.30). If we plug σ(w · x + b) = .70 and y = 1 into Eq. 5.12, the right side of the equation drops out, leading to the following loss (we'll use log to mean natural log when the base is not specified):"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"L_{CE}(\hat{y}, y) = -\left[ y \log \sigma(w \cdot x + b) + (1 - y) \log \bigl(1 - \sigma(w \cdot x + b)\bigr) \right] = -\log \sigma(w \cdot x + b) = -\log(0.70) = 0.36"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"By contrast, let's pretend instead that the example in Fig. 5.2 was actually negative, i.e., y = 0 (perhaps the reviewer went on to say ""But bottom line, the movie is terrible! I beg you not to see it!""). In this case our model is confused and we'd want the loss to be higher. Now if we plug y = 0 and 1 − σ(w · x + b) = .30 from Eq. 5.7 into Eq. 5.12, the left side of the equation drops out:"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"L_{CE}(\hat{y}, y) = -\left[ y \log \sigma(w \cdot x + b) + (1 - y) \log \bigl(1 - \sigma(w \cdot x + b)\bigr) \right] = -\log \bigl(1 - \sigma(w \cdot x + b)\bigr) = -\log(0.30) = 1.2"
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"Sure enough, the loss for the first classifier (0.36) is less than the loss for the second classifier (1.2). Why does minimizing this negative log probability do what we want? A perfect classifier would assign probability 1 to the correct outcome (y = 1 or y = 0) and probability 0 to the incorrect outcome. That means the higher ŷ (the closer it is to 1), the better the classifier; the lower ŷ is (the closer it is to 0), the worse the classifier. The negative log of this probability is a convenient loss metric since it goes from 0 (negative log of 1, no loss) to infinity (negative log of 0, infinite loss). This loss function also ensures that as the probability of the correct answer is maximized, the probability of the incorrect answer is minimized; since the two sum to one, any increase in the probability of the correct answer is coming at the expense of the incorrect answer. It's called the cross-entropy loss, because Eq. 5.10 is also the formula for the cross-entropy between the true probability distribution y and our estimated distribution ŷ."
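5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"These two loss values are easy to reproduce. Here is a small sketch (ours; the function name is not from the text) that evaluates Eq. 5.11 for both scenarios:

```python
import numpy as np

def cross_entropy_loss(y_hat, y):
    # Eq. 5.11: L_CE = -[y log(y_hat) + (1 - y) log(1 - y_hat)], natural log.
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(cross_entropy_loss(0.70, 1))  # ~0.36: gold label is positive, model is right
print(cross_entropy_loss(0.70, 0))  # ~1.20: gold label is negative, model is wrong
```
"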
5,Logistic Regression,5.3,The Cross-Entropy Loss Function,,,"Now we know what we want to minimize; in the next section, we'll see how to find the minimum."
5,Logistic Regression,5.4,Gradient Descent,,,"Our goal with gradient descent is to find the optimal weights: minimize the loss function we've defined for the model. In Eq. 5.13 below, we'll explicitly represent the fact that the loss function L is parameterized by the weights, which in machine learning we refer to generally as θ (in the case of logistic regression θ = (w, b)). So the goal is to find the set of weights which minimizes the loss function, averaged over all examples:"
5,Logistic Regression,5.4,Gradient Descent,,,"\hat{\theta} = \operatorname*{argmin}_{\theta} \frac{1}{m} \sum_{i=1}^{m} L_{CE}\bigl(f(x^{(i)}; \theta), y^{(i)}\bigr) \qquad (5.13)"
5,Logistic Regression,5.4,Gradient Descent,,,"How shall we find the minimum of this (or any) loss function? Gradient descent is a method that finds a minimum of a function by figuring out in which direction (in the space of the parameters θ ) the function's slope is rising the most steeply, and moving in the opposite direction. The intuition is that if you are hiking in a canyon and trying to descend most quickly down to the river at the bottom, you might look around yourself 360 degrees, find the direction where the ground is sloping the steepest, and walk downhill in that direction."
5,Logistic Regression,5.4,Gradient Descent,,,"For logistic regression, this loss function is conveniently convex. A convex function has just one minimum; there are no local minima to get stuck in, so gradient descent starting from any point is guaranteed to find the minimum. (By contrast, the loss for multi-layer neural networks is non-convex, and gradient descent may get stuck in local minima for neural network training and never find the global optimum.) Although the algorithm (and the concept of gradient) are designed for direction vectors, let's first consider a visualization of the case where the parameter of our system is just a single scalar w, shown in Fig. 5.3."
5,Logistic Regression,5.4,Gradient Descent,,,"Given a random initialization of w at some value w^1, and assuming the loss function L happened to have the shape in Fig. 5.3, we need the algorithm to tell us whether at the next iteration we should move left (making w^2 smaller than w^1) or right (making w^2 bigger than w^1) to reach the minimum. (Figure 5.3 illustrates this first step of iteratively finding the minimum: moving w in the reverse direction from the slope of the function. Since the slope at w^1 is negative, we need to move w in a positive direction, to the right. Superscripts are used for learning steps, so w^1 means the initial value of w, w^2 the value at the second step, and so on.)"
5,Logistic Regression,5.4,Gradient Descent,,,"The gradient descent algorithm answers this question by finding the gradient of the loss function at the current point and moving in the opposite direction. The gradient of a function of many variables is a vector pointing in the direction of the greatest increase in a function. The gradient is a multi-variable generalization of the slope, so for a function of one variable like the one in Fig. 5.3, we can informally think of the gradient as the slope. The dotted line in Fig. 5.3 shows the slope of this hypothetical loss function at point w = w^1. You can see that the slope of this dotted line is negative. Thus to find the minimum, gradient descent tells us to go in the opposite direction: moving w in a positive direction. The magnitude of the amount to move in gradient descent is the value of the slope \frac{d}{dw} L(f(x; w), y) weighted by a learning rate η. A higher (faster) learning rate means that we should move w more on each step. The change we make in our parameter is the learning rate times the gradient (or the slope, in our single-variable example):"
5,Logistic Regression,5.4,Gradient Descent,,,"w^{t+1} = w^{t} - \eta \frac{d}{dw} L(f(x; w), y) \qquad (5.14)"
5,Logistic Regression,5.4,Gradient Descent,,,"Now let's extend the intuition from a function of one scalar variable w to many variables, because we don't just want to move left or right, we want to know where in the N-dimensional space (of the N parameters that make up θ ) we should move. The gradient is just such a vector; it expresses the directional components of the sharpest slope along each of those N dimensions. If we're just imagining two weight dimensions (say for one weight w and one bias b), the gradient might be a vector with two orthogonal components, each of which tells us how much the ground slopes in the w dimension and in the b dimension. In an actual logistic regression, the parameter vector w is much longer than 1 or 2, since the input feature vector x can be quite long, and we need a weight w i for each x i . For each dimension/variable w i in w (plus the bias b), the gradient will have a component that tells us the slope with respect to that variable. Essentially we're asking: ""How much would a small change in that variable w i influence the total loss function L?"""
5,Logistic Regression,5.4,Gradient Descent,,,"In each dimension w_i, we express the slope as a partial derivative ∂/∂w_i of the loss function. The gradient is then defined as a vector of these partials. We'll represent ŷ as f(x; θ) to make the dependence on θ more obvious:"
5,Logistic Regression,5.4,Gradient Descent,,,"\nabla_{\theta} L(f(x; \theta), y) = \left[ \frac{\partial}{\partial w_1} L(f(x; \theta), y), \; \frac{\partial}{\partial w_2} L(f(x; \theta), y), \; \ldots, \; \frac{\partial}{\partial w_n} L(f(x; \theta), y), \; \frac{\partial}{\partial b} L(f(x; \theta), y) \right] \qquad (5.15)"
5,Logistic Regression,5.4,Gradient Descent,,,The final equation for updating θ based on the gradient is thus
5,Logistic Regression,5.4,Gradient Descent,,,"\theta^{t+1} = \theta^{t} - \eta \nabla L(f(x; \theta), y) \qquad (5.16)"
5,Logistic Regression,5.4,Gradient Descent,5.4.1,The Gradient for Logistic Regression,"In order to update θ , we need a definition for the gradient ∇L( f (x; θ ), y). Recall that for logistic regression, the cross-entropy loss function is:"
5,Logistic Regression,5.4,Gradient Descent,5.4.1,The Gradient for Logistic Regression,"L_{CE}(\hat{y}, y) = -\left[ y \log \sigma(w \cdot x + b) + (1 - y) \log \bigl(1 - \sigma(w \cdot x + b)\bigr) \right] \qquad (5.17)"
5,Logistic Regression,5.4,Gradient Descent,5.4.1,The Gradient for Logistic Regression,It turns out that the derivative of this function for one observation vector x is Eq. 5.18 (the interested reader can see Section 5.8 for the derivation of this equation):
5,Logistic Regression,5.4,Gradient Descent,5.4.1,The Gradient for Logistic Regression,"\frac{\partial L_{CE}(\hat{y}, y)}{\partial w_j} = \left[ \sigma(w \cdot x + b) - y \right] x_j \qquad (5.18)"
5,Logistic Regression,5.4,Gradient Descent,5.4.1,The Gradient for Logistic Regression,"Note in Eq. 5.18 that the gradient with respect to a single weight w_j represents a very intuitive value: the difference between the true y and our estimated ŷ = σ(w · x + b) for that observation, multiplied by the corresponding input value x_j."
5,Logistic Regression,5.4,Gradient Descent,5.4.2,The Stochastic Gradient Descent Algorithm,"Stochastic gradient descent is an online algorithm that minimizes the loss function by computing its gradient after each training example, and nudging θ in the right direction (the opposite direction of the gradient). (An ""online algorithm"" is one that processes its input example by example, rather than waiting until it sees the entire input.) The algorithm is sketched in Figure 5.5:"
5,Logistic Regression,5.4,Gradient Descent,5.4.2,The Stochastic Gradient Descent Algorithm,"# x is the set of training inputs x^(1), x^(2), ..., x^(m)
# y is the set of training outputs (labels) y^(1), y^(2), ..., y^(m)
θ ← 0
repeat til done                                # see caption
  For each training tuple (x^(i), y^(i)) (in random order)
    1. Optional (for reporting):               # How are we doing on this tuple?
       Compute ŷ^(i) = f(x^(i); θ)             # What is our estimated output ŷ?
       Compute the loss L(ŷ^(i), y^(i))        # How far off is ŷ^(i) from the true output y^(i)?
    2. g ← ∇_θ L(f(x^(i); θ), y^(i))           # How should we move θ to maximize loss?
    3. θ ← θ − η g                             # Go the other way instead
return θ"
5,Logistic Regression,5.4,Gradient Descent,5.4.2,The Stochastic Gradient Descent Algorithm,"Figure 5.5 The stochastic gradient descent algorithm. Step 1 (computing the loss) is used to report how well we are doing on the current tuple. The algorithm can terminate when it converges (or when the gradient norm falls below some small ε), or when progress halts (for example when the loss starts going up on a held-out set)."
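5,Logistic Regression,5.4,Gradient Descent,5.4.2,The Stochastic Gradient Descent Algorithm,"A runnable version of this loop for binary logistic regression might look as follows. This is our own sketch, not code from the text: it assumes X is an m × n NumPy array, Y a length-m array of 0/1 labels, and it uses a fixed learning rate and a fixed number of passes over the data instead of an explicit convergence test.

```python
import random
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic(X, Y, eta=0.1, epochs=100):
    # Stochastic gradient descent (Figure 5.5) using the gradient of Eq. 5.18.
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(epochs):
        order = list(range(m))
        random.shuffle(order)                # visit tuples in random order
        for i in order:
            y_hat = sigmoid(np.dot(w, X[i]) + b)
            g_w = (y_hat - Y[i]) * X[i]      # gradient with respect to the weights
            g_b = (y_hat - Y[i])             # gradient with respect to the bias
            w -= eta * g_w                   # go the other way: against the gradient
            b -= eta * g_b
    return w, b
```
"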
5,Logistic Regression,5.4,Gradient Descent,5.4.2,The Stochastic Gradient Descent Algorithm,"The learning rate η is a hyperparameter that must be adjusted. If it's too high, the learner will take steps that are too large, overshooting the minimum of the loss function. If it's too low, the learner will take steps that are too small, and take too long to get to the minimum. It is common to start with a higher learning rate and then slowly decrease it, so that it is a function of the iteration k of training; the notation η_k can be used to mean the value of the learning rate at iteration k."
5,Logistic Regression,5.4,Gradient Descent,5.4.2,The Stochastic Gradient Descent Algorithm,"We'll discuss hyperparameters in more detail in Chapter 7, but briefly they are a special kind of parameter for any machine learning model. Unlike regular parameters of a model (weights like w and b), which are learned by the algorithm from the training set, hyperparameters are special parameters chosen by the algorithm designer that affect how the algorithm works."
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"Let's walk through a single step of the gradient descent algorithm. We'll use a simplified version of the example in Fig. 5.2 as it sees a single observation x, whose correct value is y = 1 (this is a positive review), and with only two features:"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,x_1 = 3 (count of positive lexicon words)
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,x_2 = 2 (count of negative lexicon words)
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"Let's assume the initial weights and bias in θ^0 are all set to 0, and the initial learning rate η is 0.1:"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"w_1 = w_2 = b = 0, \qquad \eta = 0.1"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"The single update step requires that we compute the gradient, multiplied by the learning rate"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"\theta^{t+1} = \theta^{t} - \eta \nabla_{\theta} L\bigl(f(x^{(i)}; \theta), y^{(i)}\bigr)"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"In our mini example there are three parameters, so the gradient vector has 3 dimensions, for w 1 , w 2 , and b. We can compute the first gradient as follows:"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"\nabla_{w,b} L = \left[ \frac{\partial L_{CE}(\hat{y}, y)}{\partial w_1}, \; \frac{\partial L_{CE}(\hat{y}, y)}{\partial w_2}, \; \frac{\partial L_{CE}(\hat{y}, y)}{\partial b} \right] = \left[ (\sigma(w \cdot x + b) - y)x_1, \; (\sigma(w \cdot x + b) - y)x_2, \; \sigma(w \cdot x + b) - y \right] = \left[ (\sigma(0) - 1)x_1, \; (\sigma(0) - 1)x_2, \; \sigma(0) - 1 \right] = \left[ -0.5x_1, \; -0.5x_2, \; -0.5 \right] = \left[ -1.5, \; -1.0, \; -0.5 \right]"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"Now that we have a gradient, we compute the new parameter vector θ 1 by moving θ 0 in the opposite direction from the gradient:"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"\theta^{1} = \left[ w_1, w_2, b \right] - \eta \left[ -1.5, -1.0, -0.5 \right] = \left[ 0.15, 0.1, 0.05 \right]"
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"So after one step of gradient descent, the weights have shifted to be: w 1 = .15, w 2 = .1, and b = .05."
5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"Note that this observation x happened to be a positive example. We would expect that after seeing more negative examples with high counts of negative words, that the weight w 2 would shift to have a negative value."
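5,Logistic Regression,5.4,Gradient Descent,5.4.3,Working through an Example,"For readers who want to check the arithmetic, the single update step above can be replayed in a few lines of NumPy (our sketch, not code from the text):

```python
import numpy as np

x = np.array([3.0, 2.0])            # x_1, x_2
y = 1.0                             # correct label: positive review
w = np.array([0.0, 0.0]); b = 0.0   # theta^0
eta = 0.1

y_hat = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # sigma(0) = 0.5
grad_w = (y_hat - y) * x                            # [-1.5, -1.0]
grad_b = y_hat - y                                  # -0.5
w, b = w - eta * grad_w, b - eta * grad_b
print(w, b)                                         # w is [0.15, 0.1], b is 0.05
```
"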
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,"Stochastic gradient descent is called stochastic because it chooses a single random example at a time, moving the weights so as to improve performance on that single example. That can result in very choppy movements, so it's common to compute the gradient over batches of training instances rather than a single instance."
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,For example in batch training we compute the gradient over the entire dataset.
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,"By seeing so many examples, batch training offers a superb estimate of which direction to move the weights, at the cost of spending a lot of time processing every single example in the training set to compute this perfect direction."
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,"A compromise is mini-batch training: we train on a group of m examples (perhaps 512, or 1024) that is less than the whole dataset. (If m is the size of the dataset, then we are doing batch gradient descent; if m = 1, we are back to doing stochastic gradient descent.) Mini-batch training also has the advantage of computational efficiency. The mini-batches can easily be vectorized, choosing the size of the mini-batch based on the computational resources. This allows us to process all the examples in one mini-batch in parallel and then accumulate the loss, something that's not possible with individual or batch training. We just need to define mini-batch versions of the cross-entropy loss function we defined in Section 5.3 and the gradient in Section 5.4.1. Let's extend the cross-entropy loss for one example from Eq. 5.11 to mini-batches of size m. We'll continue to use the notation that x^(i) and y^(i) mean the ith training features and training label, respectively. We make the assumption that the training examples are independent:"
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,"\log p(\text{training labels}) = \log \prod_{i=1}^{m} p\bigl(y^{(i)}|x^{(i)}\bigr) = \sum_{i=1}^{m} \log p\bigl(y^{(i)}|x^{(i)}\bigr) = -\sum_{i=1}^{m} L_{CE}\bigl(\hat{y}^{(i)}, y^{(i)}\bigr) \qquad (5.19)"
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,Now the cost function for the mini-batch of m examples is the average loss for each example:
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,"\text{Cost}(\hat{y}, y) = \frac{1}{m} \sum_{i=1}^{m} L_{CE}\bigl(\hat{y}^{(i)}, y^{(i)}\bigr) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log \sigma\bigl(w \cdot x^{(i)} + b\bigr) + \bigl(1 - y^{(i)}\bigr) \log \Bigl(1 - \sigma\bigl(w \cdot x^{(i)} + b\bigr)\Bigr) \right] \qquad (5.20)"
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,The mini-batch gradient is the average of the individual gradients from Eq. 5.18:
5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,"\frac{\partial \text{Cost}(\hat{y}, y)}{\partial w_j} = \frac{1}{m} \sum_{i=1}^{m} \left[ \sigma\bigl(w \cdot x^{(i)} + b\bigr) - y^{(i)} \right] x_j^{(i)} \qquad (5.21)"
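5,Logistic Regression,5.4,Gradient Descent,5.4.4,Mini-batch Training,"Because Eq. 5.20 and Eq. 5.21 are averages over the batch, they vectorize naturally. A minimal sketch (ours; it assumes X is an m × n array and Y a length-m array of 0/1 labels):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch_cost_and_grads(w, b, X, Y):
    # Vectorized mini-batch cost (Eq. 5.20) and gradients (Eq. 5.21).
    m = X.shape[0]
    y_hat = sigmoid(X @ w + b)                   # predictions for the whole batch
    cost = -np.mean(Y * np.log(y_hat) + (1 - Y) * np.log(1 - y_hat))
    grad_w = (X.T @ (y_hat - Y)) / m             # one component per weight
    grad_b = np.mean(y_hat - Y)
    return cost, grad_w, grad_b
```
"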
5,Logistic Regression,5.5,Regularization,,,Numquam ponenda est pluralitas sine necessitate 'Plurality should never be proposed unless needed'
5,Logistic Regression,5.5,Regularization,,,"There is a problem with learning weights that make the model perfectly match the training data. If a feature is perfectly predictive of the outcome because it happens to only occur in one class, it will be assigned a very high weight. The weights for features will attempt to perfectly fit details of the training set, in fact too perfectly, modeling noisy factors that just accidentally correlate with the class. This problem is called overfitting. A good model should be able to generalize well from the training data to the unseen test set, but a model that overfits will have poor generalization."
5,Logistic Regression,5.5,Regularization,,,"To avoid overfitting, a new regularization term R(θ) is added to the objective function in Eq. 5.13, resulting in the following objective for a batch of m examples (slightly rewritten from Eq. 5.13 to be maximizing log probability rather than minimizing loss, and removing the 1/m term, which doesn't affect the argmax):"
5,Logistic Regression,5.5,Regularization,,,"\hat{\theta} = \operatorname*{argmax}_{\theta} \sum_{i=1}^{m} \log P\bigl(y^{(i)}|x^{(i)}\bigr) - \alpha R(\theta) \qquad (5.22)"
5,Logistic Regression,5.5,Regularization,,,"The new regularization term R(θ ) is used to penalize large weights. Thus a setting of the weights that matches the training data perfectly-but uses many weights with high values to do so-will be penalized more than a setting that matches the data a little less well, but does so using smaller weights. There are two common ways to compute this regularization term R(θ ). L2 regularization is a quadratic function of"
5,Logistic Regression,5.5,Regularization,,,"the weight values, named because it uses the (square of the) L2 norm of the weight values. The L2 norm, ||θ || 2 , is the same as the Euclidean distance of the vector θ from the origin. If θ consists of n weights, then:"
5,Logistic Regression,5.5,Regularization,,,"R(\theta) = ||\theta||_2^2 = \sum_{j=1}^{n} \theta_j^2 \qquad (5.23)"
5,Logistic Regression,5.5,Regularization,,,The L2 regularized objective function becomes:
5,Logistic Regression,5.5,Regularization,,,"\hat{\theta} = \operatorname*{argmax}_{\theta} \left[ \sum_{i=1}^{m} \log P\bigl(y^{(i)}|x^{(i)}\bigr) \right] - \alpha \sum_{j=1}^{n} \theta_j^2 \qquad (5.24)"
5,Logistic Regression,5.5,Regularization,,,"L1 regularization is a linear function of the weight values, named after the L1 norm ||W||_1, the sum of the absolute values of the weights, or Manhattan distance (the Manhattan distance is the distance you'd have to walk between two points in a city with a street grid like New York):"
5,Logistic Regression,5.5,Regularization,,,"R(\theta) = ||\theta||_1 = \sum_{i=1}^{n} |\theta_i| \qquad (5.25)"
5,Logistic Regression,5.5,Regularization,,,The L1 regularized objective function becomes:
5,Logistic Regression,5.5,Regularization,,,"\hat{\theta} = \operatorname*{argmax}_{\theta} \left[ \sum_{i=1}^{m} \log P\bigl(y^{(i)}|x^{(i)}\bigr) \right] - \alpha \sum_{j=1}^{n} |\theta_j| \qquad (5.26)"
5,Logistic Regression,5.5,Regularization,,,"These kinds of regularization come from statistics, where L1 regularization is called lasso regression (Tibshirani, 1996) and L2 regularization is called ridge regression, and both are commonly used in language processing. L2 regularization is easier to optimize because of its simple derivative (the derivative of θ² is just 2θ), while L1 regularization is more complex (the derivative of |θ| is non-continuous at zero)."
5,Logistic Regression,5.5,Regularization,,,"But where L2 prefers weight vectors with many small weights, L1 prefers sparse solutions with some larger weights but many more weights set to zero. Thus L1 regularization leads to much sparser weight vectors, that is, far fewer features."
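5,Logistic Regression,5.5,Regularization,,,"In gradient-based training, regularization simply adds an extra term to the gradient of the loss. The following sketch (ours, written from the minimization point of view of Eq. 5.24 and Eq. 5.26) shows that extra term for each penalty; by convention the bias b is usually left unregularized:

```python
import numpy as np

def add_regularization_grad(grad_w, w, alpha, kind='l2'):
    # Add the gradient of alpha * R(theta) to the loss gradient grad_w.
    if kind == 'l2':
        return grad_w + 2 * alpha * w          # d/dw_j of alpha * sum_j w_j^2
    if kind == 'l1':
        return grad_w + alpha * np.sign(w)     # subgradient of alpha * sum_j |w_j|
    return grad_w
```
"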
5,Logistic Regression,5.5,Regularization,,,"Both L1 and L2 regularization have Bayesian interpretations as constraints on the prior of how weights should look. L1 regularization can be viewed as a Laplace prior on the weights. L2 regularization corresponds to assuming that weights are distributed according to a Gaussian distribution with mean µ = 0. In a Gaussian or normal distribution, the further away a value is from the mean, the lower its probability (scaled by the variance σ). By using a Gaussian prior on the weights, we are saying that weights prefer to have the value 0. A Gaussian for a weight θ_j is:"
5,Logistic Regression,5.5,Regularization,,,"\frac{1}{\sqrt{2\pi\sigma_j^2}} \exp\left( -\frac{(\theta_j - \mu_j)^2}{2\sigma_j^2} \right) \qquad (5.27)"
5,Logistic Regression,5.5,Regularization,,,"If we multiply each weight by a Gaussian prior on the weight, we are thus maximizing the following constraint:"
5,Logistic Regression,5.5,Regularization,,,"\hat{\theta} = \operatorname*{argmax}_{\theta} \prod_{i=1}^{M} P\bigl(y^{(i)}|x^{(i)}\bigr) \times \prod_{j=1}^{n} \frac{1}{\sqrt{2\pi\sigma_j^2}} \exp\left( -\frac{(\theta_j - \mu_j)^2}{2\sigma_j^2} \right) \qquad (5.28)"
5,Logistic Regression,5.5,Regularization,,,"which in log space, with µ = 0, and assuming 2σ² = 1, corresponds to"
5,Logistic Regression,5.5,Regularization,,,"\hat{\theta} = \operatorname*{argmax}_{\theta} \sum_{i=1}^{m} \log P\bigl(y^{(i)}|x^{(i)}\bigr) - \alpha \sum_{j=1}^{n} \theta_j^2 \qquad (5.29)"
5,Logistic Regression,5.5,Regularization,,,which is in the same form as Eq. 5.24.
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"Sometimes we need more than two classes. Perhaps we might want to do 3-way sentiment classification (positive, negative, or neutral). Or we could be assigning some of the labels we will introduce in Chapter 8, like the part of speech of a word (choosing from 10, 30, or even 50 different parts of speech), or the named entity type of a phrase (choosing from tags like person, location, organization). In such cases we use multinomial logistic regression, also called softmax regression (or, historically, the maxent classifier). In multinomial logistic regression the target y is a variable that ranges over more than two classes; we want to know the probability of y being in each potential class c ∈ C, p(y = c|x)."
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"The multinomial logistic classifier uses a generalization of the sigmoid, called the softmax function, to compute the probability p(y = c|x). The softmax function takes a vector z = [z_1, z_2, ..., z_k] of k arbitrary values and maps them to a probability distribution, with each value in the range (0,1), and all the values summing to 1. Like the sigmoid, it is an exponential function."
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"For a vector z of dimensionality k, the softmax is defined as:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"\text{softmax}(z_i) = \frac{\exp(z_i)}{\sum_{j=1}^{k} \exp(z_j)} \qquad 1 \le i \le k \qquad (5.30)"
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"The softmax of an input vector z = [z 1 , z 2 , ..., z k ] is thus a vector itself:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"\text{softmax}(z) = \left[ \frac{\exp(z_1)}{\sum_{i=1}^{k} \exp(z_i)}, \; \frac{\exp(z_2)}{\sum_{i=1}^{k} \exp(z_i)}, \; \ldots, \; \frac{\exp(z_k)}{\sum_{i=1}^{k} \exp(z_i)} \right] \qquad (5.31)"
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"The denominator \sum_{i=1}^{k} \exp(z_i) is used to normalize all the values into probabilities. Thus for example given a vector:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"z = [0.6, 1.1, −1.5, 1.2, 3.2, −1.1]"
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"the resulting (rounded) softmax(z) is [0.055, 0.090, 0.006, 0.099, 0.74, 0.010]. Again like the sigmoid, the input to the softmax will be the dot product between a weight vector w and an input vector x (plus a bias). But now we'll need separate weight vectors (and bias) for each of the K classes:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"p(y = c|x) = \frac{\exp(w_c \cdot x + b_c)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)} \qquad (5.32)"
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"Like the sigmoid, the softmax has the property of squashing values toward 0 or 1. Thus if one of the inputs is larger than the others, it will tend to push its probability toward 1, and suppress the probabilities of the smaller inputs."
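5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"A small sketch of Eq. 5.30 in NumPy (ours, not from the text; subtracting the maximum before exponentiating is a standard numerical-stability trick that leaves the result unchanged), applied to the example vector above:

```python
import numpy as np

def softmax(z):
    # Eq. 5.30, computed in a numerically stable way.
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

print(softmax([0.6, 1.1, -1.5, 1.2, 3.2, -1.1]))
# approximately [0.055, 0.090, 0.007, 0.100, 0.738, 0.010], matching the rounded values above
```
"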
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.1,Features in Multinomial Logistic Regression,"Features in multinomial logistic regression function similarly to those in binary logistic regression, with one difference: we'll need separate weight vectors (and biases) for each of the K classes. Recall our binary exclamation point feature x_5 from page 80:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.1,Features in Multinomial Logistic Regression,"x_5 = 1 if ""!"" ∈ doc, 0 otherwise"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.1,Features in Multinomial Logistic Regression,"In binary classification a positive weight w_5 on a feature influences the classifier toward y = 1 (positive sentiment) and a negative weight influences it toward y = 0 (negative sentiment), with the absolute value indicating how important the feature is. For multinomial logistic regression, by contrast, with separate weights for each class, a feature can be evidence for or against each individual class."
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.1,Features in Multinomial Logistic Regression,"In 3-way multiclass sentiment classification, for example, we must assign each document one of the 3 classes +, −, or 0 (neutral). Now a feature related to exclamation marks might have a negative weight for 0 documents, and a positive weight for + or − documents:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.1,Features in Multinomial Logistic Regression,"Feature   Definition                        w_{5,+}   w_{5,−}   w_{5,0}
f_5(x)    1 if ""!"" ∈ doc, 0 otherwise      3.5       3.1       −5.3"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.1,Features in Multinomial Logistic Regression,"Because these feature weights are dependent both on the input text and the output class, we sometimes make this dependence explicit and represent the features themselves as f(x, y): a function of both the input and the class. Using such a notation, f_5(x) above could be represented as three features f_5(x, +), f_5(x, −), and f_5(x, 0), each of which has a single weight."
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"The loss function for multinomial logistic regression generalizes the loss function for binary logistic regression from 2 to K classes. Recall that the cross-entropy loss for binary logistic regression (repeated from Eq. 5.11) is:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"L_{CE}(\hat{y}, y) = -\log p(y|x) = -\left[ y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \right] \qquad (5.33)"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"The loss function for multinomial logistic regression generalizes the two terms in Eq. 5.33 (one that is non-zero when y = 1 and one that is non-zero when y = 0) to K terms. The loss function for a single example x is thus the sum of the logs of the K output classes, each weighted by y_k, the probability of the true class:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"L_{CE}(\hat{y}, y) = -\sum_{k=1}^{K} y_k \log \hat{y}_k = -\sum_{k=1}^{K} y_k \log p(y = k|x) \qquad (5.34)"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"Because only one class (let's call it i) is the correct one, the vector y takes the value 1 only for this value of k, i.e., has y_i = 1 and y_j = 0 ∀ j ≠ i. A vector like this, with one value = 1 and the rest 0, is called a one-hot vector. The terms in the sum in Eq. 5.34 will thus be 0 except for the term corresponding to the true class, i.e.:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"L_{CE}(\hat{y}, y) = -\sum_{k=1}^{K} \mathbb{1}\{y = k\} \log p(y = k|x) = -\sum_{k=1}^{K} \mathbb{1}\{y = k\} \log \frac{\exp(w_k \cdot x + b_k)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)} \qquad (5.35)"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"Here we'll use the notation w k to mean the vector of weights from each input x i to the output node k, and the indicator function 1{}, which evaluates to 1 if the condition in the brackets is true and to 0 otherwise. Hence the cross-entropy loss is simply the log of the output probability corresponding to the correct class, and we therefore also call this the negative log likelihood loss:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"L_{CE}(\hat{y}, y) = -\log \hat{y}_k = -\log \frac{\exp(w_k \cdot x + b_k)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)} \quad \text{(where } k \text{ is the correct class)} \qquad (5.36)"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"The gradient for a single example turns out to be very similar to the gradient for binary logistic regression, although we don't show the derivation here. It is the difference between the value for the true class k (which is 1) and the probability the classifier outputs for class k, weighted by the value of the input x_i corresponding to the ith element of the weight vector for class k, w_{k,i}:"
5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"\frac{\partial L_{CE}}{\partial w_{k,i}} = -\bigl(\mathbb{1}\{y = k\} - p(y = k|x)\bigr) x_i = -\left( \mathbb{1}\{y = k\} - \frac{\exp(w_k \cdot x + b_k)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)} \right) x_i \qquad (5.37)"
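5,Logistic Regression,5.6,Multinomial Logistic Regression,5.6.2,Learning in Multinomial Logistic Regression,"Putting Eq. 5.36 and Eq. 5.37 together for one example gives a compact implementation. This is our own sketch (the names W, b, x, and k are not from the text): W is a K × n weight matrix, b a length-K bias vector, x a feature vector, and k the index of the correct class.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multinomial_loss_and_grad(W, b, x, k):
    probs = softmax(W @ x + b)            # p(y = c | x) for every class c (Eq. 5.32)
    loss = -np.log(probs[k])              # negative log likelihood loss (Eq. 5.36)
    y = np.zeros_like(probs); y[k] = 1.0  # one-hot vector for the true class
    grad_W = np.outer(probs - y, x)       # entry [c, i] is dL/dw_{c,i} (Eq. 5.37)
    grad_b = probs - y
    return loss, grad_W, grad_b
```
"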
5,Logistic Regression,5.7,Interpreting Models,,,"Often we want to know more than just the correct classification of an observation. We want to know why the classifier made the decision it did. That is, we want our decision to be interpretable. Interpretability can be hard to define strictly, but the core idea is that as humans we should know why our algorithms reach the conclusions they do. Because the features to logistic regression are often human-designed, one way to understand a classifier's decision is to understand the role each feature plays in the decision. Logistic regression can be combined with statistical tests (the likelihood ratio test, or the Wald test); investigating whether a particular feature is significant by one of these tests, or inspecting its magnitude (how large is the weight w associated with the feature?), can help us interpret why the classifier made the decision it did. This is enormously important for building transparent models.

Furthermore, in addition to its use as a classifier, logistic regression in NLP and many other fields is widely used as an analytic tool for testing hypotheses about the effect of various explanatory variables (features). In text classification, perhaps we want to know if logically negative words (no, not, never) are more likely to be associated with negative sentiment, or if negative reviews of movies are more likely to discuss the cinematography. However, in doing so it's necessary to control for potential confounds: other factors that might influence sentiment (the movie genre, the year it was made, perhaps the length of the review in words). Or we might be studying the relationship between NLP-extracted linguistic features and non-linguistic outcomes (hospital readmissions, political outcomes, or product sales), but need to control for confounds (the age of the patient, the county of voting, the brand of the product). In such cases, logistic regression allows us to test whether some feature is associated with some outcome above and beyond the effect of other features."
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"In this section we give the derivation of the gradient of the cross-entropy loss function L CE for logistic regression. Let's start with some quick calculus refreshers. First, the derivative of ln(x):"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"\frac{d}{dx} \ln(x) = \frac{1}{x} \qquad (5.38)"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"Second, the (very elegant) derivative of the sigmoid:"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"\frac{d\sigma(z)}{dz} = \sigma(z)\bigl(1 - \sigma(z)\bigr) \qquad (5.39)"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"Finally, the chain rule of derivatives: for a composite function f(x) = u(v(x)),"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"\frac{df}{dx} = \frac{du}{dv} \cdot \frac{dv}{dx} \qquad (5.40)"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"First, we want to know the derivative of the loss function with respect to a single weight w j (we'll need to compute it for each weight, and for the bias):"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"\frac{\partial L_{CE}}{\partial w_j} = \frac{\partial}{\partial w_j} \Bigl( -\bigl[ y \log \sigma(w \cdot x + b) + (1 - y) \log \bigl(1 - \sigma(w \cdot x + b)\bigr) \bigr] \Bigr) = -\left[ \frac{\partial}{\partial w_j} y \log \sigma(w \cdot x + b) + \frac{\partial}{\partial w_j} (1 - y) \log \bigl(1 - \sigma(w \cdot x + b)\bigr) \right] \qquad (5.41)"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"Next, using the chain rule, and relying on the derivative of log:"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"\frac{\partial L_{CE}}{\partial w_j} = -\frac{y}{\sigma(w \cdot x + b)} \frac{\partial}{\partial w_j} \sigma(w \cdot x + b) - \frac{1 - y}{1 - \sigma(w \cdot x + b)} \frac{\partial}{\partial w_j} \bigl(1 - \sigma(w \cdot x + b)\bigr) \qquad (5.42)"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,Rearranging terms:
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"\frac{\partial L_{CE}}{\partial w_j} = -\left[ \frac{y}{\sigma(w \cdot x + b)} - \frac{1 - y}{1 - \sigma(w \cdot x + b)} \right] \frac{\partial}{\partial w_j} \sigma(w \cdot x + b) \qquad (5.43)"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"And now plugging in the derivative of the sigmoid, and using the chain rule one more time, we end up with Eq. 5.44:"
5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"\frac{\partial L_{CE}}{\partial w_j} = -\frac{y - \sigma(w \cdot x + b)}{\sigma(w \cdot x + b)\bigl[1 - \sigma(w \cdot x + b)\bigr]} \, \sigma(w \cdot x + b)\bigl[1 - \sigma(w \cdot x + b)\bigr] \, \frac{\partial (w \cdot x + b)}{\partial w_j} = -\frac{y - \sigma(w \cdot x + b)}{\sigma(w \cdot x + b)\bigl[1 - \sigma(w \cdot x + b)\bigr]} \, \sigma(w \cdot x + b)\bigl[1 - \sigma(w \cdot x + b)\bigr] \, x_j = -\bigl[ y - \sigma(w \cdot x + b) \bigr] x_j = \bigl[ \sigma(w \cdot x + b) - y \bigr] x_j \qquad (5.44)"
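5,Logistic Regression,5.8,Advanced: Deriving the Gradient Equation,,,"A quick numerical sanity check of Eq. 5.44 (our sketch; the particular w, b, x, y values are arbitrary) compares the analytic gradient against centered finite differences of the loss:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    y_hat = sigmoid(np.dot(w, x) + b)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

w = np.array([0.3, -0.2]); b = 0.1
x = np.array([2.0, 1.0]);  y = 1.0
eps = 1e-6

analytic = (sigmoid(np.dot(w, x) + b) - y) * x   # Eq. 5.44
numeric = np.array([
    (loss(w + eps * e, b, x, y) - loss(w - eps * e, b, x, y)) / (2 * eps)
    for e in np.eye(2)
])
print(np.allclose(analytic, numeric))            # True: the two gradients agree
```
"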
5,Logistic Regression,5.9,Summary,,,This chapter introduced the logistic regression model of classification.
5,Logistic Regression,5.9,Summary,,,"• Logistic regression is a supervised machine learning classifier that extracts real-valued features from the input, multiplies each by a weight, sums them, and passes the sum through a sigmoid function to generate a probability. A threshold is used to make a decision.
• Logistic regression can be used with two classes (e.g., positive and negative sentiment) or with multiple classes (multinomial logistic regression, for example for n-ary text classification, part-of-speech labeling, etc.).
• Multinomial logistic regression uses the softmax function to compute probabilities.
• The weights (vector w and bias b) are learned from a labeled training set via a loss function, such as the cross-entropy loss, that must be minimized.
• Minimizing this loss function is a convex optimization problem, and iterative algorithms like gradient descent are used to find the optimal weights.
• Regularization is used to avoid overfitting."
5,Logistic Regression,5.9,Summary,,,"• Logistic regression is also one of the most useful analytic tools, because of its ability to transparently study the importance of individual features."
5,Logistic Regression,5.10,Bibliographical and Historical Notes,,,"Logistic regression was developed in the field of statistics, where it was used for the analysis of binary data by the 1960s, and was particularly common in medicine (Cox, 1969) . Starting in the late 1970s it became widely used in linguistics as one of the formal foundations of the study of linguistic variation (Sankoff and Labov, 1979) . Nonetheless, logistic regression didn't become common in natural language processing until the 1990s, when it seems to have appeared simultaneously from two directions. The first source was the neighboring fields of information retrieval and speech processing, both of which had made use of regression, and both of which lent many other statistical techniques to NLP. Indeed a very early use of logistic regression for document routing was one of the first NLP applications to use (LSI) embeddings as word representations (Schütze et al., 1995) ."
5,Logistic Regression,5.10,Bibliographical and Historical Notes,,,"At the same time in the early 1990s logistic regression was developed and applied to NLP at IBM Research under the name maximum entropy modeling or maxent (Berger et al., 1996), seemingly independent of the statistical literature. Under that name it was applied to language modeling (Rosenfeld, 1996), part-of-speech tagging (Ratnaparkhi, 1996), parsing (Ratnaparkhi, 1997), coreference resolution (Kehler, 1997b), and text classification (Nigam et al., 1999). More on classification can be found in machine learning textbooks (Hastie et al. 2001, Witten and Frank 2005, Bishop 2006, Murphy 2012)."
6,Vector Semantics and Embeddings,,,,,"Nets are for fish; Once you get the fish, you can forget the net."
6,Vector Semantics and Embeddings,,,,,"Words are for meaning; Once you get the meaning, you can forget the words"
6,Vector Semantics and Embeddings,,,,,"庄子(Zhuangzi), Chapter 26"
6,Vector Semantics and Embeddings,,,,,"The asphalt that Los Angeles is famous for occurs mainly on its freeways. But in the middle of the city is another patch of asphalt, the La Brea tar pits, and this asphalt preserves millions of fossil bones from the last of the Ice Ages of the Pleistocene Epoch. One of these fossils is the Smilodon, or saber-toothed tiger, instantly recognizable by its long canines. Five million years ago or so, a completely different sabre-tooth tiger called Thylacosmilus lived in Argentina and other parts of South America. Thylacosmilus was a marsupial whereas Smilodon was a placental mammal, but Thylacosmilus had the same long upper canines and, like Smilodon, had a protective bone flange on the lower jaw. The similarity of these two mammals is one of many examples of parallel or convergent evolution, in which particular contexts or environments lead to the evolution of very similar structures in different species (Gould, 1980). The role of context is also important in the similarity of a less biological kind of organism: the word. Words that occur in similar contexts tend to have similar meanings. This link between similarity in how words are distributed and similarity in what they mean is called the distributional hypothesis. The hypothesis was first formulated in the 1950s by linguists like Joos (1950), Harris (1954), and Firth (1957), who noticed that words which are synonyms (like oculist and eye-doctor) tended to occur in the same environment (e.g., near words like eye or examined), with the amount of meaning difference between two words ""corresponding roughly to the amount of difference in their environments"" (Harris, 1954, 157). In this chapter we introduce vector semantics, which instantiates this linguistic hypothesis by learning representations of the meaning of words, called embeddings, directly from their distributions in texts. These representations are used in every natural language processing application that makes use of meaning, and the static embeddings we introduce here underlie the more powerful dynamic or contextualized embeddings like BERT that we will see in Chapter 11."
6,Vector Semantics and Embeddings,,,,,"These word representations are also the first example in this book of representation learning, automatically learning useful representations of the input text."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Finding such self-supervised ways to learn representations of the input, instead of creating representations by hand via feature engineering, is an important focus of NLP research (Bengio et al., 2013)."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Let's begin by introducing some basic principles of word meaning. How should we represent the meaning of a word? In the n-gram models of Chapter 3, and in classical NLP applications, our only representation of a word is as a string of letters, or an index in a vocabulary list. This representation is not that different from a tradition in philosophy, perhaps you've seen it in introductory logic classes, in which the meaning of words is represented by just spelling the word with small capital letters; representing the meaning of ""dog"" as DOG, and ""cat"" as CAT."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Representing the meaning of a word by capitalizing it is a pretty unsatisfactory model. You might have seen a joke due originally to semanticist Barbara Partee (Carlson, 1977): Q: What's the meaning of life? A: LIFE'. Surely we can do better than this! After all, we'll want a model of word meaning to do all sorts of things for us. It should tell us that some words have similar meanings (cat is similar to dog), others are antonyms (cold is the opposite of hot), some have positive connotations (happy) while others have negative connotations (sad). It should represent the fact that the meanings of buy, sell, and pay offer differing perspectives on the same underlying purchasing event (If I buy something from you, you've probably sold it to me, and I likely paid you). More generally, a model of word meaning should allow us to draw inferences to address meaning-related tasks like question-answering or dialogue."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"In this section we summarize some of these desiderata, drawing on results in the linguistic study of word meaning, which is called lexical semantics; we'll return to lexical semantics and expand on this list in Chapter 18 and Chapter 10."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,Lemmas and Senses Let's start by looking at how one word (we'll choose mouse) might be defined in a dictionary (simplified from the online dictionary WordNet): mouse (N) 1. any of numerous small rodents... 2. a hand-operated device that controls a cursor...
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Here the form mouse is the lemma, also called the citation form. The form mouse would also be the lemma for the word mice; dictionaries don't have separate definitions for inflected forms like mice. Similarly sing is the lemma for sing, sang, sung. In many languages the infinitive form is used as the lemma for the verb, so Spanish dormir ""to sleep"" is the lemma for duermes ""you sleep"". The specific forms sung or carpets or sing or duermes are called wordforms."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"As the example above shows, each lemma can have multiple meanings; the lemma mouse can refer to the rodent or the cursor control device. We call each of these aspects of the meaning of mouse a word sense. The fact that lemmas can be polysemous (have multiple senses) can make interpretation difficult (is someone who types ""mouse info"" into a search engine looking for a pet or a tool?). Chapter 18 will discuss the problem of polysemy, and introduce word sense disambiguation, the task of determining which sense of a word is being used in a particular context."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Synonymy One important component of word meaning is the relationship between word senses. For example when one word has a sense whose meaning is identical to a sense of another word, or nearly identical, we say the two senses of those two words are synonyms. Synonyms include such pairs as couch/sofa, vomit/throw up, filbert/hazelnut, and car/automobile. A more formal definition of synonymy (between words rather than senses) is that two words are synonymous if they are substitutable for one another in any sentence without changing the truth conditions of the sentence, the situations in which the sentence would be true. We often say in this case that the two words have the same propositional meaning."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"While substitutions between some pairs of words like car / automobile or water / H 2 O are truth preserving, the words are still not identical in meaning. Indeed, probably no two words are absolutely identical in meaning. One of the fundamental tenets of semantics, called the principle of contrast (Girard 1718, Bréal 1897, Clark principle of contrast 1987), states that a difference in linguistic form is always associated with some difference in meaning. For example, the word H 2 O is used in scientific contexts and would be inappropriate in a hiking guide-water would be more appropriate-and this genre difference is part of the meaning of the word. In practice, the word synonym is therefore used to describe a relationship of approximate or rough synonymy."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Word Similarity While words don't have many synonyms, most words do have lots of similar words. Cat is not a synonym of dog, but cats and dogs are certainly similar words. In moving from synonymy to similarity, it will be useful to shift from talking about relations between word senses (like synonymy) to relations between words (like similarity). Dealing with words avoids having to commit to a particular representation of word senses, which will turn out to simplify our task. The notion of word similarity is very useful in larger semantic tasks. Knowing similarity how similar two words are can help in computing how similar the meaning of two phrases or sentences are, a very important component of tasks like question answering, paraphrasing, and summarization. One way of getting values for word similarity is to ask humans to judge how similar one word is to another. A number of datasets have resulted from such experiments. For example the SimLex-999 dataset (Hill et al., 2015) gives values on a scale from 0 to 10, like the examples below, which range from near-synonyms (vanish, disappear) to pairs that scarcely seem to have anything in common (hole, agreement): Consider the meanings of the words coffee and cup. Coffee is not similar to cup; they share practically no features (coffee is a plant or a beverage, while a cup is a manufactured object with a particular shape). But coffee and cup are clearly related; they are associated by co-participating in an everyday event (the event of drinking coffee out of a cup). Similarly scalpel and surgeon are not similar but are related eventively (a surgeon tends to make use of a scalpel)."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"One common kind of relatedness between words is if they belong to the same semantic field. A semantic field is a set of words which cover a particular semantic semantic field domain and bear structured relations with each other. For example, words might be related by being in the semantic field of hospitals (surgeon, scalpel, nurse, anesthetic, hospital), restaurants (waiter, menu, plate, food, chef), or houses (door, roof, kitchen, family, bed). Semantic fields are also related to topic models, like Latent topic models Dirichlet Allocation, LDA, which apply unsupervised learning on large sets of texts to induce sets of associated words from text. Semantic fields and topic models are very useful tools for discovering topical structure in documents."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"In Chapter 18 we'll introduce more relations between senses like hypernymy or IS-A, antonymy (opposites) and meronymy (part-whole relations)."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Semantic Frames and Roles Closely related to semantic fields is the idea of a semantic frame. A semantic frame is a set of words that denote perspectives or semantic frame participants in a particular type of event. A commercial transaction, for example, is a kind of event in which one entity trades money to another entity in return for some good or service, after which the good changes hands or perhaps the service is performed. This event can be encoded lexically by using verbs like buy (the event from the perspective of the buyer), sell (from the perspective of the seller), pay (focusing on the monetary aspect), or nouns like buyer. Frames have semantic roles (like buyer, seller, goods, money), and words in a sentence can take on these roles."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Knowing that buy and sell have this relation makes it possible for a system to know that a sentence like Sam bought the book from Ling could be paraphrased as Ling sold the book to Sam, and that Sam has the role of the buyer in the frame and Ling the seller. Being able to recognize such paraphrases is important for question answering, and can help in shifting perspective for machine translation."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Connotation Finally, words have affective meanings or connotations. The word connotations connotation has different meanings in different fields, but here we use it to mean the aspects of a word's meaning that are related to a writer or reader's emotions, sentiment, opinions, or evaluations. For example some words have positive connotations (happy) while others have negative connotations (sad). Even words whose meanings are similar in other ways can vary in connotation; consider the difference in connotations between fake, knockoff, forgery, on the one hand, and copy, replica, reproduction on the other, or innocent (positive connotation) and naive (negative connotation). Some words describe positive evaluation (great, love) and others negative evaluation (terrible, hate). Positive or negative evaluation language is called sentiment, as we saw in Chapter 4, and word sentiment plays a role in important sentiment tasks like sentiment analysis, stance detection, and applications of NLP to the language of politics and consumer reviews."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Early work on affective meaning (Osgood et al., 1957) found that words varied along three important dimensions of affective meaning:"
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"valence: the pleasantness of the stimulus arousal: the intensity of emotion provoked by the stimulus dominance: the degree of control exerted by the stimulus Thus words like happy or satisfied are high on valence, while unhappy or annoyed are low on valence. Excited is high on arousal, while calm is low on arousal."
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Controlling is high on dominance, while awed or influenced are low on dominance. Each word is thus represented by three numbers, corresponding to its value on each of the three dimensions: (1957) noticed that in using these 3 numbers to represent the meaning of a word, the model was representing each word as a point in a threedimensional space, a vector whose three dimensions corresponded to the word's rating on the three scales. This revolutionary idea that word meaning could be represented as a point in space (e.g., that part of the meaning of heartbreak can be represented as the point [2.45, 5.65, 3.58]) was the first expression of the vector semantics models that we introduce next."
6,Vector Semantics and Embeddings,6.2,Vector Semantics,,,"Vectors semantics is the standard way to represent word meaning in NLP, helping vector semantics us model many of the aspects of word meaning we saw in the previous section. The roots of the model lie in the 1950s when two big ideas converged: Osgood's 1957 idea mentioned above to use a point in three-dimensional space to represent the connotation of a word, and the proposal by linguists like Joos (1950), Harris (1954), and Firth (1957) to define the meaning of a word by its distribution in language use, meaning its neighboring words or grammatical environments. Their idea was that two words that occur in very similar distributions (whose neighboring words are similar) have similar meanings."
6,Vector Semantics and Embeddings,6.2,Vector Semantics,,,"For example, suppose you didn't know the meaning of the word ongchoi (a recent borrowing from Cantonese) but you see it in the following contexts: The fact that ongchoi occurs with words like rice and garlic and delicious and salty, as do words like spinach, chard, and collard greens might suggest that ongchoi is a leafy green similar to these other leafy greens. 1 We can do the same thing computationally by just counting words in the context of ongchoi."
6,Vector Semantics and Embeddings,6.2,Vector Semantics,,,"The idea of vector semantics is to represent a word as a point in a multidimensional semantic space that is derived (in ways we'll see) from the distributions of word neighbors. Vectors for representing words are called embeddings (although embeddings the term is sometimes more strictly applied only to dense vectors like word2vec (Section 6.8), rather than sparse tf-idf or PPMI vectors (Section 6.3-Section 6.6)). The word ""embedding"" derives from its mathematical sense as a mapping from one space or structure to another, although the meaning has shifted; see the end of the chapter. Figure 6 .1 A two-dimensional (t-SNE) projection of embeddings for some words and phrases, showing that words with similar meanings are nearby in space. The original 60dimensional embeddings were trained for sentiment analysis. Simplified from Li et al. 2015with colors added for explanation."
6,Vector Semantics and Embeddings,6.2,Vector Semantics,,,"The fine-grained model of word similarity of vector semantics offers enormous power to NLP applications. NLP applications like the sentiment classifiers of Chapter 4 or Chapter 5 depend on the same words appearing in the training and test sets. But by representing words as embeddings, classifiers can assign sentiment as long as it sees some words with similar meanings. And as we'll see, vector semantic models can be learned automatically from text without supervision."
6,Vector Semantics and Embeddings,6.2,Vector Semantics,,,"In this chapter we'll introduce the two most commonly used models. In the tf-idf model, an important baseline, the meaning of a word is defined by a simple function of the counts of nearby words. We will see that this method results in very long vectors that are sparse, i.e. mostly zeros (since most words simply never occur in the context of others). We'll introduce the word2vec model family for constructing short, dense vectors that have useful semantic properties. We'll also introduce the cosine, the standard way to use embeddings to compute semantic similarity, between two words, two sentences, or two documents, an important tool in practical applications like question answering, summarization, or automatic essay grading."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,,,"""The most important attributes of a vector in 3-space are {Location, Location, Location}"" Randall Munroe, https://xkcd.com/2358/ Vector or distributional models of meaning are generally based on a co-occurrence matrix, a way of representing how often words co-occur. We'll look at two popular matrices: the term-document matrix and the term-term matrix."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.1,Vectors and documents,"In a term-document matrix, each row represents a word in the vocabulary and each term-document matrix column represents a document from some collection of documents. Fig. 6 .2 shows a small selection from a term-document matrix showing the occurrence of four words in four plays by Shakespeare. Each cell in this matrix represents the number of times a particular word (defined by the row) occurs in a particular document (defined by the column). Thus fool appeared 58 times in Twelfth Night."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.1,Vectors and documents,"The term-document matrix of Fig. 6 .2 was first defined as part of the vector space model of information retrieval (Salton, 1971). In this model, a document is represented as a count vector, a column in Fig. 6.3."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.1,Vectors and documents,"To review some basic linear algebra, a vector is, at heart, just a list or array of numbers. So As You Like It is represented as the list [1,114,36,20] (the first column vector in Fig. 6.3) and Julius Caesar is represented as the list [7,62,1,2] (the third column vector). A vector space is a collection of vectors, characterized by their dimension."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.1,Vectors and documents,"In the example in Fig. 6 .3, the document vectors are of dimension 4, dimension just so they fit on the page; in real term-document matrices, the vectors representing each document would have dimensionality |V |, the vocabulary size. The ordering of the numbers in a vector space indicates different meaningful dimensions on which documents vary. Thus the first dimension for both these vectors corresponds to the number of times the word battle occurs, and we can compare each dimension, noting for example that the vectors for As You Like It and Twelfth Night have similar values (1 and 0, respectively) for the first dimension. Figure 6 .3 The term-document matrix for four words in four Shakespeare plays. The red boxes show that each document is represented as a column vector of length four."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.1,Vectors and documents,"We can think of the vector for a document as a point in |V |-dimensional space; thus the documents in Fig. 6 .3 are points in 4-dimensional space. Since 4-dimensional spaces are hard to visualize, Fig. 6 .4 shows a visualization in two dimensions; we've arbitrarily chosen the dimensions corresponding to the words battle and fool. Term-document matrices were originally defined as a means of finding similar documents for the task of document information retrieval. Two documents that are similar will tend to have similar words, and if two documents have similar words their column vectors will tend to be similar. The vectors for the comedies As You Like It [1, 114, 36, 20] and Twelfth Night [0,80,58,15] look a lot more like each other (more fools and wit than battles) than they look like Julius Caesar [7, 62, 1, 2] or Henry V [13, 89, 4, 3] . This is clear with the raw numbers; in the first dimension (battle) the comedies have low numbers and the others have high numbers, and we can see it visually in Fig. 6 .4; we'll see very shortly how to quantify this intuition more formally."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.1,Vectors and documents,"A real term-document matrix, of course, wouldn't just have 4 rows and columns, let alone 2. More generally, the term-document matrix has |V | rows (one for each word type in the vocabulary) and D columns (one for each document in the collection); as we'll see, vocabulary sizes are generally in the tens of thousands, and the number of documents can be enormous (think about all the pages on the web)."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.1,Vectors and documents,"Information retrieval (IR) is the task of finding the document d from the D information retrieval documents in some collection that best matches a query q. For IR we'll therefore also represent a query by a vector, also of length |V |, and we'll need a way to compare two vectors to find how similar they are. (Doing IR will also require efficient ways to store and manipulate these vectors by making use of the convenient fact that these vectors are sparse, i.e., mostly zeros). Later in the chapter we'll introduce some of the components of this vector comparison process: the tf-idf term weighting, and the cosine similarity metric."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.2,Words as vectors: document dimensions,"We've seen that documents can be represented as vectors in a vector space. But vector semantics can also be used to represent the meaning of words. We do this by associating each word with a word vector-a row vector rather than a column row vector vector, hence with different dimensions, as shown in Fig. 6 Figure 6 .5 The term-document matrix for four words in four Shakespeare plays. The red boxes show that each word is represented as a row vector of length four."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.2,Words as vectors: document dimensions,"For documents, we saw that similar documents had similar vectors, because similar documents tend to have similar words. This same principle applies to words: similar words have similar vectors because they tend to occur in similar documents. The term-document matrix thus lets us represent the meaning of a word by the documents it tends to occur in."
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.3,Words as vectors: word dimensions,"An alternative to using the term-document matrix to represent words as vectors of document counts, is to use the term-term matrix, also called the word-word matrix or the term-context matrix, in which the columns are labeled by words rather word-word matrix than documents. This matrix is thus of dimensionality |V |×|V | and each cell records the number of times the row (target) word and the column (context) word co-occur in some context in some training corpus. The context could be the document, in which case the cell represents the number of times the two words appear in the same document. It is most common, however, to use smaller contexts, generally a window around the word, for example of 4 words to the left and 4 words to the right, in which case the cell represents the number of times (in some training corpus) the column word occurs in such a ±4 word window around the row word. For example here is one example each of some words in their windows:"
6,Vector Semantics and Embeddings,6.3,Words and Vectors,6.3.3,Words as vectors: word dimensions,"is traditionally followed by cherry pie, a traditional dessert often mixed, such as strawberry rhubarb pie. Apple pie computer peripherals and personal digital assistants. These devices usually a computer. This includes information available on the internet If we then take every occurrence of each word (say strawberry) and count the context words around it, we get a word-word co-occurrence matrix. Fig. 6 .6 shows a simplified subset of the word-word co-occurrence matrix for these four words computed from the Wikipedia corpus (Davies, 2015 Figure 6 .6 Co-occurrence vectors for four words in the Wikipedia corpus, showing six of the dimensions (hand-picked for pedagogical purposes). The vector for digital is outlined in red. Note that a real vector would have vastly more dimensions and thus be much sparser. Fig. 6 .6 that the two words cherry and strawberry are more similar to each other (both pie and sugar tend to occur in their window) than they are to other words like digital; conversely, digital and information are more similar to each other than, say, to strawberry. Note that |V |, the length of the vector, is generally the size of the vocabulary, often between 10,000 and 50,000 words (using the most frequent words in the training corpus; keeping words after about the most frequent 50,000 or so is generally not helpful). Since most of these numbers are zero these are sparse vector representations; there are efficient algorithms for storing and computing with sparse matrices. Now that we have some intuitions, let's move on to examine the details of computing word similarity. Afterwards we'll discuss methods for weighting cells."
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"To measure similarity between two target words v and w, we need a metric that takes two vectors (of the same dimensionality, either both with words as dimensions, hence of length |V |, or both with documents as dimensions as documents, of length |D|) and gives a measure of their similarity. By far the most common similarity metric is the cosine of the angle between the vectors."
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"The cosine-like most measures for vector similarity used in NLP-is based on the dot product operator from linear algebra, also called the inner product:"
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"dot product inner product dot product(v, w) = v • w = N i=1 v i w i = v 1 w 1 + v 2 w 2 + ... + v N w N (6.7)"
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"As we will see, most metrics for similarity between vectors are based on the dot product. The dot product acts as a similarity metric because it will tend to be high just when the two vectors have large values in the same dimensions. Alternatively, vectors that have zeros in different dimensions-orthogonal vectors-will have a dot product of 0, representing their strong dissimilarity."
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"This raw dot product, however, has a problem as a similarity metric: it favors long vectors. The vector length is defined as"
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"|\mathbf{v}| = \sqrt{\sum_{i=1}^{N} v_i^2} \qquad (6.8)"
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"The dot product is higher if a vector is longer, with higher values in each dimension. More frequent words have longer vectors, since they tend to co-occur with more words and have higher co-occurrence values with each of them. The raw dot product thus will be higher for frequent words. But this is a problem; we'd like a similarity metric that tells us how similar two words are regardless of their frequency."
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"We modify the dot product to normalize for the vector length by dividing the dot product by the lengths of each of the two vectors. This normalized dot product turns out to be the same as the cosine of the angle between the two vectors, following from the definition of the dot product between two vectors a and b:"
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos\theta, \qquad \frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}|\,|\mathbf{b}|} = \cos\theta \qquad (6.9)"
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,The cosine similarity metric between two vectors v and w thus can be computed as:
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"cosine cosine(v, w) = v • w |v||w| = N i=1 v i w i N i=1 v 2 i N i=1 w 2 i (6.10)"
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"For some applications we pre-normalize each vector, by dividing it by its length, creating a unit vector of length 1. Thus we could compute a unit vector from a by unit vector dividing it by |a|. For unit vectors, the dot product is the same as the cosine."
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"The cosine value ranges from 1 for vectors pointing in the same direction, through 0 for orthogonal vectors, to -1 for vectors pointing in opposite directions. But since raw frequency values are non-negative, the cosine for these vectors ranges from 0-1."
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"Let's see how the cosine computes which of the words cherry or digital is closer in meaning to information, just using raw counts from the following shortened The model decides that information is way closer to digital than it is to cherry, a result that seems sensible. Fig. 6 .8 shows a visualization."
6,Vector Semantics and Embeddings,6.4,Cosine for measuring similarity,,,"Figure 6 .8 A (rough) graphical demonstration of cosine similarity, showing vectors for three words (cherry, digital, and information) in the two dimensional space defined by counts of the words computer and pie nearby. The figure doesn't show the cosine, but it highlights the angles; note that the angle between digital and information is smaller than the angle between cherry and information. When two vectors are more similar, the cosine is larger but the angle is smaller; the cosine has its maximum (1) when the angle between two vectors is smallest (0 • ); the cosine of all other angles is less than 1."
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"The co-occurrence matrices above represent each cell by frequencies, either of words with documents ( Fig. 6.5 ), or words with other words (Fig. 6.6 ). But raw frequency is not the best measure of association between words. Raw frequency is very skewed and not very discriminative. If we want to know what kinds of contexts are shared by cherry and strawberry but not by digital and information, we're not going to get good discrimination from words like the, it, or they, which occur frequently with all sorts of words and aren't informative about any particular word. We saw this also in Fig. 6 .3 for the Shakespeare corpus; the dimension for the word good is not very discriminative between plays; good is simply a frequent word and has roughly equivalent high frequencies in each of the plays."
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"It's a bit of a paradox. Words that occur nearby frequently (maybe pie nearby cherry) are more important than words that only appear once or twice. Yet words that are too frequent-ubiquitous, like the or good-are unimportant. How can we balance these two conflicting constraints?"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"There are two common solutions to this problem: in this section we'll describe the tf-idf weighting, usually used when the dimensions are documents. In the next we introduce the PPMI algorithm (usually used when the dimensions are words)."
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"The tf-idf weighting (the '-' here is a hyphen, not a minus sign) is the product of two terms, each term capturing one of these two intuitions:"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"The first is the term frequency (Luhn, 1957) : the frequency of the word t in the term frequency document d. We can just use the raw count as the term frequency:"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"tf t, d = count(t, d) (6.11)"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"More commonly we squash the raw frequency a bit, by using the log 10 of the frequency instead. The intuition is that a word appearing 100 times in a document doesn't make that word 100 times more likely to be relevant to the meaning of the document. Because we can't take the log of 0, we normally add 1 to the count:"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"2 tf t, d = log 10 (count(t, d) + 1) (6.12)"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"If we use log weighting, terms which occur 0 times in a document would have tf = log 10 (1) = 0, 10 times in a document tf = log 10 (11) = 1.04, 100 times tf = log 10 (101) = 2.004, 1000 times tf = 3.00044, and so on."
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"The second factor in tf-idf is used to give a higher weight to words that occur only in a few documents. Terms that are limited to a few documents are useful for discriminating those documents from the rest of the collection; terms that occur frequently across the entire collection aren't as helpful. The document frequency document frequency df t of a term t is the number of documents it occurs in. Document frequency is not the same as the collection frequency of a term, which is the total number of times the word appears in the whole collection in any document. Consider in the collection of Shakespeare's 37 plays the two words Romeo and action. The words have identical collection frequencies (they both occur 113 times in all the plays) but very different document frequencies, since Romeo only occurs in a single play. If our goal is to find documents about the romantic tribulations of Romeo, the word Romeo should be highly weighted, but not action:"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"Word: Romeo, Collection Frequency: 113, Document Frequency: 1. Word: action, Collection Frequency: 113, Document Frequency: 31."
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"We emphasize discriminative words like Romeo via the inverse document frequency or idf term weight (Sparck Jones, 1972) . The idf is defined using the fracidf tion N/df t , where N is the total number of documents in the collection, and df t is the number of documents in which term t occurs. The fewer documents in which a term occurs, the higher this weight. The lowest weight of 1 is assigned to terms that occur in all the documents. It's usually clear what counts as a document: in Shakespeare we would use a play; when processing a collection of encyclopedia articles like Wikipedia, the document is a Wikipedia page; in processing newspaper articles, the document is a single article. Occasionally your corpus might not have appropriate document divisions and you might need to break up the corpus into documents yourself for the purposes of computing idf."
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"Because of the large number of documents in many collections, this measure too is usually squashed with a log function. The resulting definition for inverse document frequency (idf) is thus"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"\mathrm{idf}_t = \log_{10}\!\left(\frac{N}{\mathrm{df}_t}\right) \qquad (6.13)"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"Here are some idf values for some words in the Shakespeare corpus, ranging from extremely informative words which occur in only one play like Romeo, to those that occur in a few like salad or Falstaff, to those which are very common like fool or so common as to be completely non-discriminative since they occur in all 37 plays like good or sweet. 3"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"w t, d = tf t, d × idf t (6.14)"
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"Fig . 6 .9 applies tf-idf weighting to the Shakespeare term-document matrix in Fig. 6 .2, using the tf equation Eq. 6.12. Note that the tf-idf values for the dimension corresponding to the word good have now all become 0; since this word appears in every document, the tf-idf weighting leads it to be ignored. Similarly, the word fool, which appears in 36 out of the 37 plays, has a much lower weight. Figure 6 .9 A tf-idf weighted term-document matrix for four words in four Shakespeare plays, using the counts in Fig. 6.2 . For example the 0.049 value for wit in As You Like It is the product of tf = log 10 (20 + 1) = 1.322 and idf = .037. Note that the idf weighting has eliminated the importance of the ubiquitous word good and vastly reduced the impact of the almost-ubiquitous word fool."
6,Vector Semantics and Embeddings,6.5,TF-IDF: Weighting Terms in the Vector,,,"The tf-idf weighting is the way for weighting co-occurrence matrices in information retrieval, but also plays a role in many other aspects of natural language processing. It's also a great baseline, the simple thing to try first. We'll look at other weightings like PPMI (Positive Pointwise Mutual Information) in Section 6.6."
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"An alternative weighting function to tf-idf, PPMI (positive pointwise mutual information), is used for term-term-matrices, when the vector dimensions correspond to words rather than documents. PPMI draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in our corpus than we would have a priori expected them to appear by chance."
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"Pointwise mutual information (Fano, 1961) 4 is one of the most important con-pointwise mutual information cepts in NLP. It is a measure of how often two events x and y occur, compared with what we would expect if they were independent:"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"I(x, y) = log 2 P(x, y) P(x)P(y) (6.16)"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"The pointwise mutual information between a target word w and a context word c (Church and Hanks 1989, Church and Hanks 1990) is then defined as:"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"PMI(w, c) = log 2 P(w, c) P(w)P(c) (6.17)"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"The numerator tells us how often we observed the two words together (assuming we compute probability by using the MLE). The denominator tells us how often we would expect the two words to co-occur assuming they each occurred independently; recall that the probability of two independent events both occurring is just the product of the probabilities of the two events. Thus, the ratio gives us an estimate of how much more the two words co-occur than we expect by chance. PMI is a useful tool whenever we need to find words that are strongly associated."
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"PMI values range from negative to positive infinity. But negative PMI values (which imply things are co-occurring less often than we would expect by chance) tend to be unreliable unless our corpora are enormous. To distinguish whether two words whose individual probability is each 10 −6 occur together less often than chance, we would need to be certain that the probability of the two occurring together is significantly different than 10 −12 , and this kind of granularity would require an enormous corpus. Furthermore it's not clear whether it's even possible to evaluate such scores of 'unrelatedness' with human judgments. For this reason it is more common to use Positive PMI (called PPMI) which replaces all negative PPMI PMI values with zero (Church and Hanks 1989 , Dagan et al. 1993 , Niwa and Nitta 1994 5 :"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"PPMI(w, c) = max(log 2 P(w, c) P(w)P(c) , 0) (6.18)"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"More formally, let's assume we have a co-occurrence matrix F with W rows (words) and C columns (contexts), where f i j gives the number of times word w i occurs in 4 PMI is based on the mutual information between two random variables X and Y , defined as:"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"I(X,Y ) = x y P(x, y) log 2 P(x, y) P(x)P(y) (6.15)"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"In a confusion of terminology, Fano used the phrase mutual information to refer to what we now call pointwise mutual information and the phrase expectation of the mutual information for what we now call mutual information 5 Positive PMI also cleanly solves the problem of what to do with zero counts, using 0 to replace the −∞ from log(0)."
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,context c j . This can be turned into a PPMI matrix where ppmi i j gives the PPMI value of word w i with context c j as follows:
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"p i j = f i j W i=1 C j=1 f i j , p i * = C j=1 f i j W i=1 C j=1 f i j , p * j = W i=1 f i j W i=1 C j=1 f i j (6.19) PPMI i j = max(log 2 p i j p i * p * j , 0) (6.20)"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"Let's see some PPMI calculations. We'll use Fig. 6 .10, which repeats Fig. 6 .6 plus all the count marginals, and let's pretend for ease of calculation that these are the only words/contexts that matter. Figure 6 .10 Co-occurrence counts for four words in 5 contexts in the Wikipedia corpus, together with the marginals, pretending for the purpose of this calculation that no other words/contexts matter."
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"Thus for example we could compute PPMI(w=information, c=data), assuming we pretended that Figure 6 .11 Replacing the counts in Fig. 6 .6 with joint probabilities, showing the marginals around the outside."
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"PMI has the problem of being biased toward infrequent events; very rare words tend to have very high PMI values. One way to reduce this bias toward low frequency Figure 6 .12 The PPMI matrix showing the association between words and context words, computed from the counts in Fig. 6 .11. Note that most of the 0 PPMI values are ones that had a negative PMI; for example PMI(cherry,computer) = -6.7, meaning that cherry and computer co-occur on Wikipedia less often than we would expect by chance, and with PPMI we replace negative values by zero. events is to slightly change the computation for P(c), using a different function P α (c) that raises the probability of the context word to the power of α:"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"PPMI α (w, c) = max(log 2 P(w, c) P(w)P α (c) , 0) (6.21) P α (c) = count(c) α c count(c) α (6.22)"
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"Levy et al. 2015found that a setting of α = 0.75 improved performance of embeddings on a wide range of tasks (drawing on a similar weighting used for skipgrams described below in Eq. 6.32). This works because raising the count to α = 0.75 increases the probability assigned to rare contexts, and hence lowers their PMI (P α (c) > P(c) when c is rare)."
6,Vector Semantics and Embeddings,6.6,Pointwise Mutual Information (PMI),,,"Another possible solution is Laplace smoothing: Before computing PMI, a small constant k (values of 0.1-3 are common) is added to each of the counts, shrinking (discounting) all the non-zero values. The larger the k, the more the non-zero counts are discounted."
6,Vector Semantics and Embeddings,6.7,Applications of the TF-IDF or PPMI Vector Models,,,"In summary, the vector semantics model we've described so far represents a target word as a vector with dimensions corresponding either to the documents in a large collection (the term-document matrix) or to the counts of words in some neighboring window (the term-term matrix). The values in each dimension are counts, weighted by tf-idf (for term-document matrices) or PPMI (for term-term matrices), and the vectors are sparse (since most values are zero)."
6,Vector Semantics and Embeddings,6.7,Applications of the TF-IDF or PPMI Vector Models,,,"The model computes the similarity between two words x and y by taking the cosine of their tf-idf or PPMI vectors; high cosine, high similarity. This entire model is sometimes referred to as the tf-idf model or the PPMI model, after the weighting function."
6,Vector Semantics and Embeddings,6.7,Applications of the TF-IDF or PPMI Vector Models,,,"The tf-idf model of meaning is often used for document functions like deciding if two documents are similar. We represent a document by taking the vectors of all the words in the document, and computing the centroid of all those vectors."
6,Vector Semantics and Embeddings,6.7,Applications of the TF-IDF or PPMI Vector Models,,,"The centroid is the multidimensional version of the mean; the centroid of a set of vectors is a single vector that has the minimum sum of squared distances to each of the vectors in the set. Given k word vectors w 1 , w 2 , ..., w k , the centroid document vector d is:"
6,Vector Semantics and Embeddings,6.7,Applications of the TF-IDF or PPMI Vector Models,,,"\mathbf{d} = \frac{\mathbf{w}_1 + \mathbf{w}_2 + \dots + \mathbf{w}_k}{k} \qquad (6.23)"
6,Vector Semantics and Embeddings,6.7,Applications of the TF-IDF or PPMI Vector Models,,,"Given two documents, we can then compute their document vectors d 1 and d 2 , and estimate the similarity between the two documents by cos (d 1 , d 2 ) . Document similarity is also useful for all sorts of applications; information retrieval, plagiarism detection, news recommender systems, and even for digital humanities tasks like comparing different versions of a text to see which are similar to each other. Either the PPMI model or the tf-idf model can be used to compute word similarity, for tasks like finding word paraphrases, tracking changes in word meaning, or automatically discovering meanings of words in different corpora. For example, we can find the 10 most similar words to any target word w by computing the cosines between w and each of the V − 1 other words, sorting, and looking at the top 10."
6,Vector Semantics and Embeddings,6.8,Word2Vec,,,"In the previous sections we saw how to represent a word as a sparse, long vector with dimensions corresponding to words in the vocabulary or documents in a collection. We now introduce a more powerful word representation: embeddings, short dense vectors. Unlike the vectors we've seen so far, embeddings are short, with number of dimensions d ranging from 50-1000, rather than the much larger vocabulary size |V | or number of documents D we've seen. These d dimensions don't have a clear interpretation. And the vectors are dense: instead of vector entries being sparse, mostly-zero counts or functions of counts, the values will be real-valued numbers that can be negative."
6,Vector Semantics and Embeddings,6.8,Word2Vec,,,"It turns out that dense vectors work better in every NLP task than sparse vectors. While we don't completely understand all the reasons for this, we have some intuitions. Representing words as 300-dimensional dense vectors requires our classifiers to learn far fewer weights than if we represented words as 50,000-dimensional vectors, and the smaller parameter space possibly helps with generalization and avoiding overfitting. Dense vectors may also do a better job of capturing synonymy. For example, in a sparse vector representation, dimensions for synonyms like car and automobile dimension are distinct and unrelated; sparse vectors may thus fail to capture the similarity between a word with car as a neighbor and a word with automobile as a neighbor."
6,Vector Semantics and Embeddings,6.8,Word2Vec,,,"In this section we introduce one method for computing embeddings: skip-gram vocabulary. In Chapter 11 we'll introduce methods for learning dynamic contextual embeddings like the popular family of BERT representations, in which the vector for each word is different in different contexts."
6,Vector Semantics and Embeddings,6.8,Word2Vec,,,"The intuition of word2vec is that instead of counting how often each word w occurs near, say, apricot, we'll instead train a classifier on a binary prediction task: ""Is word w likely to show up near apricot?"" We don't actually care about this prediction task; instead we'll take the learned classifier weights as the word embeddings."
6,Vector Semantics and Embeddings,6.8,Word2Vec,,,"The revolutionary intuition here is that we can just use running text as implicitly supervised training data for such a classifier; a word c that occurs near the target word apricot acts as gold 'correct answer' to the question ""Is word c likely to show up near apricot?"" This method, often called self-supervision, avoids the need for self-supervision any sort of hand-labeled supervision signal. This idea was first proposed in the task of neural language modeling, when Bengio et al. (2003) and Collobert et al. 2011showed that a neural language model (a neural network that learned to predict the next word from prior words) could just use the next word in running text as its supervision signal, and could be used to learn an embedding representation for each word as part of doing this prediction task."
6,Vector Semantics and Embeddings,6.8,Word2Vec,,,"We'll see how to do neural networks in the next chapter, but word2vec is a much simpler model than the neural network language model, in two ways. First, word2vec simplifies the task (making it binary classification instead of word prediction). Second, word2vec simplifies the architecture (training a logistic regression classifier instead of a multi-layer neural network with hidden layers that demand more sophisticated training algorithms). The intuition of skip-gram is:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,,,1. Treat the target word and a neighboring context word as positive examples. 2. Randomly sample other words in the lexicon to get negative samples. 3. Use logistic regression to train a classifier to distinguish those two cases. 4. Use the learned weights as the embeddings.
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"Let's start by thinking about the classification task, and then turn to how to train. Imagine a sentence like the following, with a target word apricot, and assume we're using a window of ±2 context words:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"... lemon, a [tablespoon of apricot jam, a] pinch ... c1 c2 w c3 c4"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"Our goal is to train a classifier such that, given a tuple (w, c) of a target word w paired with a candidate context word c (for example (apricot, jam), or perhaps (apricot, aardvark)) it will return the probability that c is a real context word (true for jam, false for aardvark):"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"P(+|w, c) (6.24)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,The probability that word c is not a real context word for w is just 1 minus Eq. 6.24:
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"P(−|w, c) = 1 − P(+|w, c) (6.25)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"How does the classifier compute the probability P? The intuition of the skipgram model is to base this probability on embedding similarity: a word is likely to occur near the target if its embedding vector is similar to the target embedding. To compute similarity between these dense embeddings, we rely on the intuition that two vectors are similar if they have a high dot product (after all, cosine is just a normalized dot product). In other words:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"Similarity(w, c) ≈ c • w (6.26)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"The dot product c • w is not a probability, it's just a number ranging from −∞ to ∞ (since the elements in word2vec embeddings can be negative, the dot product can be negative). To turn the dot product into a probability, we'll use the logistic or sigmoid function σ (x), the fundamental core of logistic regression:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"\sigma(x) = \frac{1}{1 + \exp(-x)} \qquad (6.27)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,We model the probability that word c is a real context word for target word w as:
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"P(+|w, c) = σ (c • w) = 1 1 + exp (−c • w) (6.28)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"The sigmoid function returns a number between 0 and 1, but to make it a probability we'll also need the total probability of the two possible events (c is a context word, and c isn't a context word) to sum to 1. We thus estimate the probability that word c is not a real context word for w as:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"P(−|w, c) = 1 − P(+|w, c) = σ (−c • w) = 1 1 + exp (c • w) (6.29)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"Equation 6.28 gives us the probability for one word, but there are many context words in the window. Skip-gram makes the simplifying assumption that all context words are independent, allowing us to just multiply their probabilities:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"P(+|w, c 1:L ) = L i=1 σ (c i • w) (6.30) log P(+|w, c 1:L ) = L i=1 log σ (c i • w) (6.31)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.1,The classifier,"In summary, skip-gram trains a probabilistic classifier that, given a test target word w and its context window of L words c 1:L , assigns a probability based on how similar this context window is to the target word. The probability is based on applying the logistic (sigmoid) function to the dot product of the embeddings of the target word with each context word. To compute this probability, we just need embeddings for each target word and context word in the vocabulary. Figure 6 .13 The embeddings learned by the skipgram model. The algorithm stores two embeddings for each word, the target embedding (sometimes called the input embedding) and the context embedding (sometimes called the output embedding). The parameter θ that the algorithm learns is thus a matrix of 2|V | vectors, each of dimension d, formed by concatenating two matrices, the target embeddings W and the context+noise embeddings C. Fig. 6 .13 shows the intuition of the parameters we'll need. Skip-gram actually stores two embeddings for each word, one for the word as a target, and one for the word considered as context. Thus the parameters we need to learn are two matrices W and C, each containing an embedding for every one of the |V | words in the vocabulary V . 6 Let's now turn to learning these embeddings (which is the real goal of training this classifier in the first place)."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"The learning algorithm for skip-gram embeddings takes as input a corpus of text, and a chosen vocabulary size N. It begins by assigning a random embedding vector for each of the N vocabulary words, and then proceeds to iteratively shift the embedding of each word w to be more like the embeddings of words that occur nearby in texts, and less like the embeddings of words that don't occur nearby. Let's start by considering a single piece of training data:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"... lemon, a [tablespoon of apricot jam, a] pinch ... c1 c2 w c3 c4"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"This example has a target word w (apricot), and 4 context words in the L = ±2 window, resulting in 4 positive training instances (on the left below): For training a binary classifier we also need negative examples. In fact skipgram with negative sampling (SGNS) uses more negative examples than positive examples (with the ratio between them set by a parameter k). So for each of these (w, c pos ) training instances we'll create k negative samples, each consisting of the target w plus a 'noise word' c neg . A noise word is a random word from the lexicon, constrained not to be the target word w. The right above shows the setting where k = 2, so we'll have 2 negative examples in the negative training set − for each positive example w, c pos ."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"The noise words are chosen according to their weighted unigram frequency p α (w), where α is a weight. If we were sampling according to unweighted frequency p(w), it would mean that with unigram probability p(""the"") we would choose the word the as a noise word, with unigram probability p(""aardvark"") we would choose aardvark, and so on. But in practice it is common to set α = .75, i.e. use the weighting p 3 4 (w):"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"P_{\alpha}(w) = \frac{\mathrm{count}(w)^{\alpha}}{\sum_{w'} \mathrm{count}(w')^{\alpha}} \qquad (6.32)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"Setting α = .75 gives better performance because it gives rare noise words slightly higher probability: for rare words, P α (w) > P(w). To illustrate this intuition, it might help to work out the probabilities for an example with two events, P(a) = .99"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"and P(b) = .01: If we consider one word/context pair (w, c pos ) with its k noise words c neg 1 ...c neg k , we can express these two goals as the following loss function L to be minimized (hence the −); here the first term expresses that we want the classifier to assign the real context word c pos a high probability of being a neighbor, and the second term expresses that we want to assign each of the noise words c neg i a high probability of being a non-neighbor, all multiplied because we assume independence:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"L CE = − log P(+|w, c pos ) k i=1 P(−|w, c neg i ) = − log P(+|w, c pos ) + k i=1 log P(−|w, c neg i ) = − log P(+|w, c pos ) + k i=1 log 1 − P(+|w, c neg i ) = − log σ (c pos • w) + k i=1 log σ (−c neg i • w) (6.34)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"That is, we want to maximize the dot product of the word with the actual context words, and minimize the dot products of the word with the k negative sampled nonneighbor words."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"We minimize this loss function using stochastic gradient descent. Fig. 6.14 shows the intuition of one step of learning."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"To get the gradient, we need to take the derivative of Eq. 6.34 with respect to the different embeddings. It turns out the derivatives are the following (we leave the proof as an exercise at the end of the chapter):"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"\frac{\partial L_{CE}}{\partial \mathbf{c}_{pos}} = [\sigma(\mathbf{c}_{pos} \cdot \mathbf{w}) - 1]\,\mathbf{w} \qquad (6.35) \qquad \frac{\partial L_{CE}}{\partial \mathbf{c}_{neg}} = [\sigma(\mathbf{c}_{neg} \cdot \mathbf{w})]\,\mathbf{w} \qquad (6.36) \qquad \frac{\partial L_{CE}}{\partial \mathbf{w}} = [\sigma(\mathbf{c}_{pos} \cdot \mathbf{w}) - 1]\,\mathbf{c}_{pos} + \sum_{i=1}^{k} [\sigma(\mathbf{c}_{neg_i} \cdot \mathbf{w})]\,\mathbf{c}_{neg_i} \qquad (6.37)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"Figure 6.14 Intuition of one step of gradient descent. The skip-gram model tries to shift embeddings so the target embeddings (here for apricot) are closer to (have a higher dot product with) context embeddings for nearby words (here jam) and further from (lower dot product with) context embeddings for noise words that don't occur nearby (here Tolstoy and matrix)."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"The update equations going from time step t to t + 1 in stochastic gradient descent are thus:"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"\mathbf{c}_{pos}^{t+1} = \mathbf{c}_{pos}^{t} - \eta\,[\sigma(\mathbf{c}_{pos}^{t} \cdot \mathbf{w}^{t}) - 1]\,\mathbf{w}^{t} \qquad (6.38) \qquad \mathbf{c}_{neg}^{t+1} = \mathbf{c}_{neg}^{t} - \eta\,[\sigma(\mathbf{c}_{neg}^{t} \cdot \mathbf{w}^{t})]\,\mathbf{w}^{t} \qquad (6.39) \qquad \mathbf{w}^{t+1} = \mathbf{w}^{t} - \eta\left( [\sigma(\mathbf{c}_{pos} \cdot \mathbf{w}^{t}) - 1]\,\mathbf{c}_{pos} + \sum_{i=1}^{k} [\sigma(\mathbf{c}_{neg_i} \cdot \mathbf{w}^{t})]\,\mathbf{c}_{neg_i} \right) \qquad (6.40)"
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"Just as in logistic regression, then, the learning algorithm starts with randomly initialized W and C matrices, and then walks through the training corpus using gradient descent to move W and C so as to maximize the objective in Eq. 6.34 by making the updates in (Eq. 6.39)-(Eq. 6.40)."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"Recall that the skip-gram model learns two separate embeddings for each word i: the target embedding w i and the context embedding c i , stored in two matrices, the target embedding context embedding target matrix W and the context matrix C. It's common to just add them together, representing word i with the vector w i + c i . Alternatively we can throw away the C matrix and just represent each word i by the vector w i ."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.2,Learning skip-gram embeddings,"As with the simple count-based methods like tf-idf, the context window size L affects the performance of skip-gram embeddings, and experiments often tune the parameter L on a devset."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.3,Other kinds of static embeddings,"There are many kinds of static embeddings. An extension of word2vec, fasttext fasttext (Bojanowski et al., 2017) , addresses a problem with word2vec as we have presented it so far: it has no good way to deal with unknown words -words that appear in a test corpus but were unseen in the training corpus. A related problem is word sparsity, such as in languages with rich morphology, where some of the many forms for each noun and verb may only occur rarely. Fasttext deals with these problems by using subword models, representing each word as itself plus a bag of constituent n-grams, with special boundary symbols < and > added to each word. For example, 6.9 • VISUALIZING EMBEDDINGS 119 with n = 3 the word where would be represented by the sequence plus the character n-grams: "
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.3,Other kinds of static embeddings,"Then a skipgram embedding is learned for each constituent n-gram, and the word where is represented by the sum of all of the embeddings of its constituent n-grams. Unknown words can then be represented only by the sum of the constituent n-grams. A fasttext open-source library, including pretrained embeddings for 157 languages, is available at https://fasttext.cc."
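6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.3,Other kinds of static embeddings,"As a small illustration of the subword idea (a sketch, not the fasttext implementation itself), the character n-grams for a word can be extracted as follows:"
```python
def char_ngrams(word, n=3):
    """Return the word itself plus its character n-grams, with boundary symbols < and >."""
    bounded = f"<{word}>"
    grams = [bounded[i:i + n] for i in range(len(bounded) - n + 1)]
    return [bounded] + grams

print(char_ngrams("where"))  # ['<where>', '<wh', 'whe', 'her', 'ere', 're>']
```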
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.3,Other kinds of static embeddings,"Another very widely used static embedding model is GloVe (Pennington et al., 2014), short for Global Vectors, because the model is based on capturing global corpus statistics. GloVe is based on ratios of probabilities from the word-word cooccurrence matrix, combining the intuitions of count-based models like PPMI while also capturing the linear structures used by methods like word2vec."
6,Vector Semantics and Embeddings,6.8,Word2Vec,6.8.3,Other kinds of static embeddings,"It turns out that dense embeddings like word2vec actually have an elegant mathematical relationship with sparse embeddings like PPMI, in which word2vec can be seen as implicitly optimizing a shifted version of a PPMI matrix (Levy and Goldberg, 2014c)."
6,Vector Semantics and Embeddings,6.9,Visualizing Embeddings,,,"""I see well in many dimensions as long as the dimensions are around two."""
6,Vector Semantics and Embeddings,6.9,Visualizing Embeddings,,,The late economist Martin Shubik
6,Vector Semantics and Embeddings,6.9,Visualizing Embeddings,,,"Visualizing embeddings is an important goal in helping understand, apply, and improve these models of word meaning. But how can we visualize a (for example) 100-dimensional vector?"
6,Vector Semantics and Embeddings,6.9,Visualizing Embeddings,,,"The simplest way to visualize the meaning of a word w embedded in a space is to list the most similar words to w by sorting the vectors for all words in the vocabulary by their cosine with the vector for w. For example the 7 closest words to frog using the GloVe embeddings are: frogs, toad, litoria, leptodactylidae, rana, lizard, and eleutherodactylus (Pennington et al., 2014)."
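6,Vector Semantics and Embeddings,6.9,Visualizing Embeddings,,,"A minimal numpy sketch (with hypothetical variable names, not from the text) of this nearest-neighbor listing, given an embedding matrix and a vocabulary list:"
```python
import numpy as np

def most_similar(word, vocab, E, k=7):
    """Return the k words whose embeddings have the highest cosine with `word`.

    vocab: list of words; E: array of shape (|V|, d), row i = embedding of vocab[i].
    """
    idx = vocab.index(word)
    E_norm = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-length rows
    cosines = E_norm @ E_norm[idx]                          # cosine with the query word
    order = np.argsort(-cosines)                            # most similar first
    return [vocab[i] for i in order if i != idx][:k]
```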
6,Vector Semantics and Embeddings,6.9,Visualizing Embeddings,,,"Yet another visualization method is to use a clustering algorithm to show a hierarchical representation of which words are similar to others in the embedding space. The uncaptioned figure on the left uses hierarchical clustering of some embedding vectors for nouns as a visualization method (Rohde et al., 2006) ."
6,Vector Semantics and Embeddings,6.9,Visualizing Embeddings,,,"Probably the most common visualization method, however, is to project the 100 dimensions of a word down into 2 dimensions. Fig. 6.1 showed one such visualization, as does Fig. 6.16, using a projection method called t-SNE (van der Maaten and Hinton, 2008)."
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,In this section we briefly summarize some of the semantic properties of embeddings that have been studied.
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,Different types of similarity or association: One parameter of vector semantic models that is relevant to both sparse tf-idf vectors and dense word2vec vectors is the size of the context window used to collect counts. This is generally between 1 and 10 words on each side of the target word (for a total context of 2-20 words).
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,"The choice depends on the goals of the representation. Shorter context windows tend to lead to representations that are a bit more syntactic, since the information is coming from immediately nearby words. When the vectors are computed from short context windows, the most similar words to a target word w tend to be semantically similar words with the same parts of speech. When vectors are computed from long context windows, the highest cosine words to a target word w tend to be words that are topically related but not similar."
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,"For example Levy and Goldberg (2014a) showed that using skip-gram with a window of ±2, the most similar words to the word Hogwarts (from the Harry Potter series) were names of other fictional schools: Sunnydale (from Buffy the Vampire Slayer) or Evernight (from a vampire series). With a window of ±5, the most similar words to Hogwarts were other words topically related to the Harry Potter series: Dumbledore, Malfoy, and half-blood."
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,"It's also often useful to distinguish two kinds of similarity or association between words (Schütze and Pedersen, 1993). Two words have first-order co-occurrence (sometimes called syntagmatic association) if they are typically nearby each other; two words have second-order co-occurrence (sometimes called paradigmatic association) if they have similar neighbors. Analogy/Relational Similarity: Another semantic property of embeddings is their ability to capture relational meanings, classically studied with the parallelogram model for solving simple analogy problems of the form a is to b as a* is to what?. In such problems, a system is given a problem like apple:tree::grape:?, i.e., apple is to tree as grape is to what, and must fill in the word vine. In the parallelogram model, illustrated in Fig. 6.15, the vector from the word apple to the word tree (= \vec{tree} − \vec{apple}) is added to the vector for grape (\vec{grape}); the nearest word to that point is returned. Figure 6.15 The parallelogram model for analogy problems (Rumelhart and Abrahamson, 1973): the location of \vec{vine} can be found by subtracting \vec{apple} from \vec{tree} and adding \vec{grape}."
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,"In early work with sparse embeddings, scholars showed that sparse vector models of meaning could solve such analogy problems (Turney and Littman, 2005). With dense embeddings, the result of an expression like \vec{king} − \vec{man} + \vec{woman} is a vector close to \vec{queen}, and \vec{Paris} − \vec{France} + \vec{Italy} is close to \vec{Rome}. The embedding model thus seems to be extracting representations of relations like MALE-FEMALE, or CAPITAL-CITY-OF, or even COMPARATIVE/SUPERLATIVE, as shown in Fig. 6.16 from GloVe. For an a : b :: a* : b* problem, meaning the algorithm is given vectors a, b, and a* and must find b*, the parallelogram method is thus:"
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,"$$\hat{b}^* = \operatorname*{argmin}_{x} \; \mathrm{distance}(x, a^* - a + b) \quad (6.41)$$"
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,"with some distance function, such as Euclidean distance."
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,"There are some caveats. For example, the closest value returned by the parallelogram algorithm in word2vec or GloVe embedding spaces is usually not in fact b* but one of the 3 input words or their morphological variants (i.e., cherry:red :: potato:x returns potato or potatoes instead of brown), so these must be explicitly excluded. Furthermore, while embedding spaces perform well if the task involves frequent words, small distances, and certain relations (like relating countries with their capitals or verbs/nouns with their inflected forms), the parallelogram method with embeddings doesn't work as well for other relations (Linzen 2016, Gladkova et al. 2016, Schluter 2018, Ethayarajh et al. 2019a), and indeed Peterson et al. (2020) argue that the parallelogram method is in general too simple to model the human cognitive process of forming analogies of this kind."
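6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,,,"A minimal sketch (hypothetical variable names, not the word2vec or GloVe implementation) of the parallelogram method with cosine distance, excluding the three input words as the caveat above requires:"
```python
import numpy as np

def analogy(a, b, a_star, vocab, E, k=1):
    """Solve a : b :: a_star : ? by the parallelogram method (Eq. 6.41)."""
    E_norm = E / np.linalg.norm(E, axis=1, keepdims=True)
    vec = lambda w: E_norm[vocab.index(w)]
    target = vec(b) - vec(a) + vec(a_star)          # parallelogram point
    target = target / np.linalg.norm(target)
    cosines = E_norm @ target                       # cosine of every word with the target point
    exclude = {a, b, a_star}                        # don't return the input words themselves
    order = np.argsort(-cosines)
    return [vocab[i] for i in order if vocab[i] not in exclude][:k]
```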
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,6.10.1,Embeddings and Historical Semantics,"Embeddings can also be a useful tool for studying how meaning changes over time, by computing multiple embedding spaces, each from texts written in a particular time period. For example Fig. 6.17 shows a visualization of changes in meaning in English words over the last two centuries, computed by building separate embedding spaces for each decade from historical corpora like Google n-grams (Lin et al., 2012b) and the Corpus of Historical American English (Davies, 2012)."
6,Vector Semantics and Embeddings,6.10,Semantic Properties of Embeddings,6.10.1,Embeddings and Historical Semantics,"Figure 6.17 A t-SNE visualization of the semantic change of 3 words in English using word2vec vectors. The modern sense of each word, and the grey context words, are computed from the most recent (modern) time-point embedding space. Earlier points are computed from earlier historical embedding spaces. The visualizations show the changes in the word gay from meanings related to ""cheerful"" or ""frolicsome"" to referring to homosexuality, the development of the modern ""transmission"" sense of broadcast from its original sense of sowing seeds, and the pejoration of the word awful as it shifted from meaning ""full of awe"" to meaning ""terrible or appalling"" (Hamilton et al., 2016b)."
6,Vector Semantics and Embeddings,6.11,Bias and Embeddings,,,"In addition to their ability to learn word meaning from text, embeddings, alas, also reproduce the implicit biases and stereotypes that were latent in the text. As the prior section just showed, embeddings can roughly model relational similarity: 'queen' as the closest word to 'king' − 'man' + 'woman' implies the analogy man:woman::king:queen. But these same embedding analogies also exhibit gender stereotypes. For example Bolukbasi et al. (2016) find that the closest occupation to 'man' − 'computer programmer' + 'woman' in word2vec embeddings trained on news text is 'homemaker', and that the embeddings similarly suggest the analogy 'father' is to 'doctor' as 'mother' is to 'nurse'. This could result in what Crawford (2017) and Blodgett et al. (2020) call an allocational harm, when a system allocates resources (jobs or credit) unfairly to different groups. For example algorithms that use embeddings as part of a search for hiring potential programmers or doctors might thus incorrectly downweight documents with women's names. It turns out that embeddings don't just reflect the statistics of their input, but also amplify bias; gendered terms become more gendered in embedding space than they were in the input text statistics (Zhao et al. 2017, Ethayarajh et al. 2019b, Jia et al. 2020), and biases are more exaggerated than in actual labor employment statistics (Garg et al., 2018)."
6,Vector Semantics and Embeddings,6.11,Bias and Embeddings,,,"Embeddings also encode the implicit associations that are a property of human reasoning. The Implicit Association Test (Greenwald et al., 1998) measures people's associations between concepts (like 'flowers' or 'insects') and attributes (like 'pleasantness' and 'unpleasantness') by measuring differences in the latency with which they label words in the various categories. Using such methods, people"
6,Vector Semantics and Embeddings,6.11,Bias and Embeddings,,,"in the United States have been shown to associate African-American names with unpleasant words (more than European-American names), male names more with mathematics and female names with the arts, and old people's names with unpleasant words (Greenwald et al. 1998, Nosek et al. 2002a, Nosek et al. 2002b). Caliskan et al. (2017) replicated all these findings of implicit associations using GloVe vectors and cosine similarity instead of human latencies. For example African-American names like 'Leroy' and 'Shaniqua' had a higher GloVe cosine with unpleasant words while European-American names ('Brad', 'Greg', 'Courtney') had a higher cosine with pleasant words. These problems with embeddings are an example of a representational harm (Crawford 2017, Blodgett et al. 2020), which is a harm caused by a system demeaning or even ignoring some social groups. Any embedding-aware algorithm that made use of word sentiment could thus exacerbate bias against African Americans. Recent research focuses on ways to try to remove these kinds of biases, for example by developing a transformation of the embedding space that removes gender stereotypes but preserves definitional gender (Bolukbasi et al. 2016, Zhao et al. 2017) or changing the training procedure (Zhao et al., 2018b). However, although these sorts of debiasing may reduce bias in embeddings, they do not eliminate it. Historical embeddings are also being used to measure biases in the past. Garg et al. (2018) used embeddings from historical texts to measure the association between embeddings for occupations and embeddings for names of various ethnicities or genders (for example the relative cosine similarity of women's names versus men's to occupation words like 'librarian' or 'carpenter') across the 20th century. They found that the cosines correlate with the empirical historical percentages of women or ethnic groups in those occupations. Historical embeddings also replicated old surveys of ethnic stereotypes; the tendency of experimental participants in 1933 to associate adjectives like 'industrious' or 'superstitious' with, e.g., Chinese ethnicity, correlates with the cosine between Chinese last names and those adjectives using embeddings trained on 1930s text. They also were able to document historical gender biases, such as the fact that embeddings for adjectives related to competence ('smart', 'wise', 'thoughtful', 'resourceful') had a higher cosine with male than female words, and showed that this bias has been slowly decreasing since 1960. We return in later chapters to this question about the role of bias in natural language processing."
6,Vector Semantics and Embeddings,6.12,Evaluating Vector Models,,,"The most important evaluation metric for vector models is extrinsic evaluation on tasks, i.e., using vectors in an NLP task and seeing whether this improves performance over some other model."
6,Vector Semantics and Embeddings,6.12,Evaluating Vector Models,,,"Nonetheless it is useful to have intrinsic evaluations. The most common metric is to test their performance on similarity, computing the correlation between an algorithm's word similarity scores and word similarity ratings assigned by humans. WordSim-353 (Finkelstein et al., 2002) is a commonly used set of ratings from 0"
6,Vector Semantics and Embeddings,6.12,Evaluating Vector Models,,,"to 10 for 353 noun pairs; for example (plane, car) had an average score of 5.77. SimLex-999 (Hill et al., 2015) is a more difficult dataset that quantifies similarity (cup, mug) rather than relatedness (cup, coffee), and includes both concrete and abstract adjective, noun and verb pairs. The TOEFL dataset is a set of 80 questions, each consisting of a target word with 4 additional word choices; the task is to choose which is the correct synonym, as in the example: Levied is closest in meaning to: imposed, believed, requested, correlated (Landauer and Dumais, 1997). All of these datasets present words without context."
6,Vector Semantics and Embeddings,6.12,Evaluating Vector Models,,,"Slightly more realistic are intrinsic similarity tasks that include context. The Stanford Contextual Word Similarity (SCWS) dataset (Huang et al., 2012) and the Word-in-Context (WiC) dataset (Pilehvar and Camacho-Collados, 2019) offer richer evaluation scenarios. SCWS gives human judgments on 2,003 pairs of words in their sentential context, while WiC gives target words in two sentential contexts that are either in the same or different senses; see Section 18.5.3. The semantic textual similarity task (Agirre et al. 2012, Agirre et al. 2015) evaluates the performance of sentence-level similarity algorithms, consisting of a set of pairs of sentences, each pair with human-labeled similarity scores."
6,Vector Semantics and Embeddings,6.12,Evaluating Vector Models,,,"Another task used for evaluation is the analogy task, discussed on page 120, where the system has to solve problems of the form a is to b as a* is to b*, given a, b, and a* and having to find b* (Turney and Littman, 2005). A number of sets of tuples have been created for this task (Mikolov et al. 2013a, Mikolov et al. 2013c, Gladkova et al. 2016), covering morphology (city:cities::child:children), lexicographic relations (leg:table::spout:teapot) and encyclopedia relations (Beijing:China::Dublin:Ireland), some drawing from the SemEval-2012 Task 2 dataset of 79 different relations (Jurgens et al., 2012)."
6,Vector Semantics and Embeddings,6.12,Evaluating Vector Models,,,"All embedding algorithms suffer from inherent variability. For example because of randomness in the initialization and the random negative sampling, algorithms like word2vec may produce different results even from the same dataset, and individual documents in a collection may strongly impact the resulting embeddings (Tian et al. 2016, Hellrich and Hahn 2016, Antoniak and Mimno 2018). When embeddings are used to study word associations in particular corpora, therefore, it is best practice to train multiple embeddings with bootstrap sampling over documents and average the results (Antoniak and Mimno, 2018)."
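6,Vector Semantics and Embeddings,6.12,Evaluating Vector Models,,,"A minimal sketch of that best practice, under the assumption that some embedding trainer is available (the train_embeddings function below is a hypothetical stand-in, not a real library call):"
```python
import random

def bootstrap_embeddings(documents, train_embeddings, n_runs=20, seed=0):
    """Train embeddings on bootstrap samples of a document collection.

    train_embeddings(docs) is assumed to return a dict word -> vector; we return
    one such dict per run, so downstream word-association measurements can be
    averaged (or given confidence intervals) across runs.
    """
    rng = random.Random(seed)
    runs = []
    for _ in range(n_runs):
        # Sample documents with replacement (a bootstrap sample of the corpus).
        sample = [rng.choice(documents) for _ in range(len(documents))]
        runs.append(train_embeddings(sample))
    return runs
```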
6,Vector Semantics and Embeddings,6.13,Summary,,,"• In vector semantics, a word is modeled as a vector, a point in high-dimensional space, also called an embedding. In this chapter we focus on static embeddings, in which each word is mapped to a fixed embedding."
6,Vector Semantics and Embeddings,6.13,Summary,,,"• Vector semantic models fall into two classes: sparse and dense. In sparse models each dimension corresponds to a word in the vocabulary V and cells are functions of co-occurrence counts. The term-document matrix has a row for each word (term) in the vocabulary and a column for each document. The word-context or term-term matrix has a row for each (target) word in the vocabulary and a column for each context term in the vocabulary. Two sparse weightings are common: the tf-idf weighting, which weights each cell by its term frequency and inverse document frequency, and PPMI (positive pointwise mutual information), most common for word-context matrices."
6,Vector Semantics and Embeddings,6.13,Summary,,,"• Dense vector models have dimensionality 50-1000. Word2vec algorithms like skip-gram are a popular way to compute dense embeddings. Skip-gram trains a logistic regression classifier to compute the probability that two words are 'likely to occur nearby in text'. This probability is computed from the dot product between the embeddings for the two words."
6,Vector Semantics and Embeddings,6.13,Summary,,,"• Skip-gram uses stochastic gradient descent to train the classifier, by learning embeddings that have a high dot product with embeddings of words that occur nearby and a low dot product with noise words."
6,Vector Semantics and Embeddings,6.13,Summary,,,"• Other important embedding algorithms include GloVe, a method based on ratios of word co-occurrence probabilities."
6,Vector Semantics and Embeddings,6.13,Summary,,,"• Whether using sparse or dense vectors, word and document similarities are computed by some function of the dot product between vectors. The cosine of two vectors, a normalized dot product, is the most popular such metric."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"The idea of vector semantics arose out of research in the 1950s in three distinct fields: linguistics, psychology, and computer science, each of which contributed a fundamental aspect of the model. The idea that meaning is related to the distribution of words in context was widespread in linguistic theory of the 1950s, among distributionalists like Zellig Harris, Martin Joos, and J. R. Firth, and semioticians like Thomas Sebeok. As Joos (1950) put it, the linguist's ""meaning"" of a morpheme. . . is by definition the set of conditional probabilities of its occurrence in context with all other morphemes."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"The idea that the meaning of a word might be modeled as a point in a multidimensional semantic space came from psychologists like Charles E. Osgood, who had been studying how people responded to the meaning of words by assigning values along scales like happy/sad or hard/soft. Osgood et al. (1957) proposed that the meaning of a word in general could be modeled as a point in a multidimensional Euclidean space, and that the similarity of meaning between two words could be modeled as the distance between these points in the space."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"A final intellectual source in the 1950s and early 1960s was the field then called mechanical indexing, now known as information retrieval. In what became known as the vector space model for information retrieval (Salton 1971, Sparck Jones 1986), researchers demonstrated new ways to define the meaning of words in terms of vectors (Switzer, 1965), refined methods for word similarity based on measures of statistical association between words like mutual information (Giuliano, 1965) and idf (Sparck Jones, 1972), and showed that the meaning of documents could be represented in the same vector spaces used for words. Some of the philosophical underpinning of the distributional way of thinking came from the late writings of the philosopher Wittgenstein, who was skeptical of the possibility of building a completely formal theory of meaning definitions for each word, suggesting instead that ""the meaning of a word is its use in the language"" (Wittgenstein, 1953, PI 43). That is, instead of using some logical language to define each word, or drawing on denotations or truth values, Wittgenstein's idea is that we should define a word by how it is used by people in speaking and understanding in their day-to-day interactions, thus prefiguring the movement toward embodied and experiential models in linguistics and NLP (Glenberg and Robertson 2000, Lake and Murphy 2021, Bisk et al. 2020, Bender and Koller 2020)."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"More distantly related is the idea of defining words by a vector of discrete features, which has roots at least as far back as Descartes and Leibniz (Wierzbicka 1992, Wierzbicka 1996) . By the middle of the 20th century, beginning with the work of Hjelmslev (Hjelmslev, 1969) (originally 1943) and fleshed out in early models of generative grammar (Katz and Fodor, 1963) , the idea arose of representing meaning with semantic features, symbols that represent some sort of primitive meaning."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"For example words like hen, rooster, or chick, have something in common (they all describe chickens) and something different (their age and sex), representable as:"
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"hen: +female, +chicken, +adult; rooster: -female, +chicken, +adult; chick: +chicken, -adult"
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"The dimensions used by vector models of meaning to define words, however, are only abstractly related to this idea of a small fixed number of hand-built dimensions. Nonetheless, there has been some attempt to show that certain dimensions of embedding models do contribute some specific compositional aspect of meaning like these early semantic features."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"The use of dense vectors to model word meaning, and indeed the term embedding, grew out of the latent semantic indexing (LSI) model (Deerwester et al., 1988), recast as LSA (latent semantic analysis) (Deerwester et al., 1990). In LSA singular value decomposition (SVD) is applied to a term-document matrix (each cell weighted by log frequency and normalized by entropy), and then the first 300 dimensions are used as the LSA embedding. Singular Value Decomposition (SVD) is a method for finding the most important dimensions of a data set, those dimensions along which the data varies the most. LSA was then quickly and widely applied: as a cognitive model (Landauer and Dumais, 1997), and for tasks like spell checking (Jones and Martin, 1997), language modeling (Bellegarda 1997, Coccaro and Jurafsky 1998, Bellegarda 2000), morphology induction (Schone and Jurafsky 2000, Schone and Jurafsky 2001b), multiword expressions (MWEs) (Schone and Jurafsky, 2001a), and essay grading (Rehder et al., 1998). Related models were simultaneously developed and applied to word sense disambiguation by Schütze (1992b). LSA also led to the earliest use of embeddings to represent words in a probabilistic classifier, in the logistic regression document router of Schütze et al. (1995). The idea of SVD on the term-term matrix (rather than the term-document matrix) as a model of meaning for NLP was proposed soon after LSA by Schütze (1992b). Schütze applied the low-rank (97-dimensional) embeddings produced by SVD to the task of word sense disambiguation, analyzed the resulting semantic space, and also suggested possible techniques like dropping high-order dimensions. See Schütze (1997a)."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"A number of alternative matrix models followed on from the early SVD work, including Probabilistic Latent Semantic Indexing (PLSI) (Hofmann, 1999), Latent Dirichlet Allocation (LDA) (Blei et al., 2003) , and Non-negative Matrix Factorization (NMF) (Lee and Seung, 1999)."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"The LSA community seems to have first used the word ""embedding"" in Landauer et al. (1997), in a variant of its mathematical meaning as a mapping from one space or mathematical structure to another. In LSA, the word embedding seems to have described the mapping from the space of sparse count vectors to the latent space of SVD dense vectors. Although the word thus originally meant the mapping from one space to another, it has metonymically shifted to mean the resulting dense vector in the latent space, and it is in this sense that we currently use the word."
6,Vector Semantics and Embeddings,6.14,Bibliographical and Historical Notes,,,"By the next decade, Bengio et al. (2003) and Bengio et al. (2006) showed that neural language models could also be used to develop embeddings as part of the task of word prediction; Collobert and Weston (2007) and subsequent work then applied such embeddings to a range of NLP tasks. See Manning et al. (2008) for a deeper understanding of the role of vectors in information retrieval, including how to compare queries with documents, more details on tf-idf, and issues of scaling to very large datasets. See Kim (2019) for a clear and comprehensive tutorial on word2vec. Cruse 2004 is a useful introductory linguistic text of lexical semantics."
7,Neural Networks and Neural Language Models,,,,,"Neural networks are a fundamental computational tool for language processing, and a very old one. They are called neural because their origins lie in the McCulloch-Pitts neuron (McCulloch and Pitts, 1943 ), a simplified model of the human neuron as a kind of computing element that could be described in terms of propositional logic. But the modern use in language processing no longer draws on these early biological inspirations."
7,Neural Networks and Neural Language Models,,,,,"Instead, a modern neural network is a network of small computing units, each of which takes a vector of input values and produces a single output value. In this chapter we introduce the neural net applied to classification. The architecture we introduce is called a feedforward network because the computation proceeds iteratively from one layer of units to the next. The use of modern neural nets is often called deep learning, because modern networks are often deep (have many layers)."
7,Neural Networks and Neural Language Models,,,,,"Neural networks share much of the same mathematics as logistic regression. But neural networks are a more powerful classifier than logistic regression, and indeed a minimal neural network (technically one with a single 'hidden layer') can be shown to learn any function."
7,Neural Networks and Neural Language Models,,,,,"Neural net classifiers are different from logistic regression in another way. With logistic regression, we applied the regression classifier to many different tasks by developing many rich kinds of feature templates based on domain knowledge. When working with neural networks, it is more common to avoid most uses of rich hand-derived features, instead building neural networks that take raw words as inputs and learn to induce features as part of the process of learning to classify. We saw examples of this kind of representation learning for embeddings in Chapter 6. Nets that are very deep are particularly good at representation learning. For that reason deep neural nets are the right tool for large scale problems that offer sufficient data to learn features automatically."
7,Neural Networks and Neural Language Models,,,,,"In this chapter we'll introduce feedforward networks as classifiers, and also apply them to the simple task of language modeling: assigning probabilities to word sequences and predicting upcoming words. In subsequent chapters we'll introduce many other aspects of neural models, such as recurrent neural networks and the Transformer (Chapter 9), contextual embeddings like BERT (Chapter 11), and encoder-decoder models and attention (Chapter 10)."
7,Neural Networks and Neural Language Models,7.1,Units,,,"The building block of a neural network is a single computational unit. A unit takes a set of real valued numbers as input, performs some computation on them, and produces an output."
7,Neural Networks and Neural Language Models,7.1,Units,,,"At its heart, a neural unit is taking a weighted sum of its inputs, with one additional term in the sum called a bias term. Given a set of inputs x_1...x_n, a unit has a set of corresponding weights w_1...w_n and a bias b, so the weighted sum z can be represented as:"
7,Neural Networks and Neural Language Models,7.1,Units,,,$$z = b + \sum_{i} w_i x_i \quad (7.1)$$
7,Neural Networks and Neural Language Models,7.1,Units,,,"Often it's more convenient to express this weighted sum using vector notation; recall from linear algebra that a vector is, at heart, just a list or array of numbers. Thus we'll talk about z in terms of a weight vector w, a scalar bias b, and an input vector x, and we'll replace the sum with the convenient dot product:"
7,Neural Networks and Neural Language Models,7.1,Units,,,z = w • x + b (7.2)
7,Neural Networks and Neural Language Models,7.1,Units,,,"As defined in Eq. 7.2, z is just a real valued number. Finally, instead of using z, a linear function of x, as the output, neural units apply a non-linear function f to z. We will refer to the output of this function as the activation value for the unit, a. Since we are just modeling a single unit, the activation for the node is in fact the final output of the network, which we'll generally call y. So the value y is defined as:"
7,Neural Networks and Neural Language Models,7.1,Units,,,y = a = f (z)
7,Neural Networks and Neural Language Models,7.1,Units,,,"We'll discuss three popular non-linear functions f() below (the sigmoid, the tanh, and the rectified linear unit or ReLU) but it's pedagogically convenient to start with the sigmoid function since we saw it in Chapter 5:"
7,Neural Networks and Neural Language Models,7.1,Units,,,$$y = \sigma(z) = \frac{1}{1 + e^{-z}} \quad (7.3)$$
7,Neural Networks and Neural Language Models,7.1,Units,,,"The sigmoid (shown in Fig. 7.1) has a number of advantages; it maps the output into the range [0, 1], which is useful in squashing outliers toward 0 or 1. And it's differentiable, which as we saw in Section 5.8 will be handy for learning. Substituting Eq. 7.2 into Eq. 7.3 gives us the output of a neural unit:"
7,Neural Networks and Neural Language Models,7.1,Units,,,$$y = \sigma(w \cdot x + b) = \frac{1}{1 + \exp(-(w \cdot x + b))} \quad (7.4)$$
7,Neural Networks and Neural Language Models,7.1,Units,,,"Fig. 7.2 shows a final schematic of a basic neural unit. In this example the unit takes 3 input values x_1, x_2, and x_3, and computes a weighted sum, multiplying each value by a weight (w_1, w_2, and w_3, respectively), adds them to a bias term b, and then passes the resulting sum through a sigmoid function to result in a number between 0 and 1."
7,Neural Networks and Neural Language Models,7.1,Units,,,"Figure 7.2 A neural unit, taking 3 inputs x_1, x_2, and x_3 (and a bias b that we represent as a weight for an input clamped at +1) and producing an output y. We include some convenient intermediate variables: the output of the summation, z, and the output of the sigmoid, a. In this case the output of the unit y is the same as a, but in deeper networks we'll reserve y to mean the final output of the entire network, leaving a as the activation of an individual node."
7,Neural Networks and Neural Language Models,7.1,Units,,,Let's walk through an example just to get an intuition. Let's suppose we have a unit with the following weight vector and bias:
7,Neural Networks and Neural Language Models,7.1,Units,,,"w = [0.2, 0.3, 0.9] b = 0.5"
7,Neural Networks and Neural Language Models,7.1,Units,,,What would this unit do with the following input vector:
7,Neural Networks and Neural Language Models,7.1,Units,,,"x = [0.5, 0.6, 0.1]"
7,Neural Networks and Neural Language Models,7.1,Units,,,The resulting output y would be:
7,Neural Networks and Neural Language Models,7.1,Units,,,$$y = \sigma(w \cdot x + b) = \frac{1}{1 + e^{-(w \cdot x + b)}} = \frac{1}{1 + e^{-(0.5 \cdot 0.2 + 0.6 \cdot 0.3 + 0.1 \cdot 0.9 + 0.5)}} = \frac{1}{1 + e^{-0.87}} = 0.70$$
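7,Neural Networks and Neural Language Models,7.1,Units,,,"A minimal numpy sketch (not from the text) of this single-unit computation, reproducing the worked example above:"
```python
import numpy as np

def unit_output(w, x, b):
    """Sigmoid unit: y = sigma(w . x + b)."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.2, 0.3, 0.9])
x = np.array([0.5, 0.6, 0.1])
b = 0.5
print(round(unit_output(w, x, b), 2))  # 0.7
```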
7,Neural Networks and Neural Language Models,7.1,Units,,,"In practice, the sigmoid is not commonly used as an activation function. A function that is very similar but almost always better is the tanh function shown in Fig. 7.3a; tanh is a variant of the sigmoid that ranges from -1 to +1:"
7,Neural Networks and Neural Language Models,7.1,Units,,,$$y = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} \quad (7.5)$$
7,Neural Networks and Neural Language Models,7.1,Units,,,"The simplest activation function, and perhaps the most commonly used, is the rectified linear unit, also called the ReLU, shown in Fig. 7.3b. It's just the same as z when z is positive, and 0 otherwise:"
7,Neural Networks and Neural Language Models,7.1,Units,,,"y = max(z, 0) (7.6)"
7,Neural Networks and Neural Language Models,7.1,Units,,,"These activation functions have different properties that make them useful for different language applications or network architectures. For example, the tanh function has the nice properties of being smoothly differentiable and mapping outlier values toward the mean. The rectifier function, on the other hand, has nice properties that result from it being very close to linear. In the sigmoid or tanh functions, very high values of z result in values of y that are saturated, i.e., extremely close to 1, and have derivatives very close to 0. Zero derivatives cause problems for learning, because as we'll see in Section 7.6, we'll train networks by propagating an error signal backwards, multiplying gradients (partial derivatives) from each layer of the network; gradients that are almost 0 cause the error signal to get smaller and smaller until it is too small to be used for training, a problem called the vanishing gradient problem. Rectifiers don't have this problem, since the derivative of ReLU for high values of z is 1 rather than very close to 0."
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,,,"Early in the history of neural networks it was realized that the power of neural networks, as with the real neurons that inspired them, comes from combining these units into larger networks. One of the most clever demonstrations of the need for multi-layer networks was the proof by Minsky and Papert (1969) that a single neural unit cannot compute some very simple functions of its input. Consider the task of computing elementary logical functions of two inputs, like AND, OR, and XOR. As a reminder, here are the truth tables for those functions:"
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,,,"AND: (x1=0, x2=0) → 0, (0, 1) → 0, (1, 0) → 0, (1, 1) → 1. OR: (0, 0) → 0, (0, 1) → 1, (1, 0) → 1, (1, 1) → 1. XOR: (0, 0) → 0, (0, 1) → 1, (1, 0) → 1, (1, 1) → 0."
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,,,"This example was first shown for the perceptron, which is a very simple neural unit that has a binary output and does not have a non-linear activation function. The output y of a perceptron is 0 or 1, and is computed as follows (using the same weight w, input x, and bias b as in Eq. 7.2):"
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,,,"$$y = \begin{cases} 0, & \text{if } w \cdot x + b \le 0 \\ 1, & \text{if } w \cdot x + b > 0 \end{cases} \quad (7.7)$$"
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,,,"It's very easy to build a perceptron that can compute the logical AND and OR functions of its binary inputs; Fig. 7.4 shows the necessary weights. It turns out, however, that it's not possible to build a perceptron to compute logical XOR! (It's worth spending a moment to give it a try!)"
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,,,"The intuition behind this important result relies on understanding that a perceptron is a linear classifier. For a two-dimensional input x_1 and x_2, the perceptron equation, w_1 x_1 + w_2 x_2 + b = 0, is the equation of a line. (We can see this by putting it in the standard linear format:"
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,,,x_2 = (−w_1/w_2) x_1 + (−b/w_2)
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,,,".) This line acts as a decision boundary in two-dimensional space in which the output 0 is assigned to all inputs lying on one side of the line, and the output 1 to all input points lying on the other side of the line. If we had more than 2 inputs, the decision boundary becomes a hyperplane instead of a line, but the idea is the same, separating the space into two categories. Fig. 7.5 shows the possible logical inputs (00, 01, 10, and 11) and the line drawn by one possible set of parameters for an AND and an OR classifier. Notice that there is simply no way to draw a line that separates the positive cases of XOR (01 and 10) from the negative cases (00 and 11). We say that XOR is not a linearly separable function. Of course we could draw a boundary with a curve, or some other function, but not a single line."
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,7.2.1,The solution: neural networks,"While the XOR function cannot be calculated by a single perceptron, it can be calculated by a layered network of units. Let's see an example of how to do this from Goodfellow et al. (2016) that computes XOR using two layers of ReLU-based units. Fig. 7.6 shows a figure with the input being processed by two layers of neural units. The middle layer (called h) has two units, and the output layer (called y) has one unit. A set of weights and biases are shown for each ReLU that correctly computes the XOR function."
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,7.2.1,The solution: neural networks,"Let's walk through what happens with the input x = [0, 0]. If we multiply each input value by the appropriate weight, sum, and then add the bias b, we get the vector [0, -1], and we then apply the rectified linear transformation to give the output of the h layer as [0, 0]. Now we once again multiply by the weights, sum, and add the bias (0 in this case) resulting in the value 0. The reader should work through the computation of the remaining 3 possible input pairs to see that the resulting y values are 1 for the inputs [0, 1] and [1, 0], and 0 for [0, 0] and [1, 1]. Figure 7.5 The functions AND, OR, and XOR, represented with input x_1 on the x-axis and input x_2 on the y axis. Filled circles represent perceptron outputs of 1, and white circles perceptron outputs of 0. There is no way to draw a line that correctly separates the two categories for XOR. Figure styled after Russell and Norvig (2002)."
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,7.2.1,The solution: neural networks,"Figure 7.6 XOR solution after Goodfellow et al. (2016). There are three ReLU units, in two layers; we've called them h_1, h_2 (h for ""hidden layer"") and y_1. As before, the numbers on the arrows represent the weights w for each unit, and we represent the bias b as a weight on a unit clamped to +1, with the bias weights/units in gray. (In this solution each hidden unit gets weight 1 from both x_1 and x_2; h_1 has bias 0 and h_2 has bias -1; the output unit y_1 gets weight 1 from h_1 and -2 from h_2, with bias 0.)"
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,7.2.1,The solution: neural networks,"It's also instructive to look at the intermediate results, the outputs of the two hidden nodes h 1 and h 2 . We showed in the previous paragraph that the h vector for the inputs x = [0, 0] was [0, 0]. Fig. 7.7b shows the values of the h layer for all 4 inputs. Notice that hidden representations of the two input points x = [0, 1] and x = [1, 0] (the two cases with XOR output = 1) are merged to the single point h = [1, 0]. The merger makes it easy to linearly separate the positive and negative cases of XOR. In other words, we can view the hidden layer of the network as forming a representation for the input."
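7,Neural Networks and Neural Language Models,7.2,The XOR Problem,7.2.1,The solution: neural networks,"A minimal numpy sketch (not from the text) of this two-layer ReLU network, using the weights from Fig. 7.6:"
```python
import numpy as np

W = np.array([[1.0, 1.0],      # weights into h1
              [1.0, 1.0]])     # weights into h2
b = np.array([0.0, -1.0])      # hidden-layer biases
u = np.array([1.0, -2.0])      # weights from [h1, h2] into y1 (output bias is 0)
relu = lambda z: np.maximum(z, 0)

def xor_net(x):
    h = relu(W @ x + b)        # hidden layer forms the representation
    return relu(u @ h)         # output unit

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, xor_net(np.array(x, dtype=float)))   # 0.0, 1.0, 1.0, 0.0
```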
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,7.2.1,The solution: neural networks,"In this example we just stipulated the weights in Fig. 7.6. But for real examples the weights for neural networks are learned automatically using the error backpropagation algorithm to be introduced in Section 7.6. That means the hidden layers will learn to form useful representations. This intuition, that neural networks can automatically learn useful representations of the input, is one of their key advantages, and one that we will return to again and again in later chapters."
7,Neural Networks and Neural Language Models,7.2,The XOR Problem,7.2.1,The solution: neural networks,"Note that the solution to the XOR problem requires a network of units with nonlinear activation functions. A network made up of simple linear (perceptron) units cannot solve the XOR problem. This is because a network formed by many layers of purely linear units can always be reduced to (i.e., shown to be computationally identical to) a single layer of linear units with appropriate weights, and we've already shown (visually, in Fig. 7.5) that a single unit cannot solve the XOR problem. We'll return to this question on page 137."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"Let's now walk through a slightly more formal presentation of the simplest kind of neural network, the feedforward network. A feedforward network is a multilayer network in which the units are connected with no cycles; the outputs from units in each layer are passed to units in the next higher layer, and no outputs are passed back to lower layers. (In Chapter 9 we'll introduce networks with cycles, called recurrent neural networks.) For historical reasons multilayer networks, especially feedforward networks, are sometimes called multi-layer perceptrons (or MLPs); this is a technical misnomer, since the units in modern multilayer networks aren't perceptrons (perceptrons are purely linear, but modern networks are made up of units with non-linearities like sigmoids), but at some point the name stuck. Simple feedforward networks have three kinds of nodes: input units, hidden units, and output units. Fig. 7.8 shows a picture."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"The input layer x is a vector of simple scalar values just as we saw in Fig. 7.2. The core of the neural network is the hidden layer h formed of hidden units h_i, each of which is a neural unit as described in Section 7.1, taking a weighted sum of its inputs and then applying a non-linearity. In the standard architecture, each layer is fully-connected, meaning that each unit in each layer takes as input the outputs from all the units in the previous layer, and there is a link between every pair of units from two adjacent layers. Thus each hidden unit sums over all the input units."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"Recall that a single hidden unit has as parameters a weight vector and a bias. We represent the parameters for the entire hidden layer by combining the weight vector and bias for each unit i into a single weight matrix W and a single bias vector b for the whole layer (see Fig. 7.8). Each element W_ji of the weight matrix W represents the weight of the connection from the ith input unit x_i to the jth hidden unit h_j. Figure 7.8 A simple 2-layer feedforward network, with one hidden layer, one output layer, and one input layer (the input layer is usually not counted when enumerating layers)."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"The advantage of using a single matrix W for the weights of the entire layer is that now the hidden layer computation for a feedforward network can be done very efficiently with simple matrix operations. In fact, the computation only has three steps: multiplying the weight matrix by the input vector x, adding the bias vector b, and applying the activation function g (such as the sigmoid, tanh, or ReLU activation function defined above)."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"The output of the hidden layer, the vector h, is thus the following (for this example we'll use the sigmoid function σ as our activation function):"
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,h = σ (Wx + b) (7.8)
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"Notice that we're applying the σ function here to a vector, while in Eq. 7.3 it was applied to a scalar. We're thus allowing σ (•), and indeed any activation function g(•), to apply to a vector element-wise, so"
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"g[z 1 , z 2 , z 3 ] = [g(z 1 ), g(z 2 ), g(z 3 )]."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"Let's introduce some constants to represent the dimensionalities of these vectors and matrices. We'll refer to the input layer as layer 0 of the network, and have n 0 represent the number of inputs, so x is a vector of real numbers of dimension n 0 , or more formally x ∈ R n 0 , a column vector of dimensionality [n 0 , 1]. Let's call the hidden layer layer 1 and the output layer layer 2. The hidden layer has dimensionality n 1 , so h ∈ R n 1 and also b ∈ R n 1 (since each hidden unit can take a different bias value). And the weight matrix W has dimensionality W ∈ R n 1 ×n 0 , i.e. [n 1 , n 0 ]."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"Take a moment to convince yourself that the matrix multiplication in Eq. 7.8 will compute the value of each h_j as $\sigma\left(\sum_{i=1}^{n_0} W_{ji} x_i + b_j\right)$."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"As we saw in Section 7.2, the resulting value h (for hidden but also for hypothesis) forms a representation of the input. The role of the output layer is to take this new representation h and compute a final output. This output could be a realvalued number, but in many cases the goal of the network is to make some sort of classification decision, and so we will focus on the case of classification."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"If we are doing a binary task like sentiment classification, we might have a single output node, and its scalar value y is the probability of positive versus negative sentiment. If we are doing multinomial classification, such as assigning a part-of-speech tag, we might have one output node for each potential part-of-speech, whose output value is the probability of that part-of-speech, and the values of all the output nodes must sum to one. The output layer is thus a vector y that gives a probability distribution across the output nodes."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"Let's see how this happens. Like the hidden layer, the output layer has a weight matrix (let's call it U), but some models don't include a bias vector b in the output layer, so we'll simplify by eliminating the bias vector in this example. The weight matrix is multiplied by its input vector (h) to produce the intermediate output z: z = Uh. There are n_2 output nodes, so z ∈ R^{n_2}, weight matrix U has dimensionality U ∈ R^{n_2 × n_1}, and element U_ij is the weight from unit j in the hidden layer to unit i in the output layer."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"However, z can't be the output of the classifier, since it's a vector of real-valued numbers, while what we need for classification is a vector of probabilities. There is a convenient function for normalizing a vector of real values, by which we mean converting it to a vector that encodes a probability distribution (all the numbers lie between 0 and 1 and sum to 1): the softmax function that we saw on page 91 of Chapter 5. For a vector z of dimensionality d, the softmax is defined as:"
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,$$\mathrm{softmax}(z_i) = \frac{\exp(z_i)}{\sum_{j=1}^{d} \exp(z_j)} \quad 1 \le i \le d \quad (7.9)$$
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,Thus for example given a vector
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,EQUATION
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"The softmax function will normalize it to a probability distribution (Eq. 7.11). You may recall that softmax was exactly what is used to create a probability distribution from a vector of real-valued numbers (computed from summing weights times features) in the multinomial version of logistic regression in Chapter 5. That means we can think of a neural network classifier with one hidden layer as building a vector h which is a hidden layer representation of the input, and then running standard logistic regression on the features that the network develops in h. By contrast, in Chapter 5 the features were mainly designed by hand via feature templates. So a neural network is like logistic regression, but (a) with many layers, since a deep neural network is like layer after layer of logistic regression classifiers, and (b) rather than forming the features by feature templates, the prior layers of the network induce the feature representations themselves."
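7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"A minimal numpy sketch of the softmax normalization in Eq. 7.9 (the example vector here is illustrative, not the one elided above):"
```python
import numpy as np

def softmax(z):
    """Normalize a vector of reals into a probability distribution (Eq. 7.9).

    Subtracting max(z) first is a standard numerical-stability trick;
    it does not change the result.
    """
    exp_z = np.exp(z - np.max(z))
    return exp_z / exp_z.sum()

z = np.array([0.6, 1.1, -1.5, 1.2, 3.2, -1.1])   # illustrative values
print(softmax(z), softmax(z).sum())              # probabilities summing to 1.0
```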
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"Here are the final equations for a feedforward network with a single hidden layer, which takes an input vector x, outputs a probability distribution y, and is parameterized by weight matrices W and U and a bias vector b:"
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,h = σ(Wx + b); z = Uh; y = softmax(z) (7.12)
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"We'll call this network a 2-layer network (we traditionally don't count the input layer when numbering layers, but do count the output layer). So by this terminology logistic regression is a 1-layer network."
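7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,,,"A minimal numpy sketch (not from the text) of the 2-layer computation in Eq. 7.12, with made-up dimensions and random untrained weights just to show the shapes:"
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    exp_z = np.exp(z - np.max(z))
    return exp_z / exp_z.sum()

n0, n1, n2 = 3, 4, 2                     # input, hidden, output sizes (illustrative)
rng = np.random.default_rng(0)
W, b = rng.normal(size=(n1, n0)), np.zeros(n1)
U = rng.normal(size=(n2, n1))

x = np.array([0.5, 0.6, 0.1])
h = sigmoid(W @ x + b)                   # hidden layer (Eq. 7.8)
z = U @ h                                # output scores
y = softmax(z)                           # probability distribution over classes
print(y)
```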
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"Let's now set up some notation to make it easier to talk about deeper networks of depth more than 2. We'll use superscripts in square brackets to mean layer numbers, starting at 0 for the input layer. So W [1] will mean the weight matrix for the (first) hidden layer, and b [1] will mean the bias vector for the (first) hidden layer. n j will mean the number of units at layer j. We'll use g(•) to stand for the activation function, which will tend to be ReLU or tanh for intermediate layers and softmax for output layers. We'll use a [i] to mean the output from layer i, and z [i] to mean the combination of weights and biases W [i] a [i−1] + b [i] . The 0th layer is for inputs, so the inputs x we'll refer to more generally as a [0] ."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,Thus we can re-represent our 2-layer net from Eq. 7.12 as follows:
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,z [1] = W [1] a [0] + b [1]
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,a [1] = g [1] (z [1] )
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,z [2] = W [2] a [1] + b [2]
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,a[2] = g[2](z[2]); y = a[2] (7.13)
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"Note that with this notation, the equations for the computation done at each layer are the same. The algorithm for computing the forward step in an n-layer feedforward network, given the input vector a [0] is thus simply:"
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,for i in 1..n: z[i] = W[i] a[i−1] + b[i]; a[i] = g[i](z[i]); and finally y = a[n].
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"The activation functions g(•) are generally different at the final layer. Thus g[2] might be softmax for multinomial classification or sigmoid for binary classification, while ReLU or tanh might be the activation function g(•) at the internal layers."
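7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"A minimal sketch of the n-layer forward pass just described, where params is an assumed list of (W, b, g) triples, one per layer (hypothetical names, not from the text):"
```python
import numpy as np

def forward(x, params):
    """Forward pass for an n-layer feedforward network.

    params: list of (W, b, g) for layers 1..n, where g is that layer's
    activation function (e.g. ReLU or tanh internally, softmax at the end).
    """
    a = x                          # a[0] = the input
    for W, b, g in params:
        z = W @ a + b              # z[i] = W[i] a[i-1] + b[i]
        a = g(z)                   # a[i] = g[i](z[i])
    return a                       # y = a[n]
```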
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"More on the need for non-linear activation functions We mentioned in Section 7.2 that one of the reasons we use non-linear activation functions for each layer in a neural network is that if we did not, the resulting network is exactly equivalent to a single-layer network. Now that we have the notation for multilayer networks, we can see that intuition in more detail. Imagine the first two layers of such a network of purely linear layers:"
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,z[1] = W[1] x + b[1]; z[2] = W[2] z[1] + b[2]
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"We can rewrite the function that the network is computing as: z[2] = W[2] z[1] + b[2] = W[2] (W[1] x + b[1]) + b[2] = W[2] W[1] x + W[2] b[1] + b[2] = W′x + b′ (7.14), where W′ = W[2] W[1] and b′ = W[2] b[1] + b[2]."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"This generalizes to any number of layers. So without non-linear activation functions, a multilayer network is just a notational variant of a single layer network with a different set of weights, and we lose all the representational power of multilayer networks as we discussed in Section 7.2."
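7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"A small numpy check of this collapse (illustrative random matrices, not from the text): two purely linear layers compute exactly the same function as one layer with W′ = W2 W1 and b′ = W2 b1 + b2."
```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

two_layers = W2 @ (W1 @ x + b1) + b2            # two purely linear layers
W_prime, b_prime = W2 @ W1, W2 @ b1 + b2        # collapsed single layer (Eq. 7.14)
one_layer = W_prime @ x + b_prime

print(np.allclose(two_layers, one_layer))       # True
```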
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"Replacing the bias unit In describing networks, we will often use a slightly simplified notation that represents exactly the same function without referring to an explicit bias node b. Instead, we add a dummy node a_0 to each layer whose value will always be 1. Thus layer 0, the input layer, will have a dummy node a_0^{[0]} = 1, layer 1 will have a_0^{[1]} = 1, and so on. This dummy node still has an associated weight, and that weight represents the bias value b. For example, instead of an equation like h = σ(Wx + b) (7.15), we'll use:"
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,h = σ(Wx)   (7.16)
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"But now instead of our vector x having n_0 values: x = x_1, ..., x_{n_0}, it will have n_0 + 1 values, with a new 0th dummy value x_0 = 1: x = x_0, ..., x_{n_0}. And instead of computing each h_j as follows:"
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,h_j = σ( Σ_{i=1}^{n_0} W_{ji} x_i + b_j )   (7.17)
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,we'll instead use:
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,h_j = σ( Σ_{i=0}^{n_0} W_{ji} x_i )   (7.18)
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"where the value W_{j0} replaces what had been b_j. Fig. 7.9 shows a visualization."
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,Figure 7.9 Replacing the bias node (shown in a) with x_0 (b).
7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"We'll continue showing the bias as b when we go over the learning algorithm in Section 7.6, but then we'll switch to this simplified notation without explicit bias terms for the rest of the book."
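7,Neural Networks and Neural Language Models,7.3,Feedforward Neural Networks,7.3.1,More details on feedforward networks,"As a small illustration (with made-up weights), folding the bias into the weight matrix by prepending a dummy input x_0 = 1 produces the same hidden layer as keeping an explicit b:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(2)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)

h_explicit_bias = sigmoid(W @ x + b)                   # Eq. 7.17 style
W_aug = np.hstack([b[:, None], W])                     # the W_j0 column holds the old b_j
x_aug = np.concatenate([[1.0], x])                     # dummy x_0 = 1
h_folded_bias = sigmoid(W_aug @ x_aug)                 # Eq. 7.18 style
print(np.allclose(h_explicit_bias, h_folded_bias))     # True"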
7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,Let's see how to apply feedforward networks to NLP tasks! In this section we'll look at classification tasks like sentiment analysis; in the next section we'll introduce neural language modeling.
7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,"Let's begin with a simple two-layer sentiment classifier. You might imagine taking our logistic regression classifier of Chapter 5, which corresponds to a 1-layer network, and just adding a hidden layer. The input element x_i could be scalar features like those in Fig. 5.2, e.g., x_1 = count(words ∈ doc), x_2 = count(positive lexicon words ∈ doc), x_3 = 1 if ""no"" ∈ doc, and so on. And the output layer y could have two nodes (one each for positive and negative), or 3 nodes (positive, negative, neutral), in which case y_1 would be the estimated probability of positive sentiment, y_2 the probability of negative and y_3 the probability of neutral. The resulting equations would be just what we saw above for a two-layer network (as sketched in Fig. 7.10):"
7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,x = vector of hand-designed features
7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,"h = σ(Wx + b)
z = Uh
y = softmax(z)   (7.19)"
7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,"As we mentioned earlier, adding this hidden layer to our logistic regression classifier allows the network to represent the non-linear interactions between features. This alone might give us a better sentiment classifier."
7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,"Most neural NLP applications do something different, however. Instead of using hand-built human-engineered features as the input to our classifier, we draw on deep learning's ability to learn features from the data by representing words as word2vec or GloVe embeddings (Chapter 6). For a text with n input words/tokens w_1, ..., w_n, the input vector will be the concatenated embeddings of the n words: [e_{w_1}; ...; e_{w_n}]. If we use the semicolon ';' to mean concatenation of vectors, the equation for our sentiment classifier will be (as sketched in Fig. 7.11):"
7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,"x = [e_{w_1}; e_{w_2}; ...; e_{w_n}]
h = σ(Wx + b)
z = Uh
y = softmax(z)   (7.20)"
7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,"The idea of using word2vec or GloVe embeddings as our input representation, and more generally the idea of relying on another algorithm to have already learned an embedding representation for our input words, is called pretraining. Using pretrained embedding representations, whether simple static word embeddings like word2vec or the more powerful contextual embeddings we'll introduce in Chapter 11, is one of the central ideas of deep learning. (It's also possible, however, to train the word embeddings as part of an NLP task; we'll talk about how to do this in Section 7.7 in the context of the neural language modeling task.)"
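7,Neural Networks and Neural Language Models,7.4,Feedforward networks for NLP: Classification,,,"Here is a minimal NumPy sketch of such a classifier, assuming a pretrained embedding matrix, a fixed window of 3 input tokens, and three sentiment classes; all sizes and values are made up for illustration.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: vocabulary of 5 words, d=4 embedding dims, 3 input tokens, 3 classes
rng = np.random.default_rng(3)
E = rng.normal(size=(4, 5))                 # pretrained embeddings, one column per word
W = rng.normal(size=(6, 12))                # hidden layer; 12 = 3 * d concatenated inputs
b = np.zeros(6)
U = rng.normal(size=(3, 6))                 # output layer: 3 sentiment classes

token_ids = [2, 0, 4]                       # indices of the input words
x = np.concatenate([E[:, i] for i in token_ids])   # x = [e_w1; e_w2; e_w3]
h = sigmoid(W @ x + b)
y = softmax(U @ h)                          # P(positive), P(negative), P(neutral)
print(y)"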
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,,,"As our second application of feedforward networks, let's consider language modeling: predicting upcoming words from prior word context. Neural language modeling is an important NLP task in itself, and it plays a role in many important algorithms for tasks like machine translation, summarization, speech recognition, grammar correction, and dialogue. We'll describe simple feedforward neural language models, first introduced by Bengio et al. (2003). While modern neural language models use more powerful architectures like the recurrent nets or transformer networks to be introduced in Chapter 9, the feedforward language model introduces many of the important concepts of neural language modeling. Neural language models have many advantages over the n-gram language models of Chapter 3. Compared to n-gram models, neural language models can handle much longer histories, can generalize better over contexts of similar words, and are more accurate at word-prediction. On the other hand, neural net language models are much more complex, slower to train, and less interpretable than n-gram models, so for many (especially smaller) tasks an n-gram language model is still the right tool."
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,,,"A feedforward neural LM is a feedforward network that takes as input at time t a representation of some number of previous words (w_{t−1}, w_{t−2}, etc.) and outputs a probability distribution over possible next words. Thus, like the n-gram LM, the feedforward neural LM approximates the probability of a word given the entire prior context P(w_t|w_{1:t−1}) by approximating based on the N previous words:"
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,,,"P(w_t | w_1, ..., w_{t−1}) ≈ P(w_t | w_{t−N+1}, ..., w_{t−1})   (7.21)"
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,,,"In the following examples we'll use a 4-gram example, so we'll show a net to estimate the probability P(w_t = i | w_{t−3}, w_{t−2}, w_{t−1})."
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,,,"Neural language models represent words in this prior context by their embeddings, rather than just by their word identity as used in n-gram language models. Using embeddings allows neural language models to generalize better to unseen data. For example, suppose we've seen this sentence in training:"
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,,,"I have to make sure that the cat gets fed.
but have never seen the words ""gets fed"" after the word ""dog"". Our test set has the prefix ""I forgot to make sure that the dog gets"". What's the next word? An n-gram language model will predict ""fed"" after ""that the cat gets"", but not after ""that the dog gets"". But a neural LM, knowing that ""cat"" and ""dog"" have similar embeddings, will be able to generalize from the ""cat"" context to assign a high enough probability to ""fed"" even after seeing ""dog""."
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,Let's walk through forward inference or decoding for neural language models.
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"Forward inference is the task, given an input, of running a forward pass on the network to produce a probability distribution over possible outputs, in this case next words."
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"We first represent each of the N previous words as a one-hot vector of length |V|, i.e., with one dimension for each word in the vocabulary. A one-hot vector is a vector that has one element equal to 1, in the dimension corresponding to that word's index in the vocabulary, while all the other elements are set to zero. Thus in a one-hot representation for the word ""toothpaste"", supposing it is V_5, i.e., index 5 in the vocabulary, x_5 = 1, and x_i = 0 ∀ i ≠ 5, as shown here:"
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"[0 0 0 0 1 0 0 ... 0 0 0 0]
 1 2 3 4 5 6 7 ...        |V|
The feedforward neural language model (sketched in Fig. 7.13) has a moving window that can see N words into the past. We'll let N = 3, so the 3 words w_{t−1}, w_{t−2}, and w_{t−3} are each represented as a one-hot vector. We then multiply these one-hot vectors by the embedding matrix E. The embedding weight matrix E has a column for each word, each a column vector of d dimensions, and hence has dimensionality d × |V|. Multiplying by a one-hot vector that has only one non-zero element x_i = 1 simply selects out the relevant column vector for word i, resulting in the embedding for word i, as shown in Fig. 7.12."
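7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"A quick check of this selection property with a made-up embedding matrix: multiplying E by a one-hot vector simply pulls out one column.

import numpy as np

d, V = 4, 7                                  # toy dimensions: d x |V| embedding matrix
rng = np.random.default_rng(4)
E = rng.normal(size=(d, V))

x = np.zeros(V)
x[4] = 1.0                                   # one-hot vector for the word with index 4
print(np.allclose(E @ x, E[:, 4]))           # True: E times x_i is just column i of E"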
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"The 3 resulting embedding vectors are concatenated to produce e, the embedding layer. This is followed by a hidden layer and an output layer whose softmax produces a probability distribution over words. For example y_42, the value of output node 42, is the probability of the next word w_t being V_42, the vocabulary word with index 42 (which is the word 'fish' in our example)."
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"Here's the algorithm in detail for our mini example: 1. Select three embeddings from E: Given the three previous words, we look up their indices, create 3 one-hot vectors, and then multiply each by the embedding matrix E. Consider w_{t−3}. The one-hot vector for 'for' (index 35) is multiplied by the embedding matrix E, to give the first part of the first hidden layer, the embedding layer. Since each column of the matrix E is an embedding for a word, and the input is a one-hot column vector x_i for word V_i, the embedding layer for input w will be E x_i = e_i, the embedding for word i. We now concatenate the three embeddings for the three context words to produce the embedding layer e."
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"Figure 7.13 Forward inference in a feedforward neural language model. At each timestep t the network computes a d-dimensional embedding for each context word (by multiplying a one-hot vector by the embedding matrix E), and concatenates the 3 resulting embeddings to get the embedding layer e. The embedding vector e is multiplied by a weight matrix W and then an activation function is applied element-wise to produce the hidden layer h, which is then multiplied by another weight matrix U. Finally, a softmax output layer predicts at each node i the probability that the next word w_t will be vocabulary word V_i."
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"2. Multiply by W: We multiply by W (and add b) and pass through the ReLU (or other) activation function to get the hidden layer h.
3. Multiply by U: h is now multiplied by U.
4. Apply softmax: After the softmax, each node i in the output layer estimates the probability P(w_t = i | w_{t−1}, w_{t−2}, w_{t−3})."
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"In summary, the equations for a neural language model with a window size of 3, given one-hot input vectors for each input context word, are:
e = [E x_{t−3}; E x_{t−2}; E x_{t−1}]
h = σ(We + b)
z = Uh
y = softmax(z)   (7.22)
Note that we formed the embedding layer e by concatenating the 3 embeddings for the three context vectors; we'll often use semicolons to mean concatenation of vectors."
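7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"A minimal NumPy sketch of this forward inference, with made-up vocabulary size, embedding dimension, and hidden size; it follows the equations above but is not the book's reference implementation.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: |V| = 10 words, d = 4 embedding dims, window N = 3, hidden size 8
rng = np.random.default_rng(5)
V, d, N, dh = 10, 4, 3, 8
E = rng.normal(size=(d, V))          # embedding matrix
W = rng.normal(size=(dh, N * d))     # embedding layer -> hidden layer
b = np.zeros(dh)
U = rng.normal(size=(V, dh))         # hidden layer -> output logits over vocabulary

context = [7, 3, 5]                  # indices of w_{t-3}, w_{t-2}, w_{t-1}
e = np.concatenate([E[:, i] for i in context])   # e = [E x_{t-3}; E x_{t-2}; E x_{t-1}]
h = sigmoid(W @ e + b)
y = softmax(U @ h)                   # y[i] = estimated P(w_t = V_i | context)
print(y.argmax(), y.sum())           # most probable next-word index; probabilities sum to 1"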
7,Neural Networks and Neural Language Models,7.5,Feedforward Neural Language Modeling,7.5.1,Forward inference in the neural language model,"In the next section we'll introduce a general algorithm for training neural networks, and then return to how to specifically train the neural language model in Section 7.7."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,,,"A feedforward neural net is an instance of supervised machine learning in which we know the correct output y for each observation x. What the system produces, via Eq. 7.13, is ŷ, the system's estimate of the true y. The goal of the training procedure is to learn parameters W^{[i]} and b^{[i]} for each layer i that make ŷ for each training observation as close as possible to the true y."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,,,"In general, we do all this by drawing on the methods we introduced in Chapter 5 for logistic regression, so the reader should be comfortable with that chapter before proceeding."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,,,"First, we'll need a loss function that models the distance between the system output and the gold output, and it's common to use the loss function used for logistic regression, the cross-entropy loss."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,,,"Second, to find the parameters that minimize this loss function, we'll use the gradient descent optimization algorithm introduced in Chapter 5."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,,,"Third, gradient descent requires knowing the gradient of the loss function, the vector that contains the partial derivative of the loss function with respect to each of the parameters. In logistic regression, for each observation we could directly compute the derivative of the loss function with respect to an individual w or b. But for neural networks, with millions of parameters in many layers, it's much harder to see how to compute the partial derivative of some weight in layer 1 when the loss is attached to some much later layer. How do we partial out the loss over all those intermediate layers? The answer is the algorithm called error backpropagation or backward differentiation."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"The cross-entropy loss that is used in neural networks is the same one we saw for logistic regression. In fact, if the neural network is being used as a binary classifier, with the sigmoid at the final layer, the loss function is exactly the same as we saw with logistic regression in Eq. 5.11:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"L_CE(ŷ, y) = −log p(y|x) = −[ y log ŷ + (1 − y) log(1 − ŷ) ]   (7.23)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,What about if the neural network is being used as a multinomial classifier? Let y be a vector over the C classes representing the true output probability distribution. The cross-entropy loss here is
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"L_CE(ŷ, y) = −Σ_{i=1}^{C} y_i log ŷ_i   (7.24)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"We can simplify this equation further. Assume this is a hard classification task, meaning that only one class is the correct one, and that there is one output unit in y for each class. If the true class is i, then y is a vector where y_i = 1 and y_j = 0 ∀ j ≠ i. A vector like this, with one value equal to 1 and the rest 0, is called a one-hot vector. The terms in the sum in Eq. 7.24 will be 0 except for the term corresponding to the true class, i.e.:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"L_CE(ŷ, y) = −Σ_{k=1}^{K} 1{y = k} log ŷ_k
           = −Σ_{k=1}^{K} 1{y = k} log p(y = k|x)
           = −Σ_{k=1}^{K} 1{y = k} log [ exp(z_k) / Σ_{j=1}^{K} exp(z_j) ]   (7.25)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"Hence the cross-entropy loss is simply the negative log of the output probability corresponding to the correct class, and we therefore also call this the negative log likelihood loss:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"L_CE(ŷ, y) = −log ŷ_i   (where i is the correct class)   (7.26)
Plugging in the softmax formula from Eq. 7.9, and with K the number of classes:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"L_CE(ŷ, y) = −log [ exp(z_i) / Σ_{j=1}^{K} exp(z_j) ]   (where i is the correct class)   (7.27)"
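7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.1,Loss function,"A small sketch of computing this loss directly from the output logits z (using a numerically stable log-softmax); the logits are made up for illustration.

import numpy as np

def cross_entropy_from_logits(z, correct_class):
    # log-softmax computed stably, then negative log likelihood of the true class
    z = z - z.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[correct_class]

z = np.array([2.0, 1.0, -1.0])        # made-up output logits for K = 3 classes
print(cross_entropy_from_logits(z, correct_class=0))   # small loss: class 0 has the highest logit
print(cross_entropy_from_logits(z, correct_class=2))   # larger loss: class 2 has the lowest logit"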
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.2,Computing the Gradient,"How do we compute the gradient of this loss function? Computing the gradient requires the partial derivative of the loss function with respect to each parameter. For a network with one weight layer and sigmoid output (which is what logistic regression is), we could simply use the derivative of the loss that we used for logistic regression in Eq. 7.28 (and derived in Section 5.8):"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.2,Computing the Gradient,"∂L_CE(w, b)/∂w_j = (ŷ − y) x_j = (σ(w·x + b) − y) x_j   (7.28)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.2,Computing the Gradient,"Or for a network with one weight layer and softmax output, we could use the derivative of the softmax loss from Eq. 5.37:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.2,Computing the Gradient,∂L_CE/∂w_k = −(1{y = k} − p(y = k|x)) x_k = −( 1{y = k} − exp(w_k·x + b_k) / Σ_{j=1}^{K} exp(w_j·x + b_j) ) x_k   (7.29)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.2,Computing the Gradient,"But these derivatives only give correct updates for one weight layer: the last one! For deep networks, computing the gradients for each weight is much more complex, since we are computing the derivative with respect to weight parameters that appear all the way back in the very early layers of the network, even though the loss is computed only at the very end of the network. The solution to computing this gradient is an algorithm called error backpropagation or backprop (Rumelhart et al., 1986). While backprop was invented specially for neural networks, it turns out to be the same as a more general procedure called backward differentiation, which depends on the notion of computation graphs. Let's see how that works in the next subsection."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.3,Computation Graphs,"A computation graph is a representation of the process of computing a mathematical expression, in which the computation is broken down into separate operations, each of which is modeled as a node in a graph."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.3,Computation Graphs,"Consider computing the function L(a, b, c) = c(a + 2b). If we make each of the component addition and multiplication operations explicit, and add names (d and e) for the intermediate outputs, the resulting series of computations is:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.3,Computation Graphs,"d = 2 * b
e = a + d
L = c * e"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.3,Computation Graphs,"We can now represent this as a graph, with nodes for each operation, and directed edges showing the outputs from each operation as the inputs to the next, as in Fig. 7.14. The simplest use of computation graphs is to compute the value of the function with some given inputs. In the figure, we've assumed the inputs a = 3, b = 1, c = −2, and we've shown the result of the forward pass to compute the result L(3, 1, −2) = −10. In the forward pass of a computation graph, we apply each operation left to right, passing the outputs of each computation as the input to the next node."
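7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.3,Computation Graphs,"The forward pass for this small graph is just a few lines of code; the input values match the ones assumed in the figure.

# Forward pass for L(a, b, c) = c * (a + 2*b), with the intermediate nodes made explicit.
a, b, c = 3.0, 1.0, -2.0
d = 2 * b          # d = 2b = 2
e = a + d          # e = a + d = 5
L = c * e          # L = ce = -10
print(d, e, L)     # 2.0 5.0 -10.0"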
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"The importance of the computation graph comes from the backward pass, which is used to compute the derivatives that we'll need for the weight update. In this example our goal is to compute the derivative of the output function L with respect to each of the input variables, i.e., ∂L/∂a, ∂L/∂b, and ∂L/∂c. The derivative ∂L/∂a tells us how much a small change in a affects L."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Backwards differentiation makes use of the chain rule in calculus, so let's remind ourselves of that. Suppose we are computing the derivative of a composite function f(x) = u(v(x)). The derivative of f(x) is the derivative of u(x) with respect to v(x) times the derivative of v(x) with respect to x:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,df/dx = du/dv · dv/dx   (7.30)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"The chain rule extends to more than two functions. If computing the derivative of a composite function f (x) = u(v(w(x))), the derivative of f (x) is:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,df/dx = du/dv · dv/dw · dw/dx   (7.31)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"The intuition of backward differentiation is to pass gradients back from the final node to all the nodes in the graph. Fig. 7.15 shows part of the backward computation at one node e. Each node takes an upstream gradient that is passed in from its parent node to the right, and for each of its inputs computes a local gradient (the gradient of its output with respect to its input), and uses the chain rule to multiply these two to compute a downstream gradient to be passed on to the next earlier node."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Figure 7.15 Each node (like e here) takes an upstream gradient, multiplies it by the local gradient (the gradient of its output with respect to its input), and uses the chain rule to compute a downstream gradient to be passed on to a prior node. A node may have multiple local gradients if it has multiple inputs."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Let's now compute the 3 derivatives we need. Since in the computation graph L = ce, we can directly compute the derivative ∂L/∂c:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,∂L/∂c = e   (7.32)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"For the other two, we'll need to use the chain rule:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"∂L/∂a = ∂L/∂e · ∂e/∂a
∂L/∂b = ∂L/∂e · ∂e/∂d · ∂d/∂b   (7.33)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Eq. 7.33 and Eq. 7.32 thus require five intermediate derivatives: ∂L/∂e, ∂L/∂c, ∂e/∂a, ∂e/∂d, and ∂d/∂b, which are as follows (using the fact that the derivative of a sum is the sum of the derivatives):"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"L = ce :   ∂L/∂e = c,   ∂L/∂c = e
e = a + d :   ∂e/∂a = 1,   ∂e/∂d = 1
d = 2b :   ∂d/∂b = 2"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"In the backward pass, we compute each of these partials along each edge of the graph from right to left, using the chain rule just as we did above. Thus we begin by computing the downstream gradients from node L, which are ∂L/∂e and ∂L/∂c. For node e, we then multiply this upstream gradient ∂L/∂e by the local gradient (the gradient of the output with respect to the input), ∂e/∂d, to get the output we send back to node d: ∂L/∂d. And so on, until we have annotated the graph all the way to all the input variables. The forward pass conveniently already will have computed the values of the forward intermediate variables we need (like d and e) to compute these derivatives. Fig. 7.16 shows the backward pass."
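7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Continuing the same toy example in code, the backward pass multiplies local gradients along each edge, matching the hand-computed derivatives (∂L/∂a = c, ∂L/∂b = 2c, ∂L/∂c = e):

# Backward pass for L = c * (a + 2*b): chain-rule products along the graph edges.
a, b, c = 3.0, 1.0, -2.0
d = 2 * b
e = a + d
L = c * e

dL_de, dL_dc = c, e            # local gradients at the output node L = c * e
dL_da = dL_de * 1.0            # e = a + d  ->  de/da = 1
dL_dd = dL_de * 1.0            # e = a + d  ->  de/dd = 1
dL_db = dL_dd * 2.0            # d = 2b     ->  dd/db = 2
print(dL_da, dL_db, dL_dc)     # -2.0 -4.0 5.0"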
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Of course computation graphs for real neural networks are much more complex. Fig. 7.17 shows a sample computation graph for a 2-layer neural network with n_0 = 2, n_1 = 2, and n_2 = 1, assuming binary classification and hence using a sigmoid output unit for simplicity. The function that the computation graph is computing is:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,z [1] = W [1] x + b [1]
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,a [1] = ReLU(z [1] )
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,z [2] = W [2] a [1] + b [2]
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,a [2] = σ (z [2] ) y = a [2] (7.34)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,For the backward pass we'll also need to compute the loss L. The loss function for binary sigmoid output from Eq. 7.23 is
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"L_CE(ŷ, y) = −[ y log ŷ + (1 − y) log(1 − ŷ) ]   (7.35)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Our output ŷ = a^{[2]}, so we can rephrase this as:
L_CE(a^{[2]}, y) = −[ y log a^{[2]} + (1 − y) log(1 − a^{[2]}) ]   (7.36)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,Figure 7.17 Sample computation graph for a simple 2-layer neural net (= 1 hidden layer) with two input dimensions and 2 hidden dimensions.
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"The weights that need updating (those for which we need to know the partial derivative of the loss function) are shown in teal. In order to do the backward pass, we'll need to know the derivatives of all the functions in the graph. We already saw in Section 5.8 the derivative of the sigmoid σ :"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,dσ(z)/dz = σ(z)(1 − σ(z))   (7.37)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,We'll also need the derivatives of each of the other activation functions. The derivative of tanh is:
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,d tanh(z)/dz = 1 − tanh²(z)   (7.38)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,The derivative of the ReLU is
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"dReLU(z)/dz = 0 for z < 0,  1 for z ≥ 0   (7.39)"
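7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"These derivatives are straightforward to code; a small NumPy sketch (the vectorized formulations are our own):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1 - s)              # Eq. 7.37

def dtanh(z):
    return 1 - np.tanh(z) ** 2      # Eq. 7.38

def drelu(z):
    return (z >= 0).astype(float)   # Eq. 7.39: 0 for z < 0, 1 for z >= 0

z = np.array([-2.0, 0.0, 3.0])
print(dsigmoid(z), dtanh(z), drelu(z))"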
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"We'll give the start of the computation, computing the derivative of the loss function L with respect to z, or ∂L/∂z (and leaving the rest of the computation as an exercise for the reader). By the chain rule:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,∂L/∂z = ∂L/∂a^{[2]} · ∂a^{[2]}/∂z   (7.40)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"So let's first compute ∂L/∂a^{[2]}, taking the derivative of Eq. 7.36, repeated here:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"L_CE(a^{[2]}, y) = −[ y log a^{[2]} + (1 − y) log(1 − a^{[2]}) ]   (7.36)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"∂L/∂a^{[2]} = −[ y · ∂log(a^{[2]})/∂a^{[2]} + (1 − y) · ∂log(1 − a^{[2]})/∂a^{[2]} ]
            = −[ y · (1/a^{[2]}) + (1 − y) · (1/(1 − a^{[2]})) · (−1) ]
            = −[ y/a^{[2]} + (y − 1)/(1 − a^{[2]}) ]   (7.41)"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Next, by the derivative of the sigmoid:
∂a^{[2]}/∂z = a^{[2]}(1 − a^{[2]})
Finally, we can use the chain rule:"
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,∂L/∂z = ∂L/∂a^{[2]} · ∂a^{[2]}/∂z = −[ y/a^{[2]} + (y − 1)/(1 − a^{[2]}) ] · a^{[2]}(1 − a^{[2]}) = a^{[2]} − y   (7.42)
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.4,Backward differentiation on computation graphs,"Continuing the backward computation of the gradients (next by passing the gradients over b^{[2]}_1 and the two product nodes, and so on, back to all the orange nodes) is left as an exercise for the reader."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.5,More details on learning,"Optimization in neural networks is a non-convex optimization problem, more complex than for logistic regression, and for that and other reasons there are many best practices for successful learning."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.5,More details on learning,"For logistic regression we can initialize gradient descent with all the weights and biases having the value 0. In neural networks, by contrast, we need to initialize the weights with small random numbers. It's also helpful to normalize the input values to have 0 mean and unit variance."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.5,More details on learning,"Various forms of regularization are used to prevent overfitting. One of the most important is dropout: randomly dropping some units and their connections from the network during training (Hinton et al. 2012, Srivastava et al. 2014). Tuning of hyperparameters is also important. The parameters of a neural network are the weights W and biases b; those are learned by gradient descent. The hyperparameters are things that are chosen by the algorithm designer; optimal values are tuned on a devset rather than by gradient descent learning on the training set. Hyperparameters include the learning rate η, the mini-batch size, the model architecture (the number of layers, the number of hidden nodes per layer, the choice of activation functions), how to regularize, and so on. Gradient descent itself also has many architectural variants such as Adam (Kingma and Ba, 2015)."
7,Neural Networks and Neural Language Models,7.6,Training Neural Nets,7.6.5,More details on learning,"Finally, most modern neural networks are built using computation graph formalisms that make it easy and natural to do gradient computation and parallelization onto vector-based GPUs (Graphic Processing Units). PyTorch (Paszke et al., 2017) and TensorFlow (Abadi et al., 2015) are two of the most popular. The interested reader should consult a neural network textbook for further details; some suggestions are at the end of the chapter."
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"Now that we've seen how to train a generic neural net, let's talk about the architecture for training a neural language model, setting the parameters θ = E, W, U, b."
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"For some tasks, it's ok to freeze the embedding layer E with initial word2vec values. Freezing means we use word2vec or some other pretraining algorithm to compute the initial embedding matrix E, and then hold it constant while we only modify W, U, and b, i.e., we don't update E during language model training. However, often we'd like to learn the embeddings simultaneously with training the network. This is useful when the task the network is designed for (sentiment classification, or translation, or parsing) places strong constraints on what makes a good representation for words. Let's see how to train the entire model including E, i.e. to set all the parameters θ = E, W, U, b. We'll do this via gradient descent (Fig. 5.5), using error backpropagation on the computation graph to compute the gradient. Training thus not only sets the weights W and U of the network, but also, since we're predicting upcoming words, it learns the embeddings E for each word that best predict upcoming words."
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"Figure 7.18 Learning all the way back to embeddings. Again, the embedding matrix E is shared among the 3 context words."
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"Fig. 7.18 shows the set up for a window size of N=3 context words. The input x consists of 3 one-hot vectors, fully connected to the embedding layer via 3 instantiations of the embedding matrix E. We don't want to learn separate weight matrices for mapping each of the 3 previous words to the projection layer. We want one single embedding dictionary E that's shared among these three. That's because over time, many different words will appear as w_{t−2} or w_{t−1}, and we'd like to just represent each word with one vector, whichever context position it appears in. Recall that the embedding weight matrix E has a column for each word, each a column vector of d dimensions, and hence has dimensionality d × |V|."
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"Generally training proceeds by taking as input a very long text, concatenating all the sentences, starting with random weights, and then iteratively moving through the text predicting each word w_t. At each word w_t, we use the cross-entropy (negative log likelihood) loss. Recall that the general form for this (repeated from Eq. 7.26) is:"
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"L_CE(ŷ, y) = −log ŷ_i   (where i is the correct class)   (7.43)"
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"For language modeling, the classes are the words in the vocabulary, so ŷ_i here means the probability that the model assigns to the correct next word w_t:"
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"L_CE = −log p(w_t | w_{t−1}, ..., w_{t−n+1})   (7.44)"
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,The parameter update for stochastic gradient descent for this loss from step s to s + 1 is then:
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"θ^{s+1} = θ^s − η · ∂[ −log p(w_t | w_{t−1}, ..., w_{t−n+1}) ] / ∂θ   (7.45)"
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"This gradient can be computed in any standard neural network framework which will then backpropagate through θ = E, W, U, b."
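7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,"Putting Eq. 7.44 and Eq. 7.45 together, here is a self-contained NumPy sketch of a few SGD steps for a toy feedforward LM with the gradients written out by hand; the sizes, the sigmoid hidden layer, the learning rate, and the single training example are all illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    ez = np.exp(z - z.max())
    return ez / ez.sum()

# Toy sizes (made up): |V| = 10, d = 4, N = 3 context words, hidden size 8, learning rate eta
rng = np.random.default_rng(6)
V, d, N, dh, eta = 10, 4, 3, 8, 0.1
E = rng.normal(scale=0.1, size=(d, V))
W = rng.normal(scale=0.1, size=(dh, N * d)); b = np.zeros(dh)
U = rng.normal(scale=0.1, size=(V, dh))

context, target = [7, 3, 5], 2     # predict word 2 from the 3 previous words

for step in range(50):
    # forward pass
    e = np.concatenate([E[:, i] for i in context])
    h = sigmoid(W @ e + b)
    y = softmax(U @ h)
    loss = -np.log(y[target])                      # Eq. 7.44
    # backward pass (chain rule, as in Section 7.6.4)
    dz_out = y.copy(); dz_out[target] -= 1         # dL/dz_out = y_hat - onehot(target)
    dU = np.outer(dz_out, h)
    dh = U.T @ dz_out
    dz_h = dh * h * (1 - h)                        # sigmoid derivative
    dW = np.outer(dz_h, e); db = dz_h
    de = W.T @ dz_h
    dE = np.zeros_like(E)
    for k, i in enumerate(context):                # route the gradient back into E's columns
        dE[:, i] += de[k * d:(k + 1) * d]
    # SGD update, Eq. 7.45
    E -= eta * dE; W -= eta * dW; b -= eta * db; U -= eta * dU

print(loss)   # much smaller than the initial loss of roughly -log(1/|V|)"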
7,Neural Networks and Neural Language Models,7.7,Training the neural language model,,,Training the parameters to minimize loss will result not only in an algorithm for language modeling (a word predictor) but also in a new set of embeddings E that can be used as word representations for other tasks.
7,Neural Networks and Neural Language Models,7.8,Summary,,,"• Neural networks are built out of neural units, originally inspired by human neurons but now simply an abstract computational device.
• Each neural unit multiplies input values by a weight vector, adds a bias, and then applies a non-linear activation function like sigmoid, tanh, or rectified linear unit.
• In a fully-connected, feedforward network, each unit in layer i is connected to each unit in layer i + 1, and there are no cycles.
• The power of neural networks comes from the ability of early layers to learn representations that can be utilized by later layers in the network.
• Neural networks are trained by optimization algorithms like gradient descent.
• Error backpropagation, backward differentiation on a computation graph, is used to compute the gradients of the loss function for a network.
• Neural language models use a neural network as a probabilistic classifier, to compute the probability of the next word given the previous n words.
• Neural language models can use pretrained embeddings, or can learn embeddings from scratch in the process of language modeling."
7,Neural Networks and Neural Language Models,7.9,Bibliographical and Historical Notes,,,"The origins of neural networks lie in the 1940s McCulloch-Pitts neuron (McCulloch and Pitts, 1943), a simplified model of the human neuron as a kind of computing element that could be described in terms of propositional logic. By the late 1950s and early 1960s, a number of labs (including Frank Rosenblatt at Cornell and Bernard Widrow at Stanford) developed research into neural networks; this phase saw the development of the perceptron (Rosenblatt, 1958), and the transformation of the threshold into a bias, a notation we still use (Widrow and Hoff, 1960). The field of neural networks declined after it was shown that a single perceptron unit was unable to model functions as simple as XOR (Minsky and Papert, 1969). While some small amount of work continued during the next two decades, a major revival for the field didn't come until the 1980s, when practical tools for building deeper networks like error backpropagation became widespread (Rumelhart et al., 1986). During the 1980s a wide variety of neural network and related architectures were developed, particularly for applications in psychology and cognitive science (Rumelhart and McClelland 1986b, McClelland and Elman 1986, Rumelhart and McClelland 1986a, Elman 1990), for which the term connectionist or parallel distributed processing was often used (Feldman and Ballard 1982, Smolensky 1988). Many of the principles and techniques developed in this period are foundational to modern work, including the ideas of distributed representations (Hinton, 1986), recurrent networks (Elman, 1990), and the use of tensors for compositionality (Smolensky, 1990)."
7,Neural Networks and Neural Language Models,7.9,Bibliographical and Historical Notes,,,"By the 1990s larger neural networks began to be applied to many practical language processing tasks as well, like handwriting recognition (LeCun et al. 1989) and speech recognition (Morgan and Bourlard 1990). By the early 2000s, improvements in computer hardware and advances in optimization and training techniques made it possible to train even larger and deeper networks, leading to the modern term deep learning (Hinton et al. 2006, Bengio et al. 2007). We cover more related history in Chapter 9 and Chapter 26."
7,Neural Networks and Neural Language Models,7.9,Bibliographical and Historical Notes,,,There are a number of excellent books on the subject. Goldberg (2017) has superb coverage of neural networks for natural language processing. For neural networks in general see Goodfellow et al. (2016) and Nielsen (2015).
8,Sequence Labeling for Parts of Speech and Named Entities,,,,,"Dionysius Thrax of Alexandria (c. 100 B.C.), or perhaps someone else (it was a long time ago), wrote a grammatical sketch of Greek (a “technē”) that summarized the linguistic knowledge of his day. This work is the source of an astonishing proportion of modern linguistic vocabulary, including the words syntax, diphthong, clitic, and analogy. Also included is a description of eight parts of speech: noun, verb, pronoun, preposition, adverb, conjunction, participle, and article. Although earlier scholars (including Aristotle as well as the Stoics) had their own lists of parts of speech, it was Thrax’s set of eight that became the basis for descriptions of European languages for the next 2000 years. (All the way to the Schoolhouse Rock educational television shows of our childhood, which had songs about 8 parts of speech, like the late great Bob Dorough’s Conjunction Junction.) The durability of parts of speech through two millennia speaks to their centrality in models of human language."
8,Sequence Labeling for Parts of Speech and Named Entities,,,,,"Proper names are another important and anciently studied linguistic category. While parts of speech are generally assigned to individual words or morphemes, a proper name is often an entire multiword phrase, like the name ""Marie Curie"", the location ""New York City"", or the organization ""Stanford University"". We'll use the term named entity for, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization, although as we'll see the term is commonly extended to include things that aren't entities per se."
8,Sequence Labeling for Parts of Speech and Named Entities,,,,,"Parts of speech (also known as POS) and named entities are useful clues to sentence structure and meaning. Knowing whether a word is a noun or a verb tells us about likely neighboring words (nouns in English are preceded by determiners and adjectives, verbs by nouns) and syntactic structure (verbs have dependency links to nouns), making part-of-speech tagging a key aspect of parsing. Knowing if a named entity like Washington is a name of a person, a place, or a university is important to many natural language processing tasks like question answering, stance detection, or information extraction. In this chapter we'll introduce the task of part-of-speech tagging, taking a sequence of words and assigning each word a part of speech like NOUN or VERB, and the task of named entity recognition (NER), assigning words or phrases tags like PERSON, LOCATION, or ORGANIZATION."
8,Sequence Labeling for Parts of Speech and Named Entities,,,,,"Such tasks, in which we assign to each word x_i in an input word sequence a label y_i, so that the output sequence Y has the same length as the input sequence X, are called sequence labeling tasks. We'll introduce classic sequence labeling algorithms, one generative, the Hidden Markov Model (HMM), and one discriminative, the Conditional Random Field (CRF). In following chapters we'll introduce modern sequence labelers based on RNNs and Transformers."
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"Until now we have been using part-of-speech terms like noun and verb rather freely. In this section we give more complete definitions. While word classes do have semantic tendencies (adjectives, for example, often describe properties and nouns people), parts of speech are defined instead based on their grammatical relationship with neighboring words or the morphological properties of their affixes."
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"Open Class. Nouns are words for people, places, or things, but include others as well. The following sentence contains several kinds of adverbs, discussed next: Actually, I ran home extremely quickly yesterday"
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"Adverbs generally modify something (often verbs, hence the name ""adverb"", but also other adverbs and entire verb phrases). Directional adverbs or locative adverbs (home, here, downhill) specify the direction or location of some action; degree adverbs (extremely, very, somewhat) specify the extent of some action, process, or property; manner adverbs (slowly, slinkily, delicately) describe the manner of some action or process; and temporal adverbs describe the time that some action or event took place (yesterday, Monday)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"Interjections (oh, hey, alas, uh, um) are a smaller open class that also includes greetings (hello, goodbye) and question responses (yes, no, uh-huh)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"English adpositions occur before nouns, hence are called prepositions. They can indicate spatial or temporal relations, whether literal (on it, before then, by the house) or metaphorical (on time, with gusto, beside herself), and relations like marking the agent in Hamlet was written by Shakespeare."
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"A particle resembles a preposition or an adverb and is used in combination with a verb. Particles often have extended meanings that aren't quite the same as the prepositions they resemble, as in the particle over in she turned the paper over. A verb and a particle acting as a single unit is called a phrasal verb. The meaning of phrasal verbs is often non-compositional, not predictable from the individual meanings of the verb and the particle. Thus, turn down means 'reject', rule out 'eliminate', and go on 'continue'. Determiners like this and that (this chapter, that page) can mark the start of an English noun phrase. Articles like a, an, and the are a type of determiner that mark discourse properties of the noun and are quite frequent; the is the most common word in written English, with a and an right behind."
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"Conjunctions join two phrases, clauses, or sentences. Coordinating conjuncconjunction tions like and, or, and but join two elements of equal status. Subordinating conjunctions are used when one of the elements has some embedded status. For example, the subordinating conjunction that in ""I thought that you might like some milk"" links the main clause I thought with the subordinate clause you might like some milk. This clause is called subordinate because this entire clause is the ""content"" of the main verb thought. Subordinating conjunctions like that which link a verb to its argument in this way are also called complementizers."
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"Pronouns act as a shorthand for referring to an entity or event. Personal pronouns refer to persons or entities (you, she, I, it, me, etc.). Possessive pronouns are forms of personal pronouns that indicate either actual possession or more often just an abstract relation between the person and some object (my, your, his, her, its, one's, our, their). Wh-pronouns (what, who, whom, whoever) are used in certain question forms, or act as complementizers (Frida, who married Diego. . . ). Auxiliary verbs mark semantic features of a main verb such as its tense, whether it is completed (aspect), whether it is negated (polarity), and whether an action is necessary, possible, suggested, or desired (mood). English auxiliaries include the copula verb be, the two verbs do and have and their inflected forms, as well as modal verbs used to mark the mood associated with the event depicted by the main verb: can indicates ability or possibility, may permission or possibility, must necessity. An English-specific tagset, the 45-tag Penn Treebank tagset (Marcus et al., 1993), shown in Fig. 8.2, has been used to label many syntactically annotated corpora like the Penn Treebank corpora, so is worth knowing about. Below we show some examples with each word tagged according to both the UD and Penn tagsets. Notice that the Penn tagset distinguishes tense and participles on verbs, and has a special tag for the existential there construction in English. Note that since New England Journal of Medicine is a proper noun, both tagsets mark its component nouns as NNP, including journal and medicine, which might otherwise be labeled as common nouns (NOUN/NN)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,"Part-of-speech tagging is the process of assigning a part-of-speech to each word in a text. The input is a sequence x_1, x_2, ..., x_n of (tokenized) words and a tagset, and the output is a sequence y_1, y_2, ..., y_n of tags, each output y_i corresponding exactly to one input x_i, as shown in the intuition in Fig. 8.3. Tagging is a disambiguation task; words are ambiguous (have more than one possible part-of-speech) and the goal is to find the correct tag for the situation. For example, book can be a verb (book that flight) or a noun (hand me that book). That can be a determiner (Does that flight serve dinner) or a complementizer (I thought that you might like some milk)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,"We'll introduce algorithms for the task in the next few sections, but first let's explore the task. Exactly how hard is it? Fig. 8.4 shows that most word types (85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the ambiguous words, though accounting for only 14-15% of the vocabulary, are very common, and 55-67% of word tokens in running text are ambiguous. Particularly ambiguous common words include that, back, down, put and set; here are some examples of the 6 different parts of speech for the word back:
earnings growth took a back/JJ seat
a small building in the back/NN
a clear majority of senators back/VBP the bill
Dave began to back/VB toward the door
enable the country to buy back/RP debt
I was twenty-one back/RB then
Nonetheless, many words are easy to disambiguate, because their different tags aren't equally likely. For example, a can be a determiner or the letter a, but the determiner sense is much more likely."
8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,"This idea suggests a useful baseline: given an ambiguous word, choose the tag which is most frequent in the training corpus. This is a key concept:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,Most Frequent Class Baseline: Always compare a classifier against a baseline at least as good as the most frequent class baseline (assigning each token to the class it occurred in most often in the training set).
8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,The most-frequent-tag baseline has an accuracy of about 92%. The baseline thus differs from the state-of-the-art and human ceiling (97%) by only 5%.
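8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,"A minimal sketch of the most frequent class baseline for tagging, using a tiny made-up training corpus; unseen words fall back to the corpus-wide most frequent tag (one reasonable choice among several).

from collections import Counter, defaultdict

def train_most_frequent_tag(tagged_sentences):
    # tagged_sentences: list of [(word, tag), ...]; count tags per word type
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag in sentence:
            counts[word][tag] += 1
    overall = Counter(tag for s in tagged_sentences for _, tag in s)
    default_tag = overall.most_common(1)[0][0]          # fallback for unseen words
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}, default_tag

def tag(words, lexicon, default_tag):
    return [(w, lexicon.get(w, default_tag)) for w in words]

# tiny made-up training corpus
train = [[('the', 'DT'), ('back', 'NN'), ('door', 'NN')],
         [('senators', 'NNS'), ('back', 'VBP'), ('the', 'DT'), ('bill', 'NN')],
         [('a', 'DT'), ('back', 'NN'), ('seat', 'NN')]]
lexicon, default = train_most_frequent_tag(train)
print(tag(['the', 'back', 'room'], lexicon, default))   # 'back' gets NN, its most frequent tag"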
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"Part of speech tagging can tell us that words like Janet, Stanford University, and Colorado are all proper nouns; being a proper noun is a grammatical property of these words. But viewed from a semantic perspective, these proper nouns refer to different kinds of entities: Janet is a person, Stanford University is an organization, and Colorado is a location."
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"A named entity is, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization. The text contains 13 mentions of named entities including 5 organizations, 4 locations, 2 times, 1 person, and 1 mention of money. Figure 8.5 shows typical generic named entity types. Many applications will also need to use specific entity types like proteins, genes, commercial products, or works of art."
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"Named entity tagging is a useful first step in lots of natural language processing tasks. In sentiment analysis we might want to know a consumer's sentiment toward a particular entity. Entities are a useful first stage in question answering, or for linking text to information in structured knowledge sources like Wikipedia. And named entity tagging is also central to tasks involving building semantic representations, like extracting events and the relationship between participants."
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"Unlike part-of-speech tagging, where there is no segmentation problem since each word gets one tag, the task of named entity recognition is to find and label spans of text, and is difficult partly because of the ambiguity of segmentation; we"
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"need to decide what's an entity and what isn't, and where the boundaries are. Indeed, most words in a text will not be named entities. Another difficulty is caused by type ambiguity. The mention JFK can refer to a person, the airport in New York, or any number of schools, bridges, and streets around the United States. Some examples of this kind of cross-type confusion are given in Figure 8.6."
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"The standard approach to sequence labeling for a span-recognition problem like NER is BIO tagging (Ramshaw and Marcus, 1995). This is a method that allows us to treat NER like a word-by-word sequence labeling task, via tags that capture both the boundary and the named entity type. Consider the following sentence:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"[ PER Jane Villanueva ] of [ ORG United] ,"
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"Figure 8.7 shows this excerpt represented with BIO tagging, as well as variants called IO tagging and BIOES tagging. In BIO tagging we label any token that begins a span of interest with the label B, tokens that occur inside a span are tagged with an I, and any tokens outside of any span of interest are labeled O. While there is only one O tag, we'll have distinct B and I tags for each named entity class. The number of tags is thus 2n + 1, where n is the number of entity types. BIO tagging can represent exactly the same information as the bracketed notation, but has the advantage that we can represent the task in the same simple sequence modeling way as part-of-speech tagging: assigning a single label y_i to each input word x_i. We've also shown two variant tagging schemes: IO tagging, which loses some information by eliminating the B tag, and BIOES tagging, which adds an end tag E for the end of a span and a span tag S for a span consisting of only one word. A sequence labeler (HMM, CRF, RNN, Transformer, etc.) is trained to label each token in a text with tags that indicate the presence (or absence) of particular kinds of named entities."
8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,Words IO
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,,,"In this section we introduce our first sequence labeling algorithm, the Hidden Markov Model, and show how to apply it to part-of-speech tagging. Recall that a sequence labeler is a model whose job is to assign a label to each unit in a sequence, thus mapping a sequence of observations to a sequence of labels of the same length. The HMM is a classic model that introduces many of the key concepts of sequence modeling that we will see again in more modern models."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,,,"An HMM is a probabilistic sequence model: given a sequence of units (words, letters, morphemes, sentences, whatever), it computes a probability distribution over possible sequences of labels and chooses the best label sequence."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.1,Markov Chains,"The HMM is based on augmenting the Markov chain. A Markov chain is a model that tells us something about the probabilities of sequences of random variables, called states, each of which can take on values from some set. These sets can be words, or tags, or symbols representing anything, for example the weather. A Markov chain makes a very strong assumption that if we want to predict the future in the sequence, all that matters is the current state. All the states before the current state have no impact on the future except via the current state. It's as if to predict tomorrow's weather you could examine today's weather but you weren't allowed to look at yesterday's weather.

More formally, consider a sequence of state variables $q_1, q_2, \ldots, q_i$. A Markov model embodies the Markov assumption on the probabilities of this sequence: that when predicting the future, the past doesn't matter, only the present.

Markov Assumption: $P(q_i = a \mid q_1 \ldots q_{i-1}) = P(q_i = a \mid q_{i-1})$   (8.3)

Figure 8.8a shows a Markov chain for assigning a probability to a sequence of weather events, for which the vocabulary consists of HOT, COLD, and WARM. The states are represented as nodes in the graph, and the transitions, with their probabilities, as edges. The transitions are probabilities: the values of arcs leaving a given state must sum to 1. Figure 8.8b shows a Markov chain for assigning a probability to a sequence of words $w_1 \ldots w_t$. This Markov chain should be familiar; in fact, it represents a bigram language model, with each edge expressing the probability $p(w_i|w_j)$! Given the two models in Fig. 8.8, we can assign a probability to any sequence from our vocabulary."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.1,Markov Chains,"Formally, a Markov chain is specified by the following components:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.1,Markov Chains,Q = q 1 q 2 . . . q N a set of N states A = a 11 a 12 . . . a N1 .
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.1,Markov Chains,". . a NN a transition probability matrix A, each a i j representing the probability of moving from state i to state j, s.t."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.1,Markov Chains,"n j=1 a i j = 1 ∀i π = π 1 , π 2 , ..., π N"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.1,Markov Chains,"an initial probability distribution over states. π i is the probability that the Markov chain will start in state i. Some states j may have π j = 0, meaning that they cannot be initial states. Also, n i=1 π i = 1 Before you go on, use the sample probabilities in Fig. 8.8a (with π = [0.1, 0.7, 0.2] ) to compute the probability of each of the following sequences:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.1,Markov Chains,(8.4) hot hot hot hot (8.5) cold hot cold hot What does the difference in these probabilities tell you about a real-world weather fact encoded in Fig. 8 .8a?
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.2,The Hidden Markov Model,"A Markov chain is useful when we need to compute a probability for a sequence of observable events. In many cases, however, the events we are interested in are hidden: we don't observe them directly. For example, we don't normally observe part-of-speech tags in a text. Rather, we see words, and must infer the tags from the word sequence. We call the tags hidden because they are not observed."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.2,The Hidden Markov Model,"A hidden Markov model (HMM) allows us to talk about both observed events (like words that we see in the input) and hidden events (like part-of-speech tags) that we think of as causal factors in our probabilistic model. An HMM is specified by the following components:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.2,The Hidden Markov Model,"$Q = q_1 q_2 \ldots q_N$:  a set of $N$ states
$A = a_{11} \ldots a_{ij} \ldots a_{NN}$:  a transition probability matrix $A$, each $a_{ij}$ representing the probability of moving from state $i$ to state $j$, s.t. $\sum_{j=1}^{N} a_{ij} = 1 \;\; \forall i$
$O = o_1 o_2 \ldots o_T$:  a sequence of $T$ observations, each one drawn from a vocabulary $V = v_1, v_2, \ldots, v_V$
$B = b_i(o_t)$:  a sequence of observation likelihoods, also called emission probabilities, each expressing the probability of an observation $o_t$ being generated from a state $q_i$
$\pi = \pi_1, \pi_2, \ldots, \pi_N$:  an initial probability distribution over states. $\pi_i$ is the probability that the Markov chain will start in state $i$. Some states $j$ may have $\pi_j = 0$, meaning that they cannot be initial states. Also, $\sum_{i=1}^{N} \pi_i = 1$.

A first-order hidden Markov model instantiates two simplifying assumptions. First, as with a first-order Markov chain, the probability of a particular state depends only on the previous state:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.2,The Hidden Markov Model,"Markov Assumption: $P(q_i \mid q_1, \ldots, q_{i-1}) = P(q_i \mid q_{i-1})$   (8.6)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.2,The Hidden Markov Model,"Second, the probability of an output observation $o_i$ depends only on the state that produced the observation, $q_i$, and not on any other states or any other observations:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.2,The Hidden Markov Model,"Output Independence: $P(o_i \mid q_1, \ldots, q_i, \ldots, q_T, o_1, \ldots, o_i, \ldots, o_T) = P(o_i \mid q_i)$   (8.7)"
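8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.2,The Hidden Markov Model,"Before turning to tagging, it can help to see the five components gathered into a single data structure. The following is a minimal sketch of such a container; the field names are our own, and any concrete values would be estimated from a labeled corpus as described in the next subsection.

from dataclasses import dataclass

@dataclass
class HMM:
    states: list     # Q: the N states (e.g. part-of-speech tags)
    A: dict          # A[i][j]: transition probability from state i to state j
    B: dict          # B[i][o]: emission probability of observation o from state i
    pi: dict         # pi[i]: probability of starting in state i"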
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"Let's start by looking at the pieces of an HMM tagger, and then we'll see how to use it to tag. An HMM has two components, the A and B probabilities."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"The A matrix contains the tag transition probabilities P(t i |t i−1 ) which represent the probability of a tag occurring given the previous tag. For example, modal verbs like will are very likely to be followed by a verb in the base form, a VB, like race, so we expect this probability to be high. We compute the maximum likelihood estimate of this transition probability by counting, out of the times we see the first tag in a labeled corpus, how often the first tag is followed by the second:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"$P(t_i \mid t_{i-1}) = \frac{C(t_{i-1}, t_i)}{C(t_{i-1})}$   (8.8)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"In the WSJ corpus, for example, MD occurs 13124 times, of which it is followed by VB 10471 times, for an MLE estimate of"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"$P(\textrm{VB} \mid \textrm{MD}) = \frac{C(\textrm{MD}, \textrm{VB})}{C(\textrm{MD})} = \frac{10471}{13124} = .80$   (8.9)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"Let's walk through an example, seeing how these probabilities are estimated and used in a sample tagging task, before we return to the algorithm for decoding."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"In HMM tagging, the probabilities are estimated by counting on a tagged training corpus. For this example we'll use the tagged WSJ corpus."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"The B emission probabilities, $P(w_i \mid t_i)$, represent the probability, given a tag (say MD), that it will be associated with a given word (say will). The MLE of the emission probability is"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"$P(w_i \mid t_i) = \frac{C(t_i, w_i)}{C(t_i)}$   (8.10)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"Of the 13124 occurrences of MD in the WSJ corpus, it is associated with will 4046 times:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"$P(\textrm{will} \mid \textrm{MD}) = \frac{C(\textrm{MD}, \textrm{will})}{C(\textrm{MD})} = \frac{4046}{13124} = .31$   (8.11)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"We saw this kind of Bayesian modeling in Chapter 4; recall that this likelihood term is not asking ""which is the most likely tag for the word will?"" That would be the posterior P(MD|will). Instead, P(will|MD) answers the slightly counterintuitive question ""If we were going to generate a MD, how likely is it that this modal would be will?"""
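8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"The two MLE estimates above are just relative frequencies over a tagged corpus, so they can be computed with simple counting. The sketch below (function name and data format our own) implements Eq. 8.8 and Eq. 8.10 without smoothing.

from collections import Counter

def estimate_hmm(tagged_sentences):
    # tagged_sentences: list of sentences, each a list of (word, tag) pairs.
    tag_count, bigram_count, emit_count = Counter(), Counter(), Counter()
    for sent in tagged_sentences:
        tags = [tag for _, tag in sent]
        for word, tag in sent:
            tag_count[tag] += 1
            emit_count[(tag, word)] += 1
        for prev, cur in zip(tags, tags[1:]):
            bigram_count[(prev, cur)] += 1
    # Transition MLE, Eq. 8.8: P(t_i | t_{i-1}) = C(t_{i-1}, t_i) / C(t_{i-1})
    A = {(p, c): n / tag_count[p] for (p, c), n in bigram_count.items()}
    # Emission MLE, Eq. 8.10: P(w_i | t_i) = C(t_i, w_i) / C(t_i)
    B = {(t, w): n / tag_count[t] for (t, w), n in emit_count.items()}
    return A, B

# On the tagged WSJ corpus, A[('MD', 'VB')] would come out to roughly
# 10471/13124 = .80 and B[('MD', 'will')] to roughly 4046/13124 = .31."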
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.3,The components of a HMM tagger,"The A transition probabilities and B observation likelihoods of the HMM are illustrated in Fig. 8.9 for three states in an HMM part-of-speech tagger; the full tagger would have one state for each tag."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"For any model, such as an HMM, that contains hidden variables, the task of determining the sequence of hidden variables corresponding to the sequence of observations is called decoding. More formally, decoding means: given as input an HMM $\lambda = (A, B)$ and a sequence of observations $O = o_1, \ldots, o_T$, find the most probable sequence of states $Q = q_1 q_2 \ldots q_T$.

Figure 8.9 An illustration of the two parts of an HMM representation: the A transition probabilities used to compute the prior probability, and the B observation likelihoods that are associated with each state, one likelihood for each possible observation word."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"For part-of-speech tagging, the goal of HMM decoding is to choose the tag sequence $t_1 \ldots t_n$ that is most probable given the observation sequence of $n$ words $w_1 \ldots w_n$:

$\hat{t}_{1:n} = \operatorname*{argmax}_{t_1 \ldots t_n} P(t_1 \ldots t_n \mid w_1 \ldots w_n)$   (8.12)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,The way we'll do this in the HMM is to use Bayes' rule to instead compute:
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"$\hat{t}_{1:n} = \operatorname*{argmax}_{t_1 \ldots t_n} \frac{P(w_1 \ldots w_n \mid t_1 \ldots t_n)\, P(t_1 \ldots t_n)}{P(w_1 \ldots w_n)}$   (8.13)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"Furthermore, we simplify Eq. 8.13 by dropping the denominator $P(w_1 \ldots w_n)$:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"$\hat{t}_{1:n} = \operatorname*{argmax}_{t_1 \ldots t_n} P(w_1 \ldots w_n \mid t_1 \ldots t_n)\, P(t_1 \ldots t_n)$   (8.14)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,HMM taggers make two further simplifying assumptions. The first is that the probability of a word appearing depends only on its own tag and is independent of neighboring words and tags:
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"$P(w_1 \ldots w_n \mid t_1 \ldots t_n) \approx \prod_{i=1}^{n} P(w_i \mid t_i)$   (8.15)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"The second assumption, the bigram assumption, is that the probability of a tag is dependent only on the previous tag, rather than the entire tag sequence:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"$P(t_1 \ldots t_n) \approx \prod_{i=1}^{n} P(t_i \mid t_{i-1})$   (8.16)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,"Plugging the simplifying assumptions from Eq. 8.15 and Eq. 8.16 into Eq. 8.14 results in the following equation for the most probable tag sequence from a bigram tagger:

$\hat{t}_{1:n} = \operatorname*{argmax}_{t_1 \ldots t_n} P(t_1 \ldots t_n \mid w_1 \ldots w_n) \approx \operatorname*{argmax}_{t_1 \ldots t_n} \prod_{i=1}^{n} \overbrace{P(w_i \mid t_i)}^{\textrm{emission}} \; \overbrace{P(t_i \mid t_{i-1})}^{\textrm{transition}}$   (8.17)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.4,HMM tagging as decoding,The two parts of Eq. 8.17 correspond neatly to the B emission probability and A transition probability that we just defined above!
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.5,The Viterbi Algorithm,The decoding algorithm for HMMs is the Viterbi algorithm shown in Fig. 8.10.
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.5,The Viterbi Algorithm,"As an instance of dynamic programming, Viterbi resembles the dynamic programming minimum edit distance algorithm of Chapter 2."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.5,The Viterbi Algorithm,"function VITERBI(observations of len T, state-graph of len N) returns best-path, path-prob

create a path probability matrix viterbi[N,T]
for each state s from 1 to N do                         ; initialization step
    viterbi[s,1] ← π_s * b_s(o_1)
    backpointer[s,1] ← 0
for each time step t from 2 to T do                     ; recursion step
    for each state s from 1 to N do
        viterbi[s,t] ← max_{s'=1..N} viterbi[s',t−1] * a_{s',s} * b_s(o_t)
        backpointer[s,t] ← argmax_{s'=1..N} viterbi[s',t−1] * a_{s',s} * b_s(o_t)
bestpathprob ← max_{s=1..N} viterbi[s,T]                ; termination step
bestpathpointer ← argmax_{s=1..N} viterbi[s,T]          ; termination step
bestpath ← the path starting at state bestpathpointer, that follows backpointer[] to states back in time
return bestpath, bestpathprob

Figure 8.10 Viterbi algorithm for finding the optimal sequence of tags. Given an observation sequence and an HMM λ = (A, B), the algorithm returns the state path through the HMM that assigns maximum likelihood to the observation sequence."
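8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.5,The Viterbi Algorithm,"The pseudocode in Fig. 8.10 translates almost line for line into a short program. The following is a sketch in Python, assuming the transition probabilities A, emission probabilities B, and initial distribution pi are stored as plain dictionaries keyed by state (tag) names; these representational choices are ours, not the text's.

def viterbi(observations, states, pi, A, B):
    # A direct transcription of the pseudocode in Fig. 8.10. pi[s], A[s_prev][s],
    # and B[s][o] are assumed to be dictionaries of probabilities; unseen entries
    # are treated as probability 0.
    V = [{}]              # V[t][s]: probability of the best path ending in state s at time t
    backpointer = [{}]
    for s in states:                                    # initialization step
        V[0][s] = pi.get(s, 0.0) * B[s].get(observations[0], 0.0)
        backpointer[0][s] = None
    for t in range(1, len(observations)):               # recursion step
        V.append({})
        backpointer.append({})
        for s in states:
            best_prev, best_prob = None, -1.0
            for s_prev in states:
                prob = V[t - 1][s_prev] * A[s_prev].get(s, 0.0) * B[s].get(observations[t], 0.0)
                if prob > best_prob:
                    best_prev, best_prob = s_prev, prob
            V[t][s] = best_prob
            backpointer[t][s] = best_prev
    last = max(V[-1], key=V[-1].get)                    # termination step
    bestpathprob = V[-1][last]
    bestpath = [last]
    for t in range(len(observations) - 1, 0, -1):       # follow backpointers
        bestpath.append(backpointer[t][bestpath[-1]])
    return list(reversed(bestpath)), bestpathprob

# Usage, assuming pi, A, B have been estimated as in Section 8.4.3:
# path, prob = viterbi(['Janet', 'will', 'back', 'the', 'bill'], states, pi, A, B)"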
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.5,The Viterbi Algorithm,"The Viterbi algorithm first sets up a probability matrix or lattice, with one column for each observation $o_t$ and one row for each state in the state graph. Each column thus has a cell for each state $q_i$ in the single combined automaton. Figure 8.11 shows an intuition of this lattice for the sentence Janet will back the bill."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.5,The Viterbi Algorithm,"Each cell of the lattice, $v_t(j)$, represents the probability that the HMM is in state $j$ after seeing the first $t$ observations and passing through the most probable state sequence $q_1, \ldots, q_{t-1}$, given the HMM $\lambda$. The value of each cell $v_t(j)$ is computed by recursively taking the most probable path that could lead us to this cell. Formally, each cell expresses the probability

$v_t(j) = \max_{q_1, \ldots, q_{t-1}} P(q_1 \ldots q_{t-1}, o_1, o_2 \ldots o_t, q_t = j \mid \lambda)$   (8.18)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.5,The Viterbi Algorithm,"We represent the most probable path by taking the maximum over all possible previous state sequences $\max_{q_1, \ldots, q_{t-1}}$. Like other dynamic programming algorithms, Viterbi fills each cell recursively. Given that we had already computed the probability of being in every state at time $t-1$, we compute the Viterbi probability by taking the most probable of the extensions of the paths that lead to the current cell. For a given state $q_j$ at time $t$, the value $v_t(j)$ is computed as

$v_t(j) = \max_{i=1}^{N} v_{t-1}(i)\; a_{ij}\; b_j(o_t)$   (8.19)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.5,The Viterbi Algorithm,"The three factors that are multiplied in Eq. 8.19 for extending the previous paths to compute the Viterbi probability at time $t$ are:

$v_{t-1}(i)$   the previous Viterbi path probability from the previous time step
$a_{ij}$   the transition probability from previous state $q_i$ to current state $q_j$
$b_j(o_t)$   the state observation likelihood of the observation symbol $o_t$ given the current state $j$

Figure 8.11 A sketch of the lattice for Janet will back the bill, showing the possible tags ($q_i$) for each word and highlighting the path corresponding to the correct tag sequence through the hidden states. States (parts of speech) which have a zero probability of generating a particular word according to the B matrix (such as the probability that a determiner DT will be realized as Janet) are greyed out."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.6,Working through an example,"Let's tag the sentence Janet will back the bill; the goal is the correct series of tags NNP MD VB DT NN (see also Fig. 8.11). Let the HMM be defined by the two tables in Fig. 8.12 and Fig. 8.13. Figure 8.12 lists the $a_{ij}$ probabilities for transitioning between the hidden states (part-of-speech tags). Figure 8.13 expresses the $b_i(o_t)$ probabilities, the observation likelihoods of words given tags. This table is (slightly simplified) from counts in the WSJ corpus. So the word Janet only appears as an NNP, back has 4 possible parts of speech, and the word the can appear as a determiner or as an NNP (in titles like ""Somewhere Over the Rainbow"" all words are tagged as NNP).

Figure 8.14 The first few entries in the individual state columns for the Viterbi algorithm. Each cell keeps the probability of the best path so far and a pointer to the previous cell along that path. We have only filled out columns 1 and 2; to avoid clutter most cells with value 0 are left empty. The rest is left as an exercise for the reader.

Figure 8.14 shows a fleshed-out version of the sketch we saw in Fig. 8.11, the Viterbi lattice for computing the best hidden state sequence for the observation sequence Janet will back the bill. After the cells are filled in, backtracing from the end state, we should be able to reconstruct the correct state sequence NNP MD VB DT NN."
8,Sequence Labeling for Parts of Speech and Named Entities,8.4,HMM Part-of-Speech Tagging,8.4.6,Working through an example,"The lattice has one column for each of the 5 observations (the words of the sentence) and one row for each state. We begin in column 1 (for the word Janet) by setting the Viterbi value in each cell to the product of the $\pi$ transition probability (the start probability for that state $i$, which we get from the start-of-sentence row of Fig. 8.12) and the observation likelihood of the word Janet given the tag for that cell. Most of the cells in the column are zero since the word Janet cannot be any of those tags. The reader should find this in Fig. 8.14. Next, each cell in the will column gets updated. For each state, we compute the value viterbi[s,t] by taking the maximum over the extensions of all the paths from the previous column that lead to the current cell, according to Eq. 8.19. We have shown the values for the MD, VB, and NN cells. Each cell gets the max of the 7 values from the previous column, multiplied by the appropriate transition probability; as it happens, in this case most of the values from the previous column are zero. The remaining value is multiplied by the relevant observation probability, and the (trivial) max is taken. In this case the final value, 2.772e-8, comes from the NNP state at the previous column. The reader should fill in the rest of the lattice in Fig. 8.14 and backtrace to see whether or not the Viterbi algorithm returns the gold state sequence NNP MD VB DT NN."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"While the HMM is a useful and powerful model, it turns out that HMMs need a number of augmentations to achieve high accuracy. For example, in POS tagging as in other tasks, we often run into unknown words: proper names and acronyms are created very often, and even new common nouns and verbs enter the language at a surprising rate. It would be great to have ways to add arbitrary features to help with this, perhaps based on capitalization or morphology (words starting with capital letters are likely to be proper nouns, words ending with -ed tend to be past tense (VBD or VBN), etc.). Knowing the previous or following words might also be a useful feature (if the previous word is the, the current tag is unlikely to be a verb). Although we could try to hack the HMM to find ways to incorporate some of these, in general it's hard for generative models like HMMs to add arbitrary features directly into the model in a clean way. We've already seen a model for combining arbitrary features in a principled way: log-linear models like the logistic regression model of Chapter 5! But logistic regression isn't a sequence model; it assigns a class to a single observation."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"Luckily, there is a discriminative sequence model based on log-linear models: the conditional random field (CRF). We'll describe here the linear chain CRF, the version of the CRF most commonly used for language processing, and the one whose conditioning closely matches the HMM."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"Assume we have a sequence of input words $X = x_1 \ldots x_n$ and want to compute a sequence of output tags $Y = y_1 \ldots y_n$. In an HMM, to compute the best tag sequence that maximizes $P(Y|X)$ we rely on Bayes' rule and the likelihood $P(X|Y)$:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"$\hat{Y} = \operatorname*{argmax}_{Y} p(Y|X) = \operatorname*{argmax}_{Y} p(X|Y)\,p(Y) = \operatorname*{argmax}_{Y} \prod_{i} p(x_i|y_i) \prod_{i} p(y_i|y_{i-1})$   (8.21)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"In a CRF, by contrast, we compute the posterior p(Y |X) directly, training the CRF to discriminate among the possible tag sequences:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"$\hat{Y} = \operatorname*{argmax}_{Y \in \mathcal{Y}} P(Y|X)$   (8.22)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"However, the CRF does not compute a probability for each tag at each time step. Instead, at each time step the CRF computes log-linear functions over a set of relevant features, and these local features are aggregated and normalized to produce a global probability for the whole sequence. Let's introduce the CRF more formally, again using $X$ and $Y$ as the input and output sequences. A CRF is a log-linear model that assigns a probability to an entire output (tag) sequence $Y$, out of all possible sequences $\mathcal{Y}$, given the entire input (word) sequence $X$. We can think of a CRF as a giant version of what multinomial logistic regression does for a single token. Recall that the feature function $f$ in regular multinomial logistic regression can be viewed as a function of a tuple: a token $x$ and a label $y$ (page 92). In a CRF, the function $F$ maps an entire input sequence $X$ and an entire output sequence $Y$ to a feature vector. Let's assume we have $K$ features, with a weight $w_k$ for each feature $F_k$:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"$p(Y|X) = \frac{\exp\left(\sum_{k=1}^{K} w_k F_k(X,Y)\right)}{\sum_{Y' \in \mathcal{Y}} \exp\left(\sum_{k=1}^{K} w_k F_k(X,Y')\right)}$   (8.23)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,It's common to also describe the same equation by pulling out the denominator into a function Z(X):
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"$p(Y|X) = \frac{1}{Z(X)} \exp\left(\sum_{k=1}^{K} w_k F_k(X,Y)\right)$   (8.24)

$Z(X) = \sum_{Y' \in \mathcal{Y}} \exp\left(\sum_{k=1}^{K} w_k F_k(X,Y')\right)$   (8.25)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"We'll call these $K$ functions $F_k(X,Y)$ global features, since each one is a property of the entire input sequence $X$ and output sequence $Y$. We compute them by decomposing into a sum of local features for each position $i$ in $Y$:

$F_k(X,Y) = \sum_{i=1}^{n} f_k(y_{i-1}, y_i, X, i)$   (8.26)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"Each of these local features $f_k$ in a linear-chain CRF is allowed to make use of the current output token $y_i$, the previous output token $y_{i-1}$, the entire input string $X$ (or any subpart of it), and the current position $i$. This constraint to depend only on the current and previous output tokens is what characterizes a linear chain CRF. As we will see, this limitation makes it possible to use versions of the efficient Viterbi and Forward-Backward algorithms from the HMM for the linear chain CRF. A general CRF, by contrast, allows a feature to make use of any output token, and is thus necessary for tasks in which a decision depends on distant output tokens, like $y_{i-4}$. General CRFs require more complex inference, and are less commonly used for language processing."
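8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),,,"To make the global/local distinction concrete, here is a minimal sketch (names and data formats our own) of the unnormalized linear score from Eq. 8.30, summing weighted local features over all positions of a candidate tag sequence. A dummy start label stands in for the predecessor of the first tag.

def crf_score(X, Y, feature_functions, weights):
    # Unnormalized linear-chain CRF score: sum over positions i and features k
    # of w_k * f_k(y_{i-1}, y_i, X, i), as in Eq. 8.30. feature_functions is a
    # list of local feature functions f_k(y_prev, y, X, i) returning 0 or 1;
    # weights is a parallel list of w_k values.
    score = 0.0
    for i in range(len(Y)):
        y_prev = Y[i - 1] if i > 0 else '<s>'   # dummy start label (our convention)
        for w_k, f_k in zip(weights, feature_functions):
            score += w_k * f_k(y_prev, Y[i], X, i)
    return score

# Example local features in the style of the next subsection:
f1 = lambda y_prev, y, X, i: 1 if X[i] == 'the' and y == 'DET' else 0
f2 = lambda y_prev, y, X, i: 1 if y == 'VERB' and y_prev == 'AUX' else 0"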
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"Let's look at some of these features in detail, since the reason to use a discriminative sequence model is that it's easier to incorporate a lot of features. Again, in a linear-chain CRF, each local feature $f_k$ at position $i$ can depend on any information from $(y_{i-1}, y_i, X, i)$. So some legal features representing common situations might be the following:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"1{$x_i$ = the, $y_i$ = DET}
1{$y_i$ = PROPN, $x_{i+1}$ = Street, $y_{i-1}$ = NUM}
1{$y_i$ = VERB, $y_{i-1}$ = AUX}"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"For simplicity, we'll assume all CRF features take on the value 1 or 0. Above, we explicitly use the notation 1{x} to mean ""1 if x is true, and 0 otherwise"". From now on, we'll leave off the 1 when we define features, but you can assume each feature has it there implicitly."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"Although the idea of which features to use is determined by hand by the system designer, the specific features are automatically populated by using feature templates, as we briefly mentioned in Chapter 5. Here are some templates that only use information from $(y_{i-1}, y_i, X, i)$:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"y i , x i , y i , y i−1 , y i , x i−1 , x i+2"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"These templates automatically populate the set of features from every instance in the training and test set. Thus for our example Janet/NNP will/MD back/VB the/DT bill/NN, when $x_i$ is the word back, the following features would be generated and have the value 1 (we've assigned them arbitrary feature numbers):"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"$f_{3743}$: $y_i$ = VB and $x_i$ = back
$f_{156}$: $y_i$ = VB and $y_{i-1}$ = MD
$f_{99732}$: $y_i$ = VB and $x_{i-1}$ = will and $x_{i+2}$ = bill

It's also important to have features that help with unknown words. One of the most important is word shape features, which represent the abstract letter pattern of the word by mapping lower-case letters to 'x', upper-case to 'X', numbers to 'd', and retaining punctuation. Thus for example I.M.F would map to X.X.X. and DC10-30 would map to XXdd-dd. A second class of shorter word shape features is also used. In these features consecutive character types are removed, so words in all caps map to X, words with initial-caps map to Xx, DC10-30 would be mapped to Xd-d but I.M.F would still map to X.X.X. Prefix and suffix features are also useful. In summary, here are some sample feature templates that help with unknown words:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,x i contains a particular prefix (perhaps from all prefixes of length ≤ 2) x i contains a particular suffix (perhaps from all suffixes of length ≤ 2) x i 's word shape x i 's short word shape For example the word well-dressed might generate the following non-zero valued feature values:
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,prefix(x i ) = w prefix(x i ) = we suffix(x i ) = ed suffix(x i ) = d word-shape(x i ) = xxxx-xxxxxxx short-word-shape(x i ) = x-x
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"The known-word templates are computed for every word seen in the training set; the unknown word features can also be computed for all words in training, or only on training words whose frequency is below some threshold. The result of the known-word templates and word-signature features is a very large set of features. Generally a feature cutoff is used in which features are thrown out if they have count < 5 in the training set."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.1,Features in a CRF POS Tagger,"Remember that in a CRF we don't learn weights for each of these local features $f_k$. Instead, we first sum the values of each local feature (for example feature $f_{3743}$) over the entire sentence, to create each global feature (for example $F_{3743}$). It is those global features that will then be multiplied by weight $w_{3743}$. Thus for training and inference there is always a fixed set of $K$ features with $K$ weights, even though the length of each sentence is different."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.2,Features for CRF Named Entity Recognizers,"A CRF for NER makes use of very similar features to a POS tagger, as shown in Figure 8.15.

Figure 8.15 Typical features for a feature-based NER system:
identity of $w_i$, identity of neighboring words
embeddings for $w_i$, embeddings for neighboring words
part of speech of $w_i$, part of speech of neighboring words
presence of $w_i$ in a gazetteer
$w_i$ contains a particular prefix (from all prefixes of length ≤ 4)
$w_i$ contains a particular suffix (from all suffixes of length ≤ 4)
word shape of $w_i$, word shape of neighboring words
short word shape of $w_i$, short word shape of neighboring words
gazetteer features

One feature that is especially useful for locations is a gazetteer, a list of place names, often providing millions of entries for locations with detailed geographical and political information. This can be implemented as a binary feature indicating a phrase appears in the list. Other related resources like name-lists, for example from the United States Census Bureau, can be used, as can other entity dictionaries like lists of corporations or products, although they may not be as helpful as a gazetteer (Mikheev et al., 1999)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.2,Features for CRF Named Entity Recognizers,The sample named entity token L'Occitane would generate the following nonzero valued feature values (assuming that L'Occitane is neither in the gazetteer nor the census).
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.2,Features for CRF Named Entity Recognizers,prefix(x i ) = L suffix(x i ) = tane prefix(x i ) = L' suffix(x i ) = ane prefix(x i ) = L'O suffix(x i ) = ne prefix(x i ) = L'Oc
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.2,Features for CRF Named Entity Recognizers,suffix(x i ) = e word-shape(x i ) = X'Xxxxxxxx short-word-shape(x i ) = X'Xx Figure 8 .16 illustrates the result of adding part-of-speech tags and some shape information to our earlier example.
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"How do we find the best tag sequence $\hat{Y}$ for a given input $X$? We start with Eq. 8.22:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"$\hat{Y} = \operatorname*{argmax}_{Y \in \mathcal{Y}} P(Y|X) = \operatorname*{argmax}_{Y \in \mathcal{Y}} \frac{1}{Z(X)} \exp\left(\sum_{k=1}^{K} w_k F_k(X,Y)\right)$   (8.27)

$= \operatorname*{argmax}_{Y \in \mathcal{Y}} \exp\left(\sum_{k=1}^{K} w_k \sum_{i=1}^{n} f_k(y_{i-1}, y_i, X, i)\right)$   (8.28)

$= \operatorname*{argmax}_{Y \in \mathcal{Y}} \sum_{k=1}^{K} w_k \sum_{i=1}^{n} f_k(y_{i-1}, y_i, X, i)$   (8.29)

$= \operatorname*{argmax}_{Y \in \mathcal{Y}} \sum_{i=1}^{n} \sum_{k=1}^{K} w_k f_k(y_{i-1}, y_i, X, i)$   (8.30)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"We can ignore the exp function and the denominator $Z(X)$, as we do above, because exp doesn't change the argmax, and the denominator $Z(X)$ is constant for a given observation sequence $X$. How should we decode to find this optimal tag sequence $\hat{Y}$? Just as with HMMs, we'll turn to the Viterbi algorithm, which works because, like the HMM, the linear-chain CRF depends at each timestep on only one previous output token $y_{i-1}$."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"Concretely, this involves filling an $N \times T$ array with the appropriate values, maintaining backpointers as we proceed. As with HMM Viterbi, when the table is filled, we simply follow pointers back from the maximum value in the final column to retrieve the desired set of labels."
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,The requisite changes from HMM Viterbi have to do only with how we fill each cell. Recall from Eq. 8.19 that the recursive step of the Viterbi equation computes the Viterbi value of time t for state j as
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"$v_t(j) = \max_{i=1}^{N} v_{t-1}(i)\; a_{ij}\; b_j(o_t); \quad 1 \le j \le N,\; 1 < t \le T$   (8.31)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,which is the HMM implementation of
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"$v_t(j) = \max_{i=1}^{N} v_{t-1}(i)\; P(s_j|s_i)\; P(o_t|s_j) \quad 1 \le j \le N,\; 1 < t \le T$   (8.32)"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"The CRF requires only a slight change to this latter formula, replacing the a and b prior and likelihood probabilities with the CRF features:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"$v_t(j) = \max_{i=1}^{N} v_{t-1}(i) \sum_{k=1}^{K} w_k f_k(y_{t-1}, y_t, X, t) \quad 1 \le j \le N,\; 1 < t \le T$   (8.33)"
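8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"The recursion in Eq. 8.33 can be implemented much like HMM Viterbi. One detail: because the CRF scores are sums of weighted features (effectively log-space quantities) rather than probabilities, the sketch below adds scores along a path instead of multiplying them; the function names, the dummy start label, and the data formats are our own.

def crf_viterbi(X, tags, feature_functions, weights):
    # Viterbi decoding for a linear-chain CRF. local_score plays the role that
    # a_{ij} * b_j(o_t) plays for the HMM, but since CRF scores are summed
    # weighted features, scores along a path are added rather than multiplied.
    def local_score(y_prev, y, i):
        return sum(w * f(y_prev, y, X, i)
                   for w, f in zip(weights, feature_functions))

    n = len(X)
    V = [{y: local_score('<s>', y, 0) for y in tags}]   # '<s>' = dummy start label
    back = [{y: None for y in tags}]
    for i in range(1, n):
        V.append({})
        back.append({})
        for y in tags:
            best_prev = max(tags, key=lambda yp: V[i - 1][yp] + local_score(yp, y, i))
            V[i][y] = V[i - 1][best_prev] + local_score(best_prev, y, i)
            back[i][y] = best_prev
    last = max(tags, key=lambda y: V[n - 1][y])
    path = [last]
    for i in range(n - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))"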
8,Sequence Labeling for Parts of Speech and Named Entities,8.5,Conditional Random Fields (CRFs),8.5.3,Inference and Training for CRFs,"Learning in CRFs relies on the same supervised learning algorithms we presented for logistic regression. Given a sequence of observations, feature functions, and corresponding outputs, we use stochastic gradient descent to train the weights to maximize the log-likelihood of the training corpus. The local nature of linear-chain CRFs means that a CRF version of the forward-backward algorithm (see Appendix A) can be used to efficiently compute the necessary derivatives. As with logistic regression, L1 or L2 regularization is important."
8,Sequence Labeling for Parts of Speech and Named Entities,8.6,Evaluation of Named Entity Recognition,,,"Part-of-speech taggers are evaluated by the standard metric of accuracy. Named entity recognizers are evaluated by recall, precision, and $F_1$ measure. Recall that recall is the ratio of the number of correctly labeled responses to the total that should have been labeled; precision is the ratio of the number of correctly labeled responses to the total labeled; and F-measure is the harmonic mean of the two."
8,Sequence Labeling for Parts of Speech and Named Entities,8.6,Evaluation of Named Entity Recognition,,,"To know if the difference between the F 1 scores of two NER systems is a significant difference, we use the paired bootstrap test, or the similar randomization test (Section 4.9)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.6,Evaluation of Named Entity Recognition,,,"For named entity tagging, the entity rather than the word is the unit of response. Thus in the example in Fig. 8.16, the two entities Jane Villanueva and United Airlines Holding and the non-entity discussed would each count as a single response."
8,Sequence Labeling for Parts of Speech and Named Entities,8.6,Evaluation of Named Entity Recognition,,,"The fact that named entity tagging has a segmentation component which is not present in tasks like text categorization or part-of-speech tagging causes some problems with evaluation. For example, a system that labeled Jane but not Jane Villanueva as a person would cause two errors, a false positive for O and a false negative for I-PER. In addition, using entities as the unit of response but words as the unit of training means that there is a mismatch between the training and test conditions."
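8,Sequence Labeling for Parts of Speech and Named Entities,8.6,Evaluation of Named Entity Recognition,,,"Since the entity is the unit of response, a simple way to score a system is to compare sets of (start, end, type) spans, counting a prediction as correct only when both boundaries and type match exactly. The sketch below (names and span format our own) computes entity-level precision, recall, and F1 this way.

def entity_prf(gold_spans, predicted_spans):
    # gold_spans and predicted_spans are sets of (start, end, type) tuples.
    # A predicted entity is correct only if boundaries and type both match.
    correct = len(gold_spans & predicted_spans)
    precision = correct / len(predicted_spans) if predicted_spans else 0.0
    recall = correct / len(gold_spans) if gold_spans else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Labeling only 'Jane' instead of 'Jane Villanueva' gets no credit here,
# since the predicted span boundaries do not match the gold span.
gold = {(0, 2, 'PER'), (3, 4, 'ORG')}
pred = {(0, 1, 'PER'), (3, 4, 'ORG')}
print(entity_prf(gold, pred))   # (0.5, 0.5, 0.5)"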
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,,,"In this section we summarize a few remaining details of the data and models, beginning with data. Since the algorithms we have presented are supervised, having labeled data is essential for training and test. A wide variety of datasets exist for part-of-speech tagging and/or NER. The Universal Dependencies (UD) dataset (Nivre et al., 2016b) has POS tagged corpora in 92 languages at the time of this writing, as do the Penn Treebanks in English, Chinese, and Arabic. OntoNotes has corpora labeled for named entities in English, Chinese, and Arabic (Hovy et al., 2006). Named entity tagged corpora are also available in particular domains, such as for biomedical (Bada et al., 2012) and literary text (Bamman et al., 2019)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.1,Bidirectionality,"One problem with the CRF and HMM architectures as presented is that the models are exclusively run left-to-right. While the Viterbi algorithm still allows present decisions to be influenced indirectly by future decisions, it would help even more if a decision about word $w_i$ could directly use information about future tags $t_{i+1}$ and $t_{i+2}$."
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.1,Bidirectionality,"Alternatively, any sequence model can be turned into a bidirectional model by using multiple passes. For example, the first pass would use only part-of-speech features from already-disambiguated words on the left. In the second pass, tags for all words, including those on the right, can be used. Alternately, the tagger can be run twice, once left-to-right and once right-to-left. In Viterbi decoding, the labeler would then choose the higher scoring of the two sequences (left-to-right or right-to-left). Bidirectional models are quite standard for neural models, as we will see with the biLSTM models to be introduced in Chapter 9."
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.2,Rule-based Methods,"While machine learned (neural or CRF) sequence models are the norm in academic research, commercial approaches to NER are often based on pragmatic combinations of lists and rules, with some smaller amount of supervised machine learning (Chiticariu et al., 2013) . For example in the IBM System T architecture, a user specifies declarative constraints for tagging tasks in a formal query language that includes regular expressions, dictionaries, semantic constraints, and other operators, which the system compiles into an efficient extractor (Chiticariu et al., 2018) ."
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.2,Rule-based Methods,"One common approach is to make repeated rule-based passes over a text, starting with rules with very high precision but low recall, and, in subsequent stages, using machine learning methods that take the output of the first pass into account (an approach first worked out for coreference (Lee et al., 2017a)):"
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.2,Rule-based Methods,"1. First, use high-precision rules to tag unambiguous entity mentions.
2. Then, search for substring matches of the previously detected names.
3. Use application-specific name lists to find likely domain-specific mentions.
4. Finally, apply supervised sequence labeling techniques that use tags from previous stages as additional features."
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.2,Rule-based Methods,"Rule-based methods were also the earliest methods for part-of-speech tagging. Rule-based taggers like the English Constraint Grammar system (Karlsson et al. 1995 , Voutilainen 1999 ) use a two-stage formalism invented in the 1950s and 1960s:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.2,Rule-based Methods,"(1) a morphological analyzer with tens of thousands of word stem entries returns all parts of speech for a word, then (2) a large set of thousands of constraints are applied to the input sentence to rule out parts of speech inconsistent with the context."
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.3,POS Tagging for Morphologically Rich Languages,"Augmentations to tagging algorithms become necessary when dealing with languages with rich morphology like Czech, Hungarian and Turkish."
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.3,POS Tagging for Morphologically Rich Languages,"These productive word-formation processes result in a large vocabulary for these languages: a 250,000 word token corpus of Hungarian has more than twice as many word types as a similarly sized corpus of English (Oravecz and Dienes, 2002) , while a 10 million word token corpus of Turkish contains four times as many word types as a similarly sized English corpus (Hakkani-Tür et al., 2002) . Large vocabularies mean many unknown words, and these unknown words cause significant performance degradations in a wide variety of languages (including Czech, Slovene, Estonian, and Romanian) (Hajič, 2000) ."
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.3,POS Tagging for Morphologically Rich Languages,"Highly inflectional languages also have much more information than English coded in word morphology, like case (nominative, accusative, genitive) or gender (masculine, feminine). Because this information is important for tasks like parsing and coreference resolution, part-of-speech taggers for morphologically rich languages need to label words with case and gender information. Tagsets for morphologically rich languages are therefore sequences of morphological tags rather than a single primitive tag. Here's a Turkish example, in which the word izin has three possible morphological/part-of-speech tags and meanings (Hakkani-Tür et al., 2002) :"
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.3,POS Tagging for Morphologically Rich Languages,"1. Yerdeki izin temizlenmesi gerek.
   iz + Noun+A3sg+Pnon+Gen    'The trace on the floor should be cleaned.'
2. iz + Noun+A3sg+P2sg+Nom    'Your finger print is left on (it).'
3. izin + Noun+A3sg+Pnon+Nom    'You need permission to enter.'"
8,Sequence Labeling for Parts of Speech and Named Entities,8.7,Further Details,8.7.3,POS Tagging for Morphologically Rich Languages,"Using a morphological parse sequence like Noun+A3sg+Pnon+Gen as the part-of-speech tag greatly increases the number of parts of speech, and so tagsets can be 4 to 10 times larger than the 50-100 tags we have seen for English. With such large tagsets, each word needs to be morphologically analyzed to generate the list of possible morphological tag sequences (part-of-speech tags) for the word. The role of the tagger is then to disambiguate among these tags. This method also helps with unknown words since morphological parsers can accept unknown stems and still segment the affixes properly."
8,Sequence Labeling for Parts of Speech and Named Entities,8.8,Summary,,,"This chapter introduced parts of speech and named entities, and the tasks of part-of-speech tagging and named entity recognition:"
8,Sequence Labeling for Parts of Speech and Named Entities,8.8,Summary,,,"• Languages generally have a small set of closed class words that are highly frequent, ambiguous, and act as function words, and open-class words like nouns, verbs, adjectives. Various part-of-speech tagsets exist, of between 40 and 200 tags.
• Part-of-speech tagging is the process of assigning a part-of-speech label to each of a sequence of words.
• Named entities are words for proper nouns referring mainly to people, places, and organizations, but extended to many other types that aren't strictly entities or even proper nouns."
8,Sequence Labeling for Parts of Speech and Named Entities,8.8,Summary,,,"• Two common approaches to sequence modeling are a generative approach, HMM tagging, and a discriminative approach, CRF tagging. We will see a neural approach in following chapters.
• The probabilities in HMM taggers are estimated by maximum likelihood estimation on tag-labeled training corpora. The Viterbi algorithm is used for decoding, finding the most likely tag sequence.
• Conditional Random Fields or CRF taggers train a log-linear model that can choose the best tag sequence given an observation sequence, based on features that condition on the output tag, the prior output tag, the entire input sequence, and the current timestep. They use the Viterbi algorithm for inference, to choose the best sequence of tags, and a version of the Forward-Backward algorithm (see Appendix A) for training."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"What is probably the earliest part-of-speech tagger was part of the parser in Zellig Harris's Transformations and Discourse Analysis Project (TDAP), implemented between June 1958 and July 1959 at the University of Pennsylvania (Harris, 1962), although earlier systems had used part-of-speech dictionaries. TDAP used 14 handwritten rules for part-of-speech disambiguation; the use of part-of-speech tag sequences and the relative frequency of tags for a word prefigures modern algorithms. The parser was implemented essentially as a cascade of finite-state transducers; see Joshi and Hopely (1999) and Karttunen (1999) for a reimplementation. The Computational Grammar Coder (CGC) of Klein and Simmons (1963) had three components: a lexicon, a morphological analyzer, and a context disambiguator. The small 1500-word lexicon listed only function words and other irregular words. The morphological analyzer used inflectional and derivational suffixes to assign part-of-speech classes. These were run over words to produce candidate parts of speech which were then disambiguated by a set of 500 context rules by relying on surrounding islands of unambiguous words. For example, one rule said that between an ARTICLE and a VERB, the only allowable sequences were ADJ-NOUN, NOUN-ADVERB, or NOUN-NOUN. The TAGGIT tagger (Greene and Rubin, 1971) used the same architecture as Klein and Simmons (1963) , with a bigger dictionary and more tags (87). TAGGIT was applied to the Brown corpus and, according to Francis and Kučera (1982, p. 9) , accurately tagged 77% of the corpus; the remainder of the Brown corpus was then tagged by hand. All these early algorithms were based on a two-stage architecture in which a dictionary was first used to assign each word a set of potential parts of speech, and then lists of handwritten disambiguation rules winnowed the set down to a single part of speech per word."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"Probabilities were used in tagging by Stolz et al. (1965) and a complete probabilistic tagger with Viterbi decoding was sketched by Bahl and Mercer (1976). The Lancaster-Oslo/Bergen (LOB) corpus, a British English equivalent of the Brown corpus, was tagged in the early 1980s with the CLAWS tagger (Marshall 1983; Marshall 1987; Garside 1987), a probabilistic algorithm that approximated a simplified HMM tagger. The algorithm used tag bigram probabilities, but instead of storing the word likelihood of each tag, the algorithm marked tags either as rare (P(tag|word) < .01), infrequent (P(tag|word) < .10), or normally frequent (P(tag|word) > .10)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"DeRose (1988) developed a quasi-HMM algorithm, including the use of dynamic programming, although computing P(t|w)P(w) instead of P(w|t)P(t). The same year, the probabilistic PARTS tagger of Church (1988, 1989) was probably the first implemented HMM tagger, described correctly in Church (1989), although Church (1988) also described the computation incorrectly as P(t|w)P(w) instead of P(w|t)P(t). Church (p.c.) explained that he had simplified for pedagogical purposes because using the probability P(t|w) made the idea seem more understandable as ""storing a lexicon in an almost standard form""."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"Later taggers explicitly introduced the use of the hidden Markov model (Kupiec 1992; Weischedel et al. 1993; Schütze and Singer 1994) . Merialdo (1994) showed that fully unsupervised EM didn't work well for the tagging task and that reliance on hand-labeled data was important. Charniak et al. (1993) showed the importance of the most frequent tag baseline; the 92.3% number we give above was from Abney et al. (1999) . See Brants (2000) for HMM tagger implementation details, including the extension to trigram contexts, and the use of sophisticated unknown word features; its performance is still close to state of the art taggers."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"Log-linear models for POS tagging were introduced by Ratnaparkhi (1996), whose MXPOST system implemented a maximum entropy Markov model (MEMM), a slightly simpler version of a CRF. Around the same time, sequence labelers were applied to the task of named entity tagging, first with HMMs (Bikel et al., 1997) and MEMMs (McCallum et al., 2000), and then once CRFs were developed (Lafferty et al. 2001), they were also applied to NER (McCallum and Li, 2003). A wide exploration of features followed (Zhou et al., 2005). Neural approaches to NER mainly follow from the pioneering results of Collobert et al. (2011), who applied a CRF on top of a convolutional net. BiLSTMs with word and character-based embeddings as input followed shortly and became a standard neural algorithm for NER (Huang et al. 2015, Ma and Hovy 2016, Lample et al. 2016), followed by the more recent use of Transformers and BERT."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"The idea of using letter suffixes for unknown words is quite old; the early Klein and Simmons (1963) system checked all final letter suffixes of lengths 1-5. The unknown word features described on page 169 come mainly from Ratnaparkhi (1996) , with augmentations from Toutanova et al. (2003) and Manning (2011)."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"State-of-the-art POS taggers use neural algorithms, either bidirectional RNNs or Transformers like BERT; see Chapter 9 and Chapter 11. HMM (Brants 2000; Thede and Harper 1999) and CRF tagger accuracies are likely just a tad lower."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"Manning (2011) investigates the remaining 2.7% of errors in a high-performing tagger (Toutanova et al., 2003). He suggests that a third to a half of these remaining errors are due to errors or inconsistencies in the training data, that a third might be solvable with richer linguistic models, and that for the remainder the task is underspecified or unclear."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"Supervised tagging relies heavily on in-domain training data hand-labeled by experts. Ways to relax this assumption include unsupervised algorithms for clustering words into part-of-speech-like classes, summarized in Christodoulopoulos et al. (2010), and ways to combine labeled and unlabeled data, for example by co-training (Clark et al. 2003; Søgaard 2010) ."
8,Sequence Labeling for Parts of Speech and Named Entities,8.9,Bibliographical and Historical Notes,,,"See Householder (1995) for historical notes on parts of speech, and Sampson (1987) and Garside et al. (1997)"
9,Deep Learning Architectures for Sequence Processing,,,,,"Language is an inherently temporal phenomenon. Spoken language is a sequence of acoustic events over time, and we comprehend and produce both spoken and written language as a continuous input stream. The temporal nature of language is reflected in the metaphors we use; we talk of the flow of conversations, news feeds, and twitter streams, all of which emphasize that language is a sequence that unfolds in time. This temporal nature is reflected in some of the algorithms we use to process language. For example, the Viterbi algorithm applied to HMM part-of-speech tagging proceeds through the input a word at a time, carrying forward information gleaned along the way. Yet other machine learning approaches, like those we've studied for sentiment analysis or other text classification tasks, don't have this temporal nature; they assume simultaneous access to all aspects of their input."
9,Deep Learning Architectures for Sequence Processing,,,,,"The feedforward networks of Chapter 7 also assumed simultaneous access, although they also had a simple model for time. Recall that we applied feedforward networks to language modeling by having them look only at a fixed-size window of words, and then sliding this window over the input, making independent predictions along the way. Fig. 9.1, reproduced from Chapter 7, shows a neural language model with window size 3 predicting what word follows the input for all the. Subsequent words are predicted by sliding the window forward a word at a time."
9,Deep Learning Architectures for Sequence Processing,,,,,"The simple feedforward sliding-window is promising, but isn't a completely satisfactory solution to temporality. By using embeddings as inputs, it does solve the main problem of the simple n-gram models of Chapter 3 (recall that n-grams were based on words rather than embeddings, making them too literal, unable to generalize across contexts of similar words). But feedforward networks still share another weakness of n-gram approaches: limited context. Anything outside the context window has no impact on the decision being made, yet many language tasks require access to information that can be arbitrarily distant from the current word. A second weakness is that the use of windows makes it difficult for networks to learn systematic patterns arising from phenomena like constituency and compositionality: the way the meanings of words in phrases combine together. For example, in Fig. 9.1 the phrase all the appears in one window in the second and third positions, and in the next window in the first and second positions, forcing the network to learn two separate patterns for what should be the same item."
9,Deep Learning Architectures for Sequence Processing,,,,,"This chapter introduces two important deep learning architectures designed to address these challenges: recurrent neural networks and transformer networks. Both approaches have mechanisms to deal directly with the sequential nature of language that allow them to capture and exploit the temporal nature of language. The recurrent network offers a new way to represent the prior context, allowing the model's decision to depend on information from hundreds of words in the past. The transformer offers new mechanisms (self-attention and positional encodings) that help represent time and help focus on how words relate to each other over long distances. We'll see how to apply both models to the task of language modeling, to sequence modeling tasks like part-of-speech tagging, and to text classification tasks like sentiment analysis."
9,Deep Learning Architectures for Sequence Processing,,,,,"Figure 9.1 Simplified sketch of a feedforward neural language model moving through a text. At each time step t the network converts N context words, each to a d-dimensional embedding, and concatenates the N embeddings together to get the Nd × 1 unit input vector x for the network. The output of the network is a probability distribution over the vocabulary representing the model's belief with respect to each word being the next possible word."
9,Deep Learning Architectures for Sequence Processing,9.1,Language Models Revisited,,,"In this chapter, we'll begin exploring the RNN and transformer architectures through the lens of probabilistic language models, so let's briefly remind ourselves of the framework for language modeling. Recall from Chapter 3 that probabilistic language models predict the next word in a sequence given some preceding context. For example, if the preceding context is ""Thanks for all the"" and we want to know how likely the next word is ""fish"", we would compute P(fish|Thanks for all the)."
9,Deep Learning Architectures for Sequence Processing,9.1,Language Models Revisited,,,"Language models give us the ability to assign such a conditional probability to every possible next word, giving us a distribution over the entire vocabulary. We can also assign probabilities to entire sequences by using these conditional probabilities in combination with the chain rule:"
9,Deep Learning Architectures for Sequence Processing,9.1,Language Models Revisited,,,"P(w_{1:n}) = \prod_{i=1}^{n} P(w_i | w_{1:i-1})"
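9,Deep Learning Architectures for Sequence Processing,9.1,Language Models Revisited,,,"For example, applying the chain rule to the sequence from the example above decomposes its probability into a product of per-word conditional probabilities: P(Thanks for all the fish) = P(Thanks) × P(for | Thanks) × P(all | Thanks for) × P(the | Thanks for all) × P(fish | Thanks for all the)."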
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"[Figure 9.6: training an RNN as a language model, showing the per-word cross-entropy loss terms (e.g., log y_thanks, log y_for, log y_all) that are averaged over the training sequence.]"
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"Training the model by always giving it the correct history sequence to predict the next word (rather than feeding the model its best case from the previous time step) is called teacher forcing."
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"The weights in the network are adjusted to minimize the average CE loss over the training sequence via gradient descent. Fig. 9.6 illustrates this training regimen."
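9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"As a concrete illustration (not the book's reference code), the following minimal numpy sketch computes the average cross-entropy loss for one training sequence with teacher forcing; the names (recurrent weights U, input weights W, output matrix V, embedding matrix E) follow the notation used in this chapter, but the function itself is a simplification that omits biases and batching.

import numpy as np

def softmax(z):
    z = z - np.max(z)            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def sequence_loss(tokens, E, U, W, V):
    # tokens: list of vocabulary indices w_1 ... w_n
    # E: |V| x d embedding matrix, U: d x d recurrent weights,
    # W: d x d input weights, V: |V| x d output weights
    d = U.shape[0]
    h = np.zeros(d)
    losses = []
    # teacher forcing: the input at each step is the correct previous token,
    # not the model's own prediction
    for t in range(len(tokens) - 1):
        x = E[tokens[t]]                          # embedding of the gold current word
        h = np.tanh(U @ h + W @ x)                # simple RNN update
        y = softmax(V @ h)                        # distribution over the next word
        losses.append(-np.log(y[tokens[t + 1]]))  # CE loss for the gold next word
    return np.mean(losses)                        # average CE loss over the sequence"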
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"Careful readers may have noticed that the input embedding matrix E and the final layer matrix V, which feeds the output softmax, are quite similar. The rows of E represent the word embeddings for each word in the vocabulary learned during the training process with the goal that words that have similar meaning and function will have similar embeddings. And, since the length of these embeddings corresponds to the size of the hidden layer d_h, the shape of the embedding matrix E is |V| × d_h."
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"The final layer matrix V provides a way to score the likelihood of each word in the vocabulary given the evidence present in the final hidden layer of the network through the calculation of Vh. This entails that it also has the dimensionality |V| × d_h. That is, the rows of V provide a second set of learned word embeddings that capture relevant aspects of word meaning and function. This leads to an obvious question: is it even necessary to have both? Weight tying is a method that dispenses with this redundancy and uses a single set of embeddings at the input and softmax layers. That is, E = V. To do this, we set the dimensionality of the final hidden layer to be the same d_h (or add an additional projection layer to do the same thing), and simply use the same matrix for both layers. In addition to providing improved perplexity results, this approach significantly reduces the number of parameters required for the model."
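9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"A minimal sketch of weight tying, assuming the notation above: a single shared |V| × d_h matrix E is used both for input lookup and, in place of a separate V, for computing output scores. This is an illustration of the idea, not a full language model.

import numpy as np

vocab_size, d_h = 10000, 256
E = np.random.randn(vocab_size, d_h) * 0.01   # shared |V| x d_h embedding matrix

def embed(word_id):
    return E[word_id]            # input lookup: one row of E

def output_logits(h):
    return E @ h                 # tied softmax layer: reuse E in place of a separate V

# with weight tying, the model stores one |V| x d_h matrix instead of two"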
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,,,"Now that we've seen the basic RNN architecture, let's consider how to apply it to three types of NLP tasks: sequence classification tasks like sentiment analysis and topic classification, sequence labeling tasks like part-of-speech tagging, and text generation tasks. And we'll see in Chapter 10 how to use them for encoder-decoder approaches to summarization, machine translation, and question answering."
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.1,Sequence Labeling,"In sequence labeling, the network's task is to assign a label chosen from a small fixed set of labels to each element of a sequence, like the part-of-speech tagging and named entity recognition tasks from Chapter 8. In an RNN approach to sequence labeling, inputs are word embeddings and the outputs are tag probabilities generated by a softmax layer over the given tagset, as illustrated in Fig. 9.7."
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.1,Sequence Labeling,"Figure 9.7 Part-of-speech tagging as sequence labeling with a simple RNN. Pre-trained word embeddings serve as inputs and a softmax layer provides a probability distribution over the part-of-speech tags as output at each time step."
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.1,Sequence Labeling,"In this figure, the inputs at each time step are pre-trained word embeddings corresponding to the input tokens. The RNN block is an abstraction that represents an unrolled simple recurrent network consisting of an input layer, hidden layer, and output layer at each time step, as well as the shared U, V and W weight matrices that comprise the network. The outputs of the network at each time step represent the distribution over the POS tagset generated by a softmax layer."
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.1,Sequence Labeling,"To generate a sequence of tags for a given input, we run forward inference over the input sequence and select the most likely tag from the softmax at each step. Since we're using a softmax layer to generate the probability distribution over the output tagset at each time step, we will again employ the cross-entropy loss during training."
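9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.1,Sequence Labeling,"The following is a minimal numpy sketch of greedy forward inference for RNN sequence labeling; the weight names U, W, V mirror the abstraction in Fig. 9.7, and the simple-RNN update is a stand-in for whatever recurrent unit is actually used.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def rnn_tag(embeddings, U, W, V, tagset):
    # embeddings: list of pre-trained word vectors for the input tokens
    # U, W: recurrent and input weights; V: maps the hidden state to tag scores
    h = np.zeros(U.shape[0])
    tags = []
    for x in embeddings:
        h = np.tanh(U @ h + W @ x)              # update the hidden state
        p = softmax(V @ h)                      # distribution over the tagset
        tags.append(tagset[int(np.argmax(p))])  # greedy: most likely tag at this step
    return tags"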
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.2,RNNs for Sequence Classification,"Another use of RNNs is to classify entire sequences rather than the tokens within them. We've already encountered sentiment analysis in Chapter 4, in which we classify a text as positive or negative. Other sequence classification tasks for mapping sequences of text to one from a small set of categories include document-level topic classification, spam detection, or message routing for customer service applications."
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.2,RNNs for Sequence Classification,"To apply RNNs in this setting, we pass the text to be classified through the RNN a word at a time generating a new hidden layer at each time step. We can then take the hidden layer for the last token of the text, h_n, to constitute a compressed representation of the entire sequence. We can pass this representation h_n to a feedforward network that chooses a class via a softmax over the possible classes. Fig. 9.8 illustrates this approach."
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.2,RNNs for Sequence Classification,"Note that in this approach there is no need for intermediate outputs for the words in the sequence preceding the last element, and therefore there are no loss terms associated with those elements. Instead, the loss function used to train the weights in the network is based entirely on the final text classification task. The softmax output from the feedforward classifier together with a cross-entropy loss drives the training. The error signal from the classification is backpropagated all the way through the weights of the feedforward classifier to its input, and then through to the three sets of weights in the RNN as described earlier in Section 9.2.2. The training regimen that uses the loss from a downstream application to adjust the weights all the way through the network is referred to as end-to-end training."
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.2,RNNs for Sequence Classification,"Another option, instead of using just the last token h_n to represent the whole sequence, is to use some sort of pooling function of all the hidden states h_i for each word i in the sequence. For example, we can create a representation that pools all the n hidden states by taking their element-wise mean:"
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.2,RNNs for Sequence Classification,h_mean = (1/n) Σ_{i=1}^{n} h_i    (9.14)
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.2,RNNs for Sequence Classification,Or we can take the element-wise max; the element-wise max of a set of n vectors is a new vector whose kth element is the max of the kth elements of all the n vectors.
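9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.2,RNNs for Sequence Classification,"A short sketch contrasting the three ways of summarizing the hidden states described above (final state, element-wise mean as in Eq. 9.14, and element-wise max); the random matrix simply stands in for the RNN's hidden states.

import numpy as np

# hidden: n x d_h matrix whose rows are the hidden states h_1 ... h_n of the RNN
hidden = np.random.randn(7, 16)

h_last = hidden[-1]             # use the final hidden state h_n
h_mean = hidden.mean(axis=0)    # element-wise mean over all hidden states (Eq. 9.14)
h_max  = hidden.max(axis=0)     # element-wise max: kth element is the max of the kth elements

# any of these d_h-dimensional vectors can be passed to the feedforward classifier"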
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.3,Generation with RNN-Based Language Models,"RNN-based language models can also be used to generate text. Text generation is of enormous practical importance, part of tasks like question answering, machine translation, text summarization, and conversational dialogue; any task where a system needs to produce text, conditioned on some other text. Recall back in Chapter 3 we saw how to generate text from an n-gram language model by adapting a technique suggested contemporaneously by Claude Shannon (Shannon, 1951) and the psychologists George Miller and Selfridge (Miller and Selfridge, 1950). We first randomly sample a word to begin a sequence based on its suitability as the start of a sequence. We then continue to sample words conditioned on our previous choices until we reach a pre-determined length, or an end of sequence token is generated."
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.3,Generation with RNN-Based Language Models,"Today, this approach of using a language model to incrementally generate words by repeatedly sampling the next word conditioned on our previous choices is called autoregressive generation. The procedure is basically the same as that described on page 38, but now in a neural context (a code sketch follows the steps below):"
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.3,Generation with RNN-Based Language Models,"• Sample a word in the output from the softmax distribution that results from using the beginning of sentence marker, <s>, as the first input. • Use the word embedding for that first word as the input to the network at the next time step, and then sample the next word in the same fashion. • Continue generating until the end of sentence marker, </s>, is sampled or a fixed length limit is reached."
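9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.3,Generation with RNN-Based Language Models,"A minimal numpy sketch of this autoregressive sampling loop; bos_id and eos_id stand for the vocabulary indices of <s> and </s>, and the simple-RNN update is an illustrative stand-in for the full model.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def generate(E, U, W, V, bos_id, eos_id, max_len=50, rng=np.random.default_rng()):
    # E: embedding matrix; U, W, V: RNN and output weights
    # bos_id / eos_id: indices of the <s> and </s> markers in the vocabulary
    h = np.zeros(U.shape[0])
    word = bos_id
    output = []
    for _ in range(max_len):
        h = np.tanh(U @ h + W @ E[word])       # feed back the previously sampled word
        p = softmax(V @ h)
        word = int(rng.choice(len(p), p=p))    # sample the next word from the softmax
        if word == eos_id:                     # stop when </s> is sampled
            break
        output.append(word)
    return output"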
9,Deep Learning Architectures for Sequence Processing,9.4,RNNs for other NLP tasks,9.4.3,Generation with RNN-Based Language Models,"Technically an autoregressive model is a model that predicts a value at time t based on a linear function of the previous values at times t − 1, t − 2, and so on. Although language models are not linear (since they have many layers of non-linearities), we loosely refer to this generation technique as autoregressive generation since the word generated at each time step is conditioned on the word selected by the network from the previous step. Fig. 9.9 illustrates this approach. In this figure, the details of the RNN's hidden layers and recurrent connections are hidden within the blue block. This simple architecture underlies state-of-the-art approaches to applications such as machine translation, summarization, and question answering. The key to these approaches is to prime the generation component with an appropriate context. That is, instead of simply using <s> to get things started we can provide a richer task-appropriate context; for translation the context is the sentence in the source language; for summarization it's the long text we want to summarize. We'll discuss the application of contextual generation to the problem of summarization in Section 9.9 in the context of transformer-based language models, and then again in Chapter 10 when we introduce encoder-decoder models."
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,,,"Recurrent networks are quite flexible. By combining the feedforward nature of unrolled computational graphs with vectors as common inputs and outputs, complex networks can be treated as modules that can be combined in creative ways. This"
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,,,section introduces two of the more common network architectures used in language processing with RNNs.
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.1,Stacked RNNs,"In our examples thus far, the inputs to our RNNs have consisted of sequences of word or character embeddings (vectors) and the outputs have been vectors useful for predicting words, tags or sequence labels. However, nothing prevents us from using the entire sequence of outputs from one RNN as an input sequence to another one. Stacked RNNs generally outperform single-layer networks. One reason for this success seems to be that the network induces representations at differing levels of abstraction across layers. Just as the early stages of the human visual system detect edges that are then used for finding larger regions and shapes, the initial layers of stacked networks can induce representations that serve as useful abstractions for further layers -representations that might prove difficult to induce in a single RNN. The optimal number of stacked RNNs is specific to each application and to each training set. However, as the number of stacks is increased the training costs rise quickly."
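9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.1,Stacked RNNs,"A minimal sketch of stacking: the entire output sequence of one simple RNN layer is fed as the input sequence to the next. The rnn_layer helper is a simplified stand-in (no biases, no gating) for whatever recurrent unit is used in practice.

import numpy as np

def rnn_layer(inputs, U, W):
    # run a simple RNN over the whole sequence, returning one hidden state per input
    h = np.zeros(U.shape[0])
    outputs = []
    for x in inputs:
        h = np.tanh(U @ h + W @ x)
        outputs.append(h)
    return outputs

def stacked_rnn(inputs, layers):
    # layers: list of (U, W) pairs, one per stacked RNN;
    # the first W maps the input dimension to the hidden dimension,
    # later Ws map hidden to hidden
    seq = inputs
    for U, W in layers:
        seq = rnn_layer(seq, U, W)   # the full output sequence becomes the next layer's input
    return seq"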
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,"The RNN uses information from the left (prior) context to make its predictions at time t. But in many applications we have access to the entire input sequence; in those cases we would like to use words from the context to the right of t. One way to do this is to run two separate RNNs, one left-to-right, and one right-to-left, and concatenate their representations."
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,"In the left-to-right RNNs we've discussed so far, the hidden state at a given time t represents everything the network knows about the sequence up to that point. The state is a function of the inputs x_1, ..., x_t and represents the context of the network to the left of the current time."
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,"h^f_t = RNN_forward(x_1, ..., x_t);  h^b_t = RNN_backward(x_t, ..., x_n);  h_t = [h^f_t ⊕ h^b_t]    (9.15)"
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,"Fig. 9.11 illustrates such a bidirectional network that concatenates the outputs of the forward and backward passes. Other simple ways to combine the forward and backward contexts include element-wise addition or multiplication. The output at each step in time thus captures information to the left and to the right of the current input. In sequence labeling applications, these concatenated outputs can serve as the basis for a local labeling decision."
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,RNN 1
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,x 1 y 2 y 1 y 3 y n concatenated outputs
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,x 2
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,"x 3 x n Bidirectional RNNs have also proven to be quite effective for sequence classification. Recall from Fig. 9 .8 that for sequence classification we used the final hidden state of the RNN as the input to a subsequent feedforward classifier. A difficulty with this approach is that the final state naturally reflects more information about the end of the sentence than its beginning. Bidirectional RNNs provide a simple solution to this problem; as shown in Fig. 9 .12, we simply combine the final hidden states from the forward and backward passes (for example by concatenation) and use that as input for follow-on processing."
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,RNN 2 RNN 1 x 1 x 2 x 3 x n h n → h 1 ← h n → Softmax FFN h 1 ← Figure 9
9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,.12 A bidirectional RNN for sequence classification. The final hidden units from the forward and backward passes are combined to represent the entire sequence. This combined representation serves as input to the subsequent classifier.
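9,Deep Learning Architectures for Sequence Processing,9.5,Stacked and Bidirectional RNN Architectures,9.5.2,Bidirectional RNNs,"A minimal sketch of a bidirectional RNN built from two simple RNN passes; it returns both the per-step concatenated states used for sequence labeling and a combined sentence representation (final forward state concatenated with the final backward state) of the kind used for classification in Fig. 9.12. The helper names are illustrative, not a standard API.

import numpy as np

def run_rnn(inputs, U, W):
    h = np.zeros(U.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(U @ h + W @ x)
        states.append(h)
    return states

def birnn(inputs, fwd_weights, bwd_weights):
    U_f, W_f = fwd_weights
    U_b, W_b = bwd_weights
    fwd = run_rnn(inputs, U_f, W_f)               # left-to-right pass
    bwd = run_rnn(inputs[::-1], U_b, W_b)[::-1]   # right-to-left pass, realigned to input order
    per_step = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]   # for sequence labeling
    sentence = np.concatenate([fwd[-1], bwd[0]])  # final states of both passes, for classification
    return per_step, sentence"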
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"In practice, it is quite difficult to train RNNs for tasks that require a network to make use of information distant from the current point of processing. Despite having access to the entire preceding sequence, the information encoded in hidden states tends to be fairly local, more relevant to the most recent parts of the input sequence and recent decisions. Yet distant information is critical to many language applications. Consider the following example in the context of language modeling."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,(9.18) The flights the airline was cancelling were full.
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"Assigning a high probability to was following airline is straightforward since airline provides a strong local context for the singular agreement. However, assigning an appropriate probability to were is quite difficult, not only because the plural flights is quite distant, but also because the intervening context involves singular constituents. Ideally, a network should be able to retain the distant information about plural flights until it is needed, while still processing the intermediate parts of the sequence correctly. One reason for the inability of RNNs to carry forward critical information is that the hidden layers, and, by extension, the weights that determine the values in the hidden layer, are being asked to perform two tasks simultaneously: providing information useful for the current decision, and updating and carrying forward information required for future decisions."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"A second difficulty with training RNNs arises from the need to backpropagate the error signal back through time. Recall from Section 9.2.2 that the hidden layer at time t contributes to the loss at the next time step since it takes part in that calculation. As a result, during the backward pass of training, the hidden layers are subject to repeated multiplications, as determined by the length of the sequence. A frequent result of this process is that the gradients are eventually driven to zero, a situation called the vanishing gradients problem."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"To address these issues, more complex network architectures have been designed to explicitly manage the task of maintaining relevant context over time, by enabling the network to learn to forget information that is no longer needed and to remember information required for decisions still to come."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"The most commonly used such extension to RNNs is the long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997). LSTMs divide the context management problem into two sub-problems: removing information no longer needed from the context, and adding information likely to be needed for later decision making. The key to solving both problems is to learn how to manage this context rather than hard-coding a strategy into the architecture. LSTMs accomplish this by first adding an explicit context layer to the architecture (in addition to the usual recurrent hidden layer), and through the use of specialized neural units that make use of gates to control the flow of information into and out of the units that comprise the network layers. These gates are implemented through the use of additional weights that operate sequentially on the input, the previous hidden layer, and the previous context layer. The gates in an LSTM share a common design pattern; each consists of a feedforward layer, followed by a sigmoid activation function, followed by a pointwise multiplication with the layer being gated. The choice of the sigmoid as the activation function arises from its tendency to push its outputs to either 0 or 1. Combining this with a pointwise multiplication has an effect similar to that of a binary mask. Values in the layer being gated that align with values near 1 in the mask are passed through nearly unchanged; values corresponding to lower values are essentially erased."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"The first gate we'll consider is the forget gate. The purpose of this gate is to delete information from the context that is no longer needed. The forget gate computes a weighted sum of the previous state's hidden layer and the current input and passes that through a sigmoid. This mask is then multiplied element-wise by the context vector to remove the information from context that is no longer required. Element-wise multiplication of two vectors (represented by the operator ⊙, and sometimes called the Hadamard product) is the vector of the same dimension as the two input vectors, where each element i is the product of element i in the two input vectors:"
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,f t = σ (U f h t−1 + W f x t ) (9.19) k t = c t−1 f t (9.20)
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"The next task is to compute the actual information we need to extract from the previous hidden state and current inputs, the same basic computation we've been using for all our recurrent networks."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,g t = tanh(U g h t−1 + W g x t ) (9.21)
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"Next, we generate the mask for the add gate to select the information to add to the current context. We then add this to the modified context vector to get our new context vector."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"i_t = σ(U_i h_{t−1} + W_i x_t)    (9.22)
j_t = g_t ⊙ i_t    (9.23)
c_t = j_t + k_t    (9.24)"
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"The final gate we'll use is the output gate, which is used to decide what information is required for the current hidden state (as opposed to what information needs to be preserved for future decisions)."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,o t = σ (U o h t−1 + W o x t ) (9.25) h t = o t tanh(c t )
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"(9.26) Fig. 9 .13 illustrates the complete computation for a single LSTM unit. Given the appropriate weights for the various gates, an LSTM accepts as input the context layer, and hidden layer from the previous time step, along with the current input vector. It then generates updated context and hidden vectors as output. The hidden layer, h t , can be used as input to subsequent layers in a stacked RNN, or to generate an output for the final layer of a network."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,9.6.1,"Gated Units, Layers and Networks","The neural units used in LSTMs are obviously much more complex than those used in basic feedforward networks. Fortunately, this complexity is encapsulated within the basic processing units, allowing us to maintain modularity and to easily experiment with different architectures. To see this, consider Fig. 9.14 which illustrates the inputs and outputs associated with each kind of unit. At the far left, (a) is the basic feedforward unit where a single set of weights and a single activation function determine its output, and when arranged in a layer there are no connections among the units in the layer. Next, (b) represents the unit in a simple recurrent network. Now there are two inputs and an additional set of weights to go with it. However, there is still a single activation function and output."
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,9.6.1,"Gated Units, Layers and Networks",The increased complexity of the LSTM units is encapsulated within the unit itself. The only additional external complexity for the LSTM over the basic recurrent unit (b) is the presence of the additional context vector as an input and output.
9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,9.6.1,"Gated Units, Layers and Networks","This modularity is key to the power and widespread applicability of LSTM units. LSTM units (or other varieties, like GRUs) can be substituted into any of the network architectures described in Section 9.5. And, as with simple RNNs, multi-layered networks making use of gated units can be unrolled into deep feedforward networks and trained in the usual fashion with backpropagation."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"While the addition of gates allows LSTMs to handle more distant information than RNNs, they don't completely solve the underlying problem: passing information through an extended series of recurrent connections leads to information loss and difficulties in training. Moreover, the inherently sequential nature of recurrent networks makes it hard to do computation in parallel. These considerations led to the development of transformers, an approach to sequence processing that eliminates recurrent connections and returns to architectures reminiscent of the fully connected networks described earlier in Chapter 7. Transformers map sequences of input vectors (x_1, ..., x_n) to sequences of output vectors (y_1, ..., y_n) of the same length. Transformers are made up of stacks of transformer blocks, which are multilayer networks made by combining simple linear layers, feedforward networks, and self-attention layers, the key innovation of transformers. Self-attention allows a network to directly extract and use information from arbitrarily large contexts without the need to pass it through intermediate recurrent connections as in RNNs. We'll start by describing how self-attention works and then return to how it fits into larger transformer blocks. Fig. 9.15 illustrates the flow of information in a single causal, or backward looking, self-attention layer. As with the overall transformer, a self-attention layer maps input sequences (x_1, ..., x_n) to output sequences of the same length (y_1, ..., y_n). When processing each item in the input, the model has access to all of the inputs up to and including the one under consideration, but no access to information about inputs beyond the current one. In addition, the computation performed for each item is independent of all the other computations. The first point ensures that we can use this approach to create language models and use them for autoregressive generation, and the second point means that we can easily parallelize both forward inference and training of such models."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,At the core of an attention-based approach is the ability to compare an item of 9.7 • SELF-ATTENTION NETWORKS: TRANSFORMERS 195
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,Self-Attention Layer
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,x 1 y 1
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,x 2 y 2 y 3 y 4 y 5
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,x 3
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,x 4
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"x 5 Figure 9 .15 Information flow in a causal (or masked) self-attention model. In processing each element of the sequence, the model attends to all the inputs up to, and including, the current one. Unlike RNNs, the computations at each time step are independent of all the other steps and therefore can be performed in parallel."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"interest to a collection of other items in a way that reveals their relevance in the current context. In the case of self-attention, the set of comparisons are to other elements within a given sequence. The result of these comparisons is then used to compute an output for the current input. For example, returning to Fig. 9 .15, the computation of y 3 is based on a set of comparisons between the input x 3 and its preceding elements x 1 and x 2 , and to x 3 itself. The simplest form of comparison between elements in a self-attention layer is a dot product. Let's refer to the result of this comparison as a score (we'll be updating this equation to add attention to the computation of this score):"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"score(x_i, x_j) = x_i · x_j    (9.27)"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"The result of a dot product is a scalar value ranging from −∞ to ∞; the larger the value, the more similar the vectors that are being compared. Continuing with our example, the first step in computing y_3 would be to compute three scores:"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"x 3 • x 1 , x 3 • x 2 and x 3 • x 3 ."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Then to make effective use of these scores, we'll normalize them with a softmax to create a vector of weights, α_{ij}, that indicates the proportional relevance of each input to the input element i that is the current focus of attention."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"α_{ij} = softmax(score(x_i, x_j))  ∀ j ≤ i    (9.28)
     = exp(score(x_i, x_j)) / Σ_{k=1}^{i} exp(score(x_i, x_k))  ∀ j ≤ i    (9.29)"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Given the proportional scores in α, we then generate an output value y_i by taking the sum of the inputs seen so far, weighted by their respective α value."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,y i = j≤i α i j x j (9.30)
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"The steps embodied in Equations 9.27 through 9.30 represent the core of an attention-based approach: a set of comparisons to relevant items in some context, a normalization of those scores to provide a probability distribution, followed by a weighted sum using this distribution. The output y is the result of this straightforward computation over the inputs."
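9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"A minimal numpy sketch of this simple causal self-attention (Eqs. 9.27-9.30), computing each output as a softmax-weighted sum of the inputs seen so far; it loops over positions for clarity rather than efficiency.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def simple_causal_self_attention(X):
    # X: N x d matrix of input vectors x_1 ... x_N
    outputs = []
    for i in range(X.shape[0]):
        scores = np.array([X[i] @ X[j] for j in range(i + 1)])  # dot-product scores (9.27)
        alpha = softmax(scores)                                 # normalized weights (9.28-9.29)
        y_i = alpha @ X[: i + 1]                                # weighted sum of inputs seen so far (9.30)
        outputs.append(y_i)
    return np.stack(outputs)"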
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"This kind of simple attention can be useful, and indeed we'll see in Chapter 10 how to use this simple idea of attention for LSTM-based encoder-decoder models for machine translation."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,But transformers allow us to create a more sophisticated way of representing how words can contribute to the representation of longer inputs. Consider the three different roles that each input embedding plays during the course of the attention process.
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,• As the current focus of attention when being compared to all of the other preceding inputs. We'll refer to this role as a query.
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,• In its role as a preceding input being compared to the current focus of attention. We'll refer to this role as a key.
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"• And finally, as a value used to compute the output for the current focus of attention."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"To capture these three different roles, transformers introduce weight matrices W^Q, W^K, and W^V. These weights will be used to project each input vector x_i into a representation of its role as a key, query, or value."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,q i = W Q x i ; k i = W K x i ; v i = W V x i (9.31)
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"The inputs x and outputs y of transformers, as well as the intermediate vectors after the various layers, all have the same dimensionality 1 × d. For now let's assume the dimensionalities of the transform matrices are W^Q ∈ R^{d×d}, W^K ∈ R^{d×d}, and W^V ∈ R^{d×d}. Later we'll need separate dimensions for these matrices when we introduce multi-headed attention, so let's just make a note that we'll have a dimension d_k for the key and query vectors, and a dimension d_v for the value vectors, both of which for now we'll set to d. In the original transformer work (Vaswani et al., 2017), d was 1024."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Given these projections, the score between a current focus of attention, x_i, and an element in the preceding context, x_j, consists of a dot product between its query vector q_i and the preceding element's key vector k_j. This dot product has the right shape since both the query and the key are of dimensionality 1 × d. Let's update our previous comparison calculation to reflect this, replacing Eq. 9.27 with Eq. 9.32:"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"score(x_i, x_j) = q_i · k_j    (9.32)"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"The ensuing softmax calculation resulting in α_{ij} remains the same, but the output calculation for y_i is now based on a weighted sum over the value vectors v. Fig. 9.16 illustrates this calculation in the case of computing the third output y_3 in a sequence."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,y i = j≤i α i j v j (9.33)
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"The result of a dot product can be an arbitrarily large (positive or negative) value. Exponentiating such large values can lead to numerical issues and to an effective loss of gradients during training. To avoid this, the dot product needs to be scaled in a suitable fashion. A scaled dot-product approach divides the result of the dot product by a factor related to the size of the embeddings before passing them through the softmax. A typical approach is to divide the dot product by the square root of the dimensionality of the query and key vectors (d_k), leading us to update our scoring function one more time, replacing Eq. 9.27 and Eq. 9.32 with Eq. 9.34:"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"score(x_i, x_j) = (q_i · k_j) / √d_k    (9.34)"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"This description of the self-attention process has been from the perspective of computing a single output at a single time step i. However, since each output, y_i, is computed independently, this entire process can be parallelized by taking advantage of efficient matrix multiplication routines by packing the input embeddings of the N tokens of the input sequence into a single matrix X ∈ R^{N×d}. That is, each row of X is the embedding of one token of the input. We then multiply X by the key, query, and value matrices (all of dimensionality d × d) to produce matrices Q ∈ R^{N×d}, K ∈ R^{N×d}, and V ∈ R^{N×d}, containing all the key, query, and value vectors:"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,Q = XW Q ; K = XW K ; V = XW V (9.35)
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Given these matrices we can compute all the requisite query-key comparisons simultaneously by multiplying Q and Kᵀ in a single matrix multiplication (the product is of shape N × N; Fig. 9.17 shows a visualization). Taking this one step further, we can scale these scores, take the softmax, and then multiply the result by V, resulting in a matrix of shape N × d: a vector embedding representation for each token in the input. We've reduced the entire self-attention step for an entire sequence of N tokens to the following computation:"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"SelfAttention(Q, K, V) = softmax(QKᵀ / √d_k) V    (9.36)"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Unfortunately, this process goes a bit too far since the calculation of the comparisons in QKᵀ results in a score for each query value to every key value, including those that follow the query. This is inappropriate in the setting of language modeling since guessing the next word is pretty simple if you already know it. To fix this, the elements in the upper-triangular portion of the matrix are zeroed out (set to −∞), thus eliminating any knowledge of words that follow in the sequence. Fig. 9.17 depicts the QKᵀ matrix. (We'll see in Chapter 11 how to make use of words in the future for tasks that need it.)"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Figure 9.17 The N × N QKᵀ matrix showing the q_i · k_j values, with the upper-triangle portion of the comparisons matrix zeroed out (set to −∞, which the softmax will turn to zero)."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Fig. 9.17 also makes it clear that attention is quadratic in the length of the input, since at each layer we need to compute dot products between each pair of tokens in the input. This makes it extremely expensive for the input to a transformer to consist of long documents (like entire Wikipedia pages, or novels), and so most applications have to limit the input length, for example to at most a page or a paragraph of text at a time. Finding more efficient attention mechanisms is an ongoing research direction."
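9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Putting the pieces together, here is a minimal numpy sketch of causal scaled dot-product self-attention in matrix form (Eqs. 9.35-9.36 plus the upper-triangular mask); the projection matrices are passed in explicitly, and the implementation favors readability over efficiency.

import numpy as np

def causal_self_attention(X, W_Q, W_K, W_V):
    # X: N x d input embeddings; W_Q, W_K: d x d_k and W_V: d x d_v projection matrices
    N = X.shape[0]
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = Q.shape[1]
    scores = Q @ K.T / np.sqrt(d_k)               # all query-key comparisons, scaled (9.34)
    mask = np.triu(np.ones((N, N), dtype=bool), k=1)
    scores[mask] = -np.inf                        # mask positions that follow each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                            # N x d_v output, one vector per token (9.36)"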
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,"The self-attention calculation lies at the core of what's called a transformer block, which, in addition to the self-attention layer, includes additional feedforward layers, residual connections, and normalizing layers. The input and output dimensions of these blocks are matched so they can be stacked just as was the case for stacked RNNs. Fig. 9.18 illustrates a standard transformer block consisting of a single attention layer followed by a fully-connected feedforward layer with residual connections and layer normalizations following each. We've already seen feedforward layers in Chapter 7, but what are residual connections and layer norm? In deep networks, residual connections are connections that pass information from a lower layer to a higher layer without going through the intermediate layer. Allowing information from the activation going forward and the gradient going backwards to skip a layer improves learning and gives higher level layers direct access to information from lower layers (He et al., 2016). Residual connections in transformers are implemented by adding a layer's input vector to its output vector before passing it forward. In the transformer block shown in Fig. 9.18, residual connections are used with both the attention and feedforward sublayers. These summed vectors are then normalized using layer normalization (Ba et al., 2016). If we think of a layer as one long vector of units, the resulting function computed in a transformer block can be expressed as:"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,"z = LayerNorm(x + SelfAttn(x))    (9.37)
y = LayerNorm(z + FFNN(z))    (9.38)"
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,"Layer normalization (or layer norm) is one of many forms of normalization that can be used to improve training performance in deep neural networks by keeping the values of a hidden layer in a range that facilitates gradient-based training. Layer norm is a variation of the standard score, or z-score, from statistics, applied to a single hidden layer. The first step in layer normalization is to calculate the mean, μ, and standard deviation, σ, over the elements of the vector to be normalized. Given a hidden layer with dimensionality d_h, these values are calculated as follows."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,µ = 1 d h d h i=1 x i (9.39) σ = 1 d h d h i=1 (x i − µ) 2 (9.40)
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,"Given these values, the vector components are normalized by subtracting the mean from each and dividing by the standard deviation. The result of this computation is a new vector with zero mean and a standard deviation of one."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,x = (x − µ) σ (9.41)
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,"Finally, in the standard implementation of layer normalization, two learnable parameters, γ and β , representing gain and offset values, are introduced."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,LayerNorm = γx + β (9.42)
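9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.1,Transformer Blocks,"A minimal sketch of layer normalization (Eqs. 9.39-9.42) and of how it combines with residual connections in a transformer block (Eqs. 9.37-9.38); self_attn and ffnn are stand-ins for the sublayers described above, and the small eps term is a common numerical safeguard not shown in the equations.

import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # x: d_h-dimensional vector for one token
    mu = x.mean()                        # (9.39)
    sigma = x.std()                      # (9.40)
    x_hat = (x - mu) / (sigma + eps)     # (9.41); eps guards against division by zero
    return gamma * x_hat + beta          # (9.42): learnable gain and offset

def transformer_block(x, self_attn, ffnn):
    # self_attn and ffnn are callables standing in for the two sublayers
    z = layer_norm(x + self_attn(x))     # (9.37): residual connection around attention
    y = layer_norm(z + ffnn(z))          # (9.38): residual connection around the feedforward layer
    return y"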
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.2,Multihead Attention,"The different words in a sentence can relate to each other in many different ways simultaneously. For example, distinct syntactic, semantic, and discourse relationships can hold between verbs and their arguments in a sentence. It would be difficult for a single transformer block to learn to capture all of the different kinds of parallel relations among its inputs. Transformers address this issue with multihead self-attention layers. These are sets of self-attention layers, called heads, that reside in parallel layers at the same depth in a model, each with its own set of parameters. Given these distinct sets of parameters, each head can learn different aspects of the relationships that exist among inputs at the same level of abstraction."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.2,Multihead Attention,"To implement this notion, each head, i, in a self-attention layer is provided with its own set of key, query and value matrices: W^K_i, W^Q_i and W^V_i. These are used to project the inputs into separate key, value, and query embeddings for each head, with the rest of the self-attention computation remaining unchanged. In multi-head attention, instead of using the model dimension d that's used for the input and output of the model, the key and query embeddings have dimensionality d_k, and the value embeddings have dimensionality d_v (in the original transformer paper d_k = d_v = 64). Thus for each head i, we have weight matrices W^Q_i ∈ R^{d×d_k}, W^K_i ∈ R^{d×d_k}, and W^V_i ∈ R^{d×d_v}, and these get multiplied by the inputs packed into X to produce Q ∈ R^{N×d_k}, K ∈ R^{N×d_k}, and V ∈ R^{N×d_v}. The output of each of the h heads is of shape N × d_v, and so the output of the multi-head layer with h heads consists of h matrices of shape N × d_v. To make use of these in further processing, they are combined and then reduced down to the original input dimension d. This is accomplished by concatenating the outputs from each head and then using yet another linear projection, W^O ∈ R^{hd_v×d}, to reduce the result to the original output dimension for each token, for a total output of shape N × d. Fig. 9.19 illustrates this approach with 4 self-attention heads. This multihead layer replaces the single self-attention layer in the transformer block shown earlier in Fig. 9.18; the rest of the transformer block, with its feedforward layer, residual connections, and layer norms, remains the same."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.2,Multihead Attention,"\mathrm{MultiHeadAttn}(X) = (\mathrm{head}_1 \oplus \mathrm{head}_2 \cdots \oplus \mathrm{head}_h)\,W^O \quad (9.43) \qquad Q = XW^Q_i;\; K = XW^K_i;\; V = XW^V_i \quad (9.44) \qquad \mathrm{head}_i = \mathrm{SelfAttention}(Q, K, V) \quad (9.45)"
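9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.2,Multihead Attention,"As a concrete sketch of Eqs. 9.43-9.45, the toy NumPy code below runs h parallel heads over a single input matrix X with randomly initialized weights standing in for learned parameters; the scaling by the square root of d_k follows the self-attention definition earlier in the chapter, and the causal mask and batching used in practice are omitted:
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(Q, K, V):
    # scaled dot-product attention over the N tokens
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V                     # shape N x d_v

def multihead_attn(X, heads, W_O):
    # heads is a list of (W_Q_i, W_K_i, W_V_i) triples, one per head
    outputs = []
    for W_Q_i, W_K_i, W_V_i in heads:
        Q, K, V = X @ W_Q_i, X @ W_K_i, X @ W_V_i  # Eq. 9.44
        outputs.append(self_attention(Q, K, V))    # Eq. 9.45
    # Eq. 9.43: concatenate the h heads and project back down to d
    return np.concatenate(outputs, axis=-1) @ W_O

# Tiny example: N = 3 tokens, model dimension d = 8, h = 2 heads, d_k = d_v = 4
rng = np.random.default_rng(0)
N, d, h, d_k = 3, 8, 2, 4
X = rng.normal(size=(N, d))
heads = [(rng.normal(size=(d, d_k)), rng.normal(size=(d, d_k)), rng.normal(size=(d, d_k)))
         for _ in range(h)]
W_O = rng.normal(size=(h * d_k, d))
print(multihead_attn(X, heads, W_O).shape)         # (3, 8)"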
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.3,Modeling Word Order: Positional Embeddings,"How does a transformer model the position of each token in the input sequence? With RNNs, information about the order of the inputs was built into the structure of the model. Unfortunately, the same isn't true for transformers; the models as we've described them so far don't have any notion of the relative, or absolute, positions of the tokens in the input. This can be seen from the fact that if you scramble the order of the inputs in the attention computation in Fig. 9.16 you get exactly the same answer."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.3,Modeling Word Order: Positional Embeddings,One simple solution is to modify the input embeddings by combining them with positional embeddings specific to each position in an input sequence.
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.3,Modeling Word Order: Positional Embeddings,"Where do we get these positional embeddings? The simplest method is to start with randomly initialized embeddings corresponding to each possible input position up to some maximum length. For example, just as we have an embedding for the word fish, we'll have an embedding for the position 3. As with word embeddings, these positional embeddings are learned along with other parameters during training. To produce an input embedding that captures positional information, we just add the word embedding for each input to its corresponding positional embedding."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.3,Modeling Word Order: Positional Embeddings,"Figure 9.19 Multihead self-attention: Each of the multihead self-attention layers is provided with its own set of key, query and value weight matrices. The outputs from each of the layers are concatenated and then projected down to d, thus producing an output of the same size as the input so layers can be stacked."
9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.3,Modeling Word Order: Positional Embeddings,"This new embedding serves as the input for further processing. Fig. 9.20 shows the idea. A potential problem with the simple absolute position embedding approach is that there will be plenty of training examples for the initial positions in our inputs and correspondingly fewer at the outer length limits. These latter embeddings may be poorly trained and may not generalize well during testing. An alternative approach to positional embeddings is to choose a static function that maps integer inputs to real-valued vectors in a way that captures the inherent relationships among the positions. That is, it captures the fact that position 4 in an input is more closely related to position 5 than it is to position 17. A combination of sine and cosine functions with differing frequencies was used in the original transformer work. Developing better position representations is an ongoing research topic."
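9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.3,Modeling Word Order: Positional Embeddings,"The sketch below illustrates both options: adding (randomly initialized, in practice learned) absolute position embeddings to word embeddings, and one common arrangement of the static sine/cosine scheme; the dimensions and names are ours, chosen only for illustration:
import numpy as np

rng = np.random.default_rng(0)
vocab_size, max_len, d_model = 1000, 128, 16

# In a real model both tables are learned during training
word_emb = rng.normal(size=(vocab_size, d_model))
pos_emb = rng.normal(size=(max_len, d_model))

def embed(token_ids):
    # Add each token's word embedding to the embedding of its position
    positions = np.arange(len(token_ids))
    return word_emb[token_ids] + pos_emb[positions]

def sinusoidal_position(pos, d=d_model):
    # Static alternative: sines and cosines of differing frequencies
    # (one common arrangement; the original transformer paper interleaves them)
    i = np.arange(d // 2)
    angles = pos / (10000 ** (2 * i / d))
    return np.concatenate([np.sin(angles), np.cos(angles)])

print(embed(np.array([5, 42, 7])).shape)   # (3, 16)
print(sinusoidal_position(3).shape)        # (16,)"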
9,Deep Learning Architectures for Sequence Processing,9.8,Transformers as Language Models,,,"Now that we've seen all the major components of transformers, let's examine how to deploy them as language models via self-supervised learning. To do this, we'll proceed just as we did with the RNN-based approach: given a training corpus of plain text we'll train a model to predict the next word in a sequence using teacher forcing. Fig. 9.21 illustrates the general approach. At each step, given all the preceding words, the final transformer layer produces an output distribution over the entire vocabulary. During training, the probability assigned to the correct word is used to calculate the cross-entropy loss for each item in the sequence. As with RNNs, the loss for a training sequence is the average cross-entropy loss over the entire sequence."
9,Deep Learning Architectures for Sequence Processing,9.8,Transformers as Language Models,,,"Figure 9.21 Training a transformer as a language model. Note the key difference between this figure and the earlier RNN-based version shown in Fig. 9.6. There the calculation of the outputs and the losses at each step was inherently serial given the recurrence in the calculation of the hidden states. With transformers, each training item can be processed in parallel since the output for each element in the sequence is computed separately. Once trained, we can compute the perplexity of the resulting model, or autoregressively generate novel text just as with RNN-based models."
9,Deep Learning Architectures for Sequence Processing,9.8,Transformers as Language Models,,,"A simple variation on autoregressive generation that underlies a number of practical applications uses a prior context to prime the autoregressive generation process. Fig. 9.22 illustrates this with the task of text completion. Here a standard language model is given the prefix to some text and is asked to generate a possible completion to it. Note that as the generation process proceeds, the model has direct access to the priming context as well as to all of its own subsequently generated outputs. This ability to incorporate the entirety of the earlier context and generated outputs at each time step is the key to the power of these models. Text summarization is a practical application of context-based autoregressive generation. The task is to take a full-length article and produce an effective summary of it. To train a transformer-based autoregressive model to perform this task, we start with a corpus consisting of full-length articles accompanied by their corresponding summaries. Fig. 9.23 shows an example of this kind of data from a widely used summarization corpus consisting of CNN and Daily Mail news articles."
9,Deep Learning Architectures for Sequence Processing,9.9,Contextual Generation and Summarization,,,"A simple but surprisingly effective approach to applying transformers to summarization is to append a summary to each full-length article in a corpus, with a unique marker separating the two. More formally, each article-summary pair (x_1, ..., x_m), (y_1, ..., y_n) in a training corpus is converted into a single training instance (x_1, ..., x_m, δ, y_1, ..., y_n) with an overall length of n + m + 1. These training instances are treated as long sentences and then used to train an autoregressive language model using teacher forcing, exactly as we did earlier."
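9,Deep Learning Architectures for Sequence Processing,9.9,Contextual Generation and Summarization,,,"A minimal sketch of this data-preparation step, assuming the corpus is already tokenized; the separator and end-of-sequence strings are placeholders we chose for illustration, not markers used by any particular system:
SEP = '<sep>'   # the unique marker delta separating article from summary (placeholder)
EOS = '<eos>'   # end-of-sequence marker (placeholder)

def make_training_instance(article_tokens, summary_tokens):
    # (x_1, ..., x_m, delta, y_1, ..., y_n): article, then marker, then summary
    return article_tokens + [SEP] + summary_tokens + [EOS]

article = ['the', 'stock', 'market', 'fell', 'sharply', 'on', 'monday']
summary = ['stocks', 'fell']
print(make_training_instance(article, summary))"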
9,Deep Learning Architectures for Sequence Processing,9.9,Contextual Generation and Summarization,,,"Once trained, full articles ending with the special marker are used as the context to prime the generation process to produce a summary as illustrated in Fig. 9.24. Note that, in contrast to RNNs, the model has access to the original article as well as to the newly generated text throughout the process."
9,Deep Learning Architectures for Sequence Processing,9.9,Contextual Generation and Summarization,,,"As we'll see in later chapters, variations on this simple scheme are the basis for successful text-to-text applications including machine translation, summarization and question answering."
9,Deep Learning Architectures for Sequence Processing,9.9,Contextual Generation and Summarization,9.9.1,Applying Transformers to other NLP tasks,"Transformers can also be used for sequence labeling tasks (like part-of-speech tagging or named entity tagging) and sequence classification tasks (like sentiment classification), as we'll see in detail in Chapter 11. Just to give a preview, however, we don't directly train a raw transformer on these tasks. Instead, we use a technique called pretraining, in which we first train a transformer language model on a large corpus of text, in a normal self-supervised way, and only afterwards add a linear or feedforward layer on top that we finetune on a smaller dataset hand-labeled with part-of-speech or sentiment labels. Pretraining on large amounts of data via the self-supervised language model objective turns out to be a very useful way of incorporating rich information about language, and the resulting representations make it much easier to learn from the generally smaller supervised datasets for tagging or sentiment."
9,Deep Learning Architectures for Sequence Processing,9.10,Summary,,,This chapter has introduced the concepts of recurrent neural networks and transformers and how they can be applied to language problems. Here’s a summary of the main points that we covered:
9,Deep Learning Architectures for Sequence Processing,9.10,Summary,,,"• In simple Recurrent Neural Networks sequences are processed one element at a time, with the output of each neural unit at time t based both on the current input at t and the hidden layer from time t − 1.
• RNNs can be trained with a straightforward extension of the backpropagation algorithm, known as backpropagation through time (BPTT).
• Simple recurrent networks fail on long inputs because of problems like vanishing gradients; instead modern systems use more complex gated architectures such as LSTMs that explicitly decide what to remember and forget in their hidden and context layers.
• Transformers are non-recurrent networks based on self-attention. A self-attention layer maps input sequences to output sequences of the same length, based on a set of attention heads that each model how the surrounding words are relevant for the processing of the current word.
• A transformer block consists of a single attention layer followed by a feedforward layer with residual connections and layer normalizations following each. Transformer blocks can be stacked to make deeper and more powerful networks.
• Common language-based applications for RNNs and transformers include:"
9,Deep Learning Architectures for Sequence Processing,9.10,Summary,,,"- Probabilistic language modeling: assigning a probability to a sequence, or to the next element of a sequence given the preceding words.
- Auto-regressive generation using a trained language model.
- Sequence labeling like part-of-speech tagging, where each element of a sequence is assigned a label.
- Sequence classification, where an entire text is assigned to a category, as in spam detection, sentiment analysis or topic classification."
9,Deep Learning Architectures for Sequence Processing,9.11,Bibliographical and Historical Notes,,,"Influential investigations of RNNs were conducted in the context of the Parallel Distributed Processing (PDP) group at UC San Diego in the 1980s. Much of this work was directed at human cognitive modeling rather than practical NLP applications (Rumelhart and McClelland 1986c, McClelland and Rumelhart 1986). Models using recurrence at the hidden layer in a feedforward network (Elman networks) were introduced by Elman (1990). Similar architectures were investigated by Jordan (1986) with a recurrence from the output layer, and Mathis and Mozer (1995) with the addition of a recurrent context layer prior to the hidden layer. The possibility of unrolling a recurrent network into an equivalent feedforward network is discussed in Rumelhart and McClelland (1986c). In parallel with work in cognitive modeling, RNNs were investigated extensively in the continuous domain in the signal processing and speech communities (Giles et al. 1994, Robinson et al. 1996). Schuster and Paliwal (1997) introduced bidirectional RNNs and described results on the TIMIT phoneme transcription task."
9,Deep Learning Architectures for Sequence Processing,9.11,Bibliographical and Historical Notes,,,"While theoretically interesting, the difficulty with training RNNs and managing context over long sequences impeded progress on practical applications. This situation changed with the introduction of LSTMs in Hochreiter and Schmidhuber (1997) and Gers et al. (2000). Impressive performance gains were demonstrated on tasks at the boundary of signal processing and language processing including phoneme recognition (Graves and Schmidhuber, 2005), handwriting recognition (Graves et al., 2007) and most significantly speech recognition (Graves et al., 2013b)."
9,Deep Learning Architectures for Sequence Processing,9.11,Bibliographical and Historical Notes,,,"Interest in applying neural networks to practical NLP problems surged with the work of Collobert and Weston (2008) and Collobert et al. (2011). These efforts made use of learned word embeddings, convolutional networks, and end-to-end training. They demonstrated near state-of-the-art performance on a number of standard shared tasks including part-of-speech tagging, chunking, named entity recognition and semantic role labeling without the use of hand-engineered features."
9,Deep Learning Architectures for Sequence Processing,9.11,Bibliographical and Historical Notes,,,"Approaches that married LSTMs with pre-trained collections of word embeddings based on word2vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) quickly came to dominate many common tasks: part-of-speech tagging (Ling et al., 2015), syntactic chunking (Søgaard and Goldberg, 2016), named entity recognition (Chiu and Nichols, 2016; Ma and Hovy, 2016), opinion mining (Irsoy and Cardie, 2014), semantic role labeling (Zhou and Xu, 2015a) and AMR parsing (Foland and Martin, 2016). As with the earlier surge of progress involving statistical machine learning, these advances were made possible by the availability of training data provided by CoNLL, SemEval, and other shared tasks, as well as shared resources such as OntoNotes (Pradhan et al., 2007b) and PropBank (Palmer et al., 2005)."
9,Deep Learning Architectures for Sequence Processing,9.11,Bibliographical and Historical Notes,,,"The transformer (Vaswani et al., 2017) was developed drawing on two lines of prior research: self-attention and memory networks. Encoder-decoder attention, the idea of using a soft weighting over the encodings of input words to inform a generative decoder (see Chapter 10), was developed by Graves (2013) in the context of handwriting generation, and Bahdanau et al. (2015) for MT. This idea was extended to self-attention by dropping the need for separate encoding and decoding sequences and instead seeing attention as a way of weighting the tokens in collecting information passed from lower layers to higher layers (Ling et al., 2015; Cheng et al., 2016; Liu et al., 2016b). Other aspects of the transformer, including the terminology of key, query, and value, came from memory networks, a mechanism for adding an external read-write memory to networks, by using an embedding of a query to match keys representing content in an associative memory (Sukhbaatar et al., 2015; Weston et al., 2015; Graves et al., 2014)."
10,Machine Translation and Encoder-Decoder Models,,,,,"This chapter introduces machine translation (MT), the use of computers to translate from one language to another. Of course translation, in its full generality, such as the translation of literature, or poetry, is a difficult, fascinating, and intensely human endeavor, as rich as any other area of human creativity."
10,Machine Translation and Encoder-Decoder Models,,,,,"Machine translation in its present form therefore focuses on a number of very practical tasks. Perhaps the most common current use of machine translation is for information access. We might want to translate some instructions on the web, perhaps the recipe for a favorite dish, or the steps for putting together some furniture. Or we might want to read an article in a newspaper, or get information from an online resource like Wikipedia or a government webpage in a foreign language. MT for information access is probably one of the most common uses of NLP technology, and Google Translate alone (shown above) translates hundreds of billions of words a day between over 100 languages."
10,Machine Translation and Encoder-Decoder Models,,,,,"Another common use of machine translation is to aid human translators. MT systems are routinely used to produce a draft translation that is fixed up in a post-editing phase by a human translator. This task is often called computer-aided translation or CAT. CAT is commonly used as part of localization: the task of adapting content or a product to a particular language community."
10,Machine Translation and Encoder-Decoder Models,,,,,"Finally, a more recent application of MT is to in-the-moment human communication needs. This includes incremental translation, translating speech on-the-fly before the entire sentence is complete, as is commonly used in simultaneous interpretation. Image-centric translation, for example using OCR of the text on a phone camera image as input to an MT system, can be used to translate menus or street signs."
10,Machine Translation and Encoder-Decoder Models,,,,,"The standard algorithm for MT is the encoder-decoder network, also called the sequence-to-sequence network, an architecture that can be implemented with RNNs or with Transformers. We've seen in prior chapters that an RNN or Transformer architecture can be used to do classification (for example to map a sentence to a positive or negative sentiment tag for sentiment analysis), or can be used to do sequence labeling (for example to assign each word in an input sentence a part-of-speech or a named entity tag). For part-of-speech tagging, recall that the output tag is associated directly with each input word, and so we can just model the tag as output y_t for each input word x_t."
10,Machine Translation and Encoder-Decoder Models,,,,,"Encoder-decoder or sequence-to-sequence models are used for a different kind of sequence modeling in which the output sequence is a complex function of the entire input sequence; we must map from a sequence of input words or tokens to a sequence of tags that are not merely direct mappings from individual words."
10,Machine Translation and Encoder-Decoder Models,,,,,"Machine translation is exactly such a task: the words of the target language don't necessarily agree with the words of the source language in number or order. Consider translating a made-up English sentence like He wrote a letter to a friend into Japanese. The elements of the sentences end up in very different places in the two languages: in English, the verb is in the middle of the sentence, while in Japanese, the verb kaita ('wrote') comes at the end. The Japanese sentence also doesn't require the pronoun he, while English does. Such differences between languages can be quite complex. In the following actual sentence from the United Nations, notice the many changes between the Chinese sentence (we've given in red a word-by-word gloss of the Chinese characters) and its English equivalent."
10,Machine Translation and Encoder-Decoder Models,,,,,"(10.2) 大会/General Assembly 在/on 1982年/1982 12月/December 10日/10 通过了/adopted 第37号/37th 决议/resolution，核准了/approved 第二次/second 探索/exploration 及/and 和平/peaceful 利用/using 外层空间/outer space 会议/conference 的/of 各项/various 建议/suggestions。 On 10 December 1982, the General Assembly adopted resolution 37 in which it endorsed the recommendations of the Second United Nations Conference on the Exploration and Peaceful Uses of Outer Space."
10,Machine Translation and Encoder-Decoder Models,,,,,"Note the many ways the English and Chinese differ. For example, the ordering differs in major ways: the Chinese order of the noun phrase is ""peaceful using outer space conference of suggestions"" while the English has ""suggestions of the ... conference on peaceful use of outer space"". And the order differs in minor ways (the date is ordered differently). English requires the in many places that Chinese doesn't, and adds some details (like ""in which"" and ""it"") that aren't necessary in Chinese. Chinese doesn't grammatically mark plurality on nouns (unlike English, which has the ""-s"" in ""recommendations""), and so the Chinese must use the modifier 各项/various to make it clear that there is not just one recommendation. English capitalizes some words but not others."
10,Machine Translation and Encoder-Decoder Models,,,,,"Encoder-decoder networks are very successful at handling these sorts of complicated cases of sequence mappings. Indeed, the encoder-decoder algorithm is not just for MT; it's the state of the art for many other tasks where complex mappings between two sequences are involved. These include summarization (where we map from a long text to its summary, like a title or an abstract), dialogue (where we map from what the user said to what our dialogue system should respond), semantic parsing (where we map from a string of words to a semantic representation like logic or SQL), and many others."
10,Machine Translation and Encoder-Decoder Models,,,,,"We'll introduce the encoder-decoder algorithm in Section 10.2, and in following sections give important components of the model like beam search decoding, and we'll discuss how MT is evaluated, introducing the simple chrF metric."
10,Machine Translation and Encoder-Decoder Models,,,,,"But first, in the next section, we begin by summarizing the linguistic background to MT: key differences among languages that are important to consider when approaching the task of translation."
10,Machine Translation and Encoder-Decoder Models,10.1,Language Divergences and Typology,,,"Some aspects of human language seem to be universal, holding true for every language, or are statistical universals, holding true for most languages. Many universals arise from the functional role of language as a communicative system by humans. Every language, for example, seems to have words for referring to people, for talking about eating and drinking, for being polite or not. There are also structural linguistic universals; for example, every language seems to have nouns and verbs (Chapter 8), has ways to ask questions or issue commands, and has linguistic mechanisms for indicating agreement or disagreement."
10,Machine Translation and Encoder-Decoder Models,10.1,Language Divergences and Typology,,,"Yet languages also differ in many ways, and an understanding of what causes such translation divergences will help us build better MT models. We often distinguish the idiosyncratic and lexical differences that must be dealt with one by one (the word for ""dog"" differs wildly from language to language), from systematic differences that we can model in a general way (many languages put the verb before the direct object; others put the verb after the direct object). The study of these systematic cross-linguistic similarities and differences is called linguistic typology. This section sketches some typological facts that impact machine translation; the interested reader should also look into WALS, the World Atlas of Language Structures, which gives many typological facts about languages (Dryer and Haspelmath, 2013)."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,,,"As we hinted at in our example above comparing English and Japanese, languages differ in the basic word order of verbs, subjects, and objects in simple declarative clauses. German, French, English, and Mandarin, for example, are all SVO (Subject-Verb-Object) languages, meaning that the verb tends to come between the subject and object. Hindi and Japanese, by contrast, are SOV languages, meaning that the verb tends to come at the end of basic clauses, and Irish and Arabic are VSO languages. Two languages that share their basic word order type often have other similarities. For example, VO languages generally have prepositions, whereas OV languages generally have postpositions."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,,,"Let's look in more detail at the example we saw above. In this SVO English sentence, the verb wrote is followed by its object a letter and the prepositional phrase to a friend, in which the preposition to is followed by its argument a friend. Arabic, with a VSO order, also has the verb before the object and prepositions. By contrast, in the Japanese translation, each of these orderings is reversed; the verb is preceded by its arguments, and the postposition follows its argument. Fig. 10.1 shows examples of other word order differences. All of these word order differences between languages can cause problems for translation, requiring the system to do huge structural reorderings as it generates the output."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"Of course we also need to translate the individual words from one language to another. For any translation, the appropriate word can vary depending on the context. The English source-language word bass, for example, can appear in Spanish as the fish lubina or the musical instrument bajo. German uses two distinct words for what in English would be called a wall: Wand for walls inside a building, and Mauer for walls outside a building. Where English uses the word brother for any male sibling, Chinese and many other languages have distinct words for older brother and younger brother (Mandarin gege and didi, respectively). In all these cases, translating bass, wall, or brother from English would require a kind of specialization, disambiguating the different uses of a word. For this reason the fields of MT and Word Sense Disambiguation (Chapter 18) are closely linked."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"Sometimes one language places more grammatical constraints on word choice than another. We saw above that English marks nouns for whether they are singular or plural. Mandarin doesn't. Or French and Spanish, for example, mark grammatical gender on adjectives, so an English translation into French requires specifying adjective gender."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"The way that languages differ in lexically dividing up conceptual space may be more complex than this one-to-many translation problem, leading to many-to-many mappings. For example, Fig. 10.2 summarizes some of the complexities discussed by Hutchins and Somers (1992) in translating English leg, foot, and paw, to French. For example, when leg is used about an animal it's translated as French jambe; but about the leg of a journey, as French étape; if the leg is of a chair, we use French pied."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"Further, one language may have a lexical gap, where no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language. For example, English does not have a word that corresponds neatly to Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward phrases like filial piety or loving child, or good son/daughter for both). Finally, languages differ systematically in how the conceptual properties of an event are mapped onto specific words. Talmy (1985, 1991) noted that languages can be characterized by whether direction of motion and manner of motion are marked on the verb or on the ""satellites"": particles, prepositional phrases, or adverbial phrases. For example, a bottle floating out of a cave would be described in English with the direction marked on the particle out, while in Spanish the direction would be marked on the verb. Verb-framed languages mark the direction of motion on the verb (leaving the satellites to mark the manner of motion), like Spanish acercarse 'approach', alcanzar 'reach', entrar 'enter', salir 'exit'. Satellite-framed languages mark the direction of motion on the satellite (leaving the verb to mark the manner of motion), like English crawl out, float off, jump down, run after. Languages like Japanese, Tamil, and the many languages in the Romance, Semitic, and Mayan language families, are verb-framed; Chinese as well as non-Romance Indo-European languages like English, Swedish, Russian, Hindi, and Farsi are satellite-framed (Talmy 1991, Slobin 1996)."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.3,Morphological Typology,"Morphologically, languages are often characterized along two dimensions of variation. The first is the number of morphemes per word, ranging from isolating languages like Vietnamese and Cantonese, in which each word generally has one morpheme, to polysynthetic languages like Siberian Yupik (""Eskimo""), in which a single word may have very many morphemes, corresponding to a whole sentence in English. The second dimension is the degree to which morphemes are segmentable, ranging from agglutinative languages like Turkish, in which morphemes have relatively clean boundaries, to fusion languages like Russian, in which a single affix may conflate multiple morphemes, like -om in the word stolom (table-SG-INSTR-DECL1), which fuses the distinct morphological categories instrumental, singular, and first declension. Translating between languages with rich morphology requires dealing with structure below the word level, and for this reason modern systems generally use subword models like the wordpiece or BPE models of Section 10.7.1."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.4,Referential Density,"Finally, languages vary along a typological dimension related to the things they tend to omit. Some languages, like English, require that we use an explicit pronoun when talking about a referent that is given in the discourse. In other languages, however, we can sometimes omit pronouns altogether, as the following example from Spanish shows: (10.6) [El jefe]_i dio con un libro. ∅_i Mostró a un descifrador ambulante. [The boss] came upon a book. [He] showed it to a wandering decoder."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.4,Referential Density,"Languages that can omit pronouns are called pro-drop languages. Even among the pro-drop languages, there are marked differences in frequencies of omission. Japanese and Chinese, for example, tend to omit far more than does Spanish. This dimension of variation across languages is called the dimension of referential density. We say that languages that tend to use more pronouns are more referentially dense than those that use more zeros. Referentially sparse languages, like Chinese or Japanese, that require the hearer to do more inferential work to recover antecedents are also called cold languages. Languages that are more explicit and make it easier for the hearer are called hot languages. The terms hot and cold are borrowed from Marshall McLuhan's 1964 distinction between hot media like movies, which fill in many details for the viewer, versus cold media like comics, which require the reader to do more inferential work to fill out the representation (Bickel, 2003)."
10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.4,Referential Density,"Translating from languages with extensive pro-drop, like Chinese or Japanese, to non-pro-drop languages like English can be difficult since the model must somehow identify each zero and recover who or what is being talked about in order to insert the proper pronoun."
10,Machine Translation and Encoder-Decoder Models,10.2,The Encoder-Decoder Model,,,"Encoder-decoder networks, or sequence-to-sequence networks, are models capable of generating contextually appropriate, arbitrary length, output sequences. Encoder-decoder networks have been applied to a very wide range of applications including machine translation, summarization, question answering, and dialogue."
10,Machine Translation and Encoder-Decoder Models,10.2,The Encoder-Decoder Model,,,"The key idea underlying these networks is the use of an encoder network that takes an input sequence and creates a contextualized representation of it, often called the context. This representation is then passed to a decoder which generates a task-specific output sequence. Fig. 10.3 illustrates the architecture. Encoder-decoder networks consist of three components:"
10,Machine Translation and Encoder-Decoder Models,10.2,The Encoder-Decoder Model,,,"1. An encoder that accepts an input sequence, x_1^n, and generates a corresponding sequence of contextualized representations, h_1^n. LSTMs, GRUs, convolutional networks, and Transformers can all be employed as encoders. 2. A context vector, c, which is a function of h_1^n, and conveys the essence of the input to the decoder."
10,Machine Translation and Encoder-Decoder Models,10.2,The Encoder-Decoder Model,,,"3. A decoder, which accepts c as input and generates an arbitrary length sequence of hidden states h_1^m, from which a corresponding sequence of output states y_1^m can be obtained. Just as with encoders, decoders can be realized by any kind of sequence architecture."
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"Let's begin by describing an encoder-decoder network based on a pair of RNNs. Recall the conditional RNN language model from Chapter 9 for computing p(y), the probability of a sequence y. Like any language model, we can break down the probability as follows:"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"p(y) = p(y_1)p(y_2|y_1)p(y_3|y_1,y_2)\ldots p(y_m|y_1,\ldots,y_{m-1}) \quad (10.7)"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"At a particular time t, we pass the prefix of t − 1 tokens through the language model, using forward inference to produce a sequence of hidden states, ending with the hidden state corresponding to the last word of the prefix. We then use the final hidden state of the prefix as our starting point to generate the next token."
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"More formally, if g is an activation function like tanh or ReLU, a function of the input at time t and the hidden state at time t − 1, and f is a softmax over the set of possible vocabulary items, then at time t the output y_t and hidden state h_t are computed as:"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"h_t = g(h_{t-1}, x_t) \quad (10.8) \qquad y_t = f(h_t) \quad (10.9)"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"We only have to make one slight change to turn this language model with autoregressive generation into a translation model that can translate from a source text in one language to a target text in a second: add a sentence separation marker at the end of the source text, and then simply concatenate the target text. We briefly introduced this idea of a sentence separator token in Chapter 9 when we considered using a Transformer language model to do summarization, by training a conditional language model. Fig. 10.4 shows the setup for a simplified version of the encoder-decoder model (we'll see the full model, which requires attention, in the next section): an English source text (""the green witch arrived""), a sentence separator token, and a Spanish target text (""llegó la bruja verde""). To translate a source text, we run it through the network performing forward inference to generate hidden states until we get to the end of the source. Then we begin autoregressive generation, asking for a word in the context of the hidden layer from the end of the source input as well as the end-of-sentence marker. Subsequent words are conditioned on the previous hidden state and the embedding for the last word generated. Let's formalize and generalize this model a bit in Fig. 10.5. (To help keep things straight, we'll use the superscripts e and d where needed to distinguish the hidden states of the encoder and the decoder.) The elements of the network on the left process the input sequence x and comprise the encoder. While our simplified figure shows only a single network layer for the encoder, stacked architectures are the norm, where the output states from the top layer of the stack are taken as the final representation. A widely used encoder design makes use of stacked biLSTMs where the hidden states from top layers from the forward and backward passes are concatenated as described in Chapter 9 to provide the contextualized representations for each time step. The entire purpose of the encoder is to generate a contextualized representation of the input. This representation is embodied in the final hidden state of the encoder, h_n^e. This representation, also called c for context, is then passed to the decoder. If we call the source text x and the target text y, we are computing the probability p(y|x) as follows:"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"p(y|x) = p(y_1|x)p(y_2|y_1,x)p(y_3|y_1,y_2,x)\ldots p(y_m|y_1,\ldots,y_{m-1},x) \quad (10.10)"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"The decoder network on the right takes this state and uses it to initialize the first hidden state of the decoder. That is, the first decoder RNN cell uses c as its prior hidden state h_0^d. The decoder then autoregressively generates a sequence of outputs, an element at a time, until an end-of-sequence marker is generated. Each hidden state is conditioned on the previous hidden state and the output generated in the previous state. Figure 10.6 Allowing every hidden state of the decoder (not just the first decoder state) to be influenced by the context c produced by the encoder."
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"One weakness of this approach as described so far is that the influence of the context vector, c, will wane as the output sequence is generated. A solution is to make the context vector c available at each step in the decoding process by adding it as a parameter to the computation of the current hidden state, using the following equation (illustrated in Fig. 10.6):"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"h_t^d = g(\hat{y}_{t-1}, h_{t-1}^d, c) \quad (10.11)"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"Now we're ready to see the full equations for this version of the decoder in the basic encoder-decoder model, with context available at each decoding timestep. Recall that g is a stand-in for some flavor of RNN and ŷ_{t-1} is the embedding for the output sampled from the softmax at the previous step:"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"c = h_n^e
h_0^d = c
h_t^d = g(\hat{y}_{t-1}, h_{t-1}^d, c)
z_t = f(h_t^d)
y_t = \mathrm{softmax}(z_t) \quad (10.12)"
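10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"To show how these equations fit together at inference time, here is a toy NumPy implementation of greedy decoding with randomly initialized weights; the tiny vocabulary, the simple tanh cell standing in for g, and the use of token id 0 as the end-of-sequence marker are our own assumptions for illustration:
import numpy as np

rng = np.random.default_rng(1)
d, vocab_size, EOS = 8, 12, 0          # toy sizes; token id 0 plays the role of the end marker
E = rng.normal(size=(vocab_size, d))   # toy embedding table
W, U, C = (rng.normal(size=(d, d)) for _ in range(3))
V_out = rng.normal(size=(d, vocab_size))

def g(y_prev_emb, h_prev, c):
    # one RNN step conditioned on the previous output, previous state, and context c
    return np.tanh(y_prev_emb @ W + h_prev @ U + c @ C)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def encode(src_ids):
    h = np.zeros(d)
    for tok in src_ids:
        h = g(E[tok], h, np.zeros(d))  # the encoder ignores c
    return h                           # c = h_n^e

def greedy_decode(src_ids, max_len=10):
    c = encode(src_ids)
    h, y_prev, out = c, EOS, []        # h_0^d = c
    for _ in range(max_len):
        h = g(E[y_prev], h, c)         # h_t^d = g(y_{t-1}, h_{t-1}^d, c)
        y_prev = int(np.argmax(softmax(h @ V_out)))   # Eq. 10.13
        if y_prev == EOS:
            break
        out.append(y_prev)
    return out

print(greedy_decode([3, 7, 5]))        # maps a toy source into toy target token ids"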
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"Finally, as shown earlier, the output y at each time step consists of a softmax computation over the set of possible outputs (the vocabulary, in the case of language modeling or MT). We compute the most likely output at each time step by taking the argmax over the softmax output:"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,,,"y_t = \operatorname{argmax}_{w \in V} P(w|x, y_1 \ldots y_{t-1}) \quad (10.13)"
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,10.3.1,Training the Encoder-Decoder Model,"Encoder-decoder architectures are trained end-to-end, just as with the RNN language models of Chapter 9. Each training example is a tuple of paired strings, a source and a target. Concatenated with a separator token, these source-target pairs can now serve as training data."
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,10.3.1,Training the Encoder-Decoder Model,"For MT, the training data typically consists of sets of sentences and their translations. These can be drawn from standard datasets of aligned sentence pairs, as we'll discuss in Section 10.7.2. Once we have a training set, the training itself proceeds as with any RNN-based language model. The network is given the source text and then, starting with the separator token, is trained autoregressively to predict the next word, as shown in Fig. 10.7."
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,10.3.1,Training the Encoder-Decoder Model,"Total loss is the average cross-entropy loss per target word. Figure 10.7 Training the basic RNN encoder-decoder approach to machine translation. Note that in the decoder we usually don't propagate the model's softmax outputs ŷ_t, but use teacher forcing to force each input to the correct gold value for training. We compute the softmax output distribution over ŷ in the decoder in order to compute the loss at each token, which can then be averaged to compute a loss for the sentence."
10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,10.3.1,Training the Encoder-Decoder Model,"Note the differences between training (Fig. 10.7) and inference (Fig. 10.4) with respect to the outputs at each time step. The decoder during inference uses its own estimated output ŷ_t as the input for the next time step x_{t+1}. Thus the decoder will tend to deviate more and more from the gold target sentence as it keeps generating more tokens. In training, therefore, it is more common to use teacher forcing in the decoder. Teacher forcing means that we force the system to use the gold target token from training as the next input x_{t+1}, rather than allowing it to rely on the (possibly erroneous) decoder output ŷ_t. This speeds up training."
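10,Machine Translation and Encoder-Decoder Models,10.3,Encoder-Decoder with RNNs,10.3.1,Training the Encoder-Decoder Model,"Here is a minimal sketch of the per-sentence loss computation under teacher forcing; the next_token_distribution stand-in, which simply returns random probabilities, is a placeholder for a full encoder-decoder forward pass, and the token ids are made up for illustration:
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 12

def next_token_distribution(src_ids, target_prefix):
    # Stand-in for the model's softmax output P(. | x, y_1..y_{t-1});
    # a real system would run the encoder-decoder here.
    logits = rng.normal(size=vocab_size)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def teacher_forced_loss(src_ids, tgt_ids):
    # With teacher forcing, the input at each decoder step is the gold
    # target prefix, not the model's own (possibly wrong) prediction.
    total = 0.0
    for t, gold in enumerate(tgt_ids):
        p = next_token_distribution(src_ids, tgt_ids[:t])
        total += -np.log(p[gold])          # cross-entropy for this token
    return total / len(tgt_ids)            # average loss per target word

print(teacher_forced_loss(src_ids=[3, 7, 5], tgt_ids=[4, 9, 2, 0]))"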
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"The simplicity of the encoder-decoder model is its clean separation of the encoder, which builds a representation of the source text, from the decoder, which uses this context to generate a target text. In the model as we've described it so far, this context vector is h_n, the hidden state of the last (nth) time step of the source text. This final hidden state is thus acting as a bottleneck: it must represent absolutely everything about the meaning of the source text, since the only thing the decoder knows about the source text is what's in this context vector (Fig. 10.8). Information at the beginning of the sentence, especially for long sentences, may not be equally well represented in the context vector."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"The attention mechanism is a solution to the bottleneck problem, a way of allowing the decoder to get information from all the hidden states of the encoder, not just the last hidden state."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"In the attention mechanism, as in the vanilla encoder-decoder model, the context vector c is a single vector that is a function of the hidden states of the encoder, that is, c = f(h_1^e ... h_n^e). Because the number of hidden states varies with the size of the input, we can't use the entire tensor of encoder hidden state vectors directly as the context for the decoder."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"Figure 10.8 Requiring the context c to be only the encoder's final hidden state forces all the information from the entire source sentence to pass through this representational bottleneck."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"The idea of attention is instead to create the single fixed-length vector c by taking a weighted sum of all the encoder hidden states. The weights focus on ('attend to') a particular part of the source text that is relevant for the token the decoder is currently producing. Attention thus replaces the static context vector with one that is dynamically derived from the encoder hidden states, different for each token in decoding."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"This context vector, c_i, is generated anew with each decoding step i and takes all of the encoder hidden states into account in its derivation. We then make this context available during decoding by conditioning the computation of the current decoder hidden state on it (along with the prior hidden state and the previous output generated by the decoder), as we see in this equation (and Fig. 10.9):"
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"h_i^d = g(\hat{y}_{i-1}, h_{i-1}^d, c_i) \quad (10.14)
Figure 10.9 The attention mechanism allows each hidden state of the decoder to see a different, dynamic, context, which is a function of all the encoder hidden states."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"The first step in computing c_i is to compute how much to focus on each encoder state, how relevant each encoder state is to the decoder state captured in h_{i-1}^d. We capture relevance by computing, at each state i during decoding, a score(h_{i-1}^d, h_j^e) for each encoder state j."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"The simplest such score, called dot-product attention, implements relevance as similarity: measuring how similar the decoder hidden state is to an encoder hidden state, by computing the dot product between them:"
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"\mathrm{score}(h_{i-1}^d, h_j^e) = h_{i-1}^d \cdot h_j^e \quad (10.15)"
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,The score that results from this dot product is a scalar that reflects the degree of similarity between the two vectors. The vector of these scores across all the encoder hidden states gives us the relevance of each encoder state to the current step of the decoder.
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"To make use of these scores, we'll normalize them with a softmax to create a vector of weights, α_{ij}, that tells us the proportional relevance of each encoder hidden state j to the prior decoder hidden state, h_{i-1}^d."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"\alpha_{ij} = \mathrm{softmax}(\mathrm{score}(h_{i-1}^d, h_j^e)\ \forall j \in e) = \frac{\exp(\mathrm{score}(h_{i-1}^d, h_j^e))}{\sum_k \exp(\mathrm{score}(h_{i-1}^d, h_k^e))} \quad (10.16)"
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"Finally, given the distribution in α, we can compute a fixed-length context vector for the current decoder state by taking a weighted average over all the encoder hidden states:
c_i = \sum_j \alpha_{ij} h_j^e \quad (10.17)
With this, we finally have a fixed-length context vector that takes into account information from the entire encoder state that is dynamically updated to reflect the needs of the decoder at each step of decoding. Fig. 10.10 illustrates an encoder-decoder network with attention, focusing on the computation of one context vector c_i."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"With this, we finally have a fixed-length context vector that takes into account"
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,information from the entire encoder state that is dynamically updated to reflect the
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,needs of the decoder at each step of decoding. Fig. 10.10 illustrates an encoder-
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"decoder network with attention, focusing on the computation of one context vector"
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,ci.
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"Figure 10.10 A sketch of the encoder-decoder network with attention, focusing on the computation of c_i. The context value c_i is one of the inputs to the computation of h_i^d. It is computed by taking the weighted sum of all the encoder hidden states, each weighted by their dot product with the prior decoder hidden state h_{i-1}^d."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"It's also possible to create more sophisticated scoring functions for attention models. Instead of simple dot-product attention, we can get a more powerful function that computes the relevance of each encoder hidden state to the decoder hidden state by parameterizing the score with its own set of weights, W_s."
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"\mathrm{score}(h_{i-1}^d, h_j^e) = h_{i-1}^d\, W_s\, h_j^e"
10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"The weights W_s, which are then trained during normal end-to-end training, give the network the ability to learn which aspects of similarity between the decoder and encoder states are important to the current application. This bilinear model also allows the encoder and decoder to use vectors of different dimensionality, whereas the simple dot-product attention requires that the encoder and decoder hidden states have the same dimensionality."
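10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"The following NumPy sketch computes one context vector c_i from Eqs. 10.15-10.17 using dot-product scores; the toy encoder states and decoder state are random stand-ins for real hidden states, and the bilinear variant would simply insert the learned matrix W_s between the two vectors in the score:
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_context(h_dec_prev, H_enc):
    # Eq. 10.15: dot-product relevance score for each encoder state
    scores = H_enc @ h_dec_prev
    # Eq. 10.16: normalize the scores into attention weights alpha_ij
    alpha = softmax(scores)
    # Eq. 10.17: weighted average of the encoder hidden states
    return alpha @ H_enc, alpha

rng = np.random.default_rng(0)
H_enc = rng.normal(size=(5, 8))      # n = 5 encoder hidden states of dimension 8
h_dec_prev = rng.normal(size=8)      # the prior decoder hidden state h_{i-1}^d
c_i, alpha = attention_context(h_dec_prev, H_enc)
print(alpha.round(2), c_i.shape)     # weights sum to 1; c_i has dimension 8"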
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"The decoding algorithm we gave above for generating translations has a problem (as does the autoregressive generation we introduced in Chapter 9 for generating from a conditional language model). Recall that algorithm: at each time step in decoding, the output y t is chosen by computing a softmax over the set of possible outputs (the vocabulary, in the case of language modeling or MT), and then choosing the highest probability token (the argmax):"
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"y_t = \operatorname{argmax}_{w \in V} P(w|x, y_1 \ldots y_{t-1}) \quad (10.18)"
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Choosing the single most probable token to generate at each step is called greedy decoding; a greedy algorithm is one that makes a choice that is locally optimal, whether or not it will turn out to have been the best choice with hindsight. Indeed, greedy search is not optimal, and may not find the highest probability translation. The problem is that the token that looks good to the decoder now might turn out later to have been the wrong choice! Let's see this by looking at the search tree, a graphical representation of the choices the decoder makes in searching for the best translation, in which we view the decoding problem as a heuristic state-space search and systematically explore the space of possible outputs. In such a search tree, the branches are the actions, in this case the action of generating a token, and the nodes are the states, in this case the state of having generated a particular prefix. We are searching for the best action sequence, i.e. the target string with the highest probability. Fig. 10.11 demonstrates the problem, using a made-up example. Notice that the most probable sequence is ok ok (with a probability of .4*.7*1.0), but a greedy search algorithm will fail to find it, because it incorrectly chooses yes as the first word since it has the highest local probability. Figure 10.11 A search tree for generating the target string T = t_1, t_2, ... from the vocabulary V = {yes, ok, </s>}, given the source string, showing the probability of generating each token from that state. Greedy search would choose yes at the first time step followed by yes, instead of the globally most probable sequence ok ok."
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Recall from Chapter 8 that for part-of-speech tagging we used dynamic programming search (the Viterbi algorithm) to address this problem. Unfortunately, dynamic programming is not applicable to generation problems with long-distance dependencies between the output decisions. The only method guaranteed to find the best solution is exhaustive search: computing the probability of every one of the V^T possible sentences (for some length value T), which is obviously too slow."
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Instead, decoding in MT and other sequence generation problems generally uses a method called beam search. In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width, on the metaphor of a flashlight beam that can be parameterized to be wider or narrower."
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Thus at the first step of decoding, we compute a softmax over the entire vocabulary, assigning a probability to each word. We then select the k-best options from this softmax output. These initial k outputs are the search frontier and these k initial words are called hypotheses. A hypothesis is an output sequence, a translation-so-far, together with its probability. Figure 10.12 Beam search decoding with a beam width of k = 2. At each time step, we choose the k best hypotheses, compute the V possible extensions of each hypothesis, score the resulting k * V possible hypotheses and choose the best k to continue. At time 1, the frontier is filled with the best 2 options from the initial state of the decoder: arrived and the. We then extend each of those, compute the probability of all the hypotheses so far (arrived the, arrived aardvark, the green, the witch) and compute the best 2 (in this case the green and the witch) to be the search frontier to extend on the next step. On the arcs we show the decoders that we run to score the extension words (although for simplicity we haven't shown the context value c_i that is input at each step)."
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"At subsequent steps, each of the k best hypotheses is extended incrementally by being passed to distinct decoders, which each generate a softmax over the entire vocabulary to extend the hypothesis to every possible next token. Each of these k * V hypotheses is scored by P(y_i|x, y_{<i}): the product of the probability of the current word choice and the probability of the path that led to it. We then prune the k * V hypotheses down to the k best, so there are never more than k hypotheses at the frontier of the search. This process continues until an end-of-sequence marker is generated, indicating that a complete candidate output has been found. At this point, the completed hypothesis is removed from the frontier and the size of the beam is reduced by one. The search continues until the beam has been reduced to 0. The result will be k hypotheses."
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Let's see how the scoring works in detail, scoring each node by its log probability. Recall from Eq. 10.10 that we can use the chain rule of probability to break down p(y|x) into the product of the probability of each word given its prior context, which we can turn into a sum of logs (for an output string of length t):"
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"\mathrm{score}(y) = \log P(y|x) = \log\big(P(y_1|x)P(y_2|y_1,x)P(y_3|y_1,y_2,x)\ldots P(y_t|y_1,\ldots,y_{t-1},x)\big) = \sum_{i=1}^{t} \log P(y_i|y_1,\ldots,y_{i-1},x) \quad (10.19)
Thus at each step, to compute the probability of a partial translation, we simply add the log probability of the prefix translation so far to the log probability of generating the next token. Fig. 10.13 shows the scoring for the example sentence shown in Fig. 10.12, using some simple made-up probabilities. Log probabilities are negative or 0, and the max of two log probabilities is the one that is greater (closer to 0). Figure 10.13 Scoring for beam search decoding with a beam width of k = 2. We maintain the log probability of each hypothesis in the beam by incrementally adding the logprob of generating each next token. Only the top k paths are extended to the next step. Fig. 10.14 gives the algorithm. One problem arises from the fact that the completed hypotheses may have different lengths. Because models generally assign lower probabilities to longer strings, a naive algorithm would also choose shorter strings for y. This was not an issue during the earlier steps of decoding; due to the breadth-first nature of beam search all the hypotheses being compared had the same length."
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,";initial state frontier ← state ;initial frontier while frontier contains incomplete paths and beamwidth > 0 extended frontier ← for each state ∈ frontier do y ← DECODE(state) for each word i ∈ Vocabulary do successor ← NEWSTATE(state, i, y i ) new agenda ← ADDTOBEAM(successor, extended frontier, beam width)"
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"for each state in extended frontier do if state is complete do complete paths ← APPEND(complete paths, state) extended frontier ← REMOVE(extended frontier, state) beam width ← beam width -1 frontier ← extended frontier to apply some form of length normalization to each of the hypotheses, for example simply dividing the negative log probability by the number of words:"
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"score(y) = -\log P(y|x) = \frac{1}{T} \sum_{i=1}^{T} -\log P(y_i|y_1,\ldots,y_{i-1},x) \quad (10.20)"
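10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"To make the scoring and pruning concrete, here is a minimal Python sketch of beam search with log-probability scoring and the length normalization of Eq. 10.20. It is an illustration rather than the algorithm of Fig. 10.14 verbatim: the function log_probs (standing in for one decoder step conditioned on the encoder context) and the token names bos and eos are assumptions for the example.

from heapq import nlargest

def beam_decode(log_probs, bos, eos, beam_width=5, max_len=30):
    # log_probs(prefix) is assumed to return a dict mapping each candidate
    # next token to log P(token | prefix, source).
    frontier = [(0.0, [bos])]      # each hypothesis: (sum of log probs, tokens)
    completed = []
    while frontier and beam_width > 0:
        extensions = []
        for score, toks in frontier:
            for tok, lp in log_probs(toks).items():
                extensions.append((score + lp, toks + [tok]))
        # Prune the k * V extensions back down to the k best.
        frontier = nlargest(beam_width, extensions, key=lambda h: h[0])
        still_open = []
        for score, toks in frontier:
            if toks[-1] == eos or len(toks) >= max_len:
                completed.append((score, toks))
                beam_width -= 1    # shrink the beam for each finished hypothesis
            else:
                still_open.append((score, toks))
        frontier = still_open
    # Length normalization (Eq. 10.20) so longer outputs are not unfairly penalized.
    return sorted(completed, key=lambda h: h[0] / len(h[1]), reverse=True)"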
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Beam search is common in large production MT systems, generally with beam widths k between 5 and 10. What do we do with the resulting k hypotheses? In some cases, all we need from our MT algorithm is the single best hypothesis, so we can return that. In other cases our downstream application might want to look at all k hypotheses, so we can pass them all (or a subset) to the downstream application with their respective scores."
10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,10.6 Encoder-Decoder with Transformers
10,Machine Translation and Encoder-Decoder Models,10.6,Encoder-Decoder with Transformers,,,"The encoder-decoder architecture can also be implemented using transformers (rather than RNN/LSTMs) as the component modules. At a high level, the architecture, sketched in Fig. 10.15, is quite similar to what we saw for RNNs. It consists of an encoder that takes the source language input words X = x_1, ..., x_T and maps them to an output representation H^enc = h_1, ..., h_T, usually via N = 6 stacked encoder blocks. The decoder, just like the encoder-decoder RNN, is essentially a conditional language model that attends to the encoder representation and generates the target words one by one, at each timestep conditioning on the source sentence and the previously generated target language words. Figure 10.15 The encoder-decoder architecture using transformer components. The encoder uses the transformer blocks we saw in Chapter 9, while the decoder uses a more powerful block with an extra encoder-decoder attention layer. The final output of the encoder H^enc = h_1, ..., h_T is used to form the K and V inputs to the cross-attention layer in each decoder block."
10,Machine Translation and Encoder-Decoder Models,10.6,Encoder-Decoder with Transformers,,,"But the components of the architecture differ somewhat from the RNN and also from the transformer block we've seen. First, in order to attend to the source language, the transformer blocks in the decoder have an extra cross-attention layer. Recall that the transformer block of Chapter 9 consists of a self-attention layer that attends to the input from the previous layer, followed by layer norm, a feedforward layer, and another layer norm. The decoder transformer block includes an extra layer with a special kind of attention, cross-attention (also sometimes called encoder-decoder attention or source attention). Cross-attention has the same form as the multi-headed self-attention in a normal transformer block, except that while the queries as usual come from the previous layer of the decoder, the keys and values come from the output of the encoder."
10,Machine Translation and Encoder-Decoder Models,10.6,Encoder-Decoder with Transformers,,,"That is, the final output of the encoder H^enc = h_1, ..., h_T is multiplied by the cross-attention layer's key weights W^K and value weights W^V, but the output from the prior decoder layer H^dec[i-1] is multiplied by the cross-attention layer's query weights W^Q: Figure 10.16 The transformer block for the encoder and the decoder. Each decoder block has an extra cross-attention layer, which uses the output of the final encoder layer H^enc = h_1, ..., h_T to produce its key and value vectors."
10,Machine Translation and Encoder-Decoder Models,10.6,Encoder-Decoder with Transformers,,,"Q = H^{dec[i-1]} W^Q; \quad K = H^{enc} W^K; \quad V = H^{enc} W^V \qquad \text{CrossAttention}(Q, K, V) = \text{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V"
10,Machine Translation and Encoder-Decoder Models,10.6,Encoder-Decoder with Transformers,,,"The cross-attention thus allows the decoder to attend to each of the source language words as projected into the entire final output representation of the encoder. The other attention layer in each decoder block, the self-attention layer, is the same causal (left-to-right) self-attention that we saw in Chapter 9. The self-attention in the encoder, however, is allowed to look ahead at the entire source language text."
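10,Machine Translation and Encoder-Decoder Models,10.6,Encoder-Decoder with Transformers,,,"As a concrete illustration of the cross-attention computation just described, here is a small single-head numpy sketch; the array names, dimensions, and toy usage are assumptions for the example rather than the API of any particular library (real implementations are multi-headed and batched).

import numpy as np

def cross_attention(H_dec_prev, H_enc, W_Q, W_K, W_V):
    # Queries come from the previous decoder layer; keys and values
    # come from the final encoder output H_enc (single head, no masking).
    Q = H_dec_prev @ W_Q          # (T_dec, d)
    K = H_enc @ W_K               # (T_src, d)
    V = H_enc @ W_V               # (T_src, d)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # (T_dec, T_src)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over source positions
    return weights @ V                                         # (T_dec, d)

# Toy usage: 4 target positions attending over 6 source positions, d = 8.
rng = np.random.default_rng(0)
d = 8
H_enc = rng.normal(size=(6, d))
H_dec_prev = rng.normal(size=(4, d))
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
out = cross_attention(H_dec_prev, H_enc, W_Q, W_K, W_V)        # shape (4, 8)"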
10,Machine Translation and Encoder-Decoder Models,10.6,Encoder-Decoder with Transformers,,,"In training, just as for RNN encoder-decoders, we use teacher forcing, and train autoregressively, at each time step predicting the next token in the target language, using cross-entropy loss."
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,,,"Machine translation systems generally use a fixed vocabulary. A common way to generate this vocabulary is with the BPE or wordpiece algorithms sketched in Chapter 2. Generally a shared vocabulary is used for the source and target languages, which makes it easy to copy tokens (like names) from source to target, so we build the wordpiece/BPE lexicon on a corpus that contains both source and target language data. Wordpieces use a special symbol at the beginning of each token; here's a resulting tokenization from the Google MT system (Wu et al., 2016):"
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,,,"words: Jet makers feud over seat width with big orders at stake
wordpieces: _J et _makers _fe ud _over _seat _width _with _big _orders _at _stake"
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,,,"We gave the BPE algorithm in detail in Chapter 2; here are more details on the wordpiece algorithm, which is given a training corpus and a desired vocabulary size and proceeds as follows:"
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,,,"1. Initialize the wordpiece lexicon with characters (for example a subset of Unicode characters, collapsing all the remaining characters to a special unknown character token). 2. Repeat until there are V wordpieces:"
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,,,"(a) Train an n-gram language model on the training corpus, using the current set of wordpieces. (b) Consider the set of possible new wordpieces made by concatenating two wordpieces from the current lexicon. Choose the one new wordpiece that most increases the language model probability of the training corpus."
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,,,A vocabulary of 8K to 32K word pieces is commonly used.
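10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,,,"Here is a simplified Python sketch of this training loop. It approximates steps 2(a) and 2(b) with a unigram language model, under which the merge that most increases corpus likelihood is, roughly, the pair that occurs much more often than chance; the function and variable names are invented for the example, and the special word-boundary symbol is omitted for brevity.

from collections import Counter

def wordpiece_train(corpus_words, target_vocab_size):
    # corpus_words: dict mapping each word to its corpus frequency.
    # Start each word as a sequence of characters; the initial lexicon is the character set.
    segmented = {tuple(w): f for w, f in corpus_words.items()}
    vocab = {ch for w in corpus_words for ch in w}
    while len(vocab) < target_vocab_size:
        unit_counts, pair_counts = Counter(), Counter()
        for pieces, freq in segmented.items():
            for u in pieces:
                unit_counts[u] += freq
            for a, b in zip(pieces, pieces[1:]):
                pair_counts[(a, b)] += freq
        if not pair_counts:
            break
        # Unigram-LM approximation of the likelihood gain from merging (a, b).
        score = lambda p: pair_counts[p] / (unit_counts[p[0]] * unit_counts[p[1]])
        a, b = max(pair_counts, key=score)
        vocab.add(a + b)
        # Re-segment the corpus using the new wordpiece.
        new_segmented = {}
        for pieces, freq in segmented.items():
            out, i = [], 0
            while i < len(pieces):
                if i + 1 < len(pieces) and (pieces[i], pieces[i + 1]) == (a, b):
                    out.append(a + b)
                    i += 2
                else:
                    out.append(pieces[i])
                    i += 1
            new_segmented[tuple(out)] = new_segmented.get(tuple(out), 0) + freq
        segmented = new_segmented
    return vocab"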
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.2,MT corpora,"Machine translation models are trained on a parallel corpus, sometimes called a bitext, a text that appears in two (or more) languages. Large numbers of parallel corpora are available. Some are governmental; one example is the Europarl corpus (Koehn, 2005), extracted from the proceedings of the European Parliament."
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.2,MT corpora,"Standard training corpora for MT come as aligned pairs of sentences. When creating new corpora, for example for underresourced languages or new domains, these sentence alignments must be created. Fig. 10.17 gives a sample hypothetical sentence alignment. Given two documents that are translations of each other, we generally need two steps to produce sentence alignments:"
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.2,MT corpora,• a cost function that takes a span of source sentences and a span of target sentences and returns a score measuring how likely these spans are to be translations. • an alignment algorithm that takes these scores to find a good alignment between the documents.
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.2,MT corpora,"Since it is possible to induce multilingual sentence embeddings (Artetxe and Schwenk, 2019), cosine similarity of such embeddings provides a natural scoring function (Schwenk, 2018). Thompson and Koehn (2019) give the following cost function between two sentences or spans x, y from the source and target documents respectively:"
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.2,MT corpora,"c(x, y) = \frac{(1 - \cos(x, y))\,\text{nSents}(x)\,\text{nSents}(y)}{\sum_{s=1}^{S}\big(1 - \cos(x, y_s)\big) + \sum_{s=1}^{S}\big(1 - \cos(x_s, y)\big)} where nSents() gives the number of sentences (this biases the metric toward many alignments of single sentences instead of aligning very large spans). The denominator helps to normalize the similarities: x_1, ..., x_S and y_1, ..., y_S are randomly selected sentences sampled from the respective documents. Usually dynamic programming is used as the alignment algorithm (Gale and Church, 1993), in a simple extension of the minimum edit distance algorithm we introduced in Chapter 2."
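10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.2,MT corpora,"A small Python sketch of this cost function follows; the embed() function (an assumed multilingual sentence-embedding model) and the randomly sampled sentence lists are stand-ins for the components described above.

import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def span_cost(x_sents, y_sents, embed, x_rand, y_rand):
    # x_sents, y_sents: candidate spans (lists of sentences) from source and target.
    # embed() is an assumed multilingual sentence-embedding function;
    # x_rand, y_rand: randomly sampled sentences used to normalize the similarities.
    x, y = embed(' '.join(x_sents)), embed(' '.join(y_sents))
    numer = (1 - cos(x, y)) * len(x_sents) * len(y_sents)
    denom = (sum(1 - cos(x, embed(ys)) for ys in y_rand)
             + sum(1 - cos(embed(xs), y) for xs in x_rand))
    return numer / denom"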
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.2,MT corpora,"Figure 10.17 A sample alignment between sentences in English and French, with sentences extracted from Antoine de Saint-Exupéry's Le Petit Prince and a hypothetical translation. Sentence alignment takes sentences e_1, ..., e_n and f_1, ..., f_n and finds minimal sets of sentences that are translations of each other, including single sentence mappings like (e_1, f_1), (e_4, f_3), (e_5, f_4), (e_6, f_6) as well as 2-1 alignments (e_2/e_3, f_2), (e_7/e_8, f_7), and null alignments (f_5)."
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.2,MT corpora,"Finally, it's helpful to do some corpus cleanup by removing noisy sentence pairs. This can involve handwritten rules to remove low-precision pairs (for example removing sentences that are too long, too short, have different URLs, or even pairs that are too similar, suggesting that they were copies rather than translations). Or pairs can be ranked by their multilingual embedding cosine score and low-scoring pairs discarded."
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.3,Backtranslation,"We're often short of data for training MT models, since parallel corpora may be limited for particular languages or domains. However, often we can find a large monolingual corpus, to add to the smaller parallel corpora that are available."
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.3,Backtranslation,"Backtranslation is a way of making use of monolingual corpora in the target language by creating synthetic bitexts. In backtranslation, we train an intermediate target-to-source MT system on the small bitext to translate the monolingual target data to the source language. Now we can add this synthetic bitext (natural target sentences, aligned with MT-produced source sentences) to our training data, and retrain our source-to-target MT model. For example suppose we want to translate from Navajo to English but only have a small Navajo-English bitext, although of course we can find lots of monolingual English data. We use the small bitext to build an MT engine going the other way (from English to Navajo). Once we translate the monolingual English text to Navajo, we can add this synthetic Navajo/English bitext to our training data. Backtranslation has various parameters. One is how we generate the backtranslated data; we can run the decoder in greedy inference, or use beam search. Or we can do sampling, or Monte Carlo search. In Monte Carlo decoding, at each timestep, instead of always generating the word with the highest softmax probability, we roll a weighted die, and use it to choose the next word according to its softmax probability. This works just like the sampling algorithm we saw in Chapter 3 for generating random sentences from n-gram language models. Imagine there are only 4 words and the softmax probability distribution at time t is (the: 0.6, green: 0.2, a: 0.1, witch: 0.1). We roll a weighted die, with the 4 sides weighted 0.6, 0.2, 0.1, and 0.1, and choose the word based on which side comes up. Another parameter is the ratio of backtranslated data to natural bitext data; we can choose to upsample the bitext data (include multiple copies of each sentence)."
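10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.3,Backtranslation,"The weighted-die step can be written in a couple of lines of Python; this small sketch just draws the next word from the softmax distribution at one timestep, using the toy distribution from the text.

import random

def sample_next_word(dist):
    # dist: mapping from word to softmax probability at this timestep.
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

# Toy distribution from the text: the weighted die has sides 0.6, 0.2, 0.1, 0.1.
dist = {'the': 0.6, 'green': 0.2, 'a': 0.1, 'witch': 0.1}
print(sample_next_word(dist))"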
10,Machine Translation and Encoder-Decoder Models,10.7,Some practical details on building MT systems,10.7.3,Backtranslation,"In general backtranslation works surprisingly well; one estimate suggests that a system trained on backtranslated text gets about 2/3 of the gain it would get from training on the same amount of natural bitext (Edunov et al., 2018)."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,,,Translations are evaluated along two dimensions:
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,,,1. adequacy: how well the translation captures the exact meaning of the source sentence. Sometimes called faithfulness or fidelity.
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,,,"2. fluency: how fluent the translation is in the target language (is it grammatical, clear, readable, natural)."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,,,"Using humans to evaluate is most accurate, but automatic metrics are also used for convenience."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.1,Using Human Raters to Evaluate MT,"The most accurate evaluations use human raters, such as online crowdworkers, to evaluate each translation along the two dimensions. For example, along the dimension of fluency, we can ask how intelligible, how clear, how readable, or how natural the MT output (the target text) is. We can give the raters a scale, for example from 1 (totally unintelligible) to 5 (totally intelligible), or from 1 to 100, and ask them to rate each sentence or paragraph of the MT output."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.1,Using Human Raters to Evaluate MT,"We can do the same thing to judge the second dimension, adequacy, using raters to assign scores on a scale. If we have bilingual raters, we can give them the source sentence and a proposed target sentence, and ask them to rate, on a 5-point or 100-point scale, how much of the information in the source was preserved in the target. If we only have monolingual raters but we have a good human translation of the source text, we can give the monolingual raters the human reference translation and a target machine translation and again rate how much information is preserved. An alternative is to do ranking: give the raters a pair of candidate translations, and ask them which one they prefer."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.1,Using Human Raters to Evaluate MT,"Training of human raters (who are often online crowdworkers) is essential; raters without translation expertise find it difficult to separate fluency and adequacy, and so training includes examples carefully distinguishing these. Raters often disagree (source sentences may be ambiguous, raters will have different world knowledge, raters may apply scales differently). It is therefore common to remove outlier raters, and (if we use a fine-grained enough scale) to normalize raters by subtracting the mean from their scores and dividing by the variance."
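10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.1,Using Human Raters to Evaluate MT,"As a tiny illustration, per-rater normalization as described here can be written as follows; the function name is invented for the example, and note that dividing by the standard deviation (z-scoring) is also a common choice.

def normalize_rater(scores):
    # scores: list of raw ratings from one rater, on a fine-grained scale.
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores) or 1.0
    # Following the description in the text: subtract the rater's mean
    # and divide by the variance of that rater's scores.
    return [(s - mean) / var for s in scores]"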
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"While humans produce the best evaluations of machine translation output, running a human evaluation can be time consuming and expensive. For this reason automatic metrics are often used. Automatic metrics are less accurate than human evaluation, but can help test potential system improvements, and even be used as an automatic loss function for training. In this section we introduce two families of such metrics, those based on character-or word-overlap and those based on embedding similarity."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,Automatic evaluation by Character Overlap: chrF
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"The simplest and most robust metric for MT evaluation is called chrF, which stands for character F-score (Popović, 2015). chrF (along with many other earlier related metrics like BLEU, METEOR, TER, and others) is based on a simple intuition derived from the pioneering work of Miller and Beebe-Center (1956): a good machine translation will tend to contain characters and words that occur in a human translation of the same sentence. Consider a test set from a parallel corpus, in which each source sentence has both a gold human target translation and a candidate MT translation we'd like to evaluate. The chrF metric ranks each MT target sentence by a function of the number of character n-gram overlaps with the human translation."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"Given the hypothesis and the reference, chrF is given a parameter k indicating the length of character n-grams to be considered, and computes the average of the k precisions (unigram precision, bigram, and so on) and the average of the k recalls (unigram recall, bigram recall, etc.):"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"chrP: percentage of character 1-grams, 2-grams, ..., k-grams in the hypothesis that occur in the reference, averaged. chrR: percentage of character 1-grams, 2-grams, ..., k-grams in the reference that occur in the hypothesis, averaged."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"The metric then computes an F-score by combining chrP and chrR using a weighting parameter β . It is common to set β = 2, thus weighing recall twice as much as precision:"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"\text{chrF}_{\beta} = (1 + \beta^2)\,\frac{\text{chrP} \cdot \text{chrR}}{\beta^2 \cdot \text{chrP} + \text{chrR}} \quad (10.24)"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"For β = 2, that would be:"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"\text{chrF2} = 5\,\frac{\text{chrP} \cdot \text{chrR}}{4 \cdot \text{chrP} + \text{chrR}}"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"For example, consider two hypotheses that we'd like to score against the reference translation witness for the past. Here are the hypotheses along with chrF values computed using parameters k = β = 2 (in real examples, k would be a higher number like 6):
REF:  witness for the past,
HYP1: witness of the past,   chrF2,2 = .86
HYP2: past witness           chrF2,2 = .62"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"Let's see how we computed that chrF value for HYP1 (we'll leave the computation of the chrF value for HYP2 as an exercise for the reader). First, chrF ignores spaces, so we'll remove them from both the reference and hypothesis:
REF:  witnessforthepast,  (18 unigrams, 17 bigrams)
HYP1: witnessofthepast,   (17 unigrams, 16 bigrams)"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"Next let's see how many unigrams and bigrams match between the reference and hypothesis:
unigrams that match: w i t n e s s f o t h e p a s t ,  (17 unigrams)
bigrams that match:  wi it tn ne es ss th he ep pa as st t,  (13 bigrams)"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"We use that to compute the unigram and bigram precisions and recalls:
unigram P: 17/17 = 1      unigram R: 17/18 = .944
bigram P:  13/16 = .813   bigram R:  13/17 = .765"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"Finally we average to get chrP and chrR, and compute the F-score:
chrP = (17/17 + 13/16)/2 = .906
chrR = (17/18 + 13/17)/2 = .855
chrF2,2 = 5 \frac{\text{chrP} \cdot \text{chrR}}{4 \cdot \text{chrP} + \text{chrR}} = .86
chrF is simple, robust, and correlates very well with human judgments in many languages (Kocmi et al., 2021). There are various alternative overlap metrics. For example, before the development of chrF, it was common to use a word-based overlap metric called BLEU (for BiLingual Evaluation Understudy), which is purely precision-based rather than combining precision and recall (Papineni et al., 2002). The BLEU score for a corpus of candidate translation sentences is a function of the n-gram word precision over all the sentences combined with a brevity penalty computed over the corpus as a whole. Because BLEU is a word-based metric, it is very sensitive to word tokenization, making it difficult to compare across situations, and it doesn't work as well in languages with complex morphology."
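10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"Here is a short Python sketch of this computation, simplified to the overlapping-n-gram counting described above (it ignores spaces, as in the example, and omits the refinements of full chrF implementations); run on the example above it reproduces the .86 and .62 scores.

from collections import Counter

def char_ngrams(s, n):
    s = s.replace(' ', '')                 # chrF ignores spaces
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrF(ref, hyp, k=2, beta=2.0):
    precisions, recalls = [], []
    for n in range(1, k + 1):
        r, h = char_ngrams(ref, n), char_ngrams(hyp, n)
        overlap = sum((r & h).values())    # clipped n-gram matches
        precisions.append(overlap / max(sum(h.values()), 1))
        recalls.append(overlap / max(sum(r.values()), 1))
    chrP, chrR = sum(precisions) / k, sum(recalls) / k
    return (1 + beta**2) * chrP * chrR / (beta**2 * chrP + chrR)

ref = 'witness for the past,'
print(round(chrF(ref, 'witness of the past,'), 2))   # ~0.86
print(round(chrF(ref, 'past witness'), 2))           # ~0.62"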
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,Statistical Significance Testing for MT evals
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"Character or word overlap-based metrics like chrF (or BLEU, etc.) are mainly used to compare two systems, with the goal of answering questions like: did the new algorithm we just invented improve our MT system? To know if the difference between the chrF scores of two MT systems is a significant difference, we use the paired bootstrap test, or the similar randomization test."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"To get a confidence interval on a single chrF score using the bootstrap test, recall from Section 4.9 that we take our test set (or devset) and create thousands of pseudo-testsets by repeatedly sampling with replacement from the original test set. We then compute the chrF score of each of the pseudo-testsets. If we drop the top 2.5% and bottom 2.5% of the scores, the remaining scores give us the 95% confidence interval for the chrF score of our system."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"To compare two MT systems A and B, we draw the same set of pseudo-testsets for both, and compute the chrF scores for each of them. We then compute the percentage of pseudo-testsets in which A has a higher chrF score than B."
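10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"A sketch of this procedure in Python, under the assumption of a corpus-level scoring function score_fn (for example, corpus chrF) and parallel lists of references and system outputs:

import random

def paired_bootstrap(refs, hyps_A, hyps_B, score_fn, n_boot=1000, seed=0):
    # refs, hyps_A, hyps_B: parallel lists of reference and system outputs.
    # score_fn(refs, hyps) is an assumed corpus-level metric such as corpus chrF.
    rng = random.Random(seed)
    n, wins_A, scores_A = len(refs), 0, []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]          # sample with replacement
        sA = score_fn([refs[i] for i in idx], [hyps_A[i] for i in idx])
        sB = score_fn([refs[i] for i in idx], [hyps_B[i] for i in idx])
        wins_A += sA > sB
        scores_A.append(sA)
    scores_A.sort()
    lo, hi = scores_A[int(0.025 * n_boot)], scores_A[int(0.975 * n_boot)]
    # Fraction of pseudo-testsets where A beats B, and a 95% CI for A's score.
    return wins_A / n_boot, (lo, hi)"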
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,chrF: Limitations
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.2,Automatic Evaluation,"While automatic character and word-overlap metrics like chrF or BLEU are useful, they have important limitations. chrF is very local: a large phrase that is moved around might barely change the chrF score at all, and chrF can't evaluate cross-sentence properties of a document like its discourse coherence (Chapter 22). chrF and similar automatic metrics also do poorly at comparing very different kinds of systems, such as comparing human-aided translation against machine translation, or different machine translation architectures against each other (Callison-Burch et al., 2006). Instead, automatic overlap metrics like chrF are most appropriate when evaluating changes to a single system."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.3,Automatic Evaluation: Embedding-Based Methods,"The chrF metric is based on measuring the exact character n-grams a human reference and candidate machine translation have in common. However, this criterion is overly strict, since a good translation may use alternate words or paraphrases. A solution first pioneered in early metrics like METEOR (Banerjee and Lavie, 2005) was to allow synonyms to match between the reference x and candidate x̃. More recent metrics use BERT or other embeddings to implement this intuition."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.3,Automatic Evaluation: Embedding-Based Methods,"For example, in some situations we might have datasets that have human assessments of translation quality. Such datasets consist of tuples (x, x̃, r), where x = (x_1, ..., x_n) is a reference translation, x̃ = (x̃_1, ..., x̃_m) is a candidate machine translation, and r ∈ R is a human rating that expresses the quality of x̃ with respect to x. Given such data, algorithms like COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020) train a predictor on the human-labeled datasets, for example by passing x and x̃ through a version of BERT (trained with extra pretraining, and then fine-tuned on the human-labeled sentences), followed by a linear layer that is trained to predict r. The output of such models correlates highly with human labels."
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.3,Automatic Evaluation: Embedding-Based Methods,"In other cases, however, we don't have such human-labeled datasets. In that case we can measure the similarity of x and x̃ by the similarity of their embeddings. The BERTSCORE algorithm (Zhang et al., 2020), shown in Fig. 10.18, for example, passes the reference x and the candidate x̃ through BERT, computing a BERT embedding for each token x_i and x̃_j. Each pair of tokens (x_i, x̃_j) is scored by its cosine:"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.3,Automatic Evaluation: Embedding-Based Methods,"\frac{x_i \cdot \tilde{x}_j}{|x_i|\,|\tilde{x}_j|}"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.3,Automatic Evaluation: Embedding-Based Methods,"Each token in x is matched to a token in x̃ to compute recall, and each token in x̃ is matched to a token in x to compute precision (with each token greedily matched to the most similar token in the corresponding sentence). BERTSCORE provides precision and recall (and hence F_1):"
10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.3,Automatic Evaluation: Embedding-Based Methods,"R_{BERT} = \frac{1}{|x|} \sum_{x_i \in x} \max_{\tilde{x}_j \in \tilde{x}} x_i \cdot \tilde{x}_j \qquad P_{BERT} = \frac{1}{|\tilde{x}|} \sum_{\tilde{x}_j \in \tilde{x}} \max_{x_i \in x} x_i \cdot \tilde{x}_j \qquad F_{BERT} = 2\,\frac{P_{BERT} \cdot R_{BERT}}{P_{BERT} + R_{BERT}}"
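10,Machine Translation and Encoder-Decoder Models,10.8,MT Evaluation,10.8.3,Automatic Evaluation: Embedding-Based Methods,"The greedy matching itself is easy to sketch with numpy, given token embeddings for the reference and candidate (for example, from BERT); this is a simplification of BERTSCORE that omits details such as importance weighting and baseline rescaling.

import numpy as np

def bertscore(ref_emb, cand_emb):
    # ref_emb: (n, d) embeddings for reference tokens x_i;
    # cand_emb: (m, d) embeddings for candidate tokens x̃_j.
    R = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    C = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    sim = R @ C.T                       # (n, m) matrix of cosine similarities
    recall = sim.max(axis=1).mean()     # each reference token matched greedily
    precision = sim.max(axis=0).mean()  # each candidate token matched greedily
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1"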
10,Machine Translation and Encoder-Decoder Models,10.9,Bias and Ethical Issues,,,"Machine translation raises many of the same ethical issues that we've discussed in earlier chapters. For example, consider MT systems translating from Hungarian (which has the gender-neutral pronoun ő) or Spanish (which often drops pronouns) into English (in which pronouns are obligatory, and they have grammatical gender). When translating a reference to a person described without specified gender, MT systems often default to male gender (Schiebinger 2014, Prates et al. 2019). And MT systems often assign gender according to cultural stereotypes of the sort we saw in Section 6.11. Fig. 10.19 shows examples from Prates et al. (2019), in which the gender-neutral Hungarian sentence meaning 'she/he is a nurse' is translated with she, but the one meaning 'she/he is a CEO' is translated with he. Prates et al. (2019) find that these stereotypes can't completely be accounted for by gender bias in US labor statistics, because the biases are amplified by MT systems, with pronouns being mapped to male or female gender with a probability higher than if the mapping was based on actual labor employment statistics."
10,Machine Translation and Encoder-Decoder Models,10.9,Bias and Ethical Issues,,,"Hungarian (gender neutral) source → English MT output:
ő egy ápoló → she is a nurse
ő egy tudós → he is a scientist
ő egy mérnök → he is an engineer
ő egy pék → he is a baker
ő egy tanár → she is a teacher
ő egy esküvőszervező → she is a wedding organizer
ő egy vezérigazgató → he is a CEO
Figure 10.19 When translating from gender-neutral languages like Hungarian into English, current MT systems interpret people from traditionally male-dominated occupations as male, and traditionally female-dominated occupations as female (Prates et al., 2019)."
10,Machine Translation and Encoder-Decoder Models,10.9,Bias and Ethical Issues,,,"Similarly, a recent challenge set, the WinoMT dataset (Stanovsky et al., 2019) shows that MT systems perform worse when they are asked to translate sentences that describe people with non-stereotypical gender roles, like ""The doctor asked the nurse to help her in the operation""."
10,Machine Translation and Encoder-Decoder Models,10.9,Bias and Ethical Issues,,,"Many ethical questions in MT require further research. One open problem is developing metrics for knowing what our systems don't know. This is because MT systems can be used in urgent situations where human translators may be unavailable or delayed: in medical domains, to help translate when patients and doctors don't speak the same language, or in legal domains, to help judges or lawyers communicate with witnesses or defendants. In order to 'do no harm', systems need ways to assign confidence values to candidate translations, so they can abstain from giving incorrect translations that may cause harm."
10,Machine Translation and Encoder-Decoder Models,10.9,Bias and Ethical Issues,,,"Another is the need for low-resource algorithms that can translate to and from all the world's languages, the vast majority of which do not have large parallel training texts available. This problem is exacerbated by the tendency of many MT approaches to focus on the case where one of the languages is English (Anastasopoulos and Neubig, 2020). ∀ et al. (2020) propose a participatory design process to encourage content creators, curators, and language technologists who speak these low-resourced languages to participate in developing MT algorithms. They provide online groups, mentoring, and infrastructure, and report on a case study on developing MT algorithms for low-resource African languages."
10,Machine Translation and Encoder-Decoder Models,10.10,Summary,,,"Machine translation is one of the most widely used applications of NLP, and the encoder-decoder model, first developed for MT, is a key tool that has applications throughout NLP."
10,Machine Translation and Encoder-Decoder Models,10.10,Summary,,,"• Languages have divergences, both structural and lexical, that make translation difficult.
• The linguistic field of typology investigates some of these differences; languages can be classified by their position along typological dimensions like whether verbs precede their objects.
• Encoder-decoder networks (either for RNNs or transformers) are composed of an encoder network that takes an input sequence and creates a contextualized representation of it, the context. This context representation is then passed to a decoder which generates a task-specific output sequence.
• The attention mechanism in RNNs, and cross-attention in transformers, allows the decoder to view information from all the hidden states of the encoder.
• For the decoder, choosing the single most probable token to generate at each step is called greedy decoding.
• In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width.
• Machine translation models are trained on a parallel corpus, sometimes called a bitext, a text that appears in two (or more) languages.
• Backtranslation is a way of making use of monolingual corpora in the target language by running a pilot MT engine backwards to create synthetic bitexts.
• MT is evaluated by measuring a translation's adequacy (how well it captures the meaning of the source sentence) and fluency (how fluent or natural it is in the target language). Human evaluation is the gold standard, but automatic evaluation metrics like chrF, which measure character n-gram overlap with human translations, or more recent metrics based on embedding similarity, are also commonly used."
10,Machine Translation and Encoder-Decoder Models,10.11,Bibliographical and Historical Notes,,,"MT was proposed seriously by the late 1940s, soon after the birth of the computer (Weaver, 1949/1955). In 1954, the first public demonstration of an MT system prototype (Dostert, 1955) led to great excitement in the press (Hutchins, 1997). The next decade saw a great flowering of ideas, prefiguring most subsequent developments. But this work was ahead of its time; implementations were limited by, for example, the fact that pending the development of disks there was no good way to store dictionary information."
10,Machine Translation and Encoder-Decoder Models,10.11,Bibliographical and Historical Notes,,,"As high-quality MT proved elusive (Bar-Hillel, 1960), there grew a consensus on the need for better evaluation and more basic research in the new fields of formal and computational linguistics. This consensus culminated in the famously critical ALPAC (Automatic Language Processing Advisory Committee) report of 1966 (Pierce et al., 1966) that led in the mid 1960s to a dramatic cut in funding for MT in the US. As MT research lost academic respectability, the Association for Machine Translation and Computational Linguistics dropped MT from its name. Some MT developers, however, persevered, and there were early MT systems like Météo, which translated weather forecasts from English to French (Chandioux, 1976), and industrial systems like Systran."
10,Machine Translation and Encoder-Decoder Models,10.11,Bibliographical and Historical Notes,,,"In the early years, the space of MT architectures spanned three general models. In direct translation, the system proceeds word-by-word through the source-language text, translating each word incrementally. Direct translation uses a large bilingual dictionary, each of whose entries is a small program with the job of translating one word. In transfer approaches, we first parse the input text and then apply rules to transform the source-language parse into a target language parse. We then generate the target language sentence from the parse tree. In interlingua approaches, we analyze the source language text into some abstract meaning representation, called an interlingua. We then generate into the target language from this interlingual representation. A common way to visualize these three early approaches was the Vauquois triangle shown in Fig. 10.20. The triangle shows the"
10,Machine Translation and Encoder-Decoder Models,10.11,Bibliographical and Historical Notes,,,"increasing depth of analysis required (on both the analysis and generation end) as we move from the direct approach through transfer approaches to interlingual approaches. In addition, it shows the decreasing amount of transfer knowledge needed as we move up the triangle, from huge amounts of transfer at the direct level (almost all knowledge is transfer knowledge for each word) through transfer (transfer rules only for parse trees or thematic roles) through interlingua (no specific transfer knowledge). We can view the encoder-decoder network as an interlingual approach, with attention acting as an integration of direct and transfer, allowing words or their representations to be directly accessed by the decoder. Statistical methods began to be applied around 1990, enabled first by the development of large bilingual corpora like the Hansard corpus of the proceedings of the Canadian Parliament, which are kept in both French and English, and then by the growth of the Web. Early on, a number of researchers showed that it was possible to extract pairs of aligned sentences from bilingual corpora, using words or simple cues like sentence length (Kay and Röscheisen 1988, Gale and Church 1991, Gale and Church 1993, Kay and Röscheisen 1993)."
10,Machine Translation and Encoder-Decoder Models,10.11,Bibliographical and Historical Notes,,,"At the same time, the IBM group, drawing directly on the noisy channel model for speech recognition, proposed two related paradigms for statistical MT. These include the generative algorithms that became known as IBM Models 1 through 5, implemented in the Candide system. The algorithms (except for the decoder) were published in full detail, encouraged by the US government which had partially funded the work. Early automatic evaluation metrics for MT were word-overlap metrics, including BLEU (Papineni et al., 2002), NIST (Doddington, 2002), TER (Translation Error Rate) (Snover et al., 2006), Precision and Recall (Turian et al., 2003), and METEOR (Banerjee and Lavie, 2005); character n-gram overlap methods like chrF (Popović, 2015) came later. More recent evaluation work, echoing the ALPAC report, has emphasized the importance of careful statistical methodology and the use of human evaluation (Kocmi et al., 2021; Marie et al., 2021)."
10,Machine Translation and Encoder-Decoder Models,10.11,Bibliographical and Historical Notes,,,The early history of MT is surveyed in Hutchins 1986 and 1997; Nirenburg et al. (2002) collects early readings. See Croft (1990) or Comrie (1989) for introductions to linguistic typology.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"""How much do we know at any time? Much more, or so I believe, than we know we know."" Agatha Christie, The Moving Finger"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"Fluent speakers bring an enormous amount of knowledge to bear during comprehension and production of language. This knowledge is embodied in many forms, perhaps most obviously in the vocabulary. That is, in the rich representations associated with the words we know, including their grammatical function, meaning, real-world reference, and pragmatic function. This makes the vocabulary a useful lens to explore the acquisition of knowledge from text, by both people and machines."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"Estimates of the size of adult vocabularies vary widely both within and across languages. For example, estimates of the vocabulary size of young adult speakers of American English range from 30,000 to 100,000 depending on the resources used to make the estimate and the definition of what it means to know a word. What is agreed upon is that the vast majority of words that mature speakers use in their dayto-day interactions are acquired early in life through spoken interactions in context with care givers and peers, usually well before the start of formal schooling. This active vocabulary is extremely limited compared to the size of the adult vocabulary (usually on the order of 2000 words for young speakers) and is quite stable, with very few additional words learned via casual conversation beyond this early stage. Obviously, this leaves a very large number of words to be acquired by some other means."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"A simple consequence of these facts is that children have to learn about 7 to 10 words a day, every single day, to arrive at observed vocabulary levels by the time they are 20 years of age. And indeed empirical estimates of vocabulary growth in late elementary through high school are consistent with this rate. How do children achieve this rate of vocabulary growth given their daily experiences during this period? We know that most of this growth is not happening through direct vocabulary instruction in school since these methods are largely ineffective, and are not deployed at a rate that would result in the reliable acquisition of words at the required rate."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"The most likely remaining explanation is that the bulk of this knowledge acquisition happens as a by-product of reading. Research into the average amount of time children spend reading, and the lexical diversity of the texts they read, indicate that it is possible to achieve the desired rate. But the mechanism behind this rate of learning must be remarkable indeed, since at some points during learning the rate of vocabulary growth exceeds the rate at which new words are appearing to the learner!"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"Many of these facts have motivated approaches to word learning based on the distributional hypothesis, introduced in Chapter 6. This is the idea that something about what we're loosely calling word meanings can be learned even without any grounding in the real world, solely based on the content of the texts we've encountered over our lives. This knowledge is based on the complex association of words with the words they co-occur with (and with the words that those words occur with)."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,The crucial insight of the distributional hypothesis is that the knowledge that we acquire through this process can be brought to bear during language processing long after its initial acquisition in novel contexts. We saw in Chapter 6 that embeddings (static word representations) can be learned from text and then employed for other purposes like measuring word similarity or studying meaning change over time.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"In this chapter, we expand on this idea in two large ways. First, we'll introduce the idea of contextual embeddings: representations for words in context. The methods of Chapter 6 like word2vec or GloVe learned a single vector embedding for each unique word w in the vocabulary. By contrast, with contextual embeddings, such as those learned by popular methods like BERT (Devlin et al., 2019) or GPT (Radford et al., 2019) or their descendants, each word w will be represented by a different vector each time it appears in a different context."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"Second, we'll introduce in this chapter the idea of pretraining and fine-tuning. We call pretraining the process of learning some sort of representation of meaning for words or sentences by processing very large amounts of text. We'll call these pretrained models pretrained language models, since they can take the form of the transformer language models we introduced in Chapter 9. We call fine-tuning the process of taking the representations from these pretrained models, and further training the model, often via an added neural net classifier, to perform some downstream task like named entity tagging or question answering or coreference. The intuition is that the pretraining phase learns a language model that instantiates a rich representation of word meaning, which thus enables the model to more easily learn ('be fine-tuned to') the requirements of a downstream language understanding task. The pretrain-finetune paradigm is an instance of what is called transfer learning in machine learning: the method of acquiring knowledge from one task or domain, and then applying it (transferring it) to solve a new task. Of course, adding grounding from vision or from real-world interaction into pretrained models can help build even more powerful models, but even text alone is remarkably useful, and we will limit our attention here to purely textual models. There are two common paradigms for pretrained language models. One is the causal or left-to-right transformer model we introduced in Chapter 9. In this chapter we'll introduce a second paradigm, called the bidirectional transformer encoder, and the method of masked language modeling, introduced with the BERT model (Devlin et al., 2019), that allows the model to see entire texts at a time, including both the right and left context."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,,,,,"Finally, we'll show how the contextual embeddings from these pretrained language models can be used to transfer the knowledge embodied in these models to novel applications via fine-tuning. Indeed, in later chapters we'll see pretrained language models fine-tuned to tasks from parsing to question answering, from information extraction to semantic parsing."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Let's begin by introducing the bidirectional transformer encoder that underlies models like BERT and its descendants like RoBERTa (Liu et al., 2019) or SpanBERT (Joshi et al., 2020). In Chapter 9 we explored causal (left-to-right) transformers that can serve as the basis for powerful language models, models that can easily be applied to autoregressive generation problems such as contextual generation, summarization and machine translation. However, when applied to sequence classification and labeling problems, causal models have obvious shortcomings since they are based on an incremental, left-to-right processing of their inputs. If we want to assign the correct named-entity tag to each word in a sentence, or other sophisticated linguistic labels like the parse tags we'll introduce in later chapters, we'll want to be able to take into account information from the right context as we process each element. Fig. 11.1, reproduced here from Chapter 9, illustrates the information flow in the purely left-to-right approach of Chapter 9. As can be seen, the hidden state computation at each point in time is based solely on the current and earlier elements of the input, ignoring potentially useful information located to the right of each tagging decision."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Figure 11.1 A causal, backward-looking transformer model like the one in Chapter 9. Each output is computed independently of the others using only information seen earlier in the context."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Figure 11.2 Information flow in a bidirectional self-attention model. In processing each element of the sequence, the model attends to all inputs, both before and after the current one."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Bidirectional encoders overcome this limitation by allowing the self-attention mechanism to range over the entire input, as shown in Fig. 11.2. The focus of bidirectional encoders is on computing contextualized representations of the tokens in an input sequence that are generally useful across a range of downstream applications. Therefore, bidirectional encoders use self-attention to map sequences of input embeddings (x_1, ..., x_n) to sequences of output embeddings of the same length (y_1, ..., y_n), where the output vectors have been contextualized using information from the entire input sequence."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"This contextualization is accomplished through the use of the same self-attention mechanism used in causal models. As with these models, the first step is to generate a set of key, query and value embeddings for each element of the input vector x through the use of learned weight matrices W Q , W K , and W V . These weights project each input vector x i into its specific role as a key, query, or value."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"q_i = W^Q x_i; \quad k_i = W^K x_i; \quad v_i = W^V x_i \quad (11.1)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"The output vector y i corresponding to each input element x i is a weighted sum of all the input value vectors v, as follows:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"y_i = \sum_{j=1}^{n} \alpha_{ij} v_j \quad (11.2)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"The α weights are computed via a softmax over the comparison scores between every element of an input sequence considered as a query and every other element as a key, where the comparison scores are computed using dot products."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"\alpha_{ij} = \frac{\exp(\text{score}_{ij})}{\sum_{k=1}^{n} \exp(\text{score}_{ik})} \quad (11.3) \qquad \text{score}_{ij} = q_i \cdot k_j \quad (11.4)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Since each output vector, y i , is computed independently, the processing of an entire sequence can be parallelized via matrix operations. The first step is to pack the input embeddings x i into a matrix X ∈ R N×d h . That is, each row of X is the embedding of one token of the input. We then multiply X by the key, query, and value weight matrices (all of dimensionality d × d) to produce matrices Q ∈ R N×d , K ∈ R N×d , and V ∈ R N×d , containing all the key, query, and value vectors in a single step."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Q = XW^Q; \quad K = XW^K; \quad V = XW^V \quad (11.5)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Given these matrices we can compute all the requisite query-key comparisons simultaneously by multiplying Q and K^T in a single operation. Fig. 11.3 illustrates the result of this operation for an input with length 5."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Finally, we can scale these scores, take the softmax, and then multiply the result by V resulting in a matrix of shape N × d where each row contains a contextualized output embedding corresponding to each token in the input."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"\text{SelfAttention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V \quad (11.6)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"As shown in Fig. 11.3, the full set of self-attention scores represented by QK^T constitutes an all-pairs comparison between the keys and queries for each element of the input. In the case of the causal language models of Chapter 9, we masked the upper triangular portion of this matrix to eliminate information about future words, since this would make the language modeling training task trivial. With bidirectional encoders we simply skip the mask, allowing the model to contextualize each token using information from the entire input."
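11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"The following single-head numpy sketch shows how small the difference is: the causal flag adds the upper-triangular mask used in Chapter 9's language models, while the bidirectional encoder simply leaves it out. The function and flag names are choices made for this illustration, not the API of any particular library.

import numpy as np

def self_attention(X, W_Q, W_K, W_V, causal=False):
    # X: (N, d) packed input embeddings, one row per token.
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # all-pairs query-key comparisons
    if causal:                                         # causal LMs mask future positions;
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -1e9, scores)          # bidirectional encoders skip this
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over positions
    return weights @ V                                 # (N, d) contextualized outputs"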
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Beyond this simple change, all of the other elements of the transformer architecture remain the same for bidirectional encoder models. Inputs to the model are segmented using subword tokenization and are combined with positional embeddings before being passed through a series of standard transformer blocks consisting of self-attention and feedforward layers augmented with residual connections and layer normalization, as shown in Fig. 11.4. To make this more concrete, the original bidirectional transformer encoder model, BERT (Devlin et al., 2019), consisted of the following:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"• A subword vocabulary consisting of 30,000 tokens generated using the WordPiece algorithm (Schuster and Nakajima, 2012),
• Hidden layers of size 768,
• 12 layers of transformer blocks, with 12 multihead attention layers each."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"The result is a model with over 100M parameters. The use of WordPiece (one of the large family of subword tokenization algorithms that includes the BPE algorithm we saw in Chapter 2) means that BERT and its descendants are based on subword tokens rather than words. Every input sentence first has to be tokenized, and then all further processing takes place on subword tokens rather than words. This will require, as we'll see, that for some NLP tasks that require notions of words (like named entity tagging, or parsing) we will occasionally need to map subwords back to words."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.1,Bidirectional Transformer Encoders,,,"Finally, a fundamental issue with transformers is that the size of the input layer dictates the complexity of the model. Both the time and memory requirements in a transformer grow quadratically with the length of the input. It's necessary, therefore, to set a fixed input length that is long enough to provide sufficient context for the model to function and yet still be computationally tractable. For BERT, a fixed input size of 512 subword tokens was used."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,,,"We trained causal transformer language models in Chapter 9 by making them iteratively predict the next word in a text. But eliminating the causal mask makes the guess-the-next-word language modeling task trivial since the answer is now directly available from the context, so we're in need of a new training scheme. Fortunately, the traditional learning objective suggests an approach that can be used to train bidirectional encoders. Instead of trying to predict the next word, the model learns to perform a fill-in-the-blank task, technically called the cloze task (Taylor, 1953). To see this, let's return to the motivating example from Chapter 3. Instead of predicting which words are likely to come next in this example:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,,,Please turn your homework ____.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,,,we're asked to predict a missing item given the rest of the sentence.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,,,Please turn _____ homework in.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,,,"That is, given an input sequence with one or more elements missing, the learning task is to predict the missing elements. More precisely, during training the model is deprived of one or more elements of an input sequence and must generate a probability distribution over the vocabulary for each of the missing items. We then use the cross-entropy loss from each of the model's predictions to drive the learning process."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,,,"This approach can be generalized to any of a variety of methods that corrupt the training input and then ask the model to recover the original input. Examples of the kinds of manipulations that have been used include masks, substitutions, reorderings, deletions, and extraneous insertions into the training text."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"The original approach to training bidirectional encoders is called Masked Language Modeling (MLM) (Devlin et al., 2019). As with the language model training methods we've already seen, MLM uses unannotated text from a large corpus."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"Here, the model is presented with a series of sentences from the training corpus where a random sample of tokens from each training sequence is selected for use in the learning task. Once chosen, a token is used in one of three ways:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,• It is replaced with the unique vocabulary token [MASK].
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"• It is replaced with another token from the vocabulary, randomly sampled based on token unigram probabilities."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,• It is left unchanged.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"In BERT, 15% of the input tokens in a training sequence are sampled for learning. Of these, 80% are replaced with [MASK], 10% are replaced with randomly selected tokens, and the remaining 10% are left unchanged."
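11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"The following sketch shows one way this corruption step might be implemented. It is a minimal illustration, not BERT's actual code; the vocabulary, unigram probabilities, and the 15%/80%/10%/10% constants come from the description above, and the function operates on already-tokenized input.

import random

def corrupt_for_mlm(tokens, vocab, unigram_probs, sample_rate=0.15):
    # Returns the corrupted sequence plus (position, original token) pairs;
    # only these sampled positions contribute to the MLM loss.
    corrupted, targets = list(tokens), []
    for i, tok in enumerate(tokens):
        if random.random() >= sample_rate:
            continue                                    # token not sampled
        targets.append((i, tok))
        r = random.random()
        if r < 0.8:
            corrupted[i] = '[MASK]'                     # 80%: mask
        elif r < 0.9:
            # 10%: replace with a token sampled by unigram probability
            corrupted[i] = random.choices(vocab, weights=unigram_probs)[0]
        # else: 10%: leave the token unchanged
    return corrupted, targets"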
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"The MLM training objective is to predict the original inputs for each of the masked tokens using a bidirectional encoder of the kind described in the last section. The cross-entropy loss from these predictions drives the training process for all the parameters in the model. Note that all of the input tokens play a role in the self-attention process, but only the sampled tokens are used for learning."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"More specifically, the original input sequence is first tokenized using a subword model. The sampled items which drive the learning process are chosen from among the set of tokenized inputs. Word embeddings for all of the tokens in the input are retrieved from the word embedding matrix and then combined with positional embeddings to form the input to the transformer."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"Figure 11.5 Masked language model training. In this example, three of the input tokens are selected, two of which are masked and the third is replaced with an unrelated word. The probabilities assigned by the model to these three items are used as the training loss. (In this and subsequent figures we display the input as words rather than subword tokens; the reader should keep in mind that BERT and similar models actually use subword tokens instead.)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"Fig. 11.5 illustrates this approach with a simple example. Here, long, thanks and the have been sampled from the training sequence, with the first two masked and the replaced with the randomly sampled token apricot. The resulting embeddings are passed through a stack of bidirectional transformer blocks. To produce a probability distribution over the vocabulary for each of the masked tokens, the output vector from the final transformer layer for each masked token is multiplied by a learned set of classification weights W_V ∈ R^{|V|×d_h} and then passed through a softmax to yield the required predictions over the vocabulary."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,y_i = softmax(W_V h_i)
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"With a predicted probability distribution for each masked item, we can use cross-entropy to compute the loss for each masked item: the negative log probability assigned to the actual masked word, as shown in Fig. 11.5. The gradients that form the basis for the weight updates are based on the average loss over the sampled learning items from a single training sequence (or batch of sequences)."
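11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.1,Masking Words,"As a rough sketch of this loss computation (assuming PyTorch, a matrix W_V of classification weights, and the final-layer outputs h for one training sequence; these names are ours, not from any particular implementation):

import torch
import torch.nn.functional as F

def mlm_loss(h, W_V, target_positions, target_ids):
    # h: (seq_len, d_h) final-layer output vectors for one training sequence
    # W_V: (|V|, d_h) classification weights; target_positions: indices of the
    # sampled tokens; target_ids: vocabulary ids of their original words.
    logits = h[target_positions] @ W_V.T        # (n_targets, |V|)
    # cross_entropy applies the softmax and averages the negative log
    # probabilities of the correct words over the sampled positions only
    return F.cross_entropy(logits, target_ids)"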
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"For many NLP applications, the natural unit of interest may be larger than a single word (or token). Question answering, syntactic parsing, coreference and semantic role labeling applications all involve the identification and classification of constituents, or phrases. This suggests that a span-oriented masked learning objective might provide improved performance on such tasks."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"A span is a contiguous sequence of one or more words selected from a training text, prior to subword tokenization. In span-based masking, a set of randomly selected spans from a training sequence is chosen. In the SpanBERT work that originated this technique (Joshi et al., 2020), a span length is first chosen by sampling from a geometric distribution that is biased towards shorter spans, with an upper bound of 10. Given this span length, a starting location consistent with the desired span length and the length of the input is sampled uniformly."
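11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"A minimal sketch of this sampling procedure (assuming the geometric parameter p = 0.2 reported for SpanBERT; the span indices here are word positions, before subword tokenization, and the input is assumed to be at least max_len words long):

import random

def sample_span(seq_len, p=0.2, max_len=10):
    # Sample a span length from a geometric distribution biased toward short
    # spans, clipped at max_len, then sample a start position uniformly from
    # the locations where a span of that length fits.
    length = 1
    while random.random() > p and length < max_len:
        length += 1
    start = random.randrange(0, seq_len - length + 1)
    return start, start + length - 1      # inclusive (start, end) word indices"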
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"Once a span is chosen for masking, all the words within the span are substituted according to the same regime used in BERT: 80% of the time the span elements are substituted with the [MASK] token, 10% of the time they are replaced by randomly sampled words from the vocabulary, and 10% of the time they are left as is. Note that this substitution process is done at the span level; all the tokens in a given span are substituted using the same method. As with BERT, the total token substitution is limited to 15% of the training sequence input. Having selected and masked the training span, the input is passed through the standard transformer architecture to generate contextualized representations of the input tokens."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"Downstream span-based applications rely on span representations derived from the tokens within the span, as well as the start and end points, or the boundaries, of a span. Representations for these boundaries are typically derived from the first and last words of a span, the words immediately preceding and following the span, or some combination of them. The SpanBERT learning objective augments the MLM objective with a boundary-oriented component called the Span Boundary Objective (SBO). The SBO relies on a model's ability to predict the words within a masked span from the words immediately preceding and following it. This prediction is made using the output vectors associated with the words that immediately precede and follow the span being masked, along with a positional embedding that signals which word in the span is being predicted:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"L(x) = L_MLM(x) + L_SBO(x)    (11.7)
L_SBO(x) = −log P(x | x_s, x_e, p_x)    (11.8)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"where s denotes the position of the first word of the span and e the position of the last. The prediction for a given position i within the span is produced by concatenating the span boundary vectors, that is, the output embeddings y_{s−1} and y_{e+1} of the words immediately preceding and following the span, with a positional embedding for position i and passing the result through a 2-layer feedforward network."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"s = FFNN([y_{s−1}; y_{e+1}; p_{i−s+1}])    (11.9)
z = softmax(E s)    (11.10)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"The final loss is the sum of the BERT MLM loss and the SBO loss. Fig. 11.6 illustrates this with one of our earlier examples. Here the selected span is and thanks for, which spans positions 3 through 5. The total loss associated with the masked token thanks is the sum of the cross-entropy loss generated from the prediction of thanks from the output y_4, plus the cross-entropy loss from the prediction of thanks from the output vectors y_2 and y_6 together with the positional embedding for the position of thanks within the span. Figure 11.6 Span-based language model training. In this example, a span of length 3 is selected for training and all of the words in the span are masked. The figure illustrates the loss computed for the word thanks; the loss for the entire span is based on the loss for all three of the words in the span."
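11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.2,Masking Spans,"A sketch of an SBO prediction head along the lines of Eq. 11.9-11.10 might look as follows. This is a PyTorch illustration under assumed layer sizes and activation, not SpanBERT's actual code; the class and argument names are ours.

import torch
import torch.nn as nn

class SpanBoundaryObjective(nn.Module):
    # Predicts a token inside a masked span from the outputs just outside the
    # span plus a relative position embedding (cf. Eq. 11.9-11.10).
    def __init__(self, d_h, vocab_size, max_span_len=10):
        super().__init__()
        self.pos_emb = nn.Embedding(max_span_len, d_h)
        self.ffnn = nn.Sequential(
            nn.Linear(3 * d_h, d_h), nn.GELU(), nn.Linear(d_h, d_h))
        self.output = nn.Linear(d_h, vocab_size)   # plays the role of E

    def forward(self, y, span_start, span_end, i):
        # y: (seq_len, d_h) final-layer outputs; i: absolute position of the
        # span token being predicted, with span_start <= i <= span_end
        p = self.pos_emb(torch.tensor([i - span_start]))[0]
        s = self.ffnn(torch.cat([y[span_start - 1], y[span_end + 1], p]))
        return torch.log_softmax(self.output(s), dim=-1)   # log P over vocab"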
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.3,Next Sentence Prediction,"The focus of mask-based learning is on predicting words from surrounding contexts with the goal of producing effective word-level representations. However, an important class of applications involves determining the relationship between pairs of sentences. These include tasks like paraphrase detection (detecting if two sentences have similar meanings), entailment (detecting if the meanings of two sentences entail or contradict each other) or discourse coherence (deciding if two neighboring sentences form a coherent discourse)."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.3,Next Sentence Prediction,"To capture the kind of knowledge required for applications such as these, BERT introduced a second learning objective called Next Sentence Prediction (NSP)."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.3,Next Sentence Prediction,"In this task, the model is presented with pairs of sentences and is asked to predict whether each pair consists of an actual pair of adjacent sentences from the training corpus or a pair of unrelated sentences. In BERT, 50% of the training pairs consisted of positive pairs, and in the other 50% the second sentence of a pair was randomly selected from elsewhere in the corpus. The NSP loss is based on how well the model can distinguish true pairs from random pairs. To facilitate NSP training, BERT introduces two new tokens to the input representation (tokens that will prove useful for fine-tuning as well). After tokenizing the input with the subword model, the token [CLS] is prepended to the input sentence pair, and the token [SEP] is placed between the sentences and after the final token of the second sentence. Finally, embeddings representing the first and second segments of the input are added to the word and positional embeddings to allow the model to more easily distinguish the input sentences."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.3,Next Sentence Prediction,"During training, the output vector from the final layer associated with the [CLS] token represents the next sentence prediction. As with the MLM objective, a learned set of classification weights W_NSP ∈ R^{2×d_h} is used to produce a two-class prediction from the raw [CLS] vector."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.3,Next Sentence Prediction,y_i = softmax(W_NSP h_i)
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.3,Next Sentence Prediction,"Cross-entropy is used to compute the NSP loss for each sentence pair presented to the model. Fig. 11.7 illustrates the overall NSP training setup. In BERT, the NSP loss was used in conjunction with the MLM training objective to form the final loss. Figure 11.7 An example of the NSP loss calculation."
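11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.3,Next Sentence Prediction,"A simplified sketch of how such training pairs might be constructed is shown below; BERT's actual data preparation also packs multiple sentences into each segment and enforces the overall token limit, which we omit here.

import random

def make_nsp_pair(sentences, idx):
    # sentences: tokenized sentences of the corpus in document order.
    # Returns (tokens, segment_ids, label); label 1 = true adjacent pair.
    first = sentences[idx]
    if random.random() < 0.5:
        second, label = sentences[idx + 1], 1          # actual next sentence
    else:
        second, label = random.choice(sentences), 0    # random second sentence
    tokens = ['[CLS]'] + first + ['[SEP]'] + second + ['[SEP]']
    segment_ids = [0] * (len(first) + 2) + [1] * (len(second) + 1)
    return tokens, segment_ids, label"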
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.4,Training Regimes,"The corpus used in training BERT and other early transformer-based language models consisted of an 800 million word corpus of book texts called BooksCorpus (Zhu et al., 2015) and a 2.5 billion word corpus derived from the English Wikipedia, for a combined size of 3.3 billion words. The BooksCorpus is no longer used (for intellectual property reasons), and in general, as we'll discuss later, state-of-the-art models employ corpora that are orders of magnitude larger than these early efforts. To train the original BERT models, pairs of sentences were selected from the training corpus according to the next sentence prediction 50/50 scheme. Pairs were sampled so that their combined length was less than the 512-token input limit. Tokens within these sentence pairs were then masked using the MLM approach, with the combined loss from the MLM and NSP objectives used as the final loss. Approximately 40 passes (epochs) over the training data were required for the model to converge."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.4,Training Regimes,"The result of this pretraining process consists of both the learned word embeddings and all the parameters of the bidirectional encoder that are used to produce contextual embeddings for novel inputs."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.5,Contextual Embeddings,"Given a pretrained language model and a novel input sentence, we can think of the output of the model as constituting contextual embeddings for each token in the input. These contextual embeddings can be used as a contextual representation of the meaning of the input token for any task requiring the meaning of a word."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.5,Contextual Embeddings,"Contextual embeddings are thus vectors representing some aspect of the meaning of a token in context. For example, given a sequence of input tokens x_1, ..., x_n, we can use the output vector y_i from the final layer of the model as a representation of the meaning of token x_i in the context of sentence x_1, ..., x_n. Or instead of just using the vector y_i from the final layer of the model, it's common to compute a representation for x_i by averaging the output vectors y_i from each of the last four layers of the model."
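11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.5,Contextual Embeddings,"For example, using the Hugging Face transformers library (assuming its standard BERT interface; the averaging below simply follows the last-four-layers heuristic mentioned above, not a prescribed recipe):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased', output_hidden_states=True)

inputs = tokenizer('So long and thanks for all the fish', return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (embedding layer + 12 transformer layers) tensors,
# each of shape (batch, seq_len, d_h)
last_four = torch.stack(outputs.hidden_states[-4:])   # (4, 1, seq_len, d_h)
contextual = last_four.mean(dim=0)                    # (1, seq_len, d_h)"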
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.5,Contextual Embeddings,"Just as we used static embeddings like word2vec to represent the meaning of words, we can use contextual embeddings as representations of word meanings in context for any task that might require a model of word meaning. Where static embeddings represent the meaning of word types (vocabulary entries), contextual embeddings represent the meaning of word tokens: instances of a particular word type in a particular context. Contextual embeddings can thus be used for tasks like measuring the semantic similarity of two words in context, and are useful in linguistic tasks that require models of word meaning."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.2,Training Bidirectional Encoders,11.2.5,Contextual Embeddings,"In the next section, however, we'll see the most common use of these representations: as embeddings of words or even entire sentences that are the inputs to classifiers in the fine-tuning process for downstream NLP applications."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,,,"The power of pretrained language models lies in their ability to extract generalizations from large amounts of text, generalizations that are useful for myriad downstream applications. To make practical use of these generalizations, we need to create interfaces from these models to downstream applications through a process called fine-tuning. Fine-tuning facilitates the creation of applications on top of pretrained models through the addition of a small set of application-specific parameters. The fine-tuning process consists of using labeled data from the application to train these additional application-specific parameters. Typically, this training will either freeze or make only minimal adjustments to the pretrained language model parameters."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,,,"The following sections introduce fine-tuning methods for the most common applications including sequence classification, sequence labeling, sentence-pair inference, and span-based operations."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.1,Sequence Classification,"Sequence classification applications often represent an input sequence with a single consolidated representation. With RNNs, we used the hidden layer associated with the final input element to stand for the entire sequence. A similar approach is used with transformers. An additional vector is added to the model to stand for the entire sequence. This vector is sometimes called the sentence embedding since it refers to the entire sequence, although the term 'sentence embedding' is also used in other ways. In BERT, the [CLS] token plays the role of this embedding. This unique token is added to the vocabulary and is prepended to the start of all input sequences, both during pretraining and encoding. The output vector in the final layer of the model for the [CLS] input represents the entire input sequence and serves as the input to a classifier head, a logistic regression or neural network classifier that makes the relevant decision."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.1,Sequence Classification,"As an example, let's return to the problem of sentiment classification. A simple approach to fine-tuning a classifier for this application involves learning a set of weights, W_C, to map the output vector for the [CLS] token, y_CLS, to a set of scores over the possible sentiment classes. Assuming a three-way sentiment classification task (positive, negative, neutral) and dimensionality d_h for the size of the language model hidden layers gives W_C ∈ R^{3×d_h}. Classification of unseen documents proceeds by passing the input text through the pretrained language model to generate y_CLS, multiplying it by W_C, and finally passing the resulting vector through a softmax."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.1,Sequence Classification,y = softmax(W_C y_CLS)    (11.11)
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.1,Sequence Classification,"Fine-tuning the values in W_C requires supervised training data consisting of input sequences labeled with the appropriate class. Training proceeds in the usual way; cross-entropy loss between the softmax output and the correct answer is used to drive the learning that produces W_C."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.1,Sequence Classification,"A key difference from what we've seen earlier with neural classifiers is that this loss can be used not only to learn the weights of the classifier, but also to update the weights of the pretrained language model itself. In practice, reasonable classification performance is typically achieved with only minimal changes to the language model parameters, often limited to updates over the final few layers of the transformer. Fig. 11.8 illustrates this overall approach to sequence classification."
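11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.1,Sequence Classification,"A minimal PyTorch sketch of such a classification head and one fine-tuning step follows. The encoder call assumes a Hugging Face-style interface where the [CLS] vector sits at position 0 of last_hidden_state; names like SentimentHead and finetune_step are ours, not from any library.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SentimentHead(nn.Module):
    # Maps the [CLS] output vector to scores over 3 sentiment classes (Eq. 11.11).
    def __init__(self, d_h, n_classes=3):
        super().__init__()
        self.W_C = nn.Linear(d_h, n_classes, bias=False)

    def forward(self, y_cls):            # y_cls: (batch, d_h)
        return self.W_C(y_cls)           # logits; softmax is folded into the loss

def finetune_step(encoder, head, optimizer, input_ids, attention_mask, labels):
    y_cls = encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
    loss = F.cross_entropy(head(y_cls), labels)
    loss.backward()                      # gradients flow into the encoder too,
    optimizer.step()                     # unless its parameters are frozen
    optimizer.zero_grad()
    return loss.item()"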
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.2,Pair-Wise Sequence Classification,"As mentioned in Section 11.2.3, an important type of problem involves the classification of pairs of input sequences. Practical applications that fall into this class include logical entailment, paraphrase detection and discourse analysis."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.2,Pair-Wise Sequence Classification,"Fine-tuning an application for one of these tasks proceeds just as with pretraining using the NSP objective. During fine-tuning, pairs of labeled sentences from the supervised training data are presented to the model. As with sequence classification, the output vector associated with the prepended [CLS] token represents the model's view of the input pair. And as with NSP training, the two inputs are separated by a [SEP] token. To perform classification, the [CLS] vector is multiplied by a set of learned classification weights and passed through a softmax to generate label predictions, which are then used to update the weights."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.2,Pair-Wise Sequence Classification,"As an example, let's consider an entailment classification task with the Multi-Genre Natural Language Inference (MultiNLI) dataset (Williams et al., 2018). In the task of natural language inference or NLI, also called recognizing textual entailment, a model is presented with a pair of sentences and must classify the relationship between their meanings. For example, in the MultiNLI corpus, pairs of sentences are given one of three labels: entails, contradicts and neutral. These labels describe a relationship between the meaning of the first sentence (the premise) and the meaning of the second sentence (the hypothesis). Here are representative examples of each class from the corpus:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.2,Pair-Wise Sequence Classification,• Neutral a: Jon walked back to the town to the smithy. b: Jon traveled back to his hometown.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.2,Pair-Wise Sequence Classification,• Contradicts a: Tourist Information offices can be very helpful. b: Tourist Information offices are never of any help.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.2,Pair-Wise Sequence Classification,• Entails a: I'm confused. b: Not all of it is very clear to me.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.2,Pair-Wise Sequence Classification,"A relationship of contradicts means that the premise contradicts the hypothesis; entails means that the premise entails the hypothesis; neutral means that neither is necessarily true. The meaning of these labels is looser than strict logical entailment or contradiction, indicating only that a typical human reading the sentences would most likely interpret the meanings in this way."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.2,Pair-Wise Sequence Classification,"To fine-tune a classifier for the MultiNLI task, we pass the premise/hypothesis pairs through a bidirectional encoder as described above and use the output vector for the [CLS] token as the input to the classification head. As with ordinary sequence classification, this head provides the input to a three-way classifier that can be trained on the MultiNLI training corpus."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,"Sequence labelling tasks, such as part-of-speech tagging or BIO-based named entity recognition, follow the same basic classification approach. Here, the final output vector corresponding to each input token is passed to a classifier that produces a softmax distribution over the possible set of tags. Again, assuming a simple classifier consisting of a single feedforward layer followed by a softmax, the set of weights to be learned for this additional layer is W_K ∈ R^{k×d_h}, where k is the number of possible tags for the task. As with RNNs, a greedy approach, where the argmax tag for each token is taken as a likely answer, can be used to generate the final output tag sequence. Fig. 11.9 illustrates an example of this approach."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,"y_i = softmax(W_K z_i)    (11.12)
t_i = argmax_k(y_i)    (11.13)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,"Alternatively, the distribution over labels provided by the softmax for each input token can be passed to a conditional random field (CRF) layer which can take global tag-level transitions into account."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,[CLS] Janet will back the bill
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,Bidirectional Transformer Encoder NNP MD VB DT NN Figure 11 .9 Sequence labeling for part-of-speech tagging with a bidirectional transformer encoder. The output vector for each input token is passed to a simple k-way classifier.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,"A complication with this approach arises from the use of subword tokenization such as WordPiece or Byte Pair Encoding. Supervised training data for tasks like named entity recognition (NER) is typically in the form of BIO tags associated with text segmented at the word level. For example, the following sentence contains two named entities:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,[ LOC Mt. Sanitas ] is in [ LOC Sunshine Canyon] .
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,would have the following set of per-word BIO tags.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,(11.14) Mt.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,B-LOC
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,Sanitas I-LOC is O in O Sunshine B-LOC Canyon I-LOC . O
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,"Unfortunately, the WordPiece tokenization for this sentence yields the following sequence of tokens which doesn't align directly with BIO tags in the ground truth annotation:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,"'Mt ', '.', 'San', '##itas', 'is', 'in', 'Sunshine', 'Canyon' '.' To deal with this misalignment, we need a way to assign BIO tags to subword tokens during training and a corresponding way to recover word-level tags from subwords during decoding. For training, we can just assign the gold-standard tag associated with each word to all of the subword tokens derived from it."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,"For decoding, the simplest approach is to use the argmax BIO tag associated with the first subword token of a word. Thus, in our example, the BIO tag assigned to ""Mt"" would be assigned to ""Mt."" and the tag assigned to ""San"" would be assigned to ""Sanitas"", effectively ignoring the information in the tags assigned to ""."" and ""##itas"". More complex approaches combine the distribution of tag probabilities across the subwords in an attempt to find an optimal word-level tag."
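11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.3,Sequence Labelling,"The two alignment steps might be sketched as follows. Here tokenize_word is a hypothetical function mapping one word to its list of subword strings, and the decoding function implements the first-subword heuristic described above.

def tags_to_subwords(words, word_tags, tokenize_word):
    # Training: copy each word-level BIO tag onto all of its subword tokens.
    subwords, subword_tags, word_index = [], [], []
    for w_i, (word, tag) in enumerate(zip(words, word_tags)):
        pieces = tokenize_word(word)      # e.g. 'Sanitas' -> ['San', '##itas']
        subwords.extend(pieces)
        subword_tags.extend([tag] * len(pieces))
        word_index.extend([w_i] * len(pieces))
    return subwords, subword_tags, word_index

def subword_tags_to_words(predicted_tags, word_index, n_words):
    # Decoding: keep only the tag predicted for the first subword of each word.
    word_tags = [None] * n_words
    for tag, w_i in zip(predicted_tags, word_index):
        if word_tags[w_i] is None:
            word_tags[w_i] = tag
    return word_tags"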
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"Span-oriented applications operate in a middle ground between sequence level and token level tasks. That is, in span-oriented applications the focus is on generating and operating with representations of contiguous sequences of tokens. Typical operations include identifying spans of interest, classifying spans according to some labeling scheme, and determining relations among discovered spans. Applications include named entity recognition, question answering, syntactic parsing, semantic role labeling and coreference resolution."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"Formally, given an input sequence x consisting of T tokens, (x_1, x_2, ..., x_T), a span is a contiguous sequence of tokens with start i and end j such that 1 ≤ i ≤ j ≤ T. This formulation results in a total of T(T+1)/2 possible spans. For practical purposes, span-based models often impose an application-specific length limit L, so the legal spans are limited to those where j − i < L. In the following, we'll refer to the enumerated set of legal spans in x as S(x)."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"The first step in fine-tuning a pretrained language model for a span-based application is using the contextualized embeddings from the model to generate representations for all the spans in the input. Most schemes for representing spans make use of two primary components: representations of the span boundaries and summary representations of the contents of each span. To compute a unified span representation, we concatenate the boundary representations with the summary representation."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"In the simplest possible approach, we can use the contextual embeddings of the start and end tokens of a span as the boundaries, and the average of the output embeddings within the span as the summary representation."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"g_ij = (1 / (j − i + 1)) Σ_{k=i}^{j} h_k    (11.15)
spanRep_ij = [h_i; h_j; g_ij]    (11.16)"
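11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"In code, this simple span representation might look like the following sketch (PyTorch; h holds the contextual embeddings for one input, and i, j are inclusive token indices):

import torch

def span_rep(h, i, j):
    # h: (seq_len, d_h) contextual embeddings for one input sequence
    g = h[i:j + 1].mean(dim=0)            # content summary g_ij (Eq. 11.15)
    return torch.cat([h[i], h[j], g])     # [h_i; h_j; g_ij]      (Eq. 11.16)"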
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"A weakness of this approach is that it doesn't distinguish the use of a word's embedding as the beginning of a span from its use as the end of one. Therefore, more elaborate schemes for representing the span boundaries involve learned representations for start and end points through the use of two distinct feedforward networks:"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"s_i = FFNN_start(h_i)    (11.17)
e_j = FFNN_end(h_j)    (11.18)
spanRep_ij = [s_i; e_j; g_ij]    (11.19)"
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"Similarly, a simple average of the vectors in a span is unlikely to be an optimal representation of a span since it treats all of a span's embeddings as equally important. For many applications, a more useful representation would be centered around the head of the phrase corresponding to the span. One method for getting at such information in the absence of a syntactic parse is to use a standard self-attention layer to generate a span representation."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,g_ij = SelfATTN(h_{i:j})    (11.20)
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"Now, given span representations g for each span in S(x), classifiers can be finetuned to generate application-specific scores for various span-oriented tasks: binary span identification (is this a legitimate span of interest or not?), span classification (what kind of span is this?), and span relation classification (how are these two spans related?)."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"To ground this discussion, let's return to named entity recognition (NER). Given a scheme for representing spans and a set of named entity types, a span-based approach to NER is a straightforward classification problem in which each span in an input is assigned a class label. More formally, given an input sequence x, we want to assign a label y, from the set of valid NER labels, to each of the spans in S(x). Since most of the spans in a given input will not be named entities, we add the label NULL to the set of types in Y."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,y_ij = softmax(FFNN(g_ij))    (11.21)
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"Figure 11.10 A span-oriented approach to named entity classification. The figure only illustrates the computation for 2 spans corresponding to ground truth named entities. In reality, the network scores all of the T(T+1)/2 spans in the text, that is, all the unigrams, bigrams, trigrams, etc. up to the length limit."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"With this approach, fine-tuning entails using supervised training data to learn the parameters of the final classifier, as well as the weights used to generate the boundary representations, and the weights in the self-attention layer that generates the span content representation. During training, the model's predictions for all spans are compared to their gold-standard labels and cross-entropy loss is used to drive the training."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"During decoding, each span is scored using a softmax over the final classifier output to generate a distribution over the possible labels, with the argmax score for each span taken as the correct answer. Fig. 11.10 illustrates this approach with an example. A variation on this scheme designed to improve precision adds a calibrated threshold to the labeling of a span as anything other than NULL."
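11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"A decoding sketch that enumerates and labels spans in this way is shown below; span_classifier stands in for the feedforward classifier of Eq. 11.21 and is a hypothetical module, not a library call, and span_rep repeats the simple representation sketched earlier.

import torch

def span_rep(h, i, j):
    # simple span representation from the earlier sketch: [h_i; h_j; mean]
    return torch.cat([h[i], h[j], h[i:j + 1].mean(dim=0)])

def label_spans(h, span_classifier, max_len, null_id=0):
    # Score every span of length <= max_len and keep the non-NULL predictions.
    T = h.size(0)
    predictions = []
    for i in range(T):
        for j in range(i, min(i + max_len, T)):
            scores = span_classifier(span_rep(h, i, j))   # Eq. 11.21 scores
            label = scores.argmax().item()
            if label != null_id:                          # skip NULL spans
                predictions.append((i, j, label))
    return predictions"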
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.3,Transfer Learning through Fine-Tuning,11.3.4,Fine-tuning for Span-Based Applications,"There are two significant advantages to a span-based approach to NER over a BIO-based per-word labeling approach. The first advantage is that BIO-based approaches are prone to a labeling mismatch problem: every label in a longer named entity must be correct for an output to be judged correct. Returning to the example in Fig. 11.10, a BIO labeling that gets even a single tag wrong, say the label on the first word of a multi-word entity, would be judged entirely wrong. Span-based approaches only have to make one classification for each span. The second advantage to span-based approaches is that they naturally accommodate embedded named entities. For example, in this example both United Airlines and United Airlines Holding are legitimate named entities. The BIO approach has no way of encoding this embedded structure. But the span-based approach can naturally label both, since the spans are labeled separately."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,"Large pretrained neural language models exhibit many of the potential harms discussed in Chapter 4 and Chapter 6. Many of these harms become realized when pretrained language models are fine-tuned to downstream tasks, particularly those involving text generation, such as in assistive technologies like web search query completion, or predictive typing for email (Olteanu et al., 2020)."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,"For example, language models can generate toxic language. Gehman et al. (2020) show that many kinds of completely non-toxic prompts can nonetheless lead large language models to output hate speech and abuse. Brown et al. (2020) and Sheng et al. (2019) showed that large language models generate sentences displaying negative attitudes toward minority identities such as being Black or gay."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,"Indeed, language models are biased in a number of ways by the distributions of their training data. Gehman et al. (2020) show that large language model training datasets include toxic text scraped from banned sites. In addition to problems of toxicity, internet data is disproportionately generated by authors from developed countries, and many large language models train on data from Reddit, whose authors skew male and young. Such biased population samples likely skew the resulting generation away from the perspectives or topics of underrepresented populations. Furthermore, language models can amplify demographic and other biases in training data, just as we saw for embedding models in Chapter 6."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,"Language models can also be a tool for generating text for misinformation, phishing, radicalization, and other socially harmful activities (Brown et al., 2020). McGuffie and Newhouse (2020) show how large language models can generate text that emulates online extremists, with the risk of amplifying extremist movements and their attempts to radicalize and recruit."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,"Finally, there are important privacy issues. Language models, like other machine learning models, can leak information about their training data. It is thus possible for an adversary to extract individual training-data phrases from a language model, such as an individual person's name, phone number, and address (Henderson et al. 2017, Carlini et al. 2020). This is a problem if large language models are trained on private datasets such as electronic health records (EHRs). Mitigating all these harms is an important but unsolved research question in NLP. Extra pretraining (Gururangan et al., 2020) on non-toxic subcorpora seems to reduce a language model's tendency to generate toxic language somewhat (Gehman et al., 2020). And analyzing the data used to pretrain large language models is important for understanding toxicity and bias in generation, as well as privacy, making it extremely important that language models include datasheets (page ??) or model cards (page ??) giving full replicable information on the corpora used to train them."
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,This chapter has introduced the topic of transfer learning from pretrained language models. Here's a summary of the main points that we covered:
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,• Bidirectional encoders can be used to generate contextualized representations of input embeddings using the entire input context.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,• Pretrained language models based on bidirectional encoders can be learned using a masked language model objective where a model is trained to guess the missing information from an input.
11,Transfer Learning with Pretrained Language Models and Contextual Embeddings,11.6,Potential Harms from Language Models,,,• Pretrained language models can be fine-tuned for specific applications by adding lightweight classifier layers on top of the outputs of the pretrained model.