diff --git "a/slp3ed.tsv" "b/slp3ed.tsv" --- "a/slp3ed.tsv" +++ "b/slp3ed.tsv" @@ -1,7 +1,7 @@ n_chapter chapter n_section section n_subsection subsection text -2 Regular Expressions The dialogue above is from ELIZA, an early natural language processing system ELIZA that could carry on a limited conversation with a user by imitating the responses of a Rogerian psychotherapist (Weizenbaum, 1966). ELIZA is a surprisingly simple program that uses pattern matching to recognize phrases like \"I need X\" and translate them into suitable outputs like \"What would it mean to you if you got X?\". This simple technique succeeds in this domain because ELIZA doesn't actually need to know anything to mimic a Rogerian psychotherapist. As Weizenbaum notes, this is one of the few dialogue genres where listeners can act as if they know nothing of the world. Eliza's mimicry of human conversation was remarkably successful: many people who interacted with ELIZA came to believe that it really understood them and their problems, many continued to believe in ELIZA's abilities even after the program's operation was explained to them (Weizenbaum, 1976), and even today such chatbots are a fun diversion. +2 Regular Expressions "The dialogue above is from ELIZA, an early natural language processing system ELIZA that could carry on a limited conversation with a user by imitating the responses of a Rogerian psychotherapist (Weizenbaum, 1966). ELIZA is a surprisingly simple program that uses pattern matching to recognize phrases like \""I need X\"" and translate them into suitable outputs like \""What would it mean to you if you got X?\"". This simple technique succeeds in this domain because ELIZA doesn't actually need to know anything to mimic a Rogerian psychotherapist. As Weizenbaum notes, this is one of the few dialogue genres where listeners can act as if they know nothing of the world. Eliza's mimicry of human conversation was remarkably successful: many people who interacted with ELIZA came to believe that it really understood them and their problems, many continued to believe in ELIZA's abilities even after the program's operation was explained to them (Weizenbaum, 1976), and even today such chatbots are a fun diversion." 2 Regular Expressions Of course modern conversational agents are much more than a diversion; they can answer questions, book flights, or find restaurants, functions for which they rely on a much more sophisticated understanding of the user's intent, as we will see in Chapter 24. Nonetheless, the simple pattern-based methods that powered ELIZA and other chatbots play a crucial role in natural language processing. -2 Regular Expressions We'll begin with the most important tool for describing text patterns: the regular expression. Regular expressions can be used to specify strings we might want to extract from a document, from transforming \"I need X\" in Eliza above, to defining strings like $199 or $24.99 for extracting tables of prices from a document. +2 Regular Expressions "We'll begin with the most important tool for describing text patterns: the regular expression. Regular expressions can be used to specify strings we might want to extract from a document, from transforming \""I need X\"" in Eliza above, to defining strings like $199 or $24.99 for extracting tables of prices from a document." 2 Regular Expressions We'll then turn to a set of tasks collectively called text normalization, in which regular expressions play an important part. 
Normalizing text means converting it to a more convenient, standard form. For example, most of what we are going to do with language relies on first separating out or tokenizing words from running text, the task of tokenization. English words are often separated from each other by whitespace, but whitespace is not always sufficient. New York and rock 'n' roll are sometimes treated as large words despite the fact that they contain spaces, while sometimes we'll need to separate I'm into the two words I and am. For processing tweets or texts we'll need to tokenize emoticons like :) or hashtags like #nlproc. Some languages, like Japanese, don't have spaces between words, so word tokenization becomes more difficult. 2 Regular Expressions Another part of text normalization is lemmatization, the task of determining that two words have the same root, despite their surface differences. For example, the words sang, sung, and sings are forms of the verb sing. The word sing is the common lemma of these words, and a lemmatizer maps from all of these to sing. Lemmatization is essential for processing morphologically complex languages like Arabic. Stemming refers to a simpler version of lemmatization in which we mainly just strip suffixes from the end of the word. Text normalization also includes sentence segmentation: breaking up a text into individual sentences, using cues like periods or exclamation points. 2 Regular Expressions Finally, we'll need to compare words and other strings. We'll introduce a metric called edit distance that measures how similar two strings are based on the number of edits (insertions, deletions, substitutions) it takes to change one string into the other. Edit distance is an algorithm with applications throughout language processing, from spelling correction to speech recognition to coreference resolution. @@ -13,14 +13,14 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions The regular expression /[1234567890]/ specifies any single digit. While such classes of characters as digits or letters are important building blocks in expressions, they can get awkward (e.g., it’s inconvenient to specify /[ABCDEFGHIJKLMNOPQRSTUVWXYZ]/ to mean “any capital letter”). In cases where there is a well-defined sequence associated with a set of characters, the brackets can be used with the dash (-) to specify any one character in a range. The pattern /[2-5]/ specifies any one of the characters 2, 3, 4, or 5. The pattern /[b-g]/ specifies one of the characters b, c, d, e, f, or g. Some other examples are shown in Fig. 2.3. 2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions The square braces can also be used to specify what a single character cannot be, by use of the caret ˆ. If the caret ˆ is the first symbol after the open square brace [, the resulting pattern is negated. For example, the pattern /[ˆa]/ matches any single character (including special characters) except a. This is only true when the caret is the first symbol after the open square brace. If it occurs anywhere else, it usually stands for a caret; Fig. 2.4 shows some examples. 2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions How can we talk about optional elements, like an optional s in woodchuck and woodchucks? We can’t use the square brackets, because while they allow us to say “s or S”, they don’t allow us to say “s or nothing”. 
For this we use the question mark /?/, which means “the preceding character or nothing”, as shown in Fig. 2.5. -2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions We can think of the question mark as meaning "zero or one instances of the previous character". That is, it's a way of specifying how many of something that we want, something that is very important in regular expressions. For example, consider the language of certain sheep, which consists of strings that look like the following: baa! baaa! baaaa! baaaaa! . . . -2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions This language consists of strings with a b, followed by at least two a's, followed by an exclamation point. The set of operators that allows us to say things like "some number of as" are based on the asterisk or *, commonly called the Kleene * (gen-Kleene * erally pronounced "cleany star"). The Kleene star means "zero or more occurrences of the immediately previous character or regular expression". So /a*/ means "any string of zero or more as". This will match a or aaaaaa, but it will also match Off Minor since the string Off Minor has zero a's. So the regular expression for matching one or more a is /aa*/, meaning one a followed by zero or more as. More complex patterns can also be repeated. So /[ab]*/ means "zero or more a's or b's" (not "zero or more right square braces"). This will match strings like aaaa or ababab or bbbb. -2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions For specifying multiple digits (useful for finding prices) we can extend /[0-9]/, the regular expression for a single digit. An integer (a string of digits) is thus /[0-9][0-9]*/. (Why isn't it just /[0-9]*/?) Sometimes it's annoying to have to write the regular expression for digits twice, so there is a shorter way to specify "at least one" of some character. This is the Kleene +, which means "one or more occurrences of the immediately preceding Kleene + character or regular expression". Thus, the expression /[0-9]+/ is the normal way to specify "a sequence of digits". There are thus two ways to specify the sheep language: /baaa*!/ or /baa+!/. +2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions "We can think of the question mark as meaning ""zero or one instances of the previous character"". That is, it's a way of specifying how many of something that we want, something that is very important in regular expressions. For example, consider the language of certain sheep, which consists of strings that look like the following: baa! baaa! baaaa! baaaaa! . . ." +2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions "This language consists of strings with a b, followed by at least two a's, followed by an exclamation point. The set of operators that allows us to say things like ""some number of as"" are based on the asterisk or *, commonly called the Kleene * (gen-Kleene * erally pronounced ""cleany star""). The Kleene star means ""zero or more occurrences of the immediately previous character or regular expression"". So /a*/ means ""any string of zero or more as"". This will match a or aaaaaa, but it will also match Off Minor since the string Off Minor has zero a's. So the regular expression for matching one or more a is /aa*/, meaning one a followed by zero or more as. More complex patterns can also be repeated. So /[ab]*/ means ""zero or more a's or b's"" (not ""zero or more right square braces""). 
This will match strings like aaaa or ababab or bbbb." +2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions "For specifying multiple digits (useful for finding prices) we can extend /[0-9]/, the regular expression for a single digit. An integer (a string of digits) is thus /[0-9][0-9]*/. (Why isn't it just /[0-9]*/?) Sometimes it's annoying to have to write the regular expression for digits twice, so there is a shorter way to specify ""at least one"" of some character. This is the Kleene +, which means ""one or more occurrences of the immediately preceding Kleene + character or regular expression"". Thus, the expression /[0-9]+/ is the normal way to specify ""a sequence of digits"". There are thus two ways to specify the sheep language: /baaa*!/ or /baa+!/." 2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions One very important special character is the period (/./), a wildcard expression that matches any single character (except a carriage return), as shown in Fig. 2 .6. -2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions The wildcard is often used together with the Kleene star to mean "any string of characters". For example, suppose we want to find any line in which a particular word, for example, aardvark, appears twice. We can specify this with the regular expression /aardvark.*aardvark/. -2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions Anchors are special characters that anchor regular expressions to particular places in a string. The most common anchors are the caretˆand the dollar sign $. The caret matches the start of a line. The pattern /ˆThe/ matches the word The only at the start of a line. Thus, the caretˆhas three uses: to match the start of a line, to indicate a negation inside of square brackets, and just to mean a caret. (What are the contexts that allow grep or Python to know which function a given caret is supposed to have?) The dollar sign $ matches the end of a line. So the pattern $ is a useful pattern for matching a space at the end of a line, and /ˆThe dog\.$/ matches a line that contains only the phrase The dog. (We have to use the backslash here since we want the . to mean "period" and not the wildcard.) -2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions There are also two other anchors: \b matches a word boundary, and \B matches a non-boundary. Thus, /\bthe\b/ matches the word the but not the word other. More technically, a "word" for the purposes of a regular expression is defined as any sequence of digits, underscores, or letters; this is based on the definition of "words" in programming languages. For example, /\b99\b/ will match the string 99 in There are 99 bottles of beer on the wall (because 99 follows a space) but not 99 in There are 299 bottles of beer on the wall (since 99 follows a number). But it will match 99 in $99 (since 99 follows a dollar sign ($), which is not a digit, underscore, or letter). -2 Regular Expressions 2.1 Regular Expressions 2.1.2 Disjunction, Grouping and Precendence Suppose we need to search for texts about pets; perhaps we are particularly interested in cats and dogs. In such a case, we might want to search for either the string cat or the string dog. Since we can't use the square brackets to search for "cat or dog" (why can't we say /[catdog]/?), we need a new operator, the disjunction operator, also disjunction called the pipe symbol |. 
The pattern /cat|dog/ matches either the string cat or the string dog. +2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions "The wildcard is often used together with the Kleene star to mean ""any string of characters"". For example, suppose we want to find any line in which a particular word, for example, aardvark, appears twice. We can specify this with the regular expression /aardvark.*aardvark/." +2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions "Anchors are special characters that anchor regular expressions to particular places in a string. The most common anchors are the caretˆand the dollar sign $. The caret matches the start of a line. The pattern /ˆThe/ matches the word The only at the start of a line. Thus, the caretˆhas three uses: to match the start of a line, to indicate a negation inside of square brackets, and just to mean a caret. (What are the contexts that allow grep or Python to know which function a given caret is supposed to have?) The dollar sign $ matches the end of a line. So the pattern $ is a useful pattern for matching a space at the end of a line, and /ˆThe dog\.$/ matches a line that contains only the phrase The dog. (We have to use the backslash here since we want the . to mean ""period"" and not the wildcard.)" +2 Regular Expressions 2.1 Regular Expressions 2.1.1 Basic Regular Expressions "There are also two other anchors: \b matches a word boundary, and \B matches a non-boundary. Thus, /\bthe\b/ matches the word the but not the word other. More technically, a ""word"" for the purposes of a regular expression is defined as any sequence of digits, underscores, or letters; this is based on the definition of ""words"" in programming languages. For example, /\b99\b/ will match the string 99 in There are 99 bottles of beer on the wall (because 99 follows a space) but not 99 in There are 299 bottles of beer on the wall (since 99 follows a number). But it will match 99 in $99 (since 99 follows a dollar sign ($), which is not a digit, underscore, or letter)." +2 Regular Expressions 2.1 Regular Expressions 2.1.2 Disjunction, Grouping and Precendence "Suppose we need to search for texts about pets; perhaps we are particularly interested in cats and dogs. In such a case, we might want to search for either the string cat or the string dog. Since we can't use the square brackets to search for ""cat or dog"" (why can't we say /[catdog]/?), we need a new operator, the disjunction operator, also disjunction called the pipe symbol |. The pattern /cat|dog/ matches either the string cat or the string dog." 2 Regular Expressions 2.1 Regular Expressions 2.1.2 Disjunction, Grouping and Precendence Sometimes we need to use this disjunction operator in the midst of a larger sequence. For example, suppose I want to search for information about pet fish for my cousin David. How can I specify both guppy and guppies? We cannot simply say /guppy|ies/, because that would match only the strings guppy and ies. This is because sequences like guppy take precedence over the disjunction operator |. 2 Regular Expressions 2.1 Regular Expressions 2.1.2 Disjunction, Grouping and Precendence precedence To make the disjunction operator apply only to a specific pattern, we need to use the parenthesis operators ( and ). Enclosing a pattern in parentheses makes it act like a single character for the purposes of neighboring operators like the pipe | and the Kleene*. 
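As an illustrative sketch, Python's re module uses essentially this notation (without the enclosing slashes), so the disjunction and grouping operators, along with the earlier Kleene and anchor operators, can be tried out directly; the test strings below are invented purely for illustration:

import re

# Disjunction: cat|dog matches either string
print(re.search(r'cat|dog', 'the dog barked').group())        # 'dog'

# Sequences take precedence over |, so guppy|ies can match only 'guppy' or 'ies'
print(re.search(r'guppy|ies', 'guppies').group())              # 'ies'

# Parentheses restrict the disjunction to the suffix, so the whole word matches
print(re.search(r'gupp(y|ies)', 'guppies').group())            # 'guppies'

# Operators from earlier in the section work the same way
print(re.findall(r'[0-9]+', 'chapter 2, page 13'))             # ['2', '13']
print(re.findall(r'\bthe\b', 'the other theme'))               # ['the']
print(re.search(r'^The dog\.$', 'The dog.') is not None)       # True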
So the pattern /gupp(y|ies)/ would specify that we meant the disjunction only to apply to the suffixes y and ies. 2 Regular Expressions 2.1 Regular Expressions 2.1.2 Disjunction, Grouping and Precendence The parenthesis operator ( is also useful when we are using counters like the Kleene*. Unlike the | operator, the Kleene* operator applies by default only to a single character, not to a whole sequence. Suppose we want to match repeated instances of a string. Perhaps we have a line that has column labels of the form Column 1 Column 2 Column 3. The expression /Column [0-9]+ */ will not match any number of columns; instead, it will match a single column followed by any number of spaces! The star here applies only to the space that precedes it, not to the whole sequence. With the parentheses, we could write the expression /(Column [0-9]+ *)*/ to match the word Column, followed by a number and optional spaces, the whole pattern repeated zero or more times. @@ -35,7 +35,7 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.1 Regular Expressions 2.1.3 A Simple Example The process we just went through was based on fixing two kinds of errors: false positives, strings that we incorrectly matched like other or there, and false negafalse positives tives, strings that we incorrectly missed, like The. Addressing these two kinds of false negatives errors comes up again and again in implementing speech and language processing systems. Reducing the overall error rate for an application thus involves two antagonistic efforts: 2 Regular Expressions 2.1 Regular Expressions 2.1.3 A Simple Example • Increasing precision (minimizing false positives) • Increasing recall (minimizing false negatives) 2 Regular Expressions 2.1 Regular Expressions 2.1.3 A Simple Example We'll come back to precision and recall with more precise definitions in Chapter 4. -2 Regular Expressions 2.1 Regular Expressions 2.1.4 More Operators Figure 2.8 shows some aliases for common ranges, which can be used mainly to save typing. Besides the Kleene * and Kleene + we can also use explicit numbers as counters, by enclosing them in curly brackets. The regular expression /{3}/ means "exactly 3 occurrences of the previous character or expression". So /a\.{24}z/ will match a followed by 24 dots followed by z (but not a followed by 23 or 25 dots followed by a z). +2 Regular Expressions 2.1 Regular Expressions 2.1.4 More Operators "Figure 2.8 shows some aliases for common ranges, which can be used mainly to save typing. Besides the Kleene * and Kleene + we can also use explicit numbers as counters, by enclosing them in curly brackets. The regular expression /{3}/ means ""exactly 3 occurrences of the previous character or expression"". So /a\.{24}z/ will match a followed by 24 dots followed by z (but not a followed by 23 or 25 dots followed by a z)." 2 Regular Expressions 2.1 Regular Expressions 2.1.4 More Operators A range of numbers can also be specified. So /{n,m}/ specifies from n to m occurrences of the previous char or expression, and /{n,}/ means at least n occurrences of the previous expression. REs for counting are summarized in Fig. 2 .9. 2 Regular Expressions 2.1 Regular Expressions 2.1.4 More Operators Finally, certain special characters are referred to by special notation based on the backslash (\) (see Fig. 2 .10). The most common of these are the newline character newline \n and the tab character \t. 
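The counters and the backslash notation can be exercised the same way; a minimal Python sketch, again with made-up test strings (expected outputs shown in comments):

import re

# {3}: exactly three occurrences of the preceding expression
print(re.fullmatch(r'a{3}', 'aaa') is not None)      # True
print(re.fullmatch(r'a{3}', 'aaaa') is not None)     # False

# {n,m}: from n to m occurrences (matching is greedy)
print(re.findall(r'[0-9]{2,4}', 'year 2021, month 07, id 12345'))   # ['2021', '07', '1234']

# An escaped period \. is a literal period: a, then 24 dots, then z
print(re.search(r'a\.{24}z', 'a' + '.' * 24 + 'z') is not None)      # True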
To refer to characters that are special themselves (like ., *, [, and \) , precede them with a backslash, (i.e., /\./, /\*/, /\[/, and /\\/). 2 Regular Expressions 2.1 Regular Expressions 2.1.5 A More Complex Example Let’s try out a more significant example of the power of REs. Suppose we want to build an application to help a user buy a computer on the Web. The user might want “any machine with at least 6 GHz and 500 GB of disk space for less than $1000”. To do this kind of retrieval, we first need to be able to look for expressions like 6GHz or 500 GB or Mac or $999.99. In the rest of this section we’ll work out some simple regular expressions for this task. @@ -50,7 +50,7 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.1 Regular Expressions 2.1.5 A More Complex Example Modifying this regular expression so that it only matches more than 500 GB is left as an exercise for the reader. 2 Regular Expressions 2.1 Regular Expressions 2.1.6 Substitution, Capture Groups, and ELIZA An important use of regular expressions is in substitutions. For example, the substitution operator s/regexp1/pattern/ used in Python and in Unix commands like vim or sed allows a string characterized by a regular expression to be replaced by another string: s/colour/color/ 2 Regular Expressions 2.1 Regular Expressions 2.1.6 Substitution, Capture Groups, and ELIZA It is often useful to be able to refer to a particular subpart of the string matching the first pattern. For example, suppose we wanted to put angle brackets around all integers in a text, for example, changing the 35 boxes to the <35> boxes. We'd like a way to refer to the integer we've found so that we can easily add the brackets. To do this, we put parentheses ( and ) around the first pattern and use the number operator \1 in the second pattern to refer back. Here's how it looks: s/([0-9]+)/<\1>/ -2 Regular Expressions 2.1 Regular Expressions 2.1.6 Substitution, Capture Groups, and ELIZA The parenthesis and number operators can also specify that a certain string or expression must occur twice in the text. For example, suppose we are looking for the pattern "the Xer they were, the Xer they will be", where we want to constrain the two X's to be the same string. We do this by surrounding the first X with the parenthesis operator, and replacing the second X with the number operator \1, as follows: /the (.*)er they were, the \1er they will be/ +2 Regular Expressions 2.1 Regular Expressions 2.1.6 Substitution, Capture Groups, and ELIZA "The parenthesis and number operators can also specify that a certain string or expression must occur twice in the text. For example, suppose we are looking for the pattern ""the Xer they were, the Xer they will be"", where we want to constrain the two X's to be the same string. We do this by surrounding the first X with the parenthesis operator, and replacing the second X with the number operator \1, as follows: /the (.*)er they were, the \1er they will be/" 2 Regular Expressions 2.1 Regular Expressions 2.1.6 Substitution, Capture Groups, and ELIZA Here the \1 will be replaced by whatever string matched the first item in parentheses. So this will match the bigger they were, the bigger they will be but not the bigger they were, the faster they will be. 2 Regular Expressions 2.1 Regular Expressions 2.1.6 Substitution, Capture Groups, and ELIZA This use of parentheses to store a pattern in memory is called a capture group. 
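These substitution and back-reference operators correspond directly to Python's re.sub and re.search; a short sketch with invented inputs (only intended to mirror the patterns discussed here):

import re

# s/colour/color/ as a Python substitution
print(re.sub(r'colour', 'color', 'my favourite colour'))      # my favourite color

# Put angle brackets around every integer, using a capture group and \1
print(re.sub(r'([0-9]+)', r'<\1>', 'the 35 boxes'))            # the <35> boxes

# A back-reference inside the pattern constrains both X's to be the same string
pattern = r'the (.*)er they were, the \1er they will be'
print(re.search(pattern, 'the bigger they were, the bigger they will be') is not None)  # True
print(re.search(pattern, 'the bigger they were, the faster they will be') is not None)  # False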
Every time a capture group is used (i.e., parentheses surround a pattern), the resulting match is stored in a numbered register. If you match two different sets of register parentheses, \\2 means whatever matched the second capture group. Thus /the (.*)er they (.*), the \\1er we \\2/ will match the faster they ran, the faster we ran but not the faster they ran, the faster we ate. Similarly, the third capture group is stored in \\3, the fourth is \\4, and so on. 2 Regular Expressions 2.1 Regular Expressions 2.1.6 Substitution, Capture Groups, and ELIZA Parentheses thus have a double function in regular expressions; they are used to group terms for specifying the order in which operators should apply, and they are used to capture something in a register. Occasionally we might want to use parentheses for grouping, but don't want to capture the resulting pattern in a register. In that case we use a non-capturing group, which is specified by putting the commands non-capturing group ?: after the open paren, in the form (?: pattern). /(?:some|a few) (people|cats) like some \\1/ will match some cats like some cats but not some cats like some a few. @@ -65,7 +65,7 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.1 Regular Expressions 2.1.7 Lookahead Assertions These lookahead assertions make use of the (? syntax that we saw in the previous section for non-capture groups. The operator (?= pattern) is true if pattern occurs, but is zero-width, i.e. the match pointer doesn’t advance. The operator (?! pattern) only returns true if a pattern does not match, but again is zero-width and doesn’t advance the cursor. Negative lookahead is commonly used when we are parsing some complex pattern but want to rule out a special case. For example suppose we want to match, at the beginning of a line, any single word that doesn’t start with “Volcano”. We can use negative lookahead to do this: /ˆ(?!Volcano)[A-Za-z]+/ 2 Regular Expressions 2.2 Words Before we talk about processing words, we need to decide what counts as a word. Let’s start by looking at one particular corpus (plural corpora), a computer-readable collection of text or speech. For example the Brown corpus is a million-word collection of samples from 500 written English texts from different genres (newspaper, fiction, non-fiction, academic, etc.), assembled at Brown University in 1963-64 (Kučera and Francis, 1967) . How many words are in the following Brown sentence? 2 Regular Expressions 2.2 Words He stepped out into the hall, was delighted to encounter a water brother. -2 Regular Expressions 2.2 Words This sentence has 13 words if we don't count punctuation marks as words, 15 if we count punctuation. Whether we treat period ("."), comma (","), and so on as words depends on the task. Punctuation is critical for finding boundaries of things (commas, periods, colons) and for identifying some aspects of meaning (question marks, exclamation marks, quotation marks). For some tasks, like part-of-speech tagging or parsing or speech synthesis, we sometimes treat punctuation marks as if they were separate words. +2 Regular Expressions 2.2 Words "This sentence has 13 words if we don't count punctuation marks as words, 15 if we count punctuation. Whether we treat period ("".""), comma ("",""), and so on as words depends on the task. Punctuation is critical for finding boundaries of things (commas, periods, colons) and for identifying some aspects of meaning (question marks, exclamation marks, quotation marks). 
For some tasks, like part-of-speech tagging or parsing or speech synthesis, we sometimes treat punctuation marks as if they were separate words." 2 Regular Expressions 2.2 Words The Switchboard corpus of American English telephone conversations between strangers was collected in the early 1990s; it contains 2430 conversations averaging 6 minutes each, totaling 240 hours of speech and about 3 million words (Godfrey et al., 1992). Such corpora of spoken language don't have punctuation but do introduce other complications with regard to defining words. Let's look at one utterance from Switchboard; an utterance is the spoken correlate of a sentence: I do uh main- mainly business data processing This utterance has two kinds of disfluencies. The broken-off word main- is called a fragment. Words like uh and um are called fillers or filled pauses. Should we consider these to be words? Again, it depends on the application. If we are building a speech transcription system, we might want to eventually strip out the disfluencies. 2 Regular Expressions 2.2 Words But we also sometimes keep disfluencies around. Disfluencies like uh or um are actually helpful in speech recognition in predicting the upcoming word, because they may signal that the speaker is restarting the clause or idea, and so for speech recognition they are treated as regular words. Because people use different disfluencies they can also be a cue to speaker identification. In fact Clark and Fox Tree (2002) showed that uh and um have different meanings. What do you think they are? 2 Regular Expressions 2.2 Words Are capitalized tokens like They and uncapitalized tokens like they the same word? These are lumped together in some tasks (speech recognition), while for part-of-speech or named-entity tagging, capitalization is a useful feature and is retained. @@ -81,7 +81,7 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.3 Corpora Words don't appear out of nowhere. Any particular piece of text that we study is produced by one or more specific speakers or writers, in a specific dialect of a specific language, at a specific time, in a specific place, for a specific function. 2 Regular Expressions 2.3 Corpora Perhaps the most important dimension of variation is the language. NLP algorithms are most useful when they apply across many languages. The world has 7097 languages at the time of this writing, according to the online Ethnologue catalog (Simons and Fennig, 2018). It is important to test algorithms on more than one language, and particularly on languages with different properties; by contrast there is an unfortunate current tendency for NLP algorithms to be developed or tested just on English (Bender, 2019). Even when algorithms are developed beyond English, they tend to be developed for the official languages of large industrialized nations (Chinese, Spanish, Japanese, German etc.), but we don't want to limit tools to just these few languages. Furthermore, most languages also have multiple varieties, often spoken in different regions or by different social groups. Thus, for example, if we're processing text that uses features of African American English (AAE) or African American Vernacular English (AAVE), the variations of English used by millions of people in African American communities (King 2020), we must use NLP tools that function with features of those varieties. 
Twitter posts might use features often used by speakers of African American English, such as constructions like iont (I don't in Mainstream American English (MAE)), or talmbout corresponding MAE to MAE talking about, both examples that influence word segmentation (Blodgett et al. 2016 , Jones 2015 . 2 Regular Expressions 2.3 Corpora It's also quite common for speakers or writers to use multiple languages in a single communicative act, a phenomenon called code switching. Code switching (2.2) Por primera vez veo a @username actually being hateful! it was beautiful:) -2 Regular Expressions 2.3 Corpora [For the first time I get to see @username actually being hateful! it was beautiful:) ] (2.3) dost tha or ra-hega ... dont wory ... but dherya rakhe ["he was and will remain a friend ... don't worry ... +2 Regular Expressions 2.3 Corpora "[For the first time I get to see @username actually being hateful! it was beautiful:) ] (2.3) dost tha or ra-hega ... dont wory ... but dherya rakhe [""he was and will remain a friend ... don't worry ..." 2 Regular Expressions 2.3 Corpora Another dimension of variation is the genre. The text that our algorithms must process might come from newswire, fiction or non-fiction books, scientific articles, Wikipedia, or religious texts. It might come from spoken genres like telephone conversations, business meetings, police body-worn cameras, medical interviews, or transcripts of television shows or movies. It might come from work situations like doctors' notes, legal text, or parliamentary or congressional proceedings. 2 Regular Expressions 2.3 Corpora Text also reflects the demographic characteristics of the writer (or speaker): their age, gender, race, socioeconomic class can all influence the linguistic properties of the text we are processing. 2 Regular Expressions 2.3 Corpora And finally, time matters too. Language changes over time, and for some languages we have good corpora of texts from different historical periods. @@ -106,12 +106,12 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization A tokenizer can also be used to expand clitic contractions that are marked by apostrophes, for example, converting what’re to the two tokens what are, and we’re to we are. A clitic is a part of a word that can’t stand on its own, and can only occur when it is attached to another word. Some such contractions occur in other alphabetic languages, including articles and pronouns in French (j’ai, l’homme). 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization Depending on the application, tokenization algorithms may also tokenize multiword expressions like New York or rock ’n’ roll as a single token, which requires a multiword expression dictionary of some sort. Tokenization is thus intimately tied up with named entity recognition, the task of detecting names, dates, and organizations (Chapter 8). 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization One commonly used tokenization standard is known as the Penn Treebank tokenization standard, used for the parsed corpora (treebanks) released by the Linguistic Data Consortium (LDC), the source of many useful datasets. 
This standard separates out clitics (doesn’t becomes does plus n’t), keeps hyphenated words together, and separates out all punctuation (to save space we’re showing visible spaces ‘ ’ between tokens, although newlines is a more common output): -2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization Input: "The San Francisco-based restaurant," they said, "doesn't charge $10". Output: " The San Francisco-based restaurant , " they said , " does n't charge $ 10 " . +2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization "Input: ""The San Francisco-based restaurant,"" they said, ""doesn't charge $10"". Output: "" The San Francisco-based restaurant , "" they said , "" does n't charge $ 10 "" ." 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization In practice, since tokenization needs to be run before any other language processing, it needs to be very fast. The standard method for tokenization is therefore to use deterministic algorithms based on regular expressions compiled into very efficient finite state automata. For example, Fig. 2 .12 shows an example of a basic regular expression that can be used to tokenize with the nltk.regexp tokenize function of the Python-based Natural Language Toolkit (NLTK) (Bird et al. 2009; http://www.nltk.org). 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization >>> text = 'That U.S.A. poster-print costs $12.40...' >>> pattern = r''' (?x) # set flag to allow verbose regexps . ['That', 'U.S.A.', 'costs', '$12.40', '...'] Figure 2 .12 A Python trace of regular expression tokenization in the NLTK Python-based natural language processing toolkit (Bird et al., 2009) , commented for readability; the (?x) verbose flag tells Python to strip comments and whitespace. 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization Carefully designed deterministic algorithms can deal with the ambiguities that arise, such as the fact that the apostrophe needs to be tokenized differently when used as a genitive marker (as in the book's cover), a quotative as in 'The other class', she said, or in clitics like they're. 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization Word tokenization is more complex in languages like written Chinese, Japanese, and Thai, which do not use spaces to mark potential word-boundaries. In Chinese, for example, words are composed of characters (called hanzi in Chinese). Each hanzi character generally represents a single unit of meaning (called a morpheme) and is pronounceable as a single syllable. Words are about 2.4 characters long on average. But deciding what counts as a word in Chinese is complex. For example, consider the following sentence: -2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization (2.4) 姚明进入总决赛 "Yao Ming reaches the finals" +2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization "(2.4) 姚明进入总决赛 ""Yao Ming reaches the finals""" 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization As Chen et al. 
(2017b) point out, this could be treated as 3 words ('Chinese Treebank' segmentation): 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization (2.5) 姚明 YaoMing 进入 reaches 总决赛 finals or as 5 words ('Peking University' segmentation): 2 Regular Expressions 2.4 Text Normalization 2.4.2 Word Tokenization (2.6) 姚 Yao 明 Ming 进入 reaches 总 overall 决赛 finals Finally, it is possible in Chinese simply to ignore words altogether and use characters as the basic elements, treating the sentence as a series of 7 characters: @@ -133,7 +133,7 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming Word normalization is the task of putting words/tokens in a standard format, choosing a single normal form for words with multiple forms like USA and US or uh-huh and uhhuh. This standardization may be valuable, despite the spelling information that is lost in the normalization process. For information retrieval or information extraction about the US, we might want to see information from documents whether they mention the US or the USA. 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming Case folding is another kind of normalization. Mapping everything to lowercase means that Woodchuck and woodchuck are represented identically, which is very helpful for generalization in many tasks, such as information retrieval or speech recognition. For sentiment analysis and other text classification tasks, information extraction, and machine translation, by contrast, case can be quite helpful and case folding is generally not done. This is because maintaining the difference between, for example, US the country and us the pronoun can outweigh the advantage in generalization that case folding would have provided for other words. For many natural language processing situations we also want two morphologically different forms of a word to behave similarly. For example in web search, someone may type the string woodchucks but a useful system might want to also return pages that mention woodchuck with no s. This is especially common in morphologically complex languages like Russian, where for example the word Moscow has different endings in the phrases Moscow, of Moscow, to Moscow, and so on. 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming Lemmatization is the task of determining that two words have the same root, despite their surface differences. The words am, are, and is have the shared lemma be; the words dinner and dinners both have the lemma dinner. Lemmatizing each of these forms to the same lemma will let us find all mentions of words in Russian like Moscow. The lemmatized form of a sentence like He is reading detective stories would thus be He be read detective story. -2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming How is lemmatization done? The most sophisticated methods for lemmatization involve complete morphological parsing of the word. Morphology is the study of the way words are built up from smaller meaning-bearing units called morphemes. Two broad classes of morphemes can be distinguished: stems-the central morpheme of the word, supplying the main meaning-and affixes-adding "additional" meanings of various kinds. 
So, for example, the word fox consists of one morpheme (the morpheme fox) and the word cats consists of two: the morpheme cat and the morpheme -s. A morphological parser takes a word like cats and parses it into the two morphemes cat and s, or parses a Spanish word like amaren ('if in the future they would love') into the morpheme amar 'to love', and the morphological features 3PL and future subjunctive." 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming The Porter Stemmer 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming Lemmatization algorithms can be complex. For this reason we sometimes make use of a simpler but cruder method, which mainly consists of chopping off word-final affixes. This naive version of morphological analysis is called stemming. One of the most widely used stemming algorithms is the Porter (1980) stemmer. The Porter stemmer applied to the following paragraph: 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming This was not the map we found in Billy Bones's chest, but an accurate copy, complete in all things-names and heights and soundings-with the single exception of the red crosses and the written notes. @@ -141,7 +141,7 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming Thi wa not the map we found in Billi Bone s chest but an accur copi complet in all thing name and height and sound with the singl except of the red cross and the written note 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming The algorithm is based on a series of rewrite rules run in series, as a cascade, in which the output of each pass is fed as input to the next pass; here is a sampling of the rules: ATIONAL → ATE (e.g., relational → relate) ING → ε if stem contains vowel (e.g., motoring → motor) SSES → SS (e.g., grasses → grass) 2 Regular Expressions 2.4 Text Normalization 2.4.4 Word Normalization, Lemmatization and Stemming Detailed rule lists for the Porter stemmer, as well as code (in Java, Python, etc.) can be found on Martin Porter's homepage; see also the original paper (Porter, 1980). Simple stemmers can be useful in cases where we need to collapse across different variants of the same lemma. 
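For readers who want to try this out, NLTK ships an implementation of the Porter stemmer; a minimal sketch, assuming the nltk package is installed (exact stems can vary slightly across NLTK versions):

from nltk.stem import PorterStemmer   # assumes the nltk package is installed

stemmer = PorterStemmer()
words = ['relational', 'motoring', 'grasses', 'accurate', 'copy', 'soundings']
print([stemmer.stem(w) for w in words])
# expected roughly: ['relat', 'motor', 'grass', 'accur', 'copi', 'sound']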
Nonetheless, they do tend to commit errors of both over-and under-generalizing, as shown in the table below (Krovetz, 1993 -2 Regular Expressions 2.4 Text Normalization 2.4.5 Sentence Segmentation Sentence segmentation is another important step in text processing. The most useful cues for segmenting a text into sentences are punctuation, like periods, question marks, and exclamation points. Question marks and exclamation points are relatively unambiguous markers of sentence boundaries. Periods, on the other hand, are more ambiguous. The period character "." is ambiguous between a sentence boundary marker and a marker of abbreviations like Mr. or Inc. The previous sentence that you just read showed an even more complex case of this ambiguity, in which the final period of Inc. marked both an abbreviation and the sentence boundary marker. For this reason, sentence tokenization and word tokenization may be addressed jointly. +2 Regular Expressions 2.4 Text Normalization 2.4.5 Sentence Segmentation "Sentence segmentation is another important step in text processing. The most useful cues for segmenting a text into sentences are punctuation, like periods, question marks, and exclamation points. Question marks and exclamation points are relatively unambiguous markers of sentence boundaries. Periods, on the other hand, are more ambiguous. The period character ""."" is ambiguous between a sentence boundary marker and a marker of abbreviations like Mr. or Inc. The previous sentence that you just read showed an even more complex case of this ambiguity, in which the final period of Inc. marked both an abbreviation and the sentence boundary marker. For this reason, sentence tokenization and word tokenization may be addressed jointly." 2 Regular Expressions 2.4 Text Normalization 2.4.5 Sentence Segmentation In general, sentence tokenization methods work by first deciding (based on rules or machine learning) whether a period is part of the word or is a sentence-boundary marker. An abbreviation dictionary can help determine whether the period is part of a commonly used abbreviation; the dictionaries can be hand-built or machinelearned (Kiss and Strunk, 2006) , as can the final sentence splitter. In the Stanford CoreNLP toolkit (Manning et al., 2014) , for example sentence splitting is rule-based, a deterministic consequence of tokenization; a sentence ends when a sentence-ending punctuation (., !, or ?) is not already grouped with other characters into a token (such as for an abbreviation or number), optionally followed by additional final quotes or brackets. 2 Regular Expressions 2.5 Minimum Edit Distance Much of natural language processing is concerned with measuring how similar two strings are. For example in spelling correction, the user typed some erroneous string—let’s say graffe–and we want to know what the user meant. The user probably intended a word that is similar to graffe. Among candidate similar words, the word giraffe, which differs by only one letter from graffe, seems intuitively to be more similar than, say grail or graf, which differ in more letters. 
Another example comes from coreference, the task of deciding whether two strings such as the following refer to the same entity: 2 Regular Expressions 2.5 Minimum Edit Distance Stanford President Marc Tessier-Lavigne @@ -167,18 +167,18 @@ n_chapter chapter n_section section n_subsection subsection text 2 Regular Expressions 2.5 Minimum Edit Distance 2.5.1 The Minimum Edit Distance Algorithm To extend the edit distance algorithm to produce an alignment, we can start by visualizing an alignment as a path through the edit distance matrix. Figure 2 .19 shows this path with the boldfaced cell. Each boldfaced cell represents an alignment of a pair of letters in the two strings. If two boldfaced cells occur in the same row, there will be an insertion in going from the source to the target; two boldfaced cells in the same column indicate a deletion. 2 Regular Expressions 2.5 Minimum Edit Distance 2.5.1 The Minimum Edit Distance Algorithm Figure 2 .19 also shows the intuition of how to compute this alignment path. The computation proceeds in two steps. In the first step, we augment the minimum edit distance algorithm to store backpointers in each cell. The backpointer from a cell points to the previous cell (or cells) that we came from in entering the current cell. We've shown a schematic of these backpointers in Fig. 2.19 . Some cells have multiple backpointers because the minimum extension could have come from multiple previous cells. In the second step, we perform a backtrace. In a backtrace, we start from the last cell (at the final row and column), and follow the pointers back through the dynamic programming matrix. Each complete path between the final cell and the initial cell is a minimum distance alignment. Exercise 2.7 asks you to modify the minimum edit distance algorithm to store the pointers and compute the backtrace to output an alignment. 2 Regular Expressions 2.5 Minimum Edit Distance 2.5.1 The Minimum Edit Distance Algorithm Figure 2 .19 When entering a value in each cell, we mark which of the three neighboring cells we came from with up to three arrows. After the table is full we compute an alignment (minimum edit path) by using a backtrace, starting at the 8 in the lower-right corner and following the arrows back. The sequence of bold cells represents one possible minimum cost alignment between the two strings. Diagram design after Gusfield (1997). -2 Regular Expressions 2.5 Minimum Edit Distance 2.5.1 The Minimum Edit Distance Algorithm While we worked our example with simple Levenshtein distance, the algorithm in Fig. 2 .17 allows arbitrary weights on the operations. For spelling correction, for example, substitutions are more likely to happen between letters that are next to each other on the keyboard. The Viterbi algorithm is a probabilistic extension of minimum edit distance. Instead of computing the "minimum edit distance" between two strings, Viterbi computes the "maximum probability alignment" of one string with another. We'll discuss this more in Chapter 8. +2 Regular Expressions 2.5 Minimum Edit Distance 2.5.1 The Minimum Edit Distance Algorithm "While we worked our example with simple Levenshtein distance, the algorithm in Fig. 2 .17 allows arbitrary weights on the operations. For spelling correction, for example, substitutions are more likely to happen between letters that are next to each other on the keyboard. The Viterbi algorithm is a probabilistic extension of minimum edit distance. 
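Before turning to that probabilistic extension, here is a compact sketch of the dynamic-programming computation just described, written from the description above rather than taken from any particular library, with insertions and deletions costing 1 and substitutions costing 2 as in the example in this section:

def min_edit_distance(source, target, ins_cost=1, del_cost=1, sub_cost=2):
    n, m = len(source), len(target)
    # D[i][j] holds the distance between source[:i] and target[:j]
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            same = source[i - 1] == target[j - 1]
            D[i][j] = min(D[i - 1][j] + del_cost,
                          D[i][j - 1] + ins_cost,
                          D[i - 1][j - 1] + (0 if same else sub_cost))
    return D[n][m]

print(min_edit_distance('intention', 'execution'))   # 8, matching the worked example
print(min_edit_distance('graffe', 'giraffe'))         # 1, a single insertion

Storing backpointers alongside D and following them from D[n][m] back to D[0][0], as described above, recovers the alignment as well as the distance.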
Instead of computing the ""minimum edit distance"" between two strings, Viterbi computes the ""maximum probability alignment"" of one string with another. We'll discuss this more in Chapter 8." 2 Regular Expressions 2.6 Summary This chapter introduced a fundamental tool in language processing, the regular expression, and showed how to perform basic text normalization tasks including word segmentation and normalization, sentence segmentation, and stemming. We also introduced the important minimum edit distance algorithm for comparing strings. Here's a summary of the main points we covered about these ideas: 2 Regular Expressions 2.6 Summary • The regular expression language is a powerful tool for pattern-matching. 2 Regular Expressions 2.6 Summary • Basic operations in regular expressions include concatenation of symbols, disjunction of symbols ([], |, and .), counters (*, +, and {n,m}), anchors (ˆ, $) and precedence operators ((,) ). 2 Regular Expressions 2.6 Summary • Word tokenization and normalization are generally done by cascades of simple regular expression substitutions or finite automata. 2 Regular Expressions 2.6 Summary • The Porter algorithm is a simple and efficient way to do stemming, stripping off affixes. It does not have high accuracy but may be useful for some tasks. 2 Regular Expressions 2.6 Summary • The minimum edit distance between two strings is the minimum number of operations it takes to edit one into the other. Minimum edit distance can be computed by dynamic programming, which also results in an alignment of the two strings. -2 Regular Expressions 2.7 Bibliographical and Historical Notes Kleene 1951; 1956 first defined regular expressions and the finite automaton, based on the McCulloch-Pitts neuron. Ken Thompson was one of the first to build regular expressions compilers into editors for text searching (Thompson, 1968) . His editor ed included a command "g/regular expression/p", or Global Regular Expression Print, which later became the Unix grep utility. Text normalization algorithms have been applied since the beginning of the field. One of the earliest widely used stemmers was Lovins (1968) . Stemming was also applied early to the digital humanities, by Packard (1973) , who built an affix-stripping morphological parser for Ancient Greek. Currently a wide variety of code for tokenization and normalization is available, such as the Stanford Tokenizer (http://nlp.stanford.edu/software/tokenizer.shtml) or specialized tokenizers for Twitter (O'Connor et al., 2010) , or for sentiment (http: //sentiment.christopherpotts.net/tokenizing.html). See Palmer (2012) for a survey of text preprocessing. NLTK is an essential tool that offers both useful Python libraries (http://www.nltk.org) and textbook descriptions (Bird et al., 2009 ) of many algorithms including text normalization and corpus interfaces. +2 Regular Expressions 2.7 Bibliographical and Historical Notes "Kleene 1951; 1956 first defined regular expressions and the finite automaton, based on the McCulloch-Pitts neuron. Ken Thompson was one of the first to build regular expressions compilers into editors for text searching (Thompson, 1968) . His editor ed included a command ""g/regular expression/p"", or Global Regular Expression Print, which later became the Unix grep utility. Text normalization algorithms have been applied since the beginning of the field. One of the earliest widely used stemmers was Lovins (1968) . 
Stemming was also applied early to the digital humanities, by Packard (1973) , who built an affix-stripping morphological parser for Ancient Greek. Currently a wide variety of code for tokenization and normalization is available, such as the Stanford Tokenizer (http://nlp.stanford.edu/software/tokenizer.shtml) or specialized tokenizers for Twitter (O'Connor et al., 2010) , or for sentiment (http: //sentiment.christopherpotts.net/tokenizing.html). See Palmer (2012) for a survey of text preprocessing. NLTK is an essential tool that offers both useful Python libraries (http://www.nltk.org) and textbook descriptions (Bird et al., 2009 ) of many algorithms including text normalization and corpus interfaces." 2 Regular Expressions 2.7 Bibliographical and Historical Notes For more on Herdan's law and Heaps' Law, see Herdan (1960 , p. 28), Heaps (1978 , Egghe (2007) and Baayen (2001) ; Yasseri et al. (2012) discuss the relationship with other measures of linguistic complexity. For more on edit distance, see the excellent Gusfield (1997) . Our example measuring the edit distance from 'intention' to 'execution' was adapted from Kruskal (1983) . There are various publicly available packages to compute edit distance, including Unix diff and the NIST sclite program (NIST, 2005) . 2 Regular Expressions 2.7 Bibliographical and Historical Notes In his autobiography Bellman (1984) explains how he originally came up with the term dynamic programming: -2 Regular Expressions 2.7 Bibliographical and Historical Notes "...The 1950s were not good years for mathematical research. [the] Secretary of Defense ...had a pathological fear and hatred of the word, research... I decided therefore to use the word, "programming". I wanted to get across the idea that this was dynamic, this was multistage... I thought, let's ... take a word that has an absolutely precise meaning, namely dynamic... it's impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to." -3 N-gram Language Models "You are uniformly charming!" cried he, with a smile of associating and now and then I bowed and they perceived a chaise and four to wish for. Random sentence generated from a Jane Austen trigram model +2 Regular Expressions 2.7 Bibliographical and Historical Notes """...The 1950s were not good years for mathematical research. [the] Secretary of Defense ...had a pathological fear and hatred of the word, research... I decided therefore to use the word, ""programming"". I wanted to get across the idea that this was dynamic, this was multistage... I thought, let's ... take a word that has an absolutely precise meaning, namely dynamic... it's impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to.""" +3 N-gram Language Models """You are uniformly charming!"" cried he, with a smile of associating and now and then I bowed and they perceived a chaise and four to wish for. Random sentence generated from a Jane Austen trigram model" 3 N-gram Language Models Predicting is difficult-especially about the future, as the old quip goes. But how about predicting something that seems much easier, like the next few words someone is going to say? 
What word, for example, is likely to follow 3 N-gram Language Models Please turn your homework ... 3 N-gram Language Models Hopefully, most of you concluded that a very likely word is in, or possibly over, but probably not refrigerator or the. In the following sections we will formalize this intuition by introducing models that assign a probability to each possible next word. The same models will also serve to assign a probability to an entire sentence. Such a model, for example, could predict that the following sequence has a much higher probability of appearing in a text: @@ -194,14 +194,14 @@ n_chapter chapter n_section section n_subsection subsection text 3 N-gram Language Models he briefed reporters on the main contents of the statement 3 N-gram Language Models A probabilistic model of word sequences could suggest that briefed reporters on is a more probable English phrase than briefed to reporters (which has an awkward to after briefed) or introduced reporters to (which uses a verb that is less fluent English in this context), allowing us to correctly select the boldfaced sentence above. 3 N-gram Language Models Probabilities are also important for augmentative and alternative communication systems (Trnka et al. 2007 , Kane et al. 2017 . People often use such AAC devices if they are physically unable to speak or sign but can instead use eye gaze or other specific movements to select words from a menu to be spoken by the system. Word prediction can be used to suggest likely words for the menu. -3 N-gram Language Models Models that assign probabilities to sequences of words are called language models or LMs. In this chapter we introduce the simplest model that assigns probabilities to sentences and sequences of words, the n-gram. An n-gram is a sequence of n words: a 2-gram (which we'll call bigram) is a two-word sequence of words like "please turn", "turn your", or "your homework", and a 3-gram (a trigram) is a three-word sequence of words like "please turn your", or "turn your homework". We'll see how to use n-gram models to estimate the probability of the last word of an n-gram given the previous words, and also to assign probabilities to entire sequences. In a bit of terminological ambiguity, we usually drop the word "model", and use the term n-gram (and bigram, etc.) to mean either the word sequence itself or the predictive model that assigns it a probability. While n-gram models are much simpler than state-of-the art neural language models based on the RNNs and transformers we will introduce in Chapter 9, they are an important foundational tool for understanding the fundamental concepts of language modeling. -3 N-gram Language Models 3.1 N-Grams Let's begin with the task of computing P(w|h), the probability of a word w given some history h. Suppose the history h is "its water is so transparent that" and we want to know the probability that the next word is the: +3 N-gram Language Models "Models that assign probabilities to sequences of words are called language models or LMs. In this chapter we introduce the simplest model that assigns probabilities to sentences and sequences of words, the n-gram. An n-gram is a sequence of n words: a 2-gram (which we'll call bigram) is a two-word sequence of words like ""please turn"", ""turn your"", or ""your homework"", and a 3-gram (a trigram) is a three-word sequence of words like ""please turn your"", or ""turn your homework"". 
We'll see how to use n-gram models to estimate the probability of the last word of an n-gram given the previous words, and also to assign probabilities to entire sequences. In a bit of terminological ambiguity, we usually drop the word ""model"", and use the term n-gram (and bigram, etc.) to mean either the word sequence itself or the predictive model that assigns it a probability. While n-gram models are much simpler than state-of-the art neural language models based on the RNNs and transformers we will introduce in Chapter 9, they are an important foundational tool for understanding the fundamental concepts of language modeling." +3 N-gram Language Models 3.1 N-Grams "Let's begin with the task of computing P(w|h), the probability of a word w given some history h. Suppose the history h is ""its water is so transparent that"" and we want to know the probability that the next word is the:" 3 N-gram Language Models 3.1 N-Grams P(the|its water is so transparent that). (3.1) -3 N-gram Language Models 3.1 N-Grams One way to estimate this probability is from relative frequency counts: take a very large corpus, count the number of times we see its water is so transparent that, and count the number of times this is followed by the. This would be answering the question "Out of the times we saw the history h, how many times was it followed by the word w", as follows: +3 N-gram Language Models 3.1 N-Grams "One way to estimate this probability is from relative frequency counts: take a very large corpus, count the number of times we see its water is so transparent that, and count the number of times this is followed by the. This would be answering the question ""Out of the times we saw the history h, how many times was it followed by the word w"", as follows:" 3 N-gram Language Models 3.1 N-Grams P(the|its water is so transparent that) = C(its water is so transparent that the) C(its water is so transparent that) (3.2) 3 N-gram Language Models 3.1 N-Grams With a large enough corpus, such as the web, we can compute these counts and estimate the probability from Eq. 3.2. You should pause now, go to the web, and compute this estimate for yourself. -3 N-gram Language Models 3.1 N-Grams While this method of estimating probabilities directly from counts works fine in many cases, it turns out that even the web isn't big enough to give us good estimates in most cases. This is because language is creative; new sentences are created all the time, and we won't always be able to count entire sentences. Even simple extensions of the example sentence may have counts of zero on the web (such as "Walden Pond's water is so transparent that the"; well, used to have counts of zero). -3 N-gram Language Models 3.1 N-Grams Similarly, if we wanted to know the joint probability of an entire sequence of words like its water is so transparent, we could do it by asking "out of all possible sequences of five words, how many of them are its water is so transparent?" We would have to get the count of its water is so transparent and divide by the sum of the counts of all possible five word sequences. That seems rather a lot to estimate! For this reason, we'll need to introduce more clever ways of estimating the probability of a word w given a history h, or the probability of an entire word sequence W . Let's start with a little formalizing of notation. To represent the probability of a particular random variable X i taking on the value "the", or P(X i = "the"), we will use the simplification P(the). 
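As a concrete illustration of the relative frequency estimate in Eq. 3.2, the following sketch counts how often a history is followed by a given word in a tokenized corpus. It is only a toy: the corpus and the whitespace tokenization are stand-ins, not the web-scale counts the text has in mind.

    def relative_frequency(corpus_tokens, history, word):
        """Estimate P(word | history) by counting, as in Eq. 3.2 (toy sketch)."""
        h = len(history)
        history_count = 0
        joint_count = 0
        for i in range(len(corpus_tokens) - h):
            if corpus_tokens[i:i + h] == history:
                history_count += 1
                if i + h < len(corpus_tokens) and corpus_tokens[i + h] == word:
                    joint_count += 1
        return joint_count / history_count if history_count > 0 else 0.0

    toy = "its water is so transparent that the fish can be seen".split()
    print(relative_frequency(toy, "its water is so transparent that".split(), "the"))  # 1.0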
We'll represent a sequence of N words either as w 1 . . . w n or w 1:n (so the expression w 1:n−1 means the string w 1 , w 2 , ..., w n−1 ). For the joint probability of each word in a sequence having a particular value P(X = w 1 ,Y = w 2 , Z = w 3 , ...,W = w n ) we'll use P(w 1 , w 2 , ..., w n ). +3 N-gram Language Models 3.1 N-Grams "While this method of estimating probabilities directly from counts works fine in many cases, it turns out that even the web isn't big enough to give us good estimates in most cases. This is because language is creative; new sentences are created all the time, and we won't always be able to count entire sentences. Even simple extensions of the example sentence may have counts of zero on the web (such as ""Walden Pond's water is so transparent that the""; well, used to have counts of zero)." +3 N-gram Language Models 3.1 N-Grams "Similarly, if we wanted to know the joint probability of an entire sequence of words like its water is so transparent, we could do it by asking ""out of all possible sequences of five words, how many of them are its water is so transparent?"" We would have to get the count of its water is so transparent and divide by the sum of the counts of all possible five word sequences. That seems rather a lot to estimate! For this reason, we'll need to introduce more clever ways of estimating the probability of a word w given a history h, or the probability of an entire word sequence W . Let's start with a little formalizing of notation. To represent the probability of a particular random variable X i taking on the value ""the"", or P(X i = ""the""), we will use the simplification P(the). We'll represent a sequence of N words either as w 1 . . . w n or w 1:n (so the expression w 1:n−1 means the string w 1 , w 2 , ..., w n−1 ). For the joint probability of each word in a sequence having a particular value P(X = w 1 ,Y = w 2 , Z = w 3 , ...,W = w n ) we'll use P(w 1 , w 2 , ..., w n )." 3 N-gram Language Models 3.1 N-Grams Now how can we compute probabilities of entire sequences like P(w 1 , w 2 , ..., w n )? One thing we can do is decompose this probability using the chain rule of probability: 3 N-gram Language Models 3.1 N-Grams P(X 1 ...X n ) = P(X 1 )P(X 2 |X 1 )P(X 3 |X 1:2 ) . . . P(X n |X 1:n−1 ) = n k=1 P(X k |X 1:k−1 ) (3.3) 3 N-gram Language Models 3.1 N-Grams Applying the chain rule to words, we get P(w 1:n ) = P(w 1 )P(w 2 |w 1 )P(w 3 |w 1:2 ) . . . P(w n |w 1: n−1 ) = n k=1 P(w k |w 1:k−1 ) (3.4) @@ -235,9 +235,9 @@ n_chapter chapter n_section section n_subsection subsection text 3 N-gram Language Models 3.2 Evaluating Language Models Unfortunately, running big NLP systems end-to-end is often very expensive. Instead, it would be nice to have a metric that can be used to quickly evaluate potential improvements in a language model. An intrinsic evaluation metric is one that mea-intrinsic evaluation sures the quality of a model independent of any application. 3 N-gram Language Models 3.2 Evaluating Language Models For an intrinsic evaluation of a language model we need a test set. As with many of the statistical models in our field, the probabilities of an n-gram model come from the corpus it is trained on, the training set or training corpus. We can then measure training set the quality of an n-gram model by its performance on some unseen data called the test set or test corpus. 
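The chain rule above decomposes a sequence probability into per-word conditionals. Here is a hedged sketch of the usual shortcut: estimate bigram conditionals by counting and normalizing (the maximum likelihood estimate mentioned in the summary later in this chapter), then score a sentence under a first-order Markov (bigram) approximation in log space. The <s> and </s> boundary symbols are an assumed convention of this sketch.

    import math
    from collections import Counter

    def train_bigram_mle(sentences):
        """Count-and-normalize bigram estimates: P(w_k | w_{k-1}) = C(w_{k-1} w_k) / C(w_{k-1})."""
        unigrams, bigrams = Counter(), Counter()
        for sent in sentences:
            tokens = ["<s>"] + sent + ["</s>"]
            unigrams.update(tokens[:-1])                  # contexts only
            bigrams.update(zip(tokens[:-1], tokens[1:]))
        return unigrams, bigrams

    def bigram_logprob(sentence, unigrams, bigrams):
        """log2 P(sentence) via the chain rule with a bigram approximation.
        Returns -inf if any bigram is unseen (no smoothing in this sketch)."""
        tokens = ["<s>"] + sentence + ["</s>"]
        logp = 0.0
        for prev, w in zip(tokens[:-1], tokens[1:]):
            if bigrams[(prev, w)] == 0:
                return float("-inf")
            logp += math.log2(bigrams[(prev, w)] / unigrams[prev])
        return logp

    uni, bi = train_bigram_mle([["its", "water", "is", "so", "transparent"]])
    print(bigram_logprob(["its", "water", "is", "so", "transparent"], uni, bi))  # 0.0: every bigram is certain here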
We will also sometimes call test sets and other datasets that test set are not in our training sets held out corpora because we hold them out from the held out training data. 3 N-gram Language Models 3.2 Evaluating Language Models So if we are given a corpus of text and want to compare two different n-gram models, we divide the data into training and test sets, train the parameters of both models on the training set, and then compare how well the two trained models fit the test set. -3 N-gram Language Models 3.2 Evaluating Language Models But what does it mean to "fit the test set"? The answer is simple: whichever model assigns a higher probability to the test set-meaning it more accurately predicts the test set-is a better model. Given two probabilistic models, the better model is the one that has a tighter fit to the test data or that better predicts the details of the test data, and hence will assign a higher probability to the test data. -3 N-gram Language Models 3.2 Evaluating Language Models Since our evaluation metric is based on test set probability, it's important not to let the test sentences into the training set. Suppose we are trying to compute the probability of a particular "test" sentence. If our test sentence is part of the training corpus, we will mistakenly assign it an artificially high probability when it occurs in the test set. We call this situation training on the test set. Training on the test set introduces a bias that makes the probabilities all look too high, and causes huge inaccuracies in perplexity, the probability-based metric we introduce below. -3 N-gram Language Models 3.2 Evaluating Language Models Sometimes we use a particular test set so often that we implicitly tune to its characteristics. We then need a fresh test set that is truly unseen. In such cases, we call the initial test set the development test set or, devset. How do we divide our data into training, development, and test sets? We want our test set to be as large as possible, since a small test set may be accidentally unrepresentative, but we also want as much training data as possible. At the minimum, we would want to pick the smallest test set that gives us enough statistical power to measure a statistically significant difference between two potential models. In practice, we often just divide our data into 80% training, 10% development, and 10% test. Given a large corpus that we want to divide into training and test, test data can either be taken from some continuous sequence of text inside the corpus, or we can remove smaller "stripes" of text from randomly selected parts of our corpus and combine them into a test set. +3 N-gram Language Models 3.2 Evaluating Language Models "But what does it mean to ""fit the test set""? The answer is simple: whichever model assigns a higher probability to the test set-meaning it more accurately predicts the test set-is a better model. Given two probabilistic models, the better model is the one that has a tighter fit to the test data or that better predicts the details of the test data, and hence will assign a higher probability to the test data." +3 N-gram Language Models 3.2 Evaluating Language Models "Since our evaluation metric is based on test set probability, it's important not to let the test sentences into the training set. Suppose we are trying to compute the probability of a particular ""test"" sentence. 
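A minimal sketch of carving a corpus into training, development, and test sets as described here; shuffling sentences is one way to take random "stripes", and the proportions and seed below are illustrative assumptions rather than fixed recommendations.

    import random

    def split_corpus(sentences, train_frac=0.8, dev_frac=0.1, seed=0):
        """Shuffle sentences and split into training, development (held-out), and test sets."""
        sents = list(sentences)
        random.Random(seed).shuffle(sents)
        n = len(sents)
        n_train = int(train_frac * n)
        n_dev = int(dev_frac * n)
        return sents[:n_train], sents[n_train:n_train + n_dev], sents[n_train + n_dev:]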
If our test sentence is part of the training corpus, we will mistakenly assign it an artificially high probability when it occurs in the test set. We call this situation training on the test set. Training on the test set introduces a bias that makes the probabilities all look too high, and causes huge inaccuracies in perplexity, the probability-based metric we introduce below." +3 N-gram Language Models 3.2 Evaluating Language Models "Sometimes we use a particular test set so often that we implicitly tune to its characteristics. We then need a fresh test set that is truly unseen. In such cases, we call the initial test set the development test set or, devset. How do we divide our data into training, development, and test sets? We want our test set to be as large as possible, since a small test set may be accidentally unrepresentative, but we also want as much training data as possible. At the minimum, we would want to pick the smallest test set that gives us enough statistical power to measure a statistically significant difference between two potential models. In practice, we often just divide our data into 80% training, 10% development, and 10% test. Given a large corpus that we want to divide into training and test, test data can either be taken from some continuous sequence of text inside the corpus, or we can remove smaller ""stripes"" of text from randomly selected parts of our corpus and combine them into a test set." 3 N-gram Language Models 3.2 Evaluating Language Models 3.2.1 Perplexity In practice we don't use raw probability as our metric for evaluating language models, but a variant called perplexity. The perplexity (sometimes called PP for short) perplexity of a language model on a test set is the inverse probability of the test set, normalized by the number of words. For a test set W = w 1 w 2 . . . w N ,: 3 N-gram Language Models 3.2 Evaluating Language Models 3.2.1 Perplexity PP(W ) = P(w 1 w 2 . . . w N ) − 1 N (3.14) = N 1 P(w 1 w 2 . . . w N ) 3 N-gram Language Models 3.2 Evaluating Language Models 3.2.1 Perplexity We can use the chain rule to expand the probability of W : @@ -263,12 +263,12 @@ n_chapter chapter n_section section n_subsection subsection text 3 N-gram Language Models 3.4 Generalization and Zeros The longer the context on which we train the model, the more coherent the sentences. In the unigram sentences, there is no coherent relation between words or any sentence-final punctuation. The bigram sentences have some local word-to-word coherence (especially if we consider that punctuation counts as a word). The tri- .4 Eight sentences randomly generated from four n-grams computed from Shakespeare's works. All characters were mapped to lower-case and punctuation marks were treated as words. Output is hand-corrected for capitalization to improve readability. 3 N-gram Language Models 3.4 Generalization and Zeros gram and 4-gram sentences are beginning to look a lot like Shakespeare. Indeed, a careful investigation of the 4-gram sentences shows that they look a little too much like Shakespeare. The words It cannot be but so are directly from King John. This is because, not to put the knock on Shakespeare, his oeuvre is not very large as corpora go (N = 884, 647,V = 29, 066), and our n-gram probability matrices are ridiculously sparse. There are V 2 = 844, 000, 000 possible bigrams alone, and the number of possible 4-grams is V 4 = 7 × 10 17 . 
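Perplexity (Eq. 3.14) is the inverse probability of the test set normalized by the number of words, PP(W) = P(w_1 w_2 ... w_N)^(−1/N). A small sketch, computed in log space to avoid underflow; sentence_logprob_fn is an assumed interface standing in for whatever trained model is being evaluated, not a specific library call.

    import math

    def perplexity(sentence_logprob_fn, test_tokens):
        """PP(W) = P(w_1...w_N) ** (-1/N), with the probability supplied as log2 P."""
        N = len(test_tokens)
        log2_prob = sentence_logprob_fn(test_tokens)
        return 2 ** (-log2_prob / N)

    # Sanity check: a uniform model over a 10-word vocabulary has perplexity 10.
    uniform = lambda tokens: len(tokens) * math.log2(1 / 10)
    print(perplexity(uniform, "the cat sat on the mat".split()))  # ~10.0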
Thus, once the generator has chosen the first 4-gram (It cannot be but), there are only five possible continuations (that, I, he, thou, and so); indeed, for many 4-grams, there is only one continuation. 3 N-gram Language Models 3.4 Generalization and Zeros To get an idea of the dependence of a grammar on its training set, let's look at an n-gram grammar trained on a completely different corpus: the Wall Street Journal (WSJ) newspaper. Shakespeare and the Wall Street Journal are both English, so we might expect some overlap between our n-grams for the two genres. Fig. 3 .5 shows sentences generated by unigram, bigram, and trigram grammars trained on 40 million words from WSJ. .5 Three sentences randomly generated from three n-gram models computed from 40 million words of the Wall Street Journal, lower-casing all characters and treating punctuation as words. Output was then hand-corrected for capitalization to improve readability. -3 N-gram Language Models 3.4 Generalization and Zeros Compare these examples to the pseudo-Shakespeare in Fig. 3 .4. While they both model "English-like sentences", there is clearly no overlap in generated sentences, and little overlap even in small phrases. Statistical models are likely to be pretty useless as predictors if the training sets and the test sets are as different as Shakespeare and WSJ. +3 N-gram Language Models 3.4 Generalization and Zeros "Compare these examples to the pseudo-Shakespeare in Fig. 3 .4. While they both model ""English-like sentences"", there is clearly no overlap in generated sentences, and little overlap even in small phrases. Statistical models are likely to be pretty useless as predictors if the training sets and the test sets are as different as Shakespeare and WSJ." 3 N-gram Language Models 3.4 Generalization and Zeros How should we deal with this problem when we build n-gram models? One step is to be sure to use a training corpus that has a similar genre to whatever task we are trying to accomplish. To build a language model for translating legal documents, we need a training corpus of legal documents. To build a language model for a question-answering system, we need a training corpus of questions. 3 N-gram Language Models 3.4 Generalization and Zeros It is equally important to get training data in the appropriate dialect or variety, especially when processing social media posts or spoken transcripts. For example some tweets will use features of African American Language (AAL)-the name for the many variations of language used in African American communities (King, 2020) . Such features include words like finna-an auxiliary verb that marks immediate future tense -that don't occur in other varieties, or spellings like den for then, in tweets like this one (Blodgett and O'Connor, 2017): 3 N-gram Language Models 3.4 Generalization and Zeros (3.18) Bored af den my phone finna die!!! while tweets from varieties like Nigerian English have markedly different vocabulary and n-gram patterns from American English (Jurgens et al., 2017): 3 N-gram Language Models 3.4 Generalization and Zeros (3.19) @username R u a wizard or wat gan sef: in d mornin -u tweet, afternoon -u tweet, nyt gan u dey tweet. beta get ur IT placement wiv twitter -3 N-gram Language Models 3.4 Generalization and Zeros Matching genres and dialects is still not sufficient. Our models may still be subject to the problem of sparsity. For any n-gram that occurred a sufficient number of times, we might have a good estimate of its probability. 
But because any corpus is limited, some perfectly acceptable English word sequences are bound to be missing from it. That is, we'll have many cases of putative "zero probability n-grams" that should really have some non-zero probability. Consider the words that follow the bigram denied the in the WSJ Treebank3 corpus, together with their counts: denied the allegations: 5 denied the speculation: 2 denied the rumors: 1 denied the report: 1 +3 N-gram Language Models 3.4 Generalization and Zeros "Matching genres and dialects is still not sufficient. Our models may still be subject to the problem of sparsity. For any n-gram that occurred a sufficient number of times, we might have a good estimate of its probability. But because any corpus is limited, some perfectly acceptable English word sequences are bound to be missing from it. That is, we'll have many cases of putative ""zero probability n-grams"" that should really have some non-zero probability. Consider the words that follow the bigram denied the in the WSJ Treebank3 corpus, together with their counts: denied the allegations: 5 denied the speculation: 2 denied the rumors: 1 denied the report: 1" 3 N-gram Language Models 3.4 Generalization and Zeros But suppose our test set has phrases like: denied the offer denied the loan Our model will incorrectly estimate that the P(offer|denied the) is 0! These zerosthings that don't ever occur in the training set but do occur in zeros the test set-are a problem for two reasons. First, their presence means we are underestimating the probability of all sorts of words that might occur, which will hurt the performance of any application we want to run on this data. Second, if the probability of any word in the test set is 0, the entire probability of the test set is 0. By definition, perplexity is based on the inverse probability of the test set. Thus if some words have zero probability, we can't compute perplexity at all, since we can't divide by 0! 3 N-gram Language Models 3.4 Generalization and Zeros 3.4.1 Unknown Words The previous section discussed the problem of words whose bigram probability is zero. But what about words we simply have never seen before? 3 N-gram Language Models 3.4 Generalization and Zeros 3.4.1 Unknown Words Sometimes we have a language task in which this can't happen because we know all the words that can occur. In such a closed vocabulary system the test set can only contain words from this lexicon, and there will be no unknown words. This is a reasonable assumption in some domains, such as speech recognition or machine translation, where we have a pronunciation dictionary or a phrase table that are fixed in advance, and so the language model can only use the words in that dictionary or phrase table. @@ -298,7 +298,7 @@ n_chapter chapter n_section section n_subsection subsection text 3 N-gram Language Models 3.5 Smoothing 3.5.2 Add-k smoothing add-k P * Add-k (w n |w n−1 ) = C(w n−1 w n ) + k C(w n−1 ) + kV (3.25) 3 N-gram Language Models 3.5 Smoothing 3.5.2 Add-k smoothing Add-k smoothing requires that we have a method for choosing k; this can be done, for example, by optimizing on a devset. Although add-k is useful for some tasks (including text classification), it turns out that it still doesn't work well for language modeling, generating counts with poor variances and often inappropriate discounts (Gale and Church, 1994) . 
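A short sketch of the add-k estimate in Eq. 3.25 using plain count tables; the counts, vocabulary size, and k values below are illustrative assumptions, not figures from the corpora discussed here.

    from collections import Counter

    def add_k_bigram_prob(word, prev, bigram_counts, unigram_counts, V, k=1.0):
        """Add-k smoothed bigram estimate, as in Eq. 3.25:
        P*(w_n | w_{n-1}) = (C(w_{n-1} w_n) + k) / (C(w_{n-1}) + k*V)."""
        return (bigram_counts[(prev, word)] + k) / (unigram_counts[prev] + k * V)

    # Tiny illustration with made-up counts.
    unigrams = Counter({"denied": 10, "the": 50})
    bigrams = Counter({("denied", "the"): 9})
    V = 1000  # assumed vocabulary size
    print(add_k_bigram_prob("the", "denied", bigrams, unigrams, V, k=1))    # Laplace (add-one)
    print(add_k_bigram_prob("offer", "the", bigrams, unigrams, V, k=0.5))   # unseen bigram now gets probability > 0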
3 N-gram Language Models 3.5 Smoothing 3.5.3 Backoff and Interpolation The discounting we have been discussing so far can help solve the problem of zero frequency n-grams. But there is an additional source of knowledge we can draw on. If we are trying to compute P(w_n | w_{n-2} w_{n-1}) but we have no examples of a particular trigram w_{n-2} w_{n-1} w_n, we can instead estimate its probability by using the bigram probability P(w_n | w_{n-1}). Similarly, if we don't have counts to compute P(w_n | w_{n-1}), we can look to the unigram P(w_n).
-3 N-gram Language Models 3.5 Smoothing 3.5.3 Backoff and Interpolation In other words, sometimes using less context is a good thing, helping to generalize more for contexts that the model hasn't learned much about. There are two ways to use this n-gram "hierarchy". In backoff, we use the trigram if the evidence is sufficient, otherwise we use the bigram, otherwise the unigram. In other words, we only "back off" to a lower-order n-gram if we have zero evidence for a higher-order n-gram. By contrast, in interpolation, we always mix the probability estimates from all the n-gram estimators, weighing and combining the trigram, bigram, and unigram counts.
+3 N-gram Language Models 3.5 Smoothing 3.5.3 Backoff and Interpolation "In other words, sometimes using less context is a good thing, helping to generalize more for contexts that the model hasn't learned much about. There are two ways to use this n-gram ""hierarchy"". In backoff, we use the trigram if the evidence is sufficient, otherwise we use the bigram, otherwise the unigram. In other words, we only ""back off"" to a lower-order n-gram if we have zero evidence for a higher-order n-gram. By contrast, in interpolation, we always mix the probability estimates from all the n-gram estimators, weighing and combining the trigram, bigram, and unigram counts."
3 N-gram Language Models 3.5 Smoothing 3.5.3 Backoff and Interpolation In simple linear interpolation, we combine different order n-grams by linearly interpolating them. Thus, we estimate the trigram probability P(w_n | w_{n-2} w_{n-1}) by mixing together the unigram, bigram, and trigram probabilities, each weighted by a λ:
3 N-gram Language Models 3.5 Smoothing 3.5.3 Backoff and Interpolation P̂(w_n | w_{n-2} w_{n-1}) = λ_1 P(w_n) + λ_2 P(w_n | w_{n-1}) + λ_3 P(w_n | w_{n-2} w_{n-1}) (3.26)
3 N-gram Language Models 3.5 Smoothing 3.5.3 Backoff and Interpolation The λs must sum to 1, making Eq. 3.26 equivalent to a weighted average:
@@ -321,7 +321,7 @@ n_chapter chapter n_section section n_subsection subsection text
3 N-gram Language Models 3.6 Kneser-Ney Smoothing P_AbsoluteDiscounting(w_i | w_{i-1}) = (C(w_{i-1} w_i) − d) / Σ_v C(w_{i-1} v) + λ(w_{i-1}) P(w_i) (3.30)
3 N-gram Language Models 3.6 Kneser-Ney Smoothing The first term is the discounted bigram, and the second term is the unigram with an interpolation weight λ. We could just set all the d values to 0.75, or we could keep a separate discount value of 0.5 for the bigrams with counts of 1.
3 N-gram Language Models 3.6 Kneser-Ney Smoothing Kneser-Ney discounting (Kneser and Ney, 1995) augments absolute discounting with a more sophisticated way to handle the lower-order unigram distribution. Consider the job of predicting the next word in this sentence, assuming we are interpolating a bigram and a unigram model: I can't see without my reading ____. The word glasses seems much more likely to follow here than, say, the word Kong, so we'd like our unigram model to prefer glasses.
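Before continuing with Kneser-Ney, here is a minimal sketch of the simple linear interpolation of Eq. 3.26. The component models are passed in as plain functions, and the λ weights are arbitrary illustrative values that sum to 1; in practice they would be tuned on held-out data.

    def interpolated_trigram_prob(w, u, v, p_uni, p_bi, p_tri, lambdas=(0.1, 0.3, 0.6)):
        """P_hat(w | u v) = l1*P(w) + l2*P(w | v) + l3*P(w | u v)  (Eq. 3.26)."""
        l1, l2, l3 = lambdas
        assert abs(l1 + l2 + l3 - 1.0) < 1e-9, "interpolation weights must sum to 1"
        return l1 * p_uni(w) + l2 * p_bi(w, v) + l3 * p_tri(w, u, v)

    # Toy component models (dictionaries with made-up probabilities stand in for trained models).
    p_uni = lambda w: {"the": 0.07, "offer": 0.001}.get(w, 1e-6)
    p_bi  = lambda w, v: {("the", "offer"): 0.01}.get((v, w), 1e-6)
    p_tri = lambda w, u, v: {("denied", "the", "offer"): 0.002}.get((u, v, w), 0.0)
    print(interpolated_trigram_prob("offer", "denied", "the", p_uni, p_bi, p_tri))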
But in fact it's Kong that is more common, since Hong Kong is a very frequent word. A standard unigram model will assign Kong a higher probability than glasses. We would like to capture the intuition that although Kong is frequent, it is mainly only frequent in the phrase Hong Kong, that is, after the word Hong. The word glasses has a much wider distribution.
-3 N-gram Language Models 3.6 Kneser-Ney Smoothing In other words, instead of P(w), which answers the question "How likely is w?", we'd like to create a unigram model that we might call P_CONTINUATION, which answers the question "How likely is w to appear as a novel continuation?". How can we estimate this probability of seeing the word w as a novel continuation, in a new unseen context? The Kneser-Ney intuition is to base our estimate of P_CONTINUATION on the number of different contexts word w has appeared in, that is, the number of bigram types it completes. Every bigram type was a novel continuation the first time it was seen. We hypothesize that words that have appeared in more contexts in the past are more likely to appear in some new context as well. The number of times a word w appears as a novel continuation can be expressed as:
+3 N-gram Language Models 3.6 Kneser-Ney Smoothing "In other words, instead of P(w), which answers the question ""How likely is w?"", we'd like to create a unigram model that we might call P_CONTINUATION, which answers the question ""How likely is w to appear as a novel continuation?"". How can we estimate this probability of seeing the word w as a novel continuation, in a new unseen context? The Kneser-Ney intuition is to base our estimate of P_CONTINUATION on the number of different contexts word w has appeared in, that is, the number of bigram types it completes. Every bigram type was a novel continuation the first time it was seen. We hypothesize that words that have appeared in more contexts in the past are more likely to appear in some new context as well. The number of times a word w appears as a novel continuation can be expressed as:"
3 N-gram Language Models 3.6 Kneser-Ney Smoothing P_CONTINUATION(w) ∝ |{v : C(vw) > 0}| (3.31)
3 N-gram Language Models 3.6 Kneser-Ney Smoothing To turn this count into a probability, we normalize by the total number of word bigram types. In summary:
3 N-gram Language Models 3.6 Kneser-Ney Smoothing P_CONTINUATION(w) = |{v : C(vw) > 0}| / |{(u′, w′) : C(u′w′) > 0}| (3.32)
@@ -382,7 +382,7 @@
Advanced: Perplexity's Relation to Entropy Perplexity(W) = 2^H(W) = P(w_1 w_2 ... w_N)^(−1/N)
Advanced: Perplexity's Relation to Entropy 3.9 Summary This chapter introduced language modeling and the n-gram, one of the most widely used tools in language processing.
Advanced: Perplexity's Relation to Entropy 3.9 Summary • Language models offer a way to assign a probability to a sentence or other sequence of words, and to predict a word from preceding words. • n-grams are Markov models that estimate words from a fixed window of previous words. n-gram probabilities can be estimated by counting in a corpus and normalizing (the maximum likelihood estimate). • n-gram language models are evaluated extrinsically in some task, or intrinsically using perplexity. • The perplexity of a test set according to a language model is the geometric mean of the inverse test set probability computed by the model. • Smoothing algorithms provide a more sophisticated way to estimate the probability of n-grams.
Commonly used smoothing algorithms for n-grams rely on lower-order n-gram counts through backoff or interpolation. Advanced: Perplexity's Relation to Entropy 3.9 Summary • Both backoff and interpolation require discounting to create a probability distribution. • Kneser-Ney smoothing makes use of the probability of a word being a novel continuation. The interpolated Kneser-Ney smoothing algorithm mixes a discounted probability with a lower-order continuation probability. -Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical Notes The underlying mathematics of the n-gram was first proposed by Markov (1913) , who used what are now called Markov chains (bigrams and trigrams) to predict whether an upcoming letter in Pushkin's Eugene Onegin would be a vowel or a consonant. Markov classified 20,000 letters as V or C and computed the bigram and trigram probability that a given letter would be a vowel given the previous one or two letters. Shannon (1948) applied n-grams to compute approximations to English word sequences. Based on Shannon's work, Markov models were commonly used in engineering, linguistic, and psychological work on modeling word sequences by the 1950s. In a series of extremely influential papers starting with Chomsky (1956) and including Chomsky (1957) and Miller and Chomsky (1963) , Noam Chomsky argued that "finite-state Markov processes", while a possibly useful engineering heuristic, were incapable of being a complete cognitive model of human grammatical knowledge. These arguments led many linguists and computational linguists to ignore work in statistical modeling for decades. The resurgence of n-gram models came from Jelinek and colleagues at the IBM Thomas J. Watson Research Center, who were influenced by Shannon, and Baker at CMU, who was influenced by the work of Baum and colleagues. Independently these two labs successfully used n-grams in their speech recognition systems (Baker 1975b , Jelinek 1976 , Baker 1975a , Bahl et al. 1983 , Jelinek 1990 ). +Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical Notes "The underlying mathematics of the n-gram was first proposed by Markov (1913) , who used what are now called Markov chains (bigrams and trigrams) to predict whether an upcoming letter in Pushkin's Eugene Onegin would be a vowel or a consonant. Markov classified 20,000 letters as V or C and computed the bigram and trigram probability that a given letter would be a vowel given the previous one or two letters. Shannon (1948) applied n-grams to compute approximations to English word sequences. Based on Shannon's work, Markov models were commonly used in engineering, linguistic, and psychological work on modeling word sequences by the 1950s. In a series of extremely influential papers starting with Chomsky (1956) and including Chomsky (1957) and Miller and Chomsky (1963) , Noam Chomsky argued that ""finite-state Markov processes"", while a possibly useful engineering heuristic, were incapable of being a complete cognitive model of human grammatical knowledge. These arguments led many linguists and computational linguists to ignore work in statistical modeling for decades. The resurgence of n-gram models came from Jelinek and colleagues at the IBM Thomas J. Watson Research Center, who were influenced by Shannon, and Baker at CMU, who was influenced by the work of Baum and colleagues. 
Independently these two labs successfully used n-grams in their speech recognition systems (Baker 1975b , Jelinek 1976 , Baker 1975a , Bahl et al. 1983 , Jelinek 1990 )." Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical Notes Add-one smoothing derives from Laplace's 1812 law of succession and was first applied as an engineering solution to the zero frequency problem by Jeffreys (1948) based on an earlier Add-K suggestion by Johnson (1932) . Problems with the addone algorithm are summarized in Gale and Church (1994) . Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical Notes A wide variety of different language modeling and smoothing techniques were proposed in the 80s and 90s, including Good-Turing discounting-first applied to the n-gram smoothing at IBM by Katz (Nádas 1984, Church and Gale 1991)-Witten-Bell discounting (Witten and Bell, 1991) , and varieties of class-based ngram models that used information about word classes. Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical Notes Starting in the late 1990s, Chen and Goodman performed a number of carefully controlled experiments comparing different discounting algorithms, cache models, class-based models, and other language model parameters (Chen and Goodman 1999, Goodman 2006, inter alia) . They showed the advantages of Modified Interpolated Kneser-Ney, which became the standard baseline for n-gram language modeling, especially because they showed that caches and class-based models provided only minor additional improvement. These papers are recommended for any reader with further interest in n-gram language modeling. SRILM (Stolcke, 2002) and KenLM (Heafield 2011 , Heafield et al. 2013 are publicly available toolkits for building n-gram language models. @@ -390,18 +390,18 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 4 Naive Bayes and Sentiment Classification Classification lies at the heart of both human and machine intelligence. Deciding what letter, word, or image has been presented to our senses, recognizing faces or voices, sorting mail, assigning grades to homeworks; these are all examples of assigning a category to an input. The potential challenges of this task are highlighted by the fabulist Jorge Luis Borges 1964, who imagined classifying animals into: 4 Naive Bayes and Sentiment Classification (a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel's hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance. 4 Naive Bayes and Sentiment Classification Many language processing tasks involve classification, although luckily our classes are much easier to define than those of Borges. In this chapter we introduce the naive Bayes algorithm and apply it to text categorization, the task of assigning a label or text categorization category to an entire text or document. -4 Naive Bayes and Sentiment Classification We focus on one common text categorization task, sentiment analysis, the ex-sentiment analysis traction of sentiment, the positive or negative orientation that a writer expresses toward some object. 
A review of a movie, book, or product on the web expresses the author's sentiment toward the product, while an editorial or political text expresses sentiment toward a candidate or political action. Extracting consumer or public sentiment is thus relevant for fields from marketing to politics. The simplest version of sentiment analysis is a binary classification task, and the words of the review provide excellent cues. Consider, for example, the following phrases extracted from positive and negative reviews of movies and restaurants. Words like great, richly, awesome, and pathetic, and awful and ridiculously are very informative cues: + ...zany characters and richly applied satire, and some great plot twists − It was pathetic. The worst part about it was the boxing scenes... + ...awesome caramel sauce and sweet toasty almonds. I love this place! − ...awful pizza and ridiculously overpriced... Spam detection is another important commercial application, the binary classpam detection sification task of assigning an email to one of the two classes spam or not-spam. Many lexical and other features can be used to perform this classification. For example you might quite reasonably be suspicious of an email containing phrases like "online pharmaceutical" or "WITHOUT ANY COST" or "Dear Winner". +4 Naive Bayes and Sentiment Classification "We focus on one common text categorization task, sentiment analysis, the ex-sentiment analysis traction of sentiment, the positive or negative orientation that a writer expresses toward some object. A review of a movie, book, or product on the web expresses the author's sentiment toward the product, while an editorial or political text expresses sentiment toward a candidate or political action. Extracting consumer or public sentiment is thus relevant for fields from marketing to politics. The simplest version of sentiment analysis is a binary classification task, and the words of the review provide excellent cues. Consider, for example, the following phrases extracted from positive and negative reviews of movies and restaurants. Words like great, richly, awesome, and pathetic, and awful and ridiculously are very informative cues: + ...zany characters and richly applied satire, and some great plot twists − It was pathetic. The worst part about it was the boxing scenes... + ...awesome caramel sauce and sweet toasty almonds. I love this place! − ...awful pizza and ridiculously overpriced... Spam detection is another important commercial application, the binary classpam detection sification task of assigning an email to one of the two classes spam or not-spam. Many lexical and other features can be used to perform this classification. For example you might quite reasonably be suspicious of an email containing phrases like ""online pharmaceutical"" or ""WITHOUT ANY COST"" or ""Dear Winner""." 4 Naive Bayes and Sentiment Classification Another thing we might want to know about a text is the language it's written in. Texts on social media, for example, can be in any number of languages and we'll need to apply different processing. The task of language id is thus the first language id step in most language processing pipelines. Related text classification tasks like authorship attributiondetermining a text's author-are also relevant to the digital authorship attribution humanities, social sciences, and forensic linguistics. 
4 Naive Bayes and Sentiment Classification Finally, one of the oldest tasks in text classification is assigning a library subject category or topic label to a text. Deciding whether a research paper concerns epidemiology or instead, perhaps, embryology, is an important component of information retrieval. Various sets of subject categories exist, such as the MeSH (Medical Subject Headings) thesaurus. In fact, as we will see, subject category classification is the task for which the naive Bayes algorithm was invented in 1961. 4 Naive Bayes and Sentiment Classification Classification is essential for tasks below the level of the document as well. We've already seen period disambiguation (deciding if a period is the end of a sentence or part of a word), and word tokenization (deciding if a character should be a word boundary). Even language modeling can be viewed as classification: each word can be thought of as a class, and so predicting the next word is classifying the context-so-far into a class for each next word. A part-of-speech tagger (Chapter 8) classifies each occurrence of a word in a sentence as, e.g., a noun or a verb. 4 Naive Bayes and Sentiment Classification The goal of classification is to take a single observation, extract some useful features, and thereby classify the observation into one of a set of discrete classes. One method for classifying text is to use handwritten rules. There are many areas of language processing where handwritten rule-based classifiers constitute a state-ofthe-art system, or at least part of it. -4 Naive Bayes and Sentiment Classification Rules can be fragile, however, as situations or data change over time, and for some tasks humans aren't necessarily good at coming up with the rules. Most cases of classification in language processing are instead done via supervised machine learning, and this will be the subject of the remainder of this chapter. In supervised supervised machine learning learning, we have a data set of input observations, each associated with some correct output (a 'supervision signal'). The goal of the algorithm is to learn how to map from a new observation to a correct output. Formally, the task of supervised classification is to take an input x and a fixed set of output classes Y = y 1 , y 2 , ..., y M and return a predicted class y ∈ Y . For text classification, we'll sometimes talk about c (for "class") instead of y as our output variable, and d (for "document") instead of x as our input variable. In the supervised situation we have a training set of N documents that have each been hand-labeled with a class: +4 Naive Bayes and Sentiment Classification "Rules can be fragile, however, as situations or data change over time, and for some tasks humans aren't necessarily good at coming up with the rules. Most cases of classification in language processing are instead done via supervised machine learning, and this will be the subject of the remainder of this chapter. In supervised supervised machine learning learning, we have a data set of input observations, each associated with some correct output (a 'supervision signal'). The goal of the algorithm is to learn how to map from a new observation to a correct output. Formally, the task of supervised classification is to take an input x and a fixed set of output classes Y = y 1 , y 2 , ..., y M and return a predicted class y ∈ Y . 
For text classification, we'll sometimes talk about c (for ""class"") instead of y as our output variable, and d (for ""document"") instead of x as our input variable. In the supervised situation we have a training set of N documents that have each been hand-labeled with a class:" 4 Naive Bayes and Sentiment Classification (d 1 , c 1 ), ...., (d N , c N ). 4 Naive Bayes and Sentiment Classification Our goal is to learn a classifier that is capable of mapping from a new document d to its correct class c ∈ C. A probabilistic classifier additionally will tell us the probability of the observation being in the class. This full distribution over the classes can be useful information for downstream decisions; avoiding making discrete decisions early on can be useful when combining systems. 4 Naive Bayes and Sentiment Classification Many kinds of machine learning algorithms are used to build classifiers. This chapter introduces naive Bayes; the following one introduces logistic regression. These exemplify two ways of doing classification. Generative classifiers like naive Bayes build a model of how a class could generate some input data. Given an observation, they return the class most likely to have generated the observation. Discriminative classifiers like logistic regression instead learn what features from the input are most useful to discriminate between the different possible classes. While discriminative systems are often more accurate and hence more commonly used, generative classifiers still have a role. -4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers In this section we introduce the multinomial naive Bayes classifier, so called be- cause it is a Bayesian classifier that makes a simplifying (naive) assumption about how the features interact. The intuition of the classifier is shown in Fig. 4 .1. We represent a text document as if it were a bag-of-words, that is, an unordered set of words with their position bag-of-words ignored, keeping only their frequency in the document. In the example in the figure, instead of representing the word order in all the phrases like "I love this movie" and "I would recommend it", we simply note that the word I occurred 5 times in the entire excerpt, the word it 6 times, the words love, recommend, and movie once, and so on. +4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers "In this section we introduce the multinomial naive Bayes classifier, so called be- cause it is a Bayesian classifier that makes a simplifying (naive) assumption about how the features interact. The intuition of the classifier is shown in Fig. 4 .1. We represent a text document as if it were a bag-of-words, that is, an unordered set of words with their position bag-of-words ignored, keeping only their frequency in the document. In the example in the figure, instead of representing the word order in all the phrases like ""I love this movie"" and ""I would recommend it"", we simply note that the word I occurred 5 times in the entire excerpt, the word it 6 times, the words love, recommend, and movie once, and so on." 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers Figure 4 .1 Intuition of the multinomial naive Bayes classifier applied to a movie review. The position of the words is ignored (the bag of words assumption) and we make use of the frequency of each word. 
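A tiny sketch of the bag-of-words representation described above: keep each word's frequency and discard its position. The whitespace-and-lowercase tokenizer is a stand-in for the real tokenization of Chapter 2.

    from collections import Counter

    def bag_of_words(text):
        """Bag-of-words representation: word frequencies only, positions ignored."""
        return Counter(text.lower().split())

    print(bag_of_words("I love this movie and I would recommend it"))
    # Counter({'i': 2, 'love': 1, 'this': 1, 'movie': 1, 'and': 1, 'would': 1, 'recommend': 1, 'it': 1})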
-4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers Naive Bayes is a probabilistic classifier, meaning that for a document d, out of all classes c ∈ C the classifier returns the classĉ which has the maximum posterior probability given the document. In Eq. 4.1 we use the hat notationˆto mean "our estimate of the correct class". +4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers "Naive Bayes is a probabilistic classifier, meaning that for a document d, out of all classes c ∈ C the classifier returns the classĉ which has the maximum posterior probability given the document. In Eq. 4.1 we use the hat notationˆto mean ""our estimate of the correct class""." 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers This idea of Bayesian inference has been known since the work of Bayes (1763), and was first applied to text classification by Mosteller and Wallace (1964) . The intuition of Bayesian classification is to use Bayes' rule to transform Eq. 4.1 into other probabilities that have some useful properties. Bayes' rule is presented in Eq. 4.2; it gives us a way to break down any conditional probability P(x|y) into three other probabilities: 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers P(x|y) = P(y|x)P(x) P(y) (4.2) 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers We can then substitute Eq. 4.2 into Eq. 4.1 to get Eq. 4.3: @@ -416,7 +416,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers Without loss of generalization, we can represent a document d as a set of features 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers f 1 , f 2 , ..., f n :ĉ = argmax c∈C likelihood P( f 1 , f 2 , ...., f n |c) prior P(c) (4.6) 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers Unfortunately, Eq. 4.6 is still too hard to compute directly: without some simplifying assumptions, estimating the probability of every possible combination of features (for example, every possible set of words and positions) would require huge numbers of parameters and impossibly large training sets. Naive Bayes classifiers therefore make two simplifying assumptions. -4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers The first is the bag of words assumption discussed intuitively above: we assume position doesn't matter, and that the word "love" has the same effect on classification whether it occurs as the 1st, 20th, or last word in the document. Thus we assume that the features f 1 , f 2 , ..., f n only encode word identity and not position. +4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers "The first is the bag of words assumption discussed intuitively above: we assume position doesn't matter, and that the word ""love"" has the same effect on classification whether it occurs as the 1st, 20th, or last word in the document. Thus we assume that the features f 1 , f 2 , ..., f n only encode word identity and not position." 
4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers The second is commonly called the naive Bayes assumption: this is the condi-naive Bayes assumption tional independence assumption that the probabilities P( f i |c) are independent given the class c and hence can be 'naively' multiplied as follows: 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers P( f 1 , f 2 , ...., f n |c) = P( f 1 |c) • P( f 2 |c) • ... • P( f n |c) (4.7) 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers The final equation for the class chosen by a naive Bayes classifier is thus: @@ -432,10 +432,10 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 4 Naive Bayes and Sentiment Classification 4.1 Naive Bayes Classifiers are called linear classifiers. 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier How can we learn the probabilities P(c) and P( f i |c)? Let's first consider the maximum likelihood estimate. We'll simply use the frequencies in the data. For the class prior P(c) we ask what percentage of the documents in our training set are in each class c. Let N c be the number of documents in our training data with class c and N doc be the total number of documents. Then: 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier P(c) = N c N doc (4.11) -4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier To learn the probability P( f i |c), we'll assume a feature is just the existence of a word in the document's bag of words, and so we'll want P(w i |c), which we compute as the fraction of times the word w i appears among all words in all documents of topic c. We first concatenate all documents with category c into one big "category c" text. Then we use the frequency of w i in this concatenated document to give a maximum likelihood estimate of the probability: +4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier "To learn the probability P( f i |c), we'll assume a feature is just the existence of a word in the document's bag of words, and so we'll want P(w i |c), which we compute as the fraction of times the word w i appears among all words in all documents of topic c. We first concatenate all documents with category c into one big ""category c"" text. Then we use the frequency of w i in this concatenated document to give a maximum likelihood estimate of the probability:" 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier P(w i |c) = count(w i , c) w∈V count(w, c) (4.12) 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier Here the vocabulary V consists of the union of all the word types in all classes, not just the words in one class c. -4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier There is a problem, however, with maximum likelihood training. Imagine we are trying to estimate the likelihood of the word "fantastic" given class positive, but suppose there are no training documents that both contain the word "fantastic" and are classified as positive. Perhaps the word "fantastic" happens to occur (sarcastically?) in the class negative. In such a case the probability for this feature will be zero:P ("fantastic"|positive) = count("fantastic", positive) +4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier "There is a problem, however, with maximum likelihood training. 
Imagine we are trying to estimate the likelihood of the word ""fantastic"" given class positive, but suppose there are no training documents that both contain the word ""fantastic"" and are classified as positive. Perhaps the word ""fantastic"" happens to occur (sarcastically?) in the class negative. In such a case the probability for this feature will be zero:P (""fantastic""|positive) = count(""fantastic"", positive)" 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier w∈V count(w, positive) = 0 (4.13) 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier But since naive Bayes naively multiplies all the feature likelihoods together, zero probabilities in the likelihood term for any class will cause the probability of the class to be zero, no matter the other evidence! The simplest solution is the add-one (Laplace) smoothing introduced in Chapter 3. While Laplace smoothing is usually replaced by more sophisticated smoothing algorithms in language modeling, it is commonly used in naive Bayes text categorization:P 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier (w i |c) = count(w i , c) + 1 w∈V (count(w, c) + 1) = count(w i , c) + 1 w∈V count(w, c) + |V | (4.14) @@ -443,9 +443,9 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier Finally, some systems choose to completely ignore another class of words: stop words, very frequent words like the and a. This can be done by sorting the vocabustop words lary by frequency in the training set, and defining the top 10-100 vocabulary entries as stop words, or alternatively by using one of the many predefined stop word lists available online. Then each instance of these stop words is simply removed from both training and test documents as if it had never occurred. In most text classification applications, however, using a stop word list doesn't improve performance, and so it is more common to make use of the entire vocabulary and not use a stop word list. Fig. 4 .2 shows the final algorithm. 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier function TRAIN NAIVE BAYES(D, C) returns log P(c) and log P(w|c) 4 Naive Bayes and Sentiment Classification 4.2 Training the Naive Bayes Classifier for each class c ∈ C # Calculate P(c) terms N doc = number of documents in D N c = number of documents from D in class c logprior[c] ← log N c N doc V ← vocabulary of D bigdoc[c] ← append(d) for d ∈ D with class c for each word w in V # Calculate P(w|c) terms count(w,c) ← # of occurrences of w in bigdoc[c] loglikelihood[w,c] ← log count(w, c) + 1 w in V (count (w , c) + 1) return logprior, loglikelihood, V function TEST NAIVE BAYES(testdoc, logprior, loglikelihood, C, V) returns best c for each class c ∈ C sum[c] ← logprior[c] for each position i in testdoc word ← testdoc[i] if word ∈ V sum[c] ← sum[c]+ loglikelihood[word,c] return argmax c sum[c] -4 Naive Bayes and Sentiment Classification 4.3 Worked Example Let's walk through an example of training and testing naive Bayes with add-one smoothing. We'll use a sentiment analysis domain with the two classes positive (+) and negative (-), and take the following miniature training and test documents simplified from actual movie reviews. 
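One way to render the TRAIN NAIVE BAYES and TEST NAIVE BAYES pseudocode above as runnable Python (a sketch, with add-one smoothing and unknown test words skipped, as in the pseudocode); the toy documents at the end are made up for illustration and are not the example corpus used next.

    import math
    from collections import Counter

    def train_naive_bayes(docs, classes):
        """Multinomial naive Bayes training with add-one smoothing.
        `docs` is a list of (token_list, class) pairs."""
        n_doc = len(docs)
        logprior, loglikelihood = {}, {}
        vocab = {w for tokens, _ in docs for w in tokens}
        for c in classes:
            class_docs = [tokens for tokens, label in docs if label == c]
            logprior[c] = math.log(len(class_docs) / n_doc)
            big_doc = Counter(w for tokens in class_docs for w in tokens)
            denom = sum(big_doc.values()) + len(vocab)
            for w in vocab:
                loglikelihood[(w, c)] = math.log((big_doc[w] + 1) / denom)
        return logprior, loglikelihood, vocab

    def test_naive_bayes(test_tokens, logprior, loglikelihood, classes, vocab):
        """Return the class maximizing log prior plus summed log likelihoods."""
        def score(c):
            return logprior[c] + sum(loglikelihood[(w, c)] for w in test_tokens if w in vocab)
        return max(classes, key=score)

    # Toy usage with made-up documents:
    train = [("fun and enjoyable".split(), "+"), ("boring and predictable".split(), "-")]
    lp, ll, V = train_naive_bayes(train, ["+", "-"])
    print(test_naive_bayes("really fun".split(), lp, ll, ["+", "-"], V))  # '+'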
: P(−) = 3 5 P(+) = 2 5 The word with doesn't occur in the training set, so we drop it completely (as mentioned above, we don't use unknown word models for naive Bayes). The likelihoods from the training set for the remaining three words "predictable", "no", and "fun", are as follows, from Eq. 4.14 (computing the probabilities for the remainder of the words in the training set is left as an exercise for the reader): P("predictable"|−) = 1 + 1 14 + 20 P("predictable"|+) = 0 + 1 9 + 20 P("no"|−) = 1 + 1 14 + 20 P("no"|+) = 0 + 1 9 + 20 P("fun"|−) = 0 + 1 14 + 20 P("fun" +4 Naive Bayes and Sentiment Classification 4.3 Worked Example "Let's walk through an example of training and testing naive Bayes with add-one smoothing. We'll use a sentiment analysis domain with the two classes positive (+) and negative (-), and take the following miniature training and test documents simplified from actual movie reviews. : P(−) = 3 5 P(+) = 2 5 The word with doesn't occur in the training set, so we drop it completely (as mentioned above, we don't use unknown word models for naive Bayes). The likelihoods from the training set for the remaining three words ""predictable"", ""no"", and ""fun"", are as follows, from Eq. 4.14 (computing the probabilities for the remainder of the words in the training set is left as an exercise for the reader): P(""predictable""|−) = 1 + 1 14 + 20 P(""predictable""|+) = 0 + 1 9 + 20 P(""no""|−) = 1 + 1 14 + 20 P(""no""|+) = 0 + 1 9 + 20 P(""fun""|−) = 0 + 1 14 + 20 P(""fun""" 4 Naive Bayes and Sentiment Classification 4.3 Worked Example |+) = 1 + 1 9 + 20 -4 Naive Bayes and Sentiment Classification 4.3 Worked Example For the test sentence S = "predictable with no fun", after removing the word 'with', the chosen class, via Eq. 4.9, is therefore computed as follows: +4 Naive Bayes and Sentiment Classification 4.3 Worked Example "For the test sentence S = ""predictable with no fun"", after removing the word 'with', the chosen class, via Eq. 4.9, is therefore computed as follows:" 4 Naive Bayes and Sentiment Classification 4.3 Worked Example P(−)P(S|−) = 3 5 × 2 × 2 × 1 34 3 = 6.1 × 10 −5 P(+)P(S|+) = 2 5 × 1 × 1 × 2 29 3 = 3.2 × 10 −5 4 Naive Bayes and Sentiment Classification 4.3 Worked Example The model thus predicts the class negative for the test sentence. 4 Naive Bayes and Sentiment Classification 4.4 Optimizing for Sentiment Analysis While standard naive Bayes text classification can work well for sentiment analysis, some small changes are generally employed that improve performance. First, for sentiment classification and a number of other text classification tasks, whether a word occurs or not seems to matter more than its frequency. Thus it often improves performance to clip the word counts in each document at 1 (see the end of the chapter for pointers to these results). This variant is called binary 4.4 • OPTIMIZING FOR SENTIMENT ANALYSIS 63 multinomial naive Bayes or binary NB. The variant uses the same Eq. 4.10 except binary NB that for each document we remove all duplicate words before concatenating them into the single big document. Fig. 4 .3 shows an example in which a set of four documents (shortened and text-normalized for this example) are remapped to binary, with the modified counts shown in the table on the right. The example is worked without add-1 smoothing to make the differences clearer. Note that the results counts need not be 1; the word great has a count of 2 even for Binary NB, because it appears in multiple documents. 
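What follows is a minimal sketch in plain Python (not the text's pseudocode) of the training and decision computation just described. The five miniature documents are reconstructed so that they match the counts used in the worked example (three negative documents with 14 tokens, two positive with 9, and |V| = 20), and a BINARY flag switches to the binary multinomial variant described just above, in which duplicate words within a document are removed before counting.

```python
from collections import Counter
import math

BINARY = False   # True gives the binary-NB variant: clip each document's word counts at 1

# Toy training set reconstructed to match the worked example's counts
# (14 negative tokens, 9 positive tokens, |V| = 20).
train = [
    ("just plain boring", "-"),
    ("entirely predictable and lacks energy", "-"),
    ("no surprises and very few laughs", "-"),
    ("very powerful", "+"),
    ("the most fun film of the summer", "+"),
]
test = "predictable with no fun"

classes = ["-", "+"]
vocab = {w for doc, _ in train for w in doc.split()}
logprior, loglik = {}, {}

for c in classes:
    docs_c = [doc for doc, label in train if label == c]
    logprior[c] = math.log(len(docs_c) / len(train))          # Eq. 4.11
    counts = Counter(w for doc in docs_c
                     for w in (set(doc.split()) if BINARY else doc.split()))
    total = sum(counts.values())
    # Add-one (Laplace) smoothing, Eq. 4.14
    loglik[c] = {w: math.log((counts[w] + 1) / (total + len(vocab))) for w in vocab}

scores = {}
for c in classes:
    # Unknown test words (here "with") are simply dropped
    scores[c] = logprior[c] + sum(loglik[c][w] for w in test.split() if w in vocab)

print(max(scores, key=scores.get))   # '-' : the negative class wins, as in the text
```

Exponentiating the two class scores recovers, up to rounding, the 6.1 × 10−5 vs. 3.2 × 10−5 comparison computed by hand above.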
@@ -455,23 +455,23 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 4 Naive Bayes and Sentiment Classification 4.4 Optimizing for Sentiment Analysis of Hu and Liu (2004a) and the MPQA Subjectivity Lexicon (Wilson et al., 2005) . For example the MPQA subjectivity lexicon has 6885 words, 2718 positive and 4912 negative, each marked for whether it is strongly or weakly biased. Some samples of positive and negative words from the MPQA lexicon include: + : admirable, beautiful, confident, dazzling, ecstatic, favor, glee, great − : awful, bad, bias, catastrophe, cheat, deny, envious, foul, harsh, hate A common way to use lexicons in a naive Bayes classifier is to add a feature that is counted whenever a word from that lexicon occurs. Thus we might add a feature called 'this word occurs in the positive lexicon', and treat all instances of words in the lexicon as counts for that one feature, instead of counting each word separately. Similarly, we might add as a second feature 'this word occurs in the negative lexicon' of words in the negative lexicon. If we have lots of training data, and if the test data matches the training data, using just two features won't work as well as using all the words. But when training data is sparse or not representative of the test set, using dense lexicon features instead of sparse individual-word features may generalize better. 4 Naive Bayes and Sentiment Classification 4.4 Optimizing for Sentiment Analysis We'll return to this use of lexicons in Chapter 20, showing how these lexicons can be learned automatically, and how they can be applied to many other tasks beyond sentiment classification. 4 Naive Bayes and Sentiment Classification 4.5 Naive Bayes for Other Text Classification Tasks In the previous section we pointed out that naive Bayes doesn't require that our classifier use all the words in the training data as features. In fact features in naive Bayes can express any property of the input text we want. -4 Naive Bayes and Sentiment Classification 4.5 Naive Bayes for Other Text Classification Tasks Consider the task of spam detection, deciding if a particular piece of email is spam detection an example of spam (unsolicited bulk email) -and one of the first applications of naive Bayes to text classification (Sahami et al., 1998) . A common solution here, rather than using all the words as individual features, is to predefine likely sets of words or phrases as features, combined with features that are not purely linguistic. For example the open-source SpamAssassin tool 1 predefines features like the phrase "one hundred percent guaranteed", or the feature mentions millions of dollars, which is a regular expression that matches suspiciously large sums of money. But it also includes features like HTML has a low ratio of text to image area, that aren't purely linguistic and might require some sophisticated computation, or totally non-linguistic features about, say, the path that the email took to arrive. 
More sample SpamAssassin features: -4 Naive Bayes and Sentiment Classification 4.5 Naive Bayes for Other Text Classification Tasks • Email subject line is all capital letters • Contains phrases of urgency like "urgent reply" • Email subject line contains "online pharmaceutical" -4 Naive Bayes and Sentiment Classification 4.5 Naive Bayes for Other Text Classification Tasks • HTML has unbalanced "head" tags • Claims you can be removed from the list For other tasks, like language id-determining what language a given piece language id of text is written in-the most effective naive Bayes features are not words at all, but character n-grams, 2-grams ('zw') 3-grams ('nya', ' Vo'), or 4-grams ('ie z', 'thei'), or, even simpler byte n-grams, where instead of using the multibyte Unicode character representations called codepoints, we just pretend everything is a string of raw bytes. Because spaces count as a byte, byte n-grams can model statistics about the beginning or ending of words. A widely used naive Bayes system, langid.py (Lui and Baldwin, 2012) begins with all possible n-grams of lengths 1-4, using feature selection to winnow down to the most informative 7000 final features. Language ID systems are trained on multilingual text, such as Wikipedia (Wikipedia text in 68 different languages was used in (Lui and Baldwin, 2011)), or newswire. To make sure that this multilingual text correctly reflects different regions, dialects, and socioeconomic classes, systems also add Twitter text in many languages geotagged to many regions (important for getting world English dialects from countries with large Anglophone populations like Nigeria or India), Bible and Quran translations, slang websites like Urban Dictionary, corpora of African American Vernacular English (Blodgett et al., 2016) , and so on (Jurgens et al., 2017). +4 Naive Bayes and Sentiment Classification 4.5 Naive Bayes for Other Text Classification Tasks "Consider the task of spam detection, deciding if a particular piece of email is spam detection an example of spam (unsolicited bulk email) -and one of the first applications of naive Bayes to text classification (Sahami et al., 1998) . A common solution here, rather than using all the words as individual features, is to predefine likely sets of words or phrases as features, combined with features that are not purely linguistic. For example the open-source SpamAssassin tool 1 predefines features like the phrase ""one hundred percent guaranteed"", or the feature mentions millions of dollars, which is a regular expression that matches suspiciously large sums of money. But it also includes features like HTML has a low ratio of text to image area, that aren't purely linguistic and might require some sophisticated computation, or totally non-linguistic features about, say, the path that the email took to arrive. 
More sample SpamAssassin features:" +4 Naive Bayes and Sentiment Classification 4.5 Naive Bayes for Other Text Classification Tasks "• Email subject line is all capital letters • Contains phrases of urgency like ""urgent reply"" • Email subject line contains ""online pharmaceutical""" +4 Naive Bayes and Sentiment Classification 4.5 Naive Bayes for Other Text Classification Tasks "• HTML has unbalanced ""head"" tags • Claims you can be removed from the list For other tasks, like language id-determining what language a given piece language id of text is written in-the most effective naive Bayes features are not words at all, but character n-grams, 2-grams ('zw') 3-grams ('nya', ' Vo'), or 4-grams ('ie z', 'thei'), or, even simpler byte n-grams, where instead of using the multibyte Unicode character representations called codepoints, we just pretend everything is a string of raw bytes. Because spaces count as a byte, byte n-grams can model statistics about the beginning or ending of words. A widely used naive Bayes system, langid.py (Lui and Baldwin, 2012) begins with all possible n-grams of lengths 1-4, using feature selection to winnow down to the most informative 7000 final features. Language ID systems are trained on multilingual text, such as Wikipedia (Wikipedia text in 68 different languages was used in (Lui and Baldwin, 2011)), or newswire. To make sure that this multilingual text correctly reflects different regions, dialects, and socioeconomic classes, systems also add Twitter text in many languages geotagged to many regions (important for getting world English dialects from countries with large Anglophone populations like Nigeria or India), Bible and Quran translations, slang websites like Urban Dictionary, corpora of African American Vernacular English (Blodgett et al., 2016) , and so on (Jurgens et al., 2017)." 4 Naive Bayes and Sentiment Classification 4.6 Naive Bayes as a Language Model As we saw in the previous section, naive Bayes classifiers can use any sort of feature: dictionaries, URLs, email addresses, network features, phrases, and so on. But if, as in the previous section, we use only individual word features, and we use all of the words in the text (not a subset), then naive Bayes has an important similarity to language modeling. Specifically, a naive Bayes model can be viewed as a set of class-specific unigram language models, in which the model for each class instantiates a unigram language model. 4 Naive Bayes and Sentiment Classification 4.6 Naive Bayes as a Language Model Since the likelihood features from the naive Bayes model assign a probability to each word P(word|c), the model also assigns a probability to each sentence: 4 Naive Bayes and Sentiment Classification 4.6 Naive Bayes as a Language Model P(s|c) = i∈positions P(w i |c) (4.15) 4 Naive Bayes and Sentiment Classification 4.6 Naive Bayes as a Language Model Thus consider a naive Bayes model with the classes positive (+) and negative (-) and the following model parameters: 4 Naive Bayes and Sentiment Classification 4.6 Naive Bayes as a Language Model w P(w|+) P(w|-) I 0.1 0.2 love 0.1 0.001 this 0.01 0.01 fun 0.05 0.005 film 0.1 0.1 ... ... ... 
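The table above can be turned directly into two class-specific unigram language models; the tiny sketch below (plain Python, copying in only the five parameters needed) applies Eq. 4.15 in log space, the same computation that the text carries out by hand just below.

```python
import math

# Per-class unigram parameters taken from the table above (only the words we need)
p = {
    "+": {"I": 0.1, "love": 0.1, "this": 0.01, "fun": 0.05, "film": 0.1},
    "-": {"I": 0.2, "love": 0.001, "this": 0.01, "fun": 0.005, "film": 0.1},
}

def sentence_loglik(sentence, c):
    # Eq. 4.15: P(s|c) is the product over positions of P(w_i|c), computed in log space
    return sum(math.log(p[c][w]) for w in sentence.split())

s = "I love this fun film"
for c in ("+", "-"):
    print(c, math.exp(sentence_loglik(s, c)))
# "+" gives ~5e-07 and "-" gives ~1e-09: the positive model assigns the higher likelihood
```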
-4 Naive Bayes and Sentiment Classification 4.6 Naive Bayes as a Language Model Each of the two columns above instantiates a language model that can assign a probability to the sentence "I love this fun film": P("I love this fun film"|+) = 0.1 × 0.1 × 0.01 × 0.05 × 0.1 = 0.0000005 P("I love this fun film"|−) = 0.2 × 0.001 × 0.01 × 0.005 × 0.1 = .0000000010 +4 Naive Bayes and Sentiment Classification 4.6 Naive Bayes as a Language Model "Each of the two columns above instantiates a language model that can assign a probability to the sentence ""I love this fun film"": P(""I love this fun film""|+) = 0.1 × 0.1 × 0.01 × 0.05 × 0.1 = 0.0000005 P(""I love this fun film""|−) = 0.2 × 0.001 × 0.01 × 0.005 × 0.1 = .0000000010" 4 Naive Bayes and Sentiment Classification 4.6 Naive Bayes as a Language Model As it happens, the positive model assigns a higher probability to the sentence: P(s|pos) > P(s|neg). Note that this is just the likelihood part of the naive Bayes model; once we multiply in the prior a full naive Bayes model might well make a different classification decision. -4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure To introduce the methods for evaluating text classification, let's first consider some simple binary detection tasks. For example, in spam detection, our goal is to label every text as being in the spam category ("positive") or not in the spam category ("negative"). For each item (email document) we therefore need to know whether our system called it spam or not. We also need to know whether the email is actually spam or not, i.e. the human-defined labels for each document that we are trying to match. We will refer to these human labels as the gold labels. +4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure "To introduce the methods for evaluating text classification, let's first consider some simple binary detection tasks. For example, in spam detection, our goal is to label every text as being in the spam category (""positive"") or not in the spam category (""negative""). For each item (email document) we therefore need to know whether our system called it spam or not. We also need to know whether the email is actually spam or not, i.e. the human-defined labels for each document that we are trying to match. We will refer to these human labels as the gold labels." 4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure Or imagine you're the CEO of the Delicious Pie Company and you need to know what people are saying about your pies on social media, so you build a system that detects tweets concerning Delicious Pie. Here the positive class is tweets about Delicious Pie and the negative class is all other tweets. 4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure In both cases, we need a metric for knowing how well our spam detector (or pie-tweet-detector) is doing. To evaluate any system for detecting things, we start by building a confusion matrix like the one shown in Fig. 4 4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure is a table for visualizing how an algorithm performs with respect to the human gold labels, using two dimensions (system output and gold labels), and each cell labeling a set of possible outcomes. In the spam detection case, for example, true positives are documents that are indeed spam (indicated by human-created gold labels) that our system correctly said were spam. 
False negatives are documents that are indeed spam but our system incorrectly labeled as non-spam. 4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure To the bottom right of the table is the equation for accuracy, which asks what percentage of all the observations (for the spam or pie examples that means all emails or tweets) our system labeled correctly. Although accuracy might seem a natural metric, we generally don't use it for text classification tasks. That's because accuracy doesn't work well when the classes are unbalanced (as indeed they are with spam, which is a large majority of email, or with tweets, which are mainly not about pie). To make this more explicit, imagine that we looked at a million tweets, and let's say that only 100 of them are discussing their love (or hatred) for our pie, -4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure while the other 999,900 are tweets about something completely unrelated. Imagine a simple classifier that stupidly classified every tweet as "not about pie". This classifier would have 999,900 true negatives and only 100 false negatives for an accuracy of 999,900/1,000,000 or 99.99%! What an amazing accuracy level! Surely we should be happy with this classifier? But of course this fabulous 'no pie' classifier would be completely useless, since it wouldn't find a single one of the customer comments we are looking for. In other words, accuracy is not a good metric when the goal is to discover something that is rare, or at least not completely balanced in frequency, which is a very common situation in the world. -4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure That's why instead of accuracy we generally turn to two other metrics shown in Precision and recall will help solve the problem with the useless "nothing is pie" classifier. This classifier, despite having a fabulous accuracy of 99.99%, has a terrible recall of 0 (since there are no true positives, and 100 false negatives, the recall is 0/100). You should convince yourself that the precision at finding relevant tweets is equally problematic. Thus precision and recall, unlike accuracy, emphasize true positives: finding the things that we are supposed to be looking for. +4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure "while the other 999,900 are tweets about something completely unrelated. Imagine a simple classifier that stupidly classified every tweet as ""not about pie"". This classifier would have 999,900 true negatives and only 100 false negatives for an accuracy of 999,900/1,000,000 or 99.99%! What an amazing accuracy level! Surely we should be happy with this classifier? But of course this fabulous 'no pie' classifier would be completely useless, since it wouldn't find a single one of the customer comments we are looking for. In other words, accuracy is not a good metric when the goal is to discover something that is rare, or at least not completely balanced in frequency, which is a very common situation in the world." +4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure "That's why instead of accuracy we generally turn to two other metrics shown in Precision and recall will help solve the problem with the useless ""nothing is pie"" classifier. 
This classifier, despite having a fabulous accuracy of 99.99%, has a terrible recall of 0 (since there are no true positives, and 100 false negatives, the recall is 0/100). You should convince yourself that the precision at finding relevant tweets is equally problematic. Thus precision and recall, unlike accuracy, emphasize true positives: finding the things that we are supposed to be looking for." 4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure There are many ways to define a single metric that incorporates aspects of both precision and recall. The simplest of these combinations is the F-measure (van F-measure Rijsbergen, 1975) , defined as: 4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure F β = (β 2 + 1)PR 4 Naive Bayes and Sentiment Classification 4.7 Evaluation: Precision, Recall, F-measure β 2 P + R The β parameter differentially weights the importance of recall and precision, based perhaps on the needs of an application. Values of β > 1 favor recall, while values of β < 1 favor precision. When β = 1, precision and recall are equally balanced; this is the most frequently used metric, and is called F β =1 or just F 1 : @@ -497,13 +497,13 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing We do this by creating a random variable X ranging over all test sets. Now we ask how likely is it, if the null hypothesis H 0 was correct, that among these test sets we would encounter the value of δ (x) that we found. We formalize this likelihood as the p-value: the probability, assuming the null hypothesis H 0 is true, of seeing p-value the δ (x) that we saw or one even greater 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing P(δ (X) ≥ δ (x)|H 0 is true) (4.21) 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing So in our example, this p-value is the probability that we would see δ (x) assuming A is not better than B. If δ (x) is huge (let's say A has a very respectable F 1 of .9 and B has a terrible F 1 of only .2 on x), we might be surprised, since that would be extremely unlikely to occur if H 0 were in fact true, and so the p-value would be low (unlikely to have such a large δ if A is in fact not better than B). But if δ (x) is very small, it might be less surprising to us even if H 0 were true and A is not really better than B, and so the p-value would be higher. -4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing A very small p-value means that the difference we observed is very unlikely under the null hypothesis, and we can reject the null hypothesis. What counts as very small? It is common to use values like .05 or .01 as the thresholds. A value of .01 means that if the p-value (the probability of observing the δ we saw assuming H 0 is true) is less than .01, we reject the null hypothesis and assume that A is indeed better than B. We say that a result (e.g., "A is better than B") is statistically significant if statistically significant the δ we saw has a probability that is below the threshold and we therefore reject this null hypothesis. How do we compute this probability we need for the p-value? In NLP we generally don't use simple parametric tests like t-tests or ANOVAs that you might be familiar with. 
Parametric tests make assumptions about the distributions of the test statistic (such as normality) that don't generally hold in our cases. So in NLP we usually use non-parametric tests based on sampling: we artificially create many versions of the experimental setup. For example, if we had lots of different test sets x we could just measure all the δ (x ) for all the x . That gives us a distribution. Now we set a threshold (like .01) and if we see in this distribution that 99% or more of those deltas are smaller than the delta we observed, i.e. that p-value(x)-the probability of seeing a δ (x) as big as the one we saw, is less than .01, then we can reject the null hypothesis and agree that δ (x) was a sufficiently surprising difference and A is really a better algorithm than B. +4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing "A very small p-value means that the difference we observed is very unlikely under the null hypothesis, and we can reject the null hypothesis. What counts as very small? It is common to use values like .05 or .01 as the thresholds. A value of .01 means that if the p-value (the probability of observing the δ we saw assuming H 0 is true) is less than .01, we reject the null hypothesis and assume that A is indeed better than B. We say that a result (e.g., ""A is better than B"") is statistically significant if statistically significant the δ we saw has a probability that is below the threshold and we therefore reject this null hypothesis. How do we compute this probability we need for the p-value? In NLP we generally don't use simple parametric tests like t-tests or ANOVAs that you might be familiar with. Parametric tests make assumptions about the distributions of the test statistic (such as normality) that don't generally hold in our cases. So in NLP we usually use non-parametric tests based on sampling: we artificially create many versions of the experimental setup. For example, if we had lots of different test sets x we could just measure all the δ (x ) for all the x . That gives us a distribution. Now we set a threshold (like .01) and if we see in this distribution that 99% or more of those deltas are smaller than the delta we observed, i.e. that p-value(x)-the probability of seeing a δ (x) as big as the one we saw, is less than .01, then we can reject the null hypothesis and agree that δ (x) was a sufficiently surprising difference and A is really a better algorithm than B." 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing There are two common non-parametric tests used in NLP: approximate randomization (Noreen, 1989) and the bootstrap test. We will describe bootstrap approximate randomization below, showing the paired version of the test, which again is most common in NLP. Paired tests are those in which we compare two sets of observations that are aligned: paired each observation in one set can be paired with an observation in another. This happens naturally when we are comparing the performance of two systems on the same test set; we can pair the performance of system A on an individual observation x i with the performance of system B on the same x i . 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test The bootstrap test (Efron and Tibshirani, 1993) can apply to any metric; from prebootstrap test cision, recall, or F1 to the BLEU metric used in machine translation. 
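As a concrete companion to the procedure described in the rest of this subsection, here is a minimal sketch of the paired bootstrap in plain Python (not the text's pseudocode). It assumes accuracy as the metric and per-document 0/1 correctness scores for the two systems; the particular 10-document assignment is made up so that A is right on 7 documents and B on 5, giving δ(x) = .20 as in Fig. 4.8, and the p-value uses the 2δ(x) comparison of Eq. 4.22 below.

```python
import random

def paired_bootstrap_pvalue(scores_a, scores_b, b=10_000, seed=0):
    """Paired bootstrap test on per-document 0/1 scores for systems A and B,
    using accuracy as the metric and the 2*delta(x) comparison of Eq. 4.22."""
    rng = random.Random(seed)
    n = len(scores_a)
    delta_x = (sum(scores_a) - sum(scores_b)) / n      # observed delta(x)
    exceed = 0
    for _ in range(b):
        idx = [rng.randrange(n) for _ in range(n)]     # draw one virtual test set x(i)
        delta_i = sum(scores_a[j] - scores_b[j] for j in idx) / n
        if delta_i >= 2 * delta_x:
            exceed += 1
    return exceed / b

# Made-up assignment: A right on 7 of 10 documents, B on 5, so delta(x) = .20
a_scores = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
b_scores = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
print(paired_bootstrap_pvalue(a_scores, b_scores))
```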
The word bootstrapping refers to repeatedly drawing large numbers of smaller samples with bootstrapping replacement (called bootstrap samples) from an original larger sample. The intuition of the bootstrap test is that we can create many virtual test sets from an observed test set by repeatedly sampling from it. The method only makes the assumption that the sample is representative of the population. 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test Consider a tiny text classification example with a test set x of 10 documents. The first row of Fig. 4 .8 shows the results of two classifiers (A and B) on this test set, with each document labeled by one of the four possibilities: (A and B both right, both wrong, A right and B wrong, A wrong and B right); a slash through a letter ( B) means that that classifier got the answer wrong. On the first document both A and B get the correct class (AB), while on the second document A got it right but B got it wrong (A B). If we assume for simplicity that our metric is accuracy, A has an accuracy of .70 and B of .50, so δ (x) is .20. Now we create a large number b (perhaps 10 5 ) of virtual test sets x (i) , each of size n = 10. Fig. 4 .8 shows a couple examples. To create each virtual test set x (i) , we repeatedly (n = 10 times) select a cell from row x with replacement. For example, to create the first cell of the first virtual test set x (1) , if we happened to randomly select the second cell of the x row; we would copy the value A B into our new cell, and move on to create the second cell of x (1) , each time sampling (randomly choosing) from the original x with replacement. 1 2 3 4 5 6 7 8 9 10 A% B% δ () Now that we have the b test sets, providing a sampling distribution, we can do statistics on how often A has an accidental advantage. There are various ways to compute this advantage; here we follow the version laid out in Berg-Kirkpatrick et al. (2012) . Assuming H 0 (A isn't better than B), we would expect that δ (X), estimated over many test sets, would be zero; a much higher value would be surprising, since H 0 specifically assumes A isn't better than B. To measure exactly how surprising is our observed δ (x) we would in other circumstances compute the p-value by counting over many test sets how often δ (x (i) ) exceeds the expected zero value by δ (x) or more: 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test x AB A B AB AB A B AB A B AB A B A B .70 .50 .20 x (1) A B AB A B AB AB A B AB AB A B AB .60 .60 .00 x (2) A B AB A B AB AB AB AB A B AB AB .60 .70 -.10 ... x (b) 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test p-value(x) = 1 b b i=1 1 δ (x (i) ) − δ (x) ≥ 0 -4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test (We use the notation 1(x) to mean "1 if x is true, and 0 otherwise".) However, although it's generally true that the expected value of δ (X) over many test sets, (again assuming A isn't better than B) is 0, this isn't true for the bootstrapped test sets we created. That's because we didn't draw these samples from a distribution with 0 mean; we happened to create them from the original test set x, which happens to be biased (by .20) in favor of A. 
So to measure how surprising is our observed δ (x), we actually compute the p-value by counting over many test sets how often δ (x (i) ) exceeds the expected value of δ (x) by δ (x) or more: +4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test "(We use the notation 1(x) to mean ""1 if x is true, and 0 otherwise"".) However, although it's generally true that the expected value of δ (X) over many test sets, (again assuming A isn't better than B) is 0, this isn't true for the bootstrapped test sets we created. That's because we didn't draw these samples from a distribution with 0 mean; we happened to create them from the original test set x, which happens to be biased (by .20) in favor of A. So to measure how surprising is our observed δ (x), we actually compute the p-value by counting over many test sets how often δ (x (i) ) exceeds the expected value of δ (x) by δ (x) or more:" 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test p-value(x) = 1 b b i=1 1 δ (x (i) ) − δ (x) ≥ δ (x) = 1 b b i=1 1 δ (x (i) ) ≥ 2δ (x) (4.22) 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test So if for example we have 10,000 test sets x (i) and a threshold of .01, and in only 47 of the test sets do we find that δ (x (i) ) ≥ 2δ (x), the resulting p-value of .0047 is smaller than .01, indicating δ (x) is indeed sufficiently surprising, and we can reject the null hypothesis and conclude A is better than B. 4 Naive Bayes and Sentiment Classification 4.9 Statistical Significance Testing 4.9.1 The Paired Bootstrap Test function BOOTSTRAP(test set x, num of samples b) returns p-value(x) @@ -547,7 +547,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 5 Logistic Regression Logistic regression has two phases: 5 Logistic Regression training: we train the system (specifically the weights w and b) using stochastic gradient descent and the cross-entropy loss. 5 Logistic Regression test: Given a test example x we compute p(y|x) and return the higher probability label y = 1 or y = 0. -5 Logistic Regression 5.1 Classification: the Sigmoid The goal of binary logistic regression is to train a classifier that can make a binary decision about the class of a new input observation. Here we introduce the sigmoid classifier that will help us make this decision. Consider a single input observation x, which we will represent by a vector of features [x 1 , x 2 , ..., x n ] (we'll show sample features in the next subsection). The classifier output y can be 1 (meaning the observation is a member of the class) or 0 (the observation is not a member of the class). We want to know the probability P(y = 1|x) that this observation is a member of the class. So perhaps the decision is "positive sentiment" versus "negative sentiment", the features represent counts of words in a document, P(y = 1|x) is the probability that the document has positive sentiment, and P(y = 0|x) is the probability that the document has negative sentiment. Logistic regression solves this task by learning, from a training set, a vector of weights and a bias term. Each weight w i is a real number, and is associated with one of the input features x i . 
The weight w i represents how important that input feature is to the classification decision, and can be positive (providing evidence that the instance being classified belongs in the positive class) or negative (providing evidence that the instance being classified belongs in the negative class). Thus we might expect in a sentiment task the word awesome to have a high positive weight, and abysmal to have a very negative weight. The bias term, also called the intercept, is bias term intercept another real number that's added to the weighted inputs. +5 Logistic Regression 5.1 Classification: the Sigmoid "The goal of binary logistic regression is to train a classifier that can make a binary decision about the class of a new input observation. Here we introduce the sigmoid classifier that will help us make this decision. Consider a single input observation x, which we will represent by a vector of features [x 1 , x 2 , ..., x n ] (we'll show sample features in the next subsection). The classifier output y can be 1 (meaning the observation is a member of the class) or 0 (the observation is not a member of the class). We want to know the probability P(y = 1|x) that this observation is a member of the class. So perhaps the decision is ""positive sentiment"" versus ""negative sentiment"", the features represent counts of words in a document, P(y = 1|x) is the probability that the document has positive sentiment, and P(y = 0|x) is the probability that the document has negative sentiment. Logistic regression solves this task by learning, from a training set, a vector of weights and a bias term. Each weight w i is a real number, and is associated with one of the input features x i . The weight w i represents how important that input feature is to the classification decision, and can be positive (providing evidence that the instance being classified belongs in the positive class) or negative (providing evidence that the instance being classified belongs in the negative class). Thus we might expect in a sentiment task the word awesome to have a high positive weight, and abysmal to have a very negative weight. The bias term, also called the intercept, is bias term intercept another real number that's added to the weighted inputs." 5 Logistic Regression 5.1 Classification: the Sigmoid To make a decision on a test instance-after we've learned the weights in training-the classifier first multiplies each x i by its weight w i , sums up the weighted features, and adds the bias term b. The resulting single number z expresses the weighted sum of the evidence for the class. 5 Logistic Regression 5.1 Classification: the Sigmoid z = n i=1 w i x i + b (5.2) 5 Logistic Regression 5.1 Classification: the Sigmoid In the rest of the book we'll represent such sums using the dot product notation from dot product linear algebra. The dot product of two vectors a and b, written as a • b is the sum of the products of the corresponding elements of each vector. Thus the following is an equivalent formation to Eq. 5.2: @@ -562,19 +562,19 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 5 Logistic Regression 5.1 Classification: the Sigmoid so we could also have expressed P(y = 0) as σ (−(w • x + b)). 5 Logistic Regression 5.1 Classification: the Sigmoid Now we have an algorithm that given an instance x computes the probability P(y = 1|x). How do we make a decision? For a test instance x, we say yes if the probability P(y = 1|x) is more than .5, and no otherwise. 
We call .5 the decision boundary: 5 Logistic Regression 5.1 Classification: the Sigmoid decision boundary decision(x) = 1 if P(y = 1|x) > 0.5, 0 otherwise -5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification Let's have an example. Suppose we are doing binary sentiment classification on movie review text, and we would like to know whether to assign the sentiment class + or − to a review document doc. We'll represent each input observation by the 6 features x 1 . . . x 6 of the input shown in the following x 3 1 if "no" ∈ doc 0 otherwise 1 +5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification "Let's have an example. Suppose we are doing binary sentiment classification on movie review text, and we would like to know whether to assign the sentiment class + or − to a review document doc. We'll represent each input observation by the 6 features x 1 . . . x 6 of the input shown in the following x 3 1 if ""no"" ∈ doc 0 otherwise 1" 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification x 4 count(1st and 2nd pronouns ∈ doc) 3 -5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification x 5 1 if "!" ∈ doc 0 otherwise 0 +5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification "x 5 1 if ""!"" ∈ doc 0 otherwise 0" 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification x 6 log(word count of doc) ln(66) = 4.19 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification Let's assume for the moment that we've already learned a real-valued weight for each of these features, and that the 6 weights corresponding to the 6 features are [2.5, −5.0, −1.2, 0.5, 2.0, 0.7], while b = 0.1. (We'll discuss in the next section how the weights are learned.) The weight w 1 , for example indicates how important a feature the number of positive lexicon words (great, nice, enjoyable, etc.) is to a positive sentiment decision, while w 2 tells us the importance of negative lexicon words. Note that w 1 = 2.5 is positive, while w 2 = −5.0, meaning that negative words are negatively associated with a positive sentiment decision, and are about twice as important as positive words. 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification Given these 6 features and the input review x, P(+|x) and P(−|x) can be computed using Eq. 5.5: 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification p(+|x) = P(y = 1|x) = σ (w • x + b) = σ ([2.5, −5.0, −1.2, 0.5, 2.0, 0.7] • [3, 2, 1, 3, 0, 4.19] + 0.1) = σ (.833) = 0.70 (5.7) p(−|x) = P(y = 0|x) = 1 − σ (w • x + b) = 0.30 -5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification Logistic regression is commonly applied to all sorts of NLP tasks, and any property of the input can be a feature. Consider the task of period disambiguation: deciding if a period is the end of a sentence or part of a word, by classifying each period into one of two classes EOS (end-of-sentence) and not-EOS. We might use features like x 1 below expressing that the current word is lower case (perhaps with a positive weight), or that the current word is in our abbreviations dictionary ("Prof.") (perhaps with a negative weight). A feature can also express a quite complex combination of properties. 
For example a period following an upper case word is likely to be an EOS, but if the word itself is St. and the previous word is capitalized, then the period is likely part of a shortening of the word street. -5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification x 1 = 1 if "Case(w i ) = Lower" 0 otherwise -5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification x 2 = 1 if "w i ∈ AcronymDict" 0 otherwise x 3 = 1 if "w i = St. & Case(w i−1 ) = Cap" 0 otherwise +5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification "Logistic regression is commonly applied to all sorts of NLP tasks, and any property of the input can be a feature. Consider the task of period disambiguation: deciding if a period is the end of a sentence or part of a word, by classifying each period into one of two classes EOS (end-of-sentence) and not-EOS. We might use features like x 1 below expressing that the current word is lower case (perhaps with a positive weight), or that the current word is in our abbreviations dictionary (""Prof."") (perhaps with a negative weight). A feature can also express a quite complex combination of properties. For example a period following an upper case word is likely to be an EOS, but if the word itself is St. and the previous word is capitalized, then the period is likely part of a shortening of the word street." +5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification "x 1 = 1 if ""Case(w i ) = Lower"" 0 otherwise" +5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification "x 2 = 1 if ""w i ∈ AcronymDict"" 0 otherwise x 3 = 1 if ""w i = St. & Case(w i−1 ) = Cap"" 0 otherwise" 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification Designing features: Features are generally designed by examining the training set with an eye to linguistic intuitions and the linguistic literature on the domain. A careful error analysis on the training set or devset of an early version of a system often provides insights into features. 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification For some tasks it is especially helpful to build complex features that are combinations of more primitive features. We saw such a feature for period disambiguation above, where a period on the word St. was less likely to be the end of the sentence if the previous word was capitalized. For logistic regression and naive Bayes these combination features or feature interactions have to be designed by hand. -5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification For many tasks (especially when feature values can reference specific words) we'll need large numbers of features. Often these are created automatically via feature templates, abstract specifications of features. For example a bigram template feature templates for period disambiguation might create a feature for every pair of words that occurs before a period in the training set. Thus the feature space is sparse, since we only have to create a feature if that n-gram exists in that position in the training set. The feature is generally created as a hash from the string descriptions. A user description of a feature as, "bigram(American breakfast)" is hashed into a unique integer i that becomes the feature number f i . 
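To make the feature-template idea concrete, here is a minimal sketch (the function names and the dictionary-based index are mine, not from the text) of how string feature descriptions like "bigram(American breakfast)" can be mapped to integer feature numbers; hashing the description directly (the hashing trick) is a common alternative when the set of instantiated templates is very large.

```python
# Minimal sketch of bigram feature templates for period disambiguation
# (names are illustrative): every word pair preceding a candidate period
# becomes its own binary feature, mapped to an integer feature number f_i.
feature_index = {}                      # e.g. "bigram=American_breakfast" -> 0

def feature_id(description, grow=True):
    """Map a string feature description to a feature number."""
    if description not in feature_index:
        if not grow:
            return None                 # unseen feature at test time: ignore it
        feature_index[description] = len(feature_index)
    return feature_index[description]

def bigram_template_features(tokens, i):
    """Binary features for a candidate period at position i of a token list."""
    feats = []
    if i >= 2:
        feats.append(feature_id(f"bigram={tokens[i-2]}_{tokens[i-1]}"))
    return [f for f in feats if f is not None]

print(bigram_template_features("I had an American breakfast .".split(), 5))  # [0]
```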
+5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification "For many tasks (especially when feature values can reference specific words) we'll need large numbers of features. Often these are created automatically via feature templates, abstract specifications of features. For example a bigram template feature templates for period disambiguation might create a feature for every pair of words that occurs before a period in the training set. Thus the feature space is sparse, since we only have to create a feature if that n-gram exists in that position in the training set. The feature is generally created as a hash from the string descriptions. A user description of a feature as, ""bigram(American breakfast)"" is hashed into a unique integer i that becomes the feature number f i ." 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification In order to avoid the extensive human effort of feature design, recent research in NLP has focused on representation learning: ways to learn features automatically in an unsupervised way from the input. We'll introduce methods for representation learning in Chapter 6 and Chapter 7. 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification Choosing a classifier Logistic regression has a number of advantages over naive Bayes. Naive Bayes has overly strong conditional independence assumptions. Consider two features which are strongly correlated; in fact, imagine that we just add the same feature f 1 twice. Naive Bayes will treat both copies of f 1 as if they were separate, multiplying them both in, overestimating the evidence. By contrast, logistic regression is much more robust to correlated features; if two features f 1 and f 2 are perfectly correlated, regression will simply assign part of the weight to w 1 and part to w 2 . Thus when there are many correlated features, logistic regression will assign a more accurate probability than naive Bayes. So logistic regression generally works better on larger documents or datasets and is a common default. 5 Logistic Regression 5.1 Classification: the Sigmoid 5.1.1 Example: Sentiment Classification Despite the less accurate probabilities, naive Bayes still often makes the correct classification decision. Furthermore, naive Bayes can work extremely well (sometimes even better than logistic regression) on very small datasets (Ng and Jordan, 2002) or short documents (Wang and Manning, 2012). Furthermore, naive Bayes is easy to implement and very fast to train (there's no optimization step). So it's still a reasonable approach to use in some situations. @@ -595,7 +595,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 5 Logistic Regression 5.3 The Cross-Entropy Loss Function L CE (ŷ, y) = − [y log σ (w • x + b) + (1 − y) log (1 − σ (w • x + b))] (5.12) 5 Logistic Regression 5.3 The Cross-Entropy Loss Function Let's see if this loss function does the right thing for our example from Fig. 5 .2. We want the loss to be smaller if the model's estimate is close to correct, and bigger if the model is confused. So first let's suppose the correct gold label for the sentiment example in Fig. 5 .2 is positive, i.e., y = 1. In this case our model is doing well, since from Eq. 5.7 it indeed gave the example a higher probability of being positive (.70) than negative (.30). If we plug σ (w • x + b) = .70 and y = 1 into Eq. 
5.12, the right side of the equation drops out, leading to the following loss (we'll use log to mean natural log when the base is not specified): 5 Logistic Regression 5.3 The Cross-Entropy Loss Function $L_{CE}(\hat{y}, y) = -[y \log \sigma(w \cdot x + b) + (1 - y) \log(1 - \sigma(w \cdot x + b))] = -[\log \sigma(w \cdot x + b)] = -\log(.70) = .36$ -5 Logistic Regression 5.3 The Cross-Entropy Loss Function By contrast, let's pretend instead that the example in Fig. 5 .2 was actually negative, i.e., y = 0 (perhaps the reviewer went on to say "But bottom line, the movie is terrible! I beg you not to see it!"). In this case our model is confused and we'd want the loss to be higher. Now if we plug y = 0 and 1 − σ (w • x + b) = .31 from Eq. 5.7 into Eq. 5.12, the left side of the equation drops out: +5 Logistic Regression 5.3 The Cross-Entropy Loss Function "By contrast, let's pretend instead that the example in Fig. 5.2 was actually negative, i.e., y = 0 (perhaps the reviewer went on to say ""But bottom line, the movie is terrible! I beg you not to see it!""). In this case our model is confused and we'd want the loss to be higher. Now if we plug y = 0 and 1 − σ (w • x + b) = .30 from Eq. 5.7 into Eq. 5.12, the left side of the equation drops out:" 5 Logistic Regression 5.3 The Cross-Entropy Loss Function $L_{CE}(\hat{y}, y) = -[y \log \sigma(w \cdot x + b) + (1 - y) \log(1 - \sigma(w \cdot x + b))] = -[\log(1 - \sigma(w \cdot x + b))] = -\log(.30) = 1.2$ 5 Logistic Regression 5.3 The Cross-Entropy Loss Function Sure enough, the loss for the first classifier (.36) is less than the loss for the second classifier (1.2). Why does minimizing this negative log probability do what we want? A perfect classifier would assign probability 1 to the correct outcome (y=1 or y=0) and probability 0 to the incorrect outcome. That means the higher ŷ (the closer it is to 1), the better the classifier; the lower ŷ is (the closer it is to 0), the worse the classifier. The negative log of this probability is a convenient loss metric since it goes from 0 (negative log of 1, no loss) to infinity (negative log of 0, infinite loss). This loss function also ensures that as the probability of the correct answer is maximized, the probability of the incorrect answer is minimized; since the two sum to one, any increase in the probability of the correct answer is coming at the expense of the incorrect answer. It's called the cross-entropy loss, because Eq. 5.10 is also the formula for the cross-entropy between the true probability distribution y and our estimated distribution ŷ. 5 Logistic Regression 5.3 The Cross-Entropy Loss Function Now we know what we want to minimize; in the next section, we'll see how to find the minimum. @@ -606,7 +606,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 5 Logistic Regression 5.4 Gradient Descent Given a random initialization of w at some value w 1 , and assuming the loss function L happened to have the shape in Fig. 5.3, we need the algorithm to tell us whether at the next iteration we should move left (making w 2 smaller than w 1 ) or right (making w 2 bigger than w 1 ) to reach the minimum. Fig. 5.3 (caption): The first step in iteratively finding the minimum of this loss function, by moving w in the reverse direction from the slope of the function. Since the slope is negative, we need to move w in a positive direction, to the right. Here superscripts are used for learning steps, so w 1 means the initial value of w (which is 0), w 2 at the second step, and so on. 
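A quick numeric check of the two losses computed above (a minimal sketch, assuming the values σ(w·x+b) = .70 and 1 − σ(w·x+b) = .30 from Eq. 5.7):

```python
import math

def cross_entropy_loss(y_hat, y):
    # Eq. 5.12, with y_hat = sigma(w . x + b)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

y_hat = 0.70                            # P(y=1|x) for the review in Fig. 5.2 (Eq. 5.7)
print(cross_entropy_loss(y_hat, y=1))   # ~0.36: gold label is positive, model does well
print(cross_entropy_loss(y_hat, y=0))   # ~1.20: gold label is negative, model is confused
```

The second loss is larger, as desired; gradient descent, discussed next, is what drives these losses down during training.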
5 Logistic Regression 5.4 Gradient Descent The gradient descent algorithm answers this question by finding the gradient gradient of the loss function at the current point and moving in the opposite direction. The gradient of a function of many variables is a vector pointing in the direction of the greatest increase in a function. The gradient is a multi-variable generalization of the slope, so for a function of one variable like the one in Fig. 5 .3, we can informally think of the gradient as the slope. The dotted line in Fig. 5 .3 shows the slope of this hypothetical loss function at point w = w 1 . You can see that the slope of this dotted line is negative. Thus to find the minimum, gradient descent tells us to go in the opposite direction: moving w in a positive direction. The magnitude of the amount to move in gradient descent is the value of the slope d dw L( f (x; w), y) weighted by a learning rate η. A higher (faster) learning learning rate rate means that we should move w more on each step. The change we make in our parameter is the learning rate times the gradient (or the slope, in our single-variable example): 5 Logistic Regression 5.4 Gradient Descent w t+1 = w t − η d dw L( f (x; w), y) (5.14) -5 Logistic Regression 5.4 Gradient Descent Now let's extend the intuition from a function of one scalar variable w to many variables, because we don't just want to move left or right, we want to know where in the N-dimensional space (of the N parameters that make up θ ) we should move. The gradient is just such a vector; it expresses the directional components of the sharpest slope along each of those N dimensions. If we're just imagining two weight dimensions (say for one weight w and one bias b), the gradient might be a vector with two orthogonal components, each of which tells us how much the ground slopes in the w dimension and in the b dimension. In an actual logistic regression, the parameter vector w is much longer than 1 or 2, since the input feature vector x can be quite long, and we need a weight w i for each x i . For each dimension/variable w i in w (plus the bias b), the gradient will have a component that tells us the slope with respect to that variable. Essentially we're asking: "How much would a small change in that variable w i influence the total loss function L?" +5 Logistic Regression 5.4 Gradient Descent "Now let's extend the intuition from a function of one scalar variable w to many variables, because we don't just want to move left or right, we want to know where in the N-dimensional space (of the N parameters that make up θ ) we should move. The gradient is just such a vector; it expresses the directional components of the sharpest slope along each of those N dimensions. If we're just imagining two weight dimensions (say for one weight w and one bias b), the gradient might be a vector with two orthogonal components, each of which tells us how much the ground slopes in the w dimension and in the b dimension. In an actual logistic regression, the parameter vector w is much longer than 1 or 2, since the input feature vector x can be quite long, and we need a weight w i for each x i . For each dimension/variable w i in w (plus the bias b), the gradient will have a component that tells us the slope with respect to that variable. 
Essentially we're asking: ""How much would a small change in that variable w i influence the total loss function L?""" 5 Logistic Regression 5.4 Gradient Descent In each dimension w i , we express the slope as a partial derivative ∂ ∂ w i of the loss function. The gradient is then defined as a vector of these partials. We'll representŷ as f (x; θ ) to make the dependence on θ more obvious: 5 Logistic Regression 5.4 Gradient Descent ∇ θ L( f (x; θ ), y)) =         ∂ ∂ w 1 L( f (x; θ ), y) ∂ ∂ w 2 L( f (x; θ ), y) . . . ∂ ∂ w n L( f (x; θ ), y) ∂ ∂ b L( f (x; θ ), y)         (5.15) 5 Logistic Regression 5.4 Gradient Descent The final equation for updating θ based on the gradient is thus @@ -616,7 +616,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 5 Logistic Regression 5.4 Gradient Descent 5.4.1 The Gradient for Logistic Regression It turns out that the derivative of this function for one observation vector x is Eq. 5.18 (the interested reader can see Section 5.8 for the derivation of this equation): 5 Logistic Regression 5.4 Gradient Descent 5.4.1 The Gradient for Logistic Regression ∂ L CE (ŷ, y) ∂ w j = [σ (w • x + b) − y]x j (5.18) 5 Logistic Regression 5.4 Gradient Descent 5.4.1 The Gradient for Logistic Regression Note in Eq. 5.18 that the gradient with respect to a single weight w j represents a very intuitive value: the difference between the true y and our estimatedŷ = σ (w • x + b) for that observation, multiplied by the corresponding input value x j . -5 Logistic Regression 5.4 Gradient Descent 5.4.2 The Stochastic Gradient Descent Algorithm Stochastic gradient descent is an online algorithm that minimizes the loss function by computing its gradient after each training example, and nudging θ in the right direction (the opposite direction of the gradient). (an "online algorithm" is one that processes its input example by example, rather than waiting until it sees the entire input). x is the set of training inputs +5 Logistic Regression 5.4 Gradient Descent 5.4.2 The Stochastic Gradient Descent Algorithm "Stochastic gradient descent is an online algorithm that minimizes the loss function by computing its gradient after each training example, and nudging θ in the right direction (the opposite direction of the gradient). (an ""online algorithm"" is one that processes its input example by example, rather than waiting until it sees the entire input). x is the set of training inputs" 5 Logistic Regression 5.4 Gradient Descent 5.4.2 The Stochastic Gradient Descent Algorithm x (1) , x (2) , ..., x (m) # 5 Logistic Regression 5.4 Gradient Descent 5.4.2 The Stochastic Gradient Descent Algorithm y is the set of training outputs (labels) y (1) , y (2) , ..., y (m) θ ← 0 repeat til done # see caption For each training tuple (x (i) , y (i) ) (in random order) 1. Optional (for reporting): # How are we doing on this tuple? Computeŷ (i) = f (x (i) ; θ ) # What is our estimated outputŷ? Compute the loss L(ŷ (i) , y (i) ) # How far off isŷ (i) from the true output 5 Logistic Regression 5.4 Gradient Descent 5.4.2 The Stochastic Gradient Descent Algorithm y (i) ? 2. 
g ← ∇_θ L(f(x^{(i)}; θ), y^{(i)})
@@ -679,10 +679,10 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N
5 Logistic Regression 5.6 Multinomial Logistic Regression p(y = c|x) = \frac{\exp(w_c · x + b_c)}{\sum_{j=1}^{K} \exp(w_j · x + b_j)} (5.32)
5 Logistic Regression 5.6 Multinomial Logistic Regression Like the sigmoid, the softmax has the property of squashing values toward 0 or 1. Thus if one of the inputs is larger than the others, it will tend to push its probability toward 1, and suppress the probabilities of the smaller inputs.
5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.1 Features in Multinomial Logistic Regression Features in multinomial logistic regression function similarly to binary logistic regression, with one difference that we'll need separate weight vectors (and biases) for each of the K classes. Recall our binary exclamation point feature x_5 from page 80:
-5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.1 Features in Multinomial Logistic Regression x 5 = 1 if "!" ∈ doc 0 otherwise
+5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.1 Features in Multinomial Logistic Regression "x_5 = 1 if ""!"" ∈ doc, 0 otherwise"
5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.1 Features in Multinomial Logistic Regression In binary classification a positive weight w_5 on a feature influences the classifier toward y = 1 (positive sentiment) and a negative weight influences it toward y = 0 (negative sentiment), with the absolute value indicating how important the feature is. For multinomial logistic regression, by contrast, with separate weights for each class, a feature can be evidence for or against each individual class.
5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.1 Features in Multinomial Logistic Regression In 3-way multiclass sentiment classification, for example, we must assign each document one of the 3 classes +, −, or 0 (neutral). Now a feature related to exclamation marks might have a negative weight for 0 documents, and a positive weight for + or − documents:
-5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.1 Features in Multinomial Logistic Regression Feature Definition w 5,+ w 5,− w 5,0 f 5 (x) 1 if "!" ∈ doc 0 otherwise 3.5 3.1 −5.3
+5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.1 Features in Multinomial Logistic Regression "Feature f_5(x): 1 if ""!"" ∈ doc, 0 otherwise; weights: w_{5,+} = 3.5, w_{5,−} = 3.1, w_{5,0} = −5.3"
5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.1 Features in Multinomial Logistic Regression Because these feature weights are dependent both on the input text and the output class, we sometimes make this dependence explicit and represent the features themselves as f(x, y): a function of both the input and the class. Using such a notation, f_5(x) above could be represented as three features f_5(x, +), f_5(x, −), and f_5(x, 0), each of which has a single weight.
5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.2 Learning in Multinomial Logistic Regression The loss function for multinomial logistic regression generalizes the loss function for binary logistic regression from 2 to K classes. Recall that the cross-entropy loss for binary logistic regression (repeated from Eq.
5.11) is: 5 Logistic Regression 5.6 Multinomial Logistic Regression 5.6.2 Learning in Multinomial Logistic Regression L CE (ŷ, y) = − log p(y|x) = − [y logŷ + (1 − y) log(1 −ŷ)] (5.33) @@ -717,15 +717,15 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 6 Vector Semantics and Embeddings Nets are for fish; Once you get the fish, you can forget the net. 6 Vector Semantics and Embeddings Words are for meaning; Once you get the meaning, you can forget the words 6 Vector Semantics and Embeddings 庄子(Zhuangzi), Chapter 26 -6 Vector Semantics and Embeddings The asphalt that Los Angeles is famous for occurs mainly on its freeways. But in the middle of the city is another patch of asphalt, the La Brea tar pits, and this asphalt preserves millions of fossil bones from the last of the Ice Ages of the Pleistocene Epoch. One of these fossils is the Smilodon, or saber-toothed tiger, instantly recognizable by its long canines. Five million years ago or so, a completely different sabre-tooth tiger called Thylacosmilus lived in Argentina and other parts of South America. Thylacosmilus was a marsupial whereas Smilodon was a placental mammal, but Thylacosmilus had the same long upper canines and, like Smilodon, had a protective bone flange on the lower jaw. The similarity of these two mammals is one of many examples of parallel or convergent evolution, in which particular contexts or environments lead to the evolution of very similar structures in different species (Gould, 1980) . The role of context is also important in the similarity of a less biological kind of organism: the word. Words that occur in similar contexts tend to have similar meanings. This link between similarity in how words are distributed and similarity in what they mean is called the distributional hypothesis. The hypothesis was distributional hypothesis first formulated in the 1950s by linguists like Joos (1950), Harris (1954) , and Firth (1957), who noticed that words which are synonyms (like oculist and eye-doctor) tended to occur in the same environment (e.g., near words like eye or examined) with the amount of meaning difference between two words "corresponding roughly to the amount of difference in their environments" (Harris, 1954, 157) . In this chapter we introduce vector semantics, which instantiates this linguistic vector semantics hypothesis by learning representations of the meaning of words, called embeddings, embeddings directly from their distributions in texts. These representations are used in every natural language processing application that makes use of meaning, and the static embeddings we introduce here underlie the more powerful dynamic or contextualized embeddings like BERT that we will see in Chapter 11. +6 Vector Semantics and Embeddings "The asphalt that Los Angeles is famous for occurs mainly on its freeways. But in the middle of the city is another patch of asphalt, the La Brea tar pits, and this asphalt preserves millions of fossil bones from the last of the Ice Ages of the Pleistocene Epoch. One of these fossils is the Smilodon, or saber-toothed tiger, instantly recognizable by its long canines. Five million years ago or so, a completely different sabre-tooth tiger called Thylacosmilus lived in Argentina and other parts of South America. Thylacosmilus was a marsupial whereas Smilodon was a placental mammal, but Thylacosmilus had the same long upper canines and, like Smilodon, had a protective bone flange on the lower jaw. 
The similarity of these two mammals is one of many examples of parallel or convergent evolution, in which particular contexts or environments lead to the evolution of very similar structures in different species (Gould, 1980) . The role of context is also important in the similarity of a less biological kind of organism: the word. Words that occur in similar contexts tend to have similar meanings. This link between similarity in how words are distributed and similarity in what they mean is called the distributional hypothesis. The hypothesis was distributional hypothesis first formulated in the 1950s by linguists like Joos (1950), Harris (1954) , and Firth (1957), who noticed that words which are synonyms (like oculist and eye-doctor) tended to occur in the same environment (e.g., near words like eye or examined) with the amount of meaning difference between two words ""corresponding roughly to the amount of difference in their environments"" (Harris, 1954, 157) . In this chapter we introduce vector semantics, which instantiates this linguistic vector semantics hypothesis by learning representations of the meaning of words, called embeddings, embeddings directly from their distributions in texts. These representations are used in every natural language processing application that makes use of meaning, and the static embeddings we introduce here underlie the more powerful dynamic or contextualized embeddings like BERT that we will see in Chapter 11." 6 Vector Semantics and Embeddings These word representations are also the first example in this book of representation learning, automatically learning useful representations of the input text. 6 Vector Semantics and Embeddings 6.1 Lexical Semantics Finding such self-supervised ways to learn representations of the input, instead of creating representations by hand via feature engineering, is an important focus of NLP research (Bengio et al., 2013). -6 Vector Semantics and Embeddings 6.1 Lexical Semantics Let's begin by introducing some basic principles of word meaning. How should we represent the meaning of a word? In the n-gram models of Chapter 3, and in classical NLP applications, our only representation of a word is as a string of letters, or an index in a vocabulary list. This representation is not that different from a tradition in philosophy, perhaps you've seen it in introductory logic classes, in which the meaning of words is represented by just spelling the word with small capital letters; representing the meaning of "dog" as DOG, and "cat" as CAT. +6 Vector Semantics and Embeddings 6.1 Lexical Semantics "Let's begin by introducing some basic principles of word meaning. How should we represent the meaning of a word? In the n-gram models of Chapter 3, and in classical NLP applications, our only representation of a word is as a string of letters, or an index in a vocabulary list. This representation is not that different from a tradition in philosophy, perhaps you've seen it in introductory logic classes, in which the meaning of words is represented by just spelling the word with small capital letters; representing the meaning of ""dog"" as DOG, and ""cat"" as CAT." 6 Vector Semantics and Embeddings 6.1 Lexical Semantics Representing the meaning of a word by capitalizing it is a pretty unsatisfactory model. You might have seen a joke due originally to semanticist Barbara Partee (Carlson, 1977) : Q: What's the meaning of life? A: LIFE' Surely we can do better than this! 
After all, we'll want a model of word meaning to do all sorts of things for us. It should tell us that some words have similar meanings (cat is similar to dog), others are antonyms (cold is the opposite of hot), some have positive connotations (happy) while others have negative connotations (sad). It should represent the fact that the meanings of buy, sell, and pay offer differing perspectives on the same underlying purchasing event (If I buy something from you, you've probably sold it to me, and I likely paid you). More generally, a model of word meaning should allow us to draw inferences to address meaning-related tasks like question-answering or dialogue. 6 Vector Semantics and Embeddings 6.1 Lexical Semantics In this section we summarize some of these desiderata, drawing on results in the linguistic study of word meaning, which is called lexical semantics; we'll return to lexical semantics and expand on this list in Chapter 18 and Chapter 10. 6 Vector Semantics and Embeddings 6.1 Lexical Semantics Lemmas and Senses Let's start by looking at how one word (we'll choose mouse) might be defined in a dictionary (simplified from the online dictionary WordNet): mouse (N) 1. any of numerous small rodents... 2. a hand-operated device that controls a cursor... -6 Vector Semantics and Embeddings 6.1 Lexical Semantics Here the form mouse is the lemma, also called the citation form. The form lemma citation form mouse would also be the lemma for the word mice; dictionaries don't have separate definitions for inflected forms like mice. Similarly sing is the lemma for sing, sang, sung. In many languages the infinitive form is used as the lemma for the verb, so Spanish dormir "to sleep" is the lemma for duermes "you sleep". The specific forms sung or carpets or sing or duermes are called wordforms. -6 Vector Semantics and Embeddings 6.1 Lexical Semantics wordform As the example above shows, each lemma can have multiple meanings; the lemma mouse can refer to the rodent or the cursor control device. We call each of these aspects of the meaning of mouse a word sense. The fact that lemmas can be polysemous (have multiple senses) can make interpretation difficult (is someone who types "mouse info" into a search engine looking for a pet or a tool?). Chapter 18 will discuss the problem of polysemy, and introduce word sense disambiguation, the task of determining which sense of a word is being used in a particular context. +6 Vector Semantics and Embeddings 6.1 Lexical Semantics "Here the form mouse is the lemma, also called the citation form. The form lemma citation form mouse would also be the lemma for the word mice; dictionaries don't have separate definitions for inflected forms like mice. Similarly sing is the lemma for sing, sang, sung. In many languages the infinitive form is used as the lemma for the verb, so Spanish dormir ""to sleep"" is the lemma for duermes ""you sleep"". The specific forms sung or carpets or sing or duermes are called wordforms." +6 Vector Semantics and Embeddings 6.1 Lexical Semantics "wordform As the example above shows, each lemma can have multiple meanings; the lemma mouse can refer to the rodent or the cursor control device. We call each of these aspects of the meaning of mouse a word sense. The fact that lemmas can be polysemous (have multiple senses) can make interpretation difficult (is someone who types ""mouse info"" into a search engine looking for a pet or a tool?). 
Chapter 18 will discuss the problem of polysemy, and introduce word sense disambiguation, the task of determining which sense of a word is being used in a particular context."
6 Vector Semantics and Embeddings 6.1 Lexical Semantics Synonymy One important component of word meaning is the relationship between word senses. For example when one word has a sense whose meaning is identical to a sense of another word, or nearly identical, we say the two senses of those two words are synonyms. Synonyms include such pairs as couch/sofa, vomit/throw up, filbert/hazelnut, and car/automobile. A more formal definition of synonymy (between words rather than senses) is that two words are synonymous if they are substitutable for one another in any sentence without changing the truth conditions of the sentence, the situations in which the sentence would be true. We often say in this case that the two words have the same propositional meaning.
6 Vector Semantics and Embeddings 6.1 Lexical Semantics While substitutions between some pairs of words like car / automobile or water / H2O are truth preserving, the words are still not identical in meaning. Indeed, probably no two words are absolutely identical in meaning. One of the fundamental tenets of semantics, called the principle of contrast (Girard 1718, Bréal 1897, Clark 1987), states that a difference in linguistic form is always associated with some difference in meaning. For example, the word H2O is used in scientific contexts and would be inappropriate in a hiking guide (water would be more appropriate), and this genre difference is part of the meaning of the word. In practice, the word synonym is therefore used to describe a relationship of approximate or rough synonymy.
@@ -741,10 +741,10 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N
6 Vector Semantics and Embeddings 6.1 Lexical Semantics Controlling is high on dominance, while awed or influenced are low on dominance. Each word is thus represented by three numbers, corresponding to its value on each of the three dimensions. Osgood et al. (1957) noticed that in using these 3 numbers to represent the meaning of a word, the model was representing each word as a point in a three-dimensional space, a vector whose three dimensions corresponded to the word's rating on the three scales. This revolutionary idea that word meaning could be represented as a point in space (e.g., that part of the meaning of heartbreak can be represented as the point [2.45, 5.65, 3.58]) was the first expression of the vector semantics models that we introduce next.
6 Vector Semantics and Embeddings 6.2 Vector Semantics Vector semantics is the standard way to represent word meaning in NLP, helping us model many of the aspects of word meaning we saw in the previous section. The roots of the model lie in the 1950s when two big ideas converged: Osgood's 1957 idea mentioned above to use a point in three-dimensional space to represent the connotation of a word, and the proposal by linguists like Joos (1950), Harris (1954), and Firth (1957) to define the meaning of a word by its distribution in language use, meaning its neighboring words or grammatical environments. Their idea was that two words that occur in very similar distributions (whose neighboring words are similar) have similar meanings.
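As a toy illustration of that distributional idea, the sketch below counts the words that occur within a ±2 word window of three target words in a tiny invented corpus; the overlap in their context counts is what a distributional model would treat as evidence of similar meaning. The corpus, the window size, and the word choices are all invented for the example.

```python
from collections import Counter

def context_counts(tokens, target, window=2):
    """Count the words occurring within +/- window of each occurrence of target."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for t in tokens[lo:hi] if t != target)
    return counts

# Invented mini-corpus in which spinach and chard share cooking contexts.
corpus = ("delicious spinach sauteed with garlic . "
          "delicious chard sauteed with garlic . "
          "keyboard plugged into the computer .").split()

spinach = context_counts(corpus, "spinach")
chard = context_counts(corpus, "chard")
keyboard = context_counts(corpus, "keyboard")
print(set(spinach) & set(chard))     # {'delicious', 'sauteed', 'with'}
print(set(spinach) & set(keyboard))  # set(): no shared contexts
```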
6 Vector Semantics and Embeddings 6.2 Vector Semantics For example, suppose you didn't know the meaning of the word ongchoi (a recent borrowing from Cantonese) but you see it in the following contexts: The fact that ongchoi occurs with words like rice and garlic and delicious and salty, as do words like spinach, chard, and collard greens might suggest that ongchoi is a leafy green similar to these other leafy greens. 1 We can do the same thing computationally by just counting words in the context of ongchoi. -6 Vector Semantics and Embeddings 6.2 Vector Semantics The idea of vector semantics is to represent a word as a point in a multidimensional semantic space that is derived (in ways we'll see) from the distributions of word neighbors. Vectors for representing words are called embeddings (although embeddings the term is sometimes more strictly applied only to dense vectors like word2vec (Section 6.8), rather than sparse tf-idf or PPMI vectors (Section 6.3-Section 6.6)). The word "embedding" derives from its mathematical sense as a mapping from one space or structure to another, although the meaning has shifted; see the end of the chapter. Figure 6 .1 A two-dimensional (t-SNE) projection of embeddings for some words and phrases, showing that words with similar meanings are nearby in space. The original 60dimensional embeddings were trained for sentiment analysis. Simplified from Li et al. 2015with colors added for explanation. +6 Vector Semantics and Embeddings 6.2 Vector Semantics "The idea of vector semantics is to represent a word as a point in a multidimensional semantic space that is derived (in ways we'll see) from the distributions of word neighbors. Vectors for representing words are called embeddings (although embeddings the term is sometimes more strictly applied only to dense vectors like word2vec (Section 6.8), rather than sparse tf-idf or PPMI vectors (Section 6.3-Section 6.6)). The word ""embedding"" derives from its mathematical sense as a mapping from one space or structure to another, although the meaning has shifted; see the end of the chapter. Figure 6 .1 A two-dimensional (t-SNE) projection of embeddings for some words and phrases, showing that words with similar meanings are nearby in space. The original 60dimensional embeddings were trained for sentiment analysis. Simplified from Li et al. 2015with colors added for explanation." 6 Vector Semantics and Embeddings 6.2 Vector Semantics The fine-grained model of word similarity of vector semantics offers enormous power to NLP applications. NLP applications like the sentiment classifiers of Chapter 4 or Chapter 5 depend on the same words appearing in the training and test sets. But by representing words as embeddings, classifiers can assign sentiment as long as it sees some words with similar meanings. And as we'll see, vector semantic models can be learned automatically from text without supervision. 6 Vector Semantics and Embeddings 6.2 Vector Semantics In this chapter we'll introduce the two most commonly used models. In the tf-idf model, an important baseline, the meaning of a word is defined by a simple function of the counts of nearby words. We will see that this method results in very long vectors that are sparse, i.e. mostly zeros (since most words simply never occur in the context of others). We'll introduce the word2vec model family for constructing short, dense vectors that have useful semantic properties. 
We'll also introduce the cosine, the standard way to use embeddings to compute semantic similarity, between two words, two sentences, or two documents, an important tool in practical applications like question answering, summarization, or automatic essay grading. -6 Vector Semantics and Embeddings 6.3 Words and Vectors "The most important attributes of a vector in 3-space are {Location, Location, Location}" Randall Munroe, https://xkcd.com/2358/ Vector or distributional models of meaning are generally based on a co-occurrence matrix, a way of representing how often words co-occur. We'll look at two popular matrices: the term-document matrix and the term-term matrix. +6 Vector Semantics and Embeddings 6.3 Words and Vectors """The most important attributes of a vector in 3-space are {Location, Location, Location}"" Randall Munroe, https://xkcd.com/2358/ Vector or distributional models of meaning are generally based on a co-occurrence matrix, a way of representing how often words co-occur. We'll look at two popular matrices: the term-document matrix and the term-term matrix." 6 Vector Semantics and Embeddings 6.3 Words and Vectors 6.3.1 Vectors and documents In a term-document matrix, each row represents a word in the vocabulary and each term-document matrix column represents a document from some collection of documents. Fig. 6 .2 shows a small selection from a term-document matrix showing the occurrence of four words in four plays by Shakespeare. Each cell in this matrix represents the number of times a particular word (defined by the row) occurs in a particular document (defined by the column). Thus fool appeared 58 times in Twelfth Night. 6 Vector Semantics and Embeddings 6.3 Words and Vectors 6.3.1 Vectors and documents The term-document matrix of Fig. 6 .2 was first defined as part of the vector space model of information retrieval (Salton, 1971). In this model, a document is represented as a count vector, a column in Fig. 6.3. 6 Vector Semantics and Embeddings 6.3 Words and Vectors 6.3.1 Vectors and documents To review some basic linear algebra, a vector is, at heart, just a list or array of numbers. So As You Like It is represented as the list [1,114,36,20] (the first column vector in Fig. 6.3) and Julius Caesar is represented as the list [7,62,1,2] (the third column vector). A vector space is a collection of vectors, characterized by their dimension. @@ -817,8 +817,8 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 6 Vector Semantics and Embeddings 6.8 Word2Vec In the previous sections we saw how to represent a word as a sparse, long vector with dimensions corresponding to words in the vocabulary or documents in a collection. We now introduce a more powerful word representation: embeddings, short dense vectors. Unlike the vectors we've seen so far, embeddings are short, with number of dimensions d ranging from 50-1000, rather than the much larger vocabulary size |V | or number of documents D we've seen. These d dimensions don't have a clear interpretation. And the vectors are dense: instead of vector entries being sparse, mostly-zero counts or functions of counts, the values will be real-valued numbers that can be negative. 6 Vector Semantics and Embeddings 6.8 Word2Vec It turns out that dense vectors work better in every NLP task than sparse vectors. While we don't completely understand all the reasons for this, we have some intuitions. 
Representing words as 300-dimensional dense vectors requires our classifiers to learn far fewer weights than if we represented words as 50,000-dimensional vectors, and the smaller parameter space possibly helps with generalization and avoiding overfitting. Dense vectors may also do a better job of capturing synonymy. For example, in a sparse vector representation, dimensions for synonyms like car and automobile dimension are distinct and unrelated; sparse vectors may thus fail to capture the similarity between a word with car as a neighbor and a word with automobile as a neighbor. 6 Vector Semantics and Embeddings 6.8 Word2Vec In this section we introduce one method for computing embeddings: skip-gram vocabulary. In Chapter 11 we'll introduce methods for learning dynamic contextual embeddings like the popular family of BERT representations, in which the vector for each word is different in different contexts. -6 Vector Semantics and Embeddings 6.8 Word2Vec The intuition of word2vec is that instead of counting how often each word w occurs near, say, apricot, we'll instead train a classifier on a binary prediction task: "Is word w likely to show up near apricot?" We don't actually care about this prediction task; instead we'll take the learned classifier weights as the word embeddings. -6 Vector Semantics and Embeddings 6.8 Word2Vec The revolutionary intuition here is that we can just use running text as implicitly supervised training data for such a classifier; a word c that occurs near the target word apricot acts as gold 'correct answer' to the question "Is word c likely to show up near apricot?" This method, often called self-supervision, avoids the need for self-supervision any sort of hand-labeled supervision signal. This idea was first proposed in the task of neural language modeling, when Bengio et al. (2003) and Collobert et al. 2011showed that a neural language model (a neural network that learned to predict the next word from prior words) could just use the next word in running text as its supervision signal, and could be used to learn an embedding representation for each word as part of doing this prediction task. +6 Vector Semantics and Embeddings 6.8 Word2Vec "The intuition of word2vec is that instead of counting how often each word w occurs near, say, apricot, we'll instead train a classifier on a binary prediction task: ""Is word w likely to show up near apricot?"" We don't actually care about this prediction task; instead we'll take the learned classifier weights as the word embeddings." +6 Vector Semantics and Embeddings 6.8 Word2Vec "The revolutionary intuition here is that we can just use running text as implicitly supervised training data for such a classifier; a word c that occurs near the target word apricot acts as gold 'correct answer' to the question ""Is word c likely to show up near apricot?"" This method, often called self-supervision, avoids the need for self-supervision any sort of hand-labeled supervision signal. This idea was first proposed in the task of neural language modeling, when Bengio et al. (2003) and Collobert et al. 2011showed that a neural language model (a neural network that learned to predict the next word from prior words) could just use the next word in running text as its supervision signal, and could be used to learn an embedding representation for each word as part of doing this prediction task." 
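A minimal sketch of the binary prediction just described: the classifier's probability that a context word really occurs near the target can be computed as a sigmoid of the dot product between their embeddings (as the chapter's summary also puts it). The 4-dimensional vectors below are made up purely for illustration; trained skip-gram embeddings would be learned, not hand-set.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_positive(w_target, c_context):
    """Probability that c_context is a real neighbor of w_target:
    a dot product turned into a probability by the sigmoid."""
    return sigmoid(np.dot(w_target, c_context))

# Made-up 4-dimensional embeddings, just for illustration.
apricot = np.array([ 0.8,  0.1, -0.4,  0.3])
jam     = np.array([ 0.7,  0.2, -0.3,  0.1])   # plausible neighbor
matrix  = np.array([-0.6,  0.4,  0.5, -0.2])   # unlikely neighbor

print(round(p_positive(apricot, jam), 2))      # about 0.67: likely neighbor
print(round(p_positive(apricot, matrix), 2))   # about 0.33: unlikely neighbor
```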
6 Vector Semantics and Embeddings 6.8 Word2Vec We'll see how to do neural networks in the next chapter, but word2vec is a much simpler model than the neural network language model, in two ways. First, word2vec simplifies the task (making it binary classification instead of word prediction). Second, word2vec simplifies the architecture (training a logistic regression classifier instead of a multi-layer neural network with hidden layers that demand more sophisticated training algorithms). The intuition of skip-gram is: 6 Vector Semantics and Embeddings 6.8 Word2Vec 1. Treat the target word and a neighboring context word as positive examples. 2. Randomly sample other words in the lexicon to get negative samples. 3. Use logistic regression to train a classifier to distinguish those two cases. 4. Use the learned weights as the embeddings. 6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.1 The classifier Let's start by thinking about the classification task, and then turn to how to train. Imagine a sentence like the following, with a target word apricot, and assume we're using a window of ±2 context words: @@ -842,7 +842,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.2 Learning skip-gram embeddings ... lemon, a [tablespoon of apricot jam, a] pinch ... c1 c2 w c3 c4 6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.2 Learning skip-gram embeddings This example has a target word w (apricot), and 4 context words in the L = ±2 window, resulting in 4 positive training instances (on the left below): For training a binary classifier we also need negative examples. In fact skipgram with negative sampling (SGNS) uses more negative examples than positive examples (with the ratio between them set by a parameter k). So for each of these (w, c pos ) training instances we'll create k negative samples, each consisting of the target w plus a 'noise word' c neg . A noise word is a random word from the lexicon, constrained not to be the target word w. The right above shows the setting where k = 2, so we'll have 2 negative examples in the negative training set − for each positive example w, c pos . 6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.2 Learning skip-gram embeddings positive examples + w c -6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.2 Learning skip-gram embeddings The noise words are chosen according to their weighted unigram frequency p α (w), where α is a weight. If we were sampling according to unweighted frequency p(w), it would mean that with unigram probability p("the") we would choose the word the as a noise word, with unigram probability p("aardvark") we would choose aardvark, and so on. But in practice it is common to set α = .75, i.e. use the weighting p 3 4 (w): +6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.2 Learning skip-gram embeddings "The noise words are chosen according to their weighted unigram frequency p α (w), where α is a weight. If we were sampling according to unweighted frequency p(w), it would mean that with unigram probability p(""the"") we would choose the word the as a noise word, with unigram probability p(""aardvark"") we would choose aardvark, and so on. But in practice it is common to set α = .75, i.e. 
use the weighting p^{3/4}(w):"
6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.2 Learning skip-gram embeddings P_α(w) = \frac{count(w)^α}{\sum_{w'} count(w')^α} (6.32)
6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.2 Learning skip-gram embeddings Setting α = .75 gives better performance because it gives rare noise words slightly higher probability: for rare words, P_α(w) > P(w). To illustrate this intuition, it might help to work out the probabilities for an example with two events, P(a) = .99
6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.2 Learning skip-gram embeddings and P(b) = .01: under the α = .75 weighting, P_α(a) = .99^{.75}/(.99^{.75} + .01^{.75}) ≈ .97 and P_α(b) ≈ .03, so the rare event b is sampled roughly three times as often as its unweighted probability would suggest. If we consider one word/context pair (w, c_pos) with its k noise words c_{neg_1} ... c_{neg_k}, we can express these two goals as the following loss function L to be minimized (hence the −); here the first term expresses that we want the classifier to assign the real context word c_pos a high probability of being a neighbor, and the second term expresses that we want to assign each of the noise words c_{neg_i} a high probability of being a non-neighbor, all multiplied because we assume independence:
@@ -862,7 +862,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N
6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.3 Other kinds of static embeddings Then a skip-gram embedding is learned for each constituent n-gram, and the word where is represented by the sum of all of the embeddings of its constituent n-grams. Unknown words can then be represented only by the sum of the constituent n-grams. A fasttext open-source library, including pretrained embeddings for 157 languages, is available at https://fasttext.cc.
6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.3 Other kinds of static embeddings Another very widely used static embedding model is GloVe (Pennington et al., 2014), short for Global Vectors, because the model is based on capturing global corpus statistics. GloVe is based on ratios of probabilities from the word-word co-occurrence matrix, combining the intuitions of count-based models like PPMI while also capturing the linear structures used by methods like word2vec.
6 Vector Semantics and Embeddings 6.8 Word2Vec 6.8.3 Other kinds of static embeddings It turns out that dense embeddings like word2vec actually have an elegant mathematical relationship with sparse embeddings like PPMI, in which word2vec can be seen as implicitly optimizing a shifted version of a PPMI matrix (Levy and Goldberg, 2014c).
-6 Vector Semantics and Embeddings 6.9 Visualizing Embeddings "I see well in many dimensions as long as the dimensions are around two."
+6 Vector Semantics and Embeddings 6.9 Visualizing Embeddings """I see well in many dimensions as long as the dimensions are around two."""
6 Vector Semantics and Embeddings 6.9 Visualizing Embeddings The late economist Martin Shubik. Visualizing embeddings is an important goal in helping understand, apply, and improve these models of word meaning. But how can we visualize a (for example) 100-dimensional vector?
6 Vector Semantics and Embeddings 6.9 Visualizing Embeddings The simplest way to visualize the meaning of a word w embedded in a space is to list the most similar words to w by sorting the vectors for all words in the vocabulary by their cosine with the vector for w. For example the 7 closest words to frog using the GloVe embeddings are: frogs, toad, litoria, leptodactylidae, rana, lizard, and eleutherodactylus (Pennington et al., 2014).
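The "list the most similar words by cosine" idea just described is easy to sketch in code. The tiny embedding table below is invented (real GloVe vectors are 50-300 dimensional), but the sort-by-cosine logic is the same.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: the normalized dot product of two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def nearest_neighbors(word, vectors, k=3):
    """Sort every other word in the vocabulary by cosine with `word`."""
    target = vectors[word]
    scored = [(other, cosine(target, vec))
              for other, vec in vectors.items() if other != word]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Tiny made-up embedding table standing in for real GloVe vectors.
vectors = {
    "frog":     np.array([0.9, 0.1, 0.0]),
    "toad":     np.array([0.8, 0.2, 0.1]),
    "lizard":   np.array([0.7, 0.0, 0.3]),
    "keyboard": np.array([0.0, 0.9, 0.8]),
}
print(nearest_neighbors("frog", vectors, k=2))  # toad first, then lizard
```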
6 Vector Semantics and Embeddings 6.9 Visualizing Embeddings Yet another visualization method is to use a clustering algorithm to show a hierarchical representation of which words are similar to others in the embedding space. The uncaptioned figure on the left uses hierarchical clustering of some embedding vectors for nouns as a visualization method (Rohde et al., 2006) . @@ -877,7 +877,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 6 Vector Semantics and Embeddings 6.10 Semantic Properties of Embeddings with some distance function, such as Euclidean distance. 6 Vector Semantics and Embeddings 6.10 Semantic Properties of Embeddings There are some caveats. For example, the closest value returned by the parallelogram algorithm in word2vec or GloVe embedding spaces is usually not in fact b* but one of the 3 input words or their morphological variants (i.e., cherry:red :: potato:x returns potato or potatoes instead of brown), so these must be explicitly excluded. Furthermore while embedding spaces perform well if the task involves frequent words, small distances, and certain relations (like relating countries with their capitals or verbs/nouns with their inflected forms), the parallelogram method with embeddings doesn't work as well for other relations (Linzen 2016 , Gladkova et al. 2016 , Schluter 2018 , Ethayarajh et al. 2019a ), and indeed Peterson et al. (2020) argue that the parallelogram method is in general too simple to model the human cognitive process of forming analogies of this kind. 6 Vector Semantics and Embeddings 6.10 Semantic Properties of Embeddings 6.10.1 Embeddings and Historical Semantics Embeddings can also be a useful tool for studying how meaning changes over time, by computing multiple embedding spaces, each from texts written in a particular time period. For example Fig. 6 .17 shows a visualization of changes in meaning in English words over the last two centuries, computed by building separate embed-ding spaces for each decade from historical corpora like Google n-grams (Lin et al., 2012b) and the Corpus of Historical American English (Davies, 2012). -6 Vector Semantics and Embeddings 6.10 Semantic Properties of Embeddings 6.10.1 Embeddings and Historical Semantics Figure 6 .17: Two-dimensional visualization of semantic change in English using SGNS vectors (see Section 5.8 for the visualization algorithm). A, The word gay shifted from meaning "cheerful" or "frolicsome" to referring to homosexuality. A, In the early 20th century broadcast referred to "casting out seeds"; with the rise of television and radio its meaning shifted to "transmitting signals". C, Awful underwent a process of pejoration, as it shifted from meaning "full of awe" to meaning "terrible or appalling" [212] . that adverbials (e.g., actually) have a general tendency to undergo subjectification where they shift from objective statements about the world (e.g., "Sorry, the car is actually broken") to subjective statements (e.g., "I can't believe he actually did that", indicating surprise/disbelief). .17 A t-SNE visualization of the semantic change of 3 words in English using word2vec vectors. The modern sense of each word, and the grey context words, are computed from the most recent (modern) time-point embedding space. Earlier points are computed from earlier historical embedding spaces. 
The visualizations show the changes in the word gay from meanings related to "cheerful" or "frolicsome" to referring to homosexuality, the development of the modern "transmission" sense of broadcast from its original sense of sowing seeds, and the pejoration of the word awful as it shifted from meaning "full of awe" to meaning "terrible or appalling" (Hamilton et al., 2016b). +6 Vector Semantics and Embeddings 6.10 Semantic Properties of Embeddings 6.10.1 Embeddings and Historical Semantics "Figure 6 .17: Two-dimensional visualization of semantic change in English using SGNS vectors (see Section 5.8 for the visualization algorithm). A, The word gay shifted from meaning ""cheerful"" or ""frolicsome"" to referring to homosexuality. A, In the early 20th century broadcast referred to ""casting out seeds""; with the rise of television and radio its meaning shifted to ""transmitting signals"". C, Awful underwent a process of pejoration, as it shifted from meaning ""full of awe"" to meaning ""terrible or appalling"" [212] . that adverbials (e.g., actually) have a general tendency to undergo subjectification where they shift from objective statements about the world (e.g., ""Sorry, the car is actually broken"") to subjective statements (e.g., ""I can't believe he actually did that"", indicating surprise/disbelief). .17 A t-SNE visualization of the semantic change of 3 words in English using word2vec vectors. The modern sense of each word, and the grey context words, are computed from the most recent (modern) time-point embedding space. Earlier points are computed from earlier historical embedding spaces. The visualizations show the changes in the word gay from meanings related to ""cheerful"" or ""frolicsome"" to referring to homosexuality, the development of the modern ""transmission"" sense of broadcast from its original sense of sowing seeds, and the pejoration of the word awful as it shifted from meaning ""full of awe"" to meaning ""terrible or appalling"" (Hamilton et al., 2016b)." 6 Vector Semantics and Embeddings 6.11 Bias and Embeddings In addition to their ability to learn word meaning from text, embeddings, alas, also reproduce the implicit biases and stereotypes that were latent in the text. As the prior section just showed, embeddings can roughly model relational similarity: 'queen' as the closest word to 'king' -'man' + 'woman' implies the analogy man:woman::king:queen. But these same embedding analogies also exhibit gender stereotypes. For example Bolukbasi et al. (2016) find that the closest occupation to 'man' -'computer programmer' + 'woman' in word2vec embeddings trained on news text is 'homemaker', and that the embeddings similarly suggest the analogy 'father' is to 'doctor' as 'mother' is to 'nurse'. This could result in what Crawford (2017) and Blodgett et al. (2020) call an allocational harm, when a system allo-allocational harm cates resources (jobs or credit) unfairly to different groups. For example algorithms that use embeddings as part of a search for hiring potential programmers or doctors might thus incorrectly downweight documents with women's names. It turns out that embeddings don't just reflect the statistics of their input, but also amplify bias; gendered terms become more gendered in embedding space than they bias amplification were in the input text statistics (Zhao et al. 2017 , Ethayarajh et al. 2019b , Jia et al. 2020 , and biases are more exaggerated than in actual labor employment statistics (Garg et al., 2018) . 
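The parallelogram computation mentioned above ('queen' as the closest word to 'king' − 'man' + 'woman') can be sketched as follows; note the explicit exclusion of the three input words, the caveat discussed in the previous section. The 2-dimensional vectors are hand-built toys, not trained embeddings, so this illustrates only the arithmetic, not the empirical behavior (or the biases) of real embedding spaces.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, a_star, vectors):
    """Parallelogram method: return the vocabulary word whose vector is
    closest (by cosine) to b - a + a*, excluding the three input words."""
    target = vectors[b] - vectors[a] + vectors[a_star]
    candidates = [w for w in vectors if w not in (a, b, a_star)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))

# Made-up 2-d vectors laid out so one axis is a "gender" offset and
# the other a "royalty" offset.
vectors = {
    "man":    np.array([ 1.0, 0.0]),
    "woman":  np.array([-1.0, 0.0]),
    "king":   np.array([ 1.0, 1.0]),
    "queen":  np.array([-1.0, 1.0]),
    "prince": np.array([ 1.0, 0.8]),
}
print(analogy("man", "king", "woman", vectors))  # queen
```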
6 Vector Semantics and Embeddings 6.11 Bias and Embeddings Embeddings also encode the implicit associations that are a property of human reasoning. The Implicit Association Test (Greenwald et al., 1998) measures people's associations between concepts (like 'flowers' or 'insects') and attributes (like 'pleasantness' and 'unpleasantness') by measuring differences in the latency with which they label words in the various categories. 7 Using such methods, people 6 Vector Semantics and Embeddings 6.11 Bias and Embeddings in the United States have been shown to associate African-American names with unpleasant words (more than European-American names), male names more with mathematics and female names with the arts, and old people's names with unpleasant words (Greenwald et al. 1998 , Nosek et al. 2002a , Nosek et al. 2002b . Caliskan et al. 2017replicated all these findings of implicit associations using GloVe vectors and cosine similarity instead of human latencies. For example African-American names like 'Leroy' and 'Shaniqua' had a higher GloVe cosine with unpleasant words while European-American names ('Brad', 'Greg', 'Courtney') had a higher cosine with pleasant words. These problems with embeddings are an example of a representational harm (Crawford 2017, Blodgett et al. 2020), which is a harm caused by representational harm a system demeaning or even ignoring some social groups. Any embedding-aware algorithm that made use of word sentiment could thus exacerbate bias against African Americans. Recent research focuses on ways to try to remove these kinds of biases, for example by developing a transformation of the embedding space that removes gender stereotypes but preserves definitional gender (Bolukbasi et al. 2016 , Zhao et al. 2017 or changing the training procedure (Zhao et al., 2018b). However, although these sorts of debiasing may reduce bias in embeddings, they do not eliminate it Historical embeddings are also being used to measure biases in the past. Garg et al. 2018used embeddings from historical texts to measure the association between embeddings for occupations and embeddings for names of various ethnicities or genders (for example the relative cosine similarity of women's names versus men's to occupation words like 'librarian' or 'carpenter') across the 20th century. They found that the cosines correlate with the empirical historical percentages of women or ethnic groups in those occupations. Historical embeddings also replicated old surveys of ethnic stereotypes; the tendency of experimental participants in 1933 to associate adjectives like 'industrious' or 'superstitious' with, e.g., Chinese ethnicity, correlates with the cosine between Chinese last names and those adjectives using embeddings trained on 1930s text. They also were able to document historical gender biases, such as the fact that embeddings for adjectives related to competence ('smart', 'wise', 'thoughtful', 'resourceful') had a higher cosine with male than female words, and showed that this bias has been slowly decreasing since 1960. We return in later chapters to this question about the role of bias in natural language processing. @@ -890,16 +890,16 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 6 Vector Semantics and Embeddings 6.13 Summary • In vector semantics, a word is modeled as a vector-a point in high-dimensional space, also called an embedding. In this chapter we focus on static embeddings, in each each word is mapped to a fixed embedding. 
6 Vector Semantics and Embeddings 6.13 Summary • Vector semantic models fall into two classes: sparse and dense. In sparse models each dimension corresponds to a word in the vocabulary V and cells are functions of co-occurrence counts. The term-document matrix has a row for each word (term) in the vocabulary and a column for each document. The word-context or term-term matrix has a row for each (target) word in the vocabulary and a column for each context term in the vocabulary. Two sparse weightings are common: the tf-idf weighting which weights each cell by its term frequency and inverse document frequency, and PPMI (pointwise positive mutual information) most common for for word-context matrices. 6 Vector Semantics and Embeddings 6.13 Summary • Dense vector models have dimensionality 50-1000. Word2vec algorithms like skip-gram are a popular way to compute dense embeddings. Skip-gram trains a logistic regression classifier to compute the probability that two words are 'likely to occur nearby in text'. This probability is computed from the dot product between the embeddings for the two words. • Skip-gram uses stochastic gradient descent to train the classifier, by learning embeddings that have a high dot product with embeddings of words that occur nearby and a low dot product with noise words. • Other important embedding algorithms include GloVe, a method based on ratios of word co-occurrence probabilities. • Whether using sparse or dense vectors, word and document similarities are computed by some function of the dot product between vectors. The cosine of two vectors-a normalized dot product-is the most popular such metric. -6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes The idea of vector semantics arose out of research in the 1950s in three distinct fields: linguistics, psychology, and computer science, each of which contributed a fundamental aspect of the model. The idea that meaning is related to the distribution of words in context was widespread in linguistic theory of the 1950s, among distributionalists like Zellig Harris, Martin Joos, and J. R. Firth, and semioticians like Thomas Sebeok. As Joos (1950) put it, the linguist's "meaning" of a morpheme. . . is by definition the set of conditional probabilities of its occurrence in context with all other morphemes. +6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes "The idea of vector semantics arose out of research in the 1950s in three distinct fields: linguistics, psychology, and computer science, each of which contributed a fundamental aspect of the model. The idea that meaning is related to the distribution of words in context was widespread in linguistic theory of the 1950s, among distributionalists like Zellig Harris, Martin Joos, and J. R. Firth, and semioticians like Thomas Sebeok. As Joos (1950) put it, the linguist's ""meaning"" of a morpheme. . . is by definition the set of conditional probabilities of its occurrence in context with all other morphemes." 6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes The idea that the meaning of a word might be modeled as a point in a multidimensional semantic space came from psychologists like Charles E. Osgood, who had been studying how people responded to the meaning of words by assigning values along scales like happy/sad or hard/soft. Osgood et al. 
(1957) proposed that the meaning of a word in general could be modeled as a point in a multidimensional Euclidean space, and that the similarity of meaning between two words could be modeled as the distance between these points in the space. -6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes A final intellectual source in the 1950s and early 1960s was the field then called mechanical indexing, now known as information retrieval. In what became known mechanical indexing as the vector space model for information retrieval (Salton 1971 , Sparck Jones 1986 , researchers demonstrated new ways to define the meaning of words in terms of vectors (Switzer, 1965) , and refined methods for word similarity based on measures of statistical association between words like mutual information (Giuliano, 1965) and idf (Sparck Jones, 1972) , and showed that the meaning of documents could be represented in the same vector spaces used for words. Some of the philosophical underpinning of the distributional way of thinking came from the late writings of the philosopher Wittgenstein, who was skeptical of the possibility of building a completely formal theory of meaning definitions for each word, suggesting instead that "the meaning of a word is its use in the language" (Wittgenstein, 1953, PI 43) . That is, instead of using some logical language to define each word, or drawing on denotations or truth values, Wittgenstein's idea is that we should define a word by how it is used by people in speaking and understanding in their day-to-day interactions, thus prefiguring the movement toward embodied and experiential models in linguistics and NLP (Glenberg and Robertson 2000, Lake and Murphy 2021, Bisk et al. 2020, Bender and Koller 2020). +6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes "A final intellectual source in the 1950s and early 1960s was the field then called mechanical indexing, now known as information retrieval. In what became known mechanical indexing as the vector space model for information retrieval (Salton 1971 , Sparck Jones 1986 , researchers demonstrated new ways to define the meaning of words in terms of vectors (Switzer, 1965) , and refined methods for word similarity based on measures of statistical association between words like mutual information (Giuliano, 1965) and idf (Sparck Jones, 1972) , and showed that the meaning of documents could be represented in the same vector spaces used for words. Some of the philosophical underpinning of the distributional way of thinking came from the late writings of the philosopher Wittgenstein, who was skeptical of the possibility of building a completely formal theory of meaning definitions for each word, suggesting instead that ""the meaning of a word is its use in the language"" (Wittgenstein, 1953, PI 43) . That is, instead of using some logical language to define each word, or drawing on denotations or truth values, Wittgenstein's idea is that we should define a word by how it is used by people in speaking and understanding in their day-to-day interactions, thus prefiguring the movement toward embodied and experiential models in linguistics and NLP (Glenberg and Robertson 2000, Lake and Murphy 2021, Bisk et al. 2020, Bender and Koller 2020)." 
6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes More distantly related is the idea of defining words by a vector of discrete features, which has roots at least as far back as Descartes and Leibniz (Wierzbicka 1992, Wierzbicka 1996) . By the middle of the 20th century, beginning with the work of Hjelmslev (Hjelmslev, 1969) (originally 1943) and fleshed out in early models of generative grammar (Katz and Fodor, 1963) , the idea arose of representing meaning with semantic features, symbols that represent some sort of primitive meaning. 6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes For example words like hen, rooster, or chick, have something in common (they all describe chickens) and something different (their age and sex), representable as: 6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes hen +female, +chicken, +adult rooster -female, +chicken, +adult chick +chicken, -adult 6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes The dimensions used by vector models of meaning to define words, however, are only abstractly related to this idea of a small fixed number of hand-built dimensions. Nonetheless, there has been some attempt to show that certain dimensions of embedding models do contribute some specific compositional aspect of meaning like these early semantic features. 6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes The use of dense vectors to model word meaning, and indeed the term embedding, grew out of the latent semantic indexing (LSI) model (Deerwester et al., 1988) recast as LSA (latent semantic analysis) (Deerwester et al., 1990) . In LSA singular value decomposition-SVDis applied to a term-document matrix (each SVD cell weighted by log frequency and normalized by entropy), and then the first 300 dimensions are used as the LSA embedding. Singular Value Decomposition (SVD) is a method for finding the most important dimensions of a data set, those dimensions along which the data varies the most. LSA was then quickly widely applied: as a cognitive model Landauer and Dumais (1997), and for tasks like spell checking (Jones and Martin, 1997), language modeling (Bellegarda 1997, Coccaro and Jurafsky 1998, Bellegarda 2000) morphology induction (Schone and Jurafsky 2000, Schone and Jurafsky 2001b), multiword expressions (MWEs) (Schone and Jurafsky, 2001a), and essay grading (Rehder et al., 1998) . Related models were simultaneously developed and applied to word sense disambiguation by Schütze (1992b). LSA also led to the earliest use of embeddings to represent words in a probabilistic classifier, in the logistic regression document router of Schütze et al. (1995) . The idea of SVD on the term-term matrix (rather than the term-document matrix) as a model of meaning for NLP was proposed soon after LSA by Schütze (1992b). Schütze applied the low-rank (97-dimensional) embeddings produced by SVD to the task of word sense disambiguation, analyzed the resulting semantic space, and also suggested possible techniques like dropping high-order dimensions. See Schütze (1997a). 6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes A number of alternative matrix models followed on from the early SVD work, including Probabilistic Latent Semantic Indexing (PLSI) (Hofmann, 1999), Latent Dirichlet Allocation (LDA) (Blei et al., 2003) , and Non-negative Matrix Factorization (NMF) (Lee and Seung, 1999). 
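A rough sketch of the LSA recipe described above: apply an SVD to a term-document matrix and keep only the first k dimensions as dense embeddings. For simplicity this sketch uses raw counts on a made-up matrix and omits the log-frequency/entropy cell weighting mentioned above, and sets k = 2 rather than the 300 dimensions LSA typically uses.

```python
import numpy as np

def lsa_embeddings(term_doc, k):
    """Truncated SVD of a term-document matrix: keep the first k left
    singular vectors (scaled by their singular values) as dense word
    embeddings, in the spirit of LSA."""
    U, S, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return U[:, :k] * S[:k]          # one k-dimensional row per term

# Made-up 4-term x 5-document count matrix.
counts = np.array([
    [3, 0, 1, 0, 2],
    [2, 0, 0, 1, 3],
    [0, 4, 0, 3, 0],
    [0, 3, 1, 4, 0],
], dtype=float)

embeddings = lsa_embeddings(counts, k=2)
print(embeddings.round(2))           # 4 terms, 2 dense dimensions each
```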
-6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes The LSA community seems to have first used the word "embedding" in Landauer et al. (1997) , in a variant of its mathematical meaning as a mapping from one space or mathematical structure to another. In LSA, the word embedding seems to have described the mapping from the space of sparse count vectors to the latent space of SVD dense vectors. Although the word thus originally meant the mapping from one space to another, it has metonymically shifted to mean the resulting dense vector in the latent space. and it is in this sense that we currently use the word. +6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes "The LSA community seems to have first used the word ""embedding"" in Landauer et al. (1997) , in a variant of its mathematical meaning as a mapping from one space or mathematical structure to another. In LSA, the word embedding seems to have described the mapping from the space of sparse count vectors to the latent space of SVD dense vectors. Although the word thus originally meant the mapping from one space to another, it has metonymically shifted to mean the resulting dense vector in the latent space. and it is in this sense that we currently use the word." 6 Vector Semantics and Embeddings 6.14 Bibliographical and Historical Notes By the next decade, Bengio et al. (2003) and Bengio et al. (2006) showed that neural language models could also be used to develop embeddings as part of the task of word prediction. Collobert and Weston 2007 See Manning et al. (2008) for a deeper understanding of the role of vectors in information retrieval, including how to compare queries with documents, more details on tf-idf, and issues of scaling to very large datasets. See Kim (2019) for a clear and comprehensive tutorial on word2vec. Cruse 2004 is a useful introductory linguistic text of lexical semantics. 7 Neural Networks and Neural Language Models Neural networks are a fundamental computational tool for language processing, and a very old one. They are called neural because their origins lie in the McCulloch-Pitts neuron (McCulloch and Pitts, 1943 ), a simplified model of the human neuron as a kind of computing element that could be described in terms of propositional logic. But the modern use in language processing no longer draws on these early biological inspirations. 7 Neural Networks and Neural Language Models Instead, a modern neural network is a network of small computing units, each of which takes a vector of input values and produces a single output value. In this chapter we introduce the neural net applied to classification. The architecture we introduce is called a feedforward network because the computation proceeds iterfeedforward atively from one layer of units to the next. The use of modern neural nets is often called deep learning, because modern networks are often deep (have many layers). @@ -945,7 +945,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 7 Neural Networks and Neural Language Models 7.2 The XOR Problem 7.2.1 The solution: neural networks While the XOR function cannot be calculated by a single perceptron, it can be calculated by a layered network of units. Let's see an example of how to do this from Goodfellow et al. (2016) that computes XOR using two layers of ReLU-based units. Fig. 7 .6 shows a figure with the input being processed by two layers of neural units. 
The middle layer (called h) has two units, and the output layer (called y) has one unit. A set of weights and biases is shown for each ReLU that correctly computes the XOR function. 7 Neural Networks and Neural Language Models 7.2 The XOR Problem 7.2.1 The solution: neural networks Let's walk through what happens with the input x = [0, 0]. If we multiply each input value by the appropriate weight, sum, and then add the bias b, we get the vector [0, -1], and we then apply the rectified linear transformation to give the output of the h layer as [0, 0]. Now we once again multiply by the weights, sum, and add the bias (0 in this case) resulting in the value 0. The reader should work through the computation of the remaining 3 possible input pairs to see that the resulting y values are 1 for the inputs [0, 1] and [1, 0], and 0 for the inputs [0, 0] and [1, 1]. Figure 7.5 The functions AND, OR, and XOR, represented with input x 1 on the x-axis and input x 2 on the y-axis. Filled circles represent perceptron outputs of 1, and white circles perceptron outputs of 0. There is no way to draw a line that correctly separates the two categories for XOR. Figure styled after Russell and Norvig (2002). 7 Neural Networks and Neural Language Models 7.2 The XOR Problem 7.2.1 The solution: neural networks [Figure 7.6: network diagram with inputs x 1 and x 2 , hidden units h 1 and h 2 , output unit y 1 , and the weights and biases labeled on the arrows.] -7 Neural Networks and Neural Language Models 7.2 The XOR Problem 7.2.1 The solution: neural networks Figure 7.6 XOR solution after Goodfellow et al. (2016). There are three ReLU units, in two layers; we've called them h 1 , h 2 (h for "hidden layer") and y 1 . As before, the numbers on the arrows represent the weights w for each unit, and we represent the bias b as a weight on a unit clamped to +1, with the bias weights/units in gray. +7 Neural Networks and Neural Language Models 7.2 The XOR Problem 7.2.1 The solution: neural networks "Figure 7.6 XOR solution after Goodfellow et al. (2016). There are three ReLU units, in two layers; we've called them h 1 , h 2 (h for ""hidden layer"") and y 1 . As before, the numbers on the arrows represent the weights w for each unit, and we represent the bias b as a weight on a unit clamped to +1, with the bias weights/units in gray." 7 Neural Networks and Neural Language Models 7.2 The XOR Problem 7.2.1 The solution: neural networks It's also instructive to look at the intermediate results, the outputs of the two hidden nodes h 1 and h 2 . We showed in the previous paragraph that the h vector for the inputs x = [0, 0] was [0, 0]. Fig. 7.7b shows the values of the h layer for all 4 inputs. Notice that the hidden representations of the two input points x = [0, 1] and x = [1, 0] (the two cases with XOR output = 1) are merged to the single point h = [1, 0]. The merger makes it easy to linearly separate the positive and negative cases of XOR. In other words, we can view the hidden layer of the network as forming a representation for the input. 7 Neural Networks and Neural Language Models 7.2 The XOR Problem 7.2.1 The solution: neural networks In this example we just stipulated the weights in Fig. 7.6. But for real examples the weights for neural networks are learned automatically using the error backpropagation algorithm to be introduced in Section 7.6. That means the hidden layers will learn to form useful representations. This intuition, that neural networks can automatically learn useful representations of the input, is one of their key advantages, and one that we will return to again and again in later chapters.
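As a concrete check on this walkthrough, here is a minimal numpy sketch of the two-layer ReLU network of Fig. 7.6, using the weights and biases stipulated there (hidden weights of 1, hidden biases 0 and -1, output weights 1 and -2, output bias 0); the variable names are ours.

import numpy as np

# Weights and biases stipulated in Fig. 7.6 (the XOR solution of Goodfellow et al., 2016).
W = np.array([[1.0, 1.0],
              [1.0, 1.0]])        # input -> hidden weights
b = np.array([0.0, -1.0])         # hidden biases
u = np.array([1.0, -2.0])         # hidden -> output weights (the output bias is 0)

def relu(z):
    return np.maximum(z, 0.0)

def xor_net(x):
    h = relu(W @ x + b)           # e.g. x = [0, 0] gives Wx + b = [0, -1], so h = [0, 0]
    return u @ h                  # output y

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, xor_net(np.array(x, dtype=float)))
# prints 0.0, 1.0, 1.0, 0.0: the XOR function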
7 Neural Networks and Neural Language Models 7.2 The XOR Problem 7.2.1 The solution: neural networks Note that the solution to the XOR problem requires a network of units with nonlinear activation functions. A network made up of simple linear (perceptron) units cannot solve the XOR problem. This is because a network formed by many layers of purely linear units can always be reduced (i.e., shown to be computationally identical to) a single layer of linear units with appropriate weights, and we've already shown (visually, in Fig. 7 .5) that a single unit cannot solve the XOR problem. We'll return to this question on page 137. @@ -996,7 +996,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 7 Neural Networks and Neural Language Models 7.3 Feedforward Neural Networks 7.3.1 More details on feedforward networks EQUATION 7 Neural Networks and Neural Language Models 7.3 Feedforward Neural Networks 7.3.1 More details on feedforward networks We'll continue showing the bias as b when we go over the learning algorithm in Section 7.6, but then we'll switch to this simplified notation without explicit bias terms for the rest of the book. 7 Neural Networks and Neural Language Models 7.4 Feedforward networks for NLP: Classification Let's see how to apply feedforward networks to NLP tasks! In this section we'll look at classification tasks like sentiment analysis; in the next section we'll introduce neural language modeling. -7 Neural Networks and Neural Language Models 7.4 Feedforward networks for NLP: Classification Let's begin with a simple two-layer sentiment classifier. You might imagine taking our logistic regression classifier of Chapter 5, which corresponds to a 1-layer network, and just adding a hidden layer. The input element x i could be scalar features like those in Fig. 5.2, e .g., x 1 = count(words ∈ doc), x 2 = count(positive lexicon words ∈ doc), x 3 = 1 if "no" ∈ doc, and so on. And the output layer y could have two nodes (one each for positive and negative), or 3 nodes (positive, negative, neutral), in which case y 1 would be the estimated probability of positive sentiment, y 2 the probability of negative and y 3 the probability of neutral. The resulting equations would be just what we saw above for a two-layer network (as sketched in Fig. 7.10 ): +7 Neural Networks and Neural Language Models 7.4 Feedforward networks for NLP: Classification "Let's begin with a simple two-layer sentiment classifier. You might imagine taking our logistic regression classifier of Chapter 5, which corresponds to a 1-layer network, and just adding a hidden layer. The input element x i could be scalar features like those in Fig. 5.2, e .g., x 1 = count(words ∈ doc), x 2 = count(positive lexicon words ∈ doc), x 3 = 1 if ""no"" ∈ doc, and so on. And the output layer y could have two nodes (one each for positive and negative), or 3 nodes (positive, negative, neutral), in which case y 1 would be the estimated probability of positive sentiment, y 2 the probability of negative and y 3 the probability of neutral. The resulting equations would be just what we saw above for a two-layer network (as sketched in Fig. 
7.10 ):" 7 Neural Networks and Neural Language Models 7.4 Feedforward networks for NLP: Classification x = vector of hand-designed features 7 Neural Networks and Neural Language Models 7.4 Feedforward networks for NLP: Classification h = σ (Wx + b) z = Uh y = softmax(z) (7.19) 7 Neural Networks and Neural Language Models 7.4 Feedforward networks for NLP: Classification As we mentioned earlier, adding this hidden layer to our logistic regression regression classifier allows the network to represent the non-linear interactions between features. This alone might give us a better sentiment classifier. @@ -1006,10 +1006,10 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling P(w t |w 1 , . . . , w t−1 ) ≈ P(w t |w t−N+1 , . . . , w t−1 ) (7.21) 7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling In the following examples we'll use a 4-gram example, so we'll show a net to estimate the probability P(w t = i|w t−3 , w t−2 , w t−1 ). 7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling Neural language models represent words in this prior context by their embeddings, rather than just by their word identity as used in n-gram language models. Using embeddings allows neural language models to generalize better to unseen data. For example, suppose we've seen this sentence in training: -7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling I have to make sure that the cat gets fed. but have never seen the words "gets fed" after the word "dog". Our test set has the prefix "I forgot to make sure that the dog gets". What's the next word? An n-gram language model will predict "fed" after "that the cat gets", but not after "that the dog gets". But a neural LM, knowing that "cat" and "dog" have similar embeddings, will be able to generalize from the "cat" context to assign a high enough probability to "fed" even after seeing "dog". +7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling "I have to make sure that the cat gets fed. but have never seen the words ""gets fed"" after the word ""dog"". Our test set has the prefix ""I forgot to make sure that the dog gets"". What's the next word? An n-gram language model will predict ""fed"" after ""that the cat gets"", but not after ""that the dog gets"". But a neural LM, knowing that ""cat"" and ""dog"" have similar embeddings, will be able to generalize from the ""cat"" context to assign a high enough probability to ""fed"" even after seeing ""dog""." 7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling 7.5.1 Forward inference in the neural language model Let's walk through forward inference or decoding for neural language models. 7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling 7.5.1 Forward inference in the neural language model forward inference Forward inference is the task, given an input, of running a forward pass on the network to produce a probability distribution over possible outputs, in this case next words. -7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling 7.5.1 Forward inference in the neural language model We first represent each of the N previous words as a one-hot vector of length |V |, i.e., with one dimension for each word in the vocabulary. 
A one-hot vector is a vector that has one element equal to 1, in the dimension corresponding to that word's index in the vocabulary, while all the other elements are set to zero. Thus in a one-hot representation for the word "toothpaste", supposing it is V 5 , i.e., index 5 in the vocabulary, x 5 = 1, and x i = 0 ∀i ≠ 5, as shown here: +7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling 7.5.1 Forward inference in the neural language model "We first represent each of the N previous words as a one-hot vector of length |V |, i.e., with one dimension for each word in the vocabulary. A one-hot vector is a vector that has one element equal to 1, in the dimension corresponding to that word's index in the vocabulary, while all the other elements are set to zero. Thus in a one-hot representation for the word ""toothpaste"", supposing it is V 5 , i.e., index 5 in the vocabulary, x 5 = 1, and x i = 0 ∀i ≠ 5, as shown here:" 7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling 7.5.1 Forward inference in the neural language model [0 0 0 0 1 0 0 ... 0 0 0 0] 1 2 3 4 5 6 7 ... ... |V| The feedforward neural language model (sketched in Fig. 7.13) has a moving window that can see N words into the past. We'll let N = 3, so the 3 words w t−1 , w t−2 , and w t−3 are each represented as a one-hot vector. We then multiply these one-hot vectors by the embedding matrix E. The embedding weight matrix E has a column for each word, each a column vector of d dimensions, and hence has dimensionality d × |V |. Multiplying by a one-hot vector that has only one non-zero element x i = 1 simply selects out the relevant column vector for word i, resulting in the embedding for word i, as shown in Fig. 7.12. 7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling 7.5.1 Forward inference in the neural language model The 3 resulting embedding vectors are concatenated to produce e, the embedding layer. This is followed by a hidden layer and an output layer whose softmax produces a probability distribution over words. For example y 42 , the value of output node 42, is the probability of the next word w t being V 42 , the vocabulary word with index 42 (which is the word 'fish' in our example). 7 Neural Networks and Neural Language Models 7.5 Feedforward Neural Language Modeling 7.5.1 Forward inference in the neural language model Here's the algorithm in detail for our mini example: 1. Select three embeddings from E: Given the three previous words, we look up their indices, create 3 one-hot vectors, and then multiply each by the embedding matrix E. Consider w t−3 . The one-hot vector for 'for' (index 35) is the vector with x 35 = 1 and all other elements 0. Figure 7.13 Forward inference in a feedforward neural language model. At each timestep t the network computes a d-dimensional embedding for each context word (by multiplying a one-hot vector by the embedding matrix E), and concatenates the 3 resulting embeddings to get the embedding layer e. The embedding vector e is multiplied by a weight matrix W and then an activation function is applied element-wise to produce the hidden layer h, which is then multiplied by another weight matrix U. Finally, a softmax output layer predicts at each node i the probability that the next word w t will be vocabulary word V i .
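As a rough sketch of this forward pass, the following numpy code wires up randomly initialized (untrained) matrices E, W, and U for a hypothetical vocabulary and embedding size; the sizes, the ReLU choice for the hidden layer, and all indices except 'for' = 35 are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: vocabulary |V|, embedding size d, hidden size dh, context N = 3.
V, d, dh, N = 10000, 50, 100, 3

E = rng.normal(size=(d, V))         # embedding matrix, one d-dimensional column per word
W = rng.normal(size=(dh, N * d))    # embedding layer -> hidden layer
U = rng.normal(size=(V, dh))        # hidden layer -> output logits

def softmax(z):
    z = z - z.max()
    ez = np.exp(z)
    return ez / ez.sum()

def forward(context_ids):
    # Multiplying E by a one-hot vector just selects a column, so we index directly.
    e = np.concatenate([E[:, i] for i in context_ids])   # embedding layer, shape (N*d,)
    h = np.maximum(W @ e, 0.0)                           # hidden layer (ReLU used here for simplicity)
    return softmax(U @ h)                                # probability distribution over the vocabulary

y = forward([35, 4470, 11])   # context word indices; 'for' = 35 as in the text, the others are made up
print(y.shape, y[42])         # y[42] is the model's probability that the next word is V_42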
@@ -1101,16 +1101,16 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 7 Neural Networks and Neural Language Models 7.9 Bibliographical and Historical Notes By the 1990s larger neural networks began to be applied to many practical language processing tasks as well, like handwriting recognition (LeCun et al. 1989) and speech recognition (Morgan and Bourlard 1990). By the early 2000s, improvements in computer hardware and advances in optimization and training techniques made it possible to train even larger and deeper networks, leading to the modern term deep learning (Hinton et al. 2006, Bengio et al. 2007). We cover more related history in Chapter 9 and Chapter 26. 7 Neural Networks and Neural Language Models 7.9 Bibliographical and Historical Notes There are a number of excellent books on the subject. Goldberg (2017) has superb coverage of neural networks for natural language processing. For neural networks in general see Goodfellow et al. (2016) and Nielsen (2015). 8 Sequence Labeling for Parts of Speech and Named Entities Dionysius Thrax of Alexandria (c. 100 B.C.), or perhaps someone else (it was a long time ago), wrote a grammatical sketch of Greek (a “technē”) that summarized the linguistic knowledge of his day. This work is the source of an astonishing proportion of modern linguistic vocabulary, including the words syntax, diphthong, clitic, and analogy. Also included is a description of eight parts of speech: noun, verb, pronoun, preposition, adverb, conjunction, participle, and article. Although earlier scholars (including Aristotle as well as the Stoics) had their own lists of parts of speech, it was Thrax’s set of eight that became the basis for descriptions of European languages for the next 2000 years. (All the way to the Schoolhouse Rock educational television shows of our childhood, which had songs about 8 parts of speech, like the late great Bob Dorough’s Conjunction Junction.) The durability of parts of speech through two millennia speaks to their centrality in models of human language. -8 Sequence Labeling for Parts of Speech and Named Entities Proper names are another important and anciently studied linguistic category. While parts of speech are generally assigned to individual words or morphemes, a proper name is often an entire multiword phrase, like the name "Marie Curie", the location "New York City", or the organization "Stanford University". We'll use the term named entity for, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization, although as we'll see the term is commonly extended to include things that aren't entities per se. +8 Sequence Labeling for Parts of Speech and Named Entities "Proper names are another important and anciently studied linguistic category. While parts of speech are generally assigned to individual words or morphemes, a proper name is often an entire multiword phrase, like the name ""Marie Curie"", the location ""New York City"", or the organization ""Stanford University"". We'll use the term named entity for, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization, although as we'll see the term is commonly extended to include things that aren't entities per se." 8 Sequence Labeling for Parts of Speech and Named Entities Parts of speech (also known as POS) and named entities are useful clues to sentence structure and meaning.
Knowing whether a word is a noun or a verb tells us about likely neighboring words (nouns in English are preceded by determiners and adjectives, verbs by nouns) and syntactic structure (verbs have dependency links to nouns), making part-of-speech tagging a key aspect of parsing. Knowing if a named entity like Washington is a name of a person, a place, or a university is important to many natural language processing tasks like question answering, stance detection, or information extraction. In this chapter we'll introduce the task of part-of-speech tagging, taking a sequence of words and assigning each word a part of speech like NOUN or VERB, and the task of named entity recognition (NER), assigning words or phrases tags like PERSON, LOCATION, or ORGANIZATION. 8 Sequence Labeling for Parts of Speech and Named Entities Such tasks in which we assign, to each word x i in an input word sequence, a label y i , so that the output sequence Y has the same length as the input sequence X are called sequence labeling tasks. We'll introduce classic sequence labeling algorithms, one generative, the Hidden Markov Model (HMM), and one discriminative, the Conditional Random Field (CRF). In following chapters we'll introduce modern sequence labelers based on RNNs and Transformers. 8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes Until now we have been using part-of-speech terms like noun and verb rather freely. In this section we give more complete definitions. While word classes do have semantic tendencies (adjectives, for example, often describe properties and nouns people), parts of speech are defined instead based on their grammatical relationship with neighboring words or the morphological properties of their affixes. 8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes Open Class Nouns are words for people, places, or things, but include others as well. Actually, I ran home extremely quickly yesterday -8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes Adverbs generally modify something (often verbs, hence the name "adverb", but also other adverbs and entire verb phrases). Directional adverbs or locative adverbs (home, here, downhill) specify the direction or location of some action; degree adverbs (extremely, very, somewhat) specify the extent of some action, process, or property; manner adverbs (slowly, slinkily, delicately) describe the manner of some action or process; and temporal adverbs describe the time that some action or event took place (yesterday, Monday). +8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes "Adverbs generally modify something (often verbs, hence the name ""adverb"", but also other adverbs and entire verb phrases). Directional adverbs or locative adverbs (home, here, downhill) specify the direction or location of some action; degree adverbs (extremely, very, somewhat) specify the extent of some action, process, or property; manner adverbs (slowly, slinkily, delicately) describe the manner of some action or process; and temporal adverbs describe the time that some action or event took place (yesterday, Monday)."
8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes Interjections (oh, hey, alas, uh, um) are a smaller open class that also includes greetings (hello, goodbye) and question responses (yes, no, uh-huh). 8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes English adpositions occur before nouns, hence are called prepositions. They can indicate spatial or temporal relations, whether literal (on it, before then, by the house) or metaphorical (on time, with gusto, beside herself), and relations like marking the agent in Hamlet was written by Shakespeare. 8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes A particle resembles a preposition or an adverb and is used in combination with a verb. Particles often have extended meanings that aren't quite the same as the prepositions they resemble, as in the particle over in she turned the paper over. A verb and a particle acting as a single unit is called a phrasal verb. The meaning of phrasal verbs is often non-compositional, not predictable from the individual meanings of the verb and the particle. Thus, turn down means 'reject', rule out 'eliminate', and go on 'continue'. Determiners like this and that (this chapter, that page) can mark the start of an English noun phrase. Articles like a, an, and the are a type of determiner that mark discourse properties of the noun and are quite frequent; the is the most common word in written English, with a and an right behind. -8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes Conjunctions join two phrases, clauses, or sentences. Coordinating conjunctions like and, or, and but join two elements of equal status. Subordinating conjunctions are used when one of the elements has some embedded status. For example, the subordinating conjunction that in "I thought that you might like some milk" links the main clause I thought with the subordinate clause you might like some milk. This clause is called subordinate because this entire clause is the "content" of the main verb thought. Subordinating conjunctions like that which link a verb to its argument in this way are also called complementizers. +8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes "Conjunctions join two phrases, clauses, or sentences. Coordinating conjunctions like and, or, and but join two elements of equal status. Subordinating conjunctions are used when one of the elements has some embedded status. For example, the subordinating conjunction that in ""I thought that you might like some milk"" links the main clause I thought with the subordinate clause you might like some milk. This clause is called subordinate because this entire clause is the ""content"" of the main verb thought. Subordinating conjunctions like that which link a verb to its argument in this way are also called complementizers." 8 Sequence Labeling for Parts of Speech and Named Entities 8.1 (Mostly) English Word Classes Pronouns act as a shorthand for referring to an entity or event. Personal pronouns refer to persons or entities (you, she, I, it, me, etc.).
Possessive pronouns are forms of personal pronouns that indicate either actual possession or more often just an abstract relation between the person and some object (my, your, his, her, its, one's, our, their). Wh-pronouns (what, who, whom, whoever) are used in certain wh-question forms, or act as complementizers (Frida, who married Diego...). Auxiliary verbs mark semantic features of a main verb such as its tense, whether it is completed (aspect), whether it is negated (polarity), and whether an action is necessary, possible, suggested, or desired (mood). English auxiliaries include the copula verb be, the two verbs do and have along with their inflected forms, as well as modal verbs used to mark the mood associated with the event depicted by the main verb: can indicates ability or possibility, may permission or possibility, must necessity. An English-specific tagset, the 45-tag Penn Treebank tagset (Marcus et al., 1993), shown in Fig. 8.2, has been used to label many syntactically annotated corpora like the Penn Treebank corpora, so is worth knowing about. Below we show some examples with each word tagged according to both the UD and Penn tagsets. Notice that the Penn tagset distinguishes tense and participles on verbs, and has a special tag for the existential there construction in English. Note that since New England Journal of Medicine is a proper noun, both tagsets mark its component nouns as NNP, including journal and medicine, which might otherwise be labeled as common nouns (NOUN/NN). 8 Sequence Labeling for Parts of Speech and Named Entities 8.2 Part-of-Speech Tagging Part-of-speech tagging is the process of assigning a part-of-speech to each word in a text. The input is a sequence x 1 , x 2 , ..., x n of (tokenized) words and a tagset, and the output is a sequence y 1 , y 2 , ..., y n of tags, each output y i corresponding exactly to one input x i , as shown in the intuition in Fig. 8.3. Tagging is a disambiguation task; words are ambiguous (have more than one possible part-of-speech) and the goal is to find the correct tag for the situation. For example, book can be a verb (book that flight) or a noun (hand me that book). That can be a determiner (Does that flight serve dinner) or a complementizer (I thought that you might like some milk). We'll introduce algorithms for the task in the next few sections, but first let's explore the task. Exactly how hard is it? Fig. 8.4 shows that most word types (85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the ambiguous words, though accounting for only 14-15% of the vocabulary, are very common, and 55-67% of word tokens in running text are ambiguous. Particularly ambiguous common words include that, back, down, put and set; here are some examples of the 6 different parts of speech for the word back: earnings growth took a back/JJ seat a small building in the back/NN a clear majority of senators back/VBP the bill Dave began to back/VB toward the door enable the country to buy back/RP debt I was twenty-one back/RB then Nonetheless, many words are easy to disambiguate, because their different tags aren't equally likely. For example, a can be a determiner or the letter a, but the determiner sense is much more likely. 8 Sequence Labeling for Parts of Speech and Named Entities 8.2 Part-of-Speech Tagging This idea suggests a useful baseline: given an ambiguous word, choose the tag which is most frequent in the training corpus.
This is a key concept: @@ -1160,7 +1160,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.3 The components of a HMM tagger P(w i |t i ) = C(t i , w i ) C(t i ) (8.10) 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.3 The components of a HMM tagger Of the 13124 occurrences of MD in the WSJ corpus, it is associated with will 4046 times: 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.3 The components of a HMM tagger P(will|MD) = C(MD, will) C(MD) = 4046 13124 = .31 (8.11) -8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.3 The components of a HMM tagger We saw this kind of Bayesian modeling in Chapter 4; recall that this likelihood term is not asking "which is the most likely tag for the word will?" That would be the posterior P(MD|will). Instead, P(will|MD) answers the slightly counterintuitive question "If we were going to generate a MD, how likely is it that this modal would be will?" +8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.3 The components of a HMM tagger "We saw this kind of Bayesian modeling in Chapter 4; recall that this likelihood term is not asking ""which is the most likely tag for the word will?"" That would be the posterior P(MD|will). Instead, P(will|MD) answers the slightly counterintuitive question ""If we were going to generate a MD, how likely is it that this modal would be will?""" 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.3 The components of a HMM tagger The A transition probabilities, and B observation likelihoods of the HMM are illustrated in Fig. 8 .9 for three states in an HMM part-of-speech tagger; the full tagger would have one state for each tag. 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.4 HMM tagging as decoding For any model, such as an HMM, that contains hidden variables, the task of determining the hidden variables sequence corresponding to the sequence of observations is called decoding. More formally, Figure 8 .9 An illustration of the two parts of an HMM representation: the A transition probabilities used to compute the prior probability, and the B observation likelihoods that are associated with each state, one likelihood for each possible observation word. 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.4 HMM tagging as decoding For part-of-speech tagging, the goal of HMM decoding is to choose the tag sequence t 1 . . .t n that is most probable given the observation sequence of n words w 1 . . . w n :t @@ -1192,7 +1192,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.5 The Viterbi Algorithm v t ( j) is computed as v t ( j) = N max i=1 v t−1 (i) a i j b j (o t ) (8.19) 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.5 The Viterbi Algorithm The three factors that are multiplied in Eq. 
8.19 for extending the previous paths to compute the Viterbi probability at time t are given below. Figure 8.11 A sketch of the lattice for Janet will back the bill, showing the possible tags (q i ) for each word and highlighting the path corresponding to the correct tag sequence through the hidden states. States (parts of speech) which have a zero probability of generating a particular word according to the B matrix (such as the probability that a determiner DT will be realized as Janet) are greyed out. 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.5 The Viterbi Algorithm v t−1 (i) the previous Viterbi path probability from the previous time step a i j the transition probability from previous state q i to current state q j b j (o t ) the state observation likelihood of the observation symbol o t given the current state j -8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.6 Working through an example Let's tag the sentence Janet will back the bill; the goal is the correct series of tags (see also Fig. 8.11). Let the HMM be defined by the two tables in Fig. 8.12 and Fig. 8.13. Figure 8.12 lists the a i j probabilities for transitioning between the hidden states (part-of-speech tags). Figure 8.13 expresses the b i (o t ) probabilities, the observation likelihoods of words given tags. This table is (slightly simplified) from counts in the WSJ corpus. So the word Janet only appears as an NNP, back has 4 possible parts of speech, and the word the can appear as a determiner or as an NNP (in titles like "Somewhere Over the Rainbow" all words are tagged as NNP). Figure 8.14 The first few entries in the individual state columns for the Viterbi algorithm. Each cell keeps the probability of the best path so far and a pointer to the previous cell along that path. We have only filled out columns 1 and 2; to avoid clutter most cells with value 0 are left empty. The rest is left as an exercise for the reader. After the cells are filled in, backtracing from the end state, we should be able to reconstruct the correct state sequence NNP MD VB DT NN. Figure 8.14 shows a fleshed-out version of the sketch we saw in Fig. 8.11, the Viterbi lattice for computing the best hidden state sequence for the observation sequence Janet will back the bill. +8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.6 Working through an example "Let's tag the sentence Janet will back the bill; the goal is the correct series of tags (see also Fig. 8.11). Let the HMM be defined by the two tables in Fig. 8.12 and Fig. 8.13. Figure 8.12 lists the a i j probabilities for transitioning between the hidden states (part-of-speech tags). Figure 8.13 expresses the b i (o t ) probabilities, the observation likelihoods of words given tags. This table is (slightly simplified) from counts in the WSJ corpus. So the word Janet only appears as an NNP, back has 4 possible parts of speech, and the word the can appear as a determiner or as an NNP (in titles like ""Somewhere Over the Rainbow"" all words are tagged as NNP). Figure 8.14 The first few entries in the individual state columns for the Viterbi algorithm. Each cell keeps the probability of the best path so far and a pointer to the previous cell along that path. We have only filled out columns 1 and 2; to avoid clutter most cells with value 0 are left empty. The rest is left as an exercise for the reader.
After the cells are filled in, backtracing from the end state, we should be able to reconstruct the correct state sequence NNP MD VB DT NN. Figure 8 .14 shows a fleshed-out version of the sketch we saw in Fig. 8 .11, the Viterbi lattice for computing the best hidden state sequence for the observation sequence Janet will back the bill." 8 Sequence Labeling for Parts of Speech and Named Entities 8.4 HMM Part-of-Speech Tagging 8.4.6 Working through an example There are N = 5 state columns. We begin in column 1 (for the word Janet) by setting the Viterbi value in each cell to the product of the π transition probability (the start probability for that state i, which we get from the entry of Fig. 8.12) , and the observation likelihood of the word Janet given the tag for that cell. Most of the cells in the column are zero since the word Janet cannot be any of those tags. The reader should find this in Fig. 8.14. Next, each cell in the will column gets updated. For each state, we compute the value viterbi[s,t] by taking the maximum over the extensions of all the paths from the previous column that lead to the current cell according to Eq. 8.19 . We have shown the values for the MD, VB, and NN cells. Each cell gets the max of the 7 values from the previous column, multiplied by the appropriate transition probability; as it happens in this case, most of them are zero from the previous column. The remaining value is multiplied by the relevant observation probability, and the (trivial) max is taken. In this case the final value, 2.772e-8, comes from the NNP state at the previous column. The reader should fill in the rest of the lattice in Fig. 8 .14 and backtrace to see whether or not the Viterbi algorithm returns the gold state sequence NNP MD VB DT NN. 8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) While the HMM is a useful and powerful model, it turns out that HMMs need a number of augmentations to achieve high accuracy. For example, in POS tagging as in other tasks, we often run into unknown words: proper names and acronyms unknown words are created very often, and even new common nouns and verbs enter the language at a surprising rate. It would be great to have ways to add arbitrary features to help with this, perhaps based on capitalization or morphology (words starting with capital letters are likely to be proper nouns, words ending with -ed tend to be past tense (VBD or VBN), etc.) Or knowing the previous or following words might be a useful feature (if the previous word is the, the current tag is unlikely to be a verb). Although we could try to hack the HMM to find ways to incorporate some of these, in general it's hard for generative models like HMMs to add arbitrary features directly into the model in a clean way. We've already seen a model for combining arbitrary features in a principled way: log-linear models like the logistic regression model of Chapter 5! But logistic regression isn't a sequence model; it assigns a class to a single observation. 8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) Luckily, there is a discriminative sequence model based on log-linear models: the conditional random field (CRF). We'll describe here the linear chain CRF, CRF the version of the CRF most commonly used for language processing, and the one whose conditioning closely matches the HMM. 
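Here is a minimal sketch of the Viterbi recurrence of Eq. 8.19 for HMM decoding. The tag set, start probabilities, transition matrix A, and observation likelihoods B below are small made-up numbers chosen only to exercise the algorithm; they are not the WSJ-derived tables of Fig. 8.12 and Fig. 8.13, and the variable names are ours.

import numpy as np

# Viterbi recurrence (Eq. 8.19): v_t(j) = max_i v_{t-1}(i) * a_ij * b_j(o_t)
tags = ["NNP", "MD", "VB"]
pi = np.array([0.5, 0.3, 0.2])                 # start probabilities for each tag
A = np.array([[0.1, 0.6, 0.3],                 # A[i, j] = P(tag_j | tag_i)
              [0.2, 0.1, 0.7],
              [0.4, 0.3, 0.3]])
B = {"Janet": np.array([0.9, 0.0, 0.0]),       # B[word][j] = P(word | tag_j)
     "will":  np.array([0.0, 0.8, 0.1]),
     "back":  np.array([0.0, 0.0, 0.5])}

def viterbi(words):
    T, N = len(words), len(tags)
    v = np.zeros((T, N))                       # Viterbi path probabilities
    backpointer = np.zeros((T, N), dtype=int)  # pointer to the best previous state
    v[0] = pi * B[words[0]]
    for t in range(1, T):
        for j in range(N):
            scores = v[t - 1] * A[:, j] * B[words[t]][j]
            backpointer[t, j] = np.argmax(scores)
            v[t, j] = np.max(scores)
    # Backtrace from the best final state to recover the tag sequence.
    best = [int(np.argmax(v[-1]))]
    for t in range(T - 1, 0, -1):
        best.append(int(backpointer[t, best[-1]]))
    return [tags[i] for i in reversed(best)]

print(viterbi(["Janet", "will", "back"]))      # prints ['NNP', 'MD', 'VB'] with these toy numbers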
@@ -1209,7 +1209,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) Each of these local features f k in a linear-chain CRF is allowed to make use of the current output token y i , the previous output token y i−1 , the entire input string X (or any subpart of it), and the current position i. This constraint, to depend only on the current and previous output tokens y i and y i−1 , is what characterizes a linear chain CRF. As we will see, this limitation makes it possible to use versions of the efficient Viterbi and Forward-Backward algorithms from the HMM. A general CRF, by contrast, allows a feature to make use of any output token, and is thus necessary for tasks in which the decision depends on distant output tokens, like y i−4 . General CRFs require more complex inference, and are less commonly used for language processing. 8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) 8.5.1 Features in a CRF POS Tagger Let's look at some of these features in detail, since the reason to use a discriminative sequence model is that it's easier to incorporate a lot of features. Again, in a linear-chain CRF, each local feature f k at position i can depend on any information from: (y i−1 , y i , X, i). So some legal features representing common situations might be the following: 8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) 8.5.1 Features in a CRF POS Tagger 1{x i = the, y i = DET} 1{y i = PROPN, x i+1 = Street, y i−1 = NUM} 1{y i = VERB, y i−1 = AUX} -8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) 8.5.1 Features in a CRF POS Tagger For simplicity, we'll assume all CRF features take on the value 1 or 0. Above, we explicitly use the notation 1{x} to mean "1 if x is true, and 0 otherwise". From now on, we'll leave off the 1 when we define features, but you can assume each feature has it there implicitly. +8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) 8.5.1 Features in a CRF POS Tagger "For simplicity, we'll assume all CRF features take on the value 1 or 0. Above, we explicitly use the notation 1{x} to mean ""1 if x is true, and 0 otherwise"". From now on, we'll leave off the 1 when we define features, but you can assume each feature has it there implicitly." 8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) 8.5.1 Features in a CRF POS Tagger Although the choice of which features to use is designed by hand by the system designer, the specific features are automatically populated by using feature templates, as we briefly mentioned in Chapter 5. Here are some templates that only use information from (y i−1 , y i , X, i): 8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) 8.5.1 Features in a CRF POS Tagger ⟨y i , x i ⟩, ⟨y i , y i−1 ⟩, ⟨y i , x i−1 , x i+2 ⟩ 8 Sequence Labeling for Parts of Speech and Named Entities 8.5 Conditional Random Fields (CRFs) 8.5.1 Features in a CRF POS Tagger These templates automatically populate the set of features from every instance in the training and test set.
Thus for our example Janet/NNP will/MD back/VB the/DT bill/NN, when x i is the word back, the following features would be generated and have the value 1 (we've assigned them arbitrary feature numbers): @@ -1259,7 +1259,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 8 Sequence Labeling for Parts of Speech and Named Entities 8.8 Summary • Two common approaches to sequence modeling are a generative approach, HMM tagging, and a discriminative approach, CRF tagging. We will see a neural approach in following chapters. • The probabilities in HMM taggers are estimated by maximum likelihood estimation on tag-labeled training corpora. The Viterbi algorithm is used for decoding, finding the most likely tag sequence • Conditional Random Fields or CRF taggers train a log-linear model that can choose the best tag sequence given an observation sequence, based on features that condition on the output tag, the prior output tag, the entire input sequence, and the current timestep. They use the Viterbi algorithm for inference, to choose the best sequence of tags, and a version of the Forward-Backward algorithm (see Appendix A) for training. 8 Sequence Labeling for Parts of Speech and Named Entities 8.9 Bibliographical and Historical Notes What is probably the earliest part-of-speech tagger was part of the parser in Zellig Harris's Transformations and Discourse Analysis Project (TDAP), implemented between June 1958 and July 1959 at the University of Pennsylvania (Harris, 1962), although earlier systems had used part-of-speech dictionaries. TDAP used 14 handwritten rules for part-of-speech disambiguation; the use of part-of-speech tag sequences and the relative frequency of tags for a word prefigures modern algorithms. The parser was implemented essentially as a cascade of finite-state transducers; see Joshi and Hopely (1999) and Karttunen (1999) for a reimplementation. The Computational Grammar Coder (CGC) of Klein and Simmons (1963) had three components: a lexicon, a morphological analyzer, and a context disambiguator. The small 1500-word lexicon listed only function words and other irregular words. The morphological analyzer used inflectional and derivational suffixes to assign part-of-speech classes. These were run over words to produce candidate parts of speech which were then disambiguated by a set of 500 context rules by relying on surrounding islands of unambiguous words. For example, one rule said that between an ARTICLE and a VERB, the only allowable sequences were ADJ-NOUN, NOUN-ADVERB, or NOUN-NOUN. The TAGGIT tagger (Greene and Rubin, 1971) used the same architecture as Klein and Simmons (1963) , with a bigger dictionary and more tags (87). TAGGIT was applied to the Brown corpus and, according to Francis and Kučera (1982, p. 9) , accurately tagged 77% of the corpus; the remainder of the Brown corpus was then tagged by hand. All these early algorithms were based on a two-stage architecture in which a dictionary was first used to assign each word a set of potential parts of speech, and then lists of handwritten disambiguation rules winnowed the set down to a single part of speech per word. 8 Sequence Labeling for Parts of Speech and Named Entities 8.9 Bibliographical and Historical Notes Probabilities were used in tagging by Stolz et al. (1965) and a complete probabilistic tagger with Viterbi decoding was sketched by Bahl and Mercer (1976) . 
The Lancaster-Oslo/Bergen (LOB) corpus, a British English equivalent of the Brown corpus, was tagged in the early 1980s with the CLAWS tagger (Marshall 1983; Marshall 1987; Garside 1987), a probabilistic algorithm that approximated a simplified HMM tagger. The algorithm used tag bigram probabilities, but instead of storing the word likelihood of each tag, the algorithm marked tags as either rare (P(tag|word) < .01), infrequent (P(tag|word) < .10), or normally frequent (P(tag|word) > .10). -8 Sequence Labeling for Parts of Speech and Named Entities 8.9 Bibliographical and Historical Notes DeRose (1988) developed a quasi-HMM algorithm, including the use of dynamic programming, although computing P(t|w)P(w) instead of P(w|t)P(w). The same year, the probabilistic PARTS tagger of Church (1988, 1989) was probably the first implemented HMM tagger, described correctly in Church (1989), although Church (1988) also described the computation incorrectly as P(t|w)P(w) instead of P(w|t)P(w). Church (p.c.) explained that he had simplified for pedagogical purposes because using the probability P(t|w) made the idea seem more understandable as "storing a lexicon in an almost standard form". +8 Sequence Labeling for Parts of Speech and Named Entities 8.9 Bibliographical and Historical Notes "DeRose (1988) developed a quasi-HMM algorithm, including the use of dynamic programming, although computing P(t|w)P(w) instead of P(w|t)P(w). The same year, the probabilistic PARTS tagger of Church (1988, 1989) was probably the first implemented HMM tagger, described correctly in Church (1989), although Church (1988) also described the computation incorrectly as P(t|w)P(w) instead of P(w|t)P(w). Church (p.c.) explained that he had simplified for pedagogical purposes because using the probability P(t|w) made the idea seem more understandable as ""storing a lexicon in an almost standard form""." 8 Sequence Labeling for Parts of Speech and Named Entities 8.9 Bibliographical and Historical Notes Later taggers explicitly introduced the use of the hidden Markov model (Kupiec 1992; Weischedel et al. 1993; Schütze and Singer 1994). Merialdo (1994) showed that fully unsupervised EM didn't work well for the tagging task and that reliance on hand-labeled data was important. Charniak et al. (1993) showed the importance of the most frequent tag baseline; the 92.3% number we give above was from Abney et al. (1999). See Brants (2000) for HMM tagger implementation details, including the extension to trigram contexts, and the use of sophisticated unknown word features; its performance is still close to that of state-of-the-art taggers. 8 Sequence Labeling for Parts of Speech and Named Entities 8.9 Bibliographical and Historical Notes Log-linear models for POS tagging were introduced by Ratnaparkhi (1996), who introduced a system called MXPOST which implemented a maximum entropy Markov model (MEMM), a slightly simpler version of a CRF. Around the same time, sequence labelers were applied to the task of named entity tagging, first with HMMs (Bikel et al., 1997) and MEMMs (McCallum et al., 2000), and then once CRFs were developed (Lafferty et al. 2001), they were also applied to NER (McCallum and Li, 2003). A wide exploration of features followed (Zhou et al., 2005). Neural approaches to NER mainly follow from the pioneering results of Collobert et al. (2011), who applied a CRF on top of a convolutional net.
BiLSTMs with word and character-based embeddings as input followed shortly and became a standard neural algorithm for NER (Huang et al. 2015, Ma and Hovy 2016, Lample et al. 2016), followed by the more recent use of Transformers and BERT. 8 Sequence Labeling for Parts of Speech and Named Entities 8.9 Bibliographical and Historical Notes The idea of using letter suffixes for unknown words is quite old; the early Klein and Simmons (1963) system checked all final letter suffixes of lengths 1-5. The unknown word features described on page 169 come mainly from Ratnaparkhi (1996), with augmentations from Toutanova et al. (2003) and Manning (2011). @@ -1272,7 +1272,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 9 Deep Learning Architectures for Sequence Processing The simple feedforward sliding-window approach is promising, but isn't a completely satisfactory solution to temporality. By using embeddings as inputs, it does solve the main problem of the simple n-gram models of Chapter 3 (recall that n-grams were based on words rather than embeddings, making them too literal, unable to generalize across contexts of similar words). But feedforward networks still share another weakness of n-gram approaches: limited context. Anything outside the context window has no impact on the decision being made. Yet many language tasks require access to information that can be arbitrarily distant from the current word. Second, the use of windows makes it difficult for networks to learn systematic patterns arising from phenomena like constituency and compositionality: the way the meanings of words in phrases combine. For example, in Fig. 9.1 the phrase all the appears in one window in the second and third positions, and in the next window in the first and second positions, forcing the network to learn two separate patterns for what should be the same item. 9 Deep Learning Architectures for Sequence Processing This chapter introduces two important deep learning architectures designed to address these challenges: recurrent neural networks and transformer networks. Both approaches have mechanisms that deal directly with the sequential nature of language, allowing them to capture and exploit the temporal structure of language. The recurrent network offers a new way to represent the prior context, allowing the model's decision to depend on information from hundreds of words in the past. Figure 9.1 Simplified sketch of a feedforward neural language model moving through a text. At each time step t the network converts N context words, each to a d-dimensional embedding, and concatenates the N embeddings together to get the Nd × 1 unit input vector x for the network. The output of the network is a probability distribution over the vocabulary representing the model's belief with respect to each word being the next possible word. The transformer offers new mechanisms (self-attention and positional encodings) that help represent time and help focus on how words relate to each other over long distances. We'll see how to apply both models to the task of language modeling, to sequence modeling tasks like part-of-speech tagging, and to text classification tasks like sentiment analysis.
9 Deep Learning Architectures for Sequence Processing [Figure 9.1: the sliding-window model reads the context words w t−3 , w t−2 , w t−1 and, through the weight matrices W and U, outputs a probability for each possible next word w t , e.g. p(fish|...), p(ant|...), p(zebra|...).] -9 Deep Learning Architectures for Sequence Processing 9.1 Language Models Revisited In this chapter, we'll begin exploring the RNN and transformer architectures through the lens of probabilistic language models, so let's briefly remind ourselves of the framework for language modeling. Recall from Chapter 3 that probabilistic language models predict the next word in a sequence given some preceding context. For example, if the preceding context is "Thanks for all the" and we want to know how likely the next word is "fish" we would compute: +9 Deep Learning Architectures for Sequence Processing 9.1 Language Models Revisited "In this chapter, we'll begin exploring the RNN and transformer architectures through the lens of probabilistic language models, so let's briefly remind ourselves of the framework for language modeling. Recall from Chapter 3 that probabilistic language models predict the next word in a sequence given some preceding context. For example, if the preceding context is ""Thanks for all the"" and we want to know how likely the next word is ""fish"" we would compute:" 9 Deep Learning Architectures for Sequence Processing 9.1 Language Models Revisited Language models give us the ability to assign such a conditional probability to every possible next word, giving us a distribution over the entire vocabulary. We can also assign probabilities to entire sequences by using these conditional probabilities in combination with the chain rule: 9 Deep Learning Architectures for Sequence Processing 9.1 Language Models Revisited P(w 1:n ) = ∏ n i=1 P(w i |w <i ) 9 Deep Learning Architectures for Sequence Processing 9.3 RNNs as Language Models [Figure: an RNN language model assigning a log probability to each successive word of the input (log y for , log y all , ...).]
1 Q g T p S f T X 7 I 4 Z F R B j C M p C m h 4 U T 9 P Z E h r l T K A 9 N Z 3 K l m v U L 8 z + s l O r z y M y r i R B O B p 4 v C h E E d w S I Q O K C S Y M 1 S Q x C W 1 N w K 8 Q h J h L W J r W p C c G d -9 Deep Learning Architectures for Sequence Processing 9.3 RNNs as Language Models f n i f t 0 7 p 7 U T + 7 O 6 8 1 r s s 4 K u A A H I J j 4 I J L 0 A C 3 o A l a A I N H 8 A x e w Z v 1 Z L 1 Y 7 9 b H t H X B K m f 2 w B 9 Y n z 9 J c Z f 4 < / l a t e x i t > log y thanks < l a t e x i t s h a 1 _ b a s e 6 4 = " e 2 v w v r Z I C 4 t a v P n t 1 U q 3 S q V r 9 y 0 = " > A A A C A X i c b V A 9 S w N B E J 3 z M 8 a v q I 1 g s x g E G 8 O d i l o G b S w j m A / I h b C 3 2 V y W 7 O 0 e u 3 v C c c T G v 2 J j o Y i t / 8 L O f + M m u U I T H w w 8 3 p t h Z l 4 Q c 6 a N 6 3 4 7 C 4 t L y y u r h b X i + s b m 1 n Z p Z 7 e h Z a I I r R P J p W o F W F P O B K 0 b Z j h t x Y r i K O C 0 G Q x v x n 7 z g S r N p L g 3 a U w 7 E Q 4 F 6 z O C j Z W 6 p f 0 T 5 H M Z o r S b + R E 2 A x V l X I p w N O q W y m 7 F n Q D N E y 8 n Z c h R 6 5 a + / J 4 k S U S F I R x r 3 f b c 2 H Q y r A w j n I 6 K f q J p j M k Q h 7 R t q c A R 1 Z 1 s 8 s E I H V m l h / p S 2 R I G T d T f E x m O t E 6 j w H a O r 9 S z 3 l j 8 z 2 s n p n / predict the next word (rather than feeding the model its best case from the previous time step) is called teacher forcing. +9 Deep Learning Architectures for Sequence Processing 9.3 RNNs as Language Models "f n i f t 0 7 p 7 U T + 7 O 6 8 1 r s s 4 K u A A H I J j 4 I J L 0 A C 3 o A l a A I N H 8 A x e w Z v 1 Z L 1 Y 7 9 b H t H X B K m f 2 w B 9 Y n z 9 J c Z f 4 < / l a t e x i t > log y thanks < l a t e x i t s h a 1 _ b a s e 6 4 = "" e 2 v w v r Z I C 4 t a v P n t 1 U q 3 S q V r 9 y 0 = "" > A A A C A X i c b V A 9 S w N B E J 3 z M 8 a v q I 1 g s x g E G 8 O d i l o G b S w j m A / I h b C 3 2 V y W 7 O 0 e u 3 v C c c T G v 2 J j o Y i t / 8 L O f + M m u U I T H w w 8 3 p t h Z l 4 Q c 6 a N 6 3 4 7 C 4 t L y y u r h b X i + s b m 1 n Z p Z 7 e h Z a I I r R P J p W o F W F P O B K 0 b Z j h t x Y r i K O C 0 G Q x v x n 7 z g S r N p L g 3 a U w 7 E Q 4 F 6 z O C j Z W 6 p f 0 T 5 H M Z o r S b + R E 2 A x V l X I p w N O q W y m 7 F n Q D N E y 8 n Z c h R 6 5 a + / J 4 k S U S F I R x r 3 f b c 2 H Q y r A w j n I 6 K f q J p j M k Q h 7 R t q c A R 1 Z 1 s 8 s E I H V m l h / p S 2 R I G T d T f E x m O t E 6 j w H a O r 9 S z 3 l j 8 z 2 s n p n / predict the next word (rather than feeding the model its best case from the previous time step) is called teacher forcing." 9 Deep Learning Architectures for Sequence Processing 9.3 RNNs as Language Models V y Z i I E 0 M F m S 7 q J x w Z i c Z x o B 5 T l B i e W o K J Y v Z W R A Z Y Y W J 9 Deep Learning Architectures for Sequence Processing 9.3 RNNs as Language Models The weights in the network are adjusted to minimize the average CE loss over the training sequence via gradient descent. Fig. 9 .6 illustrates this training regimen. 9 Deep Learning Architectures for Sequence Processing 9.3 RNNs as Language Models Careful readers may have noticed that the input embedding matrix E and the final layer matrix V, which feeds the output softmax, are quite similar. The rows of E represent the word embeddings for each word in the vocabulary learned during the training process with the goal that words that have similar meaning and function will have similar embeddings. 
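To make the training regimen just described concrete, here is a minimal PyTorch sketch of teacher forcing for an RNN language model. It is an illustrative assumption rather than anything from the text: the class name RNNLM, the vocabulary size, and the random toy sequence are all invented for the example. At each time step the model is fed the gold prefix, and the cross-entropy loss against the gold next word is averaged over the sequence.

import torch
import torch.nn as nn

class RNNLM(nn.Module):
    def __init__(self, vocab_size, d_h):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_h)   # embedding matrix E, shape |V| x d_h
        self.rnn = nn.RNN(d_h, d_h, batch_first=True)
        self.out = nn.Linear(d_h, vocab_size)        # final layer V, mapping d_h to |V| scores

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))               # hidden states for every time step
        return self.out(h)                           # logits over the vocabulary

vocab_size, d_h = 10000, 64
model = RNNLM(vocab_size, d_h)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()                      # averages the CE loss over all time steps

# Teacher forcing: inputs are the gold words w_1..w_{T-1}, targets the gold words w_2..w_T.
seq = torch.randint(0, vocab_size, (1, 20))          # a toy training sequence
inputs, targets = seq[:, :-1], seq[:, 1:]
loss = loss_fn(model(inputs).reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
opt.step()

One design choice suggested by the similarity between E and the final layer noted above is weight tying, i.e. setting self.out.weight = self.embed.weight so that a single |V| × d_h matrix plays both roles; this is a common option, not something the sketch requires.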
And, since the length of these embeddings corresponds to the size of the hidden layer d_h, the shape of the embedding matrix E is |V| × d_h.
@@ -1454,19 +1454,19 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N
10 Machine Translation and Encoder-Decoder Models Encoder-decoder or sequence-to-sequence models are used for a different kind of sequence modeling in which the output sequence is a complex function of the entire input sequence; we must map from a sequence of input words or tokens to a sequence of tags that are not merely direct mappings from individual words.
10 Machine Translation and Encoder-Decoder Models Machine translation is exactly such a task: the words of the target language don't necessarily agree with the words of the source language in number or order. Consider translating the following made-up English sentence into Japanese. Note that the elements of the sentences are in very different places in the different languages. In English, the verb is in the middle of the sentence, while in Japanese, the verb kaita comes at the end. The Japanese sentence doesn't require the pronoun he, while English does. Such differences between languages can be quite complex. In the following actual sentence from the United Nations, notice the many changes between the Chinese sentence (we've given in red a word-by-word gloss of the Chinese characters) and its English equivalent.
10 Machine Translation and Encoder-Decoder Models (10.2) 大会/General Assembly 在/on 1982年/1982 12月/December 10日/10 通过了/adopted 第37号/37th 决议/resolution ，核准了/approved 第二次/second 探索/exploration 及/and 和平/peaceful 利用/using 外层空间/outer space 会议/conference 的/of 各项/various 建议/suggestions 。 On 10 December 1982 , the General Assembly adopted resolution 37 in which it endorsed the recommendations of the Second United Nations Conference on the Exploration and Peaceful Uses of Outer Space .
-10 Machine Translation and Encoder-Decoder Models Note the many ways the English and Chinese differ. For example, the ordering differs in major ways; the Chinese order of the noun phrase is "peaceful using outer space conference of suggestions" while the English has "suggestions of the ... conference on peaceful use of outer space". And the order differs in minor ways (the date is ordered differently). English requires the in many places that Chinese doesn't, and adds some details (like "in which" and "it") that aren't necessary in Chinese. Chinese doesn't grammatically mark plurality on nouns (unlike English, which has the "-s" in "recommendations"), and so the Chinese must use the modifier 各项/various to make it clear that there is not just one recommendation. English capitalizes some words but not others.
+10 Machine Translation and Encoder-Decoder Models "Note the many ways the English and Chinese differ. For example, the ordering differs in major ways; the Chinese order of the noun phrase is ""peaceful using outer space conference of suggestions"" while the English has ""suggestions of the ... conference on peaceful use of outer space"". And the order differs in minor ways (the date is ordered differently). English requires the in many places that Chinese doesn't, and adds some details (like ""in which"" and ""it"") that aren't necessary in Chinese. Chinese doesn't grammatically mark plurality on nouns (unlike English, which has the ""-s"" in ""recommendations""), and so the Chinese must use the modifier 各项/various to make it clear that there is not just one recommendation.
English capitalizes some words but not others."
10 Machine Translation and Encoder-Decoder Models Encoder-decoder networks are very successful at handling these sorts of complicated cases of sequence mappings. Indeed, the encoder-decoder algorithm is not just for MT; it's the state of the art for many other tasks where complex mappings between two sequences are involved. These include summarization (where we map from a long text to its summary, like a title or an abstract), dialogue (where we map from what the user said to what our dialogue system should respond), semantic parsing (where we map from a string of words to a semantic representation like logic or SQL), and many others.
10 Machine Translation and Encoder-Decoder Models We'll introduce the algorithm in Section 10.2, and in the following sections give important components of the model like beam search decoding, and we'll discuss how MT is evaluated, introducing the simple chrF metric.
10 Machine Translation and Encoder-Decoder Models But first, in the next section, we begin by summarizing the linguistic background to MT: key differences among languages that are important to consider when approaching the task of translation.
10 Machine Translation and Encoder-Decoder Models 10.1 Language Divergences and Typology Some aspects of human language seem to be universal, holding true for every language, or are statistical universals, holding true for most languages. Many universals arise from the functional role of language as a communicative system by humans. Every language, for example, seems to have words for referring to people, for talking about eating and drinking, for being polite or not. There are also structural linguistic universals; for example, every language seems to have nouns and verbs (Chapter 8), has ways to ask questions or issue commands, and linguistic mechanisms for indicating agreement or disagreement.
-10 Machine Translation and Encoder-Decoder Models 10.1 Language Divergences and Typology Yet languages also differ in many ways, and an understanding of what causes such translation divergences will help us build better MT models. We often distinguish the idiosyncratic and lexical differences that must be dealt with one by one (the word for "dog" differs wildly from language to language), from systematic differences that we can model in a general way (many languages put the verb before the direct object; others put the verb after the direct object). The study of these systematic cross-linguistic similarities and differences is called linguistic typology. This section sketches some typological facts that impact machine translation; the interested reader should also look into WALS, the World Atlas of Language Structures, which gives many typological facts about languages (Dryer and Haspelmath, 2013).
+10 Machine Translation and Encoder-Decoder Models 10.1 Language Divergences and Typology "Yet languages also differ in many ways, and an understanding of what causes such translation divergences will help us build better MT models. We often distinguish the idiosyncratic and lexical differences that must be dealt with one by one (the word for ""dog"" differs wildly from language to language), from systematic differences that we can model in a general way (many languages put the verb before the direct object; others put the verb after the direct object).
The study of these systematic cross-linguistic similarities and differences is called linguistic typology. This section sketches some typological facts that impact machine translation; the interested reader should also look into WALS, the World Atlas of Language Structures, which gives many typological facts about languages (Dryer and Haspelmath, 2013)."
10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology As we hinted in our example above comparing English and Japanese, languages differ in the basic word order of verbs, subjects, and objects in simple declarative clauses. German, French, English, and Mandarin, for example, are all SVO (Subject-Verb-Object) languages, meaning that the verb tends to come between the subject and object. Hindi and Japanese, by contrast, are SOV languages, meaning that the verb tends to come at the end of basic clauses, and Irish and Arabic are VSO languages. Two languages that share their basic word order type often have other similarities. For example, VO languages generally have prepositions, whereas OV languages generally have postpositions.
10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology Let's look in more detail at the example we saw above. In this SVO English sentence, the verb wrote is followed by its object a letter and the prepositional phrase to a friend, in which the preposition to is followed by its argument a friend. Arabic, with a VSO order, also has the verb before the object and prepositions. By contrast, in the Japanese example that follows, each of these orderings is reversed; the verb is preceded by its arguments, and the postposition follows its argument. Fig. 10.1 shows examples of other word order differences. All of these word order differences between languages can cause problems for translation, requiring the system to do huge structural reorderings as it generates the output.
10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.2 Lexical Divergences Of course we also need to translate the individual words from one language to another. For any translation, the appropriate word can vary depending on the context. The English source-language word bass, for example, can appear in Spanish as the fish lubina or the musical instrument bajo. German uses two distinct words for what in English would be called a wall: Wand for walls inside a building, and Mauer for walls outside a building. Where English uses the word brother for any male sibling, Chinese and many other languages have distinct words for older brother and younger brother (Mandarin gege and didi, respectively). In all these cases, translating bass, wall, or brother from English would require a kind of specialization, disambiguating the different uses of a word. For this reason the fields of MT and Word Sense Disambiguation (Chapter 18) are closely linked.
10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.2 Lexical Divergences Sometimes one language places more grammatical constraints on word choice than another. We saw above that English marks nouns for whether they are singular or plural. Mandarin doesn't. Or French and Spanish, for example, mark grammatical gender on adjectives, so an English translation into French requires specifying adjective gender.
10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.2 Lexical Divergences The way that languages differ in lexically dividing up conceptual space may be more complex than this one-to-many translation problem, leading to many-to-many mappings. For example, Fig. 10.2 summarizes some of the complexities discussed by Hutchins and Somers (1992) in translating English leg, foot, and paw, to French. For example, when leg is used about an animal it's translated as French jambe; but about the leg of a journey, as French étape; if the leg is of a chair, we use French pied.
-10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.2 Lexical Divergences Further, one language may have a lexical gap, where no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language. For example, English does not have a word that corresponds neatly to Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward phrases like filial piety or loving child, or good son/daughter for both). Finally, languages differ systematically in how the conceptual properties of an event are mapped onto specific words. Talmy (1985, 1991) noted that languages can be characterized by whether direction of motion and manner of motion are marked on the verb or on the "satellites": particles, prepositional phrases, or adverbial phrases. For example, a bottle floating out of a cave would be described in English with the direction marked on the particle out, while in Spanish the direction would be marked on the verb. Verb-framed languages mark the direction of motion on the verb (leaving the satellites to mark the manner of motion), like Spanish acercarse 'approach', alcanzar 'reach', entrar 'enter', salir 'exit'. Satellite-framed languages mark the direction of motion on the satellite (leaving the verb to mark the manner of motion), like English crawl out, float off, jump down, run after. Languages like Japanese, Tamil, and the many languages in the Romance, Semitic, and Mayan language families, are verb-framed; Chinese as well as non-Romance Indo-European languages like English, Swedish, Russian, Hindi, and Farsi are satellite-framed (Talmy 1991, Slobin 1996).
-10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.3 Morphological Typology Morphologically, languages are often characterized along two dimensions of variation. The first is the number of morphemes per word, ranging from isolating languages like Vietnamese and Cantonese, in which each word generally has one morpheme, to polysynthetic languages like Siberian Yupik ("Eskimo"), in which a single word may have very many morphemes, corresponding to a whole sentence in English. The second dimension is the degree to which morphemes are segmentable, ranging from agglutinative languages like Turkish, in which morphemes have relatively clean boundaries, to fusion languages like Russian, in which a single affix may conflate multiple morphemes, like -om in the word stolom (table-SG-INSTR-DECL1), which fuses the distinct morphological categories instrumental, singular, and first declension. Translating between languages with rich morphology requires dealing with structure below the word level, and for this reason modern systems generally use subword models like the wordpiece or BPE models of Section 10.7.1.
+10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.2 Lexical Divergences "Further, one language may have a lexical gap, where no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language. For example, English does not have a word that corresponds neatly to Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward phrases like filial piety or loving child, or good son/daughter for both). Finally, languages differ systematically in how the conceptual properties of an event are mapped onto specific words. Talmy (1985, 1991) noted that languages can be characterized by whether direction of motion and manner of motion are marked on the verb or on the ""satellites"": particles, prepositional phrases, or adverbial phrases. For example, a bottle floating out of a cave would be described in English with the direction marked on the particle out, while in Spanish the direction would be marked on the verb. Verb-framed languages mark the direction of motion on the verb (leaving the satellites to mark the manner of motion), like Spanish acercarse 'approach', alcanzar 'reach', entrar 'enter', salir 'exit'. Satellite-framed languages mark the direction of motion on the satellite (leaving the verb to mark the manner of motion), like English crawl out, float off, jump down, run after. Languages like Japanese, Tamil, and the many languages in the Romance, Semitic, and Mayan language families, are verb-framed; Chinese as well as non-Romance Indo-European languages like English, Swedish, Russian, Hindi, and Farsi are satellite-framed (Talmy 1991, Slobin 1996)."
+10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.3 Morphological Typology "Morphologically, languages are often characterized along two dimensions of variation. The first is the number of morphemes per word, ranging from isolating languages like Vietnamese and Cantonese, in which each word generally has one morpheme, to polysynthetic languages like Siberian Yupik (""Eskimo""), in which a single word may have very many morphemes, corresponding to a whole sentence in English. The second dimension is the degree to which morphemes are segmentable, ranging from agglutinative languages like Turkish, in which morphemes have relatively clean boundaries, to fusion languages like Russian, in which a single affix may conflate multiple morphemes, like -om in the word stolom (table-SG-INSTR-DECL1), which fuses the distinct morphological categories instrumental, singular, and first declension. Translating between languages with rich morphology requires dealing with structure below the word level, and for this reason modern systems generally use subword models like the wordpiece or BPE models of Section 10.7.1."
10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.4 Referential Density Finally, languages vary along a typological dimension related to the things they tend to omit. Some languages, like English, require that we use an explicit pronoun when talking about a referent that is given in the discourse. In other languages, however, we can sometimes omit pronouns altogether, as the following example from Spanish shows: (10.6) [El jefe]_i dio con un libro. ∅_i Mostró a un descifrador ambulante. [The boss] came upon a book. [He] showed it to a wandering decoder.
10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.4 Referential Density Languages that can omit pronouns are called pro-drop languages. Even among the pro-drop languages, there are marked differences in frequencies of omission. Japanese and Chinese, for example, tend to omit far more than does Spanish. This dimension of variation across languages is called the dimension of referential density. We say that languages that tend to use more pronouns are more referentially dense than those that use more zeros. Referentially sparse languages, like Chinese or Japanese, that require the hearer to do more inferential work to recover antecedents are also called cold languages. Languages that are more explicit and make it easier for the hearer are called hot languages. The terms hot and cold are borrowed from Marshall McLuhan's 1964 distinction between hot media like movies, which fill in many details for the viewer, versus cold media like comics, which require the reader to do more inferential work to fill out the representation (Bickel, 2003).
10 Machine Translation and Encoder-Decoder Models 10.1 Word Order Typology 10.1.4 Referential Density Translating from languages with extensive pro-drop, like Chinese or Japanese, to non-pro-drop languages like English can be difficult since the model must somehow identify each zero and recover who or what is being talked about in order to insert the proper pronoun.
@@ -1479,7 +1479,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N
10 Machine Translation and Encoder-Decoder Models 10.3 Encoder-Decoder with RNNs At a particular time t, we pass the prefix of t − 1 tokens through the language model, using forward inference to produce a sequence of hidden states, ending with the hidden state corresponding to the last word of the prefix. We then use the final hidden state of the prefix as our starting point to generate the next token.
10 Machine Translation and Encoder-Decoder Models 10.3 Encoder-Decoder with RNNs More formally, if g is an activation function like tanh or ReLU, a function of the input at time t and the hidden state at time t − 1, and f is a softmax over the set of possible vocabulary items, then at time t the output y_t and hidden state h_t are computed as:
10 Machine Translation and Encoder-Decoder Models 10.3 Encoder-Decoder with RNNs h_t = g(h_{t−1}, x_t) (10.8) y_t = f(h_t) (10.9)
-10 Machine Translation and Encoder-Decoder Models 10.3 Encoder-Decoder with RNNs We only have to make one slight change to turn this language model with autoregressive generation into a translation model that can translate from a source text in one language to a target text in a second: add a sentence separation marker at the end of the source text, and then simply concatenate the target text. We briefly introduced this idea of a sentence separator token in Chapter 9 when we considered using a Transformer language model to do summarization, by training a conditional language model. If we call the source text x and the target text y, we are computing the probability p(y|x) as follows: Fig. 10.4 shows the setup for a simplified version of the encoder-decoder model (we'll see the full model, which requires attention, in the next section). Fig. 10.4 shows an English source text ("the green witch arrived"), a sentence separator token, and a Spanish target text ("llegó la bruja verde"). To translate a source text, we run it through the network performing forward inference to generate hidden states until we get to the end of the source.
Then we begin autoregressive generation, asking for a word in the context of the hidden layer from the end of the source input as well as the end-of-sentence marker. Subsequent words are conditioned on the previous hidden state and the embedding for the last word generated. Let's formalize and generalize this model a bit in Fig. 10.5. (To help keep things straight, we'll use the superscripts e and d where needed to distinguish the hidden states of the encoder and the decoder.) The elements of the network on the left process the input sequence x and comprise the encoder. While our simplified figure shows only a single network layer for the encoder, stacked architectures are the norm, where the output states from the top layer of the stack are taken as the final representation. A widely used encoder design makes use of stacked biLSTMs where the hidden states from the top layers of the forward and backward passes are concatenated as described in Chapter 9 to provide the contextualized representations for each time step. The entire purpose of the encoder is to generate a contextualized representation of the input. This representation is embodied in the final hidden state of the encoder, h_n^e. This representation, also called c for context, is then passed to the decoder. The decoder network on the right takes this state and uses it to initialize the first
+10 Machine Translation and Encoder-Decoder Models 10.3 Encoder-Decoder with RNNs "We only have to make one slight change to turn this language model with autoregressive generation into a translation model that can translate from a source text in one language to a target text in a second: add a sentence separation marker at the end of the source text, and then simply concatenate the target text. We briefly introduced this idea of a sentence separator token in Chapter 9 when we considered using a Transformer language model to do summarization, by training a conditional language model. If we call the source text x and the target text y, we are computing the probability p(y|x) as follows: Fig. 10.4 shows the setup for a simplified version of the encoder-decoder model (we'll see the full model, which requires attention, in the next section). Fig. 10.4 shows an English source text (""the green witch arrived""), a sentence separator token, and a Spanish target text (""llegó la bruja verde""). To translate a source text, we run it through the network performing forward inference to generate hidden states until we get to the end of the source. Then we begin autoregressive generation, asking for a word in the context of the hidden layer from the end of the source input as well as the end-of-sentence marker. Subsequent words are conditioned on the previous hidden state and the embedding for the last word generated. Let's formalize and generalize this model a bit in Fig. 10.5. (To help keep things straight, we'll use the superscripts e and d where needed to distinguish the hidden states of the encoder and the decoder.) The elements of the network on the left process the input sequence x and comprise the encoder. While our simplified figure shows only a single network layer for the encoder, stacked architectures are the norm, where the output states from the top layer of the stack are taken as the final representation.
A widely used encoder design makes use of stacked biLSTMs where the hidden states from the top layers of the forward and backward passes are concatenated as described in Chapter 9 to provide the contextualized representations for each time step. The entire purpose of the encoder is to generate a contextualized representation of the input. This representation is embodied in the final hidden state of the encoder, h_n^e. This representation, also called c for context, is then passed to the decoder. The decoder network on the right takes this state and uses it to initialize the first"
10 Machine Translation and Encoder-Decoder Models 10.3 Encoder-Decoder with RNNs p(y|x) = p(y_1|x) p(y_2|y_1, x) p(y_3|y_1, y_2, x) ... p(y_m|y_1, ..., y_{m−1}, x) (10.10)
10 Machine Translation and Encoder-Decoder Models 10.3 Encoder-Decoder with RNNs hidden state of the decoder. That is, the first decoder RNN cell uses c as its prior hidden state h_0^d. The decoder autoregressively generates a sequence of outputs, an element at a time, until an end-of-sequence marker is generated. Each hidden state is conditioned on the previous hidden state and the output generated in the previous state. Figure 10.6 Allowing every hidden state of the decoder (not just the first decoder state) to be influenced by the context c produced by the encoder.
10 Machine Translation and Encoder-Decoder Models 10.3 Encoder-Decoder with RNNs [Figure 10.6 residue: decoder hidden states h_1^d, h_2^d, ..., h_i^d, outputs y_1, y_2, ..., y_i, and context c.]
@@ -1589,7 +1589,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N
10 Machine Translation and Encoder-Decoder Models 10.8 MT Evaluation 10.8.3 Automatic Evaluation: Embedding-Based Methods EQUATION
10 Machine Translation and Encoder-Decoder Models 10.9 Bias and Ethical Issues Machine translation raises many of the same ethical issues that we've discussed in earlier chapters. For example, consider MT systems translating from Hungarian (which has the gender-neutral pronoun ő) or Spanish (which often drops pronouns) into English (in which pronouns are obligatory, and they have grammatical gender). When translating a reference to a person described without specified gender, MT systems often default to male gender (Schiebinger 2014, Prates et al. 2019). And MT systems often assign gender according to cultural stereotypes of the sort we saw in Section 6.11. Fig. 10.19 shows examples from Prates et al. (2019), in which Hungarian gender-neutral ő is a nurse is translated with she, but gender-neutral ő is a CEO is translated with he. Prates et al. (2019) find that these stereotypes can't completely be accounted for by gender bias in US labor statistics, because the biases are amplified by MT systems, with pronouns being mapped to male or female gender with a probability higher than if the mapping was based on actual labor employment statistics.
10 Machine Translation and Encoder-Decoder Models 10.9 Bias and Ethical Issues Hungarian (gender neutral) source → English MT output: ő egy ápoló → she is a nurse; ő egy tudós → he is a scientist; ő egy mérnök → he is an engineer; ő egy pék → he is a baker; ő egy tanár → she is a teacher; ő egy esküvőszervező → she is a wedding organizer; ő egy vezérigazgató → he is a CEO. Figure 10.19 When translating from gender-neutral languages like Hungarian into English, current MT systems interpret people from traditionally male-dominated occupations as male, and traditionally female-dominated occupations as female (Prates et al., 2019).
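Returning to the encoder-decoder computation formalized in Eqs. 10.8-10.10 above, the following PyTorch sketch is a deliberately simplified illustration, not the book's model: single-layer unidirectional GRUs stand in for the stacked biLSTM encoder, decoding is greedy rather than beam search, and the vocabulary sizes and the sos_id/eos_id token indices are assumptions made for the example.

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, d_h):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_h)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_h)
        self.encoder = nn.GRU(d_h, d_h, batch_first=True)
        self.decoder = nn.GRU(d_h, d_h, batch_first=True)
        self.out = nn.Linear(d_h, tgt_vocab)

    def translate(self, src_ids, sos_id, eos_id, max_len=30):
        # Encoder: its final hidden state is the context c (h_n^e in the text).
        _, c = self.encoder(self.src_embed(src_ids))
        h = c                                    # c initializes the decoder state h_0^d
        y = torch.tensor([[sos_id]])
        output = []
        for _ in range(max_len):
            # Each step is conditioned on the previous hidden state and the previous output word.
            o, h = self.decoder(self.tgt_embed(y), h)
            y = self.out(o[:, -1]).argmax(-1, keepdim=True)   # greedy choice of y_t
            if y.item() == eos_id:
                break
            output.append(y.item())
        return output

model = EncoderDecoder(src_vocab=5000, tgt_vocab=6000, d_h=64)
print(model.translate(torch.tensor([[11, 42, 7, 3]]), sos_id=1, eos_id=2))

In training, teacher forcing would be applied to the concatenated source, separator, and target sequence, and, as the surrounding text notes, the full model additionally uses attention so that every decoder step, not just the first, can consult the encoder's states.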
-10 Machine Translation and Encoder-Decoder Models 10.9 Bias and Ethical Issues Similarly, a recent challenge set, the WinoMT dataset (Stanovsky et al., 2019), shows that MT systems perform worse when they are asked to translate sentences that describe people with non-stereotypical gender roles, like "The doctor asked the nurse to help her in the operation".
+10 Machine Translation and Encoder-Decoder Models 10.9 Bias and Ethical Issues "Similarly, a recent challenge set, the WinoMT dataset (Stanovsky et al., 2019), shows that MT systems perform worse when they are asked to translate sentences that describe people with non-stereotypical gender roles, like ""The doctor asked the nurse to help her in the operation""."
10 Machine Translation and Encoder-Decoder Models 10.9 Bias and Ethical Issues Many ethical questions in MT require further research. One open problem is developing metrics for knowing what our systems don't know. This is because MT systems can be used in urgent situations where human translators may be unavailable or delayed: in medical domains, to help translate when patients and doctors don't speak the same language, or in legal domains, to help judges or lawyers communicate with witnesses or defendants. In order to 'do no harm', systems need ways to assign confidence values to candidate translations, so they can abstain from giving incorrect translations that may cause harm.
10 Machine Translation and Encoder-Decoder Models 10.9 Bias and Ethical Issues Another is the need for low-resource algorithms that can translate to and from all the world's languages, the vast majority of which do not have large parallel training texts available. This problem is exacerbated by the tendency of many MT approaches to focus on the case where one of the languages is English (Anastasopoulos and Neubig, 2020). ∀ et al. (2020) propose a participatory design process to encourage content creators, curators, and language technologists who speak these low-resourced languages to participate in developing MT algorithms. They provide online groups, mentoring, and infrastructure, and report on a case study on developing MT algorithms for low-resource African languages.
10 Machine Translation and Encoder-Decoder Models 10.10 Summary Machine translation is one of the most widely used applications of NLP, and the encoder-decoder model, first developed for MT, is a key tool that has applications throughout NLP.
@@ -1600,7 +1600,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N
10 Machine Translation and Encoder-Decoder Models 10.11 Bibliographical and Historical Notes increasing depth of analysis required (on both the analysis and generation end) as we move from the direct approach through transfer approaches to interlingual approaches. In addition, it shows the decreasing amount of transfer knowledge needed as we move up the triangle, from huge amounts of transfer at the direct level (almost all knowledge is transfer knowledge for each word) through transfer (transfer rules only for parse trees or thematic roles) through interlingua (no specific transfer knowledge). We can view the encoder-decoder network as an interlingual approach, with attention acting as an integration of direct and transfer, allowing words or their representations to be directly accessed by the decoder.
Statistical methods began to be applied around 1990, enabled first by the development of large bilingual corpora like the Hansard corpus of the proceedings of the Canadian Parliament, which are kept in both French and English, and then by the growth of the Web. Early on, a number of researchers showed that it was possible to extract pairs of aligned sentences from bilingual corpora, using words or simple cues like sentence length (Kay and Röscheisen 1988, Gale and Church 1991, Gale and Church 1993, Kay and Röscheisen 1993).
10 Machine Translation and Encoder-Decoder Models 10.11 Bibliographical and Historical Notes At the same time, the IBM group, drawing directly on the noisy channel model for speech recognition, proposed two related paradigms for statistical MT. These include the generative algorithms that became known as IBM Models 1 through 5, implemented in the Candide system. The algorithms (except for the decoder) were published in full detail, encouraged by the US government which had partially funded the work. [...] BLEU (Papineni et al., 2002), NIST (Doddington, 2002), TER (Translation Error Rate) (Snover et al., 2006), Precision and Recall (Turian et al., 2003), and METEOR (Banerjee and Lavie, 2005); character n-gram overlap methods like chrF (Popović, 2015) came later. More recent evaluation work, echoing the ALPAC report, has emphasized the importance of careful statistical methodology and the use of human evaluation (Kocmi et al., 2021; Marie et al., 2021).
10 Machine Translation and Encoder-Decoder Models 10.11 Bibliographical and Historical Notes The early history of MT is surveyed in Hutchins 1986 and 1997; Nirenburg et al. (2002) collects early readings. See Croft (1990) or Comrie (1989) for introductions to linguistic typology.
-11 Transfer Learning with Pretrained Language Models and Contextual Embeddings "How much do we know at any time? Much more, or so I believe, than we know we know." Agatha Christie, The Moving Finger
+11 Transfer Learning with Pretrained Language Models and Contextual Embeddings """How much do we know at any time? Much more, or so I believe, than we know we know."" Agatha Christie, The Moving Finger"
11 Transfer Learning with Pretrained Language Models and Contextual Embeddings Fluent speakers bring an enormous amount of knowledge to bear during comprehension and production of language. This knowledge is embodied in many forms, perhaps most obviously in the vocabulary. That is, in the rich representations associated with the words we know, including their grammatical function, meaning, real-world reference, and pragmatic function. This makes the vocabulary a useful lens to explore the acquisition of knowledge from text, by both people and machines.
11 Transfer Learning with Pretrained Language Models and Contextual Embeddings Estimates of the size of adult vocabularies vary widely both within and across languages. For example, estimates of the vocabulary size of young adult speakers of American English range from 30,000 to 100,000 depending on the resources used to make the estimate and the definition of what it means to know a word. What is agreed upon is that the vast majority of words that mature speakers use in their day-to-day interactions are acquired early in life through spoken interactions in context with caregivers and peers, usually well before the start of formal schooling.
This active vocabulary is extremely limited compared to the size of the adult vocabulary (usually on the order of 2000 words for young speakers) and is quite stable, with very few additional words learned via casual conversation beyond this early stage. Obviously, this leaves a very large number of words to be acquired by some other means. 11 Transfer Learning with Pretrained Language Models and Contextual Embeddings A simple consequence of these facts is that children have to learn about 7 to 10 words a day, every single day, to arrive at observed vocabulary levels by the time they are 20 years of age. And indeed empirical estimates of vocabulary growth in late elementary through high school are consistent with this rate. How do children achieve this rate of vocabulary growth given their daily experiences during this period? We know that most of this growth is not happening through direct vocabulary instruction in school since these methods are largely ineffective, and are not deployed at a rate that would result in the reliable acquisition of words at the required rate. @@ -1695,7 +1695,7 @@ Advanced: Perplexity's Relation to Entropy 3.10 Bibliographical and Historical N 11 Transfer Learning with Pretrained Language Models and Contextual Embeddings 11.3 Transfer Learning through Fine-Tuning 11.3.3 Sequence Labelling Sanitas I-LOC is O in O Sunshine B-LOC Canyon I-LOC . O 11 Transfer Learning with Pretrained Language Models and Contextual Embeddings 11.3 Transfer Learning through Fine-Tuning 11.3.3 Sequence Labelling Unfortunately, the WordPiece tokenization for this sentence yields the following sequence of tokens which doesn't align directly with BIO tags in the ground truth annotation: 11 Transfer Learning with Pretrained Language Models and Contextual Embeddings 11.3 Transfer Learning through Fine-Tuning 11.3.3 Sequence Labelling 'Mt ', '.', 'San', '##itas', 'is', 'in', 'Sunshine', 'Canyon' '.' To deal with this misalignment, we need a way to assign BIO tags to subword tokens during training and a corresponding way to recover word-level tags from subwords during decoding. For training, we can just assign the gold-standard tag associated with each word to all of the subword tokens derived from it. -11 Transfer Learning with Pretrained Language Models and Contextual Embeddings 11.3 Transfer Learning through Fine-Tuning 11.3.3 Sequence Labelling For decoding, the simplest approach is to use the argmax BIO tag associated with the first subword token of a word. Thus, in our example, the BIO tag assigned to "Mt" would be assigned to "Mt." and the tag assigned to "San" would be assigned to "Sanitas", effectively ignoring the information in the tags assigned to "." and "##itas". More complex approaches combine the distribution of tag probabilities across the subwords in an attempt to find an optimal word-level tag. +11 Transfer Learning with Pretrained Language Models and Contextual Embeddings 11.3 Transfer Learning through Fine-Tuning 11.3.3 Sequence Labelling "For decoding, the simplest approach is to use the argmax BIO tag associated with the first subword token of a word. Thus, in our example, the BIO tag assigned to ""Mt"" would be assigned to ""Mt."" and the tag assigned to ""San"" would be assigned to ""Sanitas"", effectively ignoring the information in the tags assigned to ""."" and ""##itas"". More complex approaches combine the distribution of tag probabilities across the subwords in an attempt to find an optimal word-level tag." 
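As a concrete illustration of the subword alignment scheme just described, here is a small Python sketch; the toy tokenizer and the helper names spread_tags and recover_tags are assumptions made for the example, not an API from the chapter. Gold word-level BIO tags are copied onto every subword for training, and at decoding time only the tag predicted for a word's first subword is kept.

# Illustrative sketch of BIO tag alignment between words and subword tokens.
def spread_tags(words, word_tags, tokenize):
    """Training direction: give every subword the tag of the word it came from."""
    sub_tokens, sub_tags = [], []
    for word, tag in zip(words, word_tags):
        pieces = tokenize(word)
        sub_tokens.extend(pieces)
        sub_tags.extend([tag] * len(pieces))
    return sub_tokens, sub_tags

def recover_tags(words, sub_tag_predictions, tokenize):
    """Decoding direction: keep only the tag predicted for each word's first subword."""
    word_tags, i = [], 0
    for word in words:
        word_tags.append(sub_tag_predictions[i])   # argmax tag of the first subword
        i += len(tokenize(word))                   # skip the remaining subwords
    return word_tags

# A toy stand-in for a WordPiece-style tokenizer (an assumption, not a real vocabulary).
toy_pieces = {"Mt.": ["Mt", "."], "Sanitas": ["San", "##itas"]}
tokenize = lambda w: toy_pieces.get(w, [w])

words = ["Mt.", "Sanitas", "is", "in", "Sunshine", "Canyon", "."]
tags  = ["B-LOC", "I-LOC", "O", "O", "B-LOC", "I-LOC", "O"]
subs, sub_tags = spread_tags(words, tags, tokenize)
print(subs)                                    # ['Mt', '.', 'San', '##itas', 'is', 'in', 'Sunshine', 'Canyon', '.']
print(recover_tags(words, sub_tags, tokenize)) # recovers the original word-level tags

In a real system sub_tag_predictions would come from the argmax over the fine-tuned model's per-subword tag distributions rather than from the gold tags used here for checking round-trip behavior.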
11 Transfer Learning with Pretrained Language Models and Contextual Embeddings 11.3 Transfer Learning through Fine-Tuning 11.3.4 Fine-tuning for Span-Based Applications Span-oriented applications operate in a middle ground between sequence level and token level tasks. That is, in span-oriented applications the focus is on generating and operating with representations of contiguous sequences of tokens. Typical operations include identifying spans of interest, classifying spans according to some labeling scheme, and determining relations among discovered spans. Applications include named entity recognition, question answering, syntactic parsing, semantic role labeling and coreference resolution.
11 Transfer Learning with Pretrained Language Models and Contextual Embeddings 11.3 Transfer Learning through Fine-Tuning 11.3.4 Fine-tuning for Span-Based Applications Formally, given an input sequence x consisting of T tokens, (x_1, x_2, ..., x_T), a span is a contiguous sequence of tokens with start i and end j such that 1 ≤ i ≤ j ≤ T. This formulation results in a total set of spans equal to T(T+1)/2.
11 Transfer Learning with Pretrained Language Models and Contextual Embeddings 11.3 Transfer Learning through Fine-Tuning 11.3.4 Fine-tuning for Span-Based Applications For practical purposes, span-based models often impose an application-specific length limit L, so the legal spans are limited to those where j − i < L. In the following, we'll refer to the enumerated set of legal spans in x as S(x).
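To make the span bookkeeping concrete, here is a short Python sketch (the function name and the toy value of T are illustrative assumptions) that enumerates the legal spans S(x) under a length limit L and checks the unrestricted total of T(T+1)/2.

# Illustrative sketch: enumerate the legal spans S(x) of a T-token input,
# optionally restricted to spans of length at most L.
def enumerate_spans(T, L=None):
    spans = []
    for i in range(1, T + 1):              # 1-based start index, as in the text
        for j in range(i, T + 1):          # 1 <= i <= j <= T
            if L is None or (j - i) < L:   # application-specific length limit
                spans.append((i, j))
    return spans

T = 6
all_spans = enumerate_spans(T)
assert len(all_spans) == T * (T + 1) // 2   # total number of unrestricted spans
print(len(all_spans), enumerate_spans(T, L=3)[:5])

In a span-based model, each (i, j) pair in S(x) would then be mapped to a span representation built from the contextual embeddings of its tokens before being scored or classified.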