diff --git "a/slp3ed.csv" "b/slp3ed.csv" --- "a/slp3ed.csv" +++ "b/slp3ed.csv" @@ -88,8 +88,7 @@ n_chapter,chapter,n_section,section,n_subsection,subsection,text 2,Regular Expressions,2.3,Corpora,,,"Because language is so situated, when developing computational models for language processing from a corpus, it's important to consider who produced the language, in what context, for what purpose. How can a user of a dataset know all these details? The best way is for the corpus creator to build a datasheet (Gebru et al., 2020) or data statement (Bender and Friedman, 2018) for each corpus. A datasheet specifies properties of a dataset like:" 2,Regular Expressions,2.3,Corpora,,,"Motivation: Why was the corpus collected, by whom, and who funded it? Situation: When and in what situation was the text written/spoken? For example, was there a task? Was the language originally spoken conversation, edited text, social media communication, monologue vs. dialogue? Language variety: What language (including dialect/region) was the corpus in? Speaker demographics: What was, e.g., age or gender of the authors of the text? Collection process: How big is the data? If it is a subsample how was it sampled?" 2,Regular Expressions,2.3,Corpora,,,"Was the data collected with consent? How was the data pre-processed, and what metadata is available? Annotation process: What are the annotations, what are the demographics of the annotators, how were they trained, how was the data annotated? Distribution: Are there copyright or other intellectual property restrictions?" -2,Regular Expressions,2.4,Text Normalization,,,"Before almost any natural language processing of a text, the text has to be normalized. At least three tasks are commonly applied as part of any normalization process:" -2,Regular Expressions,2.4,Text Normalization,,,1. Tokenizing (segmenting) words 2. Normalizing word formats 3. Segmenting sentences In the next sections we walk through each of these tasks. +2,Regular Expressions,2.4,Text Normalization,,,"Before almost any natural language processing of a text, the text has to be normalized. At least three tasks are commonly applied as part of any normalization process: 1. Tokenizing (segmenting) words 2. Normalizing word formats 3. Segmenting sentences In the next sections we walk through each of these tasks." 2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"Let's begin with an easy, if somewhat naive version of word tokenization and normalization (and frequency computation) that can be accomplished for English solely in a single UNIX command-line, inspired by Church (1994) . We'll make use of some Unix commands: tr, used to systematically change particular characters in the input; sort, which sorts input lines in alphabetical order; and uniq, which collapses and counts adjacent identical lines." 2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,"For example let's begin with the 'complete words' of Shakespeare in one file, sh.txt. We can use tr to tokenize the words by changing every sequence of nonalphabetic characters to a newline ('A-Za-z' means alphabetic, the -c option complements to non-alphabet, and the -s option squeezes all sequences into a single character): tr -sc 'A-Za-z' '\n' < sh.txt" 2,Regular Expressions,2.4,Text Normalization,2.4.1,Unix Tools for Crude Tokenization and Normalization,The output of this command will be: THE SONNETS by William Shakespeare From fairest creatures We ... 
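For readers who want to stay in a single language, roughly the same crude tokenization and frequency computation can be sketched in Python. This is a rough equivalent of the tr-based pipeline described above, not the text's own code; sh.txt is the same assumed Shakespeare file:

import re
from collections import Counter

# Split on runs of non-alphabetic characters, like tr -sc 'A-Za-z' '\n',
# then count the resulting tokens, like the sort and uniq -c steps described above.
with open("sh.txt", encoding="utf-8") as f:
    text = f.read()

tokens = [t for t in re.split(r"[^A-Za-z]+", text) if t]
counts = Counter(tokens)

for word, freq in counts.most_common(10):
    print(freq, word)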
@@ -317,7 +316,7 @@ n_chapter,chapter,n_section,section,n_subsection,subsection,text 3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,Kneser-Ney 1998). 3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,Kneser-Ney has its roots in a method called absolute discounting. Recall that discounting of the counts for frequent n-grams is necessary to save some probability mass for the smoothing algorithm to distribute to the unseen n-grams. 3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"To see this, we can use a clever idea from Church and Gale (1991) . Consider an n-gram that has count 4. We need to discount this count by some amount. But how much should we discount it? Church and Gale's clever idea was to look at a held-out corpus and just see what the count is for all those bigrams that had count 4 in the training set. They computed a bigram grammar from 22 million words of AP newswire and then checked the counts of each of these bigrams in another 22 million words. On average, a bigram that occurred 4 times in the first 22 million words occurred 3.23 times in the next 22 million words. Fig. 3 .9 from Church and Gale (1991) shows these counts for bigrams with c from 0 to 9. .9 For all bigrams in 22 million words of AP newswire of count 0, 1, 2,...,9, the counts of these bigrams in a held-out corpus also of 22 million words." -3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"Notice in Fig. 3 .9 that except for the held-out counts for 0 and 1, all the other bigram counts in the held-out set could be estimated pretty well by just subtracting 0.75 from the count in the training set! Absolute discounting formalizes this intu-Absolute discounting ition by subtracting a fixed (absolute) discount d from each count. The intuition is that since we have good estimates already for the very high counts, a small discount d won't affect them much. It will mainly modify the smaller counts, for which we don't necessarily trust the estimate anyway, and Fig. 3 .9 suggests that in practice this discount is actually a good one for bigrams with counts 2 through 9. The equation for interpolated absolute discounting applied to bigrams:" +3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"Notice in Fig. 3 .9 that except for the held-out counts for 0 and 1, all the other bigram counts in the held-out set could be estimated pretty well by just subtracting 0.75 from the count in the training set! Absolute discounting formalizes this intuition by subtracting a fixed (absolute) discount d from each count. The intuition is that since we have good estimates already for the very high counts, a small discount d won't affect them much. It will mainly modify the smaller counts, for which we don't necessarily trust the estimate anyway, and Fig. 3 .9 suggests that in practice this discount is actually a good one for bigrams with counts 2 through 9. The equation for interpolated absolute discounting applied to bigrams:" 3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,P AbsoluteDiscounting (w i |w i−1 ) = C(w i−1 w i ) − d v C(w i−1 v) + λ (w i−1 )P(w i ) (3.30) 3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"The first term is the discounted bigram, and the second term is the unigram with an interpolation weight λ . We could just set all the d values to .75, or we could keep a separate discount value of 0.5 for the bigrams with counts of 1." 
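To make Eq. 3.30 concrete, here is a minimal sketch of interpolated absolute discounting for bigrams. It is a sketch, not the text's code: the toy counts and unigram probabilities are made up, and the interpolation weight lambda(w_{i-1}) is set to the standard choice d * |{v : C(w_{i-1} v) > 0}| / sum_v C(w_{i-1} v), a formula the text has not spelled out at this point.

from collections import Counter

def absolute_discounted_bigram(bigram_counts, unigram_probs, d=0.75):
    """P(w_i | w_{i-1}) = max(C(w_{i-1} w_i) - d, 0) / sum_v C(w_{i-1} v)
    + lambda(w_{i-1}) * P(w_i), as in Eq. 3.30."""
    context_total = Counter()
    context_types = Counter()
    for (w1, w2), c in bigram_counts.items():
        context_total[w1] += c
        context_types[w1] += 1            # number of distinct continuations of w1

    def prob(w1, w2):
        total = context_total[w1]
        if total == 0:                    # unseen context: fall back to the unigram
            return unigram_probs.get(w2, 0.0)
        discounted = max(bigram_counts.get((w1, w2), 0) - d, 0) / total
        lam = d * context_types[w1] / total   # probability mass saved by discounting
        return discounted + lam * unigram_probs.get(w2, 0.0)

    return prob

# Assumed toy counts, just to exercise the formula:
bigrams = Counter({("I", "am"): 3, ("I", "do"): 1, ("am", "Sam"): 2})
unigrams = {"I": 0.3, "am": 0.3, "do": 0.2, "Sam": 0.2}
p = absolute_discounted_bigram(bigrams, unigrams)
print(p("I", "am"), p("I", "Sam"))   # a seen vs. an unseen bigram for the same context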
3,N-gram Language Models,3.6,Kneser-Ney Smoothing,,,"Kneser-Ney discounting (Kneser and Ney, 1995) augments absolute discounting with a more sophisticated way to handle the lower-order unigram distribution. Consider the job of predicting the next word in this sentence, assuming we are interpolating a bigram and a unigram model. I can't see without my reading . The word glasses seems much more likely to follow here than, say, the word Kong, so we'd like our unigram model to prefer glasses. But in fact it's Kong that is more common, since Hong Kong is a very frequent word. A standard unigram model will assign Kong a higher probability than glasses. We would like to capture the intuition that although Kong is frequent, it is mainly only frequent in the phrase Hong Kong, that is, after the word Hong. The word glasses has a much wider distribution." @@ -347,50 +346,50 @@ n_chapter,chapter,n_section,section,n_subsection,subsection,text 3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,S(w i |w i−k+1 : i−1 ) =    count(w i−k+1 : i ) count(w i−k+1 : i−1 ) if count(w i−k+1 : i ) > 0 λ S(w i |w i−k+2 : i−1 ) otherwise (3.40) 3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,"The backoff terminates in the unigram, which has probability S(w) = count(w)" 3,N-gram Language Models,3.7,Huge Language Models and Stupid Backoff,,,. Brants et al. (2007) find that a value of 0.4 worked well for λ . -Advanced:,Perplexity's Relation to Entropy,,,,,"We introduced perplexity in Section 3.2.1 as a way to evaluate n-gram models on a test set. A better n-gram model is one that assigns a higher probability to the test data, and perplexity is a normalized version of the probability of the test set. The perplexity measure actually arises from the information-theoretic concept of cross-entropy, which explains otherwise mysterious properties of perplexity (why the inverse probability, for example?) and its relationship to entropy. Entropy is a Entropy measure of information. Given a random variable X ranging over whatever we are predicting (words, letters, parts of speech, the set of which we'll call χ) and with a particular probability function, call it p(x), the entropy of the random variable X is:" -Advanced:,Perplexity's Relation to Entropy,,,,,H(X) = − x∈χ p(x) log 2 p(x) (3.41) -Advanced:,Perplexity's Relation to Entropy,,,,,"The log can, in principle, be computed in any base. If we use log base 2, the resulting value of entropy will be measured in bits." -Advanced:,Perplexity's Relation to Entropy,,,,,One intuitive way to think about entropy is as a lower bound on the number of bits it would take to encode a certain decision or piece of information in the optimal coding scheme. -Advanced:,Perplexity's Relation to Entropy,,,,,"Consider an example from the standard information theory textbook Cover and Thomas (1991) . Imagine that we want to place a bet on a horse race but it is too far to go all the way to Yonkers Racetrack, so we'd like to send a short message to the bookie to tell him which of the eight horses to bet on. One way to encode this message is just to use the binary representation of the horse's number as the code; thus, horse 1 would be 001, horse 2 010, horse 3 011, and so on, with horse 8 coded as 000. If we spend the whole day betting and each horse is coded with 3 bits, on average we would be sending 3 bits per race." -Advanced:,Perplexity's Relation to Entropy,,,,,Can we do better? 
Suppose that the spread is the actual distribution of the bets placed and that we represent it as the prior probability of each horse as follows: The entropy of the random variable X that ranges over horses gives us a lower bound on the number of bits and is -Advanced:,Perplexity's Relation to Entropy,,,,,H(X) = − i=8 i=1 p(i) log p(i) = − 1 2 log 1 2 − 1 4 log 1 4 − 1 8 log 1 8 − 1 16 log 1 16 −4( 1 64 log 1 64 ) = 2 bits (3.42) -Advanced:,Perplexity's Relation to Entropy,,,,,"A code that averages 2 bits per race can be built with short encodings for more probable horses, and longer encodings for less probable horses. For example, we could encode the most likely horse with the code 0, and the remaining horses as 10, then 110, 1110, 111100, 111101, 111110, and 111111." -Advanced:,Perplexity's Relation to Entropy,,,,,"What if the horses are equally likely? We saw above that if we used an equallength binary code for the horse numbers, each horse took 3 bits to code, so the average was 3. Is the entropy the same? In this case each horse would have a probability of 1 8 . The entropy of the choice of horses is then" -Advanced:,Perplexity's Relation to Entropy,,,,,H(X) = − i=8 i=1 1 8 log 1 8 = − log 1 8 = 3 bits (3.43) -Advanced:,Perplexity's Relation to Entropy,,,,,"Until now we have been computing the entropy of a single variable. But most of what we will use entropy for involves sequences. For a grammar, for example, we will be computing the entropy of some sequence of words W = {w 1 , w 2 , . . . , w n }. One way to do this is to have a variable that ranges over sequences of words. For example we can compute the entropy of a random variable that ranges over all finite sequences of words of length n in some language L as follows:" -Advanced:,Perplexity's Relation to Entropy,,,,,"H(w 1 , w 2 , . . . , w n ) = − w 1 : n ∈L p(w 1 : n ) log p(w 1 : n ) (3.44)" -Advanced:,Perplexity's Relation to Entropy,,,,,We could define the entropy rate (we could also think of this as the per-word entropy rate entropy) as the entropy of this sequence divided by the number of words: -Advanced:,Perplexity's Relation to Entropy,,,,,1 n H(w 1 : n ) = − 1 n w 1 : n ∈L p(w 1 : n ) log p(w 1 : n ) (3.45) -Advanced:,Perplexity's Relation to Entropy,,,,,"But to measure the true entropy of a language, we need to consider sequences of infinite length. If we think of a language as a stochastic process L that produces a sequence of words, and allow W to represent the sequence of words w 1 , . . . , w n , then L's entropy rate H(L) is defined as" -Advanced:,Perplexity's Relation to Entropy,,,,,"H(L) = lim n→∞ 1 n H(w 1 , w 2 , . . . , w n ) = − lim n→∞ 1 n W ∈L p(w 1 , . . . , w n ) log p(w 1 , . . . , w n ) (3.46)" -Advanced:,Perplexity's Relation to Entropy,,,,,"The Shannon-McMillan-Breiman theorem (Algoet and Cover 1988, Cover and Thomas 1991) states that if the language is regular in certain ways (to be exact, if it is both stationary and ergodic)," -Advanced:,Perplexity's Relation to Entropy,,,,,H(L) = lim n→∞ − 1 n log p(w 1 w 2 . . . w n ) (3.47) -Advanced:,Perplexity's Relation to Entropy,,,,,"That is, we can take a single sequence that is long enough instead of summing over all possible sequences. The intuition of the Shannon-McMillan-Breiman theorem is that a long-enough sequence of words will contain in it many other shorter sequences and that each of these shorter sequences will reoccur in the longer sequence according to their probabilities." 
-Advanced:,Perplexity's Relation to Entropy,,,,,"A stochastic process is said to be stationary if the probabilities it assigns to a Stationary sequence are invariant with respect to shifts in the time index. In other words, the probability distribution for words at time t is the same as the probability distribution at time t + 1. Markov models, and hence n-grams, are stationary. For example, in a bigram, P i is dependent only on P i−1 . So if we shift our time index by x, P i+x is still dependent on P i+x−1 . But natural language is not stationary, since as we show in Chapter 12, the probability of upcoming words can be dependent on events that were arbitrarily distant and time dependent. Thus, our statistical models only give an approximation to the correct distributions and entropies of natural language. To summarize, by making some incorrect but convenient simplifying assumptions, we can compute the entropy of some stochastic process by taking a very long sample of the output and computing its average log probability. Now we are ready to introduce cross-entropy. The cross-entropy is useful when cross-entropy we don't know the actual probability distribution p that generated some data. It allows us to use some m, which is a model of p (i.e., an approximation to p). The cross-entropy of m on p is defined by" -Advanced:,Perplexity's Relation to Entropy,,,,,"H(p, m) = lim n→∞ − 1 n W ∈L p(w 1 , . . . , w n ) log m(w 1 , . . . , w n ) (3.48)" -Advanced:,Perplexity's Relation to Entropy,,,,,"That is, we draw sequences according to the probability distribution p, but sum the log of their probabilities according to m." -Advanced:,Perplexity's Relation to Entropy,,,,,"Again, following the Shannon-McMillan-Breiman theorem, for a stationary ergodic process:" -Advanced:,Perplexity's Relation to Entropy,,,,,"H(p, m) = lim n→∞ − 1 n log m(w 1 w 2 . . . w n ) (3.49)" -Advanced:,Perplexity's Relation to Entropy,,,,,"This means that, as for entropy, we can estimate the cross-entropy of a model m on some distribution p by taking a single sequence that is long enough instead of summing over all possible sequences." -Advanced:,Perplexity's Relation to Entropy,,,,,"What makes the cross-entropy useful is that the cross-entropy H(p, m) is an upper bound on the entropy H(p). For any model m:" -Advanced:,Perplexity's Relation to Entropy,,,,,"H(p) ≤ H(p, m) (3.50)" -Advanced:,Perplexity's Relation to Entropy,,,,,"This means that we can use some simplified model m to help estimate the true entropy of a sequence of symbols drawn according to probability p. The more accurate m is, the closer the cross-entropy H(p, m) will be to the true entropy H(p). Thus, the difference between H(p, m) and H(p) is a measure of how accurate a model is. Between two models m 1 and m 2 , the more accurate model will be the one with the lower cross-entropy. (The cross-entropy can never be lower than the true entropy, so a model cannot err by underestimating the true entropy.)" -Advanced:,Perplexity's Relation to Entropy,,,,,"We are finally ready to see the relation between perplexity and cross-entropy as we saw it in Eq. 3.49. Cross-entropy is defined in the limit as the length of the observed word sequence goes to infinity. We will need an approximation to crossentropy, relying on a (sufficiently long) sequence of fixed length. This approximation to the cross-entropy of a model" -Advanced:,Perplexity's Relation to Entropy,,,,,M = P(w i |w i−N+1 : i−1 ) on a sequence of words W is H(W ) = − 1 N log P(w 1 w 2 . . . 
w N ) (3.51) -Advanced:,Perplexity's Relation to Entropy,,,,,The perplexity of a model P on a sequence of words W is now formally defined as perplexity 2 raised to the power of this cross-entropy: -Advanced:,Perplexity's Relation to Entropy,,,,,Perplexity(W ) = 2 H(W ) = P(w 1 w 2 . . . w N ) − 1 N = N 1 P(w 1 w 2 . . . w N ) = N N i=1 1 P(w i |w 1 . . . w i−1 ) (3.52) -Advanced:,Perplexity's Relation to Entropy,3.9,Summary,,,"This chapter introduced language modeling and the n-gram, one of the most widely used tools in language processing." -Advanced:,Perplexity's Relation to Entropy,3.9,Summary,,,"• Language models offer a way to assign a probability to a sentence or other sequence of words, and to predict a word from preceding words. • n-grams are Markov models that estimate words from a fixed window of previous words. n-gram probabilities can be estimated by counting in a corpus and normalizing (the maximum likelihood estimate). • n-gram language models are evaluated extrinsically in some task, or intrinsically using perplexity. • The perplexity of a test set according to a language model is the geometric mean of the inverse test set probability computed by the model. • Smoothing algorithms provide a more sophisticated way to estimate the probability of n-grams. Commonly used smoothing algorithms for n-grams rely on lower-order n-gram counts through backoff or interpolation." -Advanced:,Perplexity's Relation to Entropy,3.9,Summary,,,• Both backoff and interpolation require discounting to create a probability distribution. • Kneser-Ney smoothing makes use of the probability of a word being a novel continuation. The interpolated Kneser-Ney smoothing algorithm mixes a discounted probability with a lower-order continuation probability. -Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,"The underlying mathematics of the n-gram was first proposed by Markov (1913) , who used what are now called Markov chains (bigrams and trigrams) to predict whether an upcoming letter in Pushkin's Eugene Onegin would be a vowel or a consonant. Markov classified 20,000 letters as V or C and computed the bigram and trigram probability that a given letter would be a vowel given the previous one or two letters. Shannon (1948) applied n-grams to compute approximations to English word sequences. Based on Shannon's work, Markov models were commonly used in engineering, linguistic, and psychological work on modeling word sequences by the 1950s. In a series of extremely influential papers starting with Chomsky (1956) and including Chomsky (1957) and Miller and Chomsky (1963) , Noam Chomsky argued that ""finite-state Markov processes"", while a possibly useful engineering heuristic, were incapable of being a complete cognitive model of human grammatical knowledge. These arguments led many linguists and computational linguists to ignore work in statistical modeling for decades. The resurgence of n-gram models came from Jelinek and colleagues at the IBM Thomas J. Watson Research Center, who were influenced by Shannon, and Baker at CMU, who was influenced by the work of Baum and colleagues. Independently these two labs successfully used n-grams in their speech recognition systems (Baker 1975b , Jelinek 1976 , Baker 1975a , Bahl et al. 1983 , Jelinek 1990 )." 
-Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,Add-one smoothing derives from Laplace's 1812 law of succession and was first applied as an engineering solution to the zero frequency problem by Jeffreys (1948) based on an earlier Add-K suggestion by Johnson (1932) . Problems with the addone algorithm are summarized in Gale and Church (1994) . -Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,"A wide variety of different language modeling and smoothing techniques were proposed in the 80s and 90s, including Good-Turing discounting-first applied to the n-gram smoothing at IBM by Katz (Nádas 1984, Church and Gale 1991)-Witten-Bell discounting (Witten and Bell, 1991) , and varieties of class-based ngram models that used information about word classes." -Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,"Starting in the late 1990s, Chen and Goodman performed a number of carefully controlled experiments comparing different discounting algorithms, cache models, class-based models, and other language model parameters (Chen and Goodman 1999, Goodman 2006, inter alia) . They showed the advantages of Modified Interpolated Kneser-Ney, which became the standard baseline for n-gram language modeling, especially because they showed that caches and class-based models provided only minor additional improvement. These papers are recommended for any reader with further interest in n-gram language modeling. SRILM (Stolcke, 2002) and KenLM (Heafield 2011 , Heafield et al. 2013 are publicly available toolkits for building n-gram language models." -Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical Notes,,,"Modern language modeling is more commonly done with neural network language models, which solve the major problems with n-grams: the number of parameters increases exponentially as the n-gram order increases, and n-grams have no way to generalize from training to test set. Neural language models instead project words into a continuous space in which words with similar contexts have similar representations. We'll introduce both feedforward language models (Bengio et al. 2006 , Schwenk 2007 in Chapter 7, and recurrent language models (Mikolov, 2012) in Chapter 9." +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"We introduced perplexity in Section 3.2.1 as a way to evaluate n-gram models on a test set. A better n-gram model is one that assigns a higher probability to the test data, and perplexity is a normalized version of the probability of the test set. The perplexity measure actually arises from the information-theoretic concept of cross-entropy, which explains otherwise mysterious properties of perplexity (why the inverse probability, for example?) and its relationship to entropy. Entropy is a Entropy measure of information. Given a random variable X ranging over whatever we are predicting (words, letters, parts of speech, the set of which we'll call χ) and with a particular probability function, call it p(x), the entropy of the random variable X is:" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,H(X) = − x∈χ p(x) log 2 p(x) (3.41) +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"The log can, in principle, be computed in any base. If we use log base 2, the resulting value of entropy will be measured in bits." 
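As a small numerical illustration of Eq. 3.41 (a sketch with assumed example distributions, not anything from the text):

import math

def entropy(probs):
    """H(X) = -sum_x p(x) log2 p(x), in bits (Eq. 3.41)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # a fair coin: exactly 1 bit
print(entropy([0.99, 0.01]))  # a heavily skewed coin: about 0.08 bits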
+3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,One intuitive way to think about entropy is as a lower bound on the number of bits it would take to encode a certain decision or piece of information in the optimal coding scheme. +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"Consider an example from the standard information theory textbook Cover and Thomas (1991) . Imagine that we want to place a bet on a horse race but it is too far to go all the way to Yonkers Racetrack, so we'd like to send a short message to the bookie to tell him which of the eight horses to bet on. One way to encode this message is just to use the binary representation of the horse's number as the code; thus, horse 1 would be 001, horse 2 010, horse 3 011, and so on, with horse 8 coded as 000. If we spend the whole day betting and each horse is coded with 3 bits, on average we would be sending 3 bits per race." +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,Can we do better? Suppose that the spread is the actual distribution of the bets placed and that we represent it as the prior probability of each horse as follows: The entropy of the random variable X that ranges over horses gives us a lower bound on the number of bits and is +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,H(X) = − i=8 i=1 p(i) log p(i) = − 1 2 log 1 2 − 1 4 log 1 4 − 1 8 log 1 8 − 1 16 log 1 16 −4( 1 64 log 1 64 ) = 2 bits (3.42) +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"A code that averages 2 bits per race can be built with short encodings for more probable horses, and longer encodings for less probable horses. For example, we could encode the most likely horse with the code 0, and the remaining horses as 10, then 110, 1110, 111100, 111101, 111110, and 111111." +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"What if the horses are equally likely? We saw above that if we used an equallength binary code for the horse numbers, each horse took 3 bits to code, so the average was 3. Is the entropy the same? In this case each horse would have a probability of 1 8 . The entropy of the choice of horses is then" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,H(X) = − i=8 i=1 1 8 log 1 8 = − log 1 8 = 3 bits (3.43) +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"Until now we have been computing the entropy of a single variable. But most of what we will use entropy for involves sequences. For a grammar, for example, we will be computing the entropy of some sequence of words W = {w 1 , w 2 , . . . , w n }. One way to do this is to have a variable that ranges over sequences of words. For example we can compute the entropy of a random variable that ranges over all finite sequences of words of length n in some language L as follows:" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"H(w 1 , w 2 , . . . , w n ) = − w 1 : n ∈L p(w 1 : n ) log p(w 1 : n ) (3.44)" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,We could define the entropy rate (we could also think of this as the per-word entropy rate entropy) as the entropy of this sequence divided by the number of words: +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,1 n H(w 1 : n ) = − 1 n w 1 : n ∈L p(w 1 : n ) log p(w 1 : n ) (3.45) +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"But to measure the true entropy of a language, we need to consider sequences of infinite length. 
If we think of a language as a stochastic process L that produces a sequence of words, and allow W to represent the sequence of words w 1 , . . . , w n , then L's entropy rate H(L) is defined as" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"H(L) = lim n→∞ 1 n H(w 1 , w 2 , . . . , w n ) = − lim n→∞ 1 n W ∈L p(w 1 , . . . , w n ) log p(w 1 , . . . , w n ) (3.46)" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"The Shannon-McMillan-Breiman theorem (Algoet and Cover 1988, Cover and Thomas 1991) states that if the language is regular in certain ways (to be exact, if it is both stationary and ergodic)," +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,H(L) = lim n→∞ − 1 n log p(w 1 w 2 . . . w n ) (3.47) +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"That is, we can take a single sequence that is long enough instead of summing over all possible sequences. The intuition of the Shannon-McMillan-Breiman theorem is that a long-enough sequence of words will contain in it many other shorter sequences and that each of these shorter sequences will reoccur in the longer sequence according to their probabilities." +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"A stochastic process is said to be stationary if the probabilities it assigns to a Stationary sequence are invariant with respect to shifts in the time index. In other words, the probability distribution for words at time t is the same as the probability distribution at time t + 1. Markov models, and hence n-grams, are stationary. For example, in a bigram, P i is dependent only on P i−1 . So if we shift our time index by x, P i+x is still dependent on P i+x−1 . But natural language is not stationary, since as we show in Chapter 12, the probability of upcoming words can be dependent on events that were arbitrarily distant and time dependent. Thus, our statistical models only give an approximation to the correct distributions and entropies of natural language. To summarize, by making some incorrect but convenient simplifying assumptions, we can compute the entropy of some stochastic process by taking a very long sample of the output and computing its average log probability. Now we are ready to introduce cross-entropy. The cross-entropy is useful when cross-entropy we don't know the actual probability distribution p that generated some data. It allows us to use some m, which is a model of p (i.e., an approximation to p). The cross-entropy of m on p is defined by" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"H(p, m) = lim n→∞ − 1 n W ∈L p(w 1 , . . . , w n ) log m(w 1 , . . . , w n ) (3.48)" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"That is, we draw sequences according to the probability distribution p, but sum the log of their probabilities according to m." +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"Again, following the Shannon-McMillan-Breiman theorem, for a stationary ergodic process:" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"H(p, m) = lim n→∞ − 1 n log m(w 1 w 2 . . . w n ) (3.49)" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"This means that, as for entropy, we can estimate the cross-entropy of a model m on some distribution p by taking a single sequence that is long enough instead of summing over all possible sequences." 
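To make Eq. 3.49 concrete: the cross-entropy of a model m estimated on a single long sample is just the average number of bits per word the model assigns, that is, the average negative log2 probability. A minimal sketch, with an assumed toy unigram model standing in for any language model:

import math

def cross_entropy_estimate(words, model_prob):
    """Approximate H(p, m) = -(1/n) log2 m(w_1 ... w_n) from one long sample (Eq. 3.49)."""
    return -sum(math.log2(model_prob(w)) for w in words) / len(words)

toy_model = {"the": 0.5, "cat": 0.25, "sat": 0.25}   # assumed model probabilities
sample = ["the", "cat", "sat", "the", "cat", "the"]  # assumed observed sample

h = cross_entropy_estimate(sample, lambda w: toy_model[w])
print(h)        # 1.5 bits per word under this model
print(2 ** h)   # about 2.83; exponentiating gives the perplexity, as defined below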
+3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"What makes the cross-entropy useful is that the cross-entropy H(p, m) is an upper bound on the entropy H(p). For any model m:" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"H(p) ≤ H(p, m) (3.50)" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"This means that we can use some simplified model m to help estimate the true entropy of a sequence of symbols drawn according to probability p. The more accurate m is, the closer the cross-entropy H(p, m) will be to the true entropy H(p). Thus, the difference between H(p, m) and H(p) is a measure of how accurate a model is. Between two models m 1 and m 2 , the more accurate model will be the one with the lower cross-entropy. (The cross-entropy can never be lower than the true entropy, so a model cannot err by underestimating the true entropy.)" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,"We are finally ready to see the relation between perplexity and cross-entropy as we saw it in Eq. 3.49. Cross-entropy is defined in the limit as the length of the observed word sequence goes to infinity. We will need an approximation to crossentropy, relying on a (sufficiently long) sequence of fixed length. This approximation to the cross-entropy of a model" +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,M = P(w i |w i−N+1 : i−1 ) on a sequence of words W is H(W ) = − 1 N log P(w 1 w 2 . . . w N ) (3.51) +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,The perplexity of a model P on a sequence of words W is now formally defined as perplexity 2 raised to the power of this cross-entropy: +3,N-gram Language Models,3.8,Perplexity's Relation to Entropy,,,Perplexity(W ) = 2 H(W ) = P(w 1 w 2 . . . w N ) − 1 N = N 1 P(w 1 w 2 . . . w N ) = N N i=1 1 P(w i |w 1 . . . w i−1 ) (3.52) +3,N-gram Language Models,3.9,Summary,,,"This chapter introduced language modeling and the n-gram, one of the most widely used tools in language processing." +3,N-gram Language Models,3.9,Summary,,,"• Language models offer a way to assign a probability to a sentence or other sequence of words, and to predict a word from preceding words. • n-grams are Markov models that estimate words from a fixed window of previous words. n-gram probabilities can be estimated by counting in a corpus and normalizing (the maximum likelihood estimate). • n-gram language models are evaluated extrinsically in some task, or intrinsically using perplexity. • The perplexity of a test set according to a language model is the geometric mean of the inverse test set probability computed by the model. • Smoothing algorithms provide a more sophisticated way to estimate the probability of n-grams. Commonly used smoothing algorithms for n-grams rely on lower-order n-gram counts through backoff or interpolation." +3,N-gram Language Models,3.9,Summary,,,• Both backoff and interpolation require discounting to create a probability distribution. • Kneser-Ney smoothing makes use of the probability of a word being a novel continuation. The interpolated Kneser-Ney smoothing algorithm mixes a discounted probability with a lower-order continuation probability. +3,N-gram Language Models,3.10,Bibliographical and Historical Notes,,,"The underlying mathematics of the n-gram was first proposed by Markov (1913) , who used what are now called Markov chains (bigrams and trigrams) to predict whether an upcoming letter in Pushkin's Eugene Onegin would be a vowel or a consonant. 
Markov classified 20,000 letters as V or C and computed the bigram and trigram probability that a given letter would be a vowel given the previous one or two letters. Shannon (1948) applied n-grams to compute approximations to English word sequences. Based on Shannon's work, Markov models were commonly used in engineering, linguistic, and psychological work on modeling word sequences by the 1950s. In a series of extremely influential papers starting with Chomsky (1956) and including Chomsky (1957) and Miller and Chomsky (1963) , Noam Chomsky argued that ""finite-state Markov processes"", while a possibly useful engineering heuristic, were incapable of being a complete cognitive model of human grammatical knowledge. These arguments led many linguists and computational linguists to ignore work in statistical modeling for decades. The resurgence of n-gram models came from Jelinek and colleagues at the IBM Thomas J. Watson Research Center, who were influenced by Shannon, and Baker at CMU, who was influenced by the work of Baum and colleagues. Independently these two labs successfully used n-grams in their speech recognition systems (Baker 1975b , Jelinek 1976 , Baker 1975a , Bahl et al. 1983 , Jelinek 1990 )." +3,N-gram Language Models,3.10,Bibliographical and Historical Notes,,,Add-one smoothing derives from Laplace's 1812 law of succession and was first applied as an engineering solution to the zero frequency problem by Jeffreys (1948) based on an earlier Add-K suggestion by Johnson (1932) . Problems with the addone algorithm are summarized in Gale and Church (1994) . +3,N-gram Language Models,3.10,Bibliographical and Historical Notes,,,"A wide variety of different language modeling and smoothing techniques were proposed in the 80s and 90s, including Good-Turing discounting-first applied to the n-gram smoothing at IBM by Katz (Nádas 1984, Church and Gale 1991)-Witten-Bell discounting (Witten and Bell, 1991) , and varieties of class-based ngram models that used information about word classes." +3,N-gram Language Models,3.10,Bibliographical and Historical Notes,,,"Starting in the late 1990s, Chen and Goodman performed a number of carefully controlled experiments comparing different discounting algorithms, cache models, class-based models, and other language model parameters (Chen and Goodman 1999, Goodman 2006, inter alia) . They showed the advantages of Modified Interpolated Kneser-Ney, which became the standard baseline for n-gram language modeling, especially because they showed that caches and class-based models provided only minor additional improvement. These papers are recommended for any reader with further interest in n-gram language modeling. SRILM (Stolcke, 2002) and KenLM (Heafield 2011 , Heafield et al. 2013 are publicly available toolkits for building n-gram language models." +3,N-gram Language Models,3.10,Bibliographical and Historical Notes,,,"Modern language modeling is more commonly done with neural network language models, which solve the major problems with n-grams: the number of parameters increases exponentially as the n-gram order increases, and n-grams have no way to generalize from training to test set. Neural language models instead project words into a continuous space in which words with similar contexts have similar representations. We'll introduce both feedforward language models (Bengio et al. 2006 , Schwenk 2007 in Chapter 7, and recurrent language models (Mikolov, 2012) in Chapter 9." 
4,Naive Bayes and Sentiment Classification,,,,,"Classification lies at the heart of both human and machine intelligence. Deciding what letter, word, or image has been presented to our senses, recognizing faces or voices, sorting mail, assigning grades to homeworks; these are all examples of assigning a category to an input. The potential challenges of this task are highlighted by the fabulist Jorge Luis Borges 1964, who imagined classifying animals into:" 4,Naive Bayes and Sentiment Classification,,,,,"(a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel's hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance." 4,Naive Bayes and Sentiment Classification,,,,,"Many language processing tasks involve classification, although luckily our classes are much easier to define than those of Borges. In this chapter we introduce the naive Bayes algorithm and apply it to text categorization, the task of assigning a label or text categorization category to an entire text or document." -4,Naive Bayes and Sentiment Classification,,,,,"We focus on one common text categorization task, sentiment analysis, the ex-sentiment analysis traction of sentiment, the positive or negative orientation that a writer expresses toward some object. A review of a movie, book, or product on the web expresses the author's sentiment toward the product, while an editorial or political text expresses sentiment toward a candidate or political action. Extracting consumer or public sentiment is thus relevant for fields from marketing to politics. The simplest version of sentiment analysis is a binary classification task, and the words of the review provide excellent cues. Consider, for example, the following phrases extracted from positive and negative reviews of movies and restaurants. Words like great, richly, awesome, and pathetic, and awful and ridiculously are very informative cues: + ...zany characters and richly applied satire, and some great plot twists − It was pathetic. The worst part about it was the boxing scenes... + ...awesome caramel sauce and sweet toasty almonds. I love this place! − ...awful pizza and ridiculously overpriced... Spam detection is another important commercial application, the binary classpam detection sification task of assigning an email to one of the two classes spam or not-spam. Many lexical and other features can be used to perform this classification. For example you might quite reasonably be suspicious of an email containing phrases like ""online pharmaceutical"" or ""WITHOUT ANY COST"" or ""Dear Winner""." +4,Naive Bayes and Sentiment Classification,,,,,"We focus on one common text categorization task, sentiment analysis, the extraction of sentiment, the positive or negative orientation that a writer expresses toward some object. A review of a movie, book, or product on the web expresses the author's sentiment toward the product, while an editorial or political text expresses sentiment toward a candidate or political action. Extracting consumer or public sentiment is thus relevant for fields from marketing to politics. The simplest version of sentiment analysis is a binary classification task, and the words of the review provide excellent cues. 
Consider, for example, the following phrases extracted from positive and negative reviews of movies and restaurants. Words like great, richly, awesome, and pathetic, and awful and ridiculously are very informative cues: + ...zany characters and richly applied satire, and some great plot twists − It was pathetic. The worst part about it was the boxing scenes... + ...awesome caramel sauce and sweet toasty almonds. I love this place! − ...awful pizza and ridiculously overpriced... Spam detection is another important commercial application, the binary classification task of assigning an email to one of the two classes spam or not-spam. Many lexical and other features can be used to perform this classification. For example you might quite reasonably be suspicious of an email containing phrases like ""online pharmaceutical"" or ""WITHOUT ANY COST"" or ""Dear Winner""." 4,Naive Bayes and Sentiment Classification,,,,,"Another thing we might want to know about a text is the language it's written in. Texts on social media, for example, can be in any number of languages and we'll need to apply different processing. The task of language id is thus the first step in most language processing pipelines. Related text classification tasks like authorship attribution (determining a text's author) are also relevant to the digital humanities, social sciences, and forensic linguistics." 4,Naive Bayes and Sentiment Classification,,,,,"Finally, one of the oldest tasks in text classification is assigning a library subject category or topic label to a text. Deciding whether a research paper concerns epidemiology or instead, perhaps, embryology, is an important component of information retrieval. Various sets of subject categories exist, such as the MeSH (Medical Subject Headings) thesaurus. In fact, as we will see, subject category classification is the task for which the naive Bayes algorithm was invented in 1961." 4,Naive Bayes and Sentiment Classification,,,,,"Classification is essential for tasks below the level of the document as well. We've already seen period disambiguation (deciding if a period is the end of a sentence or part of a word), and word tokenization (deciding if a character should be a word boundary). Even language modeling can be viewed as classification: each word can be thought of as a class, and so predicting the next word is classifying the context-so-far into a class for each next word. A part-of-speech tagger (Chapter 8) classifies each occurrence of a word in a sentence as, e.g., a noun or a verb." @@ -399,7 +398,7 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 4,Naive Bayes and Sentiment Classification,,,,,"(d 1 , c 1 ), ...., (d N , c N )." 4,Naive Bayes and Sentiment Classification,,,,,Our goal is to learn a classifier that is capable of mapping from a new document d to its correct class c ∈ C. A probabilistic classifier additionally will tell us the probability of the observation being in the class. This full distribution over the classes can be useful information for downstream decisions; avoiding making discrete decisions early on can be useful when combining systems. 4,Naive Bayes and Sentiment Classification,,,,,"Many kinds of machine learning algorithms are used to build classifiers. This chapter introduces naive Bayes; the following one introduces logistic regression. These exemplify two ways of doing classification. 
Generative classifiers like naive Bayes build a model of how a class could generate some input data. Given an observation, they return the class most likely to have generated the observation. Discriminative classifiers like logistic regression instead learn what features from the input are most useful to discriminate between the different possible classes. While discriminative systems are often more accurate and hence more commonly used, generative classifiers still have a role." -4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"In this section we introduce the multinomial naive Bayes classifier, so called be- cause it is a Bayesian classifier that makes a simplifying (naive) assumption about how the features interact. The intuition of the classifier is shown in Fig. 4 .1. We represent a text document as if it were a bag-of-words, that is, an unordered set of words with their position bag-of-words ignored, keeping only their frequency in the document. In the example in the figure, instead of representing the word order in all the phrases like ""I love this movie"" and ""I would recommend it"", we simply note that the word I occurred 5 times in the entire excerpt, the word it 6 times, the words love, recommend, and movie once, and so on." +4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"In this section we introduce the multinomial naive Bayes classifier, so called because it is a Bayesian classifier that makes a simplifying (naive) assumption about how the features interact. The intuition of the classifier is shown in Fig. 4 .1. We represent a text document as if it were a bag-of-words, that is, an unordered set of words with their position bag-of-words ignored, keeping only their frequency in the document. In the example in the figure, instead of representing the word order in all the phrases like ""I love this movie"" and ""I would recommend it"", we simply note that the word I occurred 5 times in the entire excerpt, the word it 6 times, the words love, recommend, and movie once, and so on." 4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,Figure 4 .1 Intuition of the multinomial naive Bayes classifier applied to a movie review. The position of the words is ignored (the bag of words assumption) and we make use of the frequency of each word. 4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"Naive Bayes is a probabilistic classifier, meaning that for a document d, out of all classes c ∈ C the classifier returns the classĉ which has the maximum posterior probability given the document. In Eq. 4.1 we use the hat notationˆto mean ""our estimate of the correct class""." 4,Naive Bayes and Sentiment Classification,4.1,Naive Bayes Classifiers,,,"This idea of Bayesian inference has been known since the work of Bayes (1763), and was first applied to text classification by Mosteller and Wallace (1964) . The intuition of Bayesian classification is to use Bayes' rule to transform Eq. 4.1 into other probabilities that have some useful properties. Bayes' rule is presented in Eq. 4.2; it gives us a way to break down any conditional probability P(x|y) into three other probabilities:" @@ -448,7 +447,7 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 4,Naive Bayes and Sentiment Classification,4.3,Worked Example,,,"For the test sentence S = ""predictable with no fun"", after removing the word 'with', the chosen class, via Eq. 
4.9, is therefore computed as follows:" 4,Naive Bayes and Sentiment Classification,4.3,Worked Example,,,P(−)P(S|−) = 3 5 × 2 × 2 × 1 34 3 = 6.1 × 10 −5 P(+)P(S|+) = 2 5 × 1 × 1 × 2 29 3 = 3.2 × 10 −5 4,Naive Bayes and Sentiment Classification,4.3,Worked Example,,,The model thus predicts the class negative for the test sentence. -4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"While standard naive Bayes text classification can work well for sentiment analysis, some small changes are generally employed that improve performance. First, for sentiment classification and a number of other text classification tasks, whether a word occurs or not seems to matter more than its frequency. Thus it often improves performance to clip the word counts in each document at 1 (see the end of the chapter for pointers to these results). This variant is called binary 4.4 • OPTIMIZING FOR SENTIMENT ANALYSIS 63 multinomial naive Bayes or binary NB. The variant uses the same Eq. 4.10 except binary NB that for each document we remove all duplicate words before concatenating them into the single big document. Fig. 4 .3 shows an example in which a set of four documents (shortened and text-normalized for this example) are remapped to binary, with the modified counts shown in the table on the right. The example is worked without add-1 smoothing to make the differences clearer. Note that the results counts need not be 1; the word great has a count of 2 even for Binary NB, because it appears in multiple documents." +4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"While standard naive Bayes text classification can work well for sentiment analysis, some small changes are generally employed that improve performance. First, for sentiment classification and a number of other text classification tasks, whether a word occurs or not seems to matter more than its frequency. Thus it often improves performance to clip the word counts in each document at 1 (see the end of the chapter for pointers to these results). This variant is called binary multinomial naive Bayes or binary NB. The variant uses the same Eq. 4.10 except binary NB that for each document we remove all duplicate words before concatenating them into the single big document. Fig. 4 .3 shows an example in which a set of four documents (shortened and text-normalized for this example) are remapped to binary, with the modified counts shown in the table on the right. The example is worked without add-1 smoothing to make the differences clearer. Note that the results counts need not be 1; the word great has a count of 2 even for Binary NB, because it appears in multiple documents." 4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"− it was pathetic the worst part was the boxing scenes − no plot twists or great scenes + and satire and great plot twists + great scenes great film After per-document binarization: A second important addition commonly made when doing text classification for sentiment is to deal with negation. Consider the difference between I really like this movie (positive) and I didn't like this movie (negative). The negation expressed by didn't completely alters the inferences we draw from the predicate like. Similarly, negation can modify a negative word to produce a positive review (don't dismiss this film, doesn't let us get bored)." 
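To make the per-document binarization step concrete before moving on to negation, here is a minimal sketch using the four shortened documents of Fig. 4.3 listed above (a sketch, not the text's code):

from collections import Counter

docs = [  # (class label, text): the four mini-documents from Fig. 4.3
    ("-", "it was pathetic the worst part was the boxing scenes"),
    ("-", "no plot twists or great scenes"),
    ("+", "and satire and great plot twists"),
    ("+", "great scenes great film"),
]

raw_counts = Counter()
binary_counts = Counter()
for label, text in docs:
    words = text.split()
    raw_counts.update((label, w) for w in words)
    binary_counts.update((label, w) for w in set(words))  # clip counts at 1 per document

print(raw_counts[("+", "great")])     # 3 with ordinary counts
print(binary_counts[("+", "great")])  # 2 after binarization: great appears in two + documents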
4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"A very simple baseline that is commonly used in sentiment analysis to deal with negation is the following: during text normalization, prepend the prefix NOT to every word after a token of logical negation (n't, not, no, never) until the next punctuation mark. Thus the phrase didn't like this movie , but I becomes didn't NOT_like NOT_this NOT_movie , but I Newly formed 'words' like NOT like, NOT recommend will thus occur more often in negative document and act as cues for negative sentiment, while words like NOT bored, NOT dismiss will acquire positive associations. We will return in Chapter 16 to the use of parsing to deal more accurately with the scope relationship between these negation words and the predicates they modify, but this simple baseline works quite well in practice." 4,Naive Bayes and Sentiment Classification,4.4,Optimizing for Sentiment Analysis,,,"Finally, in some situations we might have insufficient labeled training data to train accurate naive Bayes classifiers using all words in the training set to estimate positive and negative sentiment. In such cases we can instead derive the positive and negative word features from sentiment lexicons, lists of words that are pre-sentiment lexicons annotated with positive or negative sentiment. Four popular lexicons are the General Inquirer (Stone et al., 1966) , LIWC (Pennebaker et al., 2007) , the opinion lexicon" @@ -513,8 +512,7 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,δ (x * (i) ) > 2δ (x) 4,Naive Bayes and Sentiment Classification,4.9,Statistical Significance Testing,4.9.1,The Paired Bootstrap Test,. This percentage then acts as a one-sided empirical p-value 4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"It is important to avoid harms that may result from classifiers, harms that exist both for naive Bayes classifiers and for the other classification algorithms we introduce in later chapters." -4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"One class of harms is representational harms (Crawford 2017, Blodgett et al." -4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"representational harms 2020), harms caused by a system that demeans a social group, for example by perpetuating negative stereotypes about them. For example Kiritchenko and Mohammad (2018) examined the performance of 200 sentiment analysis systems on pairs of sentences that were identical except for containing either a common African American first name (like Shaniqua) or a common European American first name (like Stephanie), chosen from the Caliskan et al. (2017) study discussed in Chapter 6. They found that most systems assigned lower sentiment and more negative emotion to sentences with African American names, reflecting and perpetuating stereotypes that associate African Americans with negative emotions (Popp et al., 2003) . In other tasks classifiers may lead to both representational harms and other harms, such as censorship. For example the important text classification task of toxicity detection is the task of detecting hate speech, abuse, harassment, or other toxicity detection kinds of toxic language. While the goal of such classifiers is to help reduce societal harm, toxicity classifiers can themselves cause harms. 
For example, researchers have shown that some widely used toxicity classifiers incorrectly flag as being toxic sentences that are non-toxic but simply mention minority identities like women (Park et al., 2018), blind people (Hutchinson et al., 2020) or gay people (Dixon et al., 2018), or simply use linguistic features characteristic of varieties like African-American Vernacular English (Sap et al. 2019 , Davidson et al. 2019 . Such false positive errors, if employed by toxicity detection systems without human oversight, could lead to the censoring of discourse by or about these groups." +4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"One class of harms is representational harms (Crawford 2017, Blodgett et al. 2020), harms caused by a system that demeans a social group, for example by perpetuating negative stereotypes about them. For example Kiritchenko and Mohammad (2018) examined the performance of 200 sentiment analysis systems on pairs of sentences that were identical except for containing either a common African American first name (like Shaniqua) or a common European American first name (like Stephanie), chosen from the Caliskan et al. (2017) study discussed in Chapter 6. They found that most systems assigned lower sentiment and more negative emotion to sentences with African American names, reflecting and perpetuating stereotypes that associate African Americans with negative emotions (Popp et al., 2003). In other tasks classifiers may lead to both representational harms and other harms, such as censorship. For example the important text classification task of toxicity detection is the task of detecting hate speech, abuse, harassment, or other kinds of toxic language. While the goal of such classifiers is to help reduce societal harm, toxicity classifiers can themselves cause harms. For example, researchers have shown that some widely used toxicity classifiers incorrectly flag as being toxic sentences that are non-toxic but simply mention minority identities like women (Park et al., 2018), blind people (Hutchinson et al., 2020) or gay people (Dixon et al., 2018), or simply use linguistic features characteristic of varieties like African-American Vernacular English (Sap et al. 2019, Davidson et al. 2019). Such false positive errors, if employed by toxicity detection systems without human oversight, could lead to the censoring of discourse by or about these groups." 4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"These model problems can be caused by biases or other problems in the training data; in general, machine learning systems replicate and even amplify the biases in their training data. But these problems can also be caused by the labels (for example due to biases in the human labelers), by the resources used (like lexicons, or model components like pretrained embeddings), or even by model architecture (like what the model is trained to optimize). While the mitigation of these biases (for example by carefully considering the training data sources) is an important area of research, we currently don't have general solutions. For this reason it's important, when introducing any NLP model, to study these kinds of factors and make them clear. One way to do this is by releasing a model card (Mitchell et al., 2019) for each version of a model. 
A model card documents a machine learning model with information like:" 4,Naive Bayes and Sentiment Classification,4.10,Avoiding Harms in Classification,,,"• training algorithms and parameters • training data sources, motivation, and preprocessing • evaluation data sources, motivation, and preprocessing • intended use and users • model performance across different demographic or other groups and environmental situations" 4,Naive Bayes and Sentiment Classification,4.11,Summary,,,This chapter introduced the naive Bayes model for classification and applied it to the text categorization task of sentiment analysis. @@ -667,7 +665,7 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 5,Logistic Regression,5.5,Regularization,,,"which in log space, with µ = 0, and assuming 2σ 2 = 1, corresponds tô" 5,Logistic Regression,5.5,Regularization,,,θ = argmax θ m i=1 log P(y (i) |x (i) ) − α n j=1 θ 2 j (5.29) 5,Logistic Regression,5.5,Regularization,,,which is in the same form as Eq. 5.24. -5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"Sometimes we need more than two classes. Perhaps we might want to do 3-way sentiment classification (positive, negative, or neutral). Or we could be assigning some of the labels we will introduce in Chapter 8, like the part of speech of a word (choosing from 10, 30, or even 50 different parts of speech), or the named entity type of a phrase (choosing from tags like person, location, organization). In such cases we use multinomial logistic regression, also called softmax re-multinomial logistic regression gression (or, historically, the maxent classifier). In multinomial logistic regression the target y is a variable that ranges over more than two classes; we want to know the probability of y being in each potential class c ∈ C, p(y = c|x)." +5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"Sometimes we need more than two classes. Perhaps we might want to do 3-way sentiment classification (positive, negative, or neutral). Or we could be assigning some of the labels we will introduce in Chapter 8, like the part of speech of a word (choosing from 10, 30, or even 50 different parts of speech), or the named entity type of a phrase (choosing from tags like person, location, organization). In such cases we use multinomial logistic regression, also called softmax regression (or, historically, the maxent classifier). In multinomial logistic regression the target y is a variable that ranges over more than two classes; we want to know the probability of y being in each potential class c ∈ C, p(y = c|x)." 5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"The multinomial logistic classifier uses a generalization of the sigmoid, called the softmax function, to compute the probability p(y = c|x). The softmax function softmax takes a vector z = [z 1 , z 2 , ..., z k ] of k arbitrary values and maps them to a probability distribution, with each value in the range (0,1), and all the values summing to 1. Like the sigmoid, it is an exponential function." 
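As a quick concrete illustration of the mapping just described (the formal definition of the softmax follows in the next passage), here is a minimal Python sketch; the three-element score vector z is made up purely for illustration:

    import math

    def softmax(z):
        # Subtracting the max is a standard numerical-stability trick; it does not
        # change the result, because softmax is invariant to adding a constant to z.
        m = max(z)
        exps = [math.exp(zi - m) for zi in z]
        total = sum(exps)
        return [e / total for e in exps]

    z = [2.0, 1.0, 0.1]      # hypothetical scores for three classes
    print(softmax(z))        # ≈ [0.659, 0.242, 0.099]: values in (0,1) that sum to 1

The exponential lets the largest score dominate, which is what turns arbitrary scores into a usable probability distribution over the classes.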
5,Logistic Regression,5.6,Multinomial Logistic Regression,,,"For a vector z of dimensionality k, the softmax is defined as:" 5,Logistic Regression,5.6,Multinomial Logistic Regression,,,softmax(z i ) = exp (z i ) k j=1 exp (z j ) 1 ≤ i ≤ k (5.30) @@ -726,16 +724,14 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,Lemmas and Senses Let's start by looking at how one word (we'll choose mouse) might be defined in a dictionary (simplified from the online dictionary WordNet): mouse (N) 1. any of numerous small rodents... 2. a hand-operated device that controls a cursor... 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Here the form mouse is the lemma, also called the citation form. The form lemma citation form mouse would also be the lemma for the word mice; dictionaries don't have separate definitions for inflected forms like mice. Similarly sing is the lemma for sing, sang, sung. In many languages the infinitive form is used as the lemma for the verb, so Spanish dormir ""to sleep"" is the lemma for duermes ""you sleep"". The specific forms sung or carpets or sing or duermes are called wordforms." 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"wordform As the example above shows, each lemma can have multiple meanings; the lemma mouse can refer to the rodent or the cursor control device. We call each of these aspects of the meaning of mouse a word sense. The fact that lemmas can be polysemous (have multiple senses) can make interpretation difficult (is someone who types ""mouse info"" into a search engine looking for a pet or a tool?). Chapter 18 will discuss the problem of polysemy, and introduce word sense disambiguation, the task of determining which sense of a word is being used in a particular context." -6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Synonymy One important component of word meaning is the relationship between word senses. For example when one word has a sense whose meaning is identical to a sense of another word, or nearly identical, we say the two senses of those two words are synonyms. Synonyms include such pairs as synonym 6.1 • LEXICAL SEMANTICS 99 couch/sofa vomit/throw up filbert/hazelnut car/automobile A more formal definition of synonymy (between words rather than senses) is that two words are synonymous if they are substitutable for one another in any sentence without changing the truth conditions of the sentence, the situations in which the sentence would be true. We often say in this case that the two words have the same propositional meaning." -6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,propositional meaning +6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Synonymy One important component of word meaning is the relationship between word senses. For example when one word has a sense whose meaning is identical to a sense of another word, or nearly identical, we say the two senses of those two words are synonyms. Synonyms include such pairs as couch/sofa vomit/throw up filbert/hazelnut car/automobile A more formal definition of synonymy (between words rather than senses) is that two words are synonymous if they are substitutable for one another in any sentence without changing the truth conditions of the sentence, the situations in which the sentence would be true. We often say in this case that the two words have the same propositional meaning." 
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"While substitutions between some pairs of words like car / automobile or water / H 2 O are truth preserving, the words are still not identical in meaning. Indeed, probably no two words are absolutely identical in meaning. One of the fundamental tenets of semantics, called the principle of contrast (Girard 1718, Bréal 1897, Clark principle of contrast 1987), states that a difference in linguistic form is always associated with some difference in meaning. For example, the word H 2 O is used in scientific contexts and would be inappropriate in a hiking guide-water would be more appropriate-and this genre difference is part of the meaning of the word. In practice, the word synonym is therefore used to describe a relationship of approximate or rough synonymy." 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Word Similarity While words don't have many synonyms, most words do have lots of similar words. Cat is not a synonym of dog, but cats and dogs are certainly similar words. In moving from synonymy to similarity, it will be useful to shift from talking about relations between word senses (like synonymy) to relations between words (like similarity). Dealing with words avoids having to commit to a particular representation of word senses, which will turn out to simplify our task. The notion of word similarity is very useful in larger semantic tasks. Knowing similarity how similar two words are can help in computing how similar the meaning of two phrases or sentences are, a very important component of tasks like question answering, paraphrasing, and summarization. One way of getting values for word similarity is to ask humans to judge how similar one word is to another. A number of datasets have resulted from such experiments. For example the SimLex-999 dataset (Hill et al., 2015) gives values on a scale from 0 to 10, like the examples below, which range from near-synonyms (vanish, disappear) to pairs that scarcely seem to have anything in common (hole, agreement): Consider the meanings of the words coffee and cup. Coffee is not similar to cup; they share practically no features (coffee is a plant or a beverage, while a cup is a manufactured object with a particular shape). But coffee and cup are clearly related; they are associated by co-participating in an everyday event (the event of drinking coffee out of a cup). Similarly scalpel and surgeon are not similar but are related eventively (a surgeon tends to make use of a scalpel)." -6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,vanish 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"One common kind of relatedness between words is if they belong to the same semantic field. A semantic field is a set of words which cover a particular semantic semantic field domain and bear structured relations with each other. For example, words might be related by being in the semantic field of hospitals (surgeon, scalpel, nurse, anesthetic, hospital), restaurants (waiter, menu, plate, food, chef), or houses (door, roof, kitchen, family, bed). Semantic fields are also related to topic models, like Latent topic models Dirichlet Allocation, LDA, which apply unsupervised learning on large sets of texts to induce sets of associated words from text. Semantic fields and topic models are very useful tools for discovering topical structure in documents." 
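As a hedged illustration of the topic-model idea just mentioned (unsupervised induction of sets of associated words), the following minimal Python sketch uses scikit-learn's LatentDirichletAllocation on a tiny made-up corpus; the documents, the choice of two topics, and the printed word lists are purely illustrative, and with realistic amounts of text the induced sets would look more like the hospital and restaurant semantic fields above.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "the surgeon used a scalpel while the nurse prepared the anesthetic",
        "the hospital hired a new surgeon and several nurses",
        "the waiter brought the menu and the chef plated the food",
        "the restaurant menu listed every plate the chef could cook",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)                # document-term count matrix
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    vocab = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):       # one weight vector per induced topic
        top = [vocab[i] for i in topic.argsort()[-5:][::-1]]
        print("topic", k, ":", top)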
6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"In Chapter 18 we'll introduce more relations between senses like hypernymy or IS-A, antonymy (opposites) and meronymy (part-whole relations)." 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Semantic Frames and Roles Closely related to semantic fields is the idea of a semantic frame. A semantic frame is a set of words that denote perspectives or semantic frame participants in a particular type of event. A commercial transaction, for example, is a kind of event in which one entity trades money to another entity in return for some good or service, after which the good changes hands or perhaps the service is performed. This event can be encoded lexically by using verbs like buy (the event from the perspective of the buyer), sell (from the perspective of the seller), pay (focusing on the monetary aspect), or nouns like buyer. Frames have semantic roles (like buyer, seller, goods, money), and words in a sentence can take on these roles." 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Knowing that buy and sell have this relation makes it possible for a system to know that a sentence like Sam bought the book from Ling could be paraphrased as Ling sold the book to Sam, and that Sam has the role of the buyer in the frame and Ling the seller. Being able to recognize such paraphrases is important for question answering, and can help in shifting perspective for machine translation." -6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Connotation Finally, words have affective meanings or connotations. The word connotations connotation has different meanings in different fields, but here we use it to mean the aspects of a word's meaning that are related to a writer or reader's emotions, sentiment, opinions, or evaluations. For example some words have positive connotations (happy) while others have negative connotations (sad). Even words whose meanings are similar in other ways can vary in connotation; consider the difference in connotations between fake, knockoff, forgery, on the one hand, and copy, replica, reproduction on the other, or innocent (positive connotation) and naive (negative connotation). Some words describe positive evaluation (great, love) and others negative evaluation (terrible, hate). Positive or negative evaluation language is called sentiment, as we saw in Chapter 4, and word sentiment plays a role in important sentiment tasks like sentiment analysis, stance detection, and applications of NLP to the language of politics and consumer reviews." +6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Connotation Finally, words have affective meanings or connotations. The word connotations has different meanings in different fields, but here we use it to mean the aspects of a word's meaning that are related to a writer or reader's emotions, sentiment, opinions, or evaluations. For example some words have positive connotations (happy) while others have negative connotations (sad). Even words whose meanings are similar in other ways can vary in connotation; consider the difference in connotations between fake, knockoff, forgery, on the one hand, and copy, replica, reproduction on the other, or innocent (positive connotation) and naive (negative connotation). Some words describe positive evaluation (great, love) and others negative evaluation (terrible, hate). 
Positive or negative evaluation language is called sentiment, as we saw in Chapter 4, and word sentiment plays a role in important sentiment tasks like sentiment analysis, stance detection, and applications of NLP to the language of politics and consumer reviews." 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Early work on affective meaning (Osgood et al., 1957) found that words varied along three important dimensions of affective meaning:" 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"valence: the pleasantness of the stimulus arousal: the intensity of emotion provoked by the stimulus dominance: the degree of control exerted by the stimulus Thus words like happy or satisfied are high on valence, while unhappy or annoyed are low on valence. Excited is high on arousal, while calm is low on arousal." 6,Vector Semantics and Embeddings,6.1,Lexical Semantics,,,"Controlling is high on dominance, while awed or influenced are low on dominance. Each word is thus represented by three numbers, corresponding to its value on each of the three dimensions: (1957) noticed that in using these 3 numbers to represent the meaning of a word, the model was representing each word as a point in a threedimensional space, a vector whose three dimensions corresponded to the word's rating on the three scales. This revolutionary idea that word meaning could be represented as a point in space (e.g., that part of the meaning of heartbreak can be represented as the point [2.45, 5.65, 3.58]) was the first expression of the vector semantics models that we introduce next." @@ -1112,13 +1108,12 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"A particle resembles a preposition or an adverb and is used in combination with particle a verb. Particles often have extended meanings that aren't quite the same as the prepositions they resemble, as in the particle over in she turned the paper over. A verb and a particle acting as a single unit is called a phrasal verb. The meaning phrasal verb of phrasal verbs is often non-compositional-not predictable from the individual meanings of the verb and the particle. Thus, turn down means 'reject', rule out 'eliminate', and go on 'continue'. Determiners like this and that (this chapter, that page) can mark the start of an determiner English noun phrase. Articles like a, an, and the, are a type of determiner that mark article discourse properties of the noun and are quite frequent; the is the most common word in written English, with a and an right behind." 8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"Conjunctions join two phrases, clauses, or sentences. Coordinating conjuncconjunction tions like and, or, and but join two elements of equal status. Subordinating conjunctions are used when one of the elements has some embedded status. For example, the subordinating conjunction that in ""I thought that you might like some milk"" links the main clause I thought with the subordinate clause you might like some milk. This clause is called subordinate because this entire clause is the ""content"" of the main verb thought. Subordinating conjunctions like that which link a verb to its argument in this way are also called complementizers." 
8,Sequence Labeling for Parts of Speech and Named Entities,8.1,(Mostly) English Word Classes,,,"complementizer Pronouns act as a shorthand for referring to an entity or event. Personal propronoun nouns refer to persons or entities (you, she, I, it, me, etc.). Possessive pronouns are forms of personal pronouns that indicate either actual possession or more often just an abstract relation between the person and some object (my, your, his, her, its, one's, our, their). Wh-pronouns (what, who, whom, whoever) are used in certain wh question forms, or act as complementizers (Frida, who married Diego. . . ). Auxiliary verbs mark semantic features of a main verb such as its tense, whether auxiliary it is completed (aspect), whether it is negated (polarity), and whether an action is necessary, possible, suggested, or desired (mood). English auxiliaries include the copula verb be, the two verbs do and have, forms, as well as modal verbs used to copula modal mark the mood associated with the event depicted by the main verb: can indicates ability or possibility, may permission or possibility, must necessity. An English-specific tagset, the 45-tag Penn Treebank tagset (Marcus et al., 1993) , shown in Fig. 8.2 , has been used to label many syntactically annotated corpora like the Penn Treebank corpora, so is worth knowing about. Below we show some examples with each word tagged according to both the UD and Penn tagsets. Notice that the Penn tagset distinguishes tense and participles on verbs, and has a special tag for the existential there construction in English. Note that since New England Journal of Medicine is a proper noun, both tagsets mark its component nouns as NNP, including journal and medicine, which might otherwise be labeled as common nouns (NOUN/NN)." -8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,"Part-of-speech tagging is the process of assigning a part-of-speech to each word in part-of-speech tagging a text. The input is a sequence x 1 , x 2 , ..., x n of (tokenized) words and a tagset, and the output is a sequence y 1 , y 2 , ..., y n of tags, each output y i corresponding exactly to one input x i , as shown in the intuition in Fig. 8.3 . Tagging is a disambiguation task; words are ambiguous -have more than one ambiguous possible part-of-speech-and the goal is to find the correct tag for the situation. For example, book can be a verb (book that flight) or a noun (hand me that book). That can be a determiner (Does that flight serve dinner) or a complementizer (I We'll introduce algorithms for the task in the next few sections, but first let's explore the task. Exactly how hard is it? Fig. 8.4 shows that most word types (85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the ambiguous words, though accounting for only 14-15% of the vocabulary, are very common, and 55-67% of word tokens in running text are ambiguous. Particularly ambiguous common words include that, back, down, put and set; here are some examples of the 6 different parts of speech for the word back: earnings growth took a back/JJ seat a small building in the back/NN a clear majority of senators back/VBP the bill Dave began to back/VB toward the door enable the country to buy back/RP debt I was twenty-one back/RB then Nonetheless, many words are easy to disambiguate, because their different tags aren't equally likely. For example, a can be a determiner or the letter a, but the determiner sense is much more likely." 
+8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,"Part-of-speech tagging is the process of assigning a part-of-speech to each word in a text. The input is a sequence x 1 , x 2 , ..., x n of (tokenized) words and a tagset, and the output is a sequence y 1 , y 2 , ..., y n of tags, each output y i corresponding exactly to one input x i , as shown in the intuition in Fig. 8.3 . Tagging is a disambiguation task; words are ambiguous -have more than one ambiguous possible part-of-speech-and the goal is to find the correct tag for the situation. For example, book can be a verb (book that flight) or a noun (hand me that book). That can be a determiner (Does that flight serve dinner) or a complementizer (I We'll introduce algorithms for the task in the next few sections, but first let's explore the task. Exactly how hard is it? Fig. 8.4 shows that most word types (85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the ambiguous words, though accounting for only 14-15% of the vocabulary, are very common, and 55-67% of word tokens in running text are ambiguous. Particularly ambiguous common words include that, back, down, put and set; here are some examples of the 6 different parts of speech for the word back: earnings growth took a back/JJ seat a small building in the back/NN a clear majority of senators back/VBP the bill Dave began to back/VB toward the door enable the country to buy back/RP debt I was twenty-one back/RB then Nonetheless, many words are easy to disambiguate, because their different tags aren't equally likely. For example, a can be a determiner or the letter a, but the determiner sense is much more likely." 8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,"This idea suggests a useful baseline: given an ambiguous word, choose the tag which is most frequent in the training corpus. This is a key concept:" 8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,Most Frequent Class Baseline: Always compare a classifier against a baseline at least as good as the most frequent class baseline (assigning each token to the class it occurred in most often in the training set). 8,Sequence Labeling for Parts of Speech and Named Entities,8.2,Part-of-Speech Tagging,,,The most-frequent-tag baseline has an accuracy of about 92% 1 . The baseline thus differs from the state-of-the-art and human ceiling (97%) by only 5%. 8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"Part of speech tagging can tell us that words like Janet, Stanford University, and Colorado are all proper nouns; being a proper noun is a grammatical property of these words. But viewed from a semantic perspective, these proper nouns refer to different kinds of entities: Janet is a person, Stanford University is an organization,.. and Colorado is a location." -8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"A named entity is, roughly speaking, anything that can be referred to with a named entity" -8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"proper name: a person, a location, an organization. The text contains 13 mentions of named entities including 5 organizations, 4 locations, 2 times, 1 person, and 1 mention of money. Figure 8 .5 shows typical generic named entity types. 
Many applications will also need to use specific entity types like proteins, genes, commercial products, or works of art." +8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"A named entity is, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization. The text contains 13 mentions of named entities including 5 organizations, 4 locations, 2 times, 1 person, and 1 mention of money. Figure 8 .5 shows typical generic named entity types. Many applications will also need to use specific entity types like proteins, genes, commercial products, or works of art." 8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"Palo Alto is raising the fees for parking. Named entity tagging is a useful first step in lots of natural language processing tasks. In sentiment analysis we might want to know a consumer's sentiment toward a particular entity. Entities are a useful first stage in question answering, or for linking text to information in structured knowledge sources like Wikipedia. And named entity tagging is also central to tasks involving building semantic representations, like extracting events and the relationship between participants." 8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"Unlike part-of-speech tagging, where there is no segmentation problem since each word gets one tag, the task of named entity recognition is to find and label spans of text, and is difficult partly because of the ambiguity of segmentation; we" 8,Sequence Labeling for Parts of Speech and Named Entities,8.3,Named Entities and Named Entity Tagging,,,"need to decide what's an entity and what isn't, and where the boundaries are. Indeed, most words in a text will not be named entities. Another difficulty is caused by type ambiguity. The mention JFK can refer to a person, the airport in New York, or any number of schools, bridges, and streets around the United States. Some examples of this kind of cross-type confusion are given in Figure 8 The standard approach to sequence labeling for a span-recognition problem like NER is BIO tagging (Ramshaw and Marcus, 1995) . This is a method that allows us to treat NER like a word-by-word sequence labeling task, via tags that capture both the boundary and the named entity type. Consider the following sentence: variants called IO tagging and BIOES tagging. In BIO tagging we label any token that begins a span of interest with the label B, tokens that occur inside a span are tagged with an I, and any tokens outside of any span of interest are labeled O. While there is only one O tag, we'll have distinct B and I tags for each named entity class. The number of tags is thus 2n + 1 tags, where n is the number of entity types. BIO tagging can represent exactly the same information as the bracketed notation, but has the advantage that we can represent the task in the same simple sequence modeling way as part-of-speech tagging: assigning a single label y i to each input word x i : We've also shown two variant tagging schemes: IO tagging, which loses some information by eliminating the B tag, and BIOES tagging, which adds an end tag E for the end of a span, and a span tag S for a span consisting of only one word. A sequence labeler (HMM, CRF, RNN, Transformer, etc.) 
is trained to label each token in a text with tags that indicate the presence (or absence) of particular kinds of named entities."
@@ -1310,20 +1305,6 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"In the case of language modeling, the correct distribution y t comes from knowing the next word. This is represented as a one-hot vector corresponding to the vocabulary where the entry for the actual next word is 1, and all the other entries are 0. Thus, the cross-entropy loss for language modeling is determined by the probability the model assigns to the correct next word. So at time t the CE loss is the negative log probability the model assigns to the next word in the training sequence."
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"L CE (ŷ t , y t ) = − log ŷ t [w t+1 ] (9.13)"
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"Thus at each word position t of the input, the model takes as input the correct sequence of tokens w 1:t , and uses them to compute a probability distribution over possible next words so as to compute the model's loss for the next token w t+1 . Then we move to the next word: we ignore what the model predicted for the next word and instead use the correct sequence of tokens w 1:t+1 to estimate the probability of token w t+2 . This idea that we always give the model the correct history sequence to"
+9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"predict the next word (rather than feeding the model its best case from the previous time step) is called teacher forcing."
+9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"Loss = 1/T ∑_{t=1}^{T} L CE (ŷ t , y t ) (Fig. 9.6)"
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,The weights in the network are adjusted to minimize the average CE loss over the training sequence via gradient descent. Fig. 9.6 illustrates this training regimen.
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"Careful readers may have noticed that the input embedding matrix E and the final layer matrix V, which feeds the output softmax, are quite similar. The rows of E represent the word embeddings for each word in the vocabulary learned during the training process with the goal that words that have similar meaning and function will have similar embeddings. And, since the length of these embeddings corresponds to the size of the hidden layer d h , the shape of the embedding matrix E is |V | × d h ."
9,Deep Learning Architectures for Sequence Processing,9.3,RNNs as Language Models,,,"The final layer matrix V provides a way to score the likelihood of each word in the vocabulary given the evidence present in the final hidden layer of the network through the calculation of Vh. This entails that it also has the dimensionality |V | × d h . That is, the rows of V provide a second set of learned word embeddings that capture relevant aspects of word meaning and function. This leads to an obvious question: is it even necessary to have both? Weight tying is a method that dispenses with this redundancy and uses a single set of embeddings at the input and softmax layers. That is, E = V. 
To do this, we set the dimensionality of the final hidden layer to be the same d h , (or add an additional projection layer to do the same thing), and simply use the same matrix for both layers. In addition to providing improved perplexity results, this approach significantly reduces the number of parameters required for the model." @@ -1359,8 +1340,8 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"Assigning a high probability to was following airline is straightforward since airline provides a strong local context for the singular agreement. However, assigning an appropriate probability to were is quite difficult, not only because the plural flights is quite distant, but also because the intervening context involves singular constituents. Ideally, a network should be able to retain the distant information about plural flights until it is needed, while still processing the intermediate parts of the sequence correctly. One reason for the inability of RNNs to carry forward critical information is that the hidden layers, and, by extension, the weights that determine the values in the hidden layer, are being asked to perform two tasks simultaneously: provide information useful for the current decision, and updating and carrying forward information required for future decisions." 9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"A second difficulty with training RNNs arises from the need to backpropagate the error signal back through time. Recall from Section 9.2.2 that the hidden layer at time t contributes to the loss at the next time step since it takes part in that calculation. As a result, during the backward pass of training, the hidden layers are subject to repeated multiplications, as determined by the length of the sequence. A frequent result of this process is that the gradients are eventually driven to zero, a situation called the vanishing gradients problem." 9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"To address these issues, more complex network architectures have been designed to explicitly manage the task of maintaining relevant context over time, by enabling the network to learn to forget information that is no longer needed and to remember information required for decisions still to come." -9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"The most commonly used such extension to RNNs is the Long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997) . LSTMs divide Long short-term memory the context management problem into two sub-problems: removing information no longer needed from the context, and adding information likely to be needed for later decision making. The key to solving both problems is to learn how to manage this context rather than hard-coding a strategy into the architecture. LSTMs accomplish this by first adding an explicit context layer to the architecture (in addition to the usual recurrent hidden layer), and through the use of specialized neural units that make use of gates to control the flow of information into and out of the units that comprise the network layers. These gates are implemented through the use of additional weights that operate sequentially on the input, and previous hidden layer, and previous context layers. 
The gates in an LSTM share a common design pattern; each consists of a feedforward layer, followed by a sigmoid activation function, followed by a pointwise multiplication with the layer being gated. The choice of the sigmoid as the activation function arises from its tendency to push its outputs to either 0 or 1. Combining this with a pointwise multiplication has an effect similar to that of a binary mask. Values in the layer being gated that align with values near 1 in the mask are passed through nearly unchanged; values corresponding to lower values are essentially erased." -9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"The first gate we'll consider is the forget gate. The purpose of this gate to delete forget gate information from the context that is no longer needed. The forget gate computes a weighted sum of the previous state's hidden layer and the current input and passes that through a sigmoid. This mask is then multiplied element-wise by the context vector to remove the information from context that is no longer required. Elementwise multiplication of two vectors (represented by the operator , and sometimes called the Hadamard product) is the vector of the same dimension as the two input vectors, where each element i is the product of element i in the two input vectors:" +9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"The most commonly used such extension to RNNs is the Long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997) . LSTMs divide the context management problem into two sub-problems: removing information no longer needed from the context, and adding information likely to be needed for later decision making. The key to solving both problems is to learn how to manage this context rather than hard-coding a strategy into the architecture. LSTMs accomplish this by first adding an explicit context layer to the architecture (in addition to the usual recurrent hidden layer), and through the use of specialized neural units that make use of gates to control the flow of information into and out of the units that comprise the network layers. These gates are implemented through the use of additional weights that operate sequentially on the input, and previous hidden layer, and previous context layers. The gates in an LSTM share a common design pattern; each consists of a feedforward layer, followed by a sigmoid activation function, followed by a pointwise multiplication with the layer being gated. The choice of the sigmoid as the activation function arises from its tendency to push its outputs to either 0 or 1. Combining this with a pointwise multiplication has an effect similar to that of a binary mask. Values in the layer being gated that align with values near 1 in the mask are passed through nearly unchanged; values corresponding to lower values are essentially erased." +9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,"The first gate we'll consider is the forget gate. The purpose of this gate to delete information from the context that is no longer needed. The forget gate computes a weighted sum of the previous state's hidden layer and the current input and passes that through a sigmoid. This mask is then multiplied element-wise by the context vector to remove the information from context that is no longer required. 
Elementwise multiplication of two vectors (represented by the operator , and sometimes called the Hadamard product) is the vector of the same dimension as the two input vectors, where each element i is the product of element i in the two input vectors:" 9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,f t = σ (U f h t−1 + W f x t ) (9.19) k t = c t−1 f t (9.20) 9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,The next task is compute the actual information we need to extract from the previous hidden state and current inputs -the same basic computation we've been using for all our recurrent networks. 9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,,,g t = tanh(U g h t−1 + W g x t ) (9.21) @@ -1374,14 +1355,8 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,9.6.1,"Gated Units, Layers and Networks",The increased complexity of the LSTM units is encapsulated within the unit itself. The only additional external complexity for the LSTM over the basic recurrent unit (b) is the presence of the additional context vector as an input and output. 9,Deep Learning Architectures for Sequence Processing,9.6,The LSTM,9.6.1,"Gated Units, Layers and Networks","This modularity is key to the power and widespread applicability of LSTM units. LSTM units (or other varieties, like GRUs) can be substituted into any of the network architectures described in Section 9.5. And, as with simple RNNs, multi-layered networks making use of gated units can be unrolled into deep feedforward networks and trained in the usual fashion with backpropagation." 9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"While the addition of gates allows LSTMs to handle more distant information than RNNs, they don't completely solve the underlying problem: passing information through an extended series of recurrent connections leads to information loss and difficulties in training. Moreover, the inherently sequential nature of recurrent networks makes it hard to do computation in parallel. These considerations led to the development of transformers -an approach to sequence processing that eliminates transformers recurrent connections and returns to architectures reminiscent of the fully connected networks described earlier in Chapter 7. Transformers map sequences of input vectors (x 1 , ..., x n ) to sequences of output vectors (y 1 , ..., y n ) of the same length. Transformers are made up of stacks of transformer blocks, which are multilayer networks made by combining simple linear layers, feedforward networks, and self-attention layers, they key innovation of self-attention transformers. Self-attention allows a network to directly extract and use information from arbitrarily large contexts without the need to pass it through intermediate recurrent connections as in RNNs. We'll start by describing how self-attention works and then return to how it fits into larger transformer blocks. Fig. 9 .15 illustrates the flow of information in a single causal, or backward looking, self-attention layer. As with the overall transformer, a self-attention layer maps input sequences (x 1 , ..., x n ) to output sequences of the same length (y 1 , ..., y n ). When processing each item in the input, the model has access to all of the inputs up to and including the one under consideration, but no access to information about inputs beyond the current one. 
In addition, the computation performed for each item is independent of all the other computations. The first point ensures that we can use this approach to create language models and use them for autoregressive generation, and the second point means that we can easily parallelize both forward inference and training of such models." -9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,At the core of an attention-based approach is the ability to compare an item of 9.7 • SELF-ATTENTION NETWORKS: TRANSFORMERS 195 -9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,Self-Attention Layer -9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,x 1 y 1 -9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,x 2 y 2 y 3 y 4 y 5 -9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,x 3 -9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,x 4 -9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"x 5 Figure 9 .15 Information flow in a causal (or masked) self-attention model. In processing each element of the sequence, the model attends to all the inputs up to, and including, the current one. Unlike RNNs, the computations at each time step are independent of all the other steps and therefore can be performed in parallel." -9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"interest to a collection of other items in a way that reveals their relevance in the current context. In the case of self-attention, the set of comparisons are to other elements within a given sequence. The result of these comparisons is then used to compute an output for the current input. For example, returning to Fig. 9 .15, the computation of y 3 is based on a set of comparisons between the input x 3 and its preceding elements x 1 and x 2 , and to x 3 itself. The simplest form of comparison between elements in a self-attention layer is a dot product. Let's refer to the result of this comparison as a score (we'll be updating this equation to add attention to the computation of this score):" +9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"Figure 9 .15 Information flow in a causal (or masked) self-attention model. In processing each element of the sequence, the model attends to all the inputs up to, and including, the current one. Unlike RNNs, the computations at each time step are independent of all the other steps and therefore can be performed in parallel." +9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"At the core of an attention-based approach is the ability to compare an item of interest to a collection of other items in a way that reveals their relevance in the current context. In the case of self-attention, the set of comparisons are to other elements within a given sequence. The result of these comparisons is then used to compute an output for the current input. For example, returning to Fig. 9 .15, the computation of y 3 is based on a set of comparisons between the input x 3 and its preceding elements x 1 and x 2 , and to x 3 itself. The simplest form of comparison between elements in a self-attention layer is a dot product. 
Let's refer to the result of this comparison as a score (we'll be updating this equation to add attention to the computation of this score):" 9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"score(x i , x j ) = x i • x j (9.27)" 9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"The result of a dot product is a scalar value ranging from −∞ to ∞, the larger the value the more similar the vectors that are being compared. Continuing with our example, the first step in computing y 3 would be to compute three scores:" 9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,,,"x 3 • x 1 , x 3 • x 2 and x 3 • x 3 ." @@ -1433,7 +1408,7 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 9,Deep Learning Architectures for Sequence Processing,9.7,Self-Attention Networks: Transformers,9.7.3,Modeling Word Order: Positional Embeddings,"word embedding for each input to its corresponding positional embedding. This new embedding serves as the input for further processing. Fig. 9 .20 shows the idea. A potential problem with the simple absolute position embedding approach is that there will be plenty of training examples for the initial positions in our inputs and correspondingly fewer at the outer length limits. These latter embeddings may be poorly trained and may not generalize well during testing. An alternative approach to positional embeddings is to choose a static function that maps integer inputs to realvalued vectors in a way that captures the inherent relationships among the positions. That is, it captures the fact that position 4 in an input is more closely related to position 5 than it is to position 17. A combination of sine and cosine functions with differing frequencies was used in the original transformer work. Developing better position representations is an ongoing research topic." 9,Deep Learning Architectures for Sequence Processing,9.8,Transformers as Language Models,,,"Now that we've seen all the major components of transformers, let's examine how to deploy them as language models via semi-supervised learning. To do this, we'll proceed just as we did with the RNN-based approach: given a training corpus of plain text we'll train a model to predict the next word in a sequence using teacher forcing. Fig. 9 .21 illustrates the general approach. At each step, given all the preceding words, the final transformer layer produces an output distribution over the entire vocabulary. During training, the probability assigned to the correct word is used to calculate the cross-entropy loss for each item in the sequence. As with RNNs, the loss for a training sequence is the average cross-entropy loss over the entire sequence." 9,Deep Learning Architectures for Sequence Processing,9.8,Transformers as Language Models,,,"Linear Layer Figure 9 .21 Training a transformer as a language model. Note the key difference between this figure and the earlier RNN-based version shown in Fig. 9 .6. There the calculation of the outputs and the losses at each step was inherently serial given the recurrence in the calculation of the hidden states. With transformers, each training item can be processed in parallel since the output for each element in the sequence is computed separately. Once trained, we can compute the perplexity of the resulting model, or autoregressively generate novel text just as with RNN-based models." 
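To make the self-attention computation behind this parallelism concrete, here is a minimal NumPy sketch of a single causal self-attention head, along the lines of the dot-product scores, softmax weights, and weighted sums described in Section 9.7; the dimensions, random inputs, and random projection matrices are made up for illustration, and a real transformer block would add multiple heads, residual connections, layer normalization, and feedforward layers.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5, 8                        # sequence length and model dimension (illustrative)
    X = rng.normal(size=(n, d))        # input vectors x_1 ... x_n

    W_q, W_k, W_v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(d, d))
    Q, K, V = X @ W_q, X @ W_k, X @ W_v

    scores = Q @ K.T / np.sqrt(d)      # scaled dot-product comparisons between positions
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)
    scores[mask] = -np.inf             # causal mask: position i cannot attend to j > i

    alpha = np.exp(scores - scores.max(axis=-1, keepdims=True))
    alpha = alpha / alpha.sum(axis=-1, keepdims=True)   # row-wise softmax over visible positions

    Y = alpha @ V                      # each y_i is a weighted sum of the value vectors
    print(Y.shape)                     # (5, 8): one output per input position

Because no output depends on any other output, all n positions can be processed at once during training, which is exactly the property contrasted above with the serial RNN computation.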
-9,Deep Learning Architectures for Sequence Processing,9.8,Transformers as Language Models,,,"A simple variation on autoregressive generation that underlies a number of practical applications uses a prior context to prime the autoregressive generation process. Fig. 9 .22 illustrates this with the task of text completion. Here a standard language 9.9 • CONTEXTUAL GENERATION AND SUMMARIZATION 203 model is given the prefix to some text and is asked to generate a possible completion to it. Note that as the generation process proceeds, the model has direct access to the priming context as well as to all of its own subsequently generated outputs. This ability to incorporate the entirety of the earlier context and generated outputs at each time step is the key to the power of these models. Text summarization is a practical application of context-based autoregressive Text summarization generation. The task is to take a full-length article and produce an effective summary of it. To train a transformer-based autoregressive model to perform this task, we start with a corpus consisting of full-length articles accompanied by their corresponding summaries. Fig. 9 .23 shows an example of this kind of data from a widely used summarization corpus consisting of CNN and Daily Mirror news articles." +9,Deep Learning Architectures for Sequence Processing,9.8,Transformers as Language Models,,,"A simple variation on autoregressive generation that underlies a number of practical applications uses a prior context to prime the autoregressive generation process. Fig. 9 .22 illustrates this with the task of text completion. Here a standard language model is given the prefix to some text and is asked to generate a possible completion to it. Note that as the generation process proceeds, the model has direct access to the priming context as well as to all of its own subsequently generated outputs. This ability to incorporate the entirety of the earlier context and generated outputs at each time step is the key to the power of these models. Text summarization is a practical application of context-based autoregressive Text summarization generation. The task is to take a full-length article and produce an effective summary of it. To train a transformer-based autoregressive model to perform this task, we start with a corpus consisting of full-length articles accompanied by their corresponding summaries. Fig. 9 .23 shows an example of this kind of data from a widely used summarization corpus consisting of CNN and Daily Mirror news articles." 9,Deep Learning Architectures for Sequence Processing,9.9,Contextual Generation and Summarization,,,"A simple but surprisingly effective approach to applying transformers to summarization is to append a summary to each full-length article in a corpus, with a unique marker separating the two. More formally, each article-summary pair (x 1 , ..., x m ), (y 1 , ..., y n ) in a training corpus is converted into a single training instance (x 1 , ..., x m , δ , y 1 , ...y n ) with an overall length of n + m + 1. These training instances are treated as long sentences and then used to train an autoregressive language model using teacher forcing, exactly as we did earlier." 9,Deep Learning Architectures for Sequence Processing,9.9,Contextual Generation and Summarization,,,"Once trained, full articles ending with the special marker are used as the context to prime the generation process to produce a summary as illustrated in Fig. 9 .24. 
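A minimal sketch (not from the text) of how such article-summary training instances and the priming step can be wired up; the SEP marker, the eos token, and the next_token_probs function are hypothetical stand-ins for a real subword vocabulary and a trained transformer language model.

    SEP = "<sum>"    # hypothetical unique marker playing the role of the separator δ

    def make_training_instance(article_tokens, summary_tokens):
        # (x_1 ... x_m, δ, y_1 ... y_n): one long sequence trained with teacher forcing
        return article_tokens + [SEP] + summary_tokens

    def generate_summary(next_token_probs, article_tokens, max_len=50, eos="</s>"):
        # Prime the model with the full article plus the separator, then decode
        # autoregressively, conditioning on the article and everything generated so far.
        context = article_tokens + [SEP]
        summary = []
        for _ in range(max_len):
            probs = next_token_probs(context + summary)   # stand-in for the trained model
            token = max(probs, key=probs.get)             # greedy choice; sampling also works
            if token == eos:
                break
            summary.append(token)
        return summary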
Note that, in contrast to RNNs, the model has access to the original article as well as to the newly generated text throughout the process." 9,Deep Learning Architectures for Sequence Processing,9.9,Contextual Generation and Summarization,,,"As we'll see in later chapters, variations on this simple scheme are the basis for successful text-to-text applications including machine translation, summarization and question answering." @@ -1450,7 +1425,7 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 10,Machine Translation and Encoder-Decoder Models,,,,,"Machine translation in its present form therefore focuses on a number of very practical tasks. Perhaps the most common current use of machine translation is for information access. We might want to translate some instructions on the web, perhaps the recipe for a favorite dish, or the steps for putting together some furniture. Or we might want to read an article in a newspaper, or get information from an online resource like Wikipedia or a government webpage in a foreign language. MT for information access is probably one of the most common uses of NLP technology, and Google Translate alone (shown above) translates hundreds of billions of words a day between over 100 languages." 10,Machine Translation and Encoder-Decoder Models,,,,,Another common use of machine translation is to aid human translators. MT systems are routinely used to produce a draft translation that is fixed up in a post-editing phase by a human translator. This task is often called computer-aided translation or CAT. CAT is commonly used as part of localization: the task of adapting content or a product to a particular language community. 10,Machine Translation and Encoder-Decoder Models,,,,,"Finally, a more recent application of MT is to in-the-moment human communication needs. This includes incremental translation, translating speech on-the-fly before the entire sentence is complete, as is commonly used in simultaneous interpretation. Image-centric translation can be used, for example, to run OCR on the text in a phone camera image and feed the result to an MT system to translate menus or street signs." -10,Machine Translation and Encoder-Decoder Models,,,,,"The standard algorithm for MT is the encoder-decoder network, also called the encoderdecoder sequence to sequence network, an architecture that can be implemented with RNNs or with Transformers. We've seen in prior chapters that RNN or Transformer architecture can be used to do classification (for example to map a sentence to a positive or negative sentiment tag for sentiment analysis), or can be used to do sequence labeling (for example to assign each word in an input sentence with a part-of-speech, or with a named entity tag). For part-of-speech tagging, recall that the output tag is associated directly with each input word, and so we can just model the tag as output y t for each input word x t ." +10,Machine Translation and Encoder-Decoder Models,,,,,"The standard algorithm for MT is the encoder-decoder network, also called the sequence-to-sequence network, an architecture that can be implemented with RNNs or with Transformers.
We've seen in prior chapters that an RNN or Transformer architecture can be used to do classification (for example to map a sentence to a positive or negative sentiment tag for sentiment analysis), or can be used to do sequence labeling (for example to assign a part-of-speech or named entity tag to each word in an input sentence). For part-of-speech tagging, recall that the output tag is associated directly with each input word, and so we can just model the tag as output y_t for each input word x_t." 10,Machine Translation and Encoder-Decoder Models,,,,,Encoder-decoder or sequence-to-sequence models are used for a different kind of sequence modeling in which the output sequence is a complex function of the entire input sequence; we must map from a sequence of input words or tokens to a sequence of tags that are not merely direct mappings from individual words. 10,Machine Translation and Encoder-Decoder Models,,,,,"Machine translation is exactly such a task: the words of the target language don't necessarily agree with the words of the source language in number or order. Consider translating the following made-up English sentence into Japanese. Note that the elements of the sentences are in very different places in the different languages. In English, the verb is in the middle of the sentence, while in Japanese, the verb kaita comes at the end. The Japanese sentence doesn't require the pronoun he, while English does. Such differences between languages can be quite complex. In the following actual sentence from the United Nations, notice the many changes between the Chinese sentence (we've given in red a word-by-word gloss of the Chinese characters) and its English equivalent." 10,Machine Translation and Encoder-Decoder Models,,,,,"(10.2) 大会/General Assembly 在/on 1982年/1982 12月/December 10日/10 通过了/adopted 第37号/37th 决议/resolution ,核准了/approved 第二次/second 探索/exploration 及/and 和平/peaceful 利用/using 外层空间/outer space 会议/conference 的/of 各项/various 建议/suggestions 。 On 10 December 1982, the General Assembly adopted resolution 37 in which it endorsed the recommendations of the Second United Nations Conference on the Exploration and Peaceful Uses of Outer Space." @@ -1465,10 +1440,10 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"Of course we also need to translate the individual words from one language to another. For any translation, the appropriate word can vary depending on the context. The English source-language word bass, for example, can appear in Spanish as the fish lubina or the musical instrument bajo. German uses two distinct words for what in English would be called a wall: Wand for walls inside a building, and Mauer for walls outside a building. Where English uses the word brother for any male sibling, Chinese and many other languages have distinct words for older brother and younger brother (Mandarin gege and didi, respectively). In all these cases, translating bass, wall, or brother from English would require a kind of specialization, disambiguating the different uses of a word. For this reason the fields of MT and Word Sense Disambiguation (Chapter 18) are closely linked." 10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"Sometimes one language places more grammatical constraints on word choice than another.
We saw above that English marks nouns for whether they are singular or plural. Mandarin doesn't. French and Spanish, for example, mark grammatical gender on adjectives, so a translation from English into French requires specifying adjective gender." 10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"The way that languages differ in lexically dividing up conceptual space may be more complex than this one-to-many translation problem, leading to many-to-many mappings. For example, Fig. 10.2 summarizes some of the complexities discussed by Hutchins and Somers (1992) in translating English leg, foot, and paw to French. For example, when leg is used about an animal it's translated as French jambe; but about the leg of a journey, as French étape; if the leg is of a chair, we use French pied." -10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"Further, one language may have a lexical gap, where no word or phrase, short lexical gap of an explanatory footnote, can express the exact meaning of a word in the other language. For example, English does not have a word that corresponds neatly to Mandarin xiào or Japanese oyakōkōo (in English one has to make do with awkward phrases like filial piety or loving child, or good son/daughter for both). Finally, languages differ systematically in how the conceptual properties of an event are mapped onto specific words. Talmy (1985, 1991) noted that languages can be characterized by whether direction of motion and manner of motion are marked on the verb or on the ""satellites"": particles, prepositional phrases, or adverbial phrases. For example, a bottle floating out of a cave would be described in English with the direction marked on the particle out, while in Spanish the direction Verb-framed languages mark the direction of motion on the verb (leaving the verb-framed satellites to mark the manner of motion), like Spanish acercarse 'approach', alcanzar 'reach', entrar 'enter', salir 'exit'. Satellite-framed languages mark the satellite-framed direction of motion on the satellite (leaving the verb to mark the manner of motion), like English crawl out, float off, jump down, run after. Languages like Japanese, Tamil, and the many languages in the Romance, Semitic, and Mayan languages families, are verb-framed; Chinese as well as non-Romance Indo-European languages like English, Swedish, Russian, Hindi, and Farsi are satellite framed (Talmy 1991 , Slobin 1996 ." +10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.2,Lexical Divergences,"Further, one language may have a lexical gap, where no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language. For example, English does not have a word that corresponds neatly to Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward phrases like filial piety or loving child, or good son/daughter for both). Finally, languages differ systematically in how the conceptual properties of an event are mapped onto specific words. Talmy (1985, 1991) noted that languages can be characterized by whether direction of motion and manner of motion are marked on the verb or on the ""satellites"": particles, prepositional phrases, or adverbial phrases.
For example, a bottle floating out of a cave would be described in English with the direction marked on the particle out, while in Spanish the direction would be marked on the verb. Verb-framed languages mark the direction of motion on the verb (leaving the satellites to mark the manner of motion), like Spanish acercarse 'approach', alcanzar 'reach', entrar 'enter', salir 'exit'. Satellite-framed languages mark the direction of motion on the satellite (leaving the verb to mark the manner of motion), like English crawl out, float off, jump down, run after. Languages like Japanese, Tamil, and the many languages in the Romance, Semitic, and Mayan language families, are verb-framed; Chinese as well as non-Romance Indo-European languages like English, Swedish, Russian, Hindi, and Farsi are satellite-framed (Talmy 1991, Slobin 1996)." 10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.3,Morphological Typology,"Morphologically, languages are often characterized along two dimensions of variation. The first is the number of morphemes per word, ranging from isolating languages like Vietnamese and Cantonese, in which each word generally has one morpheme, to polysynthetic languages like Siberian Yupik (""Eskimo""), in which a single word may have very many morphemes, corresponding to a whole sentence in English. The second dimension is the degree to which morphemes are segmentable, ranging from agglutinative languages like Turkish, in which morphemes have relatively clean boundaries, to fusion languages like Russian, in which a single affix may conflate multiple morphemes, like -om in the word stolom (table-SG-INSTR-DECL1), which fuses the distinct morphological categories instrumental, singular, and first declension. Translating between languages with rich morphology requires dealing with structure below the word level, and for this reason modern systems generally use subword models like the wordpiece or BPE models of Section 10.7.1." 10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.4,Referential Density,"Finally, languages vary along a typological dimension related to the things they tend to omit. Some languages, like English, require that we use an explicit pronoun when talking about a referent that is given in the discourse. In other languages, however, we can sometimes omit pronouns altogether, as the following example from Spanish shows: (10.6) [El jefe]_i dio con un libro. ∅_i Mostró a un descifrador ambulante. [The boss]_i came upon a book. [He]_i showed it to a wandering decoder." -10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.4,Referential Density,"Languages that can omit pronouns are called pro-drop languages. Even among pro-drop the pro-drop languages, there are marked differences in frequencies of omission. Japanese and Chinese, for example, tend to omit far more than does Spanish. This dimension of variation across languages is called the dimension of referential density. We say that languages that tend to use more pronouns are more referentially referential density dense than those that use more zeros. Referentially sparse languages, like Chinese or Japanese, that require the hearer to do more inferential work to recover antecedents are also called cold languages.
Languages that are more explicit and make it easier Marshall McLuhan's 1964 distinction between hot media like movies, which fill in many details for the viewer, versus cold media like comics, which require the reader to do more inferential work to fill out the representation (Bickel, 2003) ." +10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.4,Referential Density,"Languages that can omit pronouns are called pro-drop languages. Even among the pro-drop languages, there are marked differences in frequencies of omission. Japanese and Chinese, for example, tend to omit far more than does Spanish. This dimension of variation across languages is called the dimension of referential density. We say that languages that tend to use more pronouns are more referentially dense than those that use more zeros. Referentially sparse languages, like Chinese or Japanese, that require the hearer to do more inferential work to recover antecedents are also called cold languages. Languages that are more explicit and make it easier for the hearer are called hot languages; the terms are borrowed from Marshall McLuhan's 1964 distinction between hot media like movies, which fill in many details for the viewer, versus cold media like comics, which require the reader to do more inferential work to fill out the representation (Bickel, 2003)." 10,Machine Translation and Encoder-Decoder Models,10.1,Word Order Typology,10.1.4,Referential Density,"Translating from languages with extensive pro-drop, like Chinese or Japanese, to non-pro-drop languages like English can be difficult since the model must somehow identify each zero and recover who or what is being talked about in order to insert the proper pronoun." 10,Machine Translation and Encoder-Decoder Models,10.2,The Encoder-Decoder Model,,,"Encoder-decoder networks, or sequence-to-sequence networks, are models capable of generating contextually appropriate, arbitrary-length output sequences. Encoder-decoder networks have been applied to a very wide range of applications including machine translation, summarization, question answering, and dialogue." 10,Machine Translation and Encoder-Decoder Models,10.2,The Encoder-Decoder Model,,,"The key idea underlying these networks is the use of an encoder network that takes an input sequence and creates a contextualized representation of it, often called the context. This representation is then passed to a decoder which generates a task-specific output sequence. Fig. 10.3 illustrates the architecture. Encoder-decoder networks consist of three components:" @@ -1519,7 +1494,7 @@ Advanced:,Perplexity's Relation to Entropy,3.10,Bibliographical and Historical N 10,Machine Translation and Encoder-Decoder Models,10.4,Attention,,,"The weights W_s, which are then trained during normal end-to-end training, give the network the ability to learn which aspects of similarity between the decoder and encoder states are important to the current application. This bilinear model also allows the encoder and decoder to use different dimensional vectors, whereas the simple dot-product attention requires that the encoder and decoder hidden states have the same dimensionality." 10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"The decoding algorithm we gave above for generating translations has a problem (as does the autoregressive generation we introduced in Chapter 9 for generating from a conditional language model).
Recall that algorithm: at each time step in decoding, the output y_t is chosen by computing a softmax over the set of possible outputs (the vocabulary, in the case of language modeling or MT), and then choosing the highest probability token (the argmax):" 10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"y_t = argmax_{w∈V} P(w|x, y_1...y_{t−1}) (10.18)" -10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Choosing the single most probable token to generate at each step is called greedy greedy decoding; a greedy algorithm is one that make a choice that is locally optimal, whether or not it will turn out to have been the best choice with hindsight. Indeed, greedy search is not optimal, and may not find the highest probability translation. The problem is that the token that looks good to the decoder now might turn out later to have been the wrong choice! Let's see this by looking at the search tree, a graphical representation of the search tree choices the decoder makes in searching for the best translation, in which we view the decoding problem as a heuristic state-space search and systematically explore the space of possible outputs. In such a search tree, the branches are the actions, in this case the action of generating a token, and the nodes are the states, in this case the state of having generated a particular prefix. We are searching for the best action sequence, i.e. the target string with the highest probability. Fig. 10 .11 demonstrates the problem, using a made-up example. Notice that the most probable sequence is ok ok (with a probability of .4*.7*1.0), but a greedy search algorithm will fail to find it, because it incorrectly chooses yes as the first word since it has the highest local probability. Figure 10 .11 A search tree for generating the target string T = t 1 ,t 2 , ... from the vocabulary V = {yes, ok, }, given the source string, showing the probability of generating each token from that state. Greedy search would choose yes at the first time step followed by yes, instead of the globally most probable sequence ok ok." +10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Choosing the single most probable token to generate at each step is called greedy decoding; a greedy algorithm is one that makes a choice that is locally optimal, whether or not it will turn out to have been the best choice with hindsight. Indeed, greedy search is not optimal, and may not find the highest probability translation. The problem is that the token that looks good to the decoder now might turn out later to have been the wrong choice! Let's see this by looking at the search tree, a graphical representation of the choices the decoder makes in searching for the best translation, in which we view the decoding problem as a heuristic state-space search and systematically explore the space of possible outputs. In such a search tree, the branches are the actions, in this case the action of generating a token, and the nodes are the states, in this case the state of having generated a particular prefix. We are searching for the best action sequence, i.e. the target string with the highest probability. Fig. 10.11 demonstrates the problem, using a made-up example. Notice that the most probable sequence is ok ok (with a probability of .4*.7*1.0), but a greedy search algorithm will fail to find it, because it incorrectly chooses yes as the first word since it has the highest local probability.
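To make the procedure explicit, here is a minimal sketch of greedy decoding (our own illustration, not code from the text; step_probs is an assumed stand-in for the decoder plus softmax, returning a probability for every vocabulary word given the source sentence and the prefix generated so far):
def greedy_decode(step_probs, vocab, max_len, end_token='</s>'):
    prefix = []
    for _ in range(max_len):
        probs = step_probs(prefix)
        best = max(vocab, key=lambda w: probs[w])   # the locally optimal (argmax) choice
        prefix.append(best)
        if best == end_token:
            break
    return prefix
Run on the example in Fig. 10.11, this loop commits to yes at the first step and can never recover the higher-probability sequence ok ok.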
Figure 10.11 A search tree for generating the target string T = t_1, t_2, ... from the vocabulary V = {yes, ok, </s>}, given the source string, showing the probability of generating each token from that state. Greedy search would choose yes at the first time step followed by yes, instead of the globally most probable sequence ok ok." 10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Recall from Chapter 8 that for part-of-speech tagging we used dynamic programming search (the Viterbi algorithm) to address this problem. Unfortunately, dynamic programming is not applicable to generation problems with long-distance dependencies between the output decisions. The only method guaranteed to find the best solution is exhaustive search: computing the probability of every one of the V^T possible sentences (for some length value T), which is obviously too slow." 10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Instead, decoding in MT and other sequence generation problems generally uses a method called beam search. In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width, on the metaphor of a flashlight beam that can be parameterized to be wider or narrower." 10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"Thus at the first step of decoding, we compute a softmax over the entire vocabulary, assigning a probability to each word. We then select the k-best options from this softmax output. These initial k outputs are the search frontier and these k initial words are called hypotheses. A hypothesis is an output sequence, a translation-so-far, together with its probability. Figure 10.12 Beam search decoding with a beam width of k = 2. At each time step, we choose the k best hypotheses, compute the V possible extensions of each hypothesis, score the resulting k * V possible hypotheses and choose the best k to continue. At time 1, the frontier is filled with the best 2 options from the initial state of the decoder: arrived and the. We then extend each of those, compute the probability of all the hypotheses so far (arrived the, arrived aardvark, the green, the witch) and compute the best 2 (in this case the green and the witch) to be the search frontier to extend on the next step. On the arcs we show the decoders that we run to score the extension words (although for simplicity we haven't shown the context value c_i that is input at each step)."
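10,Machine Translation and Encoder-Decoder Models,10.5,Beam Search,,,"As a minimal sketch of the loop just described (our own illustration, not the book's implementation; step_probs is an assumed stand-in for the decoder, returning P(w|x, prefix) for every vocabulary word), beam search can be written as:
import heapq
from math import log

def beam_search(step_probs, vocab, k, max_len):
    beam = [(0.0, [])]                       # (log probability, hypothesis) pairs
    for _ in range(max_len):
        candidates = []
        for logp, hyp in beam:               # extend each hypothesis on the frontier
            probs = step_probs(hyp)
            for w in vocab:                  # by every word in the vocabulary
                candidates.append((logp + log(probs[w]), hyp + [w]))
        beam = heapq.nlargest(k, candidates, key=lambda c: c[0])   # keep the best k of the k*|V| extensions
    return max(beam, key=lambda c: c[0])[1]  # the best-scoring hypothesis
A fuller implementation would also set aside hypotheses that have generated the end-of-sentence token instead of extending them further, and in practice the log-probability scores are usually length-normalized so that longer hypotheses are not unfairly penalized."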