chapter,section,subsection,question,paragraph,answer
Regular Expressions,Regular Expressions,Basic Regular Expressions,"What is the meaning of the Kleene star in Regex?","This language consists of strings with a b, followed by at least two a's, followed by an exclamation point. The operators that allow us to say things like ""some number of as"" are based on the asterisk or *, commonly called the Kleene * (generally pronounced ""cleany star""). The Kleene star means ""zero or more occurrences of the immediately previous character or regular expression"". So /a*/ means ""any string of zero or more as"". This will match a or aaaaaa, but it will also match Off Minor since the string Off Minor has zero a's. So the regular expression for matching one or more a is /aa*/, meaning one a followed by zero or more as. More complex patterns can also be repeated. So /[ab]*/ means ""zero or more a's or b's"" (not ""zero or more right square braces""). This will match strings like aaaa or ababab or bbbb.","The Kleene star means ""zero or more occurrences of the immediately previous character or regular expression"""
Regular Expressions,Regular Expressions,Lookahead Assertions,"What is the usage of the Regex lookahead operator ""?=""?","These lookahead assertions make use of the (? syntax that we saw in the previous section for non-capture groups. The operator (?= pattern) is true if pattern occurs, but is zero-width, i.e. the match pointer doesn’t advance. The operator (?! pattern) only returns true if a pattern does not match, but again is zero-width and doesn’t advance the cursor. Negative lookahead is commonly used when we are parsing some complex pattern but want to rule out a special case. For example suppose we want to match, at the beginning of a line, any single word that doesn’t start with “Volcano”. We can use negative lookahead to do this: /^(?!Volcano)[A-Za-z]+/","The operator (?= pattern) is true if pattern occurs, but is zero-width, i.e. the match pointer doesn’t advance."
Regular Expressions,Text Normalization,,"What are the most common steps in a text normalization process?","Before almost any natural language processing of a text, the text has to be normalized. At least three tasks are commonly applied as part of any normalization process: 1. Tokenizing (segmenting) words 2. Normalizing word formats 3. Segmenting sentences In the next sections we walk through each of these tasks.","1. Tokenizing (segmenting) words 2. Normalizing word formats 3. Segmenting sentences"
Regular Expressions,Text Normalization,Byte-Pair Encoding for Tokenization,"What are the two most common components of a tokenization scheme?","Most tokenization schemes have two parts: a token learner, and a token segmenter. The token learner takes a raw training corpus (sometimes roughly preseparated into words, for example by whitespace) and induces a vocabulary, a set of tokens. The token segmenter takes a raw test sentence and segments it into the tokens in the vocabulary. Three algorithms are widely used: byte-pair encoding (Sennrich et al., 2016), unigram language modeling (Kudo, 2018), and WordPiece (Schuster and Nakajima, 2012); there is also a SentencePiece library that includes implementations of the first two of the three (Kudo and Richardson, 2018).","a token learner, and a token segmenter"
Regular Expressions,Text Normalization,Byte-Pair Encoding for Tokenization,"What is the purpose of a token segmenter?","Most tokenization schemes have two parts: a token learner, and a token segmenter.
The token learner takes a raw training corpus (sometimes roughly preseparated into words, for example by whitespace) and induces a vocabulary, a set of tokens. The token segmenter takes a raw test sentence and segments it into the tokens in the vocabulary. Three algorithms are widely used: byte-pair encoding (Sennrich et al., 2016), unigram language modeling (Kudo, 2018), and WordPiece (Schuster and Nakajima, 2012); there is also a SentencePiece library that includes implementations of the first two of the three (Kudo and Richardson, 2018).","The token segmenter takes a raw test sentence and segments it into the tokens in the vocabulary."
Regular Expressions,Text Normalization,Byte-Pair Encoding for Tokenization,"What is the purpose of a token learner in the BPE algorithm?","The token learner part of the BPE algorithm takes a corpus broken up into individual characters or bytes, and learns a vocabulary by iteratively merging tokens.","takes a corpus broken up into individual characters or bytes, and learns a vocabulary by iteratively merging tokens."
Regular Expressions,Text Normalization,"Word Normalization, Lemmatization and Stemming","What is word normalization?","Word normalization is the task of putting words/tokens in a standard format, choosing a single normal form for words with multiple forms like USA and US or uh-huh and uhhuh. This standardization may be valuable, despite the spelling information that is lost in the normalization process. For information retrieval or information extraction about the US, we might want to see information from documents whether they mention the US or the USA.","Word normalization is the task of putting words/tokens in a standard format"
Regular Expressions,Text Normalization,"Word Normalization, Lemmatization and Stemming","How is lemmatization performed?","How is lemmatization done? The most sophisticated methods for lemmatization involve complete morphological parsing of the word. Morphology is the study of the way words are built up from smaller meaning-bearing units called morphemes. Two broad classes of morphemes can be distinguished: stems, the central morpheme of the word, supplying the main meaning, and affixes, adding ""additional"" meanings of various kinds. So, for example, the word fox consists of one morpheme (the morpheme fox) and the word cats consists of two: the morpheme cat and the morpheme -s. A morphological parser takes a word like cats and parses it into the two morphemes cat and s, or parses a Spanish word like amaren ('if in the future they would love') into the morpheme amar 'to love', and the morphological features 3PL and future subjunctive.","The most sophisticated methods for lemmatization involve complete morphological parsing of the word."
Regular Expressions,Text Normalization,"Word Normalization, Lemmatization and Stemming","What is lemmatization?","Lemmatization is the task of determining that two words have the same root, despite their surface differences. The words am, are, and is have the shared lemma be; the words dinner and dinners both have the lemma dinner. Lemmatizing each of the inflected forms of a word to the same lemma will let us find all mentions of a word in a morphologically rich language like Russian, for example all the case-inflected forms of Moscow. The lemmatized form of a sentence like He is reading detective stories would thus be He be read detective story.","Lemmatization is the task of determining that two words have the same root, despite their surface differences."
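The BPE token learner is only described in prose in the rows above, so the following is a minimal, illustrative Python sketch of its merge loop: it starts from individual characters (plus an end-of-word marker) and repeatedly merges the most frequent adjacent pair. The toy corpus, helper names, and the ""_"" end-of-word symbol are assumptions for illustration, not details taken from the source rows.

```python
from collections import Counter

def get_pair_counts(vocab):
    """Count how often each adjacent symbol pair occurs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the chosen symbol pair with a single merged symbol."""
    new_vocab = {}
    for word, freq in vocab.items():
        symbols = word.split()
        merged, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                merged.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        key = " ".join(merged)
        new_vocab[key] = new_vocab.get(key, 0) + freq
    return new_vocab

def bpe_token_learner(corpus_words, num_merges):
    # Start from characters, with an assumed end-of-word marker "_".
    vocab = Counter(" ".join(list(w) + ["_"]) for w in corpus_words)
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges

# Toy usage: learn a few merges from a tiny illustrative corpus.
print(bpe_token_learner(["low", "low", "lower", "newest", "newest", "widest"], 5))
```

A token segmenter would then apply this learned merge list, in order, to the characters of new test text.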
Regular Expressions,Minimum Edit Distance,,"How is the minimum edit distance between two strings defined?","Again, the fact that these two strings are very similar (differing by only one word) seems like useful evidence for deciding that they might be coreferent. Edit distance gives us a way to quantify both of these intuitions about string similarity. More formally, the minimum edit distance between two strings is defined as the minimum number of editing operations (operations like insertion, deletion, substitution) needed to transform one string into another.","the minimum edit distance between two strings is defined as the minimum number of editing operations (operations like insertion, deletion, substitution) needed to transform one string into another."
N-gram Language Models,,,"What is a language model?","Models that assign probabilities to sequences of words are called language models or LMs. In this chapter we introduce the simplest model that assigns probabilities to sentences and sequences of words, the n-gram. An n-gram is a sequence of n words: a 2-gram (which we'll call bigram) is a two-word sequence of words like ""please turn"", ""turn your"", or ""your homework"", and a 3-gram (a trigram) is a three-word sequence of words like ""please turn your"", or ""turn your homework"". We'll see how to use n-gram models to estimate the probability of the last word of an n-gram given the previous words, and also to assign probabilities to entire sequences. In a bit of terminological ambiguity, we usually drop the word ""model"", and use the term n-gram (and bigram, etc.) to mean either the word sequence itself or the predictive model that assigns it a probability. While n-gram models are much simpler than state-of-the-art neural language models based on the RNNs and transformers we will introduce in Chapter 9, they are an important foundational tool for understanding the fundamental concepts of language modeling.","Models that assign probabilities to sequences of words are called language models"
N-gram Language Models,N-Grams,,"How can we estimate the probability of a word?","One way to estimate this probability is from relative frequency counts: take a very large corpus, count the number of times we see its water is so transparent that, and count the number of times this is followed by the. This would be answering the question ""Out of the times we saw the history h, how many times was it followed by the word w"", as follows:","from relative frequency counts"
N-gram Language Models,N-Grams,,"What is a Markov model?","The assumption that the probability of a word depends only on the previous word is called a Markov assumption. Markov models are the class of probabilistic models that assume we can predict the probability of some future unit without looking too far into the past. We can generalize the bigram (which looks one word into the past) to the trigram (which looks two words into the past) and thus to the n-gram (which looks n − 1 words into the past). Thus, the general equation for this n-gram approximation to the conditional probability of the next word in a sequence is","Markov models are the class of probabilistic models that assume we can predict the probability of some future unit without looking too far into the past."
N-gram Language Models,N-Grams,,"What technique can be used to estimate n-gram probabilities?","How do we estimate these bigram or n-gram probabilities?
An intuitive way to estimate probabilities is called maximum likelihood estimation or MLE. We get the MLE estimate for the parameters of an n-gram model by getting counts from a corpus, and normalizing the counts so that they lie between 0 and 1. For example, to compute a particular bigram probability of a word y given a previous word x, we'll compute the count of the bigram C(xy) and normalize by the sum of all the bigrams that share the same first word x: P(y|x) = C(xy) / Σy′ C(xy′).","maximum likelihood estimation or MLE"
N-gram Language Models,N-Grams,,"What is the difference between bigram and trigram models?","Some practical issues: Although for pedagogical purposes we have only described bigram models, in practice it's more common to use trigram models, which condition on the previous two words rather than the previous word, or 4-gram or even 5-gram models, when there is sufficient training data. Note that for these larger n-grams, we'll need to assume extra contexts to the left and right of the sentence end. For example, to compute trigram probabilities at the very beginning of the sentence, we use two pseudo-words for the first trigram (i.e., P(I|<s><s>)).","condition on the previous two words rather than the previous word"
N-gram Language Models,Evaluating Language Models,,"How can two probabilistic language models be compared on a test set?","But what does it mean to ""fit the test set""? The answer is simple: whichever model assigns a higher probability to the test set, meaning it more accurately predicts the test set, is a better model. Given two probabilistic models, the better model is the one that has a tighter fit to the test data or that better predicts the details of the test data, and hence will assign a higher probability to the test data.","whichever model assigns a higher probability to the test set, meaning it more accurately predicts the test set, is a better model"
N-gram Language Models,Evaluating Language Models,Perplexity,"What is the perplexity of a language model?","In practice we don't use raw probability as our metric for evaluating language models, but a variant called perplexity. The perplexity (sometimes called PP for short) of a language model on a test set is the inverse probability of the test set, normalized by the number of words. For a test set W = w1 w2 . . . wN: PP(W) = P(w1 w2 . . . wN)^(−1/N).","perplexity of a language model on a test set is the inverse probability of the test set, normalized by the number of words."
N-gram Language Models,Smoothing,,"How can language models be prevented from assigning zero probability to unknown words?","What do we do with words that are in our vocabulary (they are not unknown words) but appear in a test set in an unseen context (for example they appear after a word they never appeared after in training)? To keep a language model from assigning zero probability to these unseen events, we'll have to shave off a bit of probability mass from some more frequent events and give it to the events we've never seen. This modification is called smoothing or discounting.
In this section and the following ones we'll introduce a variety of ways to do smoothing: Laplace (add-one) smoothing, add-k smoothing, stupid backoff, and Kneser-Ney smoothing.","smoothing or discounting"
N-gram Language Models,Huge Language Models and Stupid Backoff,,"What technique can be used to shrink an n-gram language model?","An n-gram language model can also be shrunk by pruning, for example only storing n-grams with counts greater than some threshold (such as the count threshold of 40 used for the Google n-gram release) or using entropy to prune less-important n-grams (Stolcke, 1998). Another option is to build approximate language models using techniques like Bloom filters (Talbot and Osborne 2007, Church et al. 2007).","pruning"
N-gram Language Models,Perplexity's Relation to Entropy,,"When is a stochastic process said to be stationary?","A stochastic process is said to be stationary if the probabilities it assigns to a sequence are invariant with respect to shifts in the time index. In other words, the probability distribution for words at time t is the same as the probability distribution at time t + 1. Markov models, and hence n-grams, are stationary. For example, in a bigram, P i is dependent only on P i−1. So if we shift our time index by x, P i+x is still dependent on P i+x−1. But natural language is not stationary, since as we show in Chapter 12, the probability of upcoming words can be dependent on events that were arbitrarily distant and time dependent. Thus, our statistical models only give an approximation to the correct distributions and entropies of natural language. To summarize, by making some incorrect but convenient simplifying assumptions, we can compute the entropy of some stochastic process by taking a very long sample of the output and computing its average log probability. Now we are ready to introduce cross-entropy. The cross-entropy is useful when we don't know the actual probability distribution p that generated some data. It allows us to use some m, which is a model of p (i.e., an approximation to p). The cross-entropy of m on p is defined by","A stochastic process is said to be stationary if the probabilities it assigns to a sequence are invariant with respect to shifts in the time index."
Naive Bayes and Sentiment Classification,Naive Bayes Classifiers,,"Why are simplifying assumptions needed when using a Naive Bayes Classifier?","Unfortunately, Eq. 4.6 is still too hard to compute directly: without some simplifying assumptions, estimating the probability of every possible combination of features (for example, every possible set of words and positions) would require huge numbers of parameters and impossibly large training sets. Naive Bayes classifiers therefore make two simplifying assumptions.","without some simplifying assumptions, estimating the probability of every possible combination of features (for example, every possible set of words and positions) would require huge numbers of parameters and impossibly large training sets."
Naive Bayes and Sentiment Classification,Optimizing for Sentiment Analysis,,"What can be done to optimize Naive Bayes when an insufficient amount of labeled data is present?","Finally, in some situations we might have insufficient labeled training data to train accurate naive Bayes classifiers using all words in the training set to estimate positive and negative sentiment.
In such cases we can instead derive the positive and negative word features from sentiment lexicons, lists of words that are pre-annotated with positive or negative sentiment. Four popular lexicons are the General Inquirer (Stone et al., 1966), LIWC (Pennebaker et al., 2007), the opinion lexicon","derive the positive and negative word features from sentiment lexicons, lists of words that are pre-annotated with positive or negative sentiment."
Naive Bayes and Sentiment Classification,Naive Bayes as a Language Model,,"What type of features can be used to train a Naive Bayes classifier?","As we saw in the previous section, naive Bayes classifiers can use any sort of feature: dictionaries, URLs, email addresses, network features, phrases, and so on. But if, as in the previous section, we use only individual word features, and we use all of the words in the text (not a subset), then naive Bayes has an important similarity to language modeling. Specifically, a naive Bayes model can be viewed as a set of class-specific unigram language models, in which the model for each class instantiates a unigram language model.","dictionaries, URLs, email addresses, network features, phrases"
Naive Bayes and Sentiment Classification,"Evaluation: Precision, Recall, F-measure",,"Why is accuracy rarely used alone for unbalanced text classification tasks?","To the bottom right of the table is the equation for accuracy, which asks what percentage of all the observations (for the spam or pie examples that means all emails or tweets) our system labeled correctly. Although accuracy might seem a natural metric, we generally don't use it for text classification tasks. That's because accuracy doesn't work well when the classes are unbalanced (as indeed they are with spam, which is a large majority of email, or with tweets, which are mainly not about pie). To make this more explicit, imagine that we looked at a million tweets, and let's say that only 100 of them are discussing their love (or hatred) for our pie,","because accuracy doesn't work well when the classes are unbalanced"
Naive Bayes and Sentiment Classification,Test sets and Cross-validation,,"What are folds in cross-validation?","In cross-validation, we choose a number k, and partition our data into k disjoint subsets called folds. Now we choose one of those k folds as a test set, train our classifier on the remaining k − 1 folds, and then compute the error rate on the test set. Then we repeat with another fold as the test set, again training on the other k − 1 folds. We do this sampling process k times and average the test set error rate from these k runs to get an average error rate. If we choose k = 10, we would train 10 different models (each on 90% of our data), test the model 10 times, and average these 10 values. This is called 10-fold cross-validation.","k disjoint subsets"
Logistic Regression,Classification: the Sigmoid,,"What is the purpose of logistic regression?","The goal of binary logistic regression is to train a classifier that can make a binary decision about the class of a new input observation. Here we introduce the sigmoid classifier that will help us make this decision. Consider a single input observation x, which we will represent by a vector of features [x 1 , x 2 , ..., x n ] (we'll show sample features in the next subsection). The classifier output y can be 1 (meaning the observation is a member of the class) or 0 (the observation is not a member of the class).
We want to know the probability P(y = 1|x) that this observation is a member of the class. So perhaps the decision is ""positive sentiment"" versus ""negative sentiment"", the features represent counts of words in a document, P(y = 1|x) is the probability that the document has positive sentiment, and P(y = 0|x) is the probability that the document has negative sentiment. Logistic regression solves this task by learning, from a training set, a vector of weights and a bias term. Each weight w i is a real number, and is associated with one of the input features x i . The weight w i represents how important that input feature is to the classification decision, and can be positive (providing evidence that the instance being classified belongs in the positive class) or negative (providing evidence that the instance being classified belongs in the negative class). Thus we might expect in a sentiment task the word awesome to have a high positive weight, and abysmal to have a very negative weight. The bias term, also called the intercept, is another real number that's added to the weighted inputs.","The goal of binary logistic regression is to train a classifier that can make a binary decision about the class of a new input observation."
Logistic Regression,Gradient Descent,,"What is gradient descent?","How shall we find the minimum of this (or any) loss function? Gradient descent is a method that finds a minimum of a function by figuring out in which direction (in the space of the parameters θ) the function's slope is rising the most steeply, and moving in the opposite direction. The intuition is that if you are hiking in a canyon and trying to descend most quickly down to the river at the bottom, you might look around yourself 360 degrees, find the direction where the ground is sloping the steepest, and walk downhill in that direction.","Gradient descent is a method that finds a minimum of a function by figuring out in which direction (in the space of the parameters θ) the function's slope is rising the most steeply, and moving in the opposite direction."
Logistic Regression,Gradient Descent,,"What is stochastic gradient descent?","Stochastic gradient descent is an online algorithm that minimizes the loss function by computing its gradient after each training example, and nudging θ in the right direction (the opposite direction of the gradient). (An ""online algorithm"" is one that processes its input example by example, rather than waiting until it sees the entire input.) Here x is the set of training inputs.","Stochastic gradient descent is an online algorithm that minimizes the loss function by computing its gradient after each training example"
Logistic Regression,Regularization,,"Which regularization technique is easier to optimize?","These kinds of regularization come from statistics, where L1 regularization is called lasso regression (Tibshirani, 1996) and L2 regularization is called ridge regression, and both are commonly used in language processing. L2 regularization is easier to optimize because of its simple derivative (the derivative of θ² is just 2θ), while L1 regularization is more complex (the derivative of |θ| is non-continuous at zero).","L2 regularization is easier to optimize because of its simple derivative"
Logistic Regression,Multinomial Logistic Regression,,"What is the other name of multinomial logistic regression?","Sometimes we need more than two classes.
Perhaps we might want to do 3-way sentiment classification (positive, negative, or neutral). Or we could be assigning some of the labels we will introduce in Chapter 8, like the part of speech of a word (choosing from 10, 30, or even 50 different parts of speech), or the named entity type of a phrase (choosing from tags like person, location, organization). In such cases we use multinomial logistic regression, also called softmax regression (or, historically, the maxent classifier). In multinomial logistic regression the target y is a variable that ranges over more than two classes; we want to know the probability of y being in each potential class c ∈ C, p(y = c|x).","softmax regression"
Vector Semantics and Embeddings,Lexical Semantics,,"What are word connotations?","Finally, words have affective meanings or connotations. The word connotation has different meanings in different fields, but here we use it to mean the aspects of a word's meaning that are related to a writer or reader's emotions, sentiment, opinions, or evaluations. For example some words have positive connotations (happy) while others have negative connotations (sad). Even words whose meanings are similar in other ways can vary in connotation; consider the difference in connotations between fake, knockoff, forgery, on the one hand, and copy, replica, reproduction on the other, or innocent (positive connotation) and naive (negative connotation). Some words describe positive evaluation (great, love) and others negative evaluation (terrible, hate). Positive or negative evaluation language is called sentiment, as we saw in Chapter 4, and word sentiment plays a role in important sentiment tasks like sentiment analysis, stance detection, and applications of NLP to the language of politics and consumer reviews.","the aspects of a word's meaning that are related to a writer or reader's emotions, sentiment, opinions, or evaluations"
Vector Semantics and Embeddings,Vector Semantics,,"What is the idea behind vector semantics?","The idea of vector semantics is to represent a word as a point in a multidimensional semantic space that is derived (in ways we'll see) from the distributions of word neighbors. Vectors for representing words are called embeddings (although the term is sometimes more strictly applied only to dense vectors like word2vec (Section 6.8), rather than sparse tf-idf or PPMI vectors (Section 6.3-Section 6.6)). The word ""embedding"" derives from its mathematical sense as a mapping from one space or structure to another, although the meaning has shifted; see the end of the chapter. Figure 6.1 shows a two-dimensional (t-SNE) projection of embeddings for some words and phrases, showing that words with similar meanings are nearby in space. The original 60-dimensional embeddings were trained for sentiment analysis. Simplified from Li et al. 2015 with colors added for explanation.","The idea of vector semantics is to represent a word as a point in a multidimensional semantic space that is derived (in ways we'll see) from the distributions of word neighbors."
Vector Semantics and Embeddings,Words and Vectors,,"What do rows represent in a term-document matrix?","In a term-document matrix, each row represents a word in the vocabulary and each column represents a document from some collection of documents. Fig. 6.2 shows a small selection from a term-document matrix showing the occurrence of four words in four plays by Shakespeare.
Each cell in this matrix represents the number of times a particular word (defined by the row) occurs in a particular document (defined by the column). Thus fool appeared 58 times in Twelfth Night.","a word in the vocabulary"
Vector Semantics and Embeddings,Words and Vectors,,"What do columns represent in a term-document matrix?","In a term-document matrix, each row represents a word in the vocabulary and each column represents a document from some collection of documents. Fig. 6.2 shows a small selection from a term-document matrix showing the occurrence of four words in four plays by Shakespeare. Each cell in this matrix represents the number of times a particular word (defined by the row) occurs in a particular document (defined by the column). Thus fool appeared 58 times in Twelfth Night.","a document from some collection of documents"
Vector Semantics and Embeddings,Cosine for measuring similarity,,"Why can dot product be used as a similarity metric?","As we will see, most metrics for similarity between vectors are based on the dot product. The dot product acts as a similarity metric because it will tend to be high just when the two vectors have large values in the same dimensions. Alternatively, vectors that have zeros in different dimensions (orthogonal vectors) will have a dot product of 0, representing their strong dissimilarity.","The dot product acts as a similarity metric because it will tend to be high just when the two vectors have large values in the same dimensions."
Vector Semantics and Embeddings,Pointwise Mutual Information (PMI),,"What is the intuition behind PPMI?","An alternative weighting function to tf-idf, PPMI (positive pointwise mutual information), is used for term-term matrices, when the vector dimensions correspond to words rather than documents. PPMI draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in our corpus than we would have a priori expected them to appear by chance.","PPMI draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in our corpus than we would have a priori expected them to appear by chance."
Vector Semantics and Embeddings,Applications of the TF-IDF or PPMI Vector Models,,"What is a centroid?","The centroid is the multidimensional version of the mean; the centroid of a set of vectors is a single vector that has the minimum sum of squared distances to each of the vectors in the set. Given k word vectors w 1 , w 2 , ..., w k , the centroid document vector d is: d = (w 1 + w 2 + ... + w k ) / k.","a multidimensional version of the mean"
Neural Networks and Neural Language Models,Feedforward Neural Networks,,"What are the three types of nodes in a feedforward neural network?","Let's now walk through a slightly more formal presentation of the simplest kind of neural network, the feedforward network. A feedforward network is a multilayer network in which the units are connected with no cycles; the outputs from units in each layer are passed to units in the next higher layer, and no outputs are passed back to lower layers. (In Chapter 9 we'll introduce networks with cycles, called recurrent neural networks.)
For historical reasons multilayer networks, especially feedforward networks, are sometimes called multi-layer perceptrons (or MLPs); this is a technical misnomer, since the units in modern multilayer networks aren't perceptrons (perceptrons are purely linear, but modern networks are made up of units with non-linearities like sigmoids), but at some point the name stuck. Simple feedforward networks have three kinds of nodes: input units, hidden units, and output units. Fig. 7.8 shows a picture.","input units, hidden units, and output units"
Neural Networks and Neural Language Models,Feedforward Neural Language Modeling,Forward inference in the neural language model,"What is the output of the output layer's softmax in a neural language model?","The 3 resulting embedding vectors are concatenated to produce e, the embedding layer. This is followed by a hidden layer and an output layer whose softmax produces a probability distribution over words. For example y 42, the value of output node 42, is the probability of the next word w t being V 42, the vocabulary word with index 42 (which is the word 'fish' in our example).","a probability distribution over words"
Neural Networks and Neural Language Models,Training Neural Nets,Computation Graphs,"What is a computation graph?","A computation graph is a representation of the process of computing a mathematical expression, in which the computation is broken down into separate operations, each of which is modeled as a node in a graph.","A computation graph is a representation of the process of computing a mathematical expression, in which the computation is broken down into separate operations, each of which is modeled as a node in a graph."
Sequence Labeling for Parts of Speech and Named Entities,Part-of-Speech Tagging,,"What is part-of-speech tagging?","Part-of-speech tagging is the process of assigning a part-of-speech to each word in a text. The input is a sequence x 1 , x 2 , ..., x n of (tokenized) words and a tagset, and the output is a sequence y 1 , y 2 , ..., y n of tags, each output y i corresponding exactly to one input x i , as shown in the intuition in Fig. 8.3. Tagging is a disambiguation task; words are ambiguous (they have more than one possible part-of-speech) and the goal is to find the correct tag for the situation. For example, book can be a verb (book that flight) or a noun (hand me that book). That can be a determiner (Does that flight serve dinner) or a complementizer (I thought that your flight was earlier). We'll introduce algorithms for the task in the next few sections, but first let's explore the task. Exactly how hard is it? Fig. 8.4 shows that most word types (85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the ambiguous words, though accounting for only 14-15% of the vocabulary, are very common, and 55-67% of word tokens in running text are ambiguous. Particularly ambiguous common words include that, back, down, put and set; here are some examples of the 6 different parts of speech for the word back: earnings growth took a back/JJ seat; a small building in the back/NN; a clear majority of senators back/VBP the bill; Dave began to back/VB toward the door; enable the country to buy back/RP debt; I was twenty-one back/RB then. Nonetheless, many words are easy to disambiguate, because their different tags aren't equally likely.
For example, a can be a determiner or the letter a, but the determiner sense is much more likely.","Part-of-speech tagging is the process of assigning a part-of-speech to each word in a text."
Sequence Labeling for Parts of Speech and Named Entities,Named Entities and Named Entity Tagging,,"What are named entities?","A named entity is, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization. The text contains 13 mentions of named entities including 5 organizations, 4 locations, 2 times, 1 person, and 1 mention of money. Figure 8.5 shows typical generic named entity types. Many applications will also need to use specific entity types like proteins, genes, commercial products, or works of art.","A named entity is, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization."
Sequence Labeling for Parts of Speech and Named Entities,HMM Part-of-Speech Tagging,,"What is a Hidden Markov Model?","An HMM is a probabilistic sequence model: given a sequence of units (words, letters, morphemes, sentences, whatever), it computes a probability distribution over possible sequences of labels and chooses the best label sequence.","An HMM is a probabilistic sequence model"
Deep Learning Architectures for Sequence Processing,,,"Why is limited context a limitation of feedforward neural networks?","The simple feedforward sliding-window is promising, but isn't a completely satisfactory solution to temporality. By using embeddings as inputs, it does solve the main problem of the simple n-gram models of Chapter 3 (recall that n-grams were based on words rather than embeddings, making them too literal, unable to generalize across contexts of similar words). But feedforward networks still share another weakness of n-gram approaches: limited context. Anything outside the context window has no impact on the decision being made. Yet many language tasks require access to information that can be arbitrarily distant from the current word. Second, the use of windows makes it difficult for networks to learn systematic patterns arising from phenomena like constituency and compositionality: the way the meaning of words in phrases combine together. For example, in Fig. 9.1 the phrase all the appears in one window in the second and third positions, and in the next window in the first and second positions, forcing the network to learn two separate patterns for what should be the same item.","Anything outside the context window has no impact on the decision being made."
Deep Learning Architectures for Sequence Processing,Recurrent Neural Networks,,"What is a recurrent neural network?","A recurrent neural network (RNN) is any network that contains a cycle within its network connections, meaning that the value of some unit is directly, or indirectly, dependent on its own earlier outputs as an input. While powerful, such networks are difficult to reason about and to train. However, within the general class of recurrent networks there are constrained architectures that have proven to be extremely effective when applied to language. In this section, we consider a class of recurrent networks referred to as Elman Networks (Elman, 1990) or simple recurrent networks. These networks are useful in their own right and serve as the basis for more complex approaches like the Long Short-Term Memory (LSTM) networks discussed later in this chapter.
In this chapter when we use the term RNN we'll be referring to these simpler more constrained networks (although you will often see the term RNN to mean any net with recurrent properties including LSTMs).","any network that contains a cycle within its network connections"
Deep Learning Architectures for Sequence Processing,RNNs for other NLP tasks,Generation with RNN-Based Language Models,"What is an autoregressive language model?","Technically an autoregressive model is a model that predicts a value at time t based on a linear function of the previous values at times t − 1, t − 2, and so on. Although language models are not linear (since they have many layers of non-linearities), we loosely refer to this generation technique as autoregressive generation since the word generated at each time step is conditioned on the word selected by the network from the previous step. Fig. 9.9 illustrates this approach. In this figure, the details of the RNN's hidden layers and recurrent connections are hidden within the blue block. This simple architecture underlies state-of-the-art approaches to applications such as machine translation, summarization, and question answering. The key to these approaches is to prime the generation component with an appropriate context. That is, instead of simply using <s> to get things started we can provide a richer task-appropriate context; for translation the context is the sentence in the source language; for summarization it's the long text we want to summarize. We'll discuss the application of contextual generation to the problem of summarization in Section 9.9 in the context of transformer-based language models, and then again in Chapter 10 when we introduce encoder-decoder models.","a model that predicts a value at time t based on a linear function of the previous values at times t − 1, t − 2, and so on."
Deep Learning Architectures for Sequence Processing,The LSTM,,"What are the two subproblems derived from the context management problem by LSTMs?","The most commonly used such extension to RNNs is the Long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997). LSTMs divide the context management problem into two sub-problems: removing information no longer needed from the context, and adding information likely to be needed for later decision making. The key to solving both problems is to learn how to manage this context rather than hard-coding a strategy into the architecture. LSTMs accomplish this by first adding an explicit context layer to the architecture (in addition to the usual recurrent hidden layer), and through the use of specialized neural units that make use of gates to control the flow of information into and out of the units that comprise the network layers. These gates are implemented through the use of additional weights that operate sequentially on the input, the previous hidden layer, and the previous context layer. The gates in an LSTM share a common design pattern; each consists of a feedforward layer, followed by a sigmoid activation function, followed by a pointwise multiplication with the layer being gated. The choice of the sigmoid as the activation function arises from its tendency to push its outputs to either 0 or 1. Combining this with a pointwise multiplication has an effect similar to that of a binary mask.
Values in the layer being gated that align with values near 1 in the mask are passed through nearly unchanged; values corresponding to lower values are essentially erased.","removing information no longer needed from the context, and adding information likely to be needed for later decision making."
Deep Learning Architectures for Sequence Processing,The LSTM,,"What is the purpose of a forget gate in the LSTM architecture?","The first gate we'll consider is the forget gate. The purpose of this gate is to delete information from the context that is no longer needed. The forget gate computes a weighted sum of the previous state's hidden layer and the current input and passes that through a sigmoid. This mask is then multiplied element-wise by the context vector to remove the information from context that is no longer required. Elementwise multiplication of two vectors (represented by the operator ⊙, and sometimes called the Hadamard product) is the vector of the same dimension as the two input vectors, where each element i is the product of element i in the two input vectors.","delete information from the context that is no longer needed"
Deep Learning Architectures for Sequence Processing,Self-Attention Networks: Transformers,,"What is expressed in the attention-based approach by comparing items in a collection?","At the core of an attention-based approach is the ability to compare an item of interest to a collection of other items in a way that reveals their relevance in the current context. In the case of self-attention, the set of comparisons are to other elements within a given sequence. The result of these comparisons is then used to compute an output for the current input. For example, returning to Fig. 9.15, the computation of y 3 is based on a set of comparisons between the input x 3 and its preceding elements x 1 and x 2 , and to x 3 itself. The simplest form of comparison between elements in a self-attention layer is a dot product. Let's refer to the result of this comparison as a score (we'll be updating this equation to add attention to the computation of this score): score(x i , x j ) = x i · x j .","their relevance in the current context"
Deep Learning Architectures for Sequence Processing,Self-Attention Networks: Transformers,,"Why does the dot product output need to be scaled in the attention computation?","The result of a dot product can be an arbitrarily large (positive or negative) value. Exponentiating such large values can lead to numerical issues and to an effective loss of gradients during training. To avoid this, the dot product needs to be scaled in a suitable fashion. A scaled dot-product approach divides the result of the dot product by a factor related to the size of the embeddings before passing them through the softmax. A typical approach is to divide the dot product by the square root of the dimensionality of the query and key vectors (d k ), leading us to update our scoring function one more time, replacing Eq. 9.27 and Eq. 9.32 with Eq. 9.34: score(x i , x j ) = q i · k j / √d k .","The result of a dot product can be an arbitrarily large (positive or negative) value. Exponentiating such large values can lead to numerical issues and to an effective loss of gradients during training."
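To make the dot-product scoring and scaling described in the two rows above concrete, here is a minimal NumPy sketch of a single self-attention head. The random projection matrices stand in for learned parameters, and details such as multiple heads and causal masking of future positions are omitted; the array sizes and names are illustrative assumptions, not taken from the source.

```python
import numpy as np

def softmax(scores):
    # Subtract the row max for numerical stability before exponentiating.
    exp = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention over a sequence X of shape (n, d)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v      # project inputs to queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # compare every position to every other, scaled by sqrt(d_k)
    alpha = softmax(scores)                  # normalized attention weights per row
    return alpha @ V                         # each output is a weighted sum of value vectors

# Toy usage: a sequence of 3 tokens with 4-dimensional embeddings (illustrative numbers).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)   # (3, 4)
```

The division by the square root of d_k is exactly the scaling step motivated in the row above: without it, large dot products would saturate the softmax.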
Deep Learning Architectures for Sequence Processing,Self-Attention Networks: Transformers,,"What components constitute a transformer block?","The self-attention calculation lies at the core of what's called a transformer block, which, in addition to the self-attention layer, includes additional feedforward layers, residual connections, and normalizing layers. The input and output dimensions of these blocks are matched so they can be stacked just as was the case for stacked RNNs. Fig. 9.18 illustrates a standard transformer block consisting of a single attention layer followed by a fully-connected feedforward layer with residual connections and layer normalizations following each. We've already seen feedforward layers in Chapter 7, but what are residual connections and layer norm? In deep networks, residual connections are connections that pass information from a lower layer to a higher layer without going through the intermediate layer. Allowing information from the activation going forward and the gradient going backwards to skip a layer improves learning and gives higher level layers direct access to information from lower layers (He et al., 2016). Residual connections in transformers are implemented by adding a layer's input vector to its output vector before passing it forward. In the transformer block shown in Fig. 9.18, residual connections are used with both the attention and feedforward sublayers. These summed vectors are then normalized using layer normalization (Ba et al., 2016). If we think of a layer as one long vector of units, the resulting function computed in a transformer block can be expressed as z = LayerNorm(x + SelfAttention(x)) followed by y = LayerNorm(z + FFN(z)).","in addition to the self-attention layer, includes additional feedforward layers, residual connections, and normalizing layers"
Machine Translation and Encoder-Decoder Models,Word Order Typology,Lexical Divergences,"What is a lexical gap?","Further, one language may have a lexical gap, where no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language. For example, English does not have a word that corresponds neatly to Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward phrases like filial piety or loving child, or good son/daughter for both). Finally, languages differ systematically in how the conceptual properties of an event are mapped onto specific words. Talmy (1985, 1991) noted that languages can be characterized by whether direction of motion and manner of motion are marked on the verb or on the ""satellites"": particles, prepositional phrases, or adverbial phrases. For example, a bottle floating out of a cave would be described in English with the direction marked on the particle out, while in Spanish the direction would be marked on the verb. Verb-framed languages mark the direction of motion on the verb (leaving the satellites to mark the manner of motion), like Spanish acercarse 'approach', alcanzar 'reach', entrar 'enter', salir 'exit'. Satellite-framed languages mark the direction of motion on the satellite (leaving the verb to mark the manner of motion), like English crawl out, float off, jump down, run after.
Languages like Japanese, Tamil, and the many languages in the Romance, Semitic, and Mayan language families, are verb-framed; Chinese as well as non-Romance Indo-European languages like English, Swedish, Russian, Hindi, and Farsi are satellite-framed (Talmy 1991, Slobin 1996).","no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language."
Machine Translation and Encoder-Decoder Models,Word Order Typology,Referential Density,"What are pro-drop languages?","Languages that can omit pronouns are called pro-drop languages. Even among the pro-drop languages, there are marked differences in frequencies of omission. Japanese and Chinese, for example, tend to omit far more than does Spanish. This dimension of variation across languages is called the dimension of referential density. We say that languages that tend to use more pronouns are more referentially dense than those that use more zeros. Referentially sparse languages, like Chinese or Japanese, that require the hearer to do more inferential work to recover antecedents are also called cold languages. Languages that are more explicit and make it easier for the hearer are called hot languages; the terms are borrowed from Marshall McLuhan's 1964 distinction between hot media like movies, which fill in many details for the viewer, versus cold media like comics, which require the reader to do more inferential work to fill out the representation (Bickel, 2003).","Languages that can omit pronouns"
Machine Translation and Encoder-Decoder Models,Beam Search,,"How does the beam search decoding algorithm operate?","Instead, decoding in MT and other sequence generation problems generally uses a method called beam search. In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width, on the metaphor of a flashlight beam that can be parameterized to be wider or narrower.","In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step."
Machine Translation and Encoder-Decoder Models,Encoder-Decoder with Transformers,,"What is the difference between cross-attention and multi-head self-attention?","But the components of the architecture differ somewhat from the RNN and also from the transformer block we've seen. First, in order to attend to the source language, the transformer blocks in the decoder have an extra cross-attention layer. Recall that the transformer block of Chapter 9 consists of a self-attention layer that attends to the input from the previous layer, followed by layer norm, a feed forward layer, and another layer norm. The decoder transformer block includes an extra layer with a special kind of attention, cross-attention (also sometimes called encoder-decoder attention or source attention). Cross-attention has the same form as the multi-headed self-attention in a normal transformer block, except that while the queries as usual come from the previous layer of the decoder, the keys and values come from the output of the encoder.","the keys and values come from the output of the encoder."
Machine Translation and Encoder-Decoder Models,MT Evaluation,Automatic Evaluation: Embedding-Based Methods,"How is the chrF evaluation metric for MT computed?","The chrF metric is based on measuring the exact character n-grams a human reference and candidate machine translation have in common.
However, this criterion is overly strict, since a good translation may use alternate words or paraphrases. A solution first pioneered in early metrics like METEOR (Banerjee and Lavie, 2005) was to allow synonyms to match between the reference and the candidate. More recent metrics use BERT or other embeddings to implement this intuition.","The chrF metric is based on measuring the exact character n-grams a human reference and candidate machine translation have in common."
Transfer Learning with Pretrained Language Models and Contextual Embeddings,Training Bidirectional Encoders,Masking Words,"What percentage of tokens sampled for learning are replaced with the [MASK] token during BERT pre-training?","In BERT, 15% of the input tokens in a training sequence are sampled for learning. Of these, 80% are replaced with [MASK], 10% are replaced with randomly selected tokens, and the remaining 10% are left unchanged.","80%"
Transfer Learning with Pretrained Language Models and Contextual Embeddings,Training Bidirectional Encoders,Masking Words,"What is the Masked Language Modeling training objective?","The MLM training objective is to predict the original inputs for each of the masked tokens using a bidirectional encoder of the kind described in the last section. The cross-entropy loss from these predictions drives the training process for all the parameters in the model. Note that all of the input tokens play a role in the self-attention process, but only the sampled tokens are used for learning.","predict the original inputs for each of the masked tokens"
Transfer Learning with Pretrained Language Models and Contextual Embeddings,Transfer Learning through Fine-Tuning,Sequence Classification,"What role does the [CLS] token play in BERT?","Sequence classification applications often represent an input sequence with a single consolidated representation. With RNNs, we used the hidden layer associated with the final input element to stand for the entire sequence. A similar approach is used with transformers. An additional vector is added to the model to stand for the entire sequence. This vector is sometimes called the sentence embedding since it refers to the entire sequence, although the term 'sentence embedding' is also used in other ways. In BERT, the [CLS] token plays the role of this embedding. This unique token is added to the vocabulary and is prepended to the start of all input sequences, both during pretraining and encoding. The output vector in the final layer of the model for the [CLS] input represents the entire input sequence and serves as the input to a classifier head, a logistic regression or neural network classifier that makes the relevant decision.","sentence embedding"
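As a small illustration of the 15% sampling and 80/10/10 corruption scheme described in the BERT masking rows above, here is a hedged Python sketch of how training tokens might be prepared for the MLM objective. The toy vocabulary, function name, and use of string tokens (rather than ids) are illustrative assumptions and do not reproduce BERT's actual implementation.

```python
import random

MASK = "[MASK]"
TOY_VOCAB = ["the", "cat", "sat", "on", "mat", "fish", "dog"]  # placeholder vocabulary

def mask_for_mlm(tokens, mask_prob=0.15, seed=0):
    """Return (corrupted tokens, indices selected for prediction), following the 80/10/10 rule."""
    rng = random.Random(seed)
    corrupted, targets = list(tokens), []
    for i in range(len(tokens)):
        if rng.random() < mask_prob:        # sample roughly 15% of positions for learning
            targets.append(i)
            r = rng.random()
            if r < 0.8:                     # 80% of sampled positions: replace with [MASK]
                corrupted[i] = MASK
            elif r < 0.9:                   # 10%: replace with a randomly selected token
                corrupted[i] = rng.choice(TOY_VOCAB)
            # remaining 10%: leave the original token unchanged
    return corrupted, targets

# Toy usage: the MLM cross-entropy loss would be computed only at the returned target positions.
print(mask_for_mlm("the cat sat on the mat".split()))
```

This mirrors the text above: every input token still participates in self-attention, but only the positions in the returned target list contribute to the training loss.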