chapter,section,subsection,question,paragraph,answer
Regular Expressions,Regular Expressions,Basic Regular Expressions,,,
Regular Expressions,Regular Expressions,Lookahead Assertions,,,
Regular Expressions,Text Normalization,,,,
Regular Expressions,Text Normalization,Byte-Pair Encoding for Tokenization,,,
Regular Expressions,Text Normalization,Byte-Pair Encoding for Tokenization,,,
Regular Expressions,Text Normalization,Byte-Pair Encoding for Tokenization,,,
Regular Expressions,Text Normalization,,,,
Regular Expressions,Text Normalization,,,,
Regular Expressions,Text Normalization,,,,
Regular Expressions,Minimum Edit Distance,,,,
N-gram Language Models,,,,,
N-gram Language Models,N-Grams,,,,
N-gram Language Models,N-Grams,,,,
N-gram Language Models,N-Grams,,,,
N-gram Language Models,N-Grams,,,,
N-gram Language Models,Evaluating Language Models,,,,
N-gram Language Models,Evaluating Language Models,Perplexity,,,
N-gram Language Models,Smoothing,,,,
N-gram Language Models,Huge Language Models and Stupid Backoff,,,,
N-gram Language Models,Perplexity's Relation to Entropy,,"When is a stochastic process said to be stationary?","A stochastic process is said to be stationary if the probabilities it assigns to a sequence are invariant with respect to shifts in the time index. In other words, the probability distribution for words at time t is the same as the probability distribution at time t + 1. Markov models, and hence n-grams, are stationary. For example, in a bigram, P i is dependent only on P i−1. So if we shift our time index by x, P i+x is still dependent on P i+x−1. But natural language is not stationary, since as we show in Chapter 12, the probability of upcoming words can be dependent on events that were arbitrarily distant and time dependent. Thus, our statistical models only give an approximation to the correct distributions and entropies of natural language. To summarize, by making some incorrect but convenient simplifying assumptions, we can compute the entropy of some stochastic process by taking a very long sample of the output and computing its average log probability. Now we are ready to introduce cross-entropy. The cross-entropy is useful when we don't know the actual probability distribution p that generated some data. It allows us to use some m, which is a model of p (i.e., an approximation to p). The cross-entropy of m on p is defined by","A stochastic process is said to be stationary if the probabilities it assigns to a sequence are invariant with respect to shifts in the time index."
Naive Bayes and Sentiment Classification,,,"Why are simplifying assumptions needed when using a Naive Bayes Classifier?","Unfortunately, Eq. 4.6 is still too hard to compute directly: without some simplifying assumptions, estimating the probability of every possible combination of features (for example, every possible set of words and positions) would require huge numbers of parameters and impossibly large training sets. Naive Bayes classifiers therefore make two simplifying assumptions.","without some simplifying assumptions, estimating the probability of every possible combination of features (for example, every possible set of words and positions) would require huge numbers of parameters and impossibly large training sets."
Naive Bayes and Sentiment Classification,,,"What can be done to optimize Naive Bayes when an insufficient amount of labeled data is present?","Finally, in some situations we might have insufficient labeled training data to train accurate naive Bayes classifiers using all words in the training set to estimate positive and negative sentiment. In such cases we can instead derive the positive and negative word features from sentiment lexicons, lists of words that are pre-annotated with positive or negative sentiment. Four popular lexicons are the General Inquirer (Stone et al., 1966), LIWC (Pennebaker et al., 2007), the opinion lexicon","derive the positive and negative word features from sentiment lexicons, lists of words that are pre-annotated with positive or negative sentiment."
Naive Bayes and Sentiment Classification,,,"What type of features can be used to train a Naive Bayes classifier?","As we saw in the previous section, naive Bayes classifiers can use any sort of feature: dictionaries, URLs, email addresses, network features, phrases, and so on. But if, as in the previous section, we use only individual word features, and we use all of the words in the text (not a subset), then naive Bayes has an important similarity to language modeling. Specifically, a naive Bayes model can be viewed as a set of class-specific unigram language models, in which the model for each class instantiates a unigram language model.","dictionaries, URLs, email addresses, network features, phrases"
Naive Bayes and Sentiment Classification,"Evaluation: Precision, Recall, F-measure",,"Why is accuracy rarely used alone for unbalanced text classification tasks?","To the bottom right of the table is the equation for accuracy, which asks what percentage of all the observations (for the spam or pie examples that means all emails or tweets) our system labeled correctly. Although accuracy might seem a natural metric, we generally don't use it for text classification tasks. That's because accuracy doesn't work well when the classes are unbalanced (as indeed they are with spam, which is a large majority of email, or with tweets, which are mainly not about pie). To make this more explicit, imagine that we looked at a million tweets, and let's say that only 100 of them are discussing their love (or hatred) for our pie,","because accuracy doesn't work well when the classes are unbalanced"
Naive Bayes and Sentiment Classification,Test sets and Cross-validation,,"What are folds in cross-validation?","In cross-validation, we choose a number k, and partition our data into k disjoint subsets called folds. Now we choose one of those k folds as a test set, train our classifier on the remaining k − 1 folds, and then compute the error rate on the test set. Then we repeat with another fold as the test set, again training on the other k − 1 folds. We do this sampling process k times and average the test set error rate from these k runs to get an average error rate. If we choose k = 10, we would train 10 different models (each on 90% of our data), test the model 10 times, and average these 10 values. This is called 10-fold cross-validation.","k disjoint subsets"
Logistic Regression,Classification: the Sigmoid,,"What is the purpose of logistic regression?","The goal of binary logistic regression is to train a classifier that can make a binary decision about the class of a new input observation. Here we introduce the sigmoid classifier that will help us make this decision. Consider a single input observation x, which we will represent by a vector of features [x 1 , x 2 , ..., x n ] (we'll show sample features in the next subsection). The classifier output y can be 1 (meaning the observation is a member of the class) or 0 (the observation is not a member of the class). We want to know the probability P(y = 1|x) that this observation is a member of the class. So perhaps the decision is positive sentiment versus negative sentiment, the features represent counts of words in a document, P(y = 1|x) is the probability that the document has positive sentiment, and P(y = 0|x) is the probability that the document has negative sentiment. Logistic regression solves this task by learning, from a training set, a vector of weights and a bias term. Each weight w i is a real number, and is associated with one of the input features x i . The weight w i represents how important that input feature is to the classification decision, and can be positive (providing evidence that the instance being classified belongs in the positive class) or negative (providing evidence that the instance being classified belongs in the negative class). Thus we might expect in a sentiment task the word awesome to have a high positive weight, and abysmal to have a very negative weight. The bias term, also called the intercept, is another real number that's added to the weighted inputs.","The goal of binary logistic regression is to train a classifier that can make a binary decision about the class of a new input observation."
Logistic Regression,Gradient Descent,,"What is gradient descent?","How shall we find the minimum of this (or any) loss function? Gradient descent is a method that finds a minimum of a function by figuring out in which direction (in the space of the parameters θ ) the function's slope is rising the most steeply, and moving in the opposite direction. The intuition is that if you are hiking in a canyon and trying to descend most quickly down to the river at the bottom, you might look around yourself 360 degrees, find the direction where the ground is sloping the steepest, and walk downhill in that direction.Gradient descent is a method that finds a minimum of a function by figuring out in which direction (in the space of the parameters θ ) the function's slope is rising the most steeply, and moving in the opposite direction." |
|
Logistic Regression,Gradient Descent,,"What is stochastic gradient descent?","Stochastic gradient descent is an online algorithm that minimizes the loss function by computing its gradient after each training example, and nudging θ in the right direction (the opposite direction of the gradient). (an ""online algorithm"" is one that processes its input example by example, rather than waiting until it sees the entire input). x is the set of training inputs","Stochastic gradient descent is an online algorithm that minimizes the loss function by computing its gradient after each training example" |
|
Logistic Regression,Regularization,,"Which regularization technique is easier to optimize?","These kinds of regularization come from statistics, where L1 regularization is called lasso regression (Tibshirani, 1996) and L2 regularization is called ridge regression, and both are commonly used in language processing. L2 regularization is easier to optimize because of its simple derivative (the derivative of θ^2 is just 2θ), while L1 regularization is more complex (the derivative of |θ| is non-continuous at zero).","L2 regularization is easier to optimize because of its simple derivative"
Logistic Regression,Multinomial Logistic Regression,,"What is the other name of multinomial logistic regression?","Sometimes we need more than two classes. Perhaps we might want to do 3-way sentiment classification (positive, negative, or neutral). Or we could be assigning some of the labels we will introduce in Chapter 8, like the part of speech of a word (choosing from 10, 30, or even 50 different parts of speech), or the named entity type of a phrase (choosing from tags like person, location, organization). In such cases we use multinomial logistic regression, also called softmax regression (or, historically, the maxent classifier). In multinomial logistic regression the target y is a variable that ranges over more than two classes; we want to know the probability of y being in each potential class c ∈ C, p(y = c|x).","softmax regression"
Vector Semantics and Embeddings,Lexical Semantics,,"What are word connotations?","Finally, words have affective meanings or connotations. The word connotation has different meanings in different fields, but here we use it to mean the aspects of a word's meaning that are related to a writer or reader's emotions, sentiment, opinions, or evaluations. For example some words have positive connotations (happy) while others have negative connotations (sad). Even words whose meanings are similar in other ways can vary in connotation; consider the difference in connotations between fake, knockoff, forgery, on the one hand, and copy, replica, reproduction on the other, or innocent (positive connotation) and naive (negative connotation). Some words describe positive evaluation (great, love) and others negative evaluation (terrible, hate). Positive or negative evaluation language is called sentiment, as we saw in Chapter 4, and word sentiment plays a role in important sentiment tasks like sentiment analysis, stance detection, and applications of NLP to the language of politics and consumer reviews.","the aspects of a word's meaning that are related to a writer or reader's emotions, sentiment, opinions, or evaluations"
Vector Semantics and Embeddings,Vector Semantics,,"What is the idea behind vector semantics?","The idea of vector semantics is to represent a word as a point in a multidimensional semantic space that is derived (in ways we'll see) from the distributions of word neighbors. Vectors for representing words are called embeddings (although the term is sometimes more strictly applied only to dense vectors like word2vec (Section 6.8), rather than sparse tf-idf or PPMI vectors (Section 6.3-Section 6.6)). The word embedding derives from its mathematical sense as a mapping from one space or structure to another, although the meaning has shifted; see the end of the chapter. Figure 6.1 shows a two-dimensional (t-SNE) projection of embeddings for some words and phrases, showing that words with similar meanings are nearby in space. The original 60-dimensional embeddings were trained for sentiment analysis (simplified from Li et al. 2015, with colors added for explanation).","The idea of vector semantics is to represent a word as a point in a multidimensional semantic space that is derived (in ways we'll see) from the distributions of word neighbors."
Vector Semantics and Embeddings,Words and Vectors,,"What do rows represent in a term-document matrix?","In a term-document matrix, each row represents a word in the vocabulary and each term-document matrix column represents a document from some collection of documents. Fig. 6 .2 shows a small selection from a term-document matrix showing the occurrence of four words in four plays by Shakespeare. Each cell in this matrix represents the number of times a particular word (defined by the row) occurs in a particular document (defined by the column). Thus fool appeared 58 times in Twelfth Night.","a word in the vocabulary" |
|
Vector Semantics and Embeddings,Words and Vectors,,"What do columns represent in a term-document matrix?","In a term-document matrix, each row represents a word in the vocabulary and each term-document matrix column represents a document from some collection of documents. Fig. 6 .2 shows a small selection from a term-document matrix showing the occurrence of four words in four plays by Shakespeare. Each cell in this matrix represents the number of times a particular word (defined by the row) occurs in a particular document (defined by the column). Thus fool appeared 58 times in Twelfth Night.","a document from some collection of documents" |
|
Vector Semantics and Embeddings,Cosine for measuring similarity,,"Why can dot product be used as a similarity metric?","As we will see, most metrics for similarity between vectors are based on the dot product. The dot product acts as a similarity metric because it will tend to be high just when the two vectors have large values in the same dimensions. Alternatively, vectors that have zeros in different dimensions-orthogonal vectors-will have a dot product of 0, representing their strong dissimilarity.","The dot product acts as a similarity metric because it will tend to be high just when the two vectors have large values in the same dimensions."
Vector Semantics and Embeddings,Pointwise Mutual Information (PMI),,"What is the intuition behind PPMI?","An alternative weighting function to tf-idf, PPMI (positive pointwise mutual information), is used for term-term matrices, when the vector dimensions correspond to words rather than documents. PPMI draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in our corpus than we would have a priori expected them to appear by chance.","PPMI draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in our corpus than we would have a priori expected them to appear by chance."
Vector Semantics and Embeddings,Applications of the TF-IDF or PPMI Vector Models,,"What is a centroid?","The centroid is the multidimensional version of the mean; the centroid of a set of vectors is a single vector that has the minimum sum of squared distances to each of the vectors in the set. Given k word vectors w 1 , w 2 , ..., w k , the centroid document vector d is:","a multidimensional version of the mean"
Neural Networks and Neural Language Models,Feedforward Neural Networks,,"What are the three types of nodes in a feedforward neural network?","Let's now walk through a slightly more formal presentation of the simplest kind of neural network, the feedforward network. A feedforward network is a multilayer network in which the units are connected with no cycles; the outputs from units in each layer are passed to units in the next higher layer, and no outputs are passed back to lower layers. (In Chapter 9 we'll introduce networks with cycles, called recurrent neural networks.) For historical reasons multilayer networks, especially feedforward networks, are sometimes called multi-layer perceptrons (or MLPs); this is a technical misnomer, since the units in modern multilayer networks aren't perceptrons (perceptrons are purely linear, but modern networks are made up of units with non-linearities like sigmoids), but at some point the name stuck. Simple feedforward networks have three kinds of nodes: input units, hidden units, and output units. Fig. 7.8 shows a picture.","input units, hidden units, and output units"
Neural Networks and Neural Language Models,,,"What is the output of the output layer's softmax in a neural language model?","The 3 resulting embedding vectors are concatenated to produce e, the embedding layer. This is followed by a hidden layer and an output layer whose softmax produces a probability distribution over words. For example y 42, the value of output node 42, is the probability of the next word w t being V 42, the vocabulary word with index 42 (which is the word 'fish' in our example).","a probability distribution over words"
Neural Networks and Neural Language Models,Training Neural Nets,Computation Graphs,"What is a computation graph?","A computation graph is a representation of the process of computing a mathematical expression, in which the computation is broken down into separate operations, each of which is modeled as a node in a graph.","A computation graph is a representation of the process of computing a mathematical expression, in which the computation is broken down into separate operations, each of which is modeled as a node in a graph."
Sequence Labeling for Parts of Speech and Named Entities,Part-of-Speech Tagging,,"What is part-of-speech tagging?","Part-of-speech tagging is the process of assigning a part-of-speech to each word in a text. The input is a sequence x 1 , x 2 , ..., x n of (tokenized) words and a tagset, and the output is a sequence y 1 , y 2 , ..., y n of tags, each output y i corresponding exactly to one input x i , as shown in the intuition in Fig. 8.3. Tagging is a disambiguation task; words are ambiguous (have more than one possible part-of-speech) and the goal is to find the correct tag for the situation. For example, book can be a verb (book that flight) or a noun (hand me that book). That can be a determiner (Does that flight serve dinner) or a complementizer. We'll introduce algorithms for the task in the next few sections, but first let's explore the task. Exactly how hard is it? Fig. 8.4 shows that most word types (85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the ambiguous words, though accounting for only 14-15% of the vocabulary, are very common, and 55-67% of word tokens in running text are ambiguous. Particularly ambiguous common words include that, back, down, put and set; here are some examples of the 6 different parts of speech for the word back: earnings growth took a back/JJ seat; a small building in the back/NN; a clear majority of senators back/VBP the bill; Dave began to back/VB toward the door; enable the country to buy back/RP debt; I was twenty-one back/RB then. Nonetheless, many words are easy to disambiguate, because their different tags aren't equally likely. For example, a can be a determiner or the letter a, but the determiner sense is much more likely.","Part-of-speech tagging is the process of assigning a part-of-speech to each word in a text."
Sequence Labeling for Parts of Speech and Named Entities,,,"What are named entities?","A named entity is, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization. The text contains 13 mentions of named entities including 5 organizations, 4 locations, 2 times, 1 person, and 1 mention of money. Figure 8.5 shows typical generic named entity types. Many applications will also need to use specific entity types like proteins, genes, commercial products, or works of art.","A named entity is, roughly speaking, anything that can be referred to with a proper name: a person, a location, an organization."
Sequence Labeling for Parts of Speech and Named Entities,,,"What is a Hidden Markov Model?","An HMM is a probabilistic sequence model: given a sequence of units (words, letters, morphemes, sentences, whatever), it computes a probability distribution over possible sequences of labels and chooses the best label sequence.","An HMM is a probabilistic sequence model"
Deep Learning Architectures for Sequence Processing,,,"Why is limited context a limitation of feedforward neural networks?","The simple feedforward sliding-window is promising, but isn't a completely satisfactory solution to temporality. By using embeddings as inputs, it does solve the main problem of the simple n-gram models of Chapter 3 (recall that n-grams were based on words rather than embeddings, making them too literal, unable to generalize across contexts of similar words). But feedforward networks still share another weakness of n-gram approaches: limited context. Anything outside the context window has no impact on the decision being made. Yet many language tasks require access to information that can be arbitrarily distant from the current word. Second, the use of windows makes it difficult for networks to learn systematic patterns arising from phenomena like constituency and compositionality: the way the meaning of words in phrases combine together. For example, in Fig. 9.1 the phrase all the appears in one window in the second and third positions, and in the next window in the first and second positions, forcing the network to learn two separate patterns for what should be the same item.","Anything outside the context window has no impact on the decision being made."
Deep Learning Architectures for Sequence Processing,Recurrent Neural Networks,,"What is a recurrent neural network?","A recurrent neural network (RNN) is any network that contains a cycle within its network connections, meaning that the value of some unit is directly, or indirectly, dependent on its own earlier outputs as an input. While powerful, such networks are difficult to reason about and to train. However, within the general class of recurrent networks there are constrained architectures that have proven to be extremely effective when applied to language. In this section, we consider a class of recurrent networks referred to as Elman Networks (Elman, 1990) or simple recurrent networks. These networks are useful in their own right and serve as the basis for more complex approaches like the Long Short-Term Memory (LSTM) networks discussed later in this chapter. In this chapter when we use the term RNN we'll be referring to these simpler more constrained networks (although you will often see the term RNN to mean any net with recurrent properties including LSTMs).","any network that contains a cycle within its network connections"
Deep Learning Architectures for Sequence Processing,,,"What is an autoregressive language model?","Technically an autoregressive model is a model that predicts a value at time t based on a linear function of the previous values at times t − 1, t − 2, and so on. Although language models are not linear (since they have many layers of non-linearities), we loosely refer to this generation technique as autoregressive generation since the word generated at each time step is conditioned on the word selected by the network from the previous step. Fig. 9.9 illustrates this approach. In this figure, the details of the RNN's hidden layers and recurrent connections are hidden within the blue block. This simple architecture underlies state-of-the-art approaches to applications such as machine translation, summarization, and question answering. The key to these approaches is to prime the generation component with an appropriate context. That is, instead of simply using <s> to get things started we can provide a richer task-appropriate context; for translation the context is the sentence in the source language; for summarization it's the long text we want to summarize. We'll discuss the application of contextual generation to the problem of summarization in Section 9.9 in the context of transformer-based language models, and then again in Chapter 10 when we introduce encoder-decoder models.","a model that predicts a value at time t based on a linear function of the previous values at times t − 1, t − 2, and so on."
Deep Learning Architectures for Sequence Processing,The LSTM,,"What are the two subproblems derived from the context management problem by LSTMs?","The most commonly used such extension to RNNs is the Long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997) . LSTMs divide the context management problem into two sub-problems: removing information no longer needed from the context, and adding information likely to be needed for later decision making. The key to solving both problems is to learn how to manage this context rather than hard-coding a strategy into the architecture. LSTMs accomplish this by first adding an explicit context layer to the architecture (in addition to the usual recurrent hidden layer), and through the use of specialized neural units that make use of gates to control the flow of information into and out of the units that comprise the network layers. These gates are implemented through the use of additional weights that operate sequentially on the input, and previous hidden layer, and previous context layers. The gates in an LSTM share a common design pattern; each consists of a feedforward layer, followed by a sigmoid activation function, followed by a pointwise multiplication with the layer being gated. The choice of the sigmoid as the activation function arises from its tendency to push its outputs to either 0 or 1. Combining this with a pointwise multiplication has an effect similar to that of a binary mask. Values in the layer being gated that align with values near 1 in the mask are passed through nearly unchanged; values corresponding to lower values are essentially erased.","removing information no longer needed from the context, and adding information likely to be needed for later decision making." |
|
Deep Learning Architectures for Sequence Processing,The LSTM,,"What is the purpose of a forget gate in the LSTM architecture?","The first gate we'll consider is the forget gate. The purpose of this gate to delete information from the context that is no longer needed. The forget gate computes a weighted sum of the previous state's hidden layer and the current input and passes that through a sigmoid. This mask is then multiplied element-wise by the context vector to remove the information from context that is no longer required. Elementwise multiplication of two vectors (represented by the operator , and sometimes called the Hadamard product) is the vector of the same dimension as the two input vectors, where each element i is the product of element i in the two input vectors:","delete information from the context that is no longer needed" |
|
Deep Learning Architectures for Sequence Processing,Self-Attention Networks: Transformers,,"What is expressed in the attention-based approach by comparing items in a collection?","At the core of an attention-based approach is the ability to compare an item of interest to a collection of other items in a way that reveals their relevance in the current context. In the case of self-attention, the set of comparisons are to other elements within a given sequence. The result of these comparisons is then used to compute an output for the current input. For example, returning to Fig. 9 .15, the computation of y 3 is based on a set of comparisons between the input x 3 and its preceding elements x 1 and x 2 , and to x 3 itself. The simplest form of comparison between elements in a self-attention layer is a dot product. Let's refer to the result of this comparison as a score (we'll be updating this equation to add attention to the computation of this score):","their relevance in the current context" |
|
Deep Learning Architectures for Sequence Processing,Self-Attention Networks: Transformers,,"Why does the dot product output need to be scaled in the attention computation?","The result of a dot product can be an arbitrarily large (positive or negative) value. Exponentiating such large values can lead to numerical issues and to an effective loss of gradients during training. To avoid this, the dot product needs to be scaled in a suitable fashion. A scaled dot-product approach divides the result of the dot product by a factor related to the size of the embeddings before passing them through the softmax. A typical approach is to divide the dot product by the square root of the dimensionality of the query and key vectors (d k ), leading us to update our scoring function one more time, replacing Eq. 9.27 and Eq. 9.32 with Eq. 9.34:","The result of a dot product can be an arbitrarily large (positive or negative) value. Exponentiating such large values can lead to numerical issues and to an effective loss of gradients during training." |
|
Deep Learning Architectures for Sequence Processing,Self-Attention Networks: Transformers,,"What components constitute a transformer block?","The self-attention calculation lies at the core of what's called a transformer block, which, in addition to the self-attention layer, includes additional feedforward layers, residual connections, and normalizing layers. The input and output dimensions of these blocks are matched so they can be stacked just as was the case for stacked RNNs. Fig. 9 .18 illustrates a standard transformer block consisting of a single attention layer followed by a fully-connected feedforward layer with residual connections and layer normalizations following each. We've already seen feedforward layers in Chapter 7, but what are residual connections and layer norm? In deep networks, residual connections are connections that pass information from a lower layer to a higher layer without going through the intermediate layer. Allowing information from the activation going forward and the gradient going backwards to skip a layer improves learning and gives higher level layers direct access to information from lower layers (He et al., 2016) . Residual connections in transformers are implemented by added a layer's input vector to its output vector before passing it forward . In the transformer block shown in Fig. 9 .18, residual connections are used with both the attention and feedforward sublayers. These summed vectors are then normalized using layer normalization (Ba et al., 2016 ). If we think of a layer as one long vector of units, the resulting function computed in a transformer block can be expressed as:in addition to the self-attention layer, includes additional feedforward layers, residual connections, and normalizing layers |
|
Machine Translation and Encoder-Decoder Models,,,"What is a lexical gap?","Further, one language may have a lexical gap, where no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language. For example, English does not have a word that corresponds neatly to Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward phrases like filial piety or loving child, or good son/daughter for both). Finally, languages differ systematically in how the conceptual properties of an event are mapped onto specific words. Talmy (1985, 1991) noted that languages can be characterized by whether direction of motion and manner of motion are marked on the verb or on the satellites: particles, prepositional phrases, or adverbial phrases. For example, a bottle floating out of a cave would be described in English with the direction marked on the particle out, while in Spanish the direction would be marked on the verb. Verb-framed languages mark the direction of motion on the verb (leaving the satellites to mark the manner of motion), like Spanish acercarse 'approach', alcanzar 'reach', entrar 'enter', salir 'exit'. Satellite-framed languages mark the direction of motion on the satellite (leaving the verb to mark the manner of motion), like English crawl out, float off, jump down, run after. Languages like Japanese, Tamil, and the many languages in the Romance, Semitic, and Mayan language families, are verb-framed; Chinese as well as non-Romance Indo-European languages like English, Swedish, Russian, Hindi, and Farsi are satellite-framed (Talmy 1991, Slobin 1996).","no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language."
Machine Translation and Encoder-Decoder Models,,,"What are pro-drop languages?","Languages that can omit pronouns are called pro-drop languages. Even among the pro-drop languages, there are marked differences in frequencies of omission. Japanese and Chinese, for example, tend to omit far more than does Spanish. This dimension of variation across languages is called the dimension of referential density. We say that languages that tend to use more pronouns are more referentially dense than those that use more zeros. Referentially sparse languages, like Chinese or Japanese, that require the hearer to do more inferential work to recover antecedents are also called cold languages. Languages that are more explicit and make it easier for the hearer are called hot languages, a term borrowed from Marshall McLuhan's 1964 distinction between hot media like movies, which fill in many details for the viewer, versus cold media like comics, which require the reader to do more inferential work to fill out the representation (Bickel, 2003).","Languages that can omit pronouns"
Machine Translation and Encoder-Decoder Models,Beam Search,,"How does the beam search decoding algorithm operate?","Instead, decoding in MT and other sequence generation problems generally uses a method called beam search. In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width, on the metaphor of a flashlight beam that can be parameterized to be wider or narrower.","In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step."
Machine Translation and Encoder-Decoder Models,Encoder-Decoder with Transformers,,"What is the difference between cross-attention and multi-head self-attention?","But the components of the architecture differ somewhat from the RNN and also from the transformer block we've seen. First, in order to attend to the source language, the transformer blocks in the decoder have an extra cross-attention layer. Recall that the transformer block of Chapter 9 consists of a self-attention layer that attends to the input from the previous layer, followed by layer norm, a feed forward layer, and another layer norm. The decoder transformer block includes an extra layer with a special kind of attention, cross-attention (also sometimes called encoder-decoder attention or source attention). Cross-attention has the same form as the multi-headed self-attention in a normal transformer block, except that while the queries as usual come from the previous layer of the decoder, the keys and values come from the output of the encoder.","the keys and values come from the output of the encoder."
Machine Translation and Encoder-Decoder Models,,,"How is the chrF evaluation metric for MT computed?","The chrF metric is based on measuring the exact character n-grams a human reference and candidate machine translation have in common. However, this criterion is overly strict, since a good translation may use alternate words or paraphrases. A solution first pioneered in early metrics like METEOR (Banerjee and Lavie, 2005) was to allow synonyms to match between the reference x and the candidate. More recent metrics use BERT or other embeddings to implement this intuition.","The chrF metric is based on measuring the exact character n-grams a human reference and candidate machine translation have in common."
,,,"What percentage of tokens sampled for learning are replaced with the [MASK] token during BERT pre-training?","In BERT, 15% of the input tokens in a training sequence are sampled for learning. Of these, 80% are replaced with [MASK], 10% are replaced with randomly selected tokens, and the remaining 10% are left unchanged.","80%"
,,,"What is the Masked Language Modeling training objective?","The MLM training objective is to predict the original inputs for each of the masked tokens using a bidirectional encoder of the kind described in the last section. The cross-entropy loss from these predictions drives the training process for all the parameters in the model. Note that all of the input tokens play a role in the self-attention process, but only the sampled tokens are used for learning.","predict the original inputs for each of the masked tokens"
,,,"What role does the [CLS] token play in BERT?","Sequence classification applications often represent an input sequence with a single consolidated representation. With RNNs, we used the hidden layer associated with the final input element to stand for the entire sequence. A similar approach is used with transformers. An additional vector is added to the model to stand for the entire sequence. This vector is sometimes called the sentence embedding since it refers to the entire sequence, although the term 'sentence embedding' is also used in other ways. In BERT, the [CLS] token plays the role of this embedding. This unique token is added to the vocabulary and is prepended to the start of all input sequences, both during pretraining and encoding. The output vector in the final layer of the model for the [CLS] input represents the entire input sequence and serves as the input to a classifier head, a logistic regression or neural network classifier that makes the relevant decision.","sentence embedding"