9 Deep Learning Architectures for Sequence Processing

9.7 Self-Attention Networks: Transformers

9.7.2 Multihead Attention
, and these get multiplied by the inputs packed into X to produce Q ∈ R^{N×d_k}, K ∈ R^{N×d_k}, and V ∈ R^{N×d_v}. The output of each of the h heads is of shape N × d_v, so the output of the multi-head layer with h heads consists of h outputs of shape N × d_v. To make use of these in further processing, they are combined and then reduced down to the original input dimension d. This is accomplished by concatenating the outputs from each head and then using yet another linear projection, W^O ∈ R^{h d_v × d}, to reduce the result to the original output dimension for each token, for a total N × d output. Fig. 9.19 illustrates this approach with 4 self-attention heads. This multihead layer replaces the single self-attention layer in the transformer block shown earlier in Fig. 9.18; the rest of the transformer block, with its feedforward layer, residual connections, and layer norms, remains the same.
MultiHeadAttn(X) = (head_1 ⊕ head_2 ⊕ ... ⊕ head_h) W^O   (9.43)
Q = X W_i^Q ;  K = X W_i^K ;  V = X W_i^V   (9.44)
head_i = SelfAttention(Q, K, V)   (9.45)
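To make the shapes in Eqs. 9.43-9.45 concrete, here is a minimal numpy sketch of multihead attention. The dimensions, the random weight matrices, and the omission of causal masking are illustrative assumptions, not values or choices from the text.

```python
import numpy as np

def self_attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (N, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                    # (N, d_v)

def multihead_attn(X, W_Q, W_K, W_V, W_O):
    # W_Q, W_K, W_V: per-head projection matrices; W_O: (h * d_v, d)
    heads = []
    for WQ, WK, WV in zip(W_Q, W_K, W_V):
        Q, K, V = X @ WQ, X @ WK, X @ WV                  # (N, d_k), (N, d_k), (N, d_v)
        heads.append(self_attention(Q, K, V))             # (N, d_v)
    return np.concatenate(heads, axis=-1) @ W_O           # (N, d): same shape as X

# Toy example with assumed sizes: 5 tokens, model dim 16, 4 heads
N, d, h, d_k, d_v = 5, 16, 4, 4, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(N, d))
W_Q = [rng.normal(size=(d, d_k)) for _ in range(h)]
W_K = [rng.normal(size=(d, d_k)) for _ in range(h)]
W_V = [rng.normal(size=(d, d_v)) for _ in range(h)]
W_O = rng.normal(size=(h * d_v, d))
print(multihead_attn(X, W_Q, W_K, W_V, W_O).shape)        # (5, 16)
```

Because the concatenation followed by W^O returns an N × d output, the multihead layer can be dropped into the transformer block wherever the single-head layer was used.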
9.7.3 Modeling Word Order: Positional Embeddings

How does a transformer model the position of each token in the input sequence? With RNNs, information about the order of the inputs was built into the structure of the model. Unfortunately, the same isn't true for transformers; the models as we've described them so far don't have any notion of the relative, or absolute, positions of the tokens in the input. This can be seen from the fact that if you scramble the order of the inputs in the attention computation in Fig. 9.16 you get exactly the same answer.
One simple solution is to modify the input embeddings by combining them with positional embeddings specific to each position in an input sequence.
Where do we get these positional embeddings? The simplest method is to start with randomly initialized embeddings corresponding to each possible input position up to some maximum length. For example, just as we have an embedding for the word fish, we'll have an embedding for the position 3. As with word embeddings, these positional embeddings are learned along with other parameters during training. To produce an input embedding that captures positional information, we just add the word embedding for each input to its corresponding positional embedding. This new embedding serves as the input for further processing. Fig. 9.20 shows the idea.

Figure 9.19 Multihead self-attention: each of the multihead self-attention layers is provided with its own set of key, query and value weight matrices. The outputs from each of the layers are concatenated and then projected down to d, thus producing an output of the same size as the input, so layers can be stacked.

A potential problem with the simple absolute position embedding approach is that there will be plenty of training examples for the initial positions in our inputs and correspondingly fewer at the outer length limits. These latter embeddings may be poorly trained and may not generalize well during testing. An alternative approach to positional embeddings is to choose a static function that maps integer inputs to real-valued vectors in a way that captures the inherent relationships among the positions. That is, it captures the fact that position 4 in an input is more closely related to position 5 than it is to position 17. A combination of sine and cosine functions with differing frequencies was used in the original transformer work. Developing better position representations is an ongoing research topic.
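As an illustration, here is a small numpy sketch of the sine/cosine scheme from the original transformer paper (Vaswani et al., 2017), added element-wise to token embeddings. The constant 10000 comes from that paper; the toy dimensions and random "word embeddings" are assumptions for illustration only.

```python
import numpy as np

def sinusoidal_positions(max_len, d):
    """Static position encodings: sines and cosines of differing frequencies."""
    pos = np.arange(max_len)[:, None]                 # (max_len, 1)
    i = np.arange(d // 2)[None, :]                    # (1, d/2)
    angles = pos / np.power(10000, 2 * i / d)         # (max_len, d/2)
    enc = np.zeros((max_len, d))
    enc[:, 0::2] = np.sin(angles)                     # even dimensions
    enc[:, 1::2] = np.cos(angles)                     # odd dimensions
    return enc

# Add positional information to (assumed) word embeddings for a 6-token input
d, n_tokens = 8, 6
word_embs = np.random.default_rng(1).normal(size=(n_tokens, d))
inputs = word_embs + sinusoidal_positions(n_tokens, d)  # element-wise sum
print(inputs.shape)                                     # (6, 8)
```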
9.8 Transformers as Language Models

Now that we've seen all the major components of transformers, let's examine how to deploy them as language models via self-supervised learning. To do this, we'll proceed just as we did with the RNN-based approach: given a training corpus of plain text we'll train a model to predict the next word in a sequence using teacher forcing. Fig. 9.21 illustrates the general approach. At each step, given all the preceding words, the final transformer layer produces an output distribution over the entire vocabulary. During training, the probability assigned to the correct word is used to calculate the cross-entropy loss for each item in the sequence. As with RNNs, the loss for a training sequence is the average cross-entropy loss over the entire sequence.
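The sketch below shows this objective in miniature: given gold next-word targets (teacher forcing), average the per-token cross-entropy of the model's output distributions. The tiny vocabulary and the random "model outputs" are stand-ins for a real transformer's predictions, not part of the text.

```python
import numpy as np

def lm_loss(logits, targets):
    """Average cross-entropy of next-word predictions over a sequence.

    logits: (seq_len, vocab_size) unnormalized scores, one row per position
    targets: (seq_len,) indices of the gold next word at each position
    """
    logits = logits - logits.max(axis=-1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy example: 4 positions, vocabulary of 10 types
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))       # pretend transformer outputs
targets = np.array([3, 1, 7, 0])        # gold next words (teacher forcing)
print(lm_loss(targets=targets, logits=logits))
```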
Figure 9.21 Training a transformer as a language model. Note the key difference between this figure and the earlier RNN-based version shown in Fig. 9.6: there, the calculation of the outputs and the losses at each step was inherently serial, given the recurrence in the calculation of the hidden states. With transformers, each training item can be processed in parallel, since the output for each element in the sequence is computed separately. Once trained, we can compute the perplexity of the resulting model, or autoregressively generate novel text, just as with RNN-based models.
A simple variation on autoregressive generation that underlies a number of practical applications uses a prior context to prime the autoregressive generation process. Fig. 9.22 illustrates this with the task of text completion. Here a standard language model is given the prefix to some text and is asked to generate a possible completion to it. Note that as the generation process proceeds, the model has direct access to the priming context as well as to all of its own subsequently generated outputs. This ability to incorporate the entirety of the earlier context and generated outputs at each time step is the key to the power of these models.

Text summarization is a practical application of context-based autoregressive generation. The task is to take a full-length article and produce an effective summary of it. To train a transformer-based autoregressive model to perform this task, we start with a corpus consisting of full-length articles accompanied by their corresponding summaries. Fig. 9.23 shows an example of this kind of data from a widely used summarization corpus consisting of CNN and Daily Mail news articles.
9.9 Contextual Generation and Summarization

A simple but surprisingly effective approach to applying transformers to summarization is to append a summary to each full-length article in a corpus, with a unique marker separating the two. More formally, each article-summary pair (x_1, ..., x_m), (y_1, ..., y_n) in a training corpus is converted into a single training instance (x_1, ..., x_m, δ, y_1, ..., y_n) with an overall length of n + m + 1. These training instances are treated as long sentences and then used to train an autoregressive language model using teacher forcing, exactly as we did earlier.
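A sketch of this preprocessing step, assuming the corpus is already tokenized; the separator string "<summarize>" is an arbitrary stand-in for the marker δ, not one mandated by the text.

```python
# Build (article ⊕ δ ⊕ summary) training instances for a summarization LM.
SEP = "<summarize>"   # stands in for the unique marker δ

def make_training_instance(article_tokens, summary_tokens):
    """Concatenate article, separator, and summary into one long sequence."""
    return article_tokens + [SEP] + summary_tokens

article = ["the", "storm", "closed", "three", "major", "highways", "overnight"]
summary = ["storm", "closes", "highways"]
instance = make_training_instance(article, summary)
print(len(instance) == len(article) + len(summary) + 1)   # True: m + n + 1
print(instance)
```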
Once trained, full articles ending with the special marker are used as the context to prime the generation process to produce a summary, as illustrated in Fig. 9.24. Note that, in contrast to RNNs, the model has access to the original article as well as to the newly generated text throughout the process.
As we'll see in later chapters, variations on this simple scheme are the basis for successful text-to-text applications including machine translation, summarization and question answering.
9.9.1 Applying Transformers to other NLP tasks

Transformers can also be used for sequence labeling tasks (like part-of-speech tagging or named entity tagging) and sequence classification tasks (like sentiment classification), as we'll see in detail in Chapter 11. Just to give a preview, however, we don't directly train a raw transformer on these tasks. Instead, we use a technique called pretraining, in which we first train a transformer language model on a large corpus of text, in a normal self-supervised way, and only afterwards add a linear or feedforward layer on top that we finetune on a smaller dataset hand-labeled with part-of-speech or sentiment labels. Pretraining on large amounts of data via the self-supervised language model objective turns out to be a very useful way of incorporating rich information about language, and the resulting representations make it much easier to learn from the generally smaller supervised datasets for tagging or sentiment.
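A minimal sketch of the finetuning idea: take the pretrained model's contextual representations of an input (faked here with random vectors), pool them, and train only a small linear layer on top to predict a label. The mean pooling, dimensions, and two-label setup are illustrative assumptions, not the book's prescribed recipe.

```python
import numpy as np

def classify(hidden_states, W, b):
    """Sentence classification head on top of a pretrained encoder.

    hidden_states: (seq_len, d) contextual vectors from the pretrained model
    W, b: the small, task-specific parameters we finetune
    """
    pooled = hidden_states.mean(axis=0)      # simple mean pooling over tokens
    logits = pooled @ W + b                  # (num_labels,)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                   # probability of each label

# Toy example: 6 tokens, hidden size 16, 2 sentiment labels
rng = np.random.default_rng(0)
hidden = rng.normal(size=(6, 16))            # stand-in for pretrained outputs
W, b = rng.normal(size=(16, 2)), np.zeros(2)
print(classify(hidden, W, b))                # a distribution over the 2 labels
```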
9.10 Summary
This chapter has introduced the concepts of recurrent neural networks and transformers and how they can be applied to language problems. Here’s a summary of the main points that we covered:
• In simple Recurrent Neural Networks sequences are processed one element at a time, with the output of each neural unit at time t based both on the current input at t and the hidden layer from time t − 1.
• RNNs can be trained with a straightforward extension of the backpropagation algorithm, known as backpropagation through time (BPTT).
• Simple recurrent networks fail on long inputs because of problems like vanishing gradients; instead modern systems use more complex gated architectures such as LSTMs that explicitly decide what to remember and forget in their hidden and context layers.
• Transformers are non-recurrent networks based on self-attention. A self-attention layer maps input sequences to output sequences of the same length, based on a set of attention heads that each model how the surrounding words are relevant for the processing of the current word.
• A transformer block consists of a single attention layer followed by a feedforward layer, with residual connections and layer norms following each. Transformer blocks can be stacked to make deeper and more powerful networks.
• Common language-based applications for RNNs and transformers include:
- Probabilistic language modeling: assigning a probability to a sequence, or to the next element of a sequence given the preceding words.
- Auto-regressive generation using a trained language model.
- Sequence labeling like part-of-speech tagging, where each element of a sequence is assigned a label.
- Sequence classification, where an entire text is assigned to a category, as in spam detection, sentiment analysis or topic classification.
9.11 Bibliographical and Historical Notes

Influential investigations of RNNs were conducted in the context of the Parallel Distributed Processing (PDP) group at UC San Diego in the 1980s. Much of this work was directed at human cognitive modeling rather than practical NLP applications (Rumelhart and McClelland 1986c; McClelland and Rumelhart 1986). Models using recurrence at the hidden layer in a feedforward network (Elman networks) were introduced by Elman (1990). Similar architectures were investigated by Jordan (1986) with a recurrence from the output layer, and Mathis and Mozer (1995) with the addition of a recurrent context layer prior to the hidden layer. The possibility of unrolling a recurrent network into an equivalent feedforward network is discussed in Rumelhart and McClelland (1986c). In parallel with work in cognitive modeling, RNNs were investigated extensively in the continuous domain in the signal processing and speech communities (Giles et al. 1994, Robinson et al. 1996). Schuster and Paliwal (1997) introduced bidirectional RNNs and described results on the TIMIT phoneme transcription task.
While theoretically interesting, the difficulty with training RNNs and managing context over long sequences impeded progress on practical applications. This situation changed with the introduction of LSTMs in Hochreiter and Schmidhuber (1997) and Gers et al. (2000). Impressive performance gains were demonstrated on tasks at the boundary of signal processing and language processing including phoneme recognition (Graves and Schmidhuber, 2005), handwriting recognition (Graves et al., 2007) and most significantly speech recognition (Graves et al., 2013b).
Interest in applying neural networks to practical NLP problems surged with the work of Collobert and Weston (2008) and Collobert et al. (2011). These efforts made use of learned word embeddings, convolutional networks, and end-to-end training. They demonstrated near state-of-the-art performance on a number of standard shared tasks including part-of-speech tagging, chunking, named entity recognition and semantic role labeling without the use of hand-engineered features.
Approaches that married LSTMs with pretrained word embeddings based on word2vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) quickly came to dominate many common tasks: part-of-speech tagging (Ling et al., 2015), syntactic chunking (Søgaard and Goldberg, 2016), named entity recognition (Chiu and Nichols, 2016; Ma and Hovy, 2016), opinion mining (Irsoy and Cardie, 2014), semantic role labeling (Zhou and Xu, 2015a) and AMR parsing (Foland and Martin, 2016). As with the earlier surge of progress involving statistical machine learning, these advances were made possible by the availability of training data provided by CoNLL, SemEval, and other shared tasks, as well as shared resources such as OntoNotes (Pradhan et al., 2007b) and PropBank (Palmer et al., 2005).
The transformer (Vaswani et al., 2017) was developed drawing on two lines of prior research: self-attention and memory networks. Encoder-decoder attention, the idea of using a soft weighting over the encodings of input words to inform a generative decoder (see Chapter 10), was developed by Graves (2013) in the context of handwriting generation, and by Bahdanau et al. (2015) for MT. This idea was extended to self-attention by dropping the need for separate encoding and decoding sequences and instead seeing attention as a way of weighting the tokens in collecting information passed from lower layers to higher layers (Ling et al., 2015; Cheng et al., 2016; Liu et al., 2016b). Other aspects of the transformer, including the terminology of key, query, and value, came from memory networks, a mechanism for adding an external read-write memory to networks, by using an embedding of a query to match keys representing content in an associative memory (Sukhbaatar et al., 2015; Weston et al., 2015; Graves et al., 2014).
10 Machine Translation and Encoder-Decoder Models

This chapter introduces machine translation (MT), the use of computers to translate from one language to another. Of course translation in its full generality, such as the translation of literature or poetry, is a difficult, fascinating, and intensely human endeavor, as rich as any other area of human creativity.
Machine translation in its present form therefore focuses on a number of very practical tasks. Perhaps the most common current use of machine translation is for information access. We might want to translate some instructions on the web, perhaps the recipe for a favorite dish, or the steps for putting together some furniture. Or we might want to read an article in a newspaper, or get information from an online resource like Wikipedia or a government webpage in a foreign language. MT for information access is probably one of the most common uses of NLP technology, and Google Translate alone translates hundreds of billions of words a day between over 100 languages.
Another common use of machine translation is to aid human translators. MT systems are routinely used to produce a draft translation that is fixed up in a post-editing phase by a human translator. This task is often called computer-aided translation or CAT. CAT is commonly used as part of localization: the task of adapting content or a product to a particular language community.
Finally, a more recent application of MT is to in-the-moment human communication needs. This includes incremental translation, translating speech on-the-fly before the entire sentence is complete, as is commonly used in simultaneous interpretation. Image-centric translation can be used, for example, to run OCR on the text in a phone camera image and feed the result to an MT system, in order to translate menus or street signs.
The standard algorithm for MT is the encoder-decoder network, also called the sequence-to-sequence network, an architecture that can be implemented with RNNs or with Transformers. We've seen in prior chapters that an RNN or Transformer architecture can be used to do classification (for example to map a sentence to a positive or negative sentiment tag for sentiment analysis), or can be used to do sequence labeling (for example to assign each word in an input sentence a part-of-speech tag or a named entity tag). For part-of-speech tagging, recall that the output tag is associated directly with each input word, and so we can just model the tag as output y_t for each input word x_t.
Encoder-decoder or sequence-to-sequence models are used for a different kind of sequence modeling in which the output sequence is a complex function of the entire input sequence; we must map from a sequence of input words or tokens to a sequence of tags that are not merely direct mappings from individual words.
Machine translation is exactly such a task: the words of the target language don't necessarily agree with the words of the source language in number or order. Consider translating the following made-up English sentence into Japanese. Note that the elements of the sentences are in very different places in the different languages. In English, the verb is in the middle of the sentence, while in Japanese, the verb kaita comes at the end. The Japanese sentence doesn't require the pronoun he, while English does. Such differences between languages can be quite complex. In the following actual sentence from the United Nations, notice the many changes between the Chinese sentence (we've given in red a word-by-word gloss of the Chinese characters) and its English equivalent.
(10.2) 大会/General Assembly 在/on 1982年/1982 12月/December 10日/10 通过了/adopted 第37号/37th 决议/resolution ，核准了/approved 第二次/second 探索/exploration 及/and 和平/peaceful 利用/using 外层空间/outer space 会议/conference 的/of 各项/various 建议/suggestions 。
On 10 December 1982, the General Assembly adopted resolution 37 in which it endorsed the recommendations of the Second United Nations Conference on the Exploration and Peaceful Uses of Outer Space.
Note the many ways the English and Chinese differ. For example the ordering differs in major ways: the Chinese order of the noun phrase is "peaceful using outer space conference of suggestions", while the English has "suggestions of the ... conference on peaceful use of outer space". And the order differs in minor ways (the date is ordered differently). English requires the in many places that Chinese doesn't, and adds some details (like "in which" and "it") that aren't necessary in Chinese. Chinese doesn't grammatically mark plurality on nouns (unlike English, which has the "-s" in "recommendations"), and so the Chinese must use the modifier 各项/various to make it clear that there is not just one recommendation. English capitalizes some words but not others.
Encoder-decoder networks are very successful at handling these sorts of complicated cases of sequence mappings. Indeed, the encoder-decoder algorithm is not just for MT; it's the state of the art for many other tasks where complex mappings between two sequences are involved. These include summarization (where we map from a long text to its summary, like a title or an abstract), dialogue (where we map from what the user said to what our dialogue system should respond), semantic parsing (where we map from a string of words to a semantic representation like logic or SQL), and many others.
We'll introduce the encoder-decoder algorithm in Section 10.2, and in the following sections describe important components of the model like beam search decoding; we'll also discuss how MT is evaluated, introducing the simple chrF metric.
But first, in the next section, we begin by summarizing the linguistic background to MT: key differences among languages that are important to keep in mind when approaching the task of translation.
10.1 Language Divergences and Typology

Some aspects of human language seem to be universal, holding true for every language, or are statistical universals, holding true for most languages. Many universals arise from the functional role of language as a communicative system by humans. Every language, for example, seems to have words for referring to people, for talking about eating and drinking, for being polite or not. There are also structural linguistic universals; for example, every language seems to have nouns and verbs (Chapter 8), has ways to ask questions or issue commands, and has linguistic mechanisms for indicating agreement or disagreement.
Yet languages also differ in many ways, and an understanding of what causes such translation divergences will help us build better MT models. We often distinguish the idiosyncratic and lexical differences that must be dealt with one by one (the word for "dog" differs wildly from language to language), from systematic differences that we can model in a general way (many languages put the verb before the direct object; others put the verb after the direct object). The study of these systematic cross-linguistic similarities and differences is called linguistic typology. This section sketches some typological facts that impact machine translation; the interested reader should also look into WALS, the World Atlas of Language Structures, which gives many typological facts about languages (Dryer and Haspelmath, 2013).
10.1.1 Word Order Typology

As we hinted at in our example above comparing English and Japanese, languages differ in the basic word order of verbs, subjects, and objects in simple declarative clauses. German, French, English, and Mandarin, for example, are all SVO (Subject-Verb-Object) languages, meaning that the verb tends to come between the subject and object. Hindi and Japanese, by contrast, are SOV languages, meaning that the verb tends to come at the end of basic clauses, and Irish and Arabic are VSO languages. Two languages that share their basic word order type often have other similarities. For example, VO languages generally have prepositions, whereas OV languages generally have postpositions.
Let's look in more detail at the example we saw above. In this SVO English sentence, the verb wrote is followed by its object a letter and the prepositional phrase to a friend, in which the preposition to is followed by its argument a friend. Arabic, with a VSO order, also has the verb before the object and prepositions. By contrast, in the Japanese example that follows, each of these orderings is reversed; the verb is preceded by its arguments, and the postposition follows its argument. Fig. 10.1 shows examples of other word order differences. All of these word order differences between languages can cause problems for translation, requiring the system to do huge structural reorderings as it generates the output.
10.1.2 Lexical Divergences
Of course we also need to translate the individual words from one language to another. For any translation, the appropriate word can vary depending on the context. The English source-language word bass, for example, can appear in Spanish as the fish lubina or the musical instrument bajo. German uses two distinct words for what in English would be called a wall: Wand for walls inside a building, and Mauer for walls outside a building. Where English uses the word brother for any male sibling, Chinese and many other languages have distinct words for older brother and younger brother (Mandarin gege and didi, respectively). In all these cases, translating bass, wall, or brother from English would require a kind of specialization, disambiguating the different uses of a word. For this reason the fields of MT and Word Sense Disambiguation (Chapter 18) are closely linked.
Sometimes one language places more grammatical constraints on word choice than another. We saw above that English marks nouns for whether they are singular or plural. Mandarin doesn't. Or French and Spanish, for example, mark grammatical gender on adjectives, so an English translation into French requires specifying adjective gender.
The way that languages differ in lexically dividing up conceptual space may be more complex than this one-to-many translation problem, leading to many-to-many mappings. For example, Fig. 10.2 summarizes some of the complexities discussed by Hutchins and Somers (1992) in translating English leg, foot, and paw to French. For example, when leg is used about an animal it's translated as French jambe; but about the leg of a journey, as French étape; if the leg is of a chair, we use French pied.
Further, one language may have a lexical gap, where no word or phrase, short of an explanatory footnote, can express the exact meaning of a word in the other language. For example, English does not have a word that corresponds neatly to Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward phrases like filial piety or loving child, or good son/daughter for both).

Finally, languages differ systematically in how the conceptual properties of an event are mapped onto specific words. Talmy (1985, 1991) noted that languages can be characterized by whether direction of motion and manner of motion are marked on the verb or on the "satellites": particles, prepositional phrases, or adverbial phrases. For example, a bottle floating out of a cave would be described in English with the direction marked on the particle out, while in Spanish the direction would be marked on the verb. Verb-framed languages mark the direction of motion on the verb (leaving the satellites to mark the manner of motion), like Spanish acercarse 'approach', alcanzar 'reach', entrar 'enter', salir 'exit'. Satellite-framed languages mark the direction of motion on the satellite (leaving the verb to mark the manner of motion), like English crawl out, float off, jump down, run after. Languages like Japanese, Tamil, and the many languages in the Romance, Semitic, and Mayan language families are verb-framed; Chinese as well as non-Romance Indo-European languages like English, Swedish, Russian, Hindi, and Farsi are satellite-framed (Talmy 1991, Slobin 1996).
10.1.3 Morphological Typology

Morphologically, languages are often characterized along two dimensions of variation. The first is the number of morphemes per word, ranging from isolating languages like Vietnamese and Cantonese, in which each word generally has one morpheme, to polysynthetic languages like Siberian Yupik ("Eskimo"), in which a single word may have very many morphemes, corresponding to a whole sentence in English. The second dimension is the degree to which morphemes are segmentable, ranging from agglutinative languages like Turkish, in which morphemes have relatively clean boundaries, to fusion languages like Russian, in which a single affix may conflate multiple morphemes, like -om in the word stolom (table-SG-INSTR-DECL1), which fuses the distinct morphological categories instrumental, singular, and first declension. Translating between languages with rich morphology requires dealing with structure below the word level, and for this reason modern systems generally use subword models like the wordpiece or BPE models of Section 10.7.1.
10.1.4 Referential Density

Finally, languages vary along a typological dimension related to the things they tend to omit. Some languages, like English, require that we use an explicit pronoun when talking about a referent that is given in the discourse. In other languages, however, we can sometimes omit pronouns altogether, as the following example from Spanish shows:
(10.6) [El jefe]_i dio con un libro. ∅_i Mostró a un descifrador ambulante.
[The boss] came upon a book. [He] showed it to a wandering decoder.
Languages that can omit pronouns are called pro-drop languages. Even among the pro-drop languages, there are marked differences in frequencies of omission. Japanese and Chinese, for example, tend to omit far more than does Spanish. This dimension of variation across languages is called the dimension of referential density. We say that languages that tend to use more pronouns are more referentially dense than those that use more zeros. Referentially sparse languages, like Chinese or Japanese, that require the hearer to do more inferential work to recover antecedents are also called cold languages. Languages that are more explicit and make it easier for the hearer are called hot languages, terms borrowed from Marshall McLuhan's 1964 distinction between hot media like movies, which fill in many details for the viewer, versus cold media like comics, which require the reader to do more inferential work to fill out the representation (Bickel, 2003).
Translating from languages with extensive pro-drop, like Chinese or Japanese, to non-pro-drop languages like English can be difficult since the model must somehow identify each zero and recover who or what is being talked about in order to insert the proper pronoun.
10.2 The Encoder-Decoder Model

Encoder-decoder networks, or sequence-to-sequence networks, are models capable of generating contextually appropriate, arbitrary length, output sequences. Encoder-decoder networks have been applied to a very wide range of applications including machine translation, summarization, question answering, and dialogue.
The key idea underlying these networks is the use of an encoder network that takes an input sequence and creates a contextualized representation of it, often called the context. This representation is then passed to a decoder which generates a task-specific output sequence. Fig. 10.3 illustrates the architecture. Encoder-decoder networks consist of three components:
1. An encoder that accepts an input sequence, x_1^n, and generates a corresponding sequence of contextualized representations, h_1^n. LSTMs, GRUs, convolutional networks, and Transformers can all be employed as encoders.
2. A context vector, c, which is a function of h_1^n, and conveys the essence of the input to the decoder.
3. A decoder, which accepts c as input and generates an arbitrary length sequence of hidden states h_1^m, from which a corresponding sequence of output states y_1^m can be obtained. Just as with encoders, decoders can be realized by any kind of sequence architecture.
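To make the three components concrete, here is a structural sketch in numpy. The recurrent step is a placeholder standing in for whichever architecture (LSTM, GRU, Transformer) is actually chosen, the dimensions are arbitrary, and output generation and attention are deliberately omitted; it is meant only to show how encoder, context vector, and decoder fit together.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                              # hidden/embedding size (assumed)
W, U = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1

def rnn_step(h_prev, x):
    """Placeholder recurrent unit standing in for an LSTM/GRU/transformer layer."""
    return np.tanh(W @ h_prev + U @ x)

def encoder(input_embeddings):
    """Component 1: map the input sequence to contextualized states h_1 ... h_n."""
    h, states = np.zeros(d), []
    for x in input_embeddings:
        h = rnn_step(h, x)
        states.append(h)
    return states

def decoder(c, n_steps=4):
    """Component 3: generate a sequence of hidden states conditioned on c."""
    h, outputs = c, []
    for _ in range(n_steps):
        h = rnn_step(h, c)            # simplified: no feedback of previous output yet
        outputs.append(h)
    return outputs

enc_states = encoder(rng.normal(size=(6, d)))      # 6 input tokens
c = enc_states[-1]                                 # Component 2: the context vector
print(len(decoder(c)), c.shape)                    # 4 decoder states of dimension 8
```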
10.3 Encoder-Decoder with RNNs

Let's begin by describing an encoder-decoder network based on a pair of RNNs. Recall the conditional RNN language model from Chapter 9 for computing p(y), the probability of a sequence y. Like any language model, we can break down the probability as follows:
p(y) = p(y_1) p(y_2|y_1) p(y_3|y_1, y_2) ... p(y_m|y_1, ..., y_{m-1})   (10.7)
At a particular time t, we pass the prefix of t − 1 tokens through the language model, using forward inference to produce a sequence of hidden states, ending with the hidden state corresponding to the last word of the prefix. We then use the final hidden state of the prefix as our starting point to generate the next token.
More formally, if g is an activation function like tanh or ReLU, a function of the input at time t and the hidden state at time t − 1, and f is a softmax over the set of possible vocabulary items, then at time t the output y_t and hidden state h_t are computed as:

h_t = g(h_{t-1}, x_t)   (10.8)
y_t = f(h_t)   (10.9)
We only have to make one slight change to turn this language model with autoregressive generation into a translation model that can translate from a source text in one language to a target text in a second: add a sentence separation marker at the end of the source text, and then simply concatenate the target text. We briefly introduced this idea of a sentence separator token in Chapter 9 when we considered using a Transformer language model to do summarization by training a conditional language model. If we call the source text x and the target text y, we are computing the probability p(y|x) as follows:

p(y|x) = p(y_1|x) p(y_2|y_1, x) p(y_3|y_1, y_2, x) ... p(y_m|y_1, ..., y_{m-1}, x)   (10.10)

Fig. 10.4 shows the setup for a simplified version of the encoder-decoder model (we'll see the full model, which requires attention, in the next section): an English source text ("the green witch arrived"), a sentence separator token (<s>), and a Spanish target text ("llegó la bruja verde"). To translate a source text, we run it through the network performing forward inference to generate hidden states until we get to the end of the source. Then we begin autoregressive generation, asking for a word in the context of the hidden layer from the end of the source input as well as the end-of-sentence marker. Subsequent words are conditioned on the previous hidden state and the embedding for the last word generated.

Let's formalize and generalize this model a bit in Fig. 10.5. (To help keep things straight, we'll use the superscripts e and d where needed to distinguish the hidden states of the encoder and the decoder.) The elements of the network on the left process the input sequence x and comprise the encoder. While our simplified figure shows only a single network layer for the encoder, stacked architectures are the norm, where the output states from the top layer of the stack are taken as the final representation. A widely used encoder design makes use of stacked biLSTMs, where the hidden states from the top layers of the forward and backward passes are concatenated as described in Chapter 9 to provide the contextualized representations for each time step.

The entire purpose of the encoder is to generate a contextualized representation of the input. This representation is embodied in the final hidden state of the encoder, h_n^e. This representation, also called c for context, is then passed to the decoder. The decoder network on the right takes this state and uses it to initialize the first
hidden state of the decoder. That is, the first decoder RNN cell uses c as its prior hidden state h_0^d. The decoder autoregressively generates a sequence of outputs, an element at a time, until an end-of-sequence marker is generated. Each hidden state is conditioned on the previous hidden state and the output generated in the previous state.

Figure 10.6 Allowing every hidden state of the decoder (not just the first decoder state) to be influenced by the context c produced by the encoder.
One weakness of this approach as described so far is that the influence of the context vector, c, will wane as the output sequence is generated. A solution is to make the context vector c available at each step in the decoding process by adding it as a parameter to the computation of the current hidden state, using the following equation (illustrated in Fig. 10.6):
h_t^d = g(ŷ_{t-1}, h_{t-1}^d, c)   (10.11)
Now we're ready to see the full equations for this version of the decoder in the basic encoder-decoder model, with context available at each decoding timestep. Recall that g is a stand-in for some flavor of RNN and ŷ_{t-1} is the embedding for the output sampled from the softmax at the previous step:
c = h_n^e
h_0^d = c
h_t^d = g(ŷ_{t-1}, h_{t-1}^d, c)
z_t = f(h_t^d)
y_t = softmax(z_t)   (10.12)
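A minimal numpy sketch of one step of Eq. 10.12, with g instantiated as a simple tanh RNN cell; the particular parameterization of g and f and all the dimensions are assumptions for illustration, not a reference implementation from the text.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decoder_step(y_prev_emb, h_prev, c, params):
    """One decoder step: condition on ŷ_{t-1}, h_{t-1}^d, and the context c."""
    W_y, W_h, W_c, W_out = params
    h_t = np.tanh(W_y @ y_prev_emb + W_h @ h_prev + W_c @ c)  # g(ŷ_{t-1}, h_{t-1}^d, c)
    z_t = W_out @ h_t                                         # f(h_t^d)
    return h_t, softmax(z_t)                                  # y_t

# Toy sizes: hidden/embedding dim 8, vocabulary of 12
rng = np.random.default_rng(0)
d, V = 8, 12
params = (rng.normal(size=(d, d)), rng.normal(size=(d, d)),
          rng.normal(size=(d, d)), rng.normal(size=(V, d)))
c = rng.normal(size=d)             # final encoder hidden state h_n^e
h, y_emb = c.copy(), np.zeros(d)   # h_0^d = c; empty previous-output embedding
h, y_dist = decoder_step(y_emb, h, c, params)
print(y_dist.argmax(), round(y_dist.sum(), 3))  # most likely word index; sums to 1.0
```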
Finally, as shown earlier, the output y at each time step consists of a softmax computation over the set of possible outputs (the vocabulary, in the case of language modeling or MT). We compute the most likely output at each time step by taking the argmax over the softmax output:
y_t = argmax_{w ∈ V} P(w | x, y_1 ... y_{t-1})   (10.13)
10.3.1 Training the Encoder-Decoder Model
Encoder-decoder architectures are trained end-to-end, just as with the RNN language models of Chapter 9. Each training example is a tuple of paired strings, a source and a target. Concatenated with a separator token, these source-target pairs can now serve as training data.
For MT, the training data typically consists of sets of sentences and their translations. These can be drawn from standard datasets of aligned sentence pairs, as we'll discuss in Section 10.7.2. Once we have a training set, the training itself proceeds as with any RNN-based language model. The network is given the source text and then, starting with the separator token, is trained autoregressively to predict the next word, as shown in Fig. 10.7.
Figure 10.7 Training the basic RNN encoder-decoder approach to machine translation. The total loss is the average cross-entropy loss per target word. Note that in the decoder we usually don't propagate the model's softmax outputs ŷ_t, but use teacher forcing to force each input to the correct gold value for training. We compute the softmax output distribution over ŷ in the decoder in order to compute the loss at each token, which can then be averaged to compute a loss for the sentence.
Note the differences between training (Fig. 10.7) and inference (Fig. 10.4) with respect to the outputs at each time step. The decoder during inference uses its own estimated output ŷ_t as the input for the next time step x_{t+1}. Thus the decoder will tend to deviate more and more from the gold target sentence as it keeps generating more tokens. In training, therefore, it is more common to use teacher forcing in the decoder. Teacher forcing means that we force the system to use the gold target token from training as the next input x_{t+1}, rather than allowing it to rely on the (possibly erroneous) decoder output ŷ_t. This speeds up training.
10.4 Attention

The simplicity of the encoder-decoder model is its clean separation of the encoder, which builds a representation of the source text, from the decoder, which uses this context to generate a target text. In the model as we've described it so far, this context vector is h_n, the hidden state of the last (nth) time step of the source text. This final hidden state is thus acting as a bottleneck: it must represent absolutely everything about the meaning of the source text, since the only thing the decoder knows about the source text is what's in this context vector (Fig. 10.8). Information at the beginning of the sentence, especially for long sentences, may not be equally well represented in the context vector.
The attention mechanism is a solution to the bottleneck problem, a way of allowing the decoder to get information from all the hidden states of the encoder, not just the last hidden state.
In the attention mechanism, as in the vanilla encoder-decoder model, the context vector c is a single vector that is a function of the hidden states of the encoder, that is, c = f(h_1^e ... h_n^e). Because the number of hidden states varies with the size of the input, we can't use the entire tensor of encoder hidden state vectors directly as the context for the decoder. The idea of attention is instead to create the single fixed-length vector c by taking a weighted sum of all the encoder hidden states. The weights focus on ('attend to') a particular part of the source text that is relevant for the token the decoder is currently producing. Attention thus replaces the static context vector with one that is dynamically derived from the encoder hidden states, different for each token in decoding.

Figure 10.8 Requiring the context c to be only the encoder's final hidden state forces all the information from the entire source sentence to pass through this representational bottleneck.
This context vector, c_i, is generated anew with each decoding step i and takes all of the encoder hidden states into account in its derivation. We then make this context available during decoding by conditioning the computation of the current decoder hidden state on it (along with the prior hidden state and the previous output generated by the decoder), as we see in this equation (and Fig. 10.9):

h_i^d = g(ŷ_{i-1}, h_{i-1}^d, c_i)   (10.14)

Figure 10.9 The attention mechanism allows each hidden state of the decoder to see a different, dynamic context, which is a function of all the encoder hidden states.
The first step in computing c_i is to compute how much to focus on each encoder state, that is, how relevant each encoder state is to the decoder state captured in h_{i-1}^d. We capture relevance by computing, at each state i during decoding, a score(h_{i-1}^d, h_j^e) for each encoder state j.
The simplest such score, called dot-product attention, implements relevance as similarity: measuring how similar the decoder hidden state is to an encoder hidden state, by computing the dot product between them:
score(h_{i-1}^d, h_j^e) = h_{i-1}^d · h_j^e   (10.15)
The score that results from this dot product is a scalar that reflects the degree of similarity between the two vectors. The vector of these scores across all the encoder hidden states gives us the relevance of each encoder state to the current step of the decoder.
To make use of these scores, we'll normalize them with a softmax to create a vector of weights, α_{ij}, that tells us the proportional relevance of each encoder hidden state j to the prior decoder hidden state, h_{i-1}^d.
α_{ij} = softmax(score(h_{i-1}^d, h_j^e))  ∀ j ∈ e
       = exp(score(h_{i-1}^d, h_j^e)) / Σ_k exp(score(h_{i-1}^d, h_k^e))   (10.16)
Finally, given the distribution in α, we can compute a fixed-length context vector for the current decoder state by taking a weighted average over all the encoder hidden states:
c_i = Σ_j α_{ij} h_j^e   (10.17)

With this, we finally have a fixed-length context vector that takes into account information from the entire encoder state and that is dynamically updated to reflect the needs of the decoder at each step of decoding. Fig. 10.10 illustrates an encoder-decoder network with attention, focusing on the computation of one context vector c_i.
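Here is a compact numpy sketch of Eqs. 10.15-10.17: dot-product scores against every encoder state, a softmax to get the weights α_{ij}, and a weighted sum to form c_i. The random encoder and decoder states are stand-ins for real model activations.

```python
import numpy as np

def attention_context(h_dec_prev, enc_states):
    """Compute c_i from the prior decoder state and all encoder states."""
    scores = enc_states @ h_dec_prev            # dot-product relevance, one score per j
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # α_ij: a distribution over encoder states
    return weights @ enc_states, weights        # c_i = Σ_j α_ij h_j^e

# Toy example: 5 encoder states of dimension 8
rng = np.random.default_rng(0)
enc_states = rng.normal(size=(5, 8))            # h_1^e ... h_5^e
h_dec_prev = rng.normal(size=8)                 # h_{i-1}^d
c_i, alphas = attention_context(h_dec_prev, enc_states)
print(alphas.round(2), c_i.shape)               # weights sum to 1; c_i has shape (8,)
```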
Figure 10.10 A sketch of the encoder-decoder network with attention, focusing on the computation of c_i. The context value c_i is one of the inputs to the computation of h_i^d. It is computed by taking the weighted sum of all the encoder hidden states, each weighted by their dot product with the prior decoder hidden state h_{i-1}^d.
It's also possible to create more sophisticated scoring functions for attention models. Instead of simple dot-product attention, we can get a more powerful function that computes the relevance of each encoder hidden state to the decoder hidden state by parameterizing the score with its own set of weights, W_s:
score(h_{i-1}^d, h_j^e) = h_{i-1}^d W_s h_j^e
The weights W_s, which are then trained during normal end-to-end training, give the network the ability to learn which aspects of similarity between the decoder and encoder states are important to the current application. This bilinear model also allows the encoder and decoder to use vectors of different dimensionality, whereas simple dot-product attention requires that the encoder and decoder hidden states have the same dimensionality.
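A two-line numpy illustration of the bilinear score; W_s is an assumed, randomly initialized matrix here, and its shape is what lets the decoder and encoder states have different sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
h_dec_prev = rng.normal(size=6)          # decoder state of dimension 6
h_enc_j = rng.normal(size=10)            # encoder state of dimension 10
W_s = rng.normal(size=(6, 10))           # learned bilinear weights (random stand-in)
score = h_dec_prev @ W_s @ h_enc_j       # scalar relevance score
print(score)
```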
10.5 Beam Search

The decoding algorithm we gave above for generating translations has a problem (as does the autoregressive generation we introduced in Chapter 9 for generating from a conditional language model). Recall that algorithm: at each time step in decoding, the output y_t is chosen by computing a softmax over the set of possible outputs (the vocabulary, in the case of language modeling or MT), and then choosing the highest probability token (the argmax):

y_t = argmax_{w ∈ V} P(w | x, y_1 ... y_{t-1})   (10.18)
Choosing the single most probable token to generate at each step is called greedy decoding; a greedy algorithm is one that makes a choice that is locally optimal, whether or not it will turn out to have been the best choice with hindsight. Indeed, greedy search is not optimal, and may not find the highest probability translation. The problem is that the token that looks good to the decoder now might turn out later to have been the wrong choice!

Let's see this by looking at the search tree, a graphical representation of the choices the decoder makes in searching for the best translation, in which we view the decoding problem as a heuristic state-space search and systematically explore the space of possible outputs. In such a search tree, the branches are the actions, in this case the action of generating a token, and the nodes are the states, in this case the state of having generated a particular prefix. We are searching for the best action sequence, i.e. the target string with the highest probability. Fig. 10.11 demonstrates the problem, using a made-up example. Notice that the most probable sequence is ok ok </s> (with a probability of .4*.7*1.0), but a greedy search algorithm will fail to find it, because it incorrectly chooses yes as the first word since it has the highest local probability.

Figure 10.11 A search tree for generating the target string T = t_1, t_2, ... from the vocabulary V = {yes, ok, </s>}, given the source string, showing the probability of generating each token from that state. Greedy search would choose yes at the first time step followed by yes, instead of the globally most probable sequence ok ok.
Recall from Chapter 8 that for part-of-speech tagging we used dynamic programming search (the Viterbi algorithm) to address this problem. Unfortunately, dynamic programming is not applicable to generation problems with long-distance dependencies between the output decisions. The only method guaranteed to find the best solution is exhaustive search: computing the probability of every one of the V^T possible sentences (for some length value T), which is obviously too slow.
Instead, decoding in MT and other sequence generation problems generally uses a method called beam search. In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width, on the metaphor of a flashlight beam that can be parameterized to be wider or narrower.
Thus at the first step of decoding, we compute a softmax over the entire vocabulary, assigning a probability to each word. We then select the k best options from this softmax output. These initial k outputs are the search frontier and these k initial words are called hypotheses. A hypothesis is an output sequence, a translation-so-far, together with its probability.

Figure 10.12 Beam search decoding with a beam width of k = 2. At each time step, we choose the k best hypotheses, compute the V possible extensions of each hypothesis, score the resulting k * V possible hypotheses and choose the best k to continue. At time 1, the frontier is filled with the best 2 options from the initial state of the decoder: arrived and the. We then extend each of those, compute the probability of all the hypotheses so far (arrived the, arrived aardvark, the green, the witch) and choose the best 2 (in this case the green and the witch) to be the search frontier to extend on the next step. On the arcs we show the decoders that we run to score the extension words (although for simplicity we haven't shown the context value c_i that is input at each step).
At subsequent steps, each of the k best hypotheses is extended incrementally by being passed to distinct decoders, which each generate a softmax over the entire vocabulary to extend the hypothesis to every possible next token. Each of these k * V hypotheses is scored by P(y_i | x, y_{<i}): the product of the probability of the current word choice multiplied by the probability of the path that led to it. We then prune the k * V hypotheses down to the k best hypotheses, so there are never more than k hypotheses at the frontier of the search, and never more than k decoders. Fig. 10.12 illustrates this process with a beam width of 2.

This process continues until a </s> is generated, indicating that a complete candidate output has been found. At this point, the completed hypothesis is removed from the frontier and the size of the beam is reduced by one. The search continues until the beam has been reduced to 0. The result will be k hypotheses.
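Below is a small, self-contained sketch of this procedure. The toy next-token table (conditioned only on the previous token) is an assumed stand-in for the decoder's softmax; hypotheses are scored by summed log probabilities, and the beam shrinks by one each time a hypothesis generates </s>, as described above. A real implementation would typically also length-normalize the scores of completed hypotheses.

```python
import math

# Toy next-token distributions, conditioned only on the previous token
# (a bigram stand-in for the decoder's softmax over the vocabulary).
NEXT = {
    "<s>": {"ok": 0.4, "yes": 0.5, "</s>": 0.1},
    "ok":  {"ok": 0.3, "yes": 0.1, "</s>": 0.6},
    "yes": {"ok": 0.3, "yes": 0.4, "</s>": 0.3},
}

def beam_search(k=2, max_len=5):
    frontier = [(0.0, ["<s>"])]                   # (log probability, tokens so far)
    completed = []
    while k > 0 and frontier and max_len > 0:
        candidates = []
        for logp, seq in frontier:                # extend every live hypothesis
            for tok, p in NEXT[seq[-1]].items():
                candidates.append((logp + math.log(p), seq + [tok]))
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = []
        for logp, seq in candidates[:k]:          # keep only the k best extensions
            if seq[-1] == "</s>":
                completed.append((logp, seq))     # complete: shrink the beam by one
                k -= 1
            else:
                frontier.append((logp, seq))
        max_len -= 1
    return sorted(completed + frontier, key=lambda c: c[0], reverse=True)

for logp, seq in beam_search():
    print(round(math.exp(logp), 3), " ".join(seq[1:]))
```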