{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:22.272573Z" }, "title": "Recurrent Attention for the Transformer", "authors": [ { "first": "Jan", "middle": [], "last": "Rosendahl", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Christian", "middle": [], "last": "Herold", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Frithjof", "middle": [], "last": "Petrick", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this work, we conduct a comprehensive investigation on one of the centerpieces of modern machine translation systems: the encoderdecoder attention mechanism. Motivated by the concept of first-order alignments, we extend the (cross-)attention mechanism by a recurrent connection, allowing direct access to previous attention/alignment decisions. We propose several ways to include such a recurrency into the attention mechanism. Verifying their performance across different translation tasks we conclude that these extensions and dependencies are not beneficial for the translation performance of the Transformer architecture.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this work, we conduct a comprehensive investigation on one of the centerpieces of modern machine translation systems: the encoderdecoder attention mechanism. Motivated by the concept of first-order alignments, we extend the (cross-)attention mechanism by a recurrent connection, allowing direct access to previous attention/alignment decisions. We propose several ways to include such a recurrency into the attention mechanism. Verifying their performance across different translation tasks we conclude that these extensions and dependencies are not beneficial for the translation performance of the Transformer architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Since its introduction by Vaswani et al. (2017) , the Transformer architecture has enabled state of the art results on nearly all machine translation (MT) tasks (Bojar et al., 2018; Barrault et al., 2019; Ott et al., 2018) . Compared to previous neural machine translation (NMT) approaches (Sutskever et al., 2014; Bahdanau et al., 2015) , it introduces many new concepts like self-attention, positional encoding and multi-head attention. However, the Transformer still relies on the encoder-decoder attention mechanism introduced by Bahdanau et al. (2015) to translate a source sentence into the target language. While for earlier NMT models, this attention mechanism was thoroughly investigated and many different variants were proposed (Feng et al., 2016; Cohn et al., 2016; Sankaran et al., 2016; Tu et al., 2016) , the same can not be said for the Transformer. 
In the present work, we discuss the Transformer encoder-decoder attention mechanism, propose different ways to enhance its capabilities and analyze the resulting systems.", "cite_spans": [ { "start": 26, "end": 47, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF14" }, { "start": 161, "end": 181, "text": "(Bojar et al., 2018;", "ref_id": "BIBREF2" }, { "start": 182, "end": 204, "text": "Barrault et al., 2019;", "ref_id": "BIBREF1" }, { "start": 205, "end": 222, "text": "Ott et al., 2018)", "ref_id": "BIBREF8" }, { "start": 273, "end": 278, "text": "(NMT)", "ref_id": null }, { "start": 290, "end": 314, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF12" }, { "start": 315, "end": 337, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 534, "end": 556, "text": "Bahdanau et al. (2015)", "ref_id": "BIBREF0" }, { "start": 739, "end": 758, "text": "(Feng et al., 2016;", "ref_id": "BIBREF4" }, { "start": 759, "end": 777, "text": "Cohn et al., 2016;", "ref_id": "BIBREF3" }, { "start": 778, "end": 800, "text": "Sankaran et al., 2016;", "ref_id": "BIBREF10" }, { "start": 801, "end": 817, "text": "Tu et al., 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One particular design decision in the Transformer attention mechanism catches the eye: When calculating the context vector in the current decoding step, there is no direct information flow coming from the previous steps. While earlier neural architectures explicitly incorporated the hidden state from the previous decoding step in the attention calculation (Bahdanau et al., 2015) and traditional count-based alignment models used higher order Markov assumptions, the Transformer relies on the self-attention mechanism and layer stacking to learn context dependencies. Therefore we ask the questions if and how an explicit dependency on the previous attention decisions should be included in the Transformer encoder-decoder attention mechanism. In order to provide an answer we propose numerous approaches towards modeling such an explicit dependency and report our findings across three language pairs.", "cite_spans": [ { "start": 358, "end": 381, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recurrent network architectures (Bahdanau et al., 2015) the decoder state recurrently depends on the previous decoding step. Many works have extended this by additionally adding an explicit recurrent dependency within the attention mechanism itself.", "cite_spans": [ { "start": 35, "end": 58, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Feng et al. (2016) concatenate the attention context produced in the previous decoding step to the input of the attention mechanism. Other approaches approximate a coverage value for every source position by accumulating the attention weights over all previous time steps, which is then included in the attention calculation (Cohn et al., 2016) . Tu et al. (2016) extend this idea by renormalizing the coverage using a fertility model that predicts how much attention a specific source word should receive. In a similar spirit Sankaran et al. 
(2016) explicitly bias the attention weights to be more focused on source positions that did not receive much attention yet.", "cite_spans": [ { "start": 325, "end": 344, "text": "(Cohn et al., 2016)", "ref_id": "BIBREF3" }, { "start": 347, "end": 363, "text": "Tu et al. (2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In contrast to network architectures with a recurrent decoder, the Transformer (Vaswani et al., 2017) is trained fully in parallel and uses multi-head, scaled dot-product cross-attention. This work tries to answer whether introducing a recurrent dependency can also benefit the Transformer cross-attention.", "cite_spans": [ { "start": 79, "end": 101, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Recurrent Cross-Attention", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The 'vanilla' Transformer is an intricate encoder-decoder architecture that uses an attention mechanism to map a sequence of input tokens f_1^J onto a sequence of output tokens e_1^I. In this framework, a context vector c_i^{\ell,n} for the \ell-th decoder layer and the n-th attention head is calculated in the i-th decoding step by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "c_i^{\ell,n} = \sum_j \alpha_{i,j}^{\ell,n} (W_v^{\ell,n} h_j) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "Here, h_j denotes the j-th output of the encoder, which is transformed by a trainable weight matrix", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "W_v^{\ell,n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "into the value. The weight \alpha_{i,j}^{\ell,n} is calculated using h_j as well as the output of the previous decoder layer (after self-attention), s_i^\ell. More specifically, we calculate the energy", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "\hat{\alpha}_{i,j}^{\ell,n} = \frac{1}{\sqrt{d_k}} (W_k^{\ell,n} h_j)^\top (W_q^{\ell,n} s_i^\ell)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "where d_k is the feature dimension, and W_k^{\ell,n} and W_q^{\ell,n} are trainable weight matrices transforming h_j and s_i^\ell into the key and the query, respectively. This naming stems from the intuition that we use a query W_q^{\ell,n} s_i^\ell to perform a lookup on a series of key-value pairs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "(W_k^{\ell,n} h_1, W_v^{\ell,n} h_1), \dots, (W_k^{\ell,n} h_J, W_v^{\ell,n} h_J).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "The energy \hat{\alpha}_{i,j}^{\ell,n} is then normalized using the softmax operation to get the so-called attention 'weights'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "\alpha_{i,j}^{\ell,n} = \mathrm{softmax}_j(\hat{\alpha}_{i,j}^{\ell,n}) = \frac{\exp(\hat{\alpha}_{i,j}^{\ell,n})}{\sum_{j'} \exp(\hat{\alpha}_{i,j'}^{\ell,n})} . (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "
Once the c_i^{\ell,n} are calculated, the full context vector c_i^\ell is formed by concatenating the outputs of all attention heads, followed by a linear transformation. A combination of residual connections, feed-forward and self-attention layers is used to transform", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "c_i^\ell into s_i^{\ell+1} = f(c_i^\ell)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": ", the decoder state before the next cross-attention layer. In this work we focus on the cross-attention and refer the reader to Vaswani et al. (2017) for details on the self-attention concept.", "cite_spans": [ { "start": 129, "end": 150, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "One thing that becomes obvious in the above description is the lack of information flow along the decoder 'time-axis' i. The only way the system can make use of such information is through the aforementioned self-attention concept. In this work we raise the question of whether such an indirect flow of information is sufficient or if the system can profit from a more direct integration of its 'past attention decisions'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder-Decoder Attention", "sec_num": "3.1" }, { "text": "A straightforward way to use information from the previous decoder time step i \u2212 1 in the current attention calculation is to modify the query vector. We do this by simple concatenation, resulting in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "\hat{\alpha}_{i,j}^{\ell,n} = \frac{1}{\sqrt{d_k}} (W_k^{\ell,n} h_j)^\top \big( W_q^{\ell,n} [s_i^\ell ; f_{i-1}] \big), where f_{i-1}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "is some function holding information from the previous time step. One apparent way to define this function is the 'concatenate previous context' variant,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "f_{i-1} = c_{i-1}^{\ell,n} (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "where we simply use the context vector of the previous time step. One can argue that the previous attention weight of the j-th source position, \alpha_{i-1,j}^{\ell,n}, is more useful than the already condensed context vector. Therefore, we consider the 'concatenate previous weight' approach:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f_{i-1} = \alpha_{i-1,j}^{\ell,n} .", "eq_num": "(3)" } ], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "However, here we only take into account the time step directly preceding the current one. 
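To make the attention calculation of Section 3.1 and the query modification above concrete, the following minimal, single-head NumPy sketch performs one decoding step. It is purely illustrative (the actual systems are implemented in RETURNN); all function and variable names are chosen for exposition, and multi-head splitting, residual connections and layer normalization are omitted.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def cross_attention_step(h, s_i, W_k, W_v, W_q, f_prev=None):
    """One decoding step of single-head encoder-decoder attention.

    h        : encoder outputs h_1..h_J, shape (J, d_model)
    s_i      : decoder state after self-attention, shape (d_model,)
    W_k, W_v : key/value projections, shape (d_model, d_k)
    W_q      : query projection; if f_prev is given, its input dimension
               must match the concatenation [s_i ; f_prev]
    f_prev   : optional information from step i-1 (e.g. the previous
               context vector or the previous attention weights),
               concatenated to the query input as in Section 3.2
    """
    query_in = s_i if f_prev is None else np.concatenate([s_i, f_prev])
    q = query_in @ W_q                         # query,  shape (d_k,)
    K = h @ W_k                                # keys,   shape (J, d_k)
    V = h @ W_v                                # values, shape (J, d_k)
    energies = (K @ q) / np.sqrt(K.shape[-1])  # scaled dot-product energies
    alpha = softmax(energies)                  # attention weights over J positions
    context = alpha @ V                        # per-head context vector
    return context, alpha
```

With f_prev = None this reduces to the vanilla cross-attention; passing the previous context vector corresponds to Equation 2, passing the previous attention weights to Equation 3, and passing an accumulated version of the weights to the variant defined next.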
To investigate whether additional information from earlier decisions might be helpful, we define the 'concatenate previous accumulated weight' approach:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f_{i-1} = \sum_{i'=1}^{i-1} \alpha_{i',j}^{\ell,n}", "eq_num": "(4)" } ], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "specifying how much the encoder output from the j-th position has been attended to so far. For all of the variants described in this section, the resulting energies \hat{\alpha}_{i,j}^{\ell,n} are normalized using a softmax operation (see Equation 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modifying the Query", "sec_num": "3.2" }, { "text": "Staying in the 'query-key-value' framework, the counterpart to modifying the query vector (as in Section 3.2) would be to modify the key-value list in order to incorporate information from the previous time step. We expand this list by inserting one additional vector pair (g_k, g_v) along the time axis and name this approach 'expand key-value list'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "For choosing the vectors g_k and g_v, we test four different variants. In variant 1 we use the (linearly transformed) full context vector c_{i-1}^\ell from the previous time step as both the additional key and the additional value vector", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g_k = W_k^{\ell,n} c_{i-1}^\ell , \quad g_v = W_v^{\ell,n} c_{i-1}^\ell .", "eq_num": "(5)" } ], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "The context vector is transformed using the same matrices W_k^{\ell,n} and W_v^{\ell,n} which we also use for transforming the other keys and values, respectively. One can argue that a separate transformation is needed for the context vector, which leads us to variant 2,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g_k = W_{g_k}^{\ell,n} c_{i-1}^\ell , \quad g_v = W_{g_v}^{\ell,n} c_{i-1}^\ell", "eq_num": "(6)" } ], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "where W_{g_k}^{\ell,n} and W_{g_v}^{\ell,n} are used specifically for transforming c_{i-1}^\ell. Furthermore, we speculate that a specific attention head should mainly benefit from incorporating its own previous output. Therefore, we define variant 3 as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g_k = W_{g_k}^{\ell,n} c_{i-1}^{\ell,n} , \quad g_v = W_{g_v}^{\ell,n} c_{i-1}^{\ell,n} .", "eq_num": "(7)" } ], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "where just the context vector c_{i-1}^{\ell,n}, produced by the same head, is considered in the calculation. 
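As an illustration of these variants, the sketch below (again a single-head NumPy sketch with illustrative names, not the actual implementation) appends the previous context vector as one extra key-value pair. Passing W_gk = W_k and W_gv = W_v recovers variant 1 (Equation 5), separate matrices give variant 2 (Equation 6), and feeding the per-head context vector instead of the full one corresponds to variant 3 (Equation 7).

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def expanded_kv_attention_step(h, s_i, c_prev, W_k, W_v, W_q, W_gk, W_gv):
    """Cross-attention step with the key-value list expanded by the
    previous context vector (Section 3.3).

    h          : encoder outputs, shape (J, d_model)
    s_i        : decoder state after self-attention, shape (d_model,)
    c_prev     : context vector from decoding step i-1
    W_gk, W_gv : projections applied only to c_prev
    """
    g_k = c_prev @ W_gk                               # additional key
    g_v = c_prev @ W_gv                               # additional value
    K = np.vstack([h @ W_k, g_k])                     # (J+1, d_k)
    V = np.vstack([h @ W_v, g_v])                     # (J+1, d_k)
    q = s_i @ W_q
    alpha = softmax((K @ q) / np.sqrt(K.shape[-1]))   # weights over J+1 entries
    context = alpha @ V
    return context, alpha
```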
Finally, we test variant 4 in which only the key is transformed but the value is not:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g_k = W_{g_k}^{\ell,n} c_{i-1}^{\ell,n} , \quad g_v = c_{i-1}^{\ell,n} .", "eq_num": "(8)" } ], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "The rationale here is that c_{i-1}^{\ell,n} already 'belongs' in the context vector embedding space (not the encoder output space like h_j) and therefore no transformation should be necessary. On a side note, while all of these changes might make sense from an architectural point of view, they certainly raise questions regarding the interpretability of the attention weights as a target-to-source alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expanding the Key-Value List", "sec_num": "3.3" }, { "text": "Finally, the most direct way to use information from the previous time step i \u2212 1 in the current attention calculation is to directly modify the attention weights. We test two ways of doing this:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Re-scaling the Attention Weights", "sec_num": "3.4" }, { "text": "\u2022 Encouraging continuous attention patterns, where the attention weights from the previous decoding step are similar to the weights of the current one:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Re-scaling the Attention Weights", "sec_num": "3.4" }, { "text": "\bar{\alpha}_{i,j}^{\ell,n} = \lambda \hat{\alpha}_{i,j}^{\ell,n} + \frac{1-\lambda}{2k+1} \sum_{j'=j-k}^{j+k} \hat{\alpha}_{i-1,j'}^{\ell,n} . (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Re-scaling the Attention Weights", "sec_num": "3.4" }, { "text": "\u2022 Encouraging coverage by reducing the attention weight by an amount proportional to the extent to which the source position j has already been attended to in all preceding time steps combined:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Re-scaling the Attention Weights", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\bar{\alpha}_{i,j}^{\ell,n} = \hat{\alpha}_{i,j}^{\ell,n} - \frac{\lambda}{\sqrt{d_k}} \sum_{i'=1}^{i-1} \alpha_{i',j}^{\ell,n} .", "eq_num": "(10)" } ], "section": "Re-scaling the Attention Weights", "sec_num": "3.4" }, { "text": "For both variants we apply the normalization", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Re-scaling the Attention Weights", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\alpha_{i,j}^{\ell,n} = \mathrm{softmax}_j(\bar{\alpha}_{i,j}^{\ell,n})", "eq_num": "(11)" } ], "section": "Re-scaling the Attention Weights", "sec_num": "3.4" }, { "text": "and tune the hyperparameters: the scaling factor \u03bb (used in both approaches) and the window size k (only in the first).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Re-scaling the Attention Weights", "sec_num": "3.4" }, { "text": "We evaluate our approaches on three tasks: the WMT 2016 news translation Romanian\u2192English task, the WMT 2018 news translation Turkish\u2192English task, as well as the IWSLT 2017 English\u2192Italian translation task on TED data. 
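Before turning to the data and model details, the two re-scaling rules of Section 3.4 (Equations 9 and 10) can be sketched as follows. This is again an illustrative NumPy sketch: it assumes the re-scaled energies are re-normalized as in Equation 11, treating window positions outside the sentence as zero contributions in Equation 9 is a simplifying assumption not specified in the text, and the default values correspond to the hyperparameters tuned in Section 5.1.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def rescale_continuity(energies, prev_energies, lam=0.5, k=5):
    """Equation (9): blend the current energies with a windowed average of
    the previous step's energies to encourage continuous attention.
    energies, prev_energies: shape (J,)."""
    J = energies.shape[0]
    smoothed = np.zeros(J)
    for j in range(J):
        lo, hi = max(0, j - k), min(J, j + k + 1)
        # positions outside the sentence are treated as zero contributions
        smoothed[j] = prev_energies[lo:hi].sum() / (2 * k + 1)
    return lam * energies + (1.0 - lam) * smoothed

def rescale_coverage(energies, accumulated_weights, lam=0.5, d_k=64):
    """Equation (10): penalize source positions that already received a lot
    of attention; accumulated_weights is the sum of the normalized weights
    over all previous decoding steps."""
    return energies - (lam / np.sqrt(d_k)) * accumulated_weights

# Both variants are followed by the re-normalization of Equation (11):
#   alpha = softmax(rescaled_energies)
```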
Our training data consists of 612k (Ro\u2192En: SE Times, Europarl v8), 208k (Tr\u2192En: SE Times) and 227k (En\u2192It: TED talks) parallel sentences, which we preprocess using 20k byte-pair-encoding operations (8k for En\u2192It) learned jointly on source and target data. We train a 6-layer Transformer for each task, similar to the 'base' configuration of Vaswani et al. (2017). All models are implemented in RETURNN (Zeyer et al., 2018). We tie the weights of all embedding/projection matrices and apply a dropout of 20% for Ro\u2192En and 30% for Tr\u2192En and En\u2192It. The baseline models use a batch size of 9600; however, GPU memory limitations allow a maximum batch size of 7600 for some experiments that add a recurrency to the decoder. We select the best checkpoint according to BLEU on the development set and report case-sensitive BLEU calculated with SacreBLEU (Post, 2018) and TER with TERCom (Snover et al., 2006). [Table 2: Performance comparison of the approaches using additional context information from the previous time steps as described in Section 3. Train time refers to the average GPU time per training checkpoint measured on Ro\u2192En. We show the best results reported in the literature for each task: 1 Kasai et al. 2020, 2 Marie et al. 2018 and 3 Lakew et al. 2017.] The four variants of the 'expand key-value list' approach (Section 3.3) differ in the way in which the context vector is transformed before being used as an additional key-value pair. The performance of each variant in terms of BLEU and TER is shown in Table 1. Variants 1 (Equation 5) and 2 (Equation 6) perform the strongest, both being slightly better than our baseline system. Re-using the transformation matrices from the other key-value pairs does not seem to hurt the system. Limiting the additional context information to the same attention head (variant 3, Equation 7) results in a slight performance loss. Additionally, omitting the transformation of c_{i-1}^{\ell,n} for the value list (variant 4, Equation 8) results in a significant performance loss, indicating that this vector is not directly compatible with the other vectors in the list after all. Since it exhibits the best balance between performance and complexity, we choose variant 1 (Equation 5) for the complete system comparison. Furthermore, we look at the different ways of re-scaling the attention weights as introduced in Section 3.4. We tune the hyperparameters k and \u03bb for each method where applicable. For the window size, we find k = 5 to work best, and for the scaling factor we choose \u03bb = 0.5 for all variants.", "cite_spans": [ { "start": 560, "end": 581, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF14" }, { "start": 622, "end": 642, "text": "(Zeyer et al., 2018)", "ref_id": "BIBREF15" }, { "start": 1066, "end": 1078, "text": "(Post, 2018)", "ref_id": "BIBREF9" }, { "start": 1099, "end": 1120, "text": "(Snover et al., 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 1121, "end": 1128, "text": "Table 2", "ref_id": null }, { "start": 1651, "end": 1658, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "The comparison of all the approaches defined in Section 3 and tuned/selected in Section 5.1 is shown in Table 2. 
Note that all the approach-specific hyperparameter tuning was done on the Ro\u2192En task, distinguishing it from the other two.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Main Comparison", "sec_num": "5.2" }, { "text": "For the most part, there is very little variation in system performance across all proposed methods, none of which can outperform the Transformer baseline by a significant amount. While there were still some (although small) improvements visible when evaluating on the development set, e.g. for the methods discussed in Section 3.4, these mostly vanish when evaluating on unseen test sets and on different tasks. This suggests overfitting on the development set when tuning the hyperparameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Comparison", "sec_num": "5.2" }, { "text": "While one can argue that the proposed methods exhibit the same level of performance as the Transformer baseline, there is a significant downside: training speed. In the last column of Table 2, the average computation time per checkpoint relative to the Transformer baseline is shown. All proposed methods slow down the training by at least a factor of 5. This is due to a combination of breaking the parallelization inside the decoder (we have to wait for time step i \u2212 1 to finish in order to do the computations for time step i) and having to use a smaller batch size in training.", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 191, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Main Comparison", "sec_num": "5.2" }, { "text": "In this work we provide a detailed analysis of the encoder-decoder attention mechanism in the Transformer architecture. We argue that, compared to previous attention formulations, there is no direct link to the context produced in earlier decoding steps. We propose different approaches to explicitly model this link and test the resulting systems on three machine translation tasks. The results show no significant improvements for any of the tested approaches. This leads us to the conclusion that the context information which is incorporated through self-attention is already sufficient for the given task of machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Findings of the 2019 conference on machine translation (WMT19)", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation, WMT 2019", "volume": "2", "issue": "", "pages": "1--61", "other_ids": { "DOI": [ "10.18653/v1/w19-5301" ] }, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Ondrej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine transla- tion (WMT19). In Proceedings of the Fourth Confer- ence on Machine Translation, WMT 2019, Florence, Italy, August 1-2, 2019 -Volume 2: Shared Task Pa- pers, Day 1, pages 1-61. Association for Computa- tional Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Findings of the 2018 conference on machine translation (WMT18)", "authors": [ { "first": "Ondrej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers, WMT 2018", "volume": "", "issue": "", "pages": "272--303", "other_ids": { "DOI": [ "10.18653/v1/w18-6401" ] }, "num": null, "urls": [], "raw_text": "Ondrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 con- ference on machine translation (WMT18). In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, WMT 2018, Belgium, Brussels, October 31 -November 1, 2018, pages 272-303. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Incorporating structural alignment biases into an attentional neural translation model", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Cong Duy Vu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Vymolova", "suffix": "" }, { "first": "Kaisheng", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vy- molova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. CoRR, abs/1601.01085.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improving attention modeling with implicit distortion and fertility for machine translation", "authors": [ { "first": "Shujie", "middle": [], "last": "Shi Feng", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kenny", "middle": [ "Q" ], "last": "Zhou", "suffix": "" }, { "first": "", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2016, "venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers", "volume": "", "issue": "", "pages": "3082--3092", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shi Feng, Shujie Liu, Nan Yang, Mu Li, Ming Zhou, and Kenny Q. Zhu. 2016. Improving attention mod- eling with implicit distortion and fertility for ma- chine translation. In COLING 2016, 26th Inter- national Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 3082- 3092. ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Non-autoregressive machine translation with disentangled context transformer", "authors": [ { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "James", "middle": [], "last": "Cross", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "5144--5155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020. Non-autoregressive machine trans- lation with disentangled context transformer. In In- ternational Conference on Machine Learning, pages 5144-5155. 
PMLR.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Improving zero-shot translation of low-resource languages", "authors": [ { "first": "M", "middle": [], "last": "Surafel", "suffix": "" }, { "first": "", "middle": [], "last": "Lakew", "suffix": "" }, { "first": "F", "middle": [], "last": "Quintino", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Lotito", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Turchi", "suffix": "" }, { "first": "", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2017, "venue": "14th International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Surafel M Lakew, Quintino F Lotito, Matteo Negri, Marco Turchi, and Marcello Federico. 2017. Im- proving zero-shot translation of low-resource lan- guages. In 14th International Workshop on Spoken Language Translation.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Nict's neural and statistical machine translation systems for the wmt18 news translation task", "authors": [ { "first": "Benjamin", "middle": [], "last": "Marie", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Atsushi", "middle": [], "last": "Fujita", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "449--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Marie, Rui Wang, Atsushi Fujita, Masao Utiyama, and Eiichiro Sumita. 2018. Nict's neural and statistical machine translation systems for the wmt18 news translation task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 449-455.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Scaling neural machine translation", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018", "volume": "", "issue": "", "pages": "1--9", "other_ids": { "DOI": [ "10.18653/v1/w18-6301" ] }, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018, Belgium, Brussels, October 31 -November 1, 2018, pages 1-9. Association for Computational Linguis- tics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018", "volume": "", "issue": "", "pages": "186--191", "other_ids": { "DOI": [ "10.18653/v1/w18-6319" ] }, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. 
In Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018, Belgium, Brussels, October 31 -November 1, 2018, pages 186-191. Association for Computational Lin- guistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Temporal attention model for neural machine translation", "authors": [ { "first": "Haitao", "middle": [], "last": "Baskaran Sankaran", "suffix": "" }, { "first": "Yaser", "middle": [], "last": "Mi", "suffix": "" }, { "first": "Abe", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "", "middle": [], "last": "Ittycheriah", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and Abe Ittycheriah. 2016. Temporal attention model for neural machine translation. CoRR, abs/1608.02927.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annota- tion. In In Proceedings of Association for Machine Translation in the Americas, pages 223-231.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems 27: Annual Conference on Neural Informa- tion Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Modeling coverage for neural machine translation", "authors": [ { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016", "volume": "1", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/p16-1008" ] }, "num": null, "urls": [], "raw_text": "Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. 
Modeling coverage for neural machine translation. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "RETURNN as a generic flexible neural toolkit with application to translation and speech recognition", "authors": [ { "first": "Albert", "middle": [], "last": "Zeyer", "suffix": "" }, { "first": "Tamer", "middle": [], "last": "Alkhouli", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL 2018", "volume": "", "issue": "", "pages": "128--133", "other_ids": { "DOI": [ "10.18653/v1/P18-4022" ] }, "num": null, "urls": [], "raw_text": "Albert Zeyer, Tamer Alkhouli, and Hermann Ney. 2018. RETURNN as a generic flexible neural toolkit with application to translation and speech recognition. In Proceedings of ACL 2018, Melbourne, Australia, July 15-20, 2018, System Demonstrations, pages 128-133. Association for Computational Linguis- tics.", "links": null } }, "ref_entries": {} } }