{ "paper_id": "I17-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:37:37.739425Z" }, "title": "Context-Aware Smoothing for Neural Machine Translation", "authors": [ { "first": "Kehai", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "Machine Intelligence & Translation Laboratory", "institution": "Harbin Institute of Technology", "location": {} }, "email": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communications Technology (NICT)", "location": {} }, "email": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communications Technology (NICT)", "location": {} }, "email": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Information and Communications Technology (NICT)", "location": {} }, "email": "eiichiro.sumita@nict.go.jp" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "Machine Intelligence & Translation Laboratory", "institution": "Harbin Institute of Technology", "location": {} }, "email": "tjzhao@hit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In Neural Machine Translation (NMT), each word is represented as a lowdimension, real-value vector for encoding its syntax and semantic information. This means that even if the word is in a different sentence context, it is represented as the fixed vector to learn source representation. Moreover, a large number of Out-Of-Vocabulary (OOV) words, which have different syntax and semantic information, are represented as the same vector representation of unk. To alleviate this problem, we propose a novel contextaware smoothing method to dynamically learn a sentence-specific vector for each word (including OOV words) depending on its local context words in a sentence. The learned context-aware representation is integrated into the NMT to improve the translation performance. Empirical results on NIST Chinese-to-English translation task show that the proposed approach achieves 1.78 BLEU improvements on average over a strong attentional NMT, and outperforms some existing systems.", "pdf_parse": { "paper_id": "I17-1002", "_pdf_hash": "", "abstract": [ { "text": "In Neural Machine Translation (NMT), each word is represented as a lowdimension, real-value vector for encoding its syntax and semantic information. This means that even if the word is in a different sentence context, it is represented as the fixed vector to learn source representation. Moreover, a large number of Out-Of-Vocabulary (OOV) words, which have different syntax and semantic information, are represented as the same vector representation of unk. To alleviate this problem, we propose a novel contextaware smoothing method to dynamically learn a sentence-specific vector for each word (including OOV words) depending on its local context words in a sentence. The learned context-aware representation is integrated into the NMT to improve the translation performance. 
Empirical results on the NIST Chinese-to-English translation task show that the proposed approach achieves a 1.78 BLEU improvement on average over a strong attentional NMT baseline and outperforms some existing systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) has shown prominent performance in comparison with conventional Phrase-Based Statistical Machine Translation (PBSMT) (Koehn et al., 2003) . In NMT, a source sentence is converted into a vector representation by an RNN called the encoder, and another RNN called the decoder then generates the target sentence word by word based on the source representation, the attention information, and the target history.", "cite_spans": [ { "start": 33, "end": 65, "text": "(Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF12" }, { "start": 66, "end": 89, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF28" }, { "start": 90, "end": 112, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF2" }, { "start": 237, "end": 257, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One advantage of NMT systems is that each word is represented as a low-dimensional, real-valued vector, instead of the statistical rules stored in PBSMT. This means that even when a word appears in different sentence contexts, it is represented by the same fixed vector when the source representation is learned. Figure 1 (a) shows two pairs of Chinese-to-English parallel sentences whose Chinese sides contain the same word \"da\". Intuitively, \"da\" means \"beating\" in the first Chinese sentence, while it means \"playing\" in the second. Nevertheless, \"da\", which has different meanings in the two sentences, is represented by the same word vector in the NMT encoder, as shown in Figure 1 (b) . Although the RNN-based encoder can capture the sentence context for each word, we believe that offering a better word vector with a context-aware representation might help improve the translation quality of NMT.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 298, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 710, "end": 722, "text": "Figure 1 (b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Moreover, a large number of Out-Of-Vocabulary (OOV) words, which have different syntactic and semantic information, are all represented by the same vector of unk. This simple treatment may make sentences ambiguous, since the single unk breaks the structure of the sentence and thus hurts both the representation learning of the source sentence and the prediction of the target words. For example, an unk first affects source representation learning in the encoder; the negative effect is then propagated to the decoder, which generates impoverished context vectors and hidden states for translation prediction, as shown in the gray parts of Figure 1 (c) . Furthermore, when the generated target word is itself unk, the negative effect becomes even more severe. In this paper, we propose a novel context-aware smoothing method to dynamically learn a Context-Aware Representation (CAR) for each word (including OOV words) depending on its local context words in a sentence. 
We then use the learned CAR to extend each word vector in a sentence, thus enhancing the source representation and improving the translation performance of NMT. First, compared with the single unk vector, we encode the context words of each OOV as a Context-Aware Representation (CAR), which has the potential to capture the OOV's semantic information. Second, we also extend the context-aware smoothing method to in-vocabulary words, which enhances the encoder and decoder of NMT by utilizing context information more effectively through the learned CAR. To this end, we propose two neural networks to learn the context-aware representation of each word depending on its context words in a fixed-size window. We then design four NMT models with CAR to improve translation performance by smoothing the encoder and decoder.", "cite_spans": [], "ref_spans": [ { "start": 676, "end": 688, "text": "Figure 1 (c)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: (a) an example Chinese sentence \u6b63\u5728 zhengzai \u56e0\u4e3a yinwei \u4e89\u6267 zhengzhi \u800c er \u6253 da \u5bf9\u65b9 duifang; (b) the encoder over the word vectors v 1 , . . . , v J ; (c) the attentional encoder-decoder with attention weights \u03b1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of the paper is organized as follows. Section 2 introduces related work on NMT. Section 3 presents two novel neural models to learn the CAR for each word. Section 4 integrates the CAR into NMT by using smoothing strategies. Section 5 reports the experimental results obtained on the Chinese-to-English translation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, Section 6 concludes the paper and discusses future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In traditional SMT, many studies have tried to improve the translation of OOVs. Fung and Cheung (2004) and Shao and Ng (2004) adopted comparable corpora and web resources to extract translations for each unknown word. Marton et al. (2009) and Mirkin et al. (2009) applied paraphrase models and entailment rules to replace unknown words with in-vocabulary synonyms before translation. A series of works (Knight and Graehl, 1997; Jiang et al., 2007; Al-Onaizan and Knight, 2002) utilized transliteration and web mining techniques with external monolingual/bilingual corpora, comparable data, and web resources to find translations of unknown words. Most of the related PBSMT research focused on finding the correct translation of unknown words with external resources and ignored their negative effect on other words.", "cite_spans": [ { "start": 87, "end": 109, "text": "Fung and Cheung (2004)", "ref_id": "BIBREF7" }, { "start": 114, "end": 132, "text": "Shao and Ng (2004)", "ref_id": "BIBREF27" }, { "start": 224, "end": 244, "text": "Marton et al. (2009)", "ref_id": "BIBREF19" }, { "start": 249, "end": 269, "text": "Mirkin et al. 
(2009)", "ref_id": "BIBREF23" }, { "start": 407, "end": 432, "text": "(Knight and Graehl, 1997;", "ref_id": "BIBREF13" }, { "start": 433, "end": 452, "text": "Jiang et al., 2007;", "ref_id": "BIBREF11" }, { "start": 453, "end": 481, "text": "Al-Onaizan and Knight, 2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Compared with PBSMT, due to high computational cost, NMT has a more limited vocabulary size and severe OOV phenomenon. The existing PBSMT methods that used external resources to translate unknown words for SMT are hard to be directly introduced into NMT, because of NMT's soft-alignment mechanism (Bahdanau et al., 2015) . To relieve the negative effect of unknown words for NMT, Luong et al. (2015) proposed a word alignment algorithm, allowing the NMT system to emit, for each OOV word in the target sentence, the position of its corresponding word in the source sentence, and to translate every OOV in a post-processing step using a external bilingual dictionary. Although these methods improved the translation of OOV, they must learn external bilingual dictionary information in advance.", "cite_spans": [ { "start": 297, "end": 320, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF2" }, { "start": 380, "end": 399, "text": "Luong et al. (2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "From the point of vocabulary size, many works tried to use a large vocabulary size, thus covering more words. Jean et al. (2015) proposed a method based on importance sampling that allowed NMT model to use a very large target vocabulary for relieving the OOV phenomenon in NMT, which are only designed to reduce the computational complexity in training, not for decoding. Arthur et al. (2016) introduced discrete translation lexicons into NMT to imrpove the translations of these low-frequency words. Mi et al. (2016) proposed a vocabulary manipulation approach by limiting the number of vocabulary being predicted by each batch or sentence, to reduce both the training and the decoding complexity. These methods focused on the translation of OOV itself and ignored the other negative effect caused by the OOV, such as the translations of the words around the OOV.", "cite_spans": [ { "start": 110, "end": 128, "text": "Jean et al. (2015)", "ref_id": "BIBREF10" }, { "start": 372, "end": 392, "text": "Arthur et al. (2016)", "ref_id": "BIBREF1" }, { "start": 501, "end": 517, "text": "Mi et al. (2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently, many works exploited the granularity translation unit from words to smaller subwords or characters. Sennrich et al. 2016introduced a simpler and more effective approach to encode rare and unknown words as sequences of subword units by Byte Pair Encoding (Gage, 1994) . This is based on the intuition that various word classes are translatable via smaller units than words. Luong and Manning (2016) segmented the known words into character sequence, and learned the unknown word representation by characterlevel recurrent neural networks, thus achieving open vocabulary NMT. Li et al. (2016) replaced OOVs with in-vocabulary words by semantic similarity to reduce the negative effect for words around the OOVs. 
Costa-juss\u00e0 and Fonollosa (2016) presented a character-based NMT, in which character-level embeddings are combined with convolutional and highway layers to replace the standard lookup-based word representations. These methods extended the vocabulary to a larger or even unlimited vocabulary and improved the performance of NMT, especially for morphologically rich language pairs. Instead of utilizing a larger vocabulary or subword information, we reduce the negative effect of OOVs on NMT by learning context-aware representations for them. As a result, the proposed method can smooth the representation of each word and reduce the negative effect of unk on the attention model, the context annotations, and the decoder hidden states, thus improving the performance of NMT.", "cite_spans": [ { "start": 264, "end": 276, "text": "(Gage, 1994)", "ref_id": "BIBREF8" }, { "start": 383, "end": 407, "text": "Luong and Manning (2016)", "ref_id": "BIBREF17" }, { "start": 584, "end": 600, "text": "Li et al. (2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Intuitively, when one understands a natural language sentence, especially one containing polysemous words or OOVs, one often infers the meaning of these words from their context words. Context plays an important role in learning distributed representations of words (Mikolov et al., 2013a,b) . Motivated by this, we propose two neural network models, a Feedforward Context-of-Words Model (FCWM) and a Convolutional Context-of-Words Model (CCWM), to learn a Context-Aware Representation (CAR) for each word.", "cite_spans": [ { "start": 265, "end": 290, "text": "(Mikolov et al., 2013a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Context-Aware Representation", "sec_num": "3" }, { "text": "Inspired by neural word representation learning (Bengio et al., 2003) , the proposed FCWM includes an input layer, a projection layer, and a non-linear output layer, as shown in Figure 2 (a).", "cite_spans": [ { "start": 48, "end": 69, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 178, "end": 186, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "Specifically, suppose there is a source language sentence {x 1 , x 2 , . . . , x j , . . . , x J }. If the context window size is set to 2n (n = 2), the context of each word x j is defined as its previous n words and its following n words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L j = x j\u2212n , . . . , x j\u22121 , x j+1 , . . . , x j+n .", "eq_num": "(1)" } ], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "In the input layer, each word in L j is transformed into its one-hot representation. 
1 The projection layer concatenates the one-hot representations in L j into a (2nm)-dimensional vector L j : 2", "cite_spans": [ { "start": 81, "end": 82, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L j = [v j\u2212n : \u2022 \u2022 \u2022 : v j\u22121 : v j+1 : \u2022 \u2022 \u2022 : v j+n ],", "eq_num": "(2)" } ], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "where \":\" denotes the concatenation operation of word vectors. We then learn its semantic representation V L j \u2208 R m by a non-linear output layer instead of a softmax layer:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V L j = \u03c3(W 1 L j + b 1 ) T ,", "eq_num": "(3)" } ], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "where \u03c3 is a non-linear activation function (e.g., Tanh), T denotes the matrix transpose, W 1 is a weight matrix, and b 1 is a bias term. Finally, we extend each word with the learned CAR vector V L j , which is fed into the NMT model to enhance the source representation and improve target word prediction. Therefore, the proposed FCWM plays the role of a function \u03d5 parameterized by \u03b8 1 , which maps the context L j of each word into the vector V L j as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "Figure 2: (a) the FCWM, with an input layer over the context words L j of x j (x j-2 , x j-1 , x j+1 , x j+2 ), a concatenation (projection) layer over their vectors v j-2 , v j-1 , v j+1 , v j+2 , and a non-linear output layer; (b) the CCWM over the same context words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V L j = \u03d5(L j ; \u03b8 1 ).", "eq_num": "(4)" } ], "section": "Feedforward Context-of-Words Model", "sec_num": "3.1" }, { "text": "Compared with the FCWM, the proposed CCWM indirectly encodes the context words of each word into a compositional semantic representation that represents the OOV. Specifically, the proposed CCWM is a novel variant of the standard convolutional neural network (Collobert et al., 2011) , including an input layer, a convolutional layer, a pooling layer, and a non-linear output layer, as shown in Figure 2 (b). Input Layer: When the dimension of the word vectors is m and the context window is set to 2n, the input layer is denoted as a vector matrix M \u2208 R m\u00d72n . 
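To make the FCWM of Eqs. (1)-(4) above concrete, here is a minimal NumPy sketch (not the authors' implementation): it gathers the 2n context words of position j as in Eq. (1), concatenates their embeddings as in Eq. (2), and applies the Tanh output layer of Eq. (3). The names embed, W1, and b1, and the use of id 0 as padding at sentence borders, are illustrative assumptions.

```python
# Minimal, illustrative NumPy sketch of the FCWM (Eqs. (1)-(4)); not the paper's code.
import numpy as np

def fcwm_car(sentence_ids, j, embed, W1, b1, n=2):
    """Context-Aware Representation of the j-th word from its 2n context words.

    sentence_ids: list of word ids; embed: (V, m) embedding matrix;
    W1: (m, 2n*m) weight matrix; b1: (m,) bias. Id 0 is assumed to be <pad>.
    """
    J = len(sentence_ids)
    # Eq. (1): the previous n and following n positions, padded at the borders.
    ctx = [sentence_ids[p] if 0 <= p < J else 0
           for p in list(range(j - n, j)) + list(range(j + 1, j + n + 1))]
    # Eq. (2): concatenate the 2n context word vectors into one (2n*m)-dim vector.
    L_j = np.concatenate([embed[w] for w in ctx])
    # Eq. (3): the non-linear output layer yields the m-dim CAR vector V_{L_j}.
    return np.tanh(W1 @ L_j + b1)

# Toy usage: vocabulary of 10 words, 8-dim embeddings, window n = 2.
rng = np.random.default_rng(0)
V, m, n = 10, 8, 2
embed = rng.normal(size=(V, m))
W1, b1 = rng.normal(size=(m, 2 * n * m)), np.zeros(m)
print(fcwm_car([3, 1, 4, 1, 5, 9], j=2, embed=embed, W1=W1, b1=b1, n=n).shape)  # (8,)
```

The same routine applies to an OOV position: its CAR is computed purely from its context words, so it no longer collapses to the single unk vector.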
In M, each column denotes one context word of x j ; that is, M = [v j\u2212n , \u2022 \u2022 \u2022 , v j\u22121 , v j+1 , \u2022 \u2022 \u2022 , v j+n ] for the context {x j\u2212n , \u2022 \u2022 \u2022 , x j\u22121 , x j+1 , \u2022 \u2022 \u2022 , x j+n } of x j .", "cite_spans": [ { "start": 253, "end": 277, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 386, "end": 394, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "Convolutional Layer: In the convolutional layer, let the filter window size be m \u00d7 k (2 \u2264 k \u2264 2n), where k is set to 3 in our experiments; the filter generates the feature map L j as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "L j = \u03c8(W 2 [v j : v j+1 : \u2022 \u2022 \u2022 : v j+k ] + b 2 ), (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "where \u03c8 is a non-linear activation function, 3 W 2 \u2208 R m\u00d7k\u2022m is a weight matrix, and b 2 \u2208 R m is a bias term. After the filter traverses the input matrix, the output of the feature map L is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = [L 1 , . . . , L 2n\u2212k+1 ].", "eq_num": "(6)" } ], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "Pooling Layer: The pooling operation (e.g., max or average) is commonly used to extract robust features from the convolution. For the output feature map of the convolutional layer, a column-wise max is performed over consecutive columns with a window size of 2 as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P l = max[L 2l\u22121 , L 2l ],", "eq_num": "(7)" } ], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "where 1 \u2264 l \u2264 (2n\u2212k+1)/2. After the max pooling, the output of the feature map P is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P = [P 1 , . . . , P (2n\u2212k+1)/2 ].", "eq_num": "(8)" } ], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "Non-linear Output Layer: The output layer is typically a fully connected layer, i.e., a multiplication by a weight matrix. 
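As a companion to Eqs. (5)-(8) above, the following NumPy sketch illustrates the CCWM convolution and max-pooling steps (again an illustration under stated assumptions, not the authors' code). It assumes the m x k filter covers exactly k consecutive columns of M per step with Tanh as the non-linearity \u03c8, and that 2n-k+1 is even so the pooled map has (2n-k+1)/2 columns.

```python
# Illustrative NumPy sketch of the CCWM convolution + pooling (Eqs. (5)-(8)).
import numpy as np

def ccwm_features(M, W2, b2, k=3):
    """M: (m, 2n) matrix of context word vectors (one column per context word);
    W2: (m, k*m) convolution filter; b2: (m,) bias. Returns the pooled map P."""
    m, two_n = M.shape
    # Eqs. (5)-(6): slide the m x k filter over consecutive columns; each position
    # yields one m-dim feature column, giving 2n-k+1 columns in total.
    L = [np.tanh(W2 @ M[:, i:i + k].T.reshape(-1) + b2)
         for i in range(two_n - k + 1)]
    # Eqs. (7)-(8): column-wise max over consecutive pairs of feature columns.
    P = [np.maximum(L[2 * l], L[2 * l + 1]) for l in range(len(L) // 2)]
    return np.stack(P, axis=1)  # shape (m, (2n-k+1)//2)

# Toy usage with m = 8, n = 2 (so 2n = 4 context words) and k = 3.
rng = np.random.default_rng(1)
m, n, k = 8, 2, 3
M = rng.normal(size=(m, 2 * n))
W2, b2 = rng.normal(size=(m, k * m)), np.zeros(m)
print(ccwm_features(M, W2, b2, k).shape)  # (8, 1)
```

The output layer described next (Eq. (9)) then averages the pooled columns row-wise and applies \u03c3 with W 3 and b 3 to obtain the CAR.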
In this paper, a row-wise average over the pooled feature map is first computed without any additional parameters, and the CAR of each word is then obtained through the non-linear activation function \u03c3 (e.g., Tanh); hence, the CAR", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "V L j of word x j is obtained by V L j = \u03c3(W 3 (average(P 1 , . . . , P (2n\u2212k+1)/2 )) + b 3 ). (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "Therefore, the above CCWM plays the role of the function \u03d5 parameterized by \u03b8 2 , which maps the context L j of word x j into the vector V L j as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V L j = \u03d5(L j ; \u03b8 2 ).", "eq_num": "(10)" } ], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "In this case, the word x j is represented as the CAR V L j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Context-of-Words Model", "sec_num": "3.2" }, { "text": "4 NMT with Context-Aware Smoothing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NMT with Context-Aware Smoothing", "sec_num": "4" }, { "text": "An NMT model consists of an encoder process and a decoder process, and is hence often called an encoder-decoder model (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , as shown in Figure 1 . Typically, each unit of the source input (x 1 , . . . , x J ) is first embedded as a vector v x j and then represented as an annotation vector h j by", "cite_spans": [ { "start": 118, "end": 150, "text": "(Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF12" }, { "start": 151, "end": 174, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF28" }, { "start": 175, "end": 197, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 212, "end": 220, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "NMT Background", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h j = f enc (v x j , h j\u22121 ),", "eq_num": "(11)" } ], "section": "NMT Background", "sec_num": "4.1" }, { "text": "where f enc is a bidirectional Recurrent Neural Network (RNN) (Bahdanau et al., 2015) . These annotation vectors {h 1 , . . . , h J } are used to generate the target words in the decoder. An RNN decoder computes the probability of the target word y i through a softmax layer g:", "cite_spans": [ { "start": 62, "end": 85, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "NMT Background", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (y i |y