{
"paper_id": "I17-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:10.593363Z"
},
"title": "Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder",
"authors": [
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": "",
"affiliation": {},
"email": "ndurrani@qf.org.qa"
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": "",
"affiliation": {},
"email": "hsajjad@qf.org.qa"
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": "",
"affiliation": {},
"email": "belinkov@mit.edu"
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": "",
"affiliation": {},
"email": "svogel@qf.org.qa"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "End-to-end training makes the neural machine translation (NMT) architecture simpler, yet elegant compared to traditional statistical machine translation (SMT). However, little is known about linguistic patterns of morphology, syntax and semantics learned during the training of NMT systems, and more importantly, which parts of the architecture are responsible for learning each of these phenomena. In this paper we i) analyze how much morphology an NMT decoder learns, and ii) investigate whether injecting target morphology into the decoder helps it produce better translations. To this end we present three methods: i) joint generation, ii) joint-data learning, and iii) multi-task learning. Our results show that explicit morphological information helps the decoder learn target language morphology and improves the translation quality by 0.2-0.6 BLEU points.",
"pdf_parse": {
"paper_id": "I17-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "End-to-end training makes the neural machine translation (NMT) architecture simpler, yet elegant compared to traditional statistical machine translation (SMT). However, little is known about linguistic patterns of morphology, syntax and semantics learned during the training of NMT systems, and more importantly, which parts of the architecture are responsible for learning each of these phenomena. In this paper we i) analyze how much morphology an NMT decoder learns, and ii) investigate whether injecting target morphology into the decoder helps it produce better translations. To this end we present three methods: i) joint generation, ii) joint-data learning, and iii) multi-task learning. Our results show that explicit morphological information helps the decoder learn target language morphology and improves the translation quality by 0.2-0.6 BLEU points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) offers an elegant end-to-end architecture, improving translation quality compared to traditional phrase-based machine translation. These improvements are attributed to more fluent output (Toral and S\u00e1nchez-Cartagena, 2017) and better handling of morphology and long-range dependencies (Bentivogli et al., 2016) . However, systematic studies are required to understand what kinds of linguistic phenomena (morphology, syntax, semantics, etc.) are learned by these models and more importantly, which of the components is responsible for each phenomenon.",
"cite_spans": [
{
"start": 220,
"end": 255,
"text": "(Toral and S\u00e1nchez-Cartagena, 2017)",
"ref_id": "BIBREF37"
},
{
"start": 318,
"end": 343,
"text": "(Bentivogli et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 436,
"end": 473,
"text": "(morphology, syntax, semantics, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A few attempts have been made to understand what NMT models learn about morphology (Belinkov et al., 2017a) , syntax (Shi et al., 2016) and semantics (Belinkov et al., 2017b) . Shi et al. (2016) used activations at various layers from the NMT encoder to predict syntactic properties on the source-side, while Belinkov et al. (2017a) and Belinkov et al. (2017b) used a similar approach to investigate the quality of word representations on the task of morphological and semantic tagging. Belinkov et al. (2017a) found that word representations learned from the encoder are rich in morphological information, while representations learned from the decoder are significantly poorer. However, the paper does not present a convincing explanation for this finding. Our first contribution in this work is to provide a more comprehensive analysis of morphological learning on the decoder side. We hypothesize that other components of the NMT architecture -specifically the encoder and the attention mechanism, learn enough information about the target language morphology for the decoder to perform reasonably well, without incorporating high levels of morphological knowledge into the decoder. To probe this hypothesis, we investigate the following questions:",
"cite_spans": [
{
"start": 83,
"end": 107,
"text": "(Belinkov et al., 2017a)",
"ref_id": "BIBREF3"
},
{
"start": 117,
"end": 135,
"text": "(Shi et al., 2016)",
"ref_id": "BIBREF36"
},
{
"start": 150,
"end": 174,
"text": "(Belinkov et al., 2017b)",
"ref_id": "BIBREF5"
},
{
"start": 177,
"end": 194,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF36"
},
{
"start": 309,
"end": 332,
"text": "Belinkov et al. (2017a)",
"ref_id": "BIBREF3"
},
{
"start": 337,
"end": 360,
"text": "Belinkov et al. (2017b)",
"ref_id": "BIBREF5"
},
{
"start": 487,
"end": 510,
"text": "Belinkov et al. (2017a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 What is the effect of attention on the performance of the decoder?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 How much does the encoder help the decoder in predicting the correct morphological variant of the word it generates?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To answer these questions, we train NMT models for different language pairs, involving morphologically rich languages such as German and Czech. We then use the trained models to extract features from the decoder for words in the language of interest. Finally we train a classifier using the extracted features to predict the morphological tag of the words. The accuracy of this ex-ternal classifier gives us a quantitative measure of how well the NMT model learned features that are relevant to morphology. Our results indicate that both the encoder and the attention mechanism aid the decoder in generating correct morphological forms, and thus limit the need of the decoder to learn target morphology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivated by these findings, we hypothesize that it may be possible to force the decoder to learn more about morphology by injecting the morphological information during training which can in turn improve the overall translation quality. In order to test this hypothesis, we experiment with three possible solutions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Joint Generation: An NMT model is trained on the concatenation of words and morphological tags on the target side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Joint-data learning: An NMT model is trained where each source sequence is used twice with an artificial token to either predict target words or morphological tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A multi-task NMT system with two objective functions is trained to jointly learn translation and morphological tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning:",
"sec_num": "3."
},
{
"text": "Our experiments show that word representations learned after explicitly injecting target morphology improve morphological tagging accuracy of the decoder by 3% and also improves the translation quality by up to 0.6 BLEU points. The remainder of this paper is organized as follows. Section 2 describes our experimental setup. Section 3 shows an analysis of the decoder. Section 4 describes the three proposed methods to integrate morphology into the decoder. Section 5 presents the results. Section 6 gives an account of related work and Section 7 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning:",
"sec_num": "3."
},
{
"text": "We used the German-English and Czech-English datasets from the WIT 3 TED corpus (Cettolo, 2016) ",
"cite_spans": [
{
"start": 80,
"end": 95,
"text": "(Cettolo, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design Parallel Data",
"sec_num": "2"
},
{
"text": "In order to train and evaluate the external classifier on the extracted features, we required data annotated with morphological tags. We used the following tools recommended on the Moses website 1 to annotate the data: LoPar (Schmid, 2000) for German, Tree-tagger (Schmid, 1994) for Czech and MXPOST (Ratnaparkhi, 1998) for English. The number of tags produced by these taggers is 214 for German and 368 for Czech.",
"cite_spans": [
{
"start": 225,
"end": 239,
"text": "(Schmid, 2000)",
"ref_id": "BIBREF32"
},
{
"start": 264,
"end": 278,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF31"
},
{
"start": 300,
"end": 319,
"text": "(Ratnaparkhi, 1998)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Annotations",
"sec_num": null
},
{
"text": "We used the standard MT pre-processing pipeline of tokenizing and truecasing the data using Moses scripts. We did not apply byte-pair encoding (BPE) (Sennrich et al., 2016b) , which has recently become a common part of the NMT pipeline, because both our analysis and the annotation tools are word level. 2 However, experimenting with BPE and other representations such as character-based models (Kim et al., 2015) would be interesting. 3",
"cite_spans": [
{
"start": 149,
"end": 173,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF35"
},
{
"start": 395,
"end": 413,
"text": "(Kim et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preprocessing",
"sec_num": null
},
{
"text": "We used the seq2seq-attn implementation (Kim, 2016) with the following default settings: word embeddings and LSTM states with 500 dimensions, SGD with an initial learning rate of 1.0 and decay rate of 0.5 (after the 9th epoch), and dropout rate of 0.3. We use two uni-directional hidden layers for both the encoder and the decoder.",
"cite_spans": [
{
"start": 40,
"end": 51,
"text": "(Kim, 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Systems",
"sec_num": null
},
{
"text": "1 These have been used frequently to annotate data in the previous evaluation campaigns (Birch et al., 2014; Durrani et al., 2014a) . 2 The difficulty with using these is that it is not straightforward to derive word representations out of a decoder that processes BPE-ed text, because the original words are split into subwords. We considered aggregating the representations of BPE subword units, but the choice of aggregation strategy may have an undesired impact on the analysis. For this reason we decided to leave exploration of BPE for future work.",
"cite_spans": [
{
"start": 88,
"end": 108,
"text": "(Birch et al., 2014;",
"ref_id": "BIBREF7"
},
{
"start": 109,
"end": 131,
"text": "Durrani et al., 2014a)",
"ref_id": "BIBREF12"
},
{
"start": 134,
"end": 135,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Systems",
"sec_num": null
},
{
"text": "3 Character-based models are becoming increasingly popular in Neural MT, for addressing the rare word problem -and they have been used previously also to benefit MT for morphologically rich (Luong et al., 2010; Belinkov and Glass, 2016; Costa-juss\u00e0 and Fonollosa, 2016) and closely related languages (Durrani et al., 2010; Sajjad et al., 2013) . Figure 1 : Features for the word Nun (DEC t 1 ) are extracted from the decoder of a pre-trained NMT system and provided to the classifier for training",
"cite_spans": [
{
"start": 190,
"end": 210,
"text": "(Luong et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 211,
"end": 236,
"text": "Belinkov and Glass, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 237,
"end": 269,
"text": "Costa-juss\u00e0 and Fonollosa, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 300,
"end": 322,
"text": "(Durrani et al., 2010;",
"ref_id": "BIBREF14"
},
{
"start": 323,
"end": 343,
"text": "Sajjad et al., 2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 346,
"end": 354,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "NMT Systems",
"sec_num": null
},
{
"text": "The NMT system is trained for 13 epochs, and the model with the best validation loss is used for extracting features for the external classifier. We use a vocabulary size of 50000 on both the source and target side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Systems",
"sec_num": null
},
{
"text": "For the classification task, we used a feed-forward network with one hidden layer, dropout (\u03c1 = 0.5), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the NMT decoder's hidden state (500 dimensions). The classifier has no explicit access to context other than the hidden representation generated by the NMT system, which allows us to focus on the quality of the representation. We use Adam (Kingma and Ba, 2014) with default parameters to minimize the cross-entropy objective.",
"cite_spans": [
{
"start": 498,
"end": 519,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Settings",
"sec_num": null
},
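{
"text": "The following is a minimal sketch of the external classifier described above, assuming PyTorch (the original experiments used a Lua Torch pipeline); the class and variable names are illustrative, not taken from the released code.

import torch
import torch.nn as nn

class TagClassifier(nn.Module):
    # Feed-forward classifier over frozen NMT states: one hidden layer of the same
    # size as the decoder state (500), ReLU, dropout 0.5, and an output layer over
    # the tag set; nn.CrossEntropyLoss applies the (log-)softmax internally.
    def __init__(self, input_dim=500, hidden_dim=500, num_tags=214, dropout=0.5):
        super().__init__()
        self.hidden = nn.Linear(input_dim, hidden_dim)
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_dim, num_tags)

    def forward(self, features):
        # features: (batch, 500) representations extracted from the frozen NMT model
        return self.out(self.drop(self.relu(self.hidden(features))))

classifier = TagClassifier()
optimizer = torch.optim.Adam(classifier.parameters())  # default Adam parameters
criterion = nn.CrossEntropyLoss()                      # cross-entropy objective",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Settings",
"sec_num": null
},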
{
"text": "3 Decoder Analysis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Settings",
"sec_num": null
},
{
"text": "We follow a process similar to Shi et al. (2016) and Belinkov et al. (2017a) to analyze the NMT systems but with a focus on the decoder component of the architecture. Formally, given a source sentence s = {s 1 , s 2 , ..., s N } and a target sentence t = {t 1 , t 2 , ..., t M }, we first use the encoder (Equation 1) to compute a set of hidden states h = {h 1 , h 2 , ..., h N }. We then use an attention mechanism (Bahdanau et al., 2014) to compute a weighted average of these hidden states from the previous decoder state (d i\u22121 ), known as the context vector c i (Equation 2). The context vector is a real valued vector of k dimensions, which is set to be the same as the hidden states in our case. The attention model computes a weight w h i for each hidden state of the encoder, thus giving soft alignment for each target word. The context vector is then used by the decoder (Equation 3) to generate the next word in the target sequence:",
"cite_spans": [
{
"start": 31,
"end": 48,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF36"
},
{
"start": 53,
"end": 76,
"text": "Belinkov et al. (2017a)",
"ref_id": "BIBREF3"
},
{
"start": 416,
"end": 439,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ENC : s = {s1, ..., sN } \u2192 h = {h1, ..., hN } (1) ATTNi : h, di\u22121, ti\u22121 \u2192 ci \u2208 R k (1 \u2264 i \u2264 M ) (2) DEC : {c1, ..., cM } \u2192 t = {t1, t2, ..., tM }",
"eq_num": "(3)"
}
],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "After training the NMT system, we freeze the parameters of the network and use the encoder or the decoder as a feature extractor to generate vectors representing words in the sentence. Let ENC s i denote the representation of a source word s i . We use ENC s i to train the external classifier that for predicting the morphological tag for s i and evaluate the quality of the representation based on our ability to train a good classifier. For word representations on the target side, we feed our word of interest t i as the previously predicted word, and extract the representation DEC t i from the higher layers (See Figure 1 for illustration). Note that in the decoder, the target word representations DEC t i are not learned for predicting the word t i , but the next word (t i+1 ). Hence, it is arguable that DEC t i actually captures morphological information about t i+1 rather than t i , which can also explain the poorer decoder accuracies. To test this argument, we also trained our systems assuming that DEC t i encodes morphological information about the next word t i+1 . In this case, the decoder performance dropped by almost 15%. DEC t i probably encodes morphological information about both the current word (t i ) and the next word (t i+1 ). However, we leave this exploration for future work, and work with the assumption that DEC t i encodes information about word t i .",
"cite_spans": [],
"ref_spans": [
{
"start": 619,
"end": 627,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
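{
"text": "As a rough illustration of this extraction step, the sketch below assumes a hypothetical PyTorch-style encoder-decoder interface (the actual experiments used seq2seq-attn in Lua Torch); teacher-forcing the reference target words, the top-layer decoder state at position i is taken as DEC t i.

import torch

@torch.no_grad()
def extract_decoder_features(model, src_ids, tgt_ids):
    # model is the frozen, pre-trained NMT system; src_ids/tgt_ids are id tensors.
    # The encoder/decoder interface below is assumed for illustration only.
    model.eval()
    enc_states = model.encoder(src_ids)               # h_1 ... h_N
    dec_states = model.decoder(tgt_ids, enc_states)   # one 500-d state per target position
    return dec_states                                 # dec_states[:, i] is DEC_t_i

# Each extracted state is paired with the morphological tag of its target word to
# form one training example for the external classifier of Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},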
{
"text": "Before diving into the decoder's performance, we first compare the performance of encoder versus decoder by training De\u2194En 4 and Cz\u2194En NMT models. We use the De\u2192En/Cz\u2192En models to extract encoder representations, and the En\u2192De/En\u2192Cz models to extract decoder representations. We then feed these representations to our classifier to predict morphological tags for German and Czech words. Table 2 shows that German and Czech representations learned on the encoder-side (using the De\u2192En/Cz\u2192En models) give much better accuracy compared to the ones learned on the decoder-side (using the En\u2192De/En\u2192Cz models). Given this difference in performance between the two components in our NMT system, we analyze the decoder further in various settings: comparing the performance i) with and without the attention mechanism, and ii) augmenting the decoder representation with the representation of the most attended source word. The baseline NMT models were trained with an attention mechanism.",
"cite_spans": [],
"ref_spans": [
{
"start": 387,
"end": 394,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.2"
},
{
"text": "In an attempt to probe what effect the attention mechanism has on the decoder's performance in the context of learning target language morphology, we trained NMT models without attention. Next we tried to take our baseline model (with attention) and augment its decoder representations with the encoder hidden state corresponding to the maximum attention (hereby denoted as ENC t i ). Our hypothesis is that since the decoder focuses on this hidden state to output the next target word, it may also encode some useful information about target morphology. Lastly, we also train a classifier on ENC t i alone in order to compare the ability of the encoder and decoder in learning target language morphology. Table 3 summarizes the results of these experiments. Comparing systems with (DEC t i ) and without attention (w/o-ATTN), we see that the accuracy on the morphological tagging task goes up when no attention is used. This can be explained by the fact that in the case of no attention, the decoder only receives a single context vector from the encoder and it has to learn more information about each target word to make accurate predictions. It is difficult for the encoder to transfer information about each target word using the same context vector cleanly, causing the decoder to learn more, resulting in better decoder performance in regards to the morphological information learned.",
"cite_spans": [],
"ref_spans": [
{
"start": 706,
"end": 713,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.2"
},
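{
"text": "The DEC t i +ENC t i representation analyzed below can be built as in the following sketch (PyTorch-style, with illustrative shapes): for each target position we pick the encoder state of the source word with the highest attention weight and concatenate it to the decoder state.

import torch

def augment_with_max_attention(dec_states, enc_states, attn_weights):
    # dec_states: (batch, tgt_len, dim), enc_states: (batch, src_len, dim),
    # attn_weights: (batch, tgt_len, src_len) as produced by the attention model.
    max_src = attn_weights.argmax(dim=-1)                            # most attended source position per target word
    idx = max_src.unsqueeze(-1).expand(-1, -1, enc_states.size(-1))
    enc_t = torch.gather(enc_states, 1, idx)                         # ENC_t_i: (batch, tgt_len, dim)
    return torch.cat([dec_states, enc_t], dim=-1)                    # DEC_t_i + ENC_t_i: (batch, tgt_len, 2*dim)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.2"
},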
{
"text": "The second part of the table presents results involving encoder representations to aid morphological analysis of target words. There is a significant boost in the classifier's performance when the decoder representation for a target word t i is concatenated with the encoder representation of the most attended source word (DEC t i +ENC t i ). This hints towards several hypotheses: i) because the source and target words are translations, they share some morphological properties (e.g. nouns get translated to nouns, etc.), ii) the encoder also learns and stores information about the target language, so that the attention mechanism can make use of this information while deciding which word to focus on next. To ensure that the encoder and decoder indeed learn different information, we also tried to classify the morphological tag of a given word t i based on the encoder representation of the most attended source word alone (ENC t i ). We see a drop in accuracy, showing that both encoder and decoder learned different things about the same target word and are complementary representations. We can also see that the accuracy of the combined representation (DEC t i +ENC t i ) still lags behind the encoder's performance in predicting source morphology (Table 2 ). This indicates that there is still room for improvement in the NMT model's ability to learn target side morphology.",
"cite_spans": [],
"ref_spans": [
{
"start": 1259,
"end": 1267,
"text": "(Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.2"
},
{
"text": "In this section, we showed that the encoder and decoder learn different amounts of morphology due to the varying nature of their tasks within NMT architecture. The decoder depends on the encoder and attention mechanism to generate the correct morphological variant of a target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.2"
},
{
"text": "Motivated by the result that the decoder learns considerably less amount of morphology than the (Table 2 ) and the overall system does not learn as much about target morphology as source morphology, we investigated three ways to directly inject target morphology into the decoder, namely: i) Joint Generation, ii) Joint-data Learning, iii) Multi-task Learning. Figure 2 illustrates the approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 104,
"text": "(Table 2",
"ref_id": "TABREF3"
},
{
"start": 361,
"end": 369,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Morphology-aware Decoder",
"sec_num": "4"
},
{
"text": "As our first approach, we considered a solution that uses the standard NMT architecture, but is trained on a modified dataset. To incorporate morphological information, we modify the target sentence by appending the morphological tag sequence to it. The NMT system trained on this data learns to produce both words and morphological tags simultaneously. Formally, given a source sentence s = {s 1 , ..., s N }, target sentence t = {t 1 , ..., t M } and its morphological sequence m = {m 1 , ..., m M }, we train an NMT system on (s , t ) pairs, where s = s and t = t + m. Although this model is quite weak and the (word and morphological) bases are quite far away, we posit that the attention mechanism might be able to attend to the same source word twice. Given this, the decoder gets a similar representation from which it has to predict a word in the first instance, and a tag in the second -thus helping in common learning for the two tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Generation",
"sec_num": "4.1"
},
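{
"text": "A minimal sketch of this data transformation (function and variable names are illustrative):

def make_joint_generation_pair(src_words, tgt_words, tgt_tags):
    # Joint generation: the source side is unchanged (s' = s) and the target side
    # becomes the word sequence followed by its morphological tag sequence (t' = t + m).
    assert len(tgt_words) == len(tgt_tags)
    return src_words, tgt_words + tgt_tags

# A target sentence of length M thus yields a training target of length 2M:
# words t_1 ... t_M followed by tags m_1 ... m_M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Generation",
"sec_num": "4.1"
},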
{
"text": "Given the drawbacks of the first approach, we considered another data augmentation technique inspired by multilingual NMT systems (Johnson et al., 2016) . Instead of having multiple source and target languages, we used one source language and two target language variations. The training data consists of sequences of source\u2192target words and source\u2192target morphological tags. We added an artificial token in the beginning of each source sentence indicating whether we want to generate target words or morphological tags. Using an artificial token in the source sentence has been explored and shown to work well to control the style of the target language (Sennrich et al., 2016a) . The objective function is the same as the one in usual sequence-to-sequence models, and is hence shared to minimize both morphological and translation error given the mixed data.",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 655,
"end": 679,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint-data Learning",
"sec_num": "4.2"
},
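{
"text": "A minimal sketch of this augmentation, assuming illustrative artificial tokens <2words> and <2tags> (the exact token strings are not specified above):

def make_joint_data_pairs(src_words, tgt_words, tgt_tags):
    # Each source sentence is used twice; the prefixed artificial token tells the
    # model whether to generate target words or target morphological tags.
    return [
        (['<2words>'] + src_words, tgt_words),
        (['<2tags>'] + src_words, tgt_tags),
    ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint-data Learning",
"sec_num": "4.2"
},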
{
"text": "In this final method, we decided to follow a more principled approach and modified the standard sequence-to-sequence for multi-task learning. The goal in multi-task training is to learn several tasks simultaneously such that each task can benefit from the mutual information learned (Collobert and Weston, 2008) . 5 With this motivation, we modified the NMT decoder to predict not only a word but also its corresponding tag. All of the layers below the output layers are shared. We have two output layers in parallel -the first to predict the target word, and the second to predict the morphological tag of the target word. Both ouput lay- Figure 3 : Improvements from adding morphology. A y-value of zero represents the baseline ers have their own separate loss function. While training, we combine the losses from both output layers to jointly train the system. This is different from the Joint-data learning technique, where we predict entire sequences of words or tags without any dependence on each other.",
"cite_spans": [
{
"start": 283,
"end": 311,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF9"
},
{
"start": 314,
"end": 315,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 640,
"end": 648,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-task Learning",
"sec_num": "4.3"
},
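{
"text": "A sketch of the modified output layer described above (PyTorch-style, with dimensions following Section 2; this mirrors the description rather than the released Lua code): all decoder layers below the output are shared, and two parallel projections predict the target word and its morphological tag.

import torch.nn as nn

class MultiTaskOutput(nn.Module):
    def __init__(self, hidden_dim=500, vocab_size=50000, num_tags=214):
        super().__init__()
        self.word_out = nn.Linear(hidden_dim, vocab_size)  # word prediction head
        self.tag_out = nn.Linear(hidden_dim, num_tags)      # morphological tag head

    def forward(self, dec_state):
        # dec_state: shared top-layer decoder state at one time step
        return self.word_out(dec_state), self.tag_out(dec_state)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task Learning",
"sec_num": "4.3"
},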
{
"text": "Formally, given a set of N tasks, sequence-tosequence multi-task learning involves an objective function minimizing the overall loss, which is a weighted combination of the N individual task losses. In our scenario, the training corpus consisted of a multi-target corpus: source\u2192target words and source\u2192target morphological tags, i.e N = 2. Hence, given a set of training examples D = { s (n) , t (n) , m (n) } N n=1 , where s is the source sentence, t is the target sentence and m is the target morphological tag sequence, the new objective function to maximize is as follows:",
"cite_spans": [
{
"start": 397,
"end": 400,
"text": "(n)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task Learning",
"sec_num": "4.3"
},
{
"text": "L =(1 \u2212 \u03bb) N n=1 log P (t (n) |s (n) ; \u03b8) + \u03bb N n=1 log P (m (n) |s (n) ; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task Learning",
"sec_num": "4.3"
},
{
"text": "Where \u03bb is a hyper-parameter used to shift focus towards translation or the morphological tagging. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task Learning",
"sec_num": "4.3"
},
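{
"text": "In code, the combined objective above (negated for minimization) can be sketched as follows; lambda = 0 recovers the baseline, lambda = 1 optimizes only the tagging loss, and lambda = 0.2 corresponds to the best BLEU setting reported in Section 5.

import torch.nn.functional as F

def multitask_loss(word_logits, tag_logits, word_targets, tag_targets, lam=0.2):
    # Weighted combination of the two cross-entropy losses, mirroring the objective
    # above: (1 - lambda) * translation loss + lambda * tagging loss.
    word_loss = F.cross_entropy(word_logits, word_targets)
    tag_loss = F.cross_entropy(tag_logits, tag_targets)
    return (1 - lam) * word_loss + lam * tag_loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task Learning",
"sec_num": "4.3"
},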
{
"text": "Our results show that the multi-task learning approach performed the best among the three approaches, while the Joint Generation method has the poorest performance. Figure 3 summarizes the results for different language pairs. The joint generation method degrades overall translation performance, as expected, given its weakness from a modeling perspective. It is possible that even though the attention mechanism is able to focus on the source sequence in two passes, the parts of the network that predict words and tags are not tightly coupled enough to learn from each other.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 173,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "The BLEU scores improved when using the other two methods. We achieved an improvement of up to 0.6 BLEU points and 3% (in tagging accuracy). The best improvements were obtained in the En\u2192De direction, while we observed lesser gains in the De\u2192En. This is perhaps because English is morphologically poorer, and the baseline system was able to learn the required amount of morphological information from the text itself. Improvements were also obtained for the En\u2192Cz direction, although not as much as in German. This could be due to data sparsity: Czech is much richer in morphology, 7 and the available TED En\u2194Cz data was 40% less than the En\u2194De data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Both Joint-data learning and Multi-task learning improved overall translation performance. In the case of En\u2192De, the performance of both approaches is very similar. However, each has its own pros and cons. While the joint-data learning method is a simple approach that allows to add morphology and other linguistic information without needing to change the architecture, the multitask learning approach is a more principled and powerful way of integrating the same information into the decoder. Having separate objective functions in multi-task learning also allows us to adjust the balance between the two tasks, which can be handy if the morphological information quality is not very high. On the flip side, this additional explicit weight adjustment can also be viewed as a potential constraint that is not present in the jointdata learning approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint-data vs. Multi-task Learning",
"sec_num": null
},
{
"text": "As discussed, the multi-task learning approach has an additional weight hyper-parameter \u03bb that adjusts the balance between word and tag prediction. Figure 4 shows the result of varying \u03bb from no morphological information (\u03bb = 0) to only morphological information (\u03bb = 1) on test-11 set. The left y-axis presents the BLEU score and the right y-axis presents the morphological accuracy. The best morphological accuracy is achieved at \u03bb = 1 which does not correspond to best translation quality since at that point the model is only minimizing the tag objective function. Similarly at \u03bb = 0, the model falls back to the baseline model with a single objective function minimizing translation error. For all language pairs, we consistently achieved the best BLEU score at \u03bb = 0.2. The parameter was tuned on a separate held out development set (test-11), and the results shown in Figure 3 are on blind test sets (test-12,13). Averages are reported in the figure.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 875,
"end": 884,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-task Weight Hyper-Parameter",
"sec_num": null
},
{
"text": "The related work to this paper can be broken into two groups:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Analysis Several approaches have been devised to analyze MT models and the linguistic properties that are learned during training. A common approach has been to use activations from a trained model to train an external classifier to predict some relevant information about the input. K\u00f6hn (2015) and Qian et al. (2016b) analyzed linguistic information learned in word embeddings, while Qian et al. (2016a) went further and analyzed linguistic properties in the hidden states of a recurrent neural network. Adi et al. (2016) looked at the overall information learned in a sentence summary vector generated by an RNN using a similar approach. Our approach closely aligns with that of Shi et al. (2016) and Belinkov et al. (2017a) , where the activations from various layers in a trained NMT system are used to predict linguistic properties.",
"cite_spans": [
{
"start": 284,
"end": 295,
"text": "K\u00f6hn (2015)",
"ref_id": "BIBREF23"
},
{
"start": 300,
"end": 319,
"text": "Qian et al. (2016b)",
"ref_id": "BIBREF28"
},
{
"start": 506,
"end": 523,
"text": "Adi et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 682,
"end": 699,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF36"
},
{
"start": 704,
"end": 727,
"text": "Belinkov et al. (2017a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Integrating Morphology Some work has also been done in injecting morphological or more general linguistic knowledge into an NMT system. Sennrich and Haddow (2016) proposed a factored model that incorporates linguistic features on the source side as additional factors. An embedding is learned for each factor, just like a source word, and then the word and factor embeddings are combined before being passed on to the encoder. Aharoni and Goldberg (2017) proposed a method to predict the target sentence along with its syntactic tree. They linearize the tree in order to use the existing sequence-to-sequence model. Nadejde et al. (2017) also evaluated several methods of incorporating syntactic knowledge on both the source and target. While they used factors on the source side, their best method for the target side was to linearize the information and interleave it between the target words. Garc\u00eda-Mart\u00ednez et al. (2016) used a neural MT model with multiple outputs, like in our case of Multi-task learning. Their model predicts two properties at every step, the lemma of the target word and its morphological information. They then use an external tool to use this information to generate the actual target word. Dong et al. (2015) presented multi-task learning to translate a language into multiple target languages, and Luong et al. (2015) did experiments involving several levels of source and target language information. There have been previous efforts to integrate morphology into MT systems by learning factored models Durrani et al., 2014b) over POS and morphological tags.",
"cite_spans": [
{
"start": 136,
"end": 162,
"text": "Sennrich and Haddow (2016)",
"ref_id": "BIBREF33"
},
{
"start": 896,
"end": 925,
"text": "Garc\u00eda-Mart\u00ednez et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1219,
"end": 1237,
"text": "Dong et al. (2015)",
"ref_id": "BIBREF11"
},
{
"start": 1324,
"end": 1347,
"text": "and Luong et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 1533,
"end": 1555,
"text": "Durrani et al., 2014b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper we analyzed and investigated ways to improve morphological learning in the NMT decoder. We carried a series of experiments to understand why the decoder learns considerably less morphology than the encoder in the NMT architecture. We found that the decoder needs assistance from the encoder and the attention mechanism to generate correct target morphology. Additionally we explored three ways to explicitly inject morphology in the decoder: joint generation, joint-data learning, and multi-task learning. We found multi-task learning to outperform the other two methods. The simpler joint-data learning method also gave decent improvements. The code for the experiments and the modified framework is available at https://github.com/ fdalvi/seq2seq-attn-multitask.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "By De\u2194En, we mean independently trained German-to-English and English-to-German models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, Eriguchi et al. (2017) jointly learned the tasks of parsing and translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We tuned the weight parameter on held-out data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The number of morphological tags in Czech are 368 versus 214 in German.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.04207"
]
},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained Anal- ysis of Sentence Embeddings Using Auxiliary Pre- diction Tasks. arXiv preprint arXiv:1608.04207.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Towards String-To-Tree Neural Machine Translation",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "132--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Towards String-To-Tree Neural Machine Translation. In Pro- ceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 132-140. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What do Neural Machine Translation Models Learn about Morphology?",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "861--872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017a. What do Neural Machine Translation Models Learn about Morphol- ogy? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 861-872. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Large-Scale Machine Translation between Arabic and Hebrew: Available Corpora and Initial Results",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Semitic Machine Translation",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and James Glass. 2016. Large-Scale Machine Translation between Arabic and Hebrew: Available Corpora and Initial Results. In Proceed- ings of the Workshop on Semitic Machine Trans- lation, pages 7-12, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluating layers of representation in neural machine translation on parts-of-speech and semantic tagging task",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "Marquez",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Lluis Marquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017b. Evaluating layers of representation in neural machine translation on parts-of-speech and semantic tagging task. In Proceedings of the 8th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Taipei, Taiwan. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural versus Phrase-Based Machine Translation Quality: a Case Study",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "257--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus Phrase- Based Machine Translation Quality: a Case Study. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 257-267, Austin, Texas. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Edinburgh SLT and MT system description for the IWSLT 2014 evaluation",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 11th International Workshop on Spoken Language Translation, IWSLT '14",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Birch, Matthias Huck, Nadir Durrani, Niko- lay Bogoychev, and Philipp Koehn. 2014. Ed- inburgh SLT and MT system description for the IWSLT 2014 evaluation. In Proceedings of the 11th International Workshop on Spoken Language Trans- lation, IWSLT '14, Lake Tahoe, CA, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An Arabic-Hebrew parallel corpus of TED talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the AMTA Workshop on Semitic Machine Translation (SeMaT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo. 2016. An Arabic-Hebrew parallel cor- pus of TED talks. In Proceedings of the AMTA Workshop on Semitic Machine Translation (SeMaT), Austin, US-TX.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th International Conference on Machine Learning, ICML '08, pages 160-167, New York, NY, USA. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Character-based Neural Machine Translation",
"authors": [
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "357--361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta R. Costa-juss\u00e0 and Jos\u00e9 A. R. Fonollosa. 2016. Character-based Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 357-361, Berlin, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-Task Learning for Multiple Language Translation",
"authors": [
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-Task Learning for Mul- tiple Language Translation. In ACL (1).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Edinburgh's phrase-based machine translation systems for WMT-14",
"authors": [
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ACL 2014 Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014a. Edinburgh's phrase-based machine translation systems for WMT-14. In Pro- ceedings of the ACL 2014 Ninth Workshop on Sta- tistical Machine Translation, pages 97-104, Balti- more, MD, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Investigating the Usefulness of Generalized Word Representations in SMT",
"authors": [
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nadir Durrani, Philipp Koehn, Helmut Schmid, and Alexander Fraser. 2014b. Investigating the Useful- ness of Generalized Word Representations in SMT. In Proceedings of COLING 2014, the 25th Inter- national Conference on Computational Linguistics: Technical Papers, pages 421-432, Dublin, Ireland. Dublin City University and Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hindi-to-Urdu Machine Translation through Transliteration",
"authors": [
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "465--474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nadir Durrani, Hassan Sajjad, Alexander Fraser, and Helmut Schmid. 2010. Hindi-to-Urdu Machine Translation through Transliteration. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 465-474, Up- psala, Sweden. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to parse and translate improves neural machine translation",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "72--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 72-78. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Factored neural machine translation",
"authors": [
{
"first": "Mercedes",
"middle": [],
"last": "Garc\u00eda-Mart\u00ednez",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mercedes Garc\u00eda-Mart\u00ednez, Lo\u00efc Barrault, and Fethi Bougares. 2016. Factored neural machine transla- tion. CoRR, abs/1609.04621.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Seq2seq-attn",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2016. Seq2seq-attn. https:// github.com/harvardnlp/seq2seq-attn.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Character-aware Neural Language Models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.06615"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M Rush. 2015. Character-aware Neural Lan- guage Models. arXiv preprint arXiv:1508.06615.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adam: A Method for Stochastic Optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Factored Translation Models",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "868--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Hieu Hoang. 2007. Factored Trans- lation Models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP-CoNLL), pages 868-876, Prague, Czech Republic. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Association for Computational Linguistics (ACL'07)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the Association for Computational Linguistics (ACL'07), Prague, Czech Republic.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "What's in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation",
"authors": [
{
"first": "Arne",
"middle": [],
"last": "K\u00f6hn",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2067--2073",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arne K\u00f6hn. 2015. What's in an Embedding? Analyz- ing Word Embeddings through Multilingual Evalu- ation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 2067-2073, Lisbon, Portugal. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multitask sequence to sequence learning",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi- task sequence to sequence learning. CoRR, abs/1511.06114.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A Hybrid Morpheme-Word Representation for Machine Translation of Morphologically Rich Languages",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "148--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Preslav Nakov, and Min-Yen Kan. 2010. A Hybrid Morpheme-Word Representation for Machine Translation of Morphologically Rich Languages. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Pro- cessing, pages 148-157. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Syntax-aware neural machine translation using CCG. CoRR",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Nadejde",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, and Alexandra Birch. 2017. Syntax-aware neu- ral machine translation using CCG. CoRR, abs/1702.01147.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Analyzing Linguistic Knowledge in Sequential Model of Sentence",
"authors": [
{
"first": "Xipeng",
"middle": [],
"last": "Peng Qian",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "826--835",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016a. Analyzing Linguistic Knowledge in Sequential Model of Sentence. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 826-835, Austin, Texas. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Investigating Language Universal and Specific Properties in Word Embeddings",
"authors": [
{
"first": "Xipeng",
"middle": [],
"last": "Peng Qian",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1478--1488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016b. Investigating Language Universal and Specific Prop- erties in Word Embeddings. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1478-1488, Berlin, Germany. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Maximum Entropy Models for Natural Language Ambiguity Resolution",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Translating Dialectal Arabic to English",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hassan Sajjad, Kareem Darwish, and Yonatan Be- linkov. 2013. Translating Dialectal Arabic to En- glish. In Proceedings of the 51st Conference of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Part-of-Speech Tagging with Neural Networks",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "172--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 1994. Part-of-Speech Tagging with Neural Networks. In Proceedings of the 15th Inter- national Conference on Computational Linguistics (Coling 1994), pages 172-176, Kyoto, Japan. Col- ing 1994 Organizing Committee.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "LoPar: Design and Implementation. Bericht des Sonderforschungsbereiches \"Sprachtheoretische Grundlagen fr die Computerlinguistik\" 149, Institute for Computational Linguistics",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 2000. LoPar: Design and Imple- mentation. Bericht des Sonderforschungsbereiches \"Sprachtheoretische Grundlagen fr die Computerlin- guistik\" 149, Institute for Computational Linguis- tics, University of Stuttgart.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Linguistic input features improve neural machine translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "83--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation, pages 83-91, Berlin, Germany. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Controlling Politeness in Neural Machine Translation via Side Constraints",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Controlling Politeness in Neural Machine Translation via Side Constraints. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, San Diego, Califor- nia.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Does String-Based Neural MT Learn Source Syntax?",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Inkit",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1526--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does String-Based Neural MT Learn Source Syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526-1534, Austin, Texas. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A Multifaceted Evaluation of Neural versus Phrase-Based Machine Translation for 9 Language Directions",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
},
{
"first": "V\u00edctor",
"middle": [
"M"
],
"last": "S\u00e1nchez-Cartagena",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1063--1073",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Toral and V\u00edctor M. S\u00e1nchez-Cartagena. 2017. A Multifaceted Evaluation of Neural versus Phrase- Based Machine Translation for 9 Language Direc- tions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computa- tional Linguistics: Volume 1, Long Papers, pages 1063-1073, Valencia, Spain. Association for Com- putational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Various approaches to inject morphological knowledge into the decoder encoder",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Multi-task learning: Translation vs. Morphological Tagging weight for En\u2192De model",
"num": null
},
"TABREF1": {
"text": "Statistics for the data used for training, tuning and testing",
"content": "
",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "Baseline ENCs i DECt i",
"content": "De\u2194En | 89.5 | 44.55 |
Cz\u2194En | 77.0 | 36.35 |
",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Comparison of morphological accuracy for the encoder and decoder representations",
"content": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "DECt i w/o-ATTN DECt i +ENCt i ENCt i",
"content": "En\u2192De | 44.55 | 50.26 | 60.34 | 43.43 |
En\u2192Cz | 36.35 | 42.09 | 48.64 | 36.36 |
",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"text": "Morphological Tagging accuracy of the Decoder with and without attention, and effect of considering the most attended source word (ENC t i )",
"content": "",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}