|
{ |
|
"paper_id": "N19-1046", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:57:42.250107Z" |
|
}, |
|
"title": "Understanding and Improving Hidden Representation for Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Guanlin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Lemao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Tencent AI Lab \u221e The", |
|
"institution": "Chinese University of Hong Kong", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xintong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Conghui", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "chzhu@hit.edu.cn" |
|
}, |
|
{ |
|
"first": "Tiejun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "tjzhao@hit.edu.cn" |
|
}, |
|
{ |
|
"first": "Shuming", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Tencent AI Lab \u221e The", |
|
"institution": "Chinese University of Hong Kong", |
|
"location": {} |
|
}, |
|
"email": "shumingshi@tencent.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Multilayer architectures are currently the gold standard for large-scale neural machine translation. Existing works have explored some methods for understanding the hidden representations, however, they have not sought to improve the translation quality rationally according to their understanding. Towards understanding for performance improvement, we first artificially construct a sequence of nested relative tasks and measure the feature generalization ability of the learned hidden representation over these tasks. Based on our understanding, we then propose to regularize the layer-wise representations with all treeinduced tasks. To overcome the computational bottleneck resulting from the large number of regularization terms, we design efficient approximation methods by selecting a few coarse-to-fine tasks for regularization. Extensive experiments on two widely-used datasets demonstrate the proposed methods only lead to small extra overheads in training but no additional overheads in testing, and achieve consistent improvements (up to +1.3 BLEU) compared to the state-of-the-art translation model.", |
|
"pdf_parse": { |
|
"paper_id": "N19-1046", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Multilayer architectures are currently the gold standard for large-scale neural machine translation. Existing works have explored some methods for understanding the hidden representations, however, they have not sought to improve the translation quality rationally according to their understanding. Towards understanding for performance improvement, we first artificially construct a sequence of nested relative tasks and measure the feature generalization ability of the learned hidden representation over these tasks. Based on our understanding, we then propose to regularize the layer-wise representations with all treeinduced tasks. To overcome the computational bottleneck resulting from the large number of regularization terms, we design efficient approximation methods by selecting a few coarse-to-fine tasks for regularization. Extensive experiments on two widely-used datasets demonstrate the proposed methods only lead to small extra overheads in training but no additional overheads in testing, and achieve consistent improvements (up to +1.3 BLEU) compared to the state-of-the-art translation model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Neural machine translation (NMT) has witnessed great successes in recent years (Bahdanau et al., 2014; Wu et al., 2016) . Current state-of-the-art (SOTA) NMT models are mainly constructed by a stacked neural architecture consisting of multiple hidden layers from bottom-up, where a classifier is built upon the topmost layer to solve the target task of translation (Gehring et al., 2017; Vaswani et al., 2017) . Most works tend to focus on the translation performance of the classifier defined on the topmost layer, however, they do not deeply understand the learned representations of hidden layers. Shi et al. (2016) and Belinkov et al. (2017) attempt * Conghui Zhu is the corresponding author. to understand the hidden representations through the lens of a few linguistic tasks, while Ding et al. (2017) and Strobelt et al. (2018) propose appealing visualization approaches to understand NMT models including the representation of hidden layers. However, employing the analyses to motivate new methods for better translation, the ultimate goal of understanding NMT, is not achieved in these works. In our paper, we aim at understanding the hidden representation of NMT from an alternative viewpoint, and particularly we propose simple yet effective methods to improve the translation performance based on our understanding. We start from a fundamental question: what are the characteristics of the hidden representation for better translation modeling? Inspired by the lessons from transfer learning (Yosinski et al., 2014) , we propose to empirically verify the argument: good hidden representation for a target task should be able to generalize well across any similar tasks. Unlike Shi et al. (2016) and Belinkov et al. (2017) who employ one or two linguistic tasks involving human annotated data to evaluate the feature generalization ability of the hidden representation, which might make understanding bias to a specific task, we instead construct a nested sequence of many relative tasks with entailment structure induced by a hierarchical clustering tree over the output label space (target vocabulary). Each task is defined as predicting the cluster of the next token according to a given source sentence and its translation prefix. Similar to Yu et al. (2018) , Zamir et al. (2018) and Belinkov et al. (2017) , we measure the feature generalization ability of the hidden representation regarding each task. Our observations are ( \u00a72):", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 102, |
|
"text": "(Bahdanau et al., 2014;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 103, |
|
"end": 119, |
|
"text": "Wu et al., 2016)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 387, |
|
"text": "(Gehring et al., 2017;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 409, |
|
"text": "Vaswani et al., 2017)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 601, |
|
"end": 618, |
|
"text": "Shi et al. (2016)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 645, |
|
"text": "Belinkov et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 788, |
|
"end": 806, |
|
"text": "Ding et al. (2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 811, |
|
"end": 833, |
|
"text": "Strobelt et al. (2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1503, |
|
"end": 1526, |
|
"text": "(Yosinski et al., 2014)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1688, |
|
"end": 1705, |
|
"text": "Shi et al. (2016)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1710, |
|
"end": 1732, |
|
"text": "Belinkov et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 2256, |
|
"end": 2272, |
|
"text": "Yu et al. (2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 2275, |
|
"end": 2294, |
|
"text": "Zamir et al. (2018)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 2299, |
|
"end": 2321, |
|
"text": "Belinkov et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The hidden representations learned by NMT indeed has decent feature generalization ability for the tree-induced relative tasks compared to the randomly initialized NMT model and a strong baseline with lexical features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The hidden representations from the higher layers generalize better across tasks than those from the lower layers. And more similar tasks have closer performances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Based on the above findings, we decide to regularize and improve the hidden representations of NMT for better predictive performances regarding those relative tasks, in hope of achieving improved performance in terms of the target translation task. One natural solution is to feed all relative tasks to every hidden layer of the NMT decoder under the framework of multi-task learning. This may make the full coverage of the potential regularization effect. Unfortunately, this vanilla method is inefficient in training because there are more than one hundred task-layer combinations. 1 Based on the second finding, to approximate the vanilla method, we instead feed a single relative task to each hidden layer as a regularization auxiliary in a coarseto-fine manner ( \u00a73.1). Furthermore, we design another regularization criterion to encourage predictive decision consistency between a pair of adjacent hidden layers, which leads to better approximated regularization effect ( \u00a73.2). Our method is simple to implement and efficient for training and testing. Figure 1 illustrates the representation regularization framework. To summarize, our contributions are as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1058, |
|
"end": 1066, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose an approach to understand hidden representation of multilayer NMT by measuring their feature generalization ability across relative tasks constructed by a hierarchical clustering tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose two simple yet effective methods to regularize the hidden representation. These two methods serve as trade-offs between regularization coverage and efficiency with respect to the tree-induced tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We conduct experiments on two widely used datasets and obtain consistent improvements (up to +1.3 BLEU) over the current SOTA Transformer (Vaswani et al., 2017) model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 162, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we first introduce some background knowledge and notations of the multilayer NMT model. Then, we present a simple approach to better understand hidden representation through transfer learning. By analyzing the feature generalization ability, we draw some constructive conclusions which are used for designing regularization methods in Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Understanding Hidden Representation", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "Suppose x = x_1, ..., x_{|x|} is a source sentence, i.e., a sequence of source tokens, and y = y_1, ..., y_{|y|} is a target sentence that is a translation of x, where each y_t in y belongs to Y, the target vocabulary. A translation model minimizes the following chain-rule factorized negative log-likelihood loss:",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background and Notations",

"sec_num": "2.1"

},
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "mle = \u2212 log P (y | x; \u03b8) = \u2212 t log P (y t | x, y <t ; \u03b8),", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Background and Notations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where \u03b8 denotes the overall parameter of the translation model. According to Eq.(1), an alternative view of the translation problem can be cast to token-level stepwise classification (Daum\u00e9 et al., 2009) : predict the target token y t given a context consisting of x and y <t = y 1 , \u2022 \u2022 \u2022 , y t\u22121 corresponding to each factor P (y t | x, y <t ; \u03b8). The SOTA multilayer NMT models parameterize P (y t | x, y <t ; \u03b8) via powerful multilayer encoder and stacked layers of feature transformations h 1 , \u2022 \u2022 \u2022 , h L at the decoder side:", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 203, |
|
"text": "(Daum\u00e9 et al., 2009)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Notations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "P (y t | x, y <t ; \u03b8) = P (y t | x, y <t , h L ; \u03b8), (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Notations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where h l (x, y <t ) = \u03c6 l x, y <t ; h l\u22121 ; \u03b8 is the l th hidden layer recursively defined by \u03c6 l on h l\u22121 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Notations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We also use h l (x, y <t ) to represent the output hidden representation of layer l for a specific context. Note that, \u03c6 l bears several types of instantiation and is an active area of research (Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 211, |
|
"text": "(Wu et al., 2016;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 233, |
|
"text": "Gehring et al., 2017;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 255, |
|
"text": "Vaswani et al., 2017)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Notations", |
|
"sec_num": "2.1" |
|
}, |
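
The layer-wise parameterization above can be summarized with a small sketch. The following Python/PyTorch snippet is an editor's illustration, not code from the paper: the `LayerwiseDecoder` class, its block structure, and all dimensions are assumptions standing in for a real Transformer decoder; it only shows how the per-layer states h_1, ..., h_L are produced and how only h_L feeds the output softmax of Eq. (2).

```python
# Hypothetical sketch (not from the paper): the layer-wise view of Eq. (2).
# A stack of decoder blocks phi_l produces hidden states h_1, ..., h_L for a
# context (x, y_<t); only h_L feeds the output softmax over the vocabulary Y.
import torch
import torch.nn as nn

class LayerwiseDecoder(nn.Module):
    def __init__(self, d_model=512, n_layers=6, vocab_size=30000):
        super().__init__()
        # stand-in for the real phi_l blocks (self-attention + cross-attention + FFN)
        self.blocks = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_layers))
        self.out = nn.Linear(d_model, vocab_size)  # classifier on the topmost layer

    def forward(self, h0):
        # h0: [batch, d_model], the embedded context representation
        hiddens = []
        h = h0
        for block in self.blocks:
            h = torch.tanh(block(h))      # h_l = phi_l(...; h_{l-1}; theta)
            hiddens.append(h)             # keep every layer for later probing
        logits = self.out(hiddens[-1])    # P(y_t | x, y_<t, h_L; theta)
        return hiddens, logits
```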
|
{ |
|
"text": "Inspired by feature transfer learning (Yosinski et al., 2014) , we attempt to understand hidden representations of NMT by evaluating their generalization abilities across any tasks that are related to translation. There are some researchers who study hidden representations of NMT by using linguistic tasks such as morphology, named entity, part-orspeech or syntax (Shi et al., 2016; Belinkov et al., 2017 Belinkov et al., , 2018 . They typically rely on human annotated resources to train a model for each linguistic task, so their methods can not be used for languages which lack human annotations. Moreover, their considered tasks are too few to have a good coverage over task space for measuring transferability (Yu et al., 2018; Zamir et al., 2018) , and their understanding results may bias to a specific task. As a result, to evaluate the feature generalization ability of hidden representation, we artificially construct plenty of relative tasks which do not employ any human annotation. This makes our evaluation approach more general.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 61, |
|
"text": "(Yosinski et al., 2014)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 383, |
|
"text": "(Shi et al., 2016;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 405, |
|
"text": "Belinkov et al., 2017", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 429, |
|
"text": "Belinkov et al., , 2018", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 716, |
|
"end": 733, |
|
"text": "(Yu et al., 2018;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 734, |
|
"end": 753, |
|
"text": "Zamir et al., 2018)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Generalization Ability of Hidden Representations", |
|
"sec_num": "2.2" |
|
}, |
|
{

"text": "Definition of the relative tasks. Suppose Y^k denotes any partition (or clustering) of the output label space (target vocabulary) Y. That is, Y^k is a set of subsets Y^k_i ⊂ Y, i = 1, ..., |Y^k|, such that ∀i ≠ j, Y^k_i ∩ Y^k_j = ∅ and ∪_i Y^k_i = Y.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Generalization Ability of Hidden Representations",

"sec_num": "2.2"

},
|
{ |
|
"text": "We define the following relative task: given a context x, y <t , predict the subset or the cluster to which the t th token y t belongs in Y k , denoted as Y k (y t ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Generalization Ability of Hidden Representations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "To simplify notation, we regard Y k both as a relative task and as a partition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Generalization Ability of Hidden Representations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "It is clear that the above type of tasks are similar to the task of translation according to the description in Section \u00a72.1. Furthermore, different k represents different relative task and thus we actually obtain a great many relative tasks in total. However, it is impossible to evaluate the hidden representation on all those tasks; moreover, due to relationship between tokens (Hu et al., 2016) in Y, not all partitions are reasonable. As a consequence, motivated by the analysis of VC Dimension (Vapnik, 1995) , we construct a sequence of nested partitions with an entailment structure:", |
|
"cite_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 398, |
|
"text": "(Hu et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 500, |
|
"end": 514, |
|
"text": "(Vapnik, 1995)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Generalization Ability of Hidden Representations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "2 Y 1 \u2022 \u2022 \u2022 Y K .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Generalization Ability of Hidden Representations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The benefit is that a spectrum of task hardness can be constructed due to the increased partition or task cardinalities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Generalization Ability of Hidden Representations", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "As a matter of fact, we instantiate the above nested partitions through brown clustering (Brown et al., 1992; Stratos et al., 2014) over Y to get a hierarchical clustering tree and then consider each tree depth along the tree as a partition representing a relative task Y k (as shown in Figure 1 ). In the following experiments, we run brown clustering algorithm over a Ch\u21d2En dataset ( \u00a74) and construct a tree of English with depth 21. Without loss of generality, we regard the task Y 22 at a virtual 22 depth of the tree as equivalent to the translation task Y. Actually, Y and Y 22 have the same cardinality but are different in definition. 3", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 109, |
|
"text": "(Brown et al., 1992;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 131, |
|
"text": "Stratos et al., 2014)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 295, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Generalization Ability of Hidden Representations", |
|
"sec_num": "2.2" |
|
}, |
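
A minimal sketch of how the nested partitions Y^1 ⪯ · · · ⪯ Y^K can be derived from a hierarchical clustering tree, assuming each target token is assigned a binary path string as in Brown clustering; the function name, the toy paths, and the zero-padding convention are illustrative assumptions, not the paper's exact preprocessing (see Appendix A for that).

```python
# Hypothetical sketch: deriving the nested partitions Y^1 ... Y^K from a
# hierarchical clustering tree in which each token has a binary path string
# (as produced by Brown clustering). The depth-k cluster of a token is the
# length-k prefix of its (padded) path, so the partitions are nested.
def nested_partitions(token_to_path, max_depth):
    # token_to_path: dict token -> bit string, e.g. {"cat": "00", "is": "1"}
    cluster_of = []  # cluster_of[k-1][token] = cluster id at depth k
    for k in range(1, max_depth + 1):
        prefix_to_id, mapping = {}, {}
        for token, path in token_to_path.items():
            prefix = path[:k].ljust(k, "0")      # stretch shorter paths (left branching)
            cid = prefix_to_id.setdefault(prefix, len(prefix_to_id))
            mapping[token] = cid
        cluster_of.append(mapping)
    return cluster_of  # cluster_of[k-1] defines the relative task Y^k

partitions = nested_partitions(
    {"cat": "00", "dog": "01", "run": "10", "jump": "11", "is": "1"}, 2)
# partitions[0] groups {cat, dog} vs {run, jump, is}; partitions[1] refines it.
```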
|
{ |
|
"text": "We use multi-class logistic regression to fit the layer-wise hidden representation learned by a well-trained 6-layer Transformer (Vaswani et al., 2017) over each training instance x, y <t . Specifically, given a context x, y <t , for each task Y k and a hidden representation h l (x, y <t ) of this context, which is fixed as constant, we predict the cluster Y k (y t ) according to the following probability:", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 151, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating generalization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P Y k (y t ) | h l (x, y <t ); \u03b8 l Y k ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Evaluating generalization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u03b8 l Y k is the parameter of the logistic regression model for task Y k at l th layer. The difference between Eq.(3) and Eq.(2) is that the former is the linear model parameterized by \u03b8 l Y k while the latter is the NMT model parameterized by \u03b8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating generalization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since there are L = 6 layers in Transformer's decoder and K = 22 relative tasks, we have more than one hundred such linear models defined with Eq. (3) in total. Therefore, it is costly to train them independently. Since the loss for each linear model is convex, joint training leads to exactly the same optimum as independent training and thus we employ mini-batch stochastic gradient descent to minimize the joint loss as follows: After training, we fix each \u03b8 l Y k and then measure the feature generalization ability of each h l by validating on the task Y k regarding a heldout dataset, following Yu et al. (2018) . For validation, we report accuracy on a heldout dataset through the strategy of maximum a posteriori (MAP). 4", |
|
"cite_spans": [ |
|
{ |
|
"start": 601, |
|
"end": 617, |
|
"text": "Yu et al. (2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating generalization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2212 k l t log P Y k (y t ) | h l (x, y <t ); \u03b8 l Y k .", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Evaluating generalization", |
|
"sec_num": null |
|
}, |
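
A minimal sketch of the probing setup in Eqs. (3)-(4), assuming the decoder states have already been extracted from a trained NMT model; the tensor shapes, the task-size list, and the optimizer choice are illustrative assumptions.

```python
# Hypothetical sketch: joint training of the K x L probing classifiers of
# Eqs. (3)-(4). Hidden states h_l(x, y_<t) come from a frozen, well-trained
# NMT model; only the per-(layer, task) linear softmax weights are learned.
import torch
import torch.nn as nn
import torch.nn.functional as F

L_LAYERS, D_MODEL = 6, 512
TASK_SIZES = [2, 4, 8, 32, 208, 955, 5000]         # |Y^k| for a few tree depths

probes = nn.ModuleList(
    nn.ModuleList(nn.Linear(D_MODEL, size) for size in TASK_SIZES)
    for _ in range(L_LAYERS)
)
opt = torch.optim.SGD(probes.parameters(), lr=0.1)  # joint mini-batch SGD

def probe_loss(hiddens, cluster_labels):
    # hiddens: list of L tensors [batch, D_MODEL] from the frozen NMT model
    # cluster_labels: list of K tensors [batch] with gold cluster ids Y^k(y_t)
    loss = 0.0
    for l, h in enumerate(hiddens):
        h = h.detach()                              # keep h_l fixed (Eq. 3)
        for k, labels in enumerate(cluster_labels):
            loss = loss + F.cross_entropy(probes[l][k](h), labels)
    return loss                                     # the joint loss of Eq. (4)
# In the training loop: probe_loss(...).backward(); opt.step(); opt.zero_grad()
```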
|
{ |
|
"text": "Analysis To figure out how good the learned hidden representations are, we consider two baselines to extract features regarding each context x, y <t to train logistic regression models for comparison. For the first baseline, the features of the context are the hidden representations from the last layer of a randomly initialized Transformer; for the second, the features are derived by lexical feature templates, which include the sourceside bag-of-words (BOW) features and target-side BOW features indexed by relative positions of y t 's previous with up to m (Markov length) tokens. 5 As shown in Figure 2 , the lexical baseline delivers comparable accuracies for fine-grained tasks with respect to well learned Transformer's first layer, thanks to its discriminant ability with abundant lexical features. For example, its accuracy reaches about 26% for the task with cardinality |Y 21 |. The random baseline performs worse for tasks with cardinality |Y 8 |, which indicates that random representations in NMT have limited generalization abilities to fine-grained tasks as expected. The well-trained low-layer hidden representations yield much higher accuracies than the random baseline and are even better than the lexical baseline. This shows that the hidden representations from a well-trained NMT have good generalization abilities across relative tasks. In addition, as the layer goes up, the performance of hidden representations increase significantly over differ- 4 The accuracy is measured by whether arg max z\u2208Y k P (z | h l (x, y <t ); \u03b8 l Y k )) = Y k (y t ). 5 Please refer to Appendix B for more details are. ent relative tasks, which clearly demonstrates that more complex neural architecture leads to stronger expressibility. This provides a quantitative evidence to support the statement in Bengio et al. (2009) , Goodfellow et al. (2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 586, |
|
"end": 587, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1475, |
|
"end": 1476, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1811, |
|
"end": 1831, |
|
"text": "Bengio et al. (2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1834, |
|
"end": 1858, |
|
"text": "Goodfellow et al. (2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 600, |
|
"end": 608, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating generalization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we propose two simple methods, which respect the above findings, to enhance the hidden representations in NMT such that they generalize well across those relative tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural Hierarchical Regularization", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A natural method to improve feature generalization of hidden representation is to jointly train the target task with all relative tasks for all hidden layers, which we call full-coverage method. As mentioned in Section \u00a72.2, this method will lead to training more than one hundred tasks (K \u00d7 L) in total, where K denotes the depth of the hierarchical clustering tree (aka. the number of tasks) and L the number of hidden layers. Unfortunately, since each task involves a softmax operation which may be the computation bottleneck for the task Y k with large cardinality, this method is inefficient for training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Regularization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As a solution to approximate the potential regularization effect of the full-coverage method, we confine each hidden layer to engage in a single relative task. Motivated by the observation that representations from higher layers have better expressibility than lower layers, as claimed in \u00a72.2, we instead employ a coarse-to-fine strategy to select one task for each layer: finer-grained tasks for higher layers while coarser-grained task for lower layers. Specifically, suppose 1 \u2264 s(l) \u2264 K is the selected index regarding task Y s(l) for the l th layer, then it subjects to s(l) < s(l + 1) for each l. In addition, to encourage the diversity among the selected L tasks, we require s(l + 1) \u2212 s(l) to be large enough for all l. Formally, the loss of the hierarchical regularization (HR) method is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Regularization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "hr = \u2212 l t log P (Y s(l) (y t ) | x, y <t , h l ; \u03b8, \u03b8 l Y s(l) ),", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Hierarchical Regularization", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "where P( Y^{s(l)}(y_t) | x, y_{<t}, h_l; θ, θ^l_{Y^{s(l)}} ) is similar to Eq. (3) except that it treats the NMT parameters θ as trainable parameters besides θ^l_{Y^{s(l)}}. Compared to Eq. (4), it includes fewer terms in the summation.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hierarchical Regularization",

"sec_num": "3.1"

},

{

"text": "Figure 3: The conceptual graph of the KL consistency loss in SHR. Here, the multinomial probability vector p_l is calculated through P( · | x, y_{<t}, h_l; θ, θ^l_{Y^{s(l)}} ).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hierarchical Regularization",

"sec_num": "3.1"

},
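
A minimal sketch of the HR loss of Eq. (5), assuming per-layer decoder states and gold cluster labels are available; the layer-to-depth map uses the Zh⇒En choice s(2,3,4,5) = 5, 8, 11, 20 reported in Section 4.1/Appendix C, while the module names and hidden size are illustrative assumptions.

```python
# Hypothetical sketch of the HR loss in Eq. (5): each selected decoder layer
# l gets one coarse-to-fine auxiliary task Y^{s(l)}; unlike the frozen probes
# of Eq. (3), gradients also flow into the NMT parameters theta (no detach).
import torch.nn as nn
import torch.nn.functional as F

S = {2: 5, 3: 8, 4: 11, 5: 20}               # s(l): tree depths used for Zh=>En
CARD = {5: 32, 8: 208, 11: 955, 20: 5000}    # |Y^{s(l)}| from the clustering tree

aux_heads = nn.ModuleDict({str(l): nn.Linear(512, CARD[k]) for l, k in S.items()})

def hr_loss(hiddens, cluster_labels):
    # hiddens[l-1]: [batch, 512] decoder state of layer l (layers are 1-indexed)
    # cluster_labels[k]: [batch] gold cluster ids Y^k(y_t) at depth k
    loss = 0.0
    for l, k in S.items():
        logits = aux_heads[str(l)](hiddens[l - 1])   # no detach: trains theta too
        loss = loss + F.cross_entropy(logits, cluster_labels[k])
    return loss
```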
|
{ |
|
"text": "The HR method is very simple and computationally efficient, however, using one task to regularize a layer may not be a good approximation of the full-coverage method, since HR method might lead to inconsistent decisions for two different layers, which is formalized through the following entailment structure as introduced in Section 2.2:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural Hierarchical Regularization", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "arg max z\u2208Y s(l 1 ) P (z | x, y <t , h l 1 ; \u03b8, \u03b8 l 1 Y s(l 1 ) ) \u2283 arg max z\u2208Y s(l 2 ) P (z | x, y <t , h l 2 ; \u03b8, \u03b8 l 2 Y s(l 2 ) ),", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Structural Hierarchical Regularization", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "where s(l) is the task selected for the l-th layer by HR, 1 ≤ l_1 < l_2 ≤ L, and P(z | x, y_{<t}, h_l; θ, θ^l_{Y^{s(l)}}) is similar to Eq. (3) for the task Y^{s(l)} and the l-th layer except that it does not treat the NMT parameters θ as constant. However, it always holds on the training data that Y^{s(l_1)}(y_t) ⊃ Y^{s(l_2)}(y_t).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Structural Hierarchical Regularization",

"sec_num": "3.2"

},
|
{

"text": "To alleviate this inconsistency issue and better approximate the full-coverage method, we leverage the above structural property by adding another regularization term. Firstly, we project the distribution P( · | x, y_{<t}, h_l; θ, θ^l_{Y^{s(l)}} ) into the domain of Y^{s(l−1)}. Then we calculate the KL divergence between the projected distribution and P( · | x, y_{<t}, h_{l−1}; θ, θ^{l−1}_{Y^{s(l−1)}} ). Figure 3 illustrates the idea. Since it is inefficient to consider all pairs of l_1 and l_2, we instead consider the consistency between all adjacent layers. Formally, we obtain the following loss function:",

"cite_spans": [],

"ref_spans": [

{

"start": 101,

"end": 109,

"text": "Figure 3",

"ref_id": null

}

],

"eq_spans": [],

"section": "Structural Hierarchical Regularization",

"sec_num": "3.2"

},
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "shr = hr + 1 L \u2212 1 l KL P (\u2022|x, y <t , h l ; \u03b8, \u03b8 l Y s(l) ) || PROJ P (\u2022|x, y <t , h l+1 ; \u03b8, \u03b8 l+1 Y s(l+1) ) ,", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Structural Hierarchical Regularization", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where PROJ is the projection defined in Figure 3 , and other notations are defined as before.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 48, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Structural Hierarchical Regularization", |
|
"sec_num": "3.2" |
|
}, |
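
A minimal sketch of the KL consistency term of Eq. (7), assuming PROJ sums the probability mass of each fine cluster into its parent cluster in the coarser partition (the child-to-parent map comes from the tree); the function names and the epsilon smoothing are illustrative assumptions.

```python
# Hypothetical sketch of the KL consistency term in Eq. (7). PROJ sums the
# probability mass of each fine cluster of Y^{s(l+1)} into its parent cluster
# in Y^{s(l)}; the KL divergence between the coarse prediction and the
# projected fine prediction is penalized for each pair of adjacent layers.
import torch
import torch.nn.functional as F

def proj(fine_probs, parent_index, n_coarse):
    # fine_probs: [batch, |Y^{s(l+1)}|]; parent_index: long tensor [|Y^{s(l+1)}|]
    coarse = fine_probs.new_zeros(fine_probs.size(0), n_coarse)
    return coarse.index_add(1, parent_index, fine_probs)

def shr_kl_term(coarse_logits, fine_logits, parent_index, eps=1e-9):
    p_coarse = F.softmax(coarse_logits, dim=-1)       # P(. | x, y_<t, h_l)
    p_fine = F.softmax(fine_logits, dim=-1)           # P(. | x, y_<t, h_{l+1})
    projected = proj(p_fine, parent_index, coarse_logits.size(-1))
    # F.kl_div(log Q, P) computes KL(P || Q); here P = coarse, Q = PROJ(fine)
    return F.kl_div(torch.log(projected + eps), p_coarse, reduction="batchmean")
```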
|
{ |
|
"text": "We call the above regularization as structural hierarchical regularization (SHR) since it takes advantage of the structure of the tree. In our experiments, we add HR (Eq.(5)) and SHR (Eq. 7) losses respectively into the negative log-likelihood regarding Eq. (1) for training all parameters \u03b8 and \u03b8 l Y s(l) . One of our advantage is that we only use \u03b8 for testing and thus our testing is as efficient as that for the baseline NMT model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structural Hierarchical Regularization", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We conduct experiments on two widely-used corpora. We choose from the LDC corpora about 1.8M sentence pairs for Zh\u21d2En translation with word-level vocabulary of 30k for both languages. We use the WMT14 En\u21d2De task which consists 4.5M sentence pairs and the vocabulary is built by joint BPE with 32k merging operations. Besides the baseline, we also conduct experiments on 3 regularization variants:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Baseline: the Transformer base model proposed in Vaswani et al. (2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 72, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 FHR: fine-grained HR based Transformer, which adopts the original label space as task for all selected layers for regularization. This variant is used to demonstrate that low layers which are weak in expressibility can mess up hard tasks which are unsuitable to learn.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 HR and SHR: as proposed in Section 3. Choice of relative tasks Based on the heuristics in Section 3.1, we first choose the task with the largest cardinality from the hierarchical clustering tree without the virtual depth, because this task is most related to translation (close cardinalities). Then we balance task diversity through a 5 times cardinality difference between tasks from the previous chosen task. As a result, we can obtain 4 tasks with s(l) = 5, 8, 11, 20 for the Zh\u21d2En task and s(l) = 5, 7, 10, 21 for the En\u21d2De task, where l = 2, 3, 4, 5 of the 6-layer decoder. 6 Table 1 summarizes the total number of parameters for the baseline and 3 regularization variants. As in Eq. 5, HR introduces extra parameters compared with the baseline. Besides, calculating the second term in Eq. (7) requires modest overheads. Therefore, training our SHR is slower than training the baseline. Although the proposed HR and SHR introduce extra parameters during training, they do not involve them during testing and thus testing is as efficient as the baseline. Table 2 shows the evaluation results of the baseline and 3 regularization variants on the Zh\u21d2En dataset. Since there are no recent work reporting Transformer's performance on this dataset, we choose a recurrent SOTA model to show that our baseline is already better than it, which is a common knowledge that Transformer can outperform recurrent NMT models. Our HR method surpasses the baseline 0.6 BLEU point, while the SHR method can improve upon HR by about a further 0.8 point, namely about 1.4 points over the baseline. Interestingly, the FHR method only performs on par with baseline, which indicates that forcing low layers to learn fine-grained tasks will not lead to beneficial intermediate representations since they struggle to learn a well-structured rep-resentation space. This matches the finding in Section 2: low layers may not be expressible enough to perform well on tasks with large cardinalities.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 583, |
|
"end": 590, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1061, |
|
"end": 1068, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the following, we conduct several quantitative experiments to demonstrate the advantages of our proposed two regularization methods over the baseline. Note that, since we need to guarantee that the decoded sequence has the same length with the reference for one-by-one token comparison, the following experiments are all conducted with teacher forcing and greedy decoding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analyses on Zh\u21d2En Dataset", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In the same manner as Section 2, we learn softmax weights for all relative tasks by fixing model weights learned by HR and SHR methods. Figure 4(a) , (b) show the \u2206 feature generalization ability (absolute accuracy difference) of HR and SHR over baseline. Since layer 1 is not selected as the regularized layer, no significant gap is observed. However, since layer 1 is close to the loss directly imposed on layer 2, improvements about 5% and 8% are obtained. Since in the baseline, layer 5, 6 are already close or with the ultimate fine-grained loss, HR method shows very small gain. But our SHR method can still improve about 4% absolute points. Except for layer 1, it is also evident to see larger gaps (more than 20%) at lower layers than higher layers due to the fact that lower layers, which are distant from the topmost loss in the baseline, require more supervision signals to shape their latent representation space.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 147, |
|
"text": "Figure 4(a)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Better Feature Generalization Ability", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "We measure decision consistency for a specific layer and decision consistency between a pair of layers using two metrics. The first metric is measured by conditional accuracy, which is the possibilities of the classifier parameterized by \u03b8 l Y k correctly predicting Y k (y t ) if the classifier parameterized by \u03b8 l Y k correctly predicts Y k (y t ) for any In accordance with the observations in previous subsection, except for layer 1, other layers show significant gains (HR more than 7%, SHR more than 10%) over baseline. Decision consistency for each layer proves the well-shaped layerwise representation and potentially paves the way for better inter-layer decision consistency. Figure 5 illustrates the consistency counts between any regularized layer pairs, including those without KL-based regularization. Deeper color represents more consistency counts. It is evident that the baseline has a very poor consistency between any layers. Our HR method is almost 2 times better, and the SHR obtains further improvement. A better decision consistency can couple the decision between relative tasks, so that by reaching a high accuracy on easier tasks can benefit the harder ones. Another interesting observation is that non-adjacent layers without KL loss also obtain significant improvements on decision consistency, because the KL term is actually transitive between layers where the predictive distributions are in accordance with the tree structure. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 686, |
|
"end": 694, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Improved Decision Consistency", |
|
"sec_num": "4.4.2" |
|
}, |
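
A minimal sketch of one plausible way to compute the inter-layer consistency counts described above, assuming an argmax prediction per layer and a precomputed fine-to-coarse cluster map for the layer pair; this is an editor's reading of the metric, not the authors' exact evaluation script.

```python
# Hypothetical sketch: for a pair of regularized layers l1 < l2, a context is
# counted as consistent when the parent cluster of the finer layer's argmax
# prediction equals the coarser layer's argmax prediction (Eq. (6) direction).
import torch

def consistency_count(coarse_logits, fine_logits, parent_index):
    # coarse_logits: [n, |Y^{s(l1)}|], fine_logits: [n, |Y^{s(l2)}|]
    # parent_index: long tensor mapping each fine cluster to its coarse cluster
    coarse_pred = coarse_logits.argmax(dim=-1)
    fine_pred = fine_logits.argmax(dim=-1)
    return (parent_index[fine_pred] == coarse_pred).sum().item()
```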
|
{ |
|
"text": "In this subsection, we clarify that the coarse-tofine regularized representations can also benefit low-frequency words. We divide the vocabulary into ten equally-sized bins, and summarize token accuracy for each bin over the development set. As shown in Figure 6 , the x-axis represents the frequency spectra, that is, we sort the bins by word frequency from rank 1 (the most frequent words) to 10 (the rare words). We can see that both HR and SHR methods demonstrate a gradually increased gap over the baseline as the word frequency decreases, which means our methods become better for less frequent word bins. However the gap shrinks at the 10 th bin. This may be the fact that for those words that appears with less than 50 counts, both methods are helpless. For baseline, it is hard to train well-shaped hidden representations for low-frequent words; in addition, due to the distance between the loss and the low layers, it is also hard to train weights due to the unstable gradient signal. By adding our regularization terms, every level of the multilayer decoder will receive supervision signals directly and lower layers will receive coarser grained thus higher frequency signals to shape their representations. Table 3 shows the evaluation results of the baseline and the 3 regularization variants on the En\u21d2De dataset. Notice that we use the base model while Chen et al. (2018) and Ott et al. (2018) use big models. The FHR method still does not show significant improvement over the baseline (less than 0.2 BLEU point), which verifies the hypothesis that we make by analyzing the Zh\u21d2En results. Our HR method is already stronger than Chen et al. (2018) which uses a multilayer RNN as decoder. Compared to the current state-of-theart in Ott et al. (2018) who utilize huge batch size and over 100 GPUs on the Transformer big model, our SHR method can be on par with them. This comparison indicates that better regularized hidden representations can be potentially powerful than increasing model capacity when using the same optimization method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1391, |
|
"end": 1408, |
|
"text": "Ott et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1746, |
|
"end": 1763, |
|
"text": "Ott et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 262, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1219, |
|
"end": 1226, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Promoted Low-Frequency Word Performance", |
|
"sec_num": "4.4.3" |
|
}, |
|
{ |
|
"text": "Since the dawn of NMT, many works have proposed for understanding what has been encoded in the learned hidden representations. Shi et al. (2016) are the first to investigate source syntax encoded in source hidden representations. Similarly, Belinkov et al. (2017) and Belinkov et al. (2018) give detailed analyses of both encoder and decoder's learned knowledge about part-of-speech and semantic tags at different layers. Unlike those works that employ one or two linguistic tasks, we instead construct plenty of artificial tasks without any human annotations to analyze the hidden representations. This makes our approach more general and may potentially lead to less biased conclusions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 144, |
|
"text": "Shi et al. (2016)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 263, |
|
"text": "Belinkov et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 268, |
|
"end": 290, |
|
"text": "Belinkov et al. (2018)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Based on our understanding of the hidden representations, we further develop simple methods to improve NMT through representation regularization. Many works regularize NMT with lexical knowledge such as BOW (Weng et al., 2017) and morphology (Niehues and Cho, 2017; Zaremoodi et al., 2018) , or syntactic knowledge (Kiperwasser and Ballesteros, 2018; Eriguchi et al., 2017) . One significant difference is that we take into account the structure among plenty of artificial tasks and design a well motivated regularization term to encourage the structural consistency of tasks, which further improves NMT performance. In addition, our coarse-to-fine way to select tasks for regularization is also inspired by recent works using a coarse-to-fine mechanism for learning better word embeddings in NMT and predicting intermediate solutions for semantic parsing (Dong and Lapata, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 226, |
|
"text": "(Weng et al., 2017)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 265, |
|
"text": "(Niehues and Cho, 2017;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 289, |
|
"text": "Zaremoodi et al., 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 350, |
|
"text": "(Kiperwasser and Ballesteros, 2018;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 373, |
|
"text": "Eriguchi et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 856, |
|
"end": 879, |
|
"text": "(Dong and Lapata, 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this work, we present a simple approach for better understanding NMT learned layer-wise representations with transfer learning over plenty of artificially constructed relative tasks. This approach is general as it requires no human annotated data, only demanding target monolingual corpus. Based on our understanding, we propose two efficient yet effective methods for representation regularization which further pushes forward the SOTA NMT performances. In the future, we want to dig deeply into the subspace regularities of the learned representations for more fine-grained understanding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this appendix, we demonstrate a more detailed introduction to the partition of Y, the hierarchical clustering tree constructed by Y and the treeinduced relative tasks, which are introduced in Section 2.2, through an example. Suppose our vocabulary Y only consists of a few words, that is, Y = {cat, dog, run, jump, is}. Any partition of Y denoted as Y k is a set of subsets of Y. As shown in Figure 7 , this partition is actually { { cat, dog }, { jump, run }, { is } }. Then suppose the constructed hierarchical clustering tree looks like the tree in Figure 8(a) . In this tree, not all the leaves are at the same depth, so we left-branches the leaves with {cat, dog} at depth 1 to stretch to depth 2. Then by adding a virtual level at depth 3, we can construct a relative task with the same cardinality as the fine-grained translation task Y, as shown in Figure 9(b) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 403, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 555, |
|
"end": 566, |
|
"text": "Figure 8(a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 860, |
|
"end": 871, |
|
"text": "Figure 9(b)", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Hierarchical Clustering Tree Induced Relative Tasks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this appendix, we describe the lexical featurebased baseline that we use in Section 2.2 to compare with the layer-wise representations learned by Transformer. There are two types of feature template: a) the source-side bag-of-word (BOW) features; and b) the target-side order-aware BOW features. Specifically, given a context x, y <t , we extract features according to the above templates: Figure 8 : (a) The original hierarchical clustering tree, with left branching to have all leaves at the same tree depth; (b) The hierarchical clustering tree with depth 2, and a virtual task is constructed at depth 3.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 401, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B Lexical Feature-based Baseline", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 the BOW representation of the source sentence x: that is, if the source vocabulary is X , the source BOW feature vector has length of |X |, with each entry the appearances of that token in x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Lexical Feature-based Baseline", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 the order-aware BOW representation within k-Markov dependency chain: that is, we extract feature from y t\u2212k:t\u22121 by considering both the token identity and the relative distance of that token to the predicted one y t . This feature template constructs a feature vector of k \u00d7 |Y| entries (Y the target vocabulary), with each entry set to 1 when appears in the chain, otherwise 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Lexical Feature-based Baseline", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 the order-unaware BOW representation outside the k-Markov dependency chain, that is, we extract feature from the y 0:t\u2212k\u22121 with the same philosophy of the source-side BOW feature and obtain a feature vector of size Y.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Lexical Feature-based Baseline", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "So in total, the feature vector extracted from the context x, y <t has a size (|X | + k \u00d7 |Y|), which will be around 300k if the vocabulary size is around 30k and the Markov order k = 10.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Lexical Feature-based Baseline", |
|
"sec_num": null |
|
}, |
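
A minimal sketch of the three feature templates above, assuming precomputed token-to-index vocabularies; the function name and the dense NumPy layout are illustrative assumptions (in practice such ~300k-dimensional features would be stored sparsely).

```python
# Hypothetical sketch of the lexical feature templates in Appendix B:
# source-side BOW, order-aware target BOW over the last m tokens, and
# order-unaware BOW over the remaining prefix.
import numpy as np

def lexical_features(src_tokens, prefix_tokens, src_vocab, tgt_vocab, m=10):
    n_src, n_tgt = len(src_vocab), len(tgt_vocab)
    feat = np.zeros(n_src + m * n_tgt + n_tgt, dtype=np.float32)
    for w in src_tokens:                                   # source BOW counts
        feat[src_vocab[w]] += 1.0
    recent = prefix_tokens[-m:]                            # y_{t-m:t-1}
    for dist, w in enumerate(reversed(recent), start=1):   # order-aware BOW
        feat[n_src + (dist - 1) * n_tgt + tgt_vocab[w]] = 1.0
    for w in prefix_tokens[:-m]:                           # order-unaware BOW
        feat[n_src + m * n_tgt + tgt_vocab[w]] += 1.0
    return feat
```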
|
{ |
|
"text": "In this appendix, we describe the detailed experiment information: the construction of the hierar- chical clustering tree (Brown et al., 1992; Stratos et al., 2014) , the model configuration, and the training details. We construct two hierarchical clustering trees for the two target languages, English (Zh\u21d2En) and German (En\u21d2De) in our experiments, with Percy Liang's Brown Clustering algorithm implementation. 7 In both languages, we set the number of clusters to 5000 (a hyperparameter in the algorithm, c = 5000), that is, we will obtain trees with 5000 leaves. We set c to 5000 since the total vocabulary of the two languages are around 30k, and 5000 clusters will get a token coverage of 6 (30k/5000=6) for each leave if the clusters are balanced. By using left-branching introduced in Appendix A, we can finally get two trees with every depths as a relative task. The statistics of each relative task's cardinality for each tree are shown in Figure 9 (a) and (b) respectively. For selecting the depths in a coarse-to-fine manner, we follow the heuristics mentioned in Section 3.1, that is, we select depths which are diverse enough so as to have better coverage of all the relative tasks. Specifically, we follow a quotient between two adjacent selected tasks of 5, and select from the task which has the cardinality of 5000, then we select tasks of cardinalities around 1000, 200, 40 respectively. For the Zh\u21d2En dataset, we select tasks at depths 5, 8, 11, 20 with cardinalities 32, 208, 955, 5000; for the En\u21d2De dataset, we select tasks at depths 5, 7, 10, 21 with cardinalities 32, 127, 878, 5000. The model configuration strictly follows that of the base model in Vaswani et al. (2017) with word embedding size of 512, hidden size of 512, feedforward projection size of 2048, layer and head number of 6 and 8. The dropout rates of the embedding, attention and residual block are all set to 0.1. All architectural settings are in accordance with the base model of Vaswani et al. (2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 142, |
|
"text": "(Brown et al., 1992;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 164, |
|
"text": "Stratos et al., 2014)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 413, |
|
"text": "7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1675, |
|
"end": 1696, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1974, |
|
"end": 1995, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 949, |
|
"end": 957, |
|
"text": "Figure 9", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "C Detailed Experiment Information", |
|
"sec_num": null |
|
}, |
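
A minimal sketch of the coarse-to-fine depth selection heuristic described above, assuming the per-depth cardinalities of the tree are known; the function name and the closest-to-target rule are illustrative assumptions approximating the roughly-5× quotient.

```python
# Hypothetical sketch: pick n_tasks depths from the clustering tree so that
# adjacent selected tasks differ in cardinality by roughly a factor of `ratio`,
# starting from the finest non-virtual depth (cardinality 5000 here).
def select_depths(cardinality_per_depth, n_tasks=4, ratio=5):
    # cardinality_per_depth: dict depth -> |Y^k|, e.g. {..., 5: 32, 8: 208, 11: 955, 20: 5000}
    finest = max(cardinality_per_depth, key=cardinality_per_depth.get)
    chosen = [finest]
    for _ in range(n_tasks - 1):
        target = cardinality_per_depth[chosen[-1]] / ratio
        candidates = [d for d in cardinality_per_depth if d not in chosen]
        best = min(candidates, key=lambda d: abs(cardinality_per_depth[d] - target))
        chosen.append(best)
    return sorted(chosen)      # e.g. [5, 8, 11, 20] for the Zh=>En tree
```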
|
{ |
|
"text": "We use Adam (Kingma and Ba, 2014) as the default optimizer with an initial learning rate of 0.0001. Training batch size is set to 8192 \u00d7 4 tokens per batch, that is, we use data parallelism with 4 P40 or M40 GPUs and 8192 tokens per GPU. We train the Zh\u21d2En models from scratch for about 240k iterations about 2 weeks on 4 M40 GPUs for both the baseline and the 3 regularization variants. For the En\u21d2De models, we first attempt to train all the methods from scratch up to 200k iterations, but do not see significant improvement on BLEU score (around 0.6 points on test). So we use the pretrained baseline to initialize our proposed methods, and further train them for about 200k iterations, which results in the reported improvement in Section 4.5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Detailed Experiment Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As suggested by the reviewers, we conduct independent layer regularization by imposing a relative task with cardinality of 5000 on layer 2 to 5 with the six layer Transformer. The performances are demonstrated in Table 4 . It seems that independent layer regularization can as well brings about descent improvements over the baseline. And regularizing layer 2 of the decoder performs better than our HR method. It is surpris-ing that lower layers are more urgent to be regularized than higher layers. This phenomenon may raise the question that how on earth the intermediate layer representations help with final prediction. One hypothesis may be drawn from our paper is that: in baseline, the coherence among the layer-wise representations may be weak so that some non-linear transformations from lower layer may not lead to essential predictive power of the final layer representation. And by using KL divergence to externally constrain their decision consistency may take better advantage of lower layers. As one of the reviewer pointed out, Universal Transformer (Dehghani et al., 2018) with base model's hyperparameters achieves 28.90 on WMT14 En\u21d2De newstest14, with 1/6 enc-dec parameters (not considering embeddings). Its architectural inductive bias is motivated from iterative refinement of the layer-wise representation so the decoder at each time step builds an RNN like reasoning process to refine upon previous layer's representation. We think this might be a more effective inductive bias in ResNet architecture which is adopted by Transformer, since Jastrzebski et al. (2017) provides evidence that ResNet does iterative representation inference (residual as refined quantity). Future directions may include relating the dimension reduced representations (Law et al., 2018) to the coarse-to-fine structural bias, or experiments on Universal Transformer architecture to probe its learned representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1067, |
|
"end": 1090, |
|
"text": "(Dehghani et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1565, |
|
"end": 1590, |
|
"text": "Jastrzebski et al. (2017)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1770, |
|
"end": 1788, |
|
"text": "(Law et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 220, |
|
"text": "Table 4", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "D Experiments of Independent Layer Regularization on Zh\u21d2En", |
|
"sec_num": null |
|
}, |
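As a concrete reading of the KL-based consistency idea discussed above, the sketch below attaches a coarse relative-task classifier to one intermediate decoder layer and penalizes its divergence from a cluster-marginalized view of the final-layer word distribution; this is an illustrative reconstruction, not the authors' implementation, and the cluster mapping, layer choice, and weight `alpha` are assumptions.

```python
# Illustrative sketch (not the authors' code): regularize an intermediate decoder layer
# with a coarse word-cluster task and tie its decision to the final layer via KL.
import torch
import torch.nn.functional as F

def coarse_distribution(word_probs: torch.Tensor, word2cluster: torch.Tensor,
                        num_clusters: int) -> torch.Tensor:
    """Marginalize a (batch, |V|) word distribution into a (batch, C) cluster distribution."""
    out = word_probs.new_zeros(word_probs.size(0), num_clusters)
    # out[b, word2cluster[v]] += word_probs[b, v] for every vocabulary item v
    return out.index_add(1, word2cluster, word_probs)

def layer_kl_regularizer(hidden_l: torch.Tensor, final_word_logits: torch.Tensor,
                         cluster_proj: torch.nn.Linear, word2cluster: torch.Tensor,
                         alpha: float = 1.0) -> torch.Tensor:
    """KL between the coarse view of the final layer and the coarse prediction of layer l."""
    num_clusters = cluster_proj.out_features  # e.g. 5000, the cardinality used above
    layer_log_probs = F.log_softmax(cluster_proj(hidden_l), dim=-1)
    with torch.no_grad():  # treat the final layer's decision as the reference
        final_word_probs = F.softmax(final_word_logits, dim=-1)
        target = coarse_distribution(final_word_probs, word2cluster, num_clusters)
    return alpha * F.kl_div(layer_log_probs, target, reduction="batchmean")
```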
|
{ |
|
"text": "There are about 22 tasks that we have constructed and 6 layers in SOTA NMT models(Vaswani et al., 2017).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here what we mean entailment relation ( ) between two partitions Y k and Y k+1 is:\u2200Y k+1 i , \u2203!Y k j , s.t.Y k+1 i \u2286 Y k j . 3Please refer to Appendix A for detailed preprocessing of the tree to get nested partitions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
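To spell out this definition, the following sketch checks whether a finer partition is nested inside a coarser one, i.e. every fine cell is contained in exactly one coarse cell; the list-of-sets representation and the toy vocabulary are assumptions made for illustration.

```python
# Illustrative check of the nestedness (entailment) condition defined above:
# each cell of the finer partition must be a subset of exactly one coarse cell.
from typing import Iterable, Set

def is_nested(coarse: Iterable[Set[str]], fine: Iterable[Set[str]]) -> bool:
    coarse_cells = list(coarse)
    return all(sum(cell <= big for big in coarse_cells) == 1 for cell in fine)

# Toy example: a 2-cell partition refined into a 4-cell partition of a tiny vocabulary.
coarse = [{"cat", "dog", "horse"}, {"run", "walk"}]
fine = [{"cat", "dog"}, {"horse"}, {"run"}, {"walk"}]
assert is_nested(coarse, fine)
```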
|
{ |
|
"text": "Please refer to Appendix C for detailed information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/percyliang/brown-cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
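Since the nested partitions are derived from the hierarchy produced by this Brown clustering tool, the sketch below shows one way to load its `paths` output (assumed here to be tab-separated lines of bit-string, word, and count) and obtain coarser partitions by truncating the bit-strings; the file name and prefix depths are illustrative.

```python
# Illustrative loader for Brown-cluster output (assumed format per line:
# "bitstring<TAB>word<TAB>count"); truncating bit-strings to a prefix length
# yields the coarser, tree-induced partitions used as relative tasks.
from collections import defaultdict
from typing import Dict, List, Set

def load_clusters(paths_file: str) -> Dict[str, str]:
    word2bits: Dict[str, str] = {}
    with open(paths_file, encoding="utf-8") as f:
        for line in f:
            bits, word, _count = line.rstrip("\n").split("\t")
            word2bits[word] = bits
    return word2bits

def partition_at_depth(word2bits: Dict[str, str], depth: int) -> List[Set[str]]:
    """Group words whose cluster bit-strings share the same length-`depth` prefix."""
    groups = defaultdict(set)
    for word, bits in word2bits.items():
        groups[bits[:depth]].add(word)
    return list(groups.values())

# Hypothetical usage: a coarse (depth 4) and a finer (depth 8) vocabulary partition.
# clusters = load_clusters("paths")
# coarse, fine = partition_at_depth(clusters, 4), partition_at_depth(clusters, 8)
```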
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to first thank all the anonymous reviewers for their critical suggestion and valuable experimental advice. The authors would also like to thank Yong Jiang for discussion; Shangchen Zhou for better figure design; Haozhe Xie, Chaoqun Duan, Xin Li, Ziyi Dou and Mengzhou Xia for proof reading. Tiejun Zhao is supported by National Key RD Program of China Project 2017YFB1002102.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "What do neural machine translation models learn about morphology?", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadir", |
|
"middle": [], |
|
"last": "Durrani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fahim", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.03471" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural ma- chine translation models learn about morphology? arXiv preprint arXiv:1704.03471.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadir", |
|
"middle": [], |
|
"last": "Durrani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fahim", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1801.07772" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2018. Evaluating layers of representation in neural ma- chine translation on part-of-speech and semantic tagging tasks. arXiv preprint arXiv:1801.07772.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Learning deep architectures for ai. Foundations and trends R in Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio et al. 2009. Learning deep architec- tures for ai. Foundations and trends R in Machine Learning, 2(1):1-127.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Class-based n-gram models of natural language", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Peter F Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Desouza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Robert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent J Della", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenifer C", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Computational linguistics", |
|
"volume": "18", |
|
"issue": "4", |
|
"pages": "467--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467-479.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The best of both worlds: Combining recent advances in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Mia", |
|
"middle": [], |
|
"last": "Xu Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Bapna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.09849" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, et al. 2018. The best of both worlds: Combining re- cent advances in neural machine translation. arXiv preprint arXiv:1804.09849.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Search-based structured prediction. Machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Langford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "75", |
|
"issue": "", |
|
"pages": "297--325", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learn- ing, 75(3):297-325.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Universal transformers", |
|
"authors": [ |
|
{ |
|
"first": "Mostafa", |
|
"middle": [], |
|
"last": "Dehghani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1807.03819" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and \u0141ukasz Kaiser. 2018. Univer- sal transformers. arXiv preprint arXiv:1807.03819.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Visualizing and understanding neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yanzhuo", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huanbo", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1150--1159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150- 1159. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Coarse-to-fine decoding for neural semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1805.04793" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li Dong and Mirella Lapata. 2018. Coarse-to-fine de- coding for neural semantic parsing. arXiv preprint arXiv:1805.04793.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning to parse and translate improves neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Akiko", |
|
"middle": [], |
|
"last": "Eriguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshimasa", |
|
"middle": [], |
|
"last": "Tsuruoka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1702.03525" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate im- proves neural machine translation. arXiv preprint arXiv:1702.03525.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Convolutional sequence to sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Gehring", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Yarats", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann N", |
|
"middle": [], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1705.03122" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N Dauphin. 2017. Convolu- tional sequence to sequence learning. arXiv preprint arXiv:1705.03122.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Goodfellow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. 2016. Deep learning, volume 1. MIT press Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Learning structured inference neural networks with label relations", |
|
"authors": [ |
|
{ |
|
"first": "Hexiang", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guang-Tong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiwei", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zicheng", |
|
"middle": [], |
|
"last": "Liao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2960--2968", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hexiang Hu, Guang-Tong Zhou, Zhiwei Deng, Zicheng Liao, and Greg Mori. 2016. Learning struc- tured inference neural networks with label relations. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 2960- 2968.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Residual connections encourage iterative inference", |
|
"authors": [ |
|
{ |
|
"first": "Stanis\u0142aw", |
|
"middle": [], |
|
"last": "Jastrzebski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devansh", |
|
"middle": [], |
|
"last": "Arpit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Ballas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vikas", |
|
"middle": [], |
|
"last": "Verma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1710.04773" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stanis\u0142aw Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, and Yoshua Bengio. 2017. Residual connections encourage iterative inference. arXiv preprint arXiv:1710.04773.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Scheduled multi-task learning: From syntax to translation", |
|
"authors": [ |
|
{ |
|
"first": "Eliyahu", |
|
"middle": [], |
|
"last": "Kiperwasser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.08915" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eliyahu Kiperwasser and Miguel Ballesteros. 2018. Scheduled multi-task learning: From syntax to translation. arXiv preprint arXiv:1804.08915.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Dimensionality reduction for representing the knowledge of probabilistic models", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Marc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jake", |
|
"middle": [], |
|
"last": "Law", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Snell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raquel", |
|
"middle": [], |
|
"last": "Amir-Massoud Farahmand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Urtasun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc T Law, Jake Snell, Amir-massoud Farahmand, Raquel Urtasun, and Richard S Zemel. 2018. Di- mensionality reduction for representing the knowl- edge of probabilistic models.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Exploiting linguistic resources for neural machine translation using multi-task learning", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Niehues", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunah", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1708.00993" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Niehues and Eunah Cho. 2017. Exploiting linguistic resources for neural machine transla- tion using multi-task learning. arXiv preprint arXiv:1708.00993.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Scaling neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation, pages 1-9, Belgium, Brussels. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Does string-based neural mt learn source syntax?", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Inkit", |
|
"middle": [], |
|
"last": "Padhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1526--1534", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1526- 1534.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A spectral algorithm for learning class-based n-gram models of natural language", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Stratos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel J", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "UAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "762--771", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Stratos, Do-kyum Kim, Michael Collins, and Daniel J Hsu. 2014. A spectral algorithm for learn- ing class-based n-gram models of natural language. In UAI, pages 762-771. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Strobelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Behrisch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Perer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Pfister", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Strobelt, S. Gehrmann, M. Behrisch, A. Perer, H. Pfister, and A. M. Rush. 2018. Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models. ArXiv e-prints.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The Nature of Statistical Learning Theory", |
|
"authors": [ |
|
{ |
|
"first": "Vladimir", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladimir N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag, Berlin, Heidel- berg.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Neural machine translation with word predictions", |
|
"authors": [ |
|
{ |
|
"first": "Rongxiang", |
|
"middle": [], |
|
"last": "Weng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujian", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zaixiang", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinyu", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiajun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1708.01771" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rongxiang Weng, Shujian Huang, Zaixiang Zheng, Xinyu Dai, and Jiajun Chen. 2017. Neural machine translation with word predictions. arXiv preprint arXiv:1708.01771.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.08144" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "How transferable are features in deep neural networks?", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Yosinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Clune", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hod", |
|
"middle": [], |
|
"last": "Lipson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3320--3328", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In Advances in neural information processing systems, pages 3320-3328.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Diverse few-shot text classification with multiple metrics", |
|
"authors": [ |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoxiao", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinfeng", |
|
"middle": [], |
|
"last": "Yi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiyu", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saloni", |
|
"middle": [], |
|
"last": "Potdar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerald", |
|
"middle": [], |
|
"last": "Tesauro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haoyu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1206--1215", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text clas- sification with multiple metrics. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1206-1215. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Taskonomy: Disentangling task transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Amir R Zamir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Sax", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonidas", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jitendra", |
|
"middle": [], |
|
"last": "Guibas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvio", |
|
"middle": [], |
|
"last": "Malik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Savarese", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amir R Zamir, Alexander Sax, , William B Shen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. 2018. Taskonomy: Disentangling task transfer learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Adaptive knowledge sharing in multi-task learning: Improving low-resource neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Poorya", |
|
"middle": [], |
|
"last": "Zaremoodi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wray", |
|
"middle": [], |
|
"last": "Buntine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "656--661", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Poorya Zaremoodi, Wray Buntine, and Gholamreza Haffari. 2018. Adaptive knowledge sharing in multi-task learning: Improving low-resource neural machine translation. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 656-661.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Coarse-to-fine learning for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Zhirui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enhong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "CCF International Conference on Natural Language Processing and Chinese Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "316--328", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and En- hong Chen. 2018. Coarse-to-fine learning for neural machine translation. In CCF International Confer- ence on Natural Language Processing and Chinese Computing, pages 316-328. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Addressing troublesome words in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiajun", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongjun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengqing", |
|
"middle": [], |
|
"last": "Zong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "391--400", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Zhao, Jiajun Zhang, Zhongjun He, Chengqing Zong, and Hua Wu. 2018. Addressing troublesome words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 391-400. As- sociation for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "The structural hierarchical regularization framework. On the left is a 4-layer NMT decoder; on the right is a hierarchical clustering tree and the treeinduced relative tasks at every tree depth.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "The transfer learning performances.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "HR vs Baseline (d) SHR vs BaselineFigure 4: (a), (b) are the \u2206 feature generalization ability (absolute accuracy difference) of HR and SHR method compared to baseline; (c), (d) are the conditional absolute accuracy difference of HR and SHR over baseline. k < k. The second metric is measured by the counts of consistent decision pairs between any pair of regularized layers as defined in Eq. (6). Figure 4(c), (d) shows the absolute conditional accuracy difference of our HR and SHR over baseline.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "The consistency correlation between different regularized layers (lyr.2 to lyr.6) of the baseline and our two regularization methods.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "Development set accuracy among the baseline and our proposed regularization variants over different word frequency bins.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"text": "One example partition of the vocabulary Y.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF6": { |
|
"uris": null, |
|
"text": "(a) Task cardinalities for English (Zh\u21d2En). (b) Task cardinalities for German (En\u21d2De).", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF7": { |
|
"uris": null, |
|
"text": "Task cardinalities.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "MethodMT02 MT03 MT04 MT05 MT06 MT08 Avg.Zhao et al. (2018) N/A 44.98 45.51 43.93 43.95 33.33 42.34 Baseline 46.08 44.09 46.50 44.45 45.26 37.10 43.48 FHR 45.46 43.56 47.51 44.00 45.45 37.22 43.58 HR 46.28 44.04 47.80 44.56 45.56 38.17 44.08 SHR 47.05 44.80 48.15 45.55 46.30 39.02 44.78", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "BLEU comparison on the LDC dataset.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "BLEU comparison on the WMT14 dataset.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>Here MT13 and MT14 denote newstest2013 and new-</td></tr><tr><td>stest2014, which are used as development and test set</td></tr><tr><td>respectively.</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": "Method MT02 MT03 MT04 MT05 MT06 MT08 Avg. Baseline 46.08 44.09 46.50 44.45 45.26 37.10 43.48 L2 46.67 44.50 47.35 45.02 46.20 38.43 44.30 L3 46.40 44.65 46.90 45.02 45.95 37.92 44.08 L4 46.35 44.30 46.97 45.10 46.06 37.31 43.95 L5 46.29 44.57 46.97 44.75 45.45 37.74 43.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td>89</td></tr><tr><td>HR</td><td>46.28</td><td>44.04 47.80 44.56 45.56 38.17 44.08</td></tr><tr><td>SHR</td><td>47.05</td><td>44.80 48.15 45.55 46.30 39.02 44.78</td></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "BLEU comparison on the LDC dataset with independently regularized layers.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |