{
"paper_id": "N19-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:57:42.250107Z"
},
"title": "Understanding and Improving Hidden Representation for Neural Machine Translation",
"authors": [
{
"first": "Guanlin",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lemao",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "Tencent AI Lab \u221e The",
"institution": "Chinese University of Hong Kong",
"location": {}
},
"email": ""
},
{
"first": "Xintong",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Conghui",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {},
"email": "chzhu@hit.edu.cn"
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {},
"email": "tjzhao@hit.edu.cn"
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": "",
"affiliation": {
"laboratory": "Tencent AI Lab \u221e The",
"institution": "Chinese University of Hong Kong",
"location": {}
},
"email": "shumingshi@tencent.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multilayer architectures are currently the gold standard for large-scale neural machine translation. Existing works have explored some methods for understanding the hidden representations, however, they have not sought to improve the translation quality rationally according to their understanding. Towards understanding for performance improvement, we first artificially construct a sequence of nested relative tasks and measure the feature generalization ability of the learned hidden representation over these tasks. Based on our understanding, we then propose to regularize the layer-wise representations with all treeinduced tasks. To overcome the computational bottleneck resulting from the large number of regularization terms, we design efficient approximation methods by selecting a few coarse-to-fine tasks for regularization. Extensive experiments on two widely-used datasets demonstrate the proposed methods only lead to small extra overheads in training but no additional overheads in testing, and achieve consistent improvements (up to +1.3 BLEU) compared to the state-of-the-art translation model.",
"pdf_parse": {
"paper_id": "N19-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "Multilayer architectures are currently the gold standard for large-scale neural machine translation. Existing works have explored some methods for understanding the hidden representations, however, they have not sought to improve the translation quality rationally according to their understanding. Towards understanding for performance improvement, we first artificially construct a sequence of nested relative tasks and measure the feature generalization ability of the learned hidden representation over these tasks. Based on our understanding, we then propose to regularize the layer-wise representations with all treeinduced tasks. To overcome the computational bottleneck resulting from the large number of regularization terms, we design efficient approximation methods by selecting a few coarse-to-fine tasks for regularization. Extensive experiments on two widely-used datasets demonstrate the proposed methods only lead to small extra overheads in training but no additional overheads in testing, and achieve consistent improvements (up to +1.3 BLEU) compared to the state-of-the-art translation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) has witnessed great successes in recent years (Bahdanau et al., 2014; Wu et al., 2016) . Current state-of-the-art (SOTA) NMT models are mainly constructed by a stacked neural architecture consisting of multiple hidden layers from bottom-up, where a classifier is built upon the topmost layer to solve the target task of translation (Gehring et al., 2017; Vaswani et al., 2017) . Most works tend to focus on the translation performance of the classifier defined on the topmost layer, however, they do not deeply understand the learned representations of hidden layers. Shi et al. (2016) and Belinkov et al. (2017) attempt * Conghui Zhu is the corresponding author. to understand the hidden representations through the lens of a few linguistic tasks, while Ding et al. (2017) and Strobelt et al. (2018) propose appealing visualization approaches to understand NMT models including the representation of hidden layers. However, employing the analyses to motivate new methods for better translation, the ultimate goal of understanding NMT, is not achieved in these works. In our paper, we aim at understanding the hidden representation of NMT from an alternative viewpoint, and particularly we propose simple yet effective methods to improve the translation performance based on our understanding. We start from a fundamental question: what are the characteristics of the hidden representation for better translation modeling? Inspired by the lessons from transfer learning (Yosinski et al., 2014) , we propose to empirically verify the argument: good hidden representation for a target task should be able to generalize well across any similar tasks. Unlike Shi et al. (2016) and Belinkov et al. (2017) who employ one or two linguistic tasks involving human annotated data to evaluate the feature generalization ability of the hidden representation, which might make understanding bias to a specific task, we instead construct a nested sequence of many relative tasks with entailment structure induced by a hierarchical clustering tree over the output label space (target vocabulary). Each task is defined as predicting the cluster of the next token according to a given source sentence and its translation prefix. Similar to Yu et al. (2018) , Zamir et al. (2018) and Belinkov et al. (2017) , we measure the feature generalization ability of the hidden representation regarding each task. Our observations are ( \u00a72):",
"cite_spans": [
{
"start": 79,
"end": 102,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 103,
"end": 119,
"text": "Wu et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 365,
"end": 387,
"text": "(Gehring et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 388,
"end": 409,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 601,
"end": 618,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 623,
"end": 645,
"text": "Belinkov et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 788,
"end": 806,
"text": "Ding et al. (2017)",
"ref_id": "BIBREF8"
},
{
"start": 811,
"end": 833,
"text": "Strobelt et al. (2018)",
"ref_id": "BIBREF22"
},
{
"start": 1503,
"end": 1526,
"text": "(Yosinski et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 1688,
"end": 1705,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 1710,
"end": 1732,
"text": "Belinkov et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 2256,
"end": 2272,
"text": "Yu et al. (2018)",
"ref_id": "BIBREF28"
},
{
"start": 2275,
"end": 2294,
"text": "Zamir et al. (2018)",
"ref_id": "BIBREF29"
},
{
"start": 2299,
"end": 2321,
"text": "Belinkov et al. (2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The hidden representations learned by NMT indeed has decent feature generalization ability for the tree-induced relative tasks compared to the randomly initialized NMT model and a strong baseline with lexical features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The hidden representations from the higher layers generalize better across tasks than those from the lower layers. And more similar tasks have closer performances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
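{
"text": "The tree-induced relative tasks above can be illustrated with a minimal sketch below; the toy vocabulary, stand-in embedding vectors, and the use of scipy hierarchical clustering are our assumptions for illustration, not the paper's actual construction:
# Toy sketch: derive nested relative tasks from a hierarchical clustering
# tree over the target vocabulary. Embeddings are random stand-ins.
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree

vocab = ['the', 'a', 'dog', 'cat', 'runs', 'sleeps']  # toy target vocabulary
emb = np.random.rand(len(vocab), 8)                   # stand-in word vectors
tree = linkage(emb, method='ward')                    # hierarchical clustering tree

# Cutting the tree at different depths yields a nested, coarse-to-fine
# sequence of label spaces; each one defines a relative task: predict the
# cluster of the next token instead of the token itself.
for n_clusters in (2, 4, len(vocab)):
    labels = cut_tree(tree, n_clusters=n_clusters).flatten()
    print(n_clusters, {w: int(c) for w, c in zip(vocab, labels)})
Coarser cuts give easier, more general tasks, while the finest cut recovers the original next-token prediction task.
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},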
{
"text": "Based on the above findings, we decide to regularize and improve the hidden representations of NMT for better predictive performances regarding those relative tasks, in hope of achieving improved performance in terms of the target translation task. One natural solution is to feed all relative tasks to every hidden layer of the NMT decoder under the framework of multi-task learning. This may make the full coverage of the potential regularization effect. Unfortunately, this vanilla method is inefficient in training because there are more than one hundred task-layer combinations. 1 Based on the second finding, to approximate the vanilla method, we instead feed a single relative task to each hidden layer as a regularization auxiliary in a coarseto-fine manner ( \u00a73.1). Furthermore, we design another regularization criterion to encourage predictive decision consistency between a pair of adjacent hidden layers, which leads to better approximated regularization effect ( \u00a73.2). Our method is simple to implement and efficient for training and testing. Figure 1 illustrates the representation regularization framework. To summarize, our contributions are as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 1058,
"end": 1066,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose an approach to understand hidden representation of multilayer NMT by measuring their feature generalization ability across relative tasks constructed by a hierarchical clustering tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose two simple yet effective methods to regularize the hidden representation. These two methods serve as trade-offs between regularization coverage and efficiency with respect to the tree-induced tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct experiments on two widely used datasets and obtain consistent improvements (up to +1.3 BLEU) over the current SOTA Transformer (Vaswani et al., 2017) model.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
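{
"text": "As referenced above, here is a schematic PyTorch-style sketch of the layer-wise coarse-to-fine regularization; the module, the hyper-parameters alpha and beta, and the specific KL-based consistency term are illustrative assumptions rather than the authors' released code:
# Hedged sketch: auxiliary cluster-prediction heads on intermediate decoder
# layers (coarser labels at lower layers, finer at higher layers), plus an
# optional consistency term between adjacent layers that share a label space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineRegularizer(nn.Module):
    def __init__(self, d_model, cluster_sizes, alpha=1.0, beta=1.0):
        super().__init__()
        # one auxiliary classifier per decoder layer
        self.heads = nn.ModuleList(nn.Linear(d_model, k) for k in cluster_sizes)
        self.alpha, self.beta = alpha, beta

    def forward(self, layer_states, cluster_targets):
        # layer_states: list of (batch, time, d_model) hidden states, one per layer
        # cluster_targets: list of (batch, time) cluster ids, one granularity per layer
        aux, cons, prev_logp = 0.0, 0.0, None
        for head, h, tgt in zip(self.heads, layer_states, cluster_targets):
            logits = head(h)
            aux = aux + F.cross_entropy(logits.transpose(1, 2), tgt)  # auxiliary relative task
            logp = F.log_softmax(logits, dim=-1)
            if prev_logp is not None and logp.size(-1) == prev_logp.size(-1):
                # encourage adjacent layers to make consistent predictions
                cons = cons + F.kl_div(logp, prev_logp.exp(), reduction='batchmean')
            prev_logp = logp
        return self.alpha * aux + self.beta * cons
The returned value would simply be added to the main translation loss during training; the auxiliary heads are dropped at test time, which is consistent with the paper's claim of no extra decoding overhead.
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},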
{
"text": "In this section, we first introduce some background knowledge and notations of the multilayer NMT model. Then, we present a simple approach to better understand hidden representation through transfer learning. By analyzing the feature generalization ability, we draw some constructive conclusions which are used for designing regularization methods in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding Hidden Representation",
"sec_num": "2"
},
{
"text": "Suppose x = x 1 , \u2022 \u2022 \u2022 , x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Notations",
"sec_num": "2.1"
},
{
"text": "|x| is a source sentence, i.e. a sequence of source tokens, and a target sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Notations",
"sec_num": "2.1"
},
{
"text": "y = y 1 , \u2022 \u2022 \u2022 , y |y| is a translation of x,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Notations",
"sec_num": "2.1"
},
{
"text": "where each y t in y belongs to Y, the target vocabulary. A translation model minimizes the following chain-rule factorized negative log-likelihood loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Notations",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "mle = \u2212 log P (y | x; \u03b8) = \u2212 t log P (y t | x, y "
},
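{
"text": "To make the factorized loss concrete, a tiny numeric sketch (toy per-step probabilities, not model outputs) checks that the sentence-level negative log-likelihood equals the sum of per-step token losses:
# loss_mle = -log P(y|x) = -sum_t log P(y_t | x, y_<t)
import numpy as np

step_probs = np.array([0.4, 0.6, 0.9])   # toy values of P(y_t | x, y_<t; theta)
nll = -np.sum(np.log(step_probs))        # chain-rule factorized NLL
assert np.isclose(nll, -np.log(np.prod(step_probs)))
print(nll)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Notations",
"sec_num": "2.1"
},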
"TABREF3": {
"html": null,
"text": "BLEU comparison on the LDC dataset.",
"type_str": "table",
"num": null,
"content": "
"
},
"TABREF5": {
"html": null,
"text": "BLEU comparison on the WMT14 dataset.",
"type_str": "table",
"num": null,
"content": "Here MT13 and MT14 denote newstest2013 and new- |
stest2014, which are used as development and test set |
respectively. |
"
},
"TABREF7": {
"html": null,
"text": "Method MT02 MT03 MT04 MT05 MT06 MT08 Avg. Baseline 46.08 44.09 46.50 44.45 45.26 37.10 43.48 L2 46.67 44.50 47.35 45.02 46.20 38.43 44.30 L3 46.40 44.65 46.90 45.02 45.95 37.92 44.08 L4 46.35 44.30 46.97 45.10 46.06 37.31 43.95 L5 46.29 44.57 46.97 44.75 45.45 37.74 43.",
"type_str": "table",
"num": null,
"content": " | | 89 |
HR | 46.28 | 44.04 47.80 44.56 45.56 38.17 44.08 |
SHR | 47.05 | 44.80 48.15 45.55 46.30 39.02 44.78 |
"
},
"TABREF8": {
"html": null,
"text": "BLEU comparison on the LDC dataset with independently regularized layers.",
"type_str": "table",
"num": null,
"content": ""
}
}
}
}