{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:09:45.976029Z"
},
"title": "Improving Autoregressive NMT with Non-Autoregressive Model",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "long.zhou@nlpr.ia.ac.cn"
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "jjzhang@nlpr.ia.ac.cn"
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CASIA",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "cqzong@nlpr.ia.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Autoregressive neural machine translation (NMT) models are often used to teach nonautoregressive models via knowledge distillation. However, there are few studies on improving the quality of autoregressive translation (AT) using non-autoregressive translation (NAT). In this work, we propose a novel Encoder-NAD-AD framework for NMT, aiming at boosting AT with global information produced by NAT model. Specifically, under the semantic guidance of source-side context captured by the encoder, the nonautoregressive decoder (NAD) first learns to generate target-side hidden state sequence in parallel. Then the autoregressive decoder (AD) performs translation from left to right, conditioned on source-side and target-side hidden states. Since AD has global information generated by low-latency NAD, it is more likely to produce a better translation with less time delay. Experiments on WMT14 En\u21d2De, WMT16 En\u21d2Ro, and IWSLT14 De\u21d2En translation tasks demonstrate that our framework achieves significant improvements with only 8% speed degeneration over the autoregressive NMT.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Autoregressive neural machine translation (NMT) models are often used to teach nonautoregressive models via knowledge distillation. However, there are few studies on improving the quality of autoregressive translation (AT) using non-autoregressive translation (NAT). In this work, we propose a novel Encoder-NAD-AD framework for NMT, aiming at boosting AT with global information produced by NAT model. Specifically, under the semantic guidance of source-side context captured by the encoder, the nonautoregressive decoder (NAD) first learns to generate target-side hidden state sequence in parallel. Then the autoregressive decoder (AD) performs translation from left to right, conditioned on source-side and target-side hidden states. Since AD has global information generated by low-latency NAD, it is more likely to produce a better translation with less time delay. Experiments on WMT14 En\u21d2De, WMT16 En\u21d2Ro, and IWSLT14 De\u21d2En translation tasks demonstrate that our framework achieves significant improvements with only 8% speed degeneration over the autoregressive NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) based on encoder-decoder framework has gained rapid progress over recent years (Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; . All these high-performance NMT models generate target languages from left to right in an autoregressive manner. An obvious limitation of autoregressive translation (AT) is that the inference process can hardly be parallelized, and the inference time is linear with respect to the length of the target sequence.",
"cite_spans": [
{
"start": 112,
"end": 136,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 137,
"end": 159,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 160,
"end": 176,
"text": "Wu et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 177,
"end": 198,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 199,
"end": 220,
"text": "Vaswani et al., 2017;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To speed up the inference of machine translation, non-autoregressive translation (NAT) models have been proposed, which generate all target tokens independently and simultaneously (Gu et al., 2017; Lee et al., 2018; Kaiser et al., 2018; Libovick\u00fd and Helcl, 2018) . Although NAT is successfully trained with the help from an AT model as its teacher via knowledge distillation (Kim and Rush, 2016) , there is no work focusing on improving the quality of AT using NAT. Therefore, a natural question arises, can we boost AT with NAT?",
"cite_spans": [
{
"start": 180,
"end": 197,
"text": "(Gu et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 198,
"end": 215,
"text": "Lee et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 216,
"end": 236,
"text": "Kaiser et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 237,
"end": 263,
"text": "Libovick\u00fd and Helcl, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 376,
"end": 396,
"text": "(Kim and Rush, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel and effective Encoder-NAD-AD framework for NMT, in which the newly added non-autoregressive decoder (NAD) can provide target-side global information when autoregressive decoder (AD) translates, as illustrated in Figure 1 . Briefly speaking, the encoder is first used to encode the source sequence into a sequence of vector representations. NAD then reads the encoder representations and generates a coarse target sequence in parallel. Given the source-side and target-side contexts separately captured by the encoder and NAD, AD learns to generate final translation token by token.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposed model can fully combine two major advantages compared to previous work (Vaswani et al., 2017; Xia et al., 2017) . On the one hand, due to the lower latency during inference of NAT, the decoding efficiency of our proposed framework is only slightly lower than Figure 2 : The extended Transformer translation model that exploits global information produced by NAT. We omit the residual connection and layer normalization in each sub-layer for simplicity. the standard NMT models, as shown in Figure 1 . On the other hand, since AD can asses the global target-side context provided by NAD, it has the potential to generate a better translation by fully exploiting source-side and target-side contexts. We conduct massive experiments on WMT14 En\u21d2De, WMT16 En\u21d2Ro and IWSLT14 De\u21d2En translations tasks. Experimental results demonstrate that our proposed model achieves substantial improvements with only 8% degradation in decoding efficiency compared to the standard NMT.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 107,
"end": 124,
"text": "Xia et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 503,
"end": 511,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Self-Attention Source Embedding Copied Source Embedding Softmax Feed-Forward Enc-NAD Cross-Attention Feed-Forward Unmask Self- Attention \uf0b4 N Position Attention Target Embedding Enc-AD Cross-Attention Feed-Forward Mask Self- Attention NAD-AD Cross-Attention Softmax \uf0b4 N \uf0b4 N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal in this work is to improve autoregressive NMT using the non-autoregressive model with lower latency during inference. Figure 2 shows the model architecture of the proposed framework. Next, we will detail individual components and introduce an algorithm for training and inference.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 135,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Framework",
"sec_num": "2"
},
{
"text": "The neural encoder of our model is identical to that of the dominant Transformer model, which is modeled using the self-attention network. The encoder is composed of a stack of N identical layers, each of which has two sub-layers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Encoder",
"sec_num": "2.1"
},
{
"text": "h l = LN(h l\u22121 + MHAtt(h l\u22121 , h l\u22121 , h l\u22121 )) h l = LN( h l + FFN( h l )) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Encoder",
"sec_num": "2.1"
},
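{
"text": "To make Eq. (1) concrete, the following is a minimal sketch of one encoder layer in PyTorch; the class and parameter names are illustrative assumptions, not the authors' implementation.\n\nimport torch.nn as nn\n\nclass EncoderLayer(nn.Module):\n    def __init__(self, d_model=512, n_heads=8, d_ff=2048):\n        super().__init__()\n        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)\n        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))\n        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)\n\n    def forward(self, h):\n        # h: [batch, src_len, d_model], i.e. the previous-layer states h^{l-1}\n        a, _ = self.self_attn(h, h, h)               # MHAtt(h^{l-1}, h^{l-1}, h^{l-1})\n        h_mid = self.ln1(h + a)                      # first residual connection + LayerNorm\n        return self.ln2(h_mid + self.ffn(h_mid))     # second residual connection + LayerNorm, giving h^l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Encoder",
"sec_num": "2.1"
},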
{
"text": "where the superscript l indicates layer depth, h l denotes the source hidden state of l-th layer, LN is layer normalization, FFN means feed-forward networks, and MHAtt denotes the multi-head attention mechanism (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 211,
"end": 233,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Encoder",
"sec_num": "2.1"
},
{
"text": "We initialize the non-autoregressive decoder inputs using copied source inputs from the encoder side by the fertility mechanism (Gu et al., 2017) . For each layer in non-autoregressive decoder, the lowest sublayer is the unmasked multi-head self-attention network, and it also uses residual connections around each of the sublayers, followed by layer normalization.",
"cite_spans": [
{
"start": 128,
"end": 145,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Autoregressive Decoder",
"sec_num": "2.2"
},
{
"text": "z l 1 = LN(z l\u22121 + MHAtt(z l\u22121 , z l\u22121 , z l\u22121 )) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Autoregressive Decoder",
"sec_num": "2.2"
},
{
"text": "The second sub-layer is a positional attention. We follow (Gu et al., 2017) and use the positional encoding p as both query and key and the decoder states as the value:",
"cite_spans": [
{
"start": 58,
"end": 75,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Autoregressive Decoder",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z l 2 = LN(z l 1 + MHAtt(z l 1 , p l , p l ))",
"eq_num": "(3)"
}
],
"section": "Non-Autoregressive Decoder",
"sec_num": "2.2"
},
{
"text": "The third sub-layer is Enc-NAD cross-attention that integrates the representation of corresponding source sentence, and the fourth sub-layer is a FFN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Autoregressive Decoder",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z l 3 = LN(z l 2 + MHAtt(z l 2 , h N , h N )) z l = LN(z l 3 + FFN(z l 3 ))",
"eq_num": "(4)"
}
],
"section": "Non-Autoregressive Decoder",
"sec_num": "2.2"
},
{
"text": "where h N is the source hidden state of top layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Autoregressive Decoder",
"sec_num": "2.2"
},
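{
"text": "A minimal, hypothetical sketch of one NAD layer implementing Eqs. (2)-(4), again in PyTorch with illustrative names; the positional attention follows the textual description (positional encodings as query and key, decoder states as value).\n\nimport torch.nn as nn\n\nclass NADLayer(nn.Module):\n    def __init__(self, d_model=512, n_heads=8, d_ff=2048):\n        super().__init__()\n        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)    # unmasked self-attention\n        self.pos_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)     # positional attention\n        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)   # Enc-NAD cross-attention\n        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))\n        self.ln = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])\n\n    def forward(self, z, p, h_top):\n        # z: NAD states z^{l-1}, p: positional encodings p^l, h_top: encoder top-layer states h^N\n        a, _ = self.self_attn(z, z, z)             # Eq. (2): unmasked self-attention\n        z1 = self.ln[0](z + a)\n        a, _ = self.pos_attn(p, p, z1)             # Eq. (3): positions as query/key, states as value\n        z2 = self.ln[1](z1 + a)\n        a, _ = self.cross_attn(z2, h_top, h_top)   # Eq. (4): Enc-NAD cross-attention\n        z3 = self.ln[2](z2 + a)\n        return self.ln[3](z3 + self.ffn(z3))       # Eq. (4): position-wise FFN, giving z^l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Autoregressive Decoder",
"sec_num": "2.2"
},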
{
"text": "For each layer in autoregressive decoder, the lowest sub-layer is the masked multi-head self-attention network:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
{
"text": "s l 1 = LN(s l\u22121 + MHAtt(s l\u22121 , s l\u22121 , s l\u22121 )) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
{
"text": "The second sub-layer is NAD-AD cross-attention that integrates non-autoregressive sequence context into autoregressive decoder:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
{
"text": "s l 2 = LN(s l 1 + MHAtt(s l 1 , z N , z N )) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
{
"text": "In addition, the decoder both stacks Enc-AD crossattention and FFN sub-layers to seek task-relevant input semantics to bridge the gap between the input and output languages:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s l 3 = LN(s l 2 + MHAtt(s l 2 , h N , h N )) s l = LN(s l 3 + FFN(s l 3 ))",
"eq_num": "(7)"
}
],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
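{
"text": "A minimal, hypothetical sketch of one AD layer implementing Eqs. (5)-(7) under the same PyTorch assumptions; the boolean causal mask realizes the masked self-attention of Eq. (5).\n\nimport torch\nimport torch.nn as nn\n\nclass ADLayer(nn.Module):\n    def __init__(self, d_model=512, n_heads=8, d_ff=2048):\n        super().__init__()\n        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)   # masked self-attention\n        self.nad_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)    # NAD-AD cross-attention\n        self.enc_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)    # Enc-AD cross-attention\n        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))\n        self.ln = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])\n\n    def forward(self, s, z_top, h_top):\n        # s: AD states s^{l-1}, z_top: NAD top-layer states z^N, h_top: encoder top-layer states h^N\n        t = s.size(1)\n        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=s.device), diagonal=1)  # True = blocked future position\n        a, _ = self.self_attn(s, s, s, attn_mask=causal)   # Eq. (5): masked self-attention\n        s1 = self.ln[0](s + a)\n        a, _ = self.nad_attn(s1, z_top, z_top)             # Eq. (6): attend to the NAD draft states\n        s2 = self.ln[1](s1 + a)\n        a, _ = self.enc_attn(s2, h_top, h_top)             # Eq. (7): attend to the encoder states\n        s3 = self.ln[2](s2 + a)\n        return self.ln[3](s3 + self.ffn(s3))               # Eq. (7): position-wise FFN, giving s^l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},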
{
"text": "2.4 Training and Inference",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
{
"text": "Given a set of training examples {x (z) , y (z) } Z z=1 , the training algorithm aims to find the model parameters that maximize the likelihood of the training data:",
"cite_spans": [
{
"start": 36,
"end": 39,
"text": "(z)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J(\u03b8) = 1 Z Z z=1 {log P (y (z) ad |x (z) , \u03b8enc, \u03b8 nad , \u03b8 ad ) +\u03bb * log P ( y (z) nad |x (z) , \u03b8enc, \u03b8 nad )}",
"eq_num": "(8)"
}
],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
{
"text": "where y nad is the reference of NAT, which can be obtained from standard NMT model via sequencelevel knowledge distillation (Gu et al., 2017; Lee et al., 2018; Wang et al., 2019) , and \u03bb is a hyperparameter used to balance the preference between the two terms. Once our model is trained, we use the decoding algorithm shown in Figure 1 to translate source language with little time wasted over the autoregressive NMT.",
"cite_spans": [
{
"start": 124,
"end": 141,
"text": "(Gu et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 142,
"end": 159,
"text": "Lee et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 160,
"end": 178,
"text": "Wang et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 327,
"end": 335,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Autoregressive Decoder",
"sec_num": "2.3"
},
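{
"text": "A minimal sketch of the joint objective in Eq. (8), assuming the two decoders expose per-token logits; the function and tensor names are hypothetical, and the default value of lam (\u03bb) is arbitrary.\n\nimport torch.nn.functional as F\n\ndef joint_loss(ad_logits, nad_logits, y_ad, y_nad, lam=0.5, pad_id=0):\n    # ad_logits, nad_logits: [batch, tgt_len, vocab]; y_ad: gold references; y_nad: distilled NAT references\n    ad_nll = F.cross_entropy(ad_logits.transpose(1, 2), y_ad, ignore_index=pad_id)\n    nad_nll = F.cross_entropy(nad_logits.transpose(1, 2), y_nad, ignore_index=pad_id)\n    return ad_nll + lam * nad_nll   # minimizing this sum maximizes the two log-likelihood terms of Eq. (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Inference",
"sec_num": "2.4"
},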
{
"text": "We use 4-gram NIST BLEU (Papineni et al., 2002) as the evaluation metric, and sign-test (Collins et al., 2005) to test for statistical significance.",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF12"
},
{
"start": 88,
"end": 110,
"text": "(Collins et al., 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We conduct experiments on three widely used public machine translation corpora: WMT14 English-German 2 (En\u21d2De), WMT16 English-Romanian 3 (En\u21d2Ro), and IWSLT14 German-English 4 (De\u21d2En), whose training sets consist of 4.5M, 600K, 153K sentence pairs, respectively. We employ 37K, 40K, and 10K shared BPE (Sennrich et al., 2016) tokens for En\u21d2De, En\u21d2Ro, and De\u21d2en respectively. For En\u21d2De, we use newstest2013 as the validation set and newstest2014 as the test set. For En\u21d2Ro, we use newsdev-2016 and newstest-2016 as development and test sets. For De\u21d2En, we use 7K data split from the training set as the validation set and use the concatenation of dev2010, tst2010, tst2011, and tst2012 as the test set, which is widely used in prior works (Bahdanau et al., 2017; Wang et al., 2019) .",
"cite_spans": [
{
"start": 301,
"end": 324,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 737,
"end": 760,
"text": "(Bahdanau et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 761,
"end": 779,
"text": "Wang et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "We build the described models modified from the open-sourced tensor2tensor 5 toolkit. For our proposed model, we employ the Adam optimizer with \u03b2 1 =0.9, \u03b2 2 =0.998, and =10 \u22129 . For En\u21d2De and En\u21d2Ro, we use the hyperparameter settings of base Transformer model as Vaswani et al. (2017) , whose encoder and decoder both have 6 layers, 8 attention-heads, and 512 hidden sizes. We follow Gu et al. (2017) to use the same small Transformer setting for IWSLT14 because of its smaller dataset. For evaluation, we use argmax decoding for NAD, and beam search with a beam size of k=4 and length penalty \u03b1=0.6 for AD. We also re-implement and compare with deliberate network (Xia et al., 2017) based on strong Transformer, which adopts the two-pass decoding method and uses the autoregressive decoding manner for the first decoder. ",
"cite_spans": [
{
"start": 264,
"end": 285,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 385,
"end": 401,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF5"
},
{
"start": 666,
"end": 684,
"text": "(Xia et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "3.2"
},
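{
"text": "For reference, a hypothetical restatement of the reported settings as code; only the numeric values are taken from the text, and the names are illustrative.\n\nimport torch.nn as nn\nimport torch.optim as optim\n\nmodel = nn.Linear(512, 512)   # placeholder module standing in for the full Encoder-NAD-AD model\noptimizer = optim.Adam(model.parameters(), betas=(0.9, 0.998), eps=1e-9)\nbase_config = dict(layers=6, heads=8, hidden_size=512)   # En\u21d2De and En\u21d2Ro; a smaller setting is used for IWSLT14\ndecode_config = dict(nad_decoding='argmax', ad_beam_size=4, length_penalty=0.6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "3.2"
},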
{
"text": "In this section, we evaluate and analyze the proposed approach on En\u21d2De, En\u21d2Ro, and De\u21d2En translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "3.3"
},
{
"text": "We first compare the model parameters and training speed in De\u21d2En for Transformer baseline, deliberate network, and our proposed model, which have 10.3M, 16.3M, and 18.0M parameters, respectively. Although our model uses more parameter than deliberate network due to additional position attention network, its training speed is significantly faster than deliberate network (1.8 steps/s vs. 0.7 steps/s) Translation Quality We report the translation performance in Table 1 , from which we can make the following conclusions: (1) Our proposed model (row 10) significantly outperforms Transformer baseline (row 8) by 0.59, 0.89, and 1.14 BLEU points in three translation tasks, respectively. (2) Compared to the existing deliberate network which uses greedy search for the one-pass decoding, our model can obtain a comparable performance. (3) Our NAT model (row 9) can achieve a competitive or even better model accuracy than previous NAT models (rows 1-3).",
"cite_spans": [],
"ref_spans": [
{
"start": 464,
"end": 471,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Model Complexity",
"sec_num": null
},
{
"text": "Decoding Speed Table 2 shows the decoding efficiency of different models. The deliberate network achieves the translation improvement at the cost of the substantial drop in decoding speed (68% degeneration). However, due to the high efficiency during inference of non-autoregressive models (16\u00d7 speedup than Transformer), the decoding efficiency of our proposed framework is only slightly lower (8% degeneration) than the standard autoregressive Transformer models.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Model Complexity",
"sec_num": null
},
{
"text": "Case Study To better understand how our model works, we present a translation example sampled form De\u21d2En task in Table 3 . The standard AT model incorrectly translates the phrase \"geschrieben sein k\u00f6nnte\" into \"may be\", and omits word \"geschrieben\". This problem is well ad-Source ich sage dann mit meinen eigenen worten, was zwischen diesem ger\u00fcst could :: be ::::::: written between this framework ? dressed by the Encoder-NAD-AD framework, since AD can access the global information contained in the draft sequence generated by NAD, and therefore outputs a better sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Model Complexity",
"sec_num": null
},
{
"text": "There are many design choices in the encoderdecoder framework based on different types of layers, such as RNN-based (Sutskever et al., 2014) , CNN-based (Gehring et al., 2017) , and selfattention based (Vaswani et al., 2017) approaches. Particularly, relying entirely on the attention mechanism, the Transformer introduced by Vaswani et al. (2017) can improve the training speed as well as model performance.",
"cite_spans": [
{
"start": 116,
"end": 140,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 153,
"end": 175,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 202,
"end": 224,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 326,
"end": 347,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In term of speeding up the decoding of the neural Transformer, Gu et al. (2017) modified the autoregressive architecture to directly generate target words in parallel. In past two years, non-autoregressive and semi-autoregressive models have been extensively studied (Oord et al., 2017; Kaiser et al., 2018; Lee et al., 2018; Libovick\u00fd and Helcl, 2018; Wang et al., 2019; Guo et al., 2018; Zhou et al., 2019a) . Previous work shows that NAT can be improved via knowledge distillation from AT models. In contrast, the idea of improving AT with NAT is not well explored.",
"cite_spans": [
{
"start": 63,
"end": 79,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF5"
},
{
"start": 267,
"end": 286,
"text": "(Oord et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 287,
"end": 307,
"text": "Kaiser et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 308,
"end": 325,
"text": "Lee et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 326,
"end": 352,
"text": "Libovick\u00fd and Helcl, 2018;",
"ref_id": "BIBREF10"
},
{
"start": 353,
"end": 371,
"text": "Wang et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 372,
"end": 389,
"text": "Guo et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 390,
"end": 409,
"text": "Zhou et al., 2019a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "The most relevant to our proposed framework is deliberation network (Xia et al., 2017) , which leverages the global information by observing both back and forward information in sequence decoding through a deliberation process. Recently, Zhang et al. (2018) proposed asynchronous bidirectional decoding for NMT (ABD-NMT), which extended the conventional encoder-decoder framework by introducing a backward decoder. Different from ABD-NMT, synchronous bidirectional sequence generation model perform left-to-right decoding and right-to-left decoding simultaneously and interactively (Zhou et al., 2019b; . Besides, Geng et al. (2018) introduced a adaptive multi-pass decoder to standard NMT models. However, the above models improve translation quality while greatly reducing inference efficiency.",
"cite_spans": [
{
"start": 68,
"end": 86,
"text": "(Xia et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 238,
"end": 257,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF22"
},
{
"start": 582,
"end": 602,
"text": "(Zhou et al., 2019b;",
"ref_id": "BIBREF24"
},
{
"start": 614,
"end": 632,
"text": "Geng et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In this work, we propose a novel Encoder-NAD-AD framework for NMT, aiming at improving the quality of autoregressive decoder with global information produced by the newly added nonautoregressive decoder. We extensively evaluate the proposed model on three machine translation tasks (En\u21d2De, En\u21d2Ro, and De\u21d2En). Compared to existing deliberation network (Xia et al., 2017) which suffers from serious decoding speed degradation, our proposed model achieves a significant improvement in translation quality with little degradation of decoding efficiency compared to the stateof-the-art autoregressive NMT.",
"cite_spans": [
{
"start": 351,
"end": 369,
"text": "(Xia et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.statmt.org/wmt14/translation-task.html 3 http://www.statmt.org/wmt16/translation-task.html. 4 https://wit3.fbk.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/tensorflow/tensor2tensor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An actor-critic algorithm for sequence prediction",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Philemon",
"middle": [],
"last": "Brakel",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In Proceedings of ICLR 2017.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Clause restructuring for statistical machine translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Ivona",
"middle": [],
"last": "Ku\u010derov\u00e1",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "531--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, Philipp Koehn, and Ivona Ku\u010derov\u00e1. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd annual meeting on association for computational linguis- tics, pages 531-540. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1243-1252, International Convention Centre, Sydney, Australia. PMLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adaptive multi-pass decoder for neural machine translation",
"authors": [
{
"first": "Xinwei",
"middle": [],
"last": "Geng",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "523--532",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1048"
]
},
"num": null,
"urls": [],
"raw_text": "Xinwei Geng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2018. Adaptive multi-pass decoder for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 523-532, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Non-autoregressive neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR 2017",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Non-autoregressive neural machine translation. In Proceedings of ICLR 2017.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Non-autoregressive neural machine translation with enhanced decoder input",
"authors": [
{
"first": "Junliang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Linli",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2018. Non-autoregressive neural ma- chine translation with enhanced decoder input. In Proceedings of AAAI 2019.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fast decoding in sequence models using discrete latent variables",
"authors": [
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Aurko",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Pamar",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u0141ukasz Kaiser, Aurko Roy, Ashish Vaswani, Niki Pa- mar, Samy Bengio, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of ICML 2018.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sequencelevel knowledge distillation",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1317--1327",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1139"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317-1327. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Elman",
"middle": [],
"last": "Mansimov",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1173--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural se- quence modeling by iterative refinement. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1173- 1182, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "End-toend non-autoregressive neural machine translation with connectionist temporal classification",
"authors": [
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u00fd",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Helcl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3016--3021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jind\u0159ich Libovick\u00fd and Jind\u0159ich Helcl. 2018. End-to- end non-autoregressive neural machine translation with connectionist temporal classification. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3016- 3021, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parallel wavenet: Fast high-fidelity speech synthesis",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Yazhe",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Babuschkin",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Van Den Driessche",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Lockhart",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Luis",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Cobo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stimberg",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.10433"
]
},
"num": null,
"urls": [],
"raw_text": "Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C Cobo, Florian Stimberg, et al. 2017. Parallel wavenet: Fast high-fidelity speech synthesis. arXiv preprint arXiv:1711.10433.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1715-1725. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Advances in Neural Information Processing Systems",
"authors": [
{
"first": "K",
"middle": [
"Q"
],
"last": "Lawrence",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Non-autoregressive machine translation with auxiliary regularization",
"authors": [
{
"first": "Yiren",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In Proceedings of AAAI 2019.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Deliberation networks: Sequence generation beyond one-pass decoding",
"authors": [
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "1784--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass de- coding. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 1784-1794. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Synchronous bidirectional inference for neural sequence generation",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2020,
"venue": "Artif. Intell",
"volume": "281",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiajun Zhang, Long Zhou, Yang Zhao, and Chengqing Zong. 2020. Synchronous bidirectional inference for neural sequence generation. Artif. Intell., 281:103234.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural machine translation: Challenges, progress and future",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiajun Zhang and Chengqing Zong. 2020. Neural ma- chine translation: Challenges, progress and future. volume abs/2004.05809.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Asynchronous bidirectional decoding for neural machine translation",
"authors": [
{
"first": "Xiangwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Rongrong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Hongji",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangwen Zhang, Jinsong Su, Yue Qin, Yang Liu, Ron- grong Ji, and Hongji Wang. 2018. Asynchronous bidirectional decoding for neural machine transla- tion. In Proceedings of AAAI 2018.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sequence generation: From both sides to the middle",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Zhou, Jiajun Zhang, Heng Yu, and Chengqing Zong. 2019a. Sequence generation: From both sides to the middle. In Proceedings of IJCAI 2019.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Synchronous bidirectional neural machine translation",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "91--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Zhou, Jiajun Zhang, and Chengqing Zong. 2019b. Synchronous bidirectional neural machine transla- tion. Transactions of the Association for Computa- tional Linguistics, 7:91-105.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Decoding illustration of our proposed Encoder-NAD-AD framework including an encoder, non-autoregressive decoder (NAD) and autoregressive decoder (AD).",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "::::::::: geschrieben ::::: sein :::::: k\u00f6nnte . Reference then i will say , in my own words , what ::::: could ::: be ::::::: written within this framework . AT i then say to my own words , which :::::: may be between that framework . NAT i i say with my own words , which ::::: could ::: be ::::::: written between this scaffold . Our Model i then say , in my own words , what :::::",
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Decoding efficiency of different models. Latency is computed as average of per sentence decoding time on the test set of De\u21d2En.",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Translation examples from De\u21d2En task. The italic fonts indicate the incomplete translation problem.",
"html": null
}
}
}
}