{ "paper_id": "N19-1043", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:59:43.514405Z" }, "title": "Bi-Directional Differentiable Input Reconstruction for Low-Resource Neural Machine Translation", "authors": [ { "first": "Xing", "middle": [], "last": "Niu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "xingniu@cs.umd.edu" }, { "first": "Weijia", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "weijia@cs.umd.edu" }, { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "marine@cs.umd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We aim to better exploit the limited amounts of parallel text available in low-resource settings by introducing a differentiable reconstruction loss for neural machine translation (NMT). This loss compares original inputs to reconstructed inputs, obtained by backtranslating translation hypotheses into the input language. We leverage differentiable sampling and bi-directional NMT to train models end-to-end, without introducing additional parameters. This approach achieves small but consistent BLEU improvements on four language pairs in both translation directions, and outperforms an alternative differentiable reconstruction strategy based on hidden states.", "pdf_parse": { "paper_id": "N19-1043", "_pdf_hash": "", "abstract": [ { "text": "We aim to better exploit the limited amounts of parallel text available in low-resource settings by introducing a differentiable reconstruction loss for neural machine translation (NMT). This loss compares original inputs to reconstructed inputs, obtained by backtranslating translation hypotheses into the input language. We leverage differentiable sampling and bi-directional NMT to train models end-to-end, without introducing additional parameters. This approach achieves small but consistent BLEU improvements on four language pairs in both translation directions, and outperforms an alternative differentiable reconstruction strategy based on hidden states.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural Machine Translation (NMT) performance degrades sharply when parallel training data is limited (Koehn and Knowles, 2017) . Past work has addressed this problem by leveraging monolingual data (Sennrich et al., 2016a; Ramachandran et al., 2017) or multilingual parallel data (Zoph et al., 2016; Johnson et al., 2017; Gu et al., 2018a) . We hypothesize that the traditional training can be complemented by better leveraging limited training data. 
To this end, we propose a new training objective for NMT by augmenting the standard translation cross-entropy loss with a differentiable input reconstruction loss to further exploit the source side of parallel samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Input reconstruction is motivated by the idea of round-trip translation. Suppose sentence f is translated forward to e using model \u03b8_fe and then translated back to f\u0302 using model \u03b8_ef; then e is more likely to be a good translation if the distance between f\u0302 and f is small (Brislin, 1970) . Prior work applied round-trip translation to monolingual examples and sampled the intermediate translation e from a K-best list generated by model \u03b8_fe using beam search (Cheng et al., 2016) . However, beam search is not differentiable, which prevents back-propagating reconstruction errors to \u03b8_fe. As a result, reinforcement learning algorithms or independent updates to \u03b8_fe and \u03b8_ef were required.", "cite_spans": [ { "start": 272, "end": 287, "text": "(Brislin, 1970)", "ref_id": "BIBREF5" }, { "start": 461, "end": 481, "text": "(Cheng et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "In this paper, we focus on the problem of making input reconstruction differentiable to simplify training. In past work, Tu et al. (2017) addressed this issue by reconstructing source sentences from the decoder's hidden states. However, this reconstruction task can be artificially easy if hidden states over-memorize the input. This approach also requires a separate auxiliary reconstructor, which introduces additional parameters.", "cite_spans": [ { "start": 121, "end": 137, "text": "Tu et al. (2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "We propose instead to combine benefits from differentiable sampling and bi-directional NMT to obtain a compact model that can be trained end-to-end with back-propagation. Specifically,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "\u2022 Translations are sampled using the Straight-Through Gumbel Softmax (STGS) estimator (Jang et al., 2017; Bengio et al., 2013) , which allows back-propagating reconstruction errors.", "cite_spans": [ { "start": 86, "end": 105, "text": "(Jang et al., 2017;", "ref_id": "BIBREF16" }, { "start": 106, "end": 126, "text": "Bengio et al., 2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "\u2022 Our approach builds on the bi-directional NMT model (Niu et al., 2018; Johnson et al., 2017) , which improves low-resource translation by jointly modeling translation in both directions (e.g., Swahili \u2194 English). A single bi-directional model is used as a translator and a reconstructor (i.e., \u03b8_ef = \u03b8_fe) without introducing more parameters.", "cite_spans": [ { "start": 54, "end": 72, "text": "(Niu et al., 2018;", "ref_id": "BIBREF25" }, { "start": 73, "end": 94, "text": "Johnson et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Experiments show that our approach outperforms reconstruction from hidden states. It achieves consistent improvements across various low-resource language pairs and directions, showing its effectiveness in making better use of limited parallel data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Using round-trip translations (f \u2192 e \u2192 f\u0302) as a training signal for NMT usually requires auxiliary models to perform back-translation, and the resulting systems cannot be trained end-to-end without reinforcement learning. For instance, Cheng et al. (2016) added a reconstruction loss for monolingual examples to the training objective. Other work evaluated the quality of e by a language model and f\u0302 by a reconstruction likelihood. Both approaches have symmetric forward and backward translation models, which are updated alternately. This requires policy gradient algorithms for training, which are not always stable.", "cite_spans": [ { "start": 212, "end": 231, "text": "Cheng et al. (2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" },
{ "text": "Back-translation (Sennrich et al., 2016a) performs half of the reconstruction process by generating a synthetic source side for monolingual target language examples: e \u2192 f\u0302. It uses an auxiliary backward model to generate the synthetic data but only updates the parameters of the primary forward model. Iteratively updating forward and backward models (Zhang et al., 2018; Niu et al., 2018) is an expensive solution, as back-translations are regenerated at each iteration.", "cite_spans": [ { "start": 17, "end": 41, "text": "(Sennrich et al., 2016a)", "ref_id": "BIBREF30" }, { "start": 352, "end": 372, "text": "(Zhang et al., 2018;", "ref_id": "BIBREF40" }, { "start": 373, "end": 390, "text": "Niu et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" },
{ "text": "Prior work has sought to simplify the optimization of reconstruction losses by side-stepping beam search. Tu et al. (2017) first proposed to reconstruct NMT input from the decoder's hidden states, while Wang et al. (2018a,b) suggested using both encoder and decoder hidden states to improve translation of dropped pronouns. However, these models might achieve low reconstruction errors by learning to copy the input to hidden states. To avoid copying the input, Artetxe et al. (2018) and Lample et al. (2018) used denoising autoencoders (Vincent et al., 2008) in unsupervised NMT.", "cite_spans": [ { "start": 106, "end": 122, "text": "Tu et al. (2017)", "ref_id": "BIBREF33" }, { "start": 202, "end": 223, "text": "Wang et al. (2018a,b)", "ref_id": null }, { "start": 462, "end": 483, "text": "Artetxe et al. (2018)", "ref_id": "BIBREF0" }, { "start": 488, "end": 508, "text": "Lample et al. (2018)", "ref_id": "BIBREF21" }, { "start": 537, "end": 559, "text": "(Vincent et al., 2008)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" },
{ "text": "Our approach is based instead on the Gumbel Softmax (Jang et al., 2017; Maddison et al., 2017) , which facilitates differentiable sampling of sequences of discrete tokens. It has been successfully applied in many sequence generation tasks, including artificial language emergence for multiagent communication (Havrylov and Titov, 2017) , composing tree structures from text (Choi et al., 2018) , and tasks under the umbrella of generative adversarial networks (Goodfellow et al., 2014) such as generating context-free grammars (Kusner and Hern\u00e1ndez-Lobato, 2016), machine comprehension (Wang et al., 2017) and machine translation (Gu et al., 2018b) .", "cite_spans": [ { "start": 52, "end": 71, "text": "(Jang et al., 2017;", "ref_id": "BIBREF16" }, { "start": 72, "end": 94, "text": "Maddison et al., 2017)", "ref_id": "BIBREF22" }, { "start": 309, "end": 335, "text": "(Havrylov and Titov, 2017)", "ref_id": "BIBREF13" }, { "start": 374, "end": 393, "text": "(Choi et al., 2018)", "ref_id": "BIBREF7" }, { "start": 460, "end": 485, "text": "(Goodfellow et al., 2014)", "ref_id": "BIBREF10" }, { "start": 589, "end": 608, "text": "(Wang et al., 2017)", "ref_id": "BIBREF35" }, { "start": 633, "end": 651, "text": "(Gu et al., 2018b)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" },
{ "text": "NMT is framed as a conditional language model, where the probability of predicting target token e_t at step t is conditioned on the previously generated sequence of tokens e