{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:09.196245Z" }, "title": "Domain Adaptation for NMT via Filtered Iterative Back-Translation", "authors": [ { "first": "Surabhi", "middle": [], "last": "Kumari", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "surabhi.kumari6@tcs.com" }, { "first": "Nikhil", "middle": [], "last": "Jaiswal", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "nikhil.jais@tcs.com" }, { "first": "Mayur", "middle": [], "last": "Patidar", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "patidar.mayur@tcs.com" }, { "first": "Manasi", "middle": [], "last": "Patwardhan", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "manasi.patwardhan@tcs.com" }, { "first": "Shirish", "middle": [], "last": "Karande", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "shirish.karande@tcs.com" }, { "first": "Puneet", "middle": [], "last": "Agarwal", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "dr.puneet.a@outlook.com" }, { "first": "Lovekesh", "middle": [], "last": "Vig", "suffix": "", "affiliation": { "laboratory": "", "institution": "TCS Research", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "lovekesh.vig@tcs.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Domain-specific Neural Machine Translation (NMT) model can provide improved performance, however, it is difficult to always access a domain-specific parallel corpus. Iterative Back-Translation can be used for fine-tuning an NMT model for a domain even if only a monolingual domain corpus is available. The quality of synthetic parallel corpora in terms of closeness to in-domain sentences can play an important role in the performance of the translation model. Recent works have shown that filtering at different stages of the back translation and weighting the sentences can provide state-of-the-art performance. In comparison, in this work, we observe that a simpler filtering approach based on a domain classifier, applied only to the pseudo-training data can consistently perform better, providing performance gains of 1.40, 1.82 and 0.76 in terms of BLEU score for Medical, Law and IT in one direction, and 1.28, 1.60 and 1.60 in the other direction in low resource scenario over competitive baselines. In the high resource scenario, our approach is at par with competitive baselines.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Domain-specific Neural Machine Translation (NMT) model can provide improved performance, however, it is difficult to always access a domain-specific parallel corpus. Iterative Back-Translation can be used for fine-tuning an NMT model for a domain even if only a monolingual domain corpus is available. The quality of synthetic parallel corpora in terms of closeness to in-domain sentences can play an important role in the performance of the translation model. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural Machine Translation (NMT) (Bahdanau et al., 2015; Vaswani et al., 2017) systems rely heavily on the availability of parallel corpora to produce good-quality translations (Koehn and Knowles, 2017). Even for high-resource language pairs, in-domain parallel corpora are scarce. Chu and Wang (2018) address this challenge of domain adaptation with the objective of improving the performance of an NMT system by exploiting in-domain monolingual corpora and out-of-domain parallel corpora for a given language pair.", "cite_spans": [ { "start": 33, "end": 56, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF0" }, { "start": 57, "end": 78, "text": "Vaswani et al., 2017)", "ref_id": "BIBREF34" }, { "start": 181, "end": 206, "text": "(Koehn and Knowles, 2017)", "ref_id": "BIBREF22" }, { "start": 287, "end": 306, "text": "Chu and Wang (2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the current work, we build on top of the existing data-centric approaches for domain adaptation (Chu and Wang, 2018), i.e., Back-Translation (BT) (Sennrich et al., 2016a) and Iterative Back-Translation (IBT) (Hoang et al., 2018). IBT is a variant of BT which leverages both source- and target-side monolingual corpora along with the out-of-domain parallel corpora and trains NMT_{s\u2192t} and NMT_{t\u2192s} in an alternating fashion until convergence, where NMT_{s\u2192t} generates the synthetic parallel corpora for NMT_{t\u2192s} and vice versa.", "cite_spans": [ { "start": 99, "end": 119, "text": "(Chu and Wang, 2018)", "ref_id": "BIBREF5" }, { "start": 128, "end": 149, "text": "Back-Translation (BT)", "ref_id": null }, { "start": 150, "end": 174, "text": "(Sennrich et al., 2016a)", "ref_id": "BIBREF28" }, { "start": 212, "end": 232, "text": "(Hoang et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The performance of NMT is influenced by the quality of the synthetic parallel corpora, as noted by Poncelas et al. (2018) and Fadaee and Monz (2018). Hence, for the domain adaptation task, Dou et al. (2020) proposed a curriculum-based approach (DDSWIBT) for sentence selection from the in-domain monolingual corpora and used Junczys-Dowmunt (2018) for weight assignment to the synthetic parallel corpora. In the initial iterations of IBT, DDSWIBT prefers simple sentences over representative in-domain sentences; in later iterations, it uses more representative sentences instead. Meanwhile, Imankulova et al. (2017) use an in-domain language model (sent-LM) and a \"Round-Trip BLEU\" score to filter synthetic parallel corpora.", "cite_spans": [ { "start": 98, "end": 120, "text": "Poncelas et al. (2018)", "ref_id": "BIBREF27" }, { "start": 123, "end": 145, "text": "Fadaee and Monz (2018)", "ref_id": "BIBREF8" }, { "start": 187, "end": 204, "text": "Dou et al. (2020)", "ref_id": "BIBREF6" }, { "start": 604, "end": 628, "text": "Imankulova et al. (2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
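To make the "Round-Trip BLEU" criterion concrete, a minimal Python sketch follows. It illustrates the idea behind sent-LM's round-trip filtering, not Imankulova et al.'s implementation: the `translate_s2t`/`translate_t2s` callables and the threshold value are hypothetical stand-ins, while the sentence-level BLEU call is the real sacrebleu API.

```python
# Illustrative sketch of "Round-Trip BLEU" filtering in the spirit of
# Imankulova et al. (2017). Assumes the sacrebleu package; the translation
# callables and the threshold are hypothetical stand-ins.
from sacrebleu.metrics import BLEU

bleu = BLEU(effective_order=True)  # effective order is needed at sentence level

def round_trip_bleu(sentence, translate_s2t, translate_t2s):
    # Translate to the other language and back, then compare the
    # reconstruction against the original sentence.
    reconstruction = translate_t2s(translate_s2t(sentence))
    return bleu.sentence_score(reconstruction, [sentence]).score

def filter_round_trip(sentences, translate_s2t, translate_t2s, threshold=30.0):
    # Keep sentences whose round trip stays close to the original;
    # low scores suggest the corresponding synthetic pair is noisy.
    return [s for s in sentences
            if round_trip_bleu(s, translate_s2t, translate_t2s) >= threshold]
```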
(2018)", "ref_id": "BIBREF27" }, { "start": 123, "end": 145, "text": "Fadaee and Monz (2018)", "ref_id": "BIBREF8" }, { "start": 187, "end": 204, "text": "Dou et al. (2020)", "ref_id": "BIBREF6" }, { "start": 604, "end": 628, "text": "Imankulova et al. (2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a \"classifier augmented filtered iterative back-translation\" (CFIBT) for the domain adaptation task. We train two Convolutional Neural Network (CNN) (Kim, 2014) based binary classifiers, one in source and the other in the target language on the combination of in-domain and out-of-domain corpora. We use IBT for synthetic parallel corpora generation and classifierbased filtering to remove the pair of sentences where the synthetic sentence in the pair does not belong to the domain. This entire procedure is depicted in Figure 1 . We do not employ sentence selection over the monolingual corpora, or a weighting mechanism for the synthetic corpora, and neither utilize any \"Round-Trip\" criteria for scoring the synthetic parallel corpora. We first train two Base NMT models -one in each direction on the out-of-domain parallel corpora. (b) We train two classifier-based filtering models -one for each source and target language to distinguish between in-domain and out-domain translated sentences. (c) We then use the trained NMT models to translate in-domain monolingual corpora. The translated sentences are then filtered to remove out of domain sentences. The remaining sentences along with their corresponding true source/target sentences are used to curate synthetic parallel data which is then utilized to fine-tune the NMT models and this entire cycle is iterated until convergence.", "cite_spans": [ { "start": 175, "end": 186, "text": "(Kim, 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 547, "end": 555, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the current work, we present our domain adaptation results for the German (de) -English (en) language pair on three different domains -Medical, Law and IT under low and high resource scenarios. In the low resource scenario, our proposed method CFIBT outperforms all the baselines in every domain. In the high resource settings, CFIBT outperforms the baselines in most of the scenarios, whereas it performs competitively with the best baseline results in the rest of the scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows: We describe related work in Section 2, our problem statement in Section 3, the proposed approach in Section 4. We present the results of the proposed and other baseline approaches in Section 5 and conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the current work, our objective is to improve the performance of the NMT model on in-domain sentences given out-of-domain parallel corpora and indomain monolingual corpora in both source and target language, which is known as domain-adaptation for NMT (Chu and Wang, 2018; . Approaches for domain-adaptation (Chu and Wang, 2018) are categorized into data-centric and modelcentric. 
Data-centric approaches for domain adaptation focus on the use of in-domain monolingual corpora (Zhang and Zong, 2016; Cheng et al., 2016), synthetic corpora (Sennrich et al., 2016a; Hoang et al., 2018; Hu et al., 2019), or parallel corpora (Luong and Manning, 2015; Chu et al., 2017) along with the out-of-domain parallel corpora. On the other hand, model-centric approaches modify the NMT architecture to include domain information, i.e., domain tags (Britz et al., 2017) or domain embeddings alongside word embeddings (Kobus et al., 2017), or assign higher weights to in-domain sentences as compared to out-of-domain sentences (Wang et al., 2017). In our current work, we use the data-centric approach for domain adaptation to generate synthetic parallel data via Iterative Back-Translation. Shimodaira (2000); Jiang and Zhai (2007); Foster et al. (2010); S\u00f8gaard (2011); S\u00f8gaard (2013) have proposed different instance-weighting based approaches for domain adaptation in NLP, where in-domain instances are assigned more weight as compared to out-of-domain instances. In NMT, Poncelas et al. (2018) and Fadaee and Monz (2018) observe that noisy sentences in synthetic parallel corpora can affect the performance of the translation model. The Round-Trip BLEU score (Papineni et al., 2002) between the authentic and synthetic versions of the same sentence is used by Imankulova et al. (2017, 2019) to filter noisy synthetic corpora; they also use a language model trained on in-domain monolingual data to filter noisy sentences. Similarly, Jaiswal et al. (2020) use embeddings of the source and synthetic corpora to filter out noisy pairs. Instead of filtering the noisy sentences from the training data, He et al. (2016), Zhang et al. (2018) and Wang et al. (2019) assign lower weights to them during model training. Dou et al. (2020) use a variant of Moore and Lewis (2010) for data selection from the in-domain monolingual corpora and use Junczys-Dowmunt (2018) for weight assignment to the synthetic corpora generated by IBT. 
In the current approach, we use the whole in-domain monolingual data and filter the noisy synthetic corpora with the help of a simple binary classifier trained on in-domain and out-of-domain corpora.", "cite_spans": [ { "start": 255, "end": 275, "text": "(Chu and Wang, 2018;", "ref_id": "BIBREF5" }, { "start": 311, "end": 331, "text": "(Chu and Wang, 2018)", "ref_id": "BIBREF5" }, { "start": 482, "end": 504, "text": "(Zhang and Zong, 2016;", "ref_id": null }, { "start": 505, "end": 524, "text": "Cheng et al., 2016)", "ref_id": "BIBREF2" }, { "start": 545, "end": 569, "text": "(Sennrich et al., 2016a;", "ref_id": "BIBREF28" }, { "start": 570, "end": 589, "text": "Hoang et al., 2018;", "ref_id": "BIBREF11" }, { "start": 590, "end": 606, "text": "Hu et al., 2019)", "ref_id": "BIBREF13" }, { "start": 629, "end": 654, "text": "(Luong and Manning, 2015;", "ref_id": "BIBREF23" }, { "start": 655, "end": 672, "text": "Chu et al., 2017)", "ref_id": "BIBREF3" }, { "start": 839, "end": 859, "text": "(Britz et al., 2017)", "ref_id": "BIBREF1" }, { "start": 900, "end": 920, "text": "(Kobus et al., 2017)", "ref_id": "BIBREF20" }, { "start": 1009, "end": 1028, "text": "(Wang et al., 2017)", "ref_id": "BIBREF36" }, { "start": 1175, "end": 1192, "text": "Shimodaira (2000)", "ref_id": "BIBREF30" }, { "start": 1195, "end": 1216, "text": "Jiang and Zhai (2007)", "ref_id": "BIBREF17" }, { "start": 1645, "end": 1668, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF26" }, { "start": 1746, "end": 1769, "text": "Imankulova et al. (2017", "ref_id": "BIBREF14" }, { "start": 1770, "end": 1796, "text": "Imankulova et al. ( , 2019", "ref_id": "BIBREF15" }, { "start": 2141, "end": 2159, "text": "Wang et al. (2019)", "ref_id": "BIBREF37" }, { "start": 2218, "end": 2235, "text": "Dou et al. (2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Given out-of-domain parallel corpora D_p and in-domain monolingual corpora M_s, M_t in the source and target languages respectively, our objective is to create a pair of in-domain NMT models NMT_{s\u2192t}, NMT_{t\u2192s} which can translate in-domain sentences with high efficacy from the source to the target language (s \u2192 t) and from the target to the source language (t \u2192 s), respectively. Similar to Edunov et al. (2018), we use more D_p in the high-resource scenario than in the low-resource scenario, and the same amount of M_s, M_t in both scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Description", "sec_num": "3" }, { "text": "We describe our proposed approach, \"classifier augmented filtered iterative back-translation\" (CFIBT), in Algorithm 1. 
We assume that we have access to out-of-domain parallel corpora and in-domain monolingual corpora in both the source and target languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "4" }, { "text": "Algorithm 1 (CFIBT): 1: NMT_{s\u2192t} \u2190 Train NMT using D_p for s \u2192 t; 2: NMT_{t\u2192s} \u2190 Train NMT using D_p for t \u2192 s; 3: while NMT_{s\u2192t} and NMT_{t\u2192s} not converged do; 4: M'_s \u2190 Translate M_t using NMT_{t\u2192s}; 5: M'_t \u2190 Translate M_s using NMT_{s\u2192t}; 6: FM'_s \u2190 Filtering(M'_s); 7: FM'_t \u2190 Filtering(M'_t); 8: S_p \u2190 SyntheticParallelData(FM'_s, M_t); 9: S'_p \u2190 SyntheticParallelData(FM'_t, M_s); 10: NMT_{s\u2192t} \u2190 Fine-tune NMT_{s\u2192t} using S_p; 11: NMT_{t\u2192s} \u2190 Fine-tune NMT_{t\u2192s} using S'_p; 12: end while; 13: return NMT_{s\u2192t}, NMT_{t\u2192s}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "4" }, { "text": "First, we train NMT_{s\u2192t} and NMT_{t\u2192s} using the out-of-domain parallel corpora D_p. Then we use these trained models to translate M_s to M'_t and M_t to M'_s, respectively. We then apply the classifier-based filtering technique described below to these translated sentences. The filtered sentences FM'_s and FM'_t, together with their corresponding in-domain monolingual sentences M_t and M_s, are used to curate synthetic parallel data. Thereafter, NMT_{s\u2192t} and NMT_{t\u2192s} are fine-tuned on this synthetic parallel data. This entire process repeats until convergence; we consider NMT_{s\u2192t} and NMT_{t\u2192s} to have converged when there is no improvement in either model compared to the preceding iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "4" }, { "text": "We propose a naive \"classifier-based filtering model\" using a Convolutional Neural Network. The filtering model consists of a binary classifier trained to distinguish between in-domain and out-of-domain sentences. For each domain, we train two such models, one for the source and the other for the target language. The classifier is trained only once at the beginning, and we utilize the same classifier in each successive iteration. Given a translated sentence as input, the classifier predicts the probability of the sentence being in-domain; all translated sentences with a probability greater than a certain threshold are considered in-domain sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Model:", "sec_num": null },
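To make the flow of Algorithm 1 concrete, a minimal Python sketch of the CFIBT loop follows. The `translate`, `fine_tune`, `prob_in_domain` and `converged` interfaces are hypothetical stand-ins (in our setup the NMT models are FairSeq Transformers and the filter is the CNN classifier described above); only the control flow and the classifier-based filtering step mirror the approach.

```python
# Minimal sketch of Algorithm 1 (CFIBT). All model/classifier interfaces are
# hypothetical stand-ins; only the control flow and the filtering step
# follow the approach described above.
def cfibt(nmt_s2t, nmt_t2s, mono_src, mono_tgt, clf_src, clf_tgt, tau=0.5):
    while not converged(nmt_s2t, nmt_t2s):  # no BLEU gain vs. last iteration
        # Back-translate each monolingual side with the opposite-direction model.
        synth_src = [nmt_t2s.translate(t) for t in mono_tgt]  # M'_s
        synth_tgt = [nmt_s2t.translate(s) for s in mono_src]  # M'_t

        # Keep only pairs whose *synthetic* side the classifier deems in-domain.
        s2t_data = [(ms, t) for ms, t in zip(synth_src, mono_tgt)
                    if clf_src.prob_in_domain(ms) > tau]      # (FM'_s, M_t)
        t2s_data = [(mt, s) for mt, s in zip(synth_tgt, mono_src)
                    if clf_tgt.prob_in_domain(mt) > tau]      # (FM'_t, M_s)

        # Fine-tune each direction on its filtered synthetic parallel data.
        nmt_s2t = nmt_s2t.fine_tune(s2t_data)
        nmt_t2s = nmt_t2s.fine_tune(t2s_data)
    return nmt_s2t, nmt_t2s
```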
{ "text": "Example translations (Medical domain, de\u2192en). Source (de): warnhinweis, dass das arzneimittel f\u00fcr kinder unerreichbar und nicht sichtbar aufzubewahren ist | Target (en): special warning that the medicinal product must be stored out of the reach and sight of children | BASE: warning that drugs for children is unvisible and not visible . | CFIBT 1: warning that the medicinal product is being unabsorbed and has not been visible . | CFIBT 2: warning that the medicines for children are not being able and not visible . | CFIBT 3: warning that the medicinal product must be stored out of the reach and sight of children", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Model:", "sec_num": null }, { "text": "Here, we describe the datasets and the training details of our experiments. We also discuss and analyze the results and key observations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "We perform our experiments on the German-English (de-en) language pair. In both low- and high-resource scenarios, we use the same out-of-domain News dataset as used by Dou et al. (2020), which is described in Table 1. For in-domain data, as described in Table 2, we use the same dev and test sets as used by Dou et al. (2020) for Medical (EMEA) and Law (Acquis), and also the same number of monolingual sentences during domain adaptation. In addition to Medical and Law, in the current work we also report results on the IT domain, for which we use the dataset described in Tiedemann (2012).", "cite_spans": [ { "start": 163, "end": 180, "text": "Dou et al. (2020)", "ref_id": "BIBREF6" }, { "start": 305, "end": 322, "text": "Dou et al. (2020)", "ref_id": "BIBREF6" }, { "start": 571, "end": 587, "text": "Tiedemann (2012)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 205, "end": 212, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 251, "end": 258, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Dataset Description", "sec_num": "5.1" }, { "text": "We tokenize the out-of-domain sentence pairs as well as the in-domain sentences using Moses (Koehn et al., 2007) and apply byte-pair encoding (Sennrich et al., 2016b) with 37K merge operations.", "cite_spans": [ { "start": 88, "end": 108, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Description", "sec_num": "5.1" },
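For reference, the tokenization and BPE steps can be approximated in Python with the sacremoses and subword-nmt packages as stand-ins for the original Moses scripts; the file names below are illustrative, and the BPE step is shown for one language side only.

```python
# Sketch of the preprocessing pipeline: Moses-style tokenization followed by
# BPE with 37K merge operations. sacremoses/subword-nmt are stand-ins for the
# Moses scripts used in the paper; all file names are illustrative.
from sacremoses import MosesTokenizer
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

tokenizer = MosesTokenizer(lang="en")
with open("corpus.en") as fin, open("corpus.tok.en", "w") as fout:
    for line in fin:
        fout.write(tokenizer.tokenize(line.strip(), return_str=True) + "\n")

# Learn the BPE merge operations on the tokenized corpus.
with open("corpus.tok.en") as fin, open("bpe.codes", "w") as codes:
    learn_bpe(fin, codes, num_symbols=37000)

# Apply the learned codes to segment the corpus into subword units.
with open("bpe.codes") as codes:
    bpe = BPE(codes)
with open("corpus.tok.en") as fin, open("corpus.bpe.en", "w") as fout:
    for line in fin:
        fout.write(bpe.process_line(line))
```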
{ "text": "We train two types of models, i.e., filtering models and NMTs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "5.2" }, { "text": "Filtering Models: We train the filtering models in English and German for each domain. We use a language model as the filtering model for sent-LM and a classifier for CFIBT. For the language model, we use a one-layer LSTM (Hochreiter and Schmidhuber, 1997) with an embedding size of 512 and a sequence length of 50, using the TF-LM toolkit (Verwimp et al., 2018). The model is trained until convergence with a patience of three. We train on the tokenized monolingual in-domain dataset for each domain in English and German, with vocabulary sizes of \u2248 60K and \u2248 80K respectively. The classifier architecture is inspired by Kim (2014). For training the binary classifier, we use sub-sampled out-of-domain data as one class and in-domain data as the other. The tokenized sentences are used with a vocabulary size of 50K. For the filtering models, we obtain the optimal threshold values based on the development set, where the objective is to maximize the true positives (i.e., in-domain sentences) and minimize the false positives (i.e., out-of-domain sentences) in the synthetic parallel corpus. The overall intuition is that the classifier should help select the in-domain sentences, which can then be utilized to further train the NMT models. In sent-LM, for all domains, we use perplexity thresholds of 60 and 80 to filter out out-of-domain sentences from the synthetic sentences in English and German respectively. For CFIBT, we use 0.6 for Medical and 0.5 for Law and IT as the threshold over the classifier probability for filtering sentences in English and German.", "cite_spans": [ { "start": 236, "end": 270, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF12" }, { "start": 340, "end": 362, "text": "(Verwimp et al., 2018)", "ref_id": "BIBREF35" }, { "start": 619, "end": 629, "text": "Kim (2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "5.2" }, { "text": "NMT: We use the Base Transformer (Vaswani et al., 2017) for our experiments and FairSeq (Ott et al., 2019) for training all the NMT models. We use the out-of-domain parallel corpora to train the initial NMT model, i.e., BASE, in both low- and high-resource scenarios. We fine-tune the model obtained from BASE in BT, IBT, sent-LM and CFIBT with the synthetic parallel in-domain dataset curated with the respective approach. We use a patience of five for all approaches.", "cite_spans": [ { "start": 29, "end": 51, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF34" }, { "start": 88, "end": 106, "text": "(Ott et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "5.2" },
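As an illustration of the classifier just described, below is a minimal PyTorch sketch of a Kim (2014)-style CNN for binary domain classification. The embedding size, filter widths and filter counts are assumptions made for the sketch; only the 50K vocabulary and the decision thresholds (0.6 for Medical, 0.5 for Law and IT) come from the paper.

```python
# Minimal PyTorch sketch of a Kim (2014)-style CNN domain classifier.
# Embedding size, filter widths and filter counts are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class DomainClassifier(nn.Module):
    def __init__(self, vocab_size=50_000, emb_dim=128,
                 num_filters=100, widths=(3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, w) for w in widths])
        self.out = nn.Linear(num_filters * len(widths), 1)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)    # (batch, emb, seq)
        # Convolve with each filter width, then max-pool over time.
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        logit = self.out(torch.cat(feats, dim=1))  # (batch, 1)
        return torch.sigmoid(logit).squeeze(1)     # P(in-domain)

# Synthetic sentences with P(in-domain) above the tuned threshold are kept.
clf = DomainClassifier()
probs = clf(torch.randint(1, 50_000, (2, 40)))     # toy batch of token ids
keep = probs > 0.6                                 # e.g., Medical threshold
```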
{ "text": "Here, we compare and discuss the results of our proposed approach along with other baseline methods. As shown in Table 4, we use the BLEU score (Papineni et al., 2002) to compare CFIBT with the existing baselines BASE (Vaswani et al., 2017), BT (Sennrich et al., 2016a), sent-LM (Imankulova et al., 2017), IBT (Hoang et al., 2018) and DDSWIBT (Dou et al., 2020). Except DDSWIBT, we implemented all baselines.", "cite_spans": [ { "start": 139, "end": 162, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF26" }, { "start": 216, "end": 238, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF34" }, { "start": 244, "end": 268, "text": "(Sennrich et al., 2016a)", "ref_id": "BIBREF28" }, { "start": 279, "end": 304, "text": "(Imankulova et al., 2017)", "ref_id": "BIBREF14" }, { "start": 307, "end": 331, "text": "IBT (Hoang et al., 2018)", "ref_id": null }, { "start": 344, "end": 362, "text": "(Dou et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "We compare all the approaches in two scenarios, viz., high resource and low resource, on three different domains, i.e., Medical, Law and IT, in both directions for the German-English (de-en) language pair on the in-domain test set. With monolingual data only, in both the source and target languages, we get performance gains of 27.56, 18.66 and 24.51 in terms of BLEU score for Medical, Law and IT in one direction (de-en), and 24.1, 12.04 and 22.12 in the other direction (en-de), in the low-resource scenario over BASE. In the low-resource scenario, CFIBT outperformed sent-LM in both directions and in all domains. CFIBT also outperformed sent-LM in the high-resource scenario, except in one direction for the IT domain. In the low-resource scenario, the filtering-based approaches perform better than IBT, and CFIBT outperformed them in all cases. Our results show that CFIBT is effective when the base model is not adequately trained. In the high-resource scenario, the results of CFIBT are comparable with the other baselines; CFIBT outperformed them in both directions for the Law domain and in one direction for the IT domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "Why does CFIBT work? As shown in Figure 2 (Appendix), IBT is trained on all synthetic bilingual sentences without filtering, which may hurt its performance in subsequent iterations, because the current model is used to generate the data for the next iteration. In CFIBT, in contrast, as shown in Figure 3, filtering prevents training of the NMT model on out-of-domain sentence pairs, which leads to a better domain model in subsequent iterations.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 39, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 293, "end": 301, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.3" }, { "text": "In the context of domain adaptation for NMT, we propose a simple and effective approach for filtering the synthetic parallel corpus, which is as good as more involved approaches for the same task. In the low-resource scenario, the proposed approach outperforms all the existing baselines, whereas we obtain results similar to the baselines in the high-resource scenario. As part of future work, we would like to validate our findings on different language pairs and multiple domains, and also to explore combinations of different filtering techniques. Moreover, instead of training separate filtering models for the source and target languages, we would like to use a single multilingual filtering model. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "A.1 Results and Analysis Figure 2 describes the variation in the number of synthetic parallel sentence pairs as well as BLEU score on test data with the different number of iterations for the IBT, sent-LM and CFIBT models for low resource scenario. For IBT, the number of synthetic sentence pairs remains the same for all the iteration. For filtering models, the number of filtered synthetic sentences increases with every upcoming iteration. This increase in good quality synthetic pairs helps in the improvement of the translation model resulting in a better BLEU score. This process is iterated until we do not observe any improvement in the BLEU score compared to the last iteration. Since one direction translation model creates synthetic data for the other direction, we observe that if there is an improvement in the former case for a given iteration, then there is an improvement in the later model for the next iteration. ", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 33, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "A Appendix", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Dr. Gautam Shroff for his valuable comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Effective domain mixing for neural machine translation", "authors": [ { "first": "Denny", "middle": [], "last": "Britz", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Reid", "middle": [], "last": "Pryzant", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "", "issue": "", "pages": "118--126", "other_ids": { "DOI": [ "10.18653/v1/W17-4712" ] }, "num": null, "urls": [], "raw_text": "Denny Britz, Quoc Le, and Reid Pryzant. 2017. Ef- fective domain mixing for neural machine transla- tion. In Proceedings of the Second Conference on Machine Translation, pages 118-126, Copenhagen, Denmark. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semi-supervised learning for neural machine translation", "authors": [ { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Wei", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1965--1974", "other_ids": { "DOI": [ "10.18653/v1/P16-1185" ] }, "num": null, "urls": [], "raw_text": "Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi- supervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1965-1974, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An empirical comparison of domain adaptation methods for neural machine translation", "authors": [ { "first": "Chenhui", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Raj", "middle": [], "last": "Dabre", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "385--391", "other_ids": { "DOI": [ "10.18653/v1/P17-2061" ] }, "num": null, "urls": [], "raw_text": "Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of domain adaptation methods for neural machine translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 385-391, Vancouver, Canada. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A comprehensive empirical comparison of domain adaptation methods for neural machine translation", "authors": [ { "first": "Chenhui", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Raj", "middle": [], "last": "Dabre", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2018, "venue": "Journal of Information Processing", "volume": "26", "issue": "", "pages": "529--538", "other_ids": { "DOI": [ "10.2197/ipsjjip.26.529" ] }, "num": null, "urls": [], "raw_text": "Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2018. A comprehensive empirical comparison of domain adaptation methods for neural machine translation. Journal of Information Processing, 26:529-538.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A survey of domain adaptation for neural machine translation", "authors": [ { "first": "Chenhui", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1304--1319", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chenhui Chu and Rui Wang. 2018. A survey of do- main adaptation for neural machine translation. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 1304-1319, Santa Fe, New Mexico, USA. Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Dynamic data selection and weighting for iterative back-translation", "authors": [ { "first": "Zi-Yi", "middle": [], "last": "Dou", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "5894--5904", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.475" ] }, "num": null, "urls": [], "raw_text": "Zi-Yi Dou, Antonios Anastasopoulos, and Graham Neubig. 2020. Dynamic data selection and weight- ing for iterative back-translation. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 5894- 5904, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Understanding back-translation at scale", "authors": [ { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Backtranslation sampling by targeting difficult words in neural machine translation", "authors": [ { "first": "Marzieh", "middle": [], "last": "Fadaee", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "436--446", "other_ids": { "DOI": [ "10.18653/v1/D18-1040" ] }, "num": null, "urls": [], "raw_text": "Marzieh Fadaee and Christof Monz. 2018. Back- translation sampling by targeting difficult words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 436-446, Brussels, Bel- gium. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Discriminative instance weighting for domain adaptation in statistical machine translation", "authors": [ { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Goutte", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "451--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Foster, Cyril Goutte, and Roland Kuhn. 2010. Discriminative instance weighting for domain adap- tation in statistical machine translation. 
In Proceed- ings of the 2010 Conference on Empirical Meth- ods in Natural Language Processing, pages 451- 459, Cambridge, MA. Association for Computa- tional Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dual learning for machine translation", "authors": [ { "first": "Di", "middle": [], "last": "He", "suffix": "" }, { "first": "Yingce", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Nenghai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wei-Ying", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "820--828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 820-828. Curran Associates, Inc.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Iterative back-translation for neural machine translation", "authors": [ { "first": "Vu Cong Duy", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", "volume": "", "issue": "", "pages": "18--24", "other_ids": { "DOI": [ "10.18653/v1/W18-2703" ] }, "num": null, "urls": [], "raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "", "issue": "", "pages": "1735--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, pages 1735-80.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Domain adaptation of neural machine translation by lexicon induction", "authors": [ { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Mengzhou", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2989--3001", "other_ids": { "DOI": [ "10.18653/v1/P19-1286" ] }, "num": null, "urls": [], "raw_text": "Junjie Hu, Mengzhou Xia, Graham Neubig, and Jaime Carbonell. 2019. Domain adaptation of neural ma- chine translation by lexicon induction. 
In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2989-3001, Florence, Italy. Association for Computational Lin- guistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improving low-resource neural machine translation with filtered pseudo-parallel corpus", "authors": [ { "first": "Aizhan", "middle": [], "last": "Imankulova", "suffix": "" }, { "first": "Takayuki", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Asian Translation (WAT2017)", "volume": "", "issue": "", "pages": "70--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aizhan Imankulova, Takayuki Sato, and Mamoru Ko- machi. 2017. Improving low-resource neural ma- chine translation with filtered pseudo-parallel cor- pus. In Proceedings of the 4th Workshop on Asian Translation (WAT2017), pages 70-78, Taipei, Tai- wan. Asian Federation of Natural Language Process- ing.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Filtered pseudo-parallel corpus improves low-resource neural machine translation", "authors": [ { "first": "Aizhan", "middle": [], "last": "Imankulova", "suffix": "" }, { "first": "Takayuki", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2019, "venue": "ACM Trans. Asian Low-Resour. Lang. Inf. Process", "volume": "19", "issue": "2", "pages": "", "other_ids": { "DOI": [ "10.1145/3341726" ] }, "num": null, "urls": [], "raw_text": "Aizhan Imankulova, Takayuki Sato, and Mamoru Komachi. 2019. Filtered pseudo-parallel corpus improves low-resource neural machine translation. ACM Trans. Asian Low-Resour. Lang. Inf. Process., 19(2).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improving NMT via filtered back translation", "authors": [ { "first": "Nikhil", "middle": [], "last": "Jaiswal", "suffix": "" }, { "first": "Mayur", "middle": [], "last": "Patidar", "suffix": "" }, { "first": "Surabhi", "middle": [], "last": "Kumari", "suffix": "" }, { "first": "Manasi", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Shirish", "middle": [], "last": "Karande", "suffix": "" }, { "first": "Puneet", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Lovekesh", "middle": [], "last": "Vig", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 7th Workshop on Asian Translation", "volume": "", "issue": "", "pages": "154--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikhil Jaiswal, Mayur Patidar, Surabhi Kumari, Man- asi Patwardhan, Shirish Karande, Puneet Agarwal, and Lovekesh Vig. 2020. Improving NMT via filtered back translation. In Proceedings of the 7th Workshop on Asian Translation, pages 154-159, Suzhou, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Instance weighting for domain adaptation in NLP", "authors": [ { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "264--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In Pro- ceedings of the 45th Annual Meeting of the Associ- ation of Computational Linguistics, pages 264-271, Prague, Czech Republic. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Dual conditional cross-entropy filtering of noisy parallel corpora", "authors": [ { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "888--895", "other_ids": { "DOI": [ "10.18653/v1/W18-6478" ] }, "num": null, "urls": [], "raw_text": "Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 888-895, Belgium, Brussels. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/D14-1181" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Lin- guistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Domain control for neural machine translation", "authors": [ { "first": "Catherine", "middle": [], "last": "Kobus", "suffix": "" }, { "first": "Josep", "middle": [], "last": "Crego", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "372--378", "other_ids": { "DOI": [ "10.26615/978-954-452-049-6_049" ] }, "num": null, "urls": [], "raw_text": "Catherine Kobus, Josep Crego, and Jean Senellart. 2017. Domain control for neural machine transla- tion. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 372-378, Varna, Bulgaria. 
IN- COMA Ltd.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180, Prague, Czech Republic. As- sociation for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Six challenges for neural machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Knowles", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Neural Machine Translation", "volume": "", "issue": "", "pages": "28--39", "other_ids": { "DOI": [ "10.18653/v1/W17-3204" ] }, "num": null, "urls": [], "raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation, pages 28-39, Vancouver. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Stanford neural machine translation systems for spoken language domain", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spo- ken language domain. 
In International Workshop on Spoken Language Translation.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Intelligent selection of language model training data", "authors": [ { "first": "C", "middle": [], "last": "Robert", "suffix": "" }, { "first": "William", "middle": [], "last": "Moore", "suffix": "" }, { "first": "", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 Conference Short Papers", "volume": "", "issue": "", "pages": "220--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Pro- ceedings of the ACL 2010 Conference Short Papers, pages 220-224, Uppsala, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Investigating backtranslation in neural machine translation", "authors": [ { "first": "Alberto", "middle": [], "last": "Poncelas", "suffix": "" }, { "first": "Dimitar", "middle": [], "last": "Sht", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Shterionov", "suffix": "" }, { "first": "", "middle": [], "last": "Way", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alberto Poncelas, Dimitar Sht. Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 2018. Investigating backtranslation in neu- ral machine translation. 
CoRR, abs/1804.06189.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": { "DOI": [ "10.18653/v1/P16-1009" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Improving predictive inference under covariate shift by weighting the loglikelihood function", "authors": [ { "first": "Hidetoshi", "middle": [], "last": "Shimodaira", "suffix": "" } ], "year": 2000, "venue": "Journal of Statistical Planning and Inference", "volume": "90", "issue": "2", "pages": "227--244", "other_ids": { "DOI": [ "10.1016/S0378-3758(00)00115-4" ] }, "num": null, "urls": [], "raw_text": "Hidetoshi Shimodaira. 2000. Improving predictive in- ference under covariate shift by weighting the log- likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Data point selection for crosslanguage adaptation of dependency parsers", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "682--686", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross- language adaptation of dependency parsers. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 682-686, Portland, Ore- gon, USA. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Semi-supervised learning and domain adaptation in natural language processing", "authors": [ { "first": "Anders", "middle": [], "last": "Sgaard", "suffix": "" } ], "year": 2013, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "6", "issue": "2", "pages": "1--103", "other_ids": { "DOI": [ "10.2200/S00497ED1V01Y201304HLT021" ] }, "num": null, "urls": [], "raw_text": "Anders Sgaard. 2013. Semi-supervised learning and domain adaptation in natural language processing. Synthesis Lectures on Human Language Technolo- gies, 6(2):1-103.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Parallel data, tools and interfaces in OPUS", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "2214--2218", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218. European Language Resources Association (ELRA).", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "TF-LM: TensorFlow-based language modeling toolkit", "authors": [ { "first": "Lyan", "middle": [], "last": "Verwimp", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Van Hamme", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Wambacq", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lyan Verwimp, Hugo Van hamme, and Patrick Wambacq. 2018. TF-LM: TensorFlow-based lan- guage modeling toolkit. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018), Miyazaki, Japan. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Instance weighting for neural machine translation domain adaptation", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Lemao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kehai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1482--1488", "other_ids": { "DOI": [ "10.18653/v1/D17-1155" ] }, "num": null, "urls": [], "raw_text": "Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482-1488, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Improving back-translation with uncertainty-based confidence estimation", "authors": [ { "first": "Shuo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D19-1073" ] }, "num": null, "urls": [], "raw_text": "Shuo Wang, Yang Liu, Chao Wang, Huanbo Luan, and Maosong Sun. 2019. Improving back-translation with uncertainty-based confidence estimation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Overall procedure of our proposed approach. (a)" }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "We represent the variation in Synthetic Parallel Sentence Pairs and BLEU scores for different number of iterations in IBT, CFIBT and sent-LM models for low resources. The numerical value represents the iteration number for that particular model (a) & (b) represents de-en and en-de models for Law domain, (c) & (d) represents de-en and en-de models for Medical domain, (e) & (f) represents de-en and en-de models for IT domain" }, "TABREF1": { "html": null, "content": "
Dataset  Monolingual  Dev   Test
Medical  400K         2K    2K
Law      500K         2K    2K
IT       240K         2.5K  1.8K
", "type_str": "table", "text": "Out-of-domain Dataset description in high and low resource scenario, where Train, Dev and Test set consists of bilingual sentences.", "num": null }, "TABREF2": { "html": null, "content": "", "type_str": "table", "text": "In-domain Dataset description for three domains Medical, Law and IT where Monolingual refers to in-domain sentences both in source and target language. Dev and Test set consists of in-domain bilingual sentences.", "num": null }, "TABREF3": { "html": null, "content": "
", "type_str": "table", "text": "Algorithm 1 Filtering Augmented Iterative Back-Translation for Domain Adaptation Require: D p Out-of-Domain parallel Corpora Require: M s In-Domain monolingual Corpora in source language Require: M t In-Domain monolingual Corpora in target language 1:", "num": null }, "TABREF4": { "html": null, "content": "
", "type_str": "table", "text": "", "num": null }, "TABREF5": { "html": null, "content": "
", "type_str": "table", "text": "en-de de-en en-de de-en en-de High Resource BASE 33.61 24.98 33.07 23.33 21.93 16.27 BT 41.05 36.32 38.27 28.32 35.31 24.80 sent-LM 47.44 37.85 40.82 30.35 39.24 30.11", "num": null }, "TABREF6": { "html": null, "content": "
Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018. Joint training for neural machine translation models with monolingual data. In AAAI.
", "type_str": "table", "text": "9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 791-802, Hong Kong, China. Association for Computational Linguistics. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Austin, Texas. Association for Computational Linguistics.", "num": null } } } }