{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:14:02.736652Z" }, "title": "The UET-ICTU Submissions to the VLSP 2020 News Translation Task", "authors": [ { "first": "Thi-Vinh", "middle": [], "last": "Ngo", "suffix": "", "affiliation": { "laboratory": "", "institution": "TNU", "location": { "region": "Viet Nam" } }, "email": "" }, { "first": "Minh-Thuan", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Engineering and Technology, VNU", "location": { "region": "Viet Nam" } }, "email": "" }, { "first": "Minh", "middle": [], "last": "Cong", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Nguyen", "middle": [], "last": "Hoang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Engineering and Technology, VNU", "location": { "region": "Viet Nam" } }, "email": "" }, { "first": "Hoang-Quan", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Engineering and Technology, VNU", "location": { "region": "Viet Nam" } }, "email": "" }, { "first": "Phuong-Thai", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Engineering and Technology, VNU", "location": { "region": "Viet Nam" } }, "email": "" }, { "first": "Van-Vinh", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Engineering and Technology, VNU", "location": { "region": "Viet Nam" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We participate in the VLSP 2020 Shared Task for Machine Translation, which focuses on news-domain translation in one direction, English \u2192 Vietnamese. Our neural machine translation (NMT) system uses Back Translation (BT) of monolingual data in the target language to generate synthetic training data. In addition, we leverage the Term Frequency-Inverse Document Frequency (TF-IDF) method to select data close to the in-domain corpus from other monolingual and parallel resources. To enhance the effectiveness of the translation system, we also employ other techniques such as fine-tuning and ensemble translation. Our experiments show that the system achieves a significant improvement of up to +16.57 BLEU points over the in-domain baseline system.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We participate in the VLSP 2020 Shared Task for Machine Translation, which focuses on news-domain translation in one direction, English \u2192 Vietnamese. Our neural machine translation (NMT) system uses Back Translation (BT) of monolingual data in the target language to generate synthetic training data. In addition, we leverage the Term Frequency-Inverse Document Frequency (TF-IDF) method to select data close to the in-domain corpus from other monolingual and parallel resources. To enhance the effectiveness of the translation system, we also employ other techniques such as fine-tuning and ensemble translation. 
Our experiments show that the system achieves a significant improvement of up to +16.57 BLEU points over the in-domain baseline system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The University of Engineering and Technology (UET) and Thai Nguyen University of Information and Communication Technology (ICTU) participate in the VLSP 2020 Shared Task for Machine Translation on news-domain translation from English to Vietnamese. Using the datasets from the different domains of the Shared Task, we apply various strategies to improve the quality of translation in the news domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Data selection Data selection techniques help MT systems translate better on a specific domain by eliminating irrelevant data from resources outside the in-domain. This reduces training time while still preserving performance, since smaller datasets are used instead of the large ones. Several methods have been proposed to select sentences close to a background corpus: (Axelrod et al., 2011; van der Wees et al., 2017) compute scores for sentences of an out-of-domain corpus based on the cross-entropy difference (CED) (Moore and Lewis, 2010) from language models; (Wang et al., 2017; Zhang and Xiong, 2018) use sentence embeddings to rank source sentences, a method that is only suitable for recurrent networks in NMT; (Wang et al., 2018; Zhang and Xiong, 2018) investigate the translation probability P(y|x, \u03b8) as a dynamic criterion to extract sentence pairs during the training process; (Peris et al., 2016) train a neural network classifier to classify sentences into negative or positive classes. These works require training either language models or neural networks, and they are less effective in data-sparse situations. (Silva et al., 2018) show empirical results for three strategies: CED (Moore and Lewis, 2010), TF-IDF (Salton and Yang, 1973), and Feature Decay Algorithms (FDA) (Poncelas et al., 2017). They show that the TF-IDF method achieves the best improvements in both BLEU and TER (Translation Error Rate). This technique is simple, fast, and does not require training language models or neural networks. Therefore, in this paper, we leverage it to rank sentences in the scenario where the in-domain corpus is small. The details of this method are presented in Section 3.", "cite_spans": [ { "start": 378, "end": 400, "text": "(Axelrod et al., 2011;", "ref_id": "BIBREF0" }, { "start": 401, "end": 427, "text": "van der Wees et al., 2017)", "ref_id": "BIBREF24" }, { "start": 563, "end": 582, "text": "(Wang et al., 2017;", "ref_id": "BIBREF22" }, { "start": 583, "end": 605, "text": "Zhang and Xiong, 2018)", "ref_id": "BIBREF25" }, { "start": 716, "end": 735, "text": "(Wang et al., 2018;", "ref_id": "BIBREF23" }, { "start": 736, "end": 758, "text": "Zhang and Xiong, 2018)", "ref_id": "BIBREF25" }, { "start": 891, "end": 910, "text": "(Peris et al., 2016", "ref_id": "BIBREF12" }, { "start": 1133, "end": 1153, "text": "(Silva et al., 2018)", "ref_id": "BIBREF18" }, { "start": 1244, "end": 1267, "text": "(Salton and Yang, 1973)", "ref_id": "BIBREF14" }, { "start": 1303, "end": 1326, "text": "(Poncelas et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
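For illustration, a minimal sketch of the Moore-Lewis cross-entropy difference (CED) criterion described above. The `lm.logprob` interface is a hypothetical stand-in for any trained language model (it is not a specific library call), and our own submission uses TF-IDF rather than CED:

```python
def cross_entropy(sentence, lm):
    """Average negative log-probability of a whitespace-tokenized sentence
    under a language model; `lm.logprob(token, context)` is a hypothetical
    interface, not a specific library call."""
    tokens = sentence.split()
    nll = -sum(lm.logprob(tok, tokens[:i]) for i, tok in enumerate(tokens))
    return nll / max(len(tokens), 1)

def ced_score(sentence, in_domain_lm, general_lm):
    """Moore-Lewis cross-entropy difference: lower scores mean the sentence
    looks more like the in-domain corpus and less like the generic one."""
    return cross_entropy(sentence, in_domain_lm) - cross_entropy(sentence, general_lm)

# Selection keeps the k lowest-scoring sentences of the generic corpus:
# selected = sorted(generic_corpus, key=lambda s: ced_score(s, lm_in, lm_gen))[:k]
```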
{ "text": "Using monolingual resources Monolingual data is widely used in machine translation (MT) (Sennrich et al., 2015; Ha et al., 2017; Lample et al., 2018; Siddhant et al., 2020) due to its wide availability. In this paper, we create additional synthetic parallel training data using the BT method of (Sennrich et al., 2015) and investigate its effectiveness in our MT systems by combining it with genuine parallel data.", "cite_spans": [ { "start": 87, "end": 110, "text": "(Sennrich et al., 2015;", "ref_id": "BIBREF15" }, { "start": 111, "end": 127, "text": "Ha et al., 2017;", "ref_id": "BIBREF3" }, { "start": 128, "end": 148, "text": "Lample et al., 2018;", "ref_id": "BIBREF5" }, { "start": 149, "end": 171, "text": "Siddhant et al., 2020)", "ref_id": "BIBREF17" }, { "start": 289, "end": 312, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fine-tuning (Luong and Manning, 2015; Zoph et al., 2016) proposed the fine-tuning process, which transfers some of the learned parameters from a parent model to a child model, and showed significant improvements on many translation tasks. We also fine-tune our systems on a sub-corpus (a smaller corpus extracted from a large corpus) to achieve the best translation effectiveness.", "cite_spans": [ { "start": 12, "end": 37, "text": "(Luong and Manning, 2015;", "ref_id": "BIBREF6" }, { "start": 38, "end": 56, "text": "Zoph et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Ensemble translation Ensemble translation makes it possible to incorporate the outputs of several trained models to enhance translation systems. We also investigate this strategy in our MT system. Our paper demonstrates a substantial improvement in translating the news domain of the VLSP 2020 Shared Task when combining the aforementioned techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 2, we present an overview of neural machine translation, focusing on the transformer architecture. The details of the methods in our paper are presented in Section 3. The settings of the translation systems and the experimental results are discussed in Section 4. Related work is reviewed in Section 5. Finally, conclusions and future work are described in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Neural Machine Translation (Cho et al., 2014; Sutskever et al., 2014) uses memory units such as Gated Recurrent Units (GRU) or Long Short-Term Memory (LSTM) to overcome the exploding or vanishing gradient problem in recurrent networks. These works propose a new end-to-end architecture for MT systems. It includes an encoder that maps a source sentence of n tokens, X = (x_1, x_2, ..., x_n), into a continuous space, and a decoder that generates the predicted sentence Y = (y_1, y_2, ..., y_m) of m tokens in the target language.", "cite_spans": [ { "start": 27, "end": 45, "text": "(Cho et al., 2014;", "ref_id": "BIBREF2" }, { "start": 46, "end": 69, "text": "Sutskever et al., 2014)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "The attention mechanism (Luong et al., 2015a; Bahdanau et al., 2015) can be seen as a soft alignment between a source sentence and the corresponding target sentence, and it enhances the effectiveness of these systems.", "cite_spans": [ { "start": 24, "end": 45, "text": "(Luong et al., 2015a;", "ref_id": "BIBREF7" }, { "start": 46, "end": 68, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "Because recurrent neural networks (RNNs) allow only limited parallelization during training, (Vaswani et al., 2017) propose the transformer architecture, which is highly parallelizable as well as better at translating long sentences. In the transformer, instead of using GRU or LSTM units, a word attends to the other words in a sentence using the self-attention mechanism as follows:", "cite_spans": [ { "start": 107, "end": 129, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "Attention(Q, K, V) = Softmax(QK^T / \u221ad) V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "(1) where K (keys), Q (queries), and V (values) represent the hidden states of the tokens in the input sentence from the encoder or the decoder, and d is the size of the input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" },
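As an illustration of Equation 1, a minimal NumPy sketch of scaled dot-product attention; the shapes and variable names are our own, and this is not the NMTGMinor implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row-wise max for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention of Equation 1.
    Q: (n_q, d) queries, K: (n_k, d) keys, V: (n_k, d_v) values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (n_q, n_k) similarities
    return softmax(scores, axis=-1) @ V  # attention-weighted sum of values

# Self-attention: queries, keys, and values all come from the same
# sentence's hidden states (here, 5 tokens with model size 512).
H = np.random.randn(5, 512)
out = attention(H, H, H)  # shape (5, 512)
```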
{ "text": "The attention mechanism in the transformer is a variant of the original attention (Luong et al., 2015a; Bahdanau et al., 2015): in Equation 1, the queries are replaced by the decoder's hidden states, while the keys and values come from the encoder's hidden states.", "cite_spans": [ { "start": 84, "end": 105, "text": "(Luong et al., 2015a;", "ref_id": "BIBREF7" }, { "start": 106, "end": 128, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "The NMT system is trained to optimize its parameters \u03b8 by maximizing the log-likelihood over all sentence pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\u03b8) = (1/T) \u2211_{k=1}^{T} log P(Y_k | X_k; \u03b8)", "eq_num": "(2)" } ], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "where T is the number of sentence pairs in the bilingual corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "3 The strategies to improve our MT system", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "As mentioned in Section 1, in this paper we utilize the TF-IDF method (Salton and Yang, 1973) to extract a subset of data from large datasets. In this method, TF is the term frequency, i.e., the ratio between the number of times a term (a word or a sub-word) appears in a sentence and the total number of terms in the sentence. IDF is the inverse document frequency, i.e., the ratio between the total number of documents and the number of documents containing the term. Thus, given an in-domain corpus D containing T_D sentence pairs, the TF-IDF score of a token w in a sentence s from the general domain G is evaluated as:", "cite_spans": [ { "start": 71, "end": 94, "text": "(Salton and Yang, 1973)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Data selection", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score_w = TF-IDF_w = (F_w^G / W_s^G) \u00b7 (T_D / K_w^D)", "eq_num": "(3)" } ], "section": "Data selection", "sec_num": "3.1" },
{ "text": "where F_w^G is the frequency of w in s, W_s^G is the length of s, and K_w^D is the number of sentences in D that contain w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data selection", "sec_num": "3.1" }, { "text": "The score of a sentence s \u2208 G is calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data selection", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score_s = \u2211_{i=1}^{W_s^G} score_{w_i}", "eq_num": "(4)" } ], "section": "Data selection", "sec_num": "3.1" }, { "text": "These scores are then used to rank the sentences in the corpus G. The sentence with the highest score is the closest to the background corpus, and vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data selection", "sec_num": "3.1" }, { "text": "Our work employs this technique to extract both bilingual and monolingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data selection", "sec_num": "3.1" },
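A minimal sketch of the scoring in Equations (3) and (4), assuming sentences are given as lists of tokens; the add-one term in the denominator guards against tokens that never occur in D and is our assumption, since the text does not specify that case:

```python
from collections import Counter

def tfidf_scores(general_corpus, in_domain_corpus):
    """Score each sentence of the general corpus G against the in-domain
    corpus D following Equations (3) and (4)."""
    T_D = len(in_domain_corpus)
    # K_w^D: number of in-domain sentences containing the token w.
    K = Counter()
    for sent in in_domain_corpus:
        K.update(set(sent))

    scores = []
    for sent in general_corpus:
        F = Counter(sent)      # F_w^G: frequency of w in s
        W = len(sent)          # W_s^G: length of s
        score_s = sum((F[w] / W) * (T_D / (K[w] + 1)) for w in sent)
        scores.append(score_s)
    return scores

# Rank the general-domain sentences by score, highest first:
# ranked = sorted(zip(scores, general_corpus), key=lambda p: p[0], reverse=True)
```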
{ "text": "In order to improve the translation system from the source language X to the target language Y, (Sennrich et al., 2015) train a backward translation system from Y \u2192 X, which is then used to translate monolingual data in the language Y into hypotheses in the language X. This yields synthetic bilingual data, which is then mixed with the original bilingual data to augment the training corpus. This technique is called Back Translation (BT).", "cite_spans": [ { "start": 97, "end": 120, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Back Translation", "sec_num": "3.2" }, { "text": "Our paper applies BT to generate pseudo-parallel English-Vietnamese data in the limited-bilingual-data scenario. In practice, monolingual data is widely available, but inference in NMT takes a long time, so we leverage the data selection method of Section 3.1 to filter the monolingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Back Translation", "sec_num": "3.2" },
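A minimal sketch of this back-translation data flow; `vi2en_translate` is a hypothetical decode function standing in for the trained backward Vietnamese-to-English system:

```python
def back_translate(target_monolingual, vi2en_translate):
    """Turn Vietnamese monolingual sentences into synthetic
    (English, Vietnamese) training pairs via back translation."""
    synthetic_pairs = []
    for vi_sentence in target_monolingual:
        # The backward system produces the synthetic English source side.
        en_hypothesis = vi2en_translate(vi_sentence)
        synthetic_pairs.append((en_hypothesis, vi_sentence))
    return synthetic_pairs

# The synthetic pairs are then mixed with the genuine bilingual data:
# training_data = genuine_pairs + back_translate(filtered_monolingual, vi2en)
```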
{ "text": "NMT systems are trained on a large corpus and then continuously fine-tuned on the in-domain corpus to achieve better performance. We train the NMT system on the mixed datasets from various domains, and then fine-tune it on a smaller corpus extracted from the original generic corpus using the strategy of Section 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning", "sec_num": "3.3" }, { "text": "The outputs of several NMT models can be combined to predict better hypotheses. We call this ensemble translation. The combination vector is simply the maximum, the minimum, or the (optionally normalized) average of the probabilities of the output vectors. In this work, we examine the mean of the probabilities from three models and find only a trivial improvement compared to an individual model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ensemble Translation", "sec_num": "3.4" },
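A minimal sketch of the combination step described above for a single decoding position; in practice this runs inside beam search, and the actual NMTGMinor implementation may differ:

```python
import numpy as np

def ensemble_step(step_distributions, mode="mean"):
    """Combine next-token probability vectors from several models at one
    decoding step; `step_distributions` has shape (n_models, vocab_size)."""
    p = np.asarray(step_distributions)
    if mode == "mean":
        combined = p.mean(axis=0)
    elif mode == "max":
        combined = p.max(axis=0)
    else:  # "min"
        combined = p.min(axis=0)
    return combined / combined.sum()  # renormalize to a distribution

# Example: three models scoring a toy vocabulary of four tokens.
probs = [[0.70, 0.10, 0.10, 0.10],
         [0.60, 0.20, 0.10, 0.10],
         [0.50, 0.30, 0.10, 0.10]]
next_token = int(np.argmax(ensemble_step(probs)))  # greedy pick for brevity
```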
{ "text": "Our work employs only the datasets from the VLSP 2020 Shared Task for Machine Translation. They include six bilingual corpora in different domains and one Vietnamese monolingual corpus. This Shared Task focuses on translating the news domain. The bilingual datasets are described in Table 1.", "cite_spans": [], "ref_spans": [ { "start": 283, "end": 291, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "No. Domain Training dev test", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "No. Domains", "sec_num": null }, { "text": "(1) News (in-domain): 20K training, 1007 dev, 1220 test; (2) Basic: 8.8K; (3) EVBcorpus: 45K; (4) TED-like: 546K; (5) Wiki-ALT: 20K; (6) Open subtitle: 3.5M. We use the five datasets (1) to (5) for the training experiments; the Open subtitle corpus is used only for learning sub-word units in English. The Vietnamese monolingual corpus, which includes 20M sentences, is exploited for the back translation.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 115, "text": "(in-domain) 20K 1007 1220 2 Basic 8.8K --3 EVBcorpus 45K --4 TED-like 546K --5 Wiki-ALT 20K --6", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "No. Domains", "sec_num": null }, { "text": "We first tokenized and true-cased the English texts using the Moses scripts. Next, we concatenated all six bilingual corpora to learn 40,000 Byte Pair Encoding (BPE) merge operations as in (Sennrich et al., 2016). Lastly, the learned BPE codes were applied to the tokenized and true-cased texts.", "cite_spans": [ { "start": 177, "end": 200, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.2" }, { "text": "The Vietnamese texts were tokenized and true-cased using the Moses scripts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.2" },
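A minimal sketch of the BPE step, assuming the `subword-nmt` package (the reference implementation of Sennrich et al., 2016); the file names are placeholders:

```python
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# Learn 40,000 merge operations on the concatenated bilingual corpora.
with open("all_corpora.tok.tc.en") as infile, open("bpe.codes", "w") as outfile:
    learn_bpe(infile, outfile, num_symbols=40000)

# Apply the learned codes to the tokenized, true-cased training text.
with open("bpe.codes") as codes_file:
    bpe = BPE(codes_file)
with open("train.tok.tc.en") as src, open("train.bpe.en", "w") as out:
    for line in src:
        out.write(bpe.process_line(line))
```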
{ "text": "We conduct our experiments using the source code from NMTGMinor 1 . Our NMT systems use four layers for both the encoder and the decoder, and the embedding and hidden sizes are 512. The systems are trained with a mini-batch size of 64 sentence pairs (except the baseline system, which uses 32 sentence pairs). The vocabulary sizes are 50K tokens for both the source and target sides. We use dropout with a probability of 0.2 for the embedding and attention layers. The Adam optimizer is applied for updating the parameters, with an initial learning rate of 1.0. A beam size of 10 is employed for decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems and Training", "sec_num": "4.3" }, { "text": "We train our NMT systems for 50 epochs, and they are then fine-tuned on the extracted and in-domain corpora to enhance accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems and Training", "sec_num": "4.3" }, { "text": "We present empirical results in two measures: BLEU (Papineni et al., 2002) and Translation Error Rate (TER) (Snover et al., 2006). Both are implemented in sacreBLEU 2 (https://github.com/mjpost/sacrebleu). Higher BLEU scores indicate better translations, while lower TER scores indicate better ones. Table 2 shows our experimental results. In-domain system (baseline) We train the baseline system on the News corpus. We learn 10K BPE merge operations and apply them to the English texts.", "cite_spans": [ { "start": 51, "end": 74, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF11" }, { "start": 109, "end": 130, "text": "(Snover et al., 2006)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 280, "end": 287, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4.4" }, { "text": "News + 4 corpora We find that the Open subtitle corpus contains sentences that are not in the news domain. Therefore, we combine the background corpus with only the four remaining corpora. This yields improvements of +14.13 BLEU points and -0.259 TER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.4" }, { "text": "+ Back Translation We rank the sentences of the Vietnamese monolingual corpus using the data selection method of Section 3.1, and then extract the top 200K sentences from the ranked text. We employ the backward translation system from Vietnamese \u2192 English to generate synthetic bilingual data. The synthetic data is then concatenated with the corpus of system (2) for retraining. We obtain +15.31 BLEU and -0.295 TER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.4" }, { "text": "+ Fine-tuning on ranked corpus We again rank the four parallel corpora (2) to (5) in Table 1 using the TF-IDF method of Section 3.1, and extract the top 200K sentence pairs. The extracted data is combined with the background corpus to continue fine-tuning system (3) with an initial learning rate of 0.5. The improvements are +16.21 BLEU and -0.297 TER.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "4.4" }, { "text": "+ Fine-tuning on News domain We continue to fine-tune system (4) on the in-domain corpus with an initial learning rate of 0.25 to obtain the best performance: +16.57 BLEU and -0.3 TER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.4" }, { "text": "+ Ensemble translation We combine the outputs of the three best models from system (5) using the method described in Section 3.4. We observe that the system does not improve.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.4" }, { "text": "NMT systems are limited in domain translation; therefore, previous works have proposed a variety of data selection techniques to retrieve the sentences most related to a specific domain. (Axelrod et al., 2011; van der Wees et al., 2017) leverage language models to estimate the cross-entropy difference (CED) (Moore and Lewis, 2010) for sentences from a generic domain. (Wang et al., 2017; Zhang and Xiong, 2018) employ the embedding vectors in the source space of NMT systems to rank sentences. (Wang et al., 2018; Zhang and Xiong, 2018) suggest a dynamic selection based on the translation probability to classify sentences during the training process. (Peris et al., 2016) train a neural network to separate sentences into individual domains. These methods are quite complex because they require training neural networks or language models. (Silva et al., 2018) conducted experiments on CED, TF-IDF, and FDA, and observed that the TF-IDF strategy is very fast and effective for data selection. In this work, we investigate this method again on the English-Vietnamese translation task.", "cite_spans": [ { "start": 199, "end": 221, "text": "(Axelrod et al., 2011;", "ref_id": "BIBREF0" }, { "start": 222, "end": 248, "text": "van der Wees et al., 2017)", "ref_id": "BIBREF24" }, { "start": 418, "end": 437, "text": "(Wang et al., 2017;", "ref_id": "BIBREF22" }, { "start": 438, "end": 460, "text": "Zhang and Xiong, 2018)", "ref_id": "BIBREF25" }, { "start": 548, "end": 567, "text": "(Wang et al., 2018;", "ref_id": "BIBREF23" }, { "start": 568, "end": 590, "text": "Zhang and Xiong, 2018)", "ref_id": "BIBREF25" }, { "start": 705, "end": 724, "text": "(Peris et al., 2016", "ref_id": "BIBREF12" }, { "start": 895, "end": 915, "text": "(Silva et al., 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Due to the lack of bilingual data, some prior studies have exploited monolingual data in different ways. (Sennrich et al., 2015) proposed the BT method, which uses monolingual data from the target language. (Ha et al., 2017) presented the mix-source technique, which creates synthetic data by making a copy of the target language. (Lample et al., 2018) used monolingual data for unsupervised NMT. (Siddhant et al., 2020; Ngo et al., 2020) investigated monolingual data in multilingual NMT. Our work also applies the BT method to enhance our NMT system in this data-sparse setting.", "cite_spans": [ { "start": 100, "end": 123, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF15" }, { "start": 195, "end": 212, "text": "(Ha et al., 2017)", "ref_id": "BIBREF3" }, { "start": 310, "end": 331, "text": "(Lample et al., 2018)", "ref_id": "BIBREF5" }, { "start": 376, "end": 399, "text": "(Siddhant et al., 2020;", "ref_id": "BIBREF17" }, { "start": 400, "end": 417, "text": "Ngo et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "To gain the best performance on the background domain, (Luong and Manning, 2015; Zoph et al., 2016) demonstrate the effectiveness of transferring knowledge from a parent model to a child model via the fine-tuning technique. We also apply this approach to our NMT system to achieve further improvements. Besides, we estimate the quality of the system when using the ensemble translation of (Luong and Manning, 2015). 6 Conclusion and Future Work", "cite_spans": [ { "start": 55, "end": 80, "text": "(Luong and Manning, 2015;", "ref_id": "BIBREF6" }, { "start": 81, "end": 99, "text": "Zoph et al., 2016)", "ref_id": "BIBREF26" }, { "start": 406, "end": 431, "text": "(Luong and Manning, 2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Our NMT systems have achieved significant improvements by integrating simple techniques such as data selection, BT, and fine-tuning. In the future, we will leverage more data from other resources as well as use pre-trained models to improve the translation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We would like to thank the organizers and sponsors of the VLSP 2020. We also thank the reviewers, who reviewed our paper carefully and gave us helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" }, { "text": "https://github.com/quanpn90/NMTGMinor", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Domain adaptation via pseudo in-domain data selection", "authors": [ { "first": "Amittai", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "355--362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355-362, Edinburgh, Scotland, UK. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "van Merrienboer", "suffix": "" }, { "first": "\u00c7aglar", "middle": [], "last": "G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7aglar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Effective Strategies in Zero-Shot Neural Machine Translation", "authors": [ { "first": "Thanh-Le", "middle": [], "last": "Ha", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Niehues", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2017. Effective Strategies in Zero-Shot Neural Machine Translation.", "links": null },
"BIBREF4": { "ref_id": "b4", "title": "Goals, challenges and findings of the VLSP 2020 English-Vietnamese news translation shared task", "authors": [ { "first": "Thanh-Le", "middle": [], "last": "Ha", "suffix": "" }, { "first": "Van-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Kim-Anh", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2020, "venue": "VLSP 2020", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thanh-Le Ha, Van-Khanh Tran, and Kim-Anh Nguyen. 2020. Goals, challenges and findings of the VLSP 2020 English-Vietnamese news translation shared task. In VLSP 2020.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unsupervised machine translation using monolingual corpora only", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Stanford neural machine translation systems for spoken language domain", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domain. In International Workshop on Spoken Language Translation, Da Nang, Vietnam.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.", "links": null },
"BIBREF8": { "ref_id": "b8", "title": "Addressing the rare word problem in neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "11--19", "other_ids": { "DOI": [ "10.3115/v1/P15-1002" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11-19, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Intelligent selection of language model training data", "authors": [ { "first": "Robert", "middle": [ "C" ], "last": "Moore", "suffix": "" }, { "first": "William", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 Conference Short Papers", "volume": "", "issue": "", "pages": "220--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers, pages 220-224, Uppsala, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improving multilingual neural machine translation for low-resource languages: French, English-Vietnamese", "authors": [ { "first": "Thi-Vinh", "middle": [], "last": "Ngo", "suffix": "" }, { "first": "Phuong-Thai", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Thanh-Le", "middle": [], "last": "Ha", "suffix": "" }, { "first": "Khac-Quy", "middle": [], "last": "Dinh", "suffix": "" }, { "first": "Le-Minh", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "55--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thi-Vinh Ngo, Phuong-Thai Nguyen, Thanh-Le Ha, Khac-Quy Dinh, and Le-Minh Nguyen. 2020. Improving multilingual neural machine translation for low-resource languages: French, English-Vietnamese. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 55-61.", "links": null },
"BIBREF11": { "ref_id": "b11", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Neural networks classifier for data selection in statistical machine translation", "authors": [ { "first": "Alvaro", "middle": [], "last": "Peris", "suffix": "" }, { "first": "Mara", "middle": [], "last": "Chinea-Rios", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Casacuberta", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alvaro Peris, Mara Chinea-Rios, and Francisco Casacuberta. 2016. Neural networks classifier for data selection in statistical machine translation. CoRR, abs/1612.05555.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Extending feature decay algorithms using alignment entropy", "authors": [ { "first": "Alberto", "middle": [], "last": "Poncelas", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "170--182", "other_ids": { "DOI": [ "10.1007/978-3-319-69365-1_14" ] }, "num": null, "urls": [], "raw_text": "Alberto Poncelas, Andy Way, and Antonio Toral. 2017. Extending feature decay algorithms using alignment entropy. pages 170-182.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "On the specification of term values in automatic indexing", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "C", "middle": [ "S" ], "last": "Yang", "suffix": "" } ], "year": 1973, "venue": "Journal of Documentation", "volume": "29", "issue": "4", "pages": "351--372", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Salton and C. S. Yang. 1973. On the specification of term values in automatic indexing. Journal of Documentation, 29(4):351-372.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. CoRR, abs/1511.06709.", "links": null },
"BIBREF16": { "ref_id": "b16", "title": "Neural Machine Translation of Rare Words with Subword Units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Association for Computational Linguistics (ACL 2016).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Leveraging monolingual data with self-supervision for multilingual neural machine translation", "authors": [ { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Mia", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sneha", "middle": [], "last": "Kudugunta", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Siddhant, Ankur Bapna, Yuan Cao, Orhan Firat, Mia Chen, Sneha Kudugunta, Naveen Arivazhagan, and Yonghui Wu. 2020. Leveraging monolingual data with self-supervision for multilingual neural machine translation.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Extracting in-domain training corpora for neural machine translation using data selection methods", "authors": [ { "first": "Catarina", "middle": [ "Cruz" ], "last": "Silva", "suffix": "" }, { "first": "Chao-Hong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Poncelas", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "224--231", "other_ids": { "DOI": [ "10.18653/v1/W18-6323" ] }, "num": null, "urls": [], "raw_text": "Catarina Cruz Silva, Chao-Hong Liu, Alberto Poncelas, and Andy Way. 2018. Extracting in-domain training corpora for neural machine translation using data selection methods. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 224-231, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "Matthew", "middle": [], "last": "Snover", "suffix": "" }, { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "L", "middle": [], "last": "Micciulla", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Snover, Bonnie J. Dorr, R. Schwartz, and L. Micciulla. 2006. A study of translation edit rate with targeted human annotation.", "links": null },
"BIBREF20": { "ref_id": "b20", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sentence embedding for neural machine translation domain adaptation", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Finch", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "560--566", "other_ids": { "DOI": [ "10.18653/v1/P17-2089" ] }, "num": null, "urls": [], "raw_text": "Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017. Sentence embedding for neural machine translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 560-566, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Dynamic sentence sampling for efficient training of neural machine translation", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "298--304", "other_ids": { "DOI": [ "10.18653/v1/P18-2048" ] }, "num": null, "urls": [], "raw_text": "Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2018. Dynamic sentence sampling for efficient training of neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 298-304, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Dynamic data selection for neural machine translation", "authors": [ { "first": "Marlies", "middle": [], "last": "van der Wees", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1400--1410", "other_ids": { "DOI": [ "10.18653/v1/D17-1147" ] }, "num": null, "urls": [], "raw_text": "Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400-1410, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null },
"BIBREF25": { "ref_id": "b25", "title": "Sentence weighting for neural machine translation domain adaptation", "authors": [ { "first": "Shiqi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3181--3190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shiqi Zhang and Deyi Xiong. 2018. Sentence weighting for neural machine translation domain adaptation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3181-3190, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Transfer learning for low-resource neural machine translation", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1568--1575", "other_ids": { "DOI": [ "10.18653/v1/D16-1163" ] }, "num": null, "urls": [], "raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "num": null, "content": "
The English-Vietnamese parallel datasets used in our work
", "html": null, "text": "" }, "TABREF2": { "type_str": "table", "num": null, "content": "", "html": null, "text": "The results of our English \u2192 Vietnamese MT systems are measured in BLEU and TER scores." } } } }