{ "paper_id": "I17-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:02.859350Z" }, "title": "Improving Sequence to Sequence Neural Machine Translation by Utilizing Syntactic Dependency Information", "authors": [ { "first": "An", "middle": [ "Nguyen" ], "last": "Le", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ander", "middle": [], "last": "Martinez", "suffix": "", "affiliation": {}, "email": "ander.martinez.zy4@is.naist.jp" }, { "first": "Akifumi", "middle": [], "last": "Yoshimoto", "suffix": "", "affiliation": { "laboratory": "This author's present affiliation is CyberAgent, Inc", "institution": "", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "akifumi-y@is.naist.jp" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sequence to Sequence Neural Machine Translation has achieved significant performance in recent years. Yet, there are some existing issues that Neural Machine Translation still does not solve completely. Two of them are translation of long sentences and \"over-translation\". To address these two problems, we propose an approach that utilize more grammatical information such as syntactic dependencies, so that the output can be generated based on more abundant information. In addition, the output of the model is presented not as a simple sequence of tokens but as a linearized tree construction. Experiments on the Europarl-v7 dataset of French-to-English translation demonstrate that our proposed method improves BLEU scores by 1.57 and 2.40 on datasets consisting of sentences with up to 50 and 80 tokens, respectively. Furthermore, the proposed method also solved the two existing problems, ineffective translation of long sentences and over-translation in Neural Machine Translation.", "pdf_parse": { "paper_id": "I17-1003", "_pdf_hash": "", "abstract": [ { "text": "Sequence to Sequence Neural Machine Translation has achieved significant performance in recent years. Yet, there are some existing issues that Neural Machine Translation still does not solve completely. Two of them are translation of long sentences and \"over-translation\". To address these two problems, we propose an approach that utilize more grammatical information such as syntactic dependencies, so that the output can be generated based on more abundant information. In addition, the output of the model is presented not as a simple sequence of tokens but as a linearized tree construction. Experiments on the Europarl-v7 dataset of French-to-English translation demonstrate that our proposed method improves BLEU scores by 1.57 and 2.40 on datasets consisting of sentences with up to 50 and 80 tokens, respectively. Furthermore, the proposed method also solved the two existing problems, ineffective translation of long sentences and over-translation in Neural Machine Translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Our task is to construct a model which learns input in sequence form and decodes output as a linearized dependency tree. In this work, we propose an approach in which dependency labels are incorporated into the model to represent more grammatical information in the output sequence. 
As we know, the Sequence to Sequence (Seq2Seq) learning model (Aharoni et al., 2016) is extremely effective on a variety of tasks that require a mapping from one sequence to another. Therefore, it is used to solve many tasks in natural language processing. The Seq2Seq model consists of an encoder-decoder neural network that encodes a variable-length input sequence into a vector and decodes it into a variable-length output. Since the model uses the source representation and the previously generated words to produce the next token, this distributed representation allows the Seq2Seq model to generate an appropriate mapping between the input and the output (Li et al., 2016). For the translation task, the Neural Machine Translation (NMT) model, which is based on Seq2Seq learning, has achieved excellent performance in recent years (Bahdanau et al., 2015; Luong et al., 2015; Firat et al., 2016). In particular, the NMT model built upon an encoder-decoder framework with an attention mechanism (Bahdanau et al., 2015) can attend over the source sentence, so its decoder knows which part of the input is relevant to the word that is currently being translated. Therefore, it has shown competitive results and outperformed conventional statistical methods (Bentivogli et al., 2016). Despite these advantages, the NMT model still has a number of issues to be solved, such as a fixed vocabulary, poor applicability to small data, additional phrases, wrong lexical choice errors, long-sentence translation, and over- and under-translation. In this paper, we touch upon the following two major problems:", "cite_spans": [ { "start": 345, "end": 366, "text": "Aharoni et al., 2016)", "ref_id": "BIBREF1" }, { "start": 968, "end": 985, "text": "(Li et al., 2016)", "ref_id": "BIBREF5" }, { "start": 1151, "end": 1173, "text": "Bahdanau et al., 2015;", "ref_id": "BIBREF2" }, { "start": 1174, "end": 1193, "text": "Luong et al., 2015;", "ref_id": "BIBREF14" }, { "start": 1194, "end": 1213, "text": "Firat et al., 2016)", "ref_id": "BIBREF8" }, { "start": 1319, "end": 1342, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF2" }, { "start": 1567, "end": 1592, "text": "(Bentivogli et al., 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Translation of long sentences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Over-translation

Since the decoder of the Seq2Seq model produces the target language word by word, based simply on the previous target words and the source-side representation vector, until it reaches the special end token, it is incapable of capturing long-distance dependencies in the history, and is therefore ineffective for long-sentence translation (Toral and S\u00e1nchez-Cartagena, 2017). Even with an attention mechanism, the Seq2Seq model only pays attention to the current alignment information between the input and the output at the current position, but ignores past alignment information. Therefore, it cannot keep track of the attention history when it updates information at each time step, leading to over-translation (Tu et al., 2016a,c; Mi et al., 2016; Tu et al., 2016b).", "cite_spans": [ { "start": 337, "end": 371, "text": "Toral and S\u00e1nchez-Cartagena, 2017)", "ref_id": "BIBREF20" }, { "start": 725, "end": 745, "text": "(Tu et al., 2016a,c;", "ref_id": null }, { "start": 746, "end": 762, "text": "Mi et al., 2016;", "ref_id": "BIBREF15" }, { "start": 763, "end": 780, "text": "Tu et al., 2016b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
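{ "text": "To make the mechanism under discussion concrete, the following is a minimal NumPy sketch of a single decoder step with additive (Bahdanau-style) attention. The weight names, dimensions, and random values are hypothetical placeholders rather than the configuration used in this paper; the sketch only illustrates that the context vector is built from the alignment between the current decoder state and the encoder states, with no record of past alignments.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(enc_states, dec_state, Wa, Ua, va):
    # Score every encoder state against the current decoder state,
    # normalize the scores, and build a context vector.
    scores = np.tanh(enc_states @ Ua + dec_state @ Wa) @ va   # (src_len,)
    alphas = softmax(scores)                                  # current alignment only
    context = alphas @ enc_states                             # (enc_dim,)
    return context, alphas

# Toy example: random 'encoder states' for a 5-token source sentence.
rng = np.random.default_rng(0)
enc_dim, dec_dim, att_dim, src_len = 8, 6, 4, 5
enc_states = rng.normal(size=(src_len, enc_dim))
dec_state = rng.normal(size=(dec_dim,))
Wa = rng.normal(size=(dec_dim, att_dim))
Ua = rng.normal(size=(enc_dim, att_dim))
va = rng.normal(size=(att_dim,))

context, alphas = attention_step(enc_states, dec_state, Wa, Ua, va)
print(alphas)  # only the alignment for this step; past alignments are not tracked
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },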
{ "text": "To address the above two issues, we consider that using syntactic dependency information and representing the output as a tree structure would be effective. This approach allows the next token to be generated based not only on the previous tokens but also on the syntactic dependencies produced so far, thereby conditioning the prediction on more abundant information and enabling smarter predictions. In this paper, we therefore train an encoder-decoder neural network with dependencies, in which the source-language input is given in sequence form and the target-language output is generated as a linearized dependency-based tree structure. That is, instead of predicting only a word at each time step, the network predicts both words and their grammatical dependencies, building a dependency tree. We expect this to improve the accuracy of the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
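{ "text": "As a concrete illustration of what a linearized dependency tree looks like, the following self-contained Python sketch linearizes a toy dependency parse in depth-first preorder. The bracket tokens and the exact interleaving of words and dependency labels are illustrative assumptions made here for exposition; the linearization scheme actually used by the model is defined in the following sections.

from collections import defaultdict

def linearize(words, heads, labels):
    # words:  list of tokens
    # heads:  1-based head index for each token (0 denotes the root)
    # labels: dependency label of each token with respect to its head
    children = defaultdict(list)
    for i, h in enumerate(heads, start=1):
        children[h].append(i)

    def visit(i):
        # Emit the word and its dependency label, then recurse on its children.
        out = ['(', words[i - 1], labels[i - 1]]
        for c in children[i]:
            out.extend(visit(c))
        out.append(')')
        return out

    root = children[0][0]
    return visit(root)

# Toy example: 'economic news had little effect'
words = ['economic', 'news', 'had', 'little', 'effect']
heads = [2, 3, 0, 5, 3]
labels = ['amod', 'nsubj', 'root', 'amod', 'dobj']
print(' '.join(linearize(words, heads, labels)))
# ( had root ( news nsubj ( economic amod ) ) ( effect dobj ( little amod ) ) )
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },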
{ "text": "The major contributions of this work are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. To utilize the information of both \"head\" words and the syntactic dependencies between them to produce better output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. To address two remaining problems in the NMT task: the ineffective translation of long sentences and over-translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Empirically, to assess the performance of the proposed method, we used the conditional Gated Recurrent Unit with attention mechanism of Bahdanau et al. (2015) on the French-English portion of the Europarl-v7 dataset. As a result, the BLEU score is improved by 1.57 and 2.40 points for sentences of length up to 50 and 80 tokens, respectively. We also compare and analyze the results of the attention-based Seq2Seq model and the proposed approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In fact, the effectiveness of using the dependency information of words has been reported for several previous NLP tasks, for example, dependency-based word embeddings, relation classification, and sentence classification (Liu et al., 2015; Socher et al., 2014; Levy and Goldberg, 2014; Komnios, 2016; Ono and Hatano, 2014). It has been shown that combining words with their dependency information can boost performance. Besides, Vinyals et al. also represent the output as a linearized tree structure, but their work focuses on showing that generic sequence-to-sequence approaches can achieve excellent results on syntactic constituency parsing. At a glance, our proposed method is similar to the works of Dyer et al., Aharoni et al., Eriguchi et al., and Wu et al. (Dyer et al., 2016; Aharoni and Goldberg, 2017; Eriguchi et al., 2017; Wu et al., 2017) in its use of parse trees in generation. However, the works of Dyer et al. and Aharoni et al. concern predicting constituent trees. Eriguchi et al.'s model employs syntactic dependency parsing, but it hybridizes the NMT decoder with Recurrent Neural Network Grammars, and the target sentences are parsed with transition-based parsing. Wu et al.'s model also employs dependency parsing, but it separately predicts the target translation sequence and a parsing action sequence that is mapped to the translation. In contrast, our proposed model's decoder directly predicts the linearized dependency tree itself within a single neural network, in depth-first preorder, so that the next token is generated based on the syntactic relations and the tree structure built so far. In other words, our model learns to produce a tree of words and their dependency relations by itself.", "cite_spans": [ { "start": 221, "end": 239, "text": "(Liu et al., 2015;", "ref_id": "BIBREF13" }, { "start": 240, "end": 260, "text": "Socher et al., 2014;", "ref_id": "BIBREF18" }, { "start": 261, "end": 285, "text": "Levy and Goldberg, 2014;", "ref_id": "BIBREF11" }, { "start": 286, "end": 300, "text": "Komnios, 2016;", "ref_id": "BIBREF9" }, { "start": 301, "end": 322, "text": "Ono and Hatano, 2014)", "ref_id": "BIBREF16" }, { "start": 743, "end": 805, "text": "Aharoni et al., Eriguchi et al., Wu et al. (Dyer et al., 2016;", "ref_id": null }, { "start": 806, "end": 833, "text": "Aharoni and Goldberg, 2017;", "ref_id": "BIBREF0" }, { "start": 834, "end": 856, "text": "Eriguchi et al., 2017;", "ref_id": "BIBREF7" }, { "start": 857, "end": 873, "text": "Wu et al., 2017)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In our proposed approach, the neural network model is trained to map the source-side input, given as a sequence, to a target-side output in the form of a linearized dependency tree. Thus, we call this model the Sequence-to-Dependency (Seq2Dep) model. The problem is defined as follows: given a source sequence X = (x_1, x_2, ..., x_N) of length N, we want the model to encode the input sequence X and decode it into a tree structure containing both words and dependency information, conditioned on the encoded vector. Therefore, the output is represented in the form LY = (ly_1, ly_2, ..., ly_M). The conditional probability p(ly|x) is decomposed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-to-Dependency Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(ly|x) = \u221e i=1 p(ly i |ly