{ "paper_id": "Q18-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:10:58.786304Z" }, "title": "Scheduled Multi-Task Learning: From Syntax to Translation", "authors": [ { "first": "Eliyahu", "middle": [], "last": "Kiperwasser", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "", "affiliation": {}, "email": "miguel.ballesteros@ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural encoder-decoder models of machine translation have achieved impressive results, while learning linguistic knowledge of both the source and target languages in an implicit end-to-end manner. We propose a framework in which our model begins learning syntax and translation interleaved, gradually putting more focus on translation. Using this approach, we achieve considerable improvements in terms of BLEU score on relatively large parallel corpus (WMT14 English to German) and a lowresource (WIT German to English) setup. * Work carried out during summer internship at IBM Research.", "pdf_parse": { "paper_id": "Q18-1017", "_pdf_hash": "", "abstract": [ { "text": "Neural encoder-decoder models of machine translation have achieved impressive results, while learning linguistic knowledge of both the source and target languages in an implicit end-to-end manner. We propose a framework in which our model begins learning syntax and translation interleaved, gradually putting more focus on translation. Using this approach, we achieve considerable improvements in terms of BLEU score on relatively large parallel corpus (WMT14 English to German) and a lowresource (WIT German to English) setup. * Work carried out during summer internship at IBM Research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently become the stateof-the-art approach to machine translation (Bojar et al., 2016) . One of the main advantages of neural approaches is the impressive ability of RNNs to act as feature extractors over the entire input (Kiperwasser and Goldberg, 2016) , rather than focusing on local information. Neural architectures are able to extract linguistic properties from the input sentence in the form of morphology (Belinkov et al., 2017) or syntax (Linzen et al., 2016) .", "cite_spans": [ { "start": 33, "end": 65, "text": "(Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF27" }, { "start": 66, "end": 89, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF53" }, { "start": 90, "end": 112, "text": "Bahdanau et al., 2014)", "ref_id": "BIBREF4" }, { "start": 185, "end": 205, "text": "(Bojar et al., 2016)", "ref_id": "BIBREF10" }, { "start": 341, "end": 373, "text": "(Kiperwasser and Goldberg, 2016)", "ref_id": "BIBREF29" }, { "start": 532, "end": 555, "text": "(Belinkov et al., 2017)", "ref_id": "BIBREF7" }, { "start": 566, "end": 587, "text": "(Linzen et al., 2016)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Nonetheless, as shown in and Dyer (2017) , systems that ignore explicit linguistic structures are incorrectly biased and they tend to make overly strong linguistic generalizations. 
Providing explicit linguistic information (Kuncoro et al., 2017; Niehues and Cho, 2017; Eriguchi et al., 2017; Aharoni and Goldberg, 2017; Nadejde et al., 2017; Bastings et al., 2017; Matthews et al., 2018) has proven to be beneficial, achieving higher results in language modeling and machine translation.", "cite_spans": [ { "start": 29, "end": 40, "text": "Dyer (2017)", "ref_id": "BIBREF17" }, { "start": 223, "end": 244, "text": "Kuncoro et al., 2017;", "ref_id": "BIBREF31" }, { "start": 245, "end": 267, "text": "Niehues and Cho, 2017;", "ref_id": "BIBREF45" }, { "start": 268, "end": 290, "text": "Eriguchi et al., 2017;", "ref_id": "BIBREF19" }, { "start": 291, "end": 318, "text": "Aharoni and Goldberg, 2017;", "ref_id": "BIBREF1" }, { "start": 319, "end": 340, "text": "Nadejde et al., 2017;", "ref_id": "BIBREF43" }, { "start": 341, "end": 363, "text": "Bastings et al., 2017;", "ref_id": "BIBREF5" }, { "start": 364, "end": 386, "text": "Matthews et al., 2018)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multi-task learning (MTL) consists of solving synergistic tasks with a single model by jointly training multiple tasks that resemble each other. The final dense representations of the neural architectures encode the different objectives, and they leverage the information from each task to help the others. For example, tasks like multiword expression detection and part-of-speech tagging have been found to be very useful for others like combinatory categorial grammar (CCG) parsing, chunking and super-sense tagging (Bingel and S\u00f8gaard, 2017).", "cite_spans": [ { "start": 516, "end": 542, "text": "(Bingel and S\u00f8gaard, 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to perform accurate translations, we proceed by analogy to humans. It is desirable to acquire a deep understanding of the languages; and, once this is acquired, it is possible to learn how to translate gradually and with experience (including revisiting and re-learning some aspects of the languages). We propose a similar strategy by introducing the concept of Scheduled Multi-Task Learning (Section 4), in which the different tasks are interleaved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose to learn the structure of language (through syntactic parsing and part-of-speech tagging) with a multi-task learning strategy, with the intention of improving the performance of tasks like machine translation that use that structure and make generalizations. We achieve considerable improvements in terms of BLEU score on a relatively large parallel corpus (WMT14 English to German) and a low-resource (WIT German to English) setup. Our different scheduling strategies show interesting differences in performance both in the low-resource and standard setups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Neural Machine Translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2014) directly models the conditional probability p(y|x) of the target sequence of words y = <y_1, ..., y_T> given a source sequence x = <x_1, ..., x_S>. 
In this paper, we base our neural architecture on the same sequence to sequence with attention model; in the following we explain the details and describe the nuances of our architecture.", "cite_spans": [ { "start": 33, "end": 57, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF53" }, { "start": 58, "end": 80, "text": "Bahdanau et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence to Sequence with Attention", "sec_num": "2" }, { "text": "We use bidirectional LSTMs to encode the source sentences (Graves, 2012). Given a source sentence x = <x_1, ..., x_m>, we embed the words into vectors through an embedding matrix W_S; the vector of the i-th word is W_S x_i. We get the representation of the i-th word by summarizing the information of neighboring words using bidirectional LSTMs (Bahdanau et al., 2014),", "cite_spans": [ { "start": 58, "end": 72, "text": "(Graves, 2012)", "ref_id": "BIBREF21" }, { "start": 354, "end": 377, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h^F_i = LSTM^F(h^F_{i-1}, W_S x_i)", "eq_num": "(1)" } ], "section": "Encoder", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h^B_i = LSTM^B(h^B_{i+1}, W_S x_i).", "eq_num": "(2)" } ], "section": "Encoder", "sec_num": "2.1" }, { "text": "The forward and backward representations are concatenated to get the bidirectional encoder representation of word i as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "2.1" }, { "text": "h_i = [h^F_i, h^B_i].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "2.1" }, { "text": "The decoder generates one target word per timestep; hence, we can decompose the conditional probability as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "log p(y|x) = j p(y j |y