{ "paper_id": "D09-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:38:55.655834Z" }, "title": "Feature-Rich Translation by Quasi-Synchronous Lattice Parsing", "authors": [ { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "kgimpel@cs.cmu.edu" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "nasmith@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a machine translation framework that can incorporate arbitrary features of both input and output sentences. The core of the approach is a novel decoder based on lattice parsing with quasisynchronous grammar (Smith and Eisner, 2006), a syntactic formalism that does not require source and target trees to be isomorphic. Using generic approximate dynamic programming techniques, this decoder can handle \"non-local\" features. Similar approximate inference techniques support efficient parameter estimation with hidden variables. We use the decoder to conduct controlled experiments on a German-to-English translation task, to compare lexical phrase, syntax, and combined models, and to measure effects of various restrictions on nonisomorphism.", "pdf_parse": { "paper_id": "D09-1023", "_pdf_hash": "", "abstract": [ { "text": "We present a machine translation framework that can incorporate arbitrary features of both input and output sentences. The core of the approach is a novel decoder based on lattice parsing with quasisynchronous grammar (Smith and Eisner, 2006), a syntactic formalism that does not require source and target trees to be isomorphic. Using generic approximate dynamic programming techniques, this decoder can handle \"non-local\" features. Similar approximate inference techniques support efficient parameter estimation with hidden variables. We use the decoder to conduct controlled experiments on a German-to-English translation task, to compare lexical phrase, syntax, and combined models, and to measure effects of various restrictions on nonisomorphism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We have seen rapid recent progress in machine translation through the use of rich features and the development of improved decoding algorithms, often based on grammatical formalisms. 1 If we view MT as a machine learning problem, features and formalisms imply structural independence assumptions, which are in turn exploited by efficient inference algorithms, including decoders (Koehn et al., 2003; Yamada and Knight, 2001) . Hence a tension is visible in the many recent research efforts aiming to decode with \"non-local\" features (Chiang, 2007; Huang and Chiang, 2007) . Lopez (2009) recently argued for a separation between features/formalisms (and the indepen-dence assumptions they imply) from inference algorithms in MT; this separation is widely appreciated in machine learning. 
Here we take first steps toward such a \"universal\" decoder, making the following contributions:", "cite_spans": [ { "start": 379, "end": 399, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF23" }, { "start": 400, "end": 424, "text": "Yamada and Knight, 2001)", "ref_id": "BIBREF43" }, { "start": 533, "end": 547, "text": "(Chiang, 2007;", "ref_id": "BIBREF9" }, { "start": 548, "end": 571, "text": "Huang and Chiang, 2007)", "ref_id": "BIBREF19" }, { "start": 574, "end": 586, "text": "Lopez (2009)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Arbitrary feature model ( \u00a72): We define a single, direct log-linear translation model (Papineni et al., 1997; Och and Ney, 2002) that encodes most popular MT features and can be used to encode any features on source and target sentences, dependency trees, and alignments. The trees are optional and can be easily removed, allowing simulation of \"string-to-tree,\" \"tree-to-string,\" \"tree-to-tree,\" and \"phrase-based\" models, among many others. We follow the widespread use of log-linear modeling for direct translation modeling; the novelty is in the use of richer feature sets than have been previously used in a single model.", "cite_spans": [ { "start": 87, "end": 110, "text": "(Papineni et al., 1997;", "ref_id": "BIBREF33" }, { "start": 111, "end": 129, "text": "Och and Ney, 2002)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Decoding as QG parsing ( \u00a73-4): We present a novel decoder based on lattice parsing with quasi-synchronous grammar (QG; Smith and Eisner, 2006) . 2 Further, we exploit generic approximate inference techniques to incorporate arbitrary \"non-local\" features in the dynamic programming algorithm (Chiang, 2007; Gimpel and Smith, 2009) .", "cite_spans": [ { "start": 119, "end": 142, "text": "Smith and Eisner, 2006)", "ref_id": "BIBREF38" }, { "start": 145, "end": 146, "text": "2", "ref_id": null }, { "start": 290, "end": 304, "text": "(Chiang, 2007;", "ref_id": "BIBREF9" }, { "start": 305, "end": 328, "text": "Gimpel and Smith, 2009)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Parameter estimation ( \u00a75): We exploit similar approximate inference methods in regularized pseudolikelihood estimation (Besag, 1975) with hidden variables to discriminatively and efficiently train our model. Because we start with inference (the key subroutine in training), many other learning algorithms are possible.", "cite_spans": [ { "start": 120, "end": 133, "text": "(Besag, 1975)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Table 1 (notation): \u03a3, T: source and target language vocabularies, respectively. Trans: \u03a3 \u222a {NULL} \u2192 2^T: function mapping each source word to the target words to which it may translate. s = s_0, . . . , s_n \u2208 \u03a3^n: source language sentence (s_0 is the NULL word). t = t_1, . . . , t_m \u2208 T^m: target language sentence, translation of s. \u03c4_s: {1, . . . , n} \u2192 {0, . . . , n}: dependency tree of s, where \u03c4_s(i) is the index of the parent of s_i (0 is the root, $). \u03c4_t: {1, . . . , m} \u2192 {0, . . . , m}: dependency tree of t, where \u03c4_t(i) is the index of the parent of t_i (0 is the root, $). a: {1, . . . , m} \u2192 2^{1,...,n}: alignments from words in t to words in s; \u2205 denotes alignment to NULL. \u03b8: parameters of the model. g_trans(s, a, t): lexical translation features (\u00a72.1), comprising f_lex(s, t), word-to-word translation features for translating s as t, and f_phr(s_i^j, t_k), phrase-to-phrase translation features for translating s_i^j as t_k. g_lm(t): language model features (\u00a72.2), comprising f_N(t_{j\u2212N+1}^j), N-gram probabilities. g_syn(t, \u03c4_t): target syntactic features (\u00a72.3), comprising f_att(t, j, t', k), syntactic features for attaching target word t' at position k to target word t at position j, and f_val(t, j, I), syntactic valence features with word t at position j having children I \u2286 {1, . . . , m}. g_reor(s, \u03c4_s, a, t, \u03c4_t): reordering features (\u00a72.4), comprising f_dist(i, j), distortion features for a source word at position i aligned to a target word at position j. g_tree2(\u03c4_s, a, \u03c4_t): tree-to-tree syntactic features (\u00a73), comprising f_qg(i, i', j, k), configuration features for the source pair s_i/s_i' being aligned to the target pair t_j/t_k. g_cov(a): coverage features (\u00a74.2), comprising f_scov(a), f_zth(a), and f_sunc(a), counters for \"covering\" each s word each time, the zth time, and leaving it \"uncovered\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Experimental platform ( \u00a76): The flexibility of our model/decoder permits carefully controlled experiments. We compare lexical phrase and dependency syntax features, as well as a novel combination of the two. We quantify the effects of our approximate inference. We explore the effects of various ways of restricting syntactic non-isomorphism between source and target trees through the QG. We do not report state-of-the-art performance, but these experiments reveal interesting trends that will inform continued research. (Table 1 explains notation.)", "cite_spans": [], "ref_spans": [ { "start": 423, "end": 440, "text": "(Table 1 explains", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
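As a rough, hypothetical illustration (not part of the original parse), the Table 1 notation consolidated above maps naturally onto simple data structures; the toy German-English sentence pair, trees, and alignment below are invented purely for illustration.

```python
# Hypothetical sketch (not from the paper): data structures mirroring Table 1.
# A dependency tree over k words maps each word index to its parent index
# (0 denotes the root, $), and an alignment maps each target position to a
# (possibly empty) set of source positions; the empty set means aligned to NULL.
from typing import Dict, List, Set

# source sentence s_0 ... s_n, with s_0 reserved for the NULL word
s: List[str] = ["<NULL>", "wir", "sehen", "das", "haus"]
# target sentence t_1 ... t_m (index 0 left unused to keep positions 1-based)
t: List[str] = ["", "we", "see", "the", "house"]

# dependency trees tau_s, tau_t: position -> parent position (0 is the root $)
tau_s: Dict[int, int] = {1: 2, 2: 0, 3: 4, 4: 2}
tau_t: Dict[int, int] = {1: 2, 2: 0, 3: 4, 4: 2}

# alignment a: target position -> set of source positions
a: Dict[int, Set[int]] = {1: {1}, 2: {2}, 3: {3}, 4: {4}}
```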
{ "text": "Given a sentence s and its parse tree \u03c4_s, we formulate the translation problem as finding the target sentence t* (along with its parse tree \u03c4_t* and alignment a* to the source tree) such that 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" },
{ "text": "t*, \u03c4_t*, a* = argmax_{t, \u03c4_t, a} p(t, \u03c4_t, a | s, \u03c4_s) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" },
{ "text": "In order to include overlapping features and permit hidden variables during training, we use a single globally-normalized conditional log-linear model. That is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" },
{ "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(t, \u03c4_t, a | s, \u03c4_s) = exp{\u03b8 g(s, \u03c4_s, a, t, \u03c4_t)} / \u03a3_{a', t', \u03c4_t'} exp{\u03b8 g(s, \u03c4_s, a', t', \u03c4_t')}", "eq_num": "(2)" } ], "section": "Model", "sec_num": "2" },
{ "text": "where the g are arbitrary feature functions and the \u03b8 are feature weights. If one or both parse trees or the word alignments are unavailable, they can be ignored or marginalized out as hidden variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" },
{ "text": "In a log-linear model over structured objects, the choice of feature functions g has a huge effect 3 on the feasibility of inference, including decoding. (Footnote 3: We assume in this work that s is parsed. In principle, we might include source-side parsing as part of decoding.) Typically these feature functions are chosen to factor into local parts of the overall structure. We next define some key features used in current MT systems, explaining how they factor. We will use subscripts on g to denote different groups of features, which may depend on subsets of the structures t, \u03c4_t, a, s, and \u03c4_s. When these features factor into parts, we will use f to denote the factored vectors, so that if x is an object that breaks into parts", "cite_spans": [ { "start": 99, "end": 100, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" },
{ "text": "{x_i}_i, then g(x) = \u03a3_i f(x_i). 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" },
{ "text": "Classical lexical translation features depend on s and t and the alignment a between them. The simplest are word-to-word features, estimated as the conditional probabilities p(t | s) and p(s | t) for s \u2208 \u03a3 and t \u2208 T. Phrase-to-phrase features generalize these, estimated as p(t' | s') and p(s' | t') where s' (respectively, t') is a substring of s (t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Translations", "sec_num": "2.1" },
{ "text": "A major difference between the phrase features used in this work and those used elsewhere is that we do not assume that phrases segment into disjoint parts of the source and target sentences 4 (Koehn et al., 2003) ; they can overlap. 5 Additionally, since phrase features can be any function of words and alignments, we permit features that consider phrase pairs in which a target word outside the target phrase aligns to a source word inside the source phrase, as well as phrase pairs with gaps (Chiang, 2005; Ittycheriah and Roukos, 2007) . (Footnote 4: There are two conventional definitions of feature functions. One is to let the range of these functions be conditional probability estimates (Och and Ney, 2002) . These estimates are usually heuristic and inconsistent (Koehn et al., 2003) . An alternative is to instantiate features for different structural patterns (Liang et al., 2006; . This offers more expressive power but may require much more training data to avoid overfitting. For this reason, and to keep training fast, we opt for the former convention, though our decoder can handle both, and the factorings we describe are agnostic about this choice.)", "cite_spans": [ { "start": 191, "end": 192, "text": "4", "ref_id": null }, { "start": 334, "end": 353, "text": "(Och and Ney, 2002)", "ref_id": "BIBREF30" }, { "start": 411, "end": 431, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF23" }, { "start": 510, "end": 530, "text": "(Liang et al., 2006;", "ref_id": "BIBREF26" }, { "start": 806, "end": 826, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF23" }, { "start": 1109, "end": 1123, "text": "(Chiang, 2005;", "ref_id": "BIBREF8" }, { "start": 1124, "end": 1153, "text": "Ittycheriah and Roukos, 2007)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Translations", "sec_num": "2.1" },
{ "text": "Lexical translation features factor as in Eq. 3 (Tab. 2). We score all phrase pairs in a sentence pair that pair a target phrase with the smallest source phrase that contains all of the alignments in the target phrase; if \u222a_{k: i \u2264 k \u2264 j} a(k) = \u2205, no phrase feature fires for t_i^j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Translations", "sec_num": "2.1" },
{ "text": "N-gram language models have become standard in machine translation systems. For bigrams and trigrams (used in this paper), the factoring is in Eq. 4 (Tab. 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram Language Model", "sec_num": "2.2" },
{ "text": "There have been many features proposed that consider source- and target-language syntax during translation. Syntax-based MT systems often use features on grammar rules, frequently maximum likelihood estimates of conditional probabilities in a probabilistic grammar, but other syntactic features are possible. For example, Quirk et al. (2005) use features involving phrases and source-side dependency trees and Mi et al. (2008) use features from a forest of parses of the source sentence. There is also substantial work in the use of target-side syntax (Galley et al., 2006; Shen et al., 2008) . In addition, researchers have recently added syntactic features to phrase-based and hierarchical phrase-based models (Gimpel and Smith, 2008; Haque et al., 2009; Chiang et al., 2008) . In this work, we focus on syntactic features of target-side dependency trees, \u03c4_t, along with the words t. These include attachment features that relate a word to its syntactic parent, and valence features. They factor as in Eq. 5 (Tab. 2). Features that consider only target-side syntax and words without considering s can be seen as \"syntactic language model\" features (Shen et al., 2008) . (Footnote 5: Segmentation might be modeled as a hidden variable in future work.)", "cite_spans": [ { "start": 321, "end": 340, "text": "Quirk et al. (2005)", "ref_id": "BIBREF36" }, { "start": 408, "end": 424, "text": "Mi et al. (2008)", "ref_id": "BIBREF29" }, { "start": 550, "end": 571, "text": "(Galley et al., 2006;", "ref_id": "BIBREF15" }, { "start": 572, "end": 590, "text": "Shen et al., 2008)", "ref_id": "BIBREF37" }, { "start": 710, "end": 734, "text": "(Gimpel and Smith, 2008;", "ref_id": "BIBREF16" }, { "start": 735, "end": 754, "text": "Haque et al., 2009;", "ref_id": "BIBREF18" }, { "start": 755, "end": 775, "text": "Chiang et al., 2008)", "ref_id": "BIBREF7" }, { "start": 1150, "end": 1169, "text": "(Shen et al., 2008)", "ref_id": "BIBREF37" }, { "start": 1172, "end": 1173, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Target Syntax", "sec_num": "2.3" },
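To make the factored scoring concrete, the following is a minimal, hypothetical Python sketch (not from the paper) of the unnormalized score from Eq. 2, i.e., the dot product of the feature weights with the global feature vector g, with g decomposed additively into word-to-word lexical features and bigram language model features in the spirit of Eqs. 3-4; all feature names and weights below are invented for illustration.

```python
# Hypothetical sketch (not from the paper): the unnormalized log-linear score
# of Eq. 2, i.e. the dot product of feature weights with the global feature
# vector g, where g factors additively into local parts (cf. Eqs. 3-4).
# Only word-to-word lexical features and bigram LM features are shown.
from collections import defaultdict
from typing import Dict, List, Set

def score(theta: Dict[str, float], s: List[str], t: List[str],
          a: Dict[int, Set[int]]) -> float:
    g = defaultdict(float)
    # g_trans: one f_lex feature per aligned (source word, target word) pair
    for j in range(1, len(t)):
        for i in a.get(j, set()):
            g["lex:%s->%s" % (s[i], t[j])] += 1.0
    # g_lm: one bigram feature per adjacent pair of target words
    for j in range(2, len(t)):
        g["lm:%s %s" % (t[j - 1], t[j])] += 1.0
    # dot product of theta with g (unknown features get weight 0)
    return sum(theta.get(k, 0.0) * v for k, v in g.items())

# toy usage; the weights below are made up
theta = {"lex:haus->house": 1.2, "lm:the house": 0.7}
s = ["<NULL>", "wir", "sehen", "das", "haus"]
t = ["", "we", "see", "the", "house"]
a = {1: {1}, 2: {2}, 3: {3}, 4: {4}}
print(score(theta, s, t, a))  # ~1.9 on this toy example
```

The remaining feature groups in Table 1 (phrase, target syntax, reordering, tree-to-tree configuration, and coverage features) would be added to g in the same additive way; decoding then maximizes this score over the target sentence, its tree, and the alignment, and training normalizes it as in Eq. 2.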
(2008)", "ref_id": "BIBREF29" }, { "start": 550, "end": 571, "text": "(Galley et al., 2006;", "ref_id": "BIBREF15" }, { "start": 572, "end": 590, "text": "Shen et al., 2008)", "ref_id": "BIBREF37" }, { "start": 710, "end": 734, "text": "(Gimpel and Smith, 2008;", "ref_id": "BIBREF16" }, { "start": 735, "end": 754, "text": "Haque et al., 2009;", "ref_id": "BIBREF18" }, { "start": 755, "end": 775, "text": "Chiang et al., 2008)", "ref_id": "BIBREF7" }, { "start": 1150, "end": 1169, "text": "(Shen et al., 2008)", "ref_id": "BIBREF37" }, { "start": 1172, "end": 1173, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Target Syntax", "sec_num": "2.3" }, { "text": "g trans (s, a, t) = P m j=1 P i\u2208a(j) f lex (si, tj) (3) + P i,j:1\u2264i