{ "paper_id": "D09-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:38:25.567227Z" }, "title": "Non-Projective Parsing for Statistical Machine Translation", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "", "affiliation": { "laboratory": "", "institution": "MIT CSAIL", "location": { "postCode": "02139", "settlement": "Cambridge", "region": "MA", "country": "USA" } }, "email": "carreras@csail.mit.edu" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "", "affiliation": { "laboratory": "", "institution": "MIT CSAIL", "location": { "postCode": "02139", "settlement": "Cambridge", "region": "MA", "country": "USA" } }, "email": "mcollins@csail.mit.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe a novel approach for syntax-based statistical MT, which builds on a variant of tree adjoining grammar (TAG). Inspired by work in discriminative dependency parsing, the key idea in our approach is to allow highly flexible reordering operations during parsing, in combination with a discriminative model that can condition on rich features of the source-language string. Experiments on translation from German to English show improvements over phrase-based systems, both in terms of BLEU scores and in human evaluations.", "pdf_parse": { "paper_id": "D09-1021", "_pdf_hash": "", "abstract": [ { "text": "We describe a novel approach for syntax-based statistical MT, which builds on a variant of tree adjoining grammar (TAG). Inspired by work in discriminative dependency parsing, the key idea in our approach is to allow highly flexible reordering operations during parsing, in combination with a discriminative model that can condition on rich features of the source-language string. 
Experiments on translation from German to English show improvements over phrase-based systems, both in terms of BLEU scores and in human evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Syntax-based models for statistical machine translation (SMT) have recently shown impressive results; many such approaches are based on either synchronous grammars (e.g., (Chiang, 2005) ), or tree transducers (e.g., (Marcu et al., 2006) ). This paper describes an alternative approach for syntax-based SMT, which directly leverages methods from non-projective dependency parsing. The key idea in our approach is to allow highly flexible reordering operations, in combination with a discriminative model that can condition on rich features of the source-language input string.", "cite_spans": [ { "start": 171, "end": 185, "text": "(Chiang, 2005)", "ref_id": "BIBREF5" }, { "start": 216, "end": 236, "text": "(Marcu et al., 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach builds on a variant of tree adjoining grammar (TAG; (Joshi and Schabes, 1997) ) (specifically, the formalism of (Carreras et al., 2008) ). The models we describe make use of phrasal entries augmented with subtrees that provide syntactic information in the target language. As one example, when translating the sentence wir m\u00fcssen auch diese kritik ernst nehmen from German into English, the following sequence of syntactic phrasal entries might be used (we show each English syntactic fragment above its associated German sub-string): TAG parsing operations are then used to combine these fragments into a full parse tree, giving the final English translation we must also take these criticisms seriously. 
Some key aspects of our approach are as follows:", "cite_spans": [ { "start": 65, "end": 90, "text": "(Joshi and Schabes, 1997)", "ref_id": "BIBREF10" }, { "start": 125, "end": 148, "text": "(Carreras et al., 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We impose no constraints on entries in the phrasal lexicon. The method thereby retains the full set of lexical entries of phrase-based systems (e.g., (Koehn et al., 2003) ). 1 \u2022 The model allows a straightforward integration of lexicalized syntactic language models-for example the models of (Charniak, 2001 )-in addition to a surface language model.", "cite_spans": [ { "start": 152, "end": 172, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF11" }, { "start": 176, "end": 177, "text": "1", "ref_id": null }, { "start": 294, "end": 309, "text": "(Charniak, 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The operations used to combine tree fragments into a complete parse tree are significant generalizations of standard parsing operations found in TAG; specifically, they are modified to be highly flexible, potentially allowing any possible permutation (reordering) of the initial fragments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As one example of the type of parsing operations that we will consider, we might allow the tree fragments shown above for these criticisms and take to be combined to form a new structure with the sub-string take these criticisms. 
This step in the derivation is necessary to achieve the correct English word order, and is novel in a couple of respects: first, these criticisms is initially seen to the left of take, but after the adjunction this order is reversed; second, and more unusually, the treelet for seriously has been skipped over, with the result that the German words translated at this point (diese, kritik, and nehmen) form a non-contiguous sequence. More generally, we will allow any two tree fragments to be combined during the translation process, irrespective of the reorderings which are introduced, or the non-projectivity of the parsing operations that are required. The use of flexible parsing operations raises two challenges that will be a major focus of this paper. First, these operations will allow the model to capture complex reordering phenomena, but will in addition introduce many spurious possibilities. Inspired by work in discriminative dependency parsing (e.g., (McDonald et al., 2005 )), we add probabilistic constraints to the model through a discriminative model that links lexical dependencies in the target language to features of the source language string. We also investigate hard constraints on the dependency structures that are created during parsing. Second, there is a need to develop efficient decoding algorithms for the models. We describe approximate search methods that involve a significant extension of decoding algorithms originally developed for phrase-based translation systems.", "cite_spans": [ { "start": 1197, "end": 1219, "text": "(McDonald et al., 2005", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments on translation from German to English show a 0.5% improvement in BLEU score over a phrase-based system. Human evaluations show that the syntax-based system gives a significant improvement over the phrase-based system. 
The discriminative dependency model gives a 1.5% BLEU point improvement over a basic model that does not condition on the source language string; the hard constraints on dependency structures give a 0.8% BLEU improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A number of syntax-based translation systems have framed translation as a parsing problem, where search for the most probable translation is achieved using algorithms that are generalizations of conventional parsing methods. Early examples of this work include (Alshawi, 1996; Wu, 1997) ; more recent models include (Yamada and Knight, 2001; Eisner, 2003; Melamed, 2004; Zhang and Gildea, 2005; Chiang, 2005; Quirk et al., 2005; Marcu et al., 2006; Zollmann and Venugopal, 2006; Nesson et al., 2006; Cherry, 2008; Mi et al., 2008; Shen et al., 2008) . The majority of these methods make use of synchronous grammars, or tree transducers, which operate over parse trees in the source and/or target languages. Reordering rules are typically specified through rotations or transductions stated at the level of contextfree rules, or larger fragments, within parse trees. 
These rules can be learned automatically from corpora.", "cite_spans": [ { "start": 261, "end": 276, "text": "(Alshawi, 1996;", "ref_id": "BIBREF0" }, { "start": 277, "end": 286, "text": "Wu, 1997)", "ref_id": "BIBREF24" }, { "start": 316, "end": 341, "text": "(Yamada and Knight, 2001;", "ref_id": "BIBREF25" }, { "start": 342, "end": 355, "text": "Eisner, 2003;", "ref_id": "BIBREF9" }, { "start": 356, "end": 370, "text": "Melamed, 2004;", "ref_id": "BIBREF16" }, { "start": 371, "end": 394, "text": "Zhang and Gildea, 2005;", "ref_id": "BIBREF26" }, { "start": 395, "end": 408, "text": "Chiang, 2005;", "ref_id": "BIBREF5" }, { "start": 409, "end": 428, "text": "Quirk et al., 2005;", "ref_id": "BIBREF21" }, { "start": 429, "end": 448, "text": "Marcu et al., 2006;", "ref_id": "BIBREF14" }, { "start": 449, "end": 478, "text": "Zollmann and Venugopal, 2006;", "ref_id": "BIBREF27" }, { "start": 479, "end": 499, "text": "Nesson et al., 2006;", "ref_id": "BIBREF18" }, { "start": 500, "end": 513, "text": "Cherry, 2008;", "ref_id": "BIBREF4" }, { "start": 514, "end": 530, "text": "Mi et al., 2008;", "ref_id": "BIBREF17" }, { "start": 531, "end": 549, "text": "Shen et al., 2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to Previous Work", "sec_num": "2" }, { "text": "A critical difference in our work is to allow arbitrary reorderings of the source language sentence (as in phrase-based systems), through the use of flexible parsing operations. Rather than stating reordering rules at the level of source or target language parse trees, we capture reordering phenomena using a discriminative dependency model. 
Other factors that distinguish us from previous work are the use of all phrases proposed by a phrase-based system, and the use of a dependency language model that also incorporates constituent information (although see (Charniak et al., 2003; Shen et al., 2008) for related approaches).", "cite_spans": [ { "start": 562, "end": 585, "text": "(Charniak et al., 2003;", "ref_id": "BIBREF2" }, { "start": 586, "end": 604, "text": "Shen et al., 2008)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to Previous Work", "sec_num": "2" }, { "text": "Our work builds on the variant of tree adjoining grammar (TAG) introduced by (Carreras et al., 2008) . In this formalism the basic units in the grammar are spines, which associate tree fragments with lexical items. These spines can be combined using a sister-adjunction operation (Rambow et al., 1995) , to form larger pieces of structure. 2 For example, we might have the following operation: In this case the spine for there has sister-adjoined into the S node in the spine for is; we refer to the spine for there as being the modifier spine, and the spine for is being the head spine. There are close connections to dependency formalisms: in particular in this operation we see a lexical dependency between the modifier word there and the head word is. 
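The sister-adjunction operation just described can be sketched as a small data structure. This is our own illustrative rendering (class and function names are ours, not the authors'), under the assumption that a spine is a word plus a path of nonterminals from the anchor to the root:

```python
# Minimal sketch (our own, not the paper's code) of spines and sister-adjunction.
# A spine pairs a lexical anchor with a path of nonterminals; sister-adjoining a
# modifier spine into a head spine records a (modifier word, head word) dependency.
from dataclasses import dataclass, field

@dataclass
class Spine:
    word: str                              # lexical anchor, e.g. 'is'
    nodes: tuple                           # nonterminals from anchor to root, e.g. ('VP', 'S')
    modifiers: list = field(default_factory=list)

def sister_adjoin(head: Spine, modifier: Spine, pos: int):
    # Attach the modifier spine under head.nodes[pos] and return the
    # bi-lexical dependency created by the operation.
    head.modifiers.append((modifier, pos))
    return (modifier.word, head.word)

is_spine = Spine('is', ('VP', 'S'))
there_spine = Spine('there', ('NP',))
dep = sister_adjoin(is_spine, there_spine, pos=1)   # adjoin at the S node
print(dep)   # ('there', 'is')
```

Here the spine for there sister-adjoins at position 1 (the S node) of the spine for is, yielding exactly the lexical dependency between there and is described above.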
It is possible to define syntactic language models, similar to (Charniak, 2001 ), which associate probabilities with these dependencies, roughly speaking of the form", "cite_spans": [ { "start": 77, "end": 100, "text": "(Carreras et al., 2008)", "ref_id": "BIBREF1" }, { "start": 280, "end": 301, "text": "(Rambow et al., 1995)", "ref_id": "BIBREF22" }, { "start": 819, "end": 834, "text": "(Charniak, 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3.1" }, { "text": "P (w m , s m |w h , s h , pos, \u03c3),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3.1" }, { "text": "where w m and s m are the identities of the modifier word and spine, w h and s h are the identities of the head word and spine, pos is the position in the head spine that is being adjoined into, and \u03c3 is some additional state (e.g., state that tracks previous modifiers that have adjoined into the same spine). In this paper we will also consider treelets, which are a generalization of spines, and which allow lexical entries that include more than one word. These treelets can again be combined using a sister-adjunction operation. As an example, consider the following operation: In this case the treelet for to respond sister-adjoins into the treelet for be able. This operation introduces a bi-lexical dependency between the modifier word to and the head word able.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3.1" }, { "text": "This section describes how phrase entries from phrase-based translation systems can be modified to include associated English syntactic structures. These syntactic phrase-entries (from here on referred to as \"s-phrases\") will form the basis of the translation models that we describe. We extract s-phrases from training examples consisting of a source-language string paired with a target-language parse tree. 
For example, consider the training example in figure 1. We assume some method that enumerates a set of possible phrase entries for each training example: each phrase entry is a pair (i, j), (k, l) specifying that source-language words f i . . . f j correspond to target-language words e k . . . e l in the example. For example, one phrase entry for the example might be (1, 2), (1, 2) , representing the pair es gibt \u21d2 there is . In our experiments we use standard methods in phrase-based systems (Koehn et al., 2003) to define the set of phrase entries for each sentence in training data. For each phrase entry, we add syntactic information to the English string. To continue our example, the resulting entry would be as follows:", "cite_spans": [ { "start": 907, "end": 927, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "es gibt \u21d2 S NP there VP is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "To give a more formal description of how syntactic structures are derived for phrases, first note that each parse tree t is mapped to a TAG derivation using the method described in (Carreras et al., 2008) . This procedure uses the head finding rules of (Collins, 1997) . The resulting derivation consists of a TAG spine for each word seen in the sentence, together with a set of adjunction operations which each involve a modifier spine and a head spine. Given an English string e = e 1 . . . e n , with an associated parse tree t, the syntactic structure associated with a substring e k . . . 
e l (e.g., there is) is then defined as follows:", "cite_spans": [ { "start": 181, "end": 204, "text": "(Carreras et al., 2008)", "ref_id": "BIBREF1" }, { "start": 253, "end": 268, "text": "(Collins, 1997)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "\u2022 For each word in the English sub-string, include its associated TAG spine in t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "\u2022 In addition, include any adjunction operations in t where both the head and modifier word are in the sub-string e k . . . e l .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "In the above example, the resulting structure (i.e., the structure for there is) is a single treelet. In other cases, however, we may get a sequence of treelets, which are disconnected from each other. For example, another likely phrase-entry for this training example is es gibt keine \u21d2 there is no resulting in the first lexical entry in figure 2, which has two treelets. Allowing s-phrases with multiple treelets ensures that all phrases used by phrase-based systems can be used within our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "As a final step, we add additional alignment information to each s-phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "Consider an s-phrase which contains source-language words f 1 . . . f n paired with target-language words e 1 . . . e m . The alignment information is a vector (a 1 , b 1 ) . . . (a m , b m ) that specifies for each word e i its alignment to words f a i . . . f b i in the source language. 
For example, for the phrase entry es gibt \u21d2 there is a correct alignment would be (1, 1), (2, 2) , specifying that there is aligned to es, and is is aligned to gibt (note that in many, but not all, cases a i = b i , i.e., a target language word is aligned to a single source language word).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "The alignment information in s-phrases will be useful in tying syntactic dependencies created in the target language to positions in the source language string. In particular, we will consider discriminative models (analogous to models for dependency parsing, e.g., see (McDonald et al., 2005) ) that estimate the probability of target-language dependencies conditioned on properties of the source-language string. Alignments may be derived in a number of ways; in our method we directly use phrase entries proposed by a phrase-based system. Specifically, for each target word e i in a phrase entry f 1 . . . f n , e 1 . . . e m for a training example, we find the smallest 5 phrase entry in the same training example that includes e i on the target side, and is a subset of f 1 . . . f n on the source side; the word e i is then aligned to the subset of source language words in this \"minimal\" phrase.", "cite_spans": [ { "start": 270, "end": 293, "text": "(McDonald et al., 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "In conclusion, s-phrases are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "Definition 1 An s-phrase is a 4-tuple f, e, t, a where: f is a sequence of foreign words; e is a sequence of English words; t is a sequence of treelets specifying a TAG spine for each English word, and potentially some adjunctions between these spines; and a is an alignment. 
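Definition 1 can be rendered directly as a data structure. The following sketch is ours (the class and field names are assumptions, and the treelet entries are placeholders for the real TAG spines):

```python
# A direct rendering of the s-phrase 4-tuple from Definition 1 (names are ours).
# The alignment a gives, for each English word e_i, the span (a_i, b_i) of
# foreign words it is aligned to.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SPhrase:
    f: List[str]                   # foreign (source) words
    e: List[str]                   # English (target) words
    t: List[object]                # treelets: one TAG spine per English word, plus adjunctions
    a: List[Tuple[int, int]]       # alignment spans, one (a_i, b_i) per English word

q = SPhrase(f=['es', 'gibt'],
            e=['there', 'is'],
            t=['NP-spine', 'VP-spine'],     # placeholders for real treelet objects
            a=[(1, 1), (2, 2)])             # there -> es, is -> gibt
assert len(q.a) == len(q.e)                 # one alignment span per English word
```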
For an s-phrase q we will sometimes refer to the 4 elements of q as f (q), e(q), t(q) and a(q).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S-phrases", "sec_num": "3.2" }, { "text": "We now introduce a model that makes use of s-phrases, and which is flexible in the reorderings that it allows. To provide some intuition, and some motivation for the use of reordering operations, figure 3 gives several examples of German strings which have different word orders from English. The crucial idea will be to use TAG adjunction operations to combine treelets to form a complete parse tree, but with a complete relaxation on the order in which the treelets are combined. For example, consider again the example given in the introduction to this paper. (Caption of figure 3: (a) is the original German string, with a possible segmentation marked with \"[\" and \"]\"; (b) is a translation for (a); and (c) is a sequence of phrase entries, including syntactic structures, for the segmentation given in (a).) In the first step of a derivation that builds on these treelets, the treelet", "cite_spans": [], "ref_spans": [ { "start": 195, "end": 203, "text": "figure 3", "ref_id": null } ], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "for these criticisms might adjoin into the treelet for take, giving the following new sequence: In the final step the second treelet adjoins into the VP above must, giving a parse tree for the string we must also take these criticisms seriously, and completing the translation. Formally, given an input sentence f , a derivation d is a pair q, \u03c0 where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "\u2022 q = q 1 . . . q n is a sequence of s-phrases such that f = f (q 1 ) \u2295 f (q 2 ) \u2295 . . . 
\u2295 f (q n ) (where u \u2295 v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "denotes the concatenation of strings u and v).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "\u2022 \u03c0 is a set of adjunction operations that connects the sequence of treelets contained in t(q 1 ), t(q 2 ), . . . , t(q n ) into a parse tree in the target language. The operations allow a complete relaxation of word order, potentially allowing any of the n! possible orderings of the n s-phrases. We make use of both sister-adjunction and r-adjunction operations, as defined in (Carreras et al., 2008) . Given a derivation d = q, \u03c0 , we define e(d) to be the target-language string defined by the derivation, and t(d) to be the complete target-language parse tree created by the derivation. The most likely derivation for a foreign sentence f is arg max d\u2208G(f ) score(d), where G(f ) is the set of possible derivations for f , and the score for a derivation is defined as 7", "cite_spans": [ { "start": 378, "end": 401, "text": "(Carreras et al., 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "score(d) = score LM (e(d)) + score SY N (t(d)) + score R (d) + \u03a3_{j=1}^{n} score P (q j ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "The components of the model are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "\u2022 score LM (e(d)) is the log probability of the English string under a trigram language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "\u2022 score SY N (t(d)) is the log probability of the English parse tree under a syntactic language model, similar to (Charniak, 2001) , that associates 
probabilities with lexical dependencies.", "cite_spans": [ { "start": 114, "end": 130, "text": "(Charniak, 2001)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "\u2022 score R (d) will be used to score the parsing operations in \u03c0, based on the source-language string and the alignments in the s-phrases. This part of the model is described extensively in section 4.1 of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "\u2022 score P (q) is the score for an s-phrase q. This score is a log-linear combination of various features, including features that are commonly found in phrase-based systems: for example log P (f (q)|e(q)), log P (e(q)|f (q)), and lexical translation probabilities. In addition, we include a feature log P (t(q)|f (q), e(q)), which captures the probability of the phrase in question having the syntactic structure t(q).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "Note that a model that includes the terms score LM (e(d)) and \u03a3_{j=1}^{n} score P (q j ) alone would essentially be a basic phrase-based model (with no distortion terms). The terms score SY N (t(d)) and score R (d) add syntactic information to this basic model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "A key motivation for this model is the flexibility of the reordering operations that it allows. However, the approach raises two major challenges:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "Constraints on reorderings. Relaxing the operations in the parsing model will allow complex reorderings to be captured, but will also introduce many spurious possibilities. As one example, consider the derivation step shown in figure 4. 
This step may receive a high probability from a syntactic or surface language model-no discrimination is a quite plausible NP in English-but it should be ruled out for other reasons, for example because it does not respect the dependencies in the original German (i.e., keine/no is not a modifier to diskriminierung/discrimination in the German string). The challenge will be to develop either hard constraints which rule out spurious derivation steps such as these, or soft constraints, encapsulated in score R (d), which penalize them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "Efficient search. Exact search for the derivation which maximizes the score in Eq. 1 cannot be accomplished efficiently using dynamic programming (as in phrase-based systems, it is easy to show that the decoding problem is NP-complete). Approximate search methods will be needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "The next two sections of this paper describe solutions to these two challenges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Model", "sec_num": "3.3" }, { "text": "We now describe the model score R introduced in the previous section. Recall that \u03c0 specifies k adjunction operations that are used to build a full parse tree, where k \u2265 n is the number of treelets within the sequence of s-phrases q = q 1 . . . q n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Discriminative Dependency Model", "sec_num": "4.1" }, { "text": "Each of the k adjunction operations creates a dependency between a modifier word w m within a phrase q m , and a head word w h within a phrase q h . For example, in the example in section 3.3 where these criticisms was combined with take, the modifier word is criticisms and the head word is take. The modifier and head words have TAG spines s m and s h respectively. 
In addition we can define (a m , b m ) to be the start and end indices of the words in the foreign string to which the word w m is aligned; this information can be recovered because the s-phrase q m contains alignment information for all target words in the phrase, including w m . Similarly, we can define (a h , b h ) to be alignment information for the head word w h . Finally, we can define \u03c1 to be a binary flag specifying whether or not the adjunction operation involves reordering (in the take criticisms example, this flag is set to true, because the order in English is reversed from that in German). This leads to the following definition: Definition 2 Given a derivation d = q, \u03c0 , we define \u0393(d) to be the set of \u0393-dependencies in d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Discriminative Dependency Model", "sec_num": "4.1" }, { "text": "Each \u0393-dependency is a tuple w m , s m , a m , b m , w h , s h , a h , b h , \u03c1 of elements as described above. Figure 5 gives an illustration of how an adjunction creates one such \u0393-dependency.", "cite_spans": [], "ref_spans": [ { "start": 111, "end": 119, "text": "Figure 5", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "A Discriminative Dependency Model", "sec_num": "4.1" }, { "text": "The model is then defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Discriminative Dependency Model", "sec_num": "4.1" }, { "text": "score R (d) = \u03a3_{\u03b3\u2208\u0393(d)} score r (\u03b3, f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Discriminative Dependency Model", "sec_num": "4.1" }, { "text": "where score r (\u03b3, f ) is a score associated with the \u0393-dependency \u03b3. 
This score can potentially be sensitive to any information in \u03b3 or the source-language string f ; in particular, note that the alignment indices (a m , b m ) and (a h , b h ) essentially anchor the target-language dependency to positions in the source-language string, allowing the score for the dependency to be based on features that have been widely used in discriminative dependency parsing, for example features based on the proximity of the two positions in the source-language string, the part-of-speech tags in the surrounding context, and so on. These features have been shown to be powerful in the context of regular dependency parsing, and our intent is to leverage them in the translation problem. In our model, we define score r as follows. We estimate a model P (y|\u03b3, f ) where y \u2208 {\u22121, +1}, and y = +1 indicates that a dependency does exist between w m and w h , and y = \u22121 indicates that a dependency does not exist. We then define score r (\u03b3, f ) = log P (+1|\u03b3, f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Discriminative Dependency Model", "sec_num": "4.1" }, { "text": "To estimate P (y|\u03b3, f ), we first extract a set of labeled training examples of the form y i , \u03b3 i , f i for i = 1 . . . N from our training data as follows: for each pair of target-language words (w m , w h ) seen in the training data, we can extract associated spines (s m , s h ) from the relevant parse tree, and also extract a label y indicating whether or not a head-modifier dependency is seen between the two words in the parse tree. Given an s-phrase in the training example that includes w m , we can extract alignment information (a m , b m ) from the s-phrase; we can extract similar information (a h , b h ) for w h . The end result is a training example of the form y, \u03b3, f . 
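The scoring of a single \u0393-dependency can be sketched as follows. This is our own toy rendering: the distance-based probability is a stand-in for the estimate P (y|\u03b3, f ) defined above, and all names are ours:

```python
# Sketch (ours, not the paper's code) of score_r for a single Gamma-dependency.
# A Gamma-dependency anchors a target-language head-modifier pair to source
# positions (a_m, b_m) and (a_h, b_h); a binary model P(y | gamma, f) scores
# whether the dependency should exist, and score_r is its log-probability.
import math
from typing import NamedTuple, Tuple

class Gamma(NamedTuple):
    w_m: str       # modifier word
    s_m: str       # modifier spine (placeholder label)
    a_m: int       # start of modifier's source-side span
    b_m: int       # end of modifier's source-side span
    w_h: str       # head word
    s_h: str       # head spine (placeholder label)
    a_h: int       # start of head's source-side span
    b_h: int       # end of head's source-side span
    rho: bool      # True if the adjunction involves reordering

def p_dependency(gamma: Gamma, f: Tuple[str, ...]) -> float:
    # Toy stand-in for the discriminative estimate: favour dependencies
    # whose source-language anchors are close together.
    dist = abs(gamma.a_m - gamma.a_h)
    return 1.0 / (1.0 + dist)

def score_r(gamma: Gamma, f: Tuple[str, ...]) -> float:
    return math.log(p_dependency(gamma, f))

g = Gamma('criticisms', 'NP', 4, 5, 'take', 'VP', 7, 7, rho=True)
f = ('wir', 'muessen', 'auch', 'diese', 'kritik', 'ernst', 'nehmen')
print(round(score_r(g, f), 3))   # log(1/4) = -1.386
```

The real model conditions on far richer source-side information (part-of-speech context, the spine identities, the reordering flag); the point of the sketch is only the anchoring of a target dependency to source positions.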
8 We then estimate P (y|\u03b3, f ) using a simple backed-off model that takes into account the identity of the two spines, the value for the flag \u03c1, the distance between (a m , b m ) and (a h , b h ), and part-of-speech information in the source language.", "cite_spans": [ { "start": 689, "end": 690, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Discriminative Dependency Model", "sec_num": "4.1" }, { "text": "We now describe a second type of constraint, which limits the amount of non-projectivity in derivations. Consider again the k adjunction operations in \u03c0, which are used to connect treelets into a full parse tree. Each adjunction operation involves a head treelet that dominates a modifier treelet. Thus for any treelet t, we can consider its descendants, that is, the entire set of treelets that are directly or indirectly dominated by t. We define a \u03c0-constituent for treelet t to be the subset of source-language words dominated by t and its descendants. We then introduce the following constraint on \u03c0-constituents:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contiguity of \u03c0-Constituents", "sec_num": "4.2" }, { "text": "A \u03c0-constituent is contiguous iff it consists of a contiguous sequence of words in the source language. A derivation \u03c0 satisfies the \u03c0-constituent constraint iff all \u03c0-constituents that it contains are contiguous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (\u03c0-constituent constraint.)", "sec_num": null }, { "text": "In this paper we constrain all derivations to satisfy the \u03c0-constituent constraint (future work may consider probabilistic versions of the constraint).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (\u03c0-constituent constraint.)", "sec_num": null }, { "text": "The intuition behind the constraint deserves more discussion. 
The constraint specifies that the modifiers to each treelet can appear in any order around the treelet, with arbitrary reorderings or non-projective operations. However, once a treelet has taken all its modifiers, the resulting \u03c0-constituent must form a contiguous sub-sequence of the source-language string. As one set of examples, consider the translations in figure 3, and the example given in the introduction. These examples involve reordering of arguments and adjuncts within clauses, a very common case of reordering in translation from German to English. The reorderings in these translations are quite flexible, but in all cases satisfy the \u03c0-constituent constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (\u03c0-constituent constraint.)", "sec_num": null }, { "text": "As an illustration of a derivation that violates the constraint, consider again the derivation step shown in figure 4. This step has formed a partial hypothesis, no discrimination, which corresponds to the German words keine and diskriminierung, which do not form a contiguous substring in the German. Consider now a complete derivation, which derives the string there is hierarchy of no discrimination, and which includes the \u03c0-constituent no discrimination shown in the figure (i.e., where the treelet discrimination takes no as its only modifier). This derivation will violate the \u03c0-constituent constraint. 9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (\u03c0-constituent constraint.)", "sec_num": null }, { "text": "We now describe decoding algorithms for the syntactic models: we first describe inference rules that are used to combine pieces of structure, and then describe heuristic search algorithms that use these inference rules. Throughout this section, for brevity and simplicity, we describe algorithms that apply under the assumption that each s-phrase has a single associated treelet. 
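Looking ahead to the bit-string representation used in decoding (section 5), the \u03c0-constituent constraint can be checked directly on coverage bit-strings. The sketch below assumes a hypothetical `children` map from a treelet to its modifier treelets and a `span_bits` map giving each treelet's own source-word cover; neither is the paper's actual representation.

```python
# Illustrative check of the pi-constituent constraint (Definition 3),
# with source covers encoded as integer bit-strings.

def pi_constituent_bits(t, children, span_bits):
    """Bit-string of the source words dominated by treelet t
    and all of its descendants."""
    bits = span_bits[t]
    for c in children.get(t, []):
        bits |= pi_constituent_bits(c, children, span_bits)
    return bits

def is_contiguous(bits: int) -> bool:
    """True iff the set bits form a single contiguous run."""
    if bits == 0:
        return True
    bits >>= (bits & -bits).bit_length() - 1  # drop trailing zeros
    return (bits & (bits + 1)) == 0           # remaining bits must be all ones

def satisfies_constraint(root, children, span_bits) -> bool:
    """A derivation is valid iff every pi-constituent is contiguous."""
    ok = is_contiguous(pi_constituent_bits(root, children, span_bits))
    return ok and all(
        satisfies_constraint(c, children, span_bits)
        for c in children.get(root, [])
    )
```

For instance, a modifier covering a word adjacent to its head's cover passes the check, while the keine/diskriminierung example above, whose combined cover has a gap, fails it.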
The generalization to the case where an s-phrase may have multiple treelets is discussed in section 5.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "5" }, { "text": "Parsing operations for the TAG grammars described in (Carreras et al., 2008) are based on the dynamic programming algorithms in (Eisner, 2000). A critical idea in dynamic programming algorithms such as these is to associate constituents in a chart with spans of the input sentence, and to introduce inference rules that combine constituents into larger pieces of structure. The crucial step in generalizing these algorithms to the non-projective case, and to translation, will be to make use of bit-strings that keep track of which words in the German have already been translated in a chart entry. To return to the example from the introduction, again assume that the selected s-phrases are those given in the introduction, the last of them being [nehmen] , and that the treelets are as shown there. Each of these treelets will form a basic entry in the chart, and will have an associated bit-string indicating which German words have been translated by that entry.", "cite_spans": [ { "start": 53, "end": 76, "text": "(Carreras et al., 2008)", "ref_id": "BIBREF1" }, { "start": 128, "end": 142, "text": "(Eisner, 2000)", "ref_id": "BIBREF8" }, { "start": 172, "end": 180, "text": "[nehmen]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": "5.1" }, { "text": "Figure 6 (beam search): 0. Data structures: Q_i for i = 1 . . . n is a set of hypotheses for each length i; S is a set of chart entries. 1. S \u2190 \u2205 2. Initialize Q_1 . . . Q_n with basic chart entries derived from phrase entries 3. For i = 1 . . . n 4. For any A \u2208 BEAM(Q_i) 5. If S contains a chart entry with the same signature as A, and which has a higher inside score, 6. continue 7. Else 8. Add A to S 9. For any chart entry C that can be derived from A together with another chart entry B \u2208 S, add C to the set Q_j where j = length(C) 10. Return Q_n, a set of items of length n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": "5.1" }, { "text": "These basic chart entries can then be combined to form larger pieces of structure. For example, the following inferential step is possible. We have shown the bit-string representation for each constituent: for example, the new constituent has the bit-string 0001101 representing the fact that the non-contiguous sub-strings diese kritik and nehmen have been translated at this point. Any two constituents can be combined, providing that the logical AND of their bit-strings is all 0's.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": "5.1" }, { "text": "Inference steps such as that shown above will have an associated score corresponding to the TAG adjunction that is involved: in our models, both score_SYN and score_r will contribute to this score. 
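The bit-string side of an inference step can be sketched as follows; constituents are reduced here to their coverage bit-strings, and the names are illustrative rather than taken from the paper's implementation.

```python
# Sketch of the bit-string check when combining two chart constituents.

def can_combine(bits_a: int, bits_b: int) -> bool:
    """Combination is allowed iff the two constituents translate
    disjoint source words: the logical AND of their bit-strings is 0."""
    return (bits_a & bits_b) == 0

def combine_bits(bits_a: int, bits_b: int) -> int:
    """The new constituent covers the union of the two source covers."""
    assert can_combine(bits_a, bits_b)
    return bits_a | bits_b

# From the example: combining covers 0001100 (diese kritik) and 0000001
# (nehmen) yields the non-contiguous cover 0001101.
new_cover = combine_bits(0b0001100, 0b0000001)
assert format(new_cover, "07b") == "0001101"
```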
In addition, we add state (specifically, word bigrams at the start and end of constituents) that allows trigram language model scores to be calculated as constituents are combined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": "5.1" }, { "text": "There are 2^n possible bit-strings for a sentence of length n, hence the search space is of exponential size; approximate algorithms are therefore required to search for the highest scoring derivation. Figure 6 shows a beam search algorithm which makes use of the inference rules described in the previous section. The algorithm stores sets Q_i for i = 1 . . . n, where n is the source-language sentence length; each set Q_i stores hypotheses of length i (i.e., hypotheses with an associated bit-string with i ones). These sets are initialized with basic entries derived from s-phrases.", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 210, "text": "Figure 6", "ref_id": "FIGREF10" } ], "eq_spans": [], "section": "Approximate Search", "sec_num": "5.2" }, { "text": "The function BEAM(Q_i) returns all items within Q_i that have a high enough score to fall within a beam (more details for BEAM are given below). At each iteration (step 4), each item in turn is taken from BEAM(Q_i) and added to a chart; the inference rules described in the previous section are used to derive new items which are added to the appropriate set Q_j, where j > i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approximate Search", "sec_num": "5.2" }, { "text": "We have found the definition of BEAM(Q_i) to be critical to the success of the method. As a first step, each item in Q_i receives a score that is the sum of an inside score (the cost of all derivation steps used to create the item) and a future score (an estimate of the cost to complete the translation). 
The future score is based on the source-language words that are still to be translated, which can be directly inferred from the item's bit-string; this is similar to the use of future scores in Pharaoh (Koehn et al., 2003), and in fact we use Pharaoh's future scores in our model. We then give the following definition, where N is a parameter (the beam size):", "cite_spans": [ { "start": 504, "end": 524, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Approximate Search", "sec_num": "5.2" }, { "text": "Definition 4 (BEAM) Given Q_i, define Q_{i,j} for j = 1 . . . n to be the subset of items in Q_i which have their j'th bit equal to one (i.e., have the j'th source-language word translated). Define Q'_{i,j} to be the N highest scoring elements in Q_{i,j}. Then BEAM(Q_i) = \u222a_{j=1}^{n} Q'_{i,j}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approximate Search", "sec_num": "5.2" }, { "text": "To motivate this definition, note that a naive method would simply define BEAM(Q_i) to be the N highest scoring elements of Q_i. This definition, however, assumes that constituents which form translations of different parts of a sentence have scores that can be compared; this assumption would hold if the future scores were highly accurate, but it quickly breaks down when future scores are inaccurate. In contrast, the definition above ensures that the top N analyses for each of the n source-language words are stored at each stage, and hence that all parts of the source sentence are well represented. In experiments, the naive approach was essentially a failure, with parsing of some sentences either failing or being hopelessly inefficient, depending on the choice of N . In contrast, definition 4 gives good results. 
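Definition 4 can be sketched directly on (bit-string, score) pairs; the item representation and the single combined inside-plus-future score are simplifications of the paper's chart entries.

```python
# Sketch of BEAM(Q_i) from Definition 4. Items are illustrative
# (bits, score) pairs, where score = inside score + future score.

def beam(Q_i, n, N):
    """Union over source positions j = 0..n-1 of the N highest-scoring
    items whose bit-string has bit j set (word j translated)."""
    kept = set()
    for j in range(n):
        Q_ij = [item for item in Q_i if (item[0] >> j) & 1]
        Q_ij.sort(key=lambda item: item[1], reverse=True)
        kept.update(Q_ij[:N])
    return kept

# With N = 1, a naive "top-1 overall" beam would keep only the 0b011
# item below; the per-word beam also keeps 0b110, so the third source
# word stays represented.
items = {(0b011, 5.0), (0b110, 4.0), (0b101, 1.0)}
assert beam(items, 3, 1) == {(0b011, 5.0), (0b110, 4.0)}
```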
Table 1 (excerpt): Syntax (no dependency model) 23.7 (-1.5); Syntax (no \u03c0-constituent constraint) 24.4 (-0.8) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approximate Search", "sec_num": "5.2" }, { "text": "The decoding algorithms that we have described apply in the case where each s-phrase has a single treelet. The extension of these algorithms to the case where a phrase may have multiple treelets (e.g., see figure 2) is straightforward, but for brevity the details are omitted. The basic idea is to extend bit-string representations with a record of \"pending\" treelets which have not yet been included in a derivation. It is also possible to enforce the \u03c0-constituent constraint during decoding, as well as a constraint that ensures that reordering operations do not \"break apart\" English sub-strings within s-phrases that have multiple treelets (for example, for the s-phrase in figure 2, we ensure that there is no remains as a contiguous sequence of words in any translation using this s-phrase).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Allowing Multiple Treelets per s-Phrase", "sec_num": "5.3" }, { "text": "We trained the syntax-based system on 751,088 German-English translations from the Europarl corpus (Koehn, 2005) . A syntactic language model was also trained on the English sentences in the training data. We used Pharaoh (Koehn et al., 2003 ) as a baseline system for comparison; the s-phrases used in our system include all phrases, with the same scores, as those used by Pharaoh, allowing a direct comparison. For efficiency reasons we report results on sentences of length 30 words or less. 10 The syntax-based method gives a BLEU (Papineni et al., 2002) score of 25.04, a 0.46 BLEU point gain over Pharaoh. This result was found to be significant (p = 0.021) under the paired bootstrap resampling method of Koehn (2004) , and is close to significant (p = 0.058) under the sign test of Collins et al. (2005) . 
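The paired bootstrap test used here can be sketched as follows; scoring each system by a sum of per-sentence scores is a simplification (BLEU is a corpus-level metric, so a faithful implementation resamples per-sentence statistics and recomputes corpus BLEU for each replicate).

```python
# Sketch of paired bootstrap resampling (Koehn, 2004) for comparing two
# systems on the same test set; scores_a/scores_b are hypothetical
# per-sentence quality scores.

import random

def paired_bootstrap(scores_a, scores_b, samples=1000, seed=0):
    """Fraction of bootstrap resamples in which system A beats system B."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(samples):
        # resample test sentences with replacement, same indices for both
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / samples
```

A win fraction of at least 0.95 would correspond to significance at p = 0.05 under this test.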
Table 1 shows results for the full syntax-based system, and also results for two ablated systems: one with the discriminative dependency scores (see section 4.1) removed, and one with the \u03c0-constituent constraint removed. In both cases we see a clear impact of these components of the model, with 1.5 and 0.8 BLEU point decrements respectively. R: in our eyes , the opportunity created by this directive of introducing longer buses on international routes is efficient . S: the opportunity now presented by this directive is effective in our opinion , to use long buses on international routes . P: the need for this directive now possibility of longer buses on international routes to is in our opinion , efficiently . R: europe and asia must work together to intensify the battle against drug trafficking , money laundering , international crime , terrorism and the sexual exploitation of minors . S: europe and asia must work together in order to strengthen the fight against drug trafficking , money laundering , against international crime , terrorism and the sexual exploitation of minors . P: europe and asia must cooperate in the fight against drug trafficking , money laundering , against international crime , terrorism and the sexual exploitation of minors strengthened . R: equally important for the future of europe -at biarritz and later at nice -will be the debate on the charter of fundamental rights . S: it is equally important for the future of europe to speak on the charter of fundamental rights in biarritz , and then in nice . P: just as important for the future of europe , it will be in biarritz and then in nice on the charter of fundamental rights to speak . R: the convention was thus a muddled system , generating irresponsibility , and not particularly favourable to well-ordered democracy . S: therefore , the convention has led to a system of a promoter of irresponsibility of the lack of clarity and hardly coincided with the rules of a proper democracy . 
P: the convention therefore led to a system of full of lack of clarity and hardly a promoter of the irresponsibility of the rules of orderly was a democracy . Figure 7 : Examples where both annotators judged the syntactic system to give an improved translation when compared to the baseline system. 51 out of 200 translations fall into this category. These examples were chosen at random from these 51 examples. R is the human (reference) translation; S is the translation from the syntax-based system; P is the output from the baseline (phrase-based) system. Table 2 : Human annotator judgements. Rows show results for annotator 1, and columns for annotator 2; in both dimensions the order is Syntax, PB, =, Total. Syntax: 51, 3, 7, 61; PB: 1, 25, 11, 37; =: 21, 14, 67, 102; Total: 73, 42, 85, 200. Syntax and PB show the number of cases where an annotator respectively preferred/dispreferred the syntax-based system. = gives counts of translations judged to be equal in quality.", "cite_spans": [ { "start": 99, "end": 112, "text": "(Koehn, 2005)", "ref_id": "BIBREF13" }, { "start": 222, "end": 241, "text": "(Koehn et al., 2003", "ref_id": "BIBREF11" }, { "start": 535, "end": 558, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF20" }, { "start": 712, "end": 724, "text": "Koehn (2004)", "ref_id": "BIBREF12" }, { "start": 790, "end": 811, "text": "Collins et al. (2005)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 814, "end": 821, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 2945, "end": 2953, "text": "Figure 7", "ref_id": null }, { "start": 3346, "end": 3436, "text": "Syntax 51 3 7 61 PB 1 25 11 37 = 21 14 67 102 Total 73 42 85 200 Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "In addition, we obtained human evaluations on 200 sentences chosen at random from the test data, using two annotators. For each example, the reference translation was presented to the annotator, followed by translations from the syntax-based and phrase-based systems (in a random order). 
For each example, each annotator could either decide that the two translations were of equal quality, or that one translation was better than the other. Table 2 shows results of this evaluation. Both annotators show a clear preference for the syntax-based system: for annotator 1, 73 translations are judged to be better for the syntax-based system, with 42 translations being worse; for annotator 2, 61 translations are improved with 37 being worse; both annotators' results are statistically significant with p < 0.05 under the sign test. Figure 7 shows some translation examples where the syntax-based system was judged to give an improvement.", "cite_spans": [], "ref_spans": [ { "start": 828, "end": 836, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Syntax PB = Total", "sec_num": null }, { "text": "We have described a translation model that makes use of flexible parsing operations, critical ideas being the definition of s-phrases, \u0393-dependencies, the \u03c0-constituent constraint, and an approximate search algorithm. A key area for future work will be further development of the discriminative dependency model (section 4.1). The model of score_r(\u03b3, f) that we have described in this paper is relatively simple; in general, however, there is the potential for score_r to link target-language dependencies to arbitrary properties of the source-language string f (recall that \u03b3 contains a head and modifier spine in the target language, along with positions in the source-language string to which these spines are aligned). 
For example, we might introduce features that: a) condition dependencies created in the target language on dependency relations between their aligned words in the source language; b) condition target-language dependencies on whether they are aligned to words that are in the same clause or segment in the source language string; or, c) condition the grammatical roles of nouns in the target language on grammatical roles of aligned words in the source language. These features should improve translation quality by giving a tighter link between syntax in the source and target languages, and would be easily incorporated in the approach we have described.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "7" }, { "text": "Note that in the above example each English phrase consists of a completely connected syntactic structure; this is not, however, a required constraint, see section 3.2 for discussion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also make use of the r-adjunction operation defined in(Carreras et al., 2008), which, together with sister-adjunction, allows us to model the full range of structures found in the Penn treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The \"size\" of a phrase entry is defined to be ns + nt where ns is the number of source language words in the phrase, nt is the number of target language words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In principle we allow any treelet to adjoin into any other treelet-for example there are no hard, grammar-based constraints ruling out the combination of certain pairs of nonterminals. 
Note however that in some cases operations will have probability 0 under the syntactic language model introduced later in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In practice, MERT training (Och, 2003) will be used to train relative weights for the different model components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To be precise, there may be multiple (or even zero) s-phrases which include w_m or w_h, and these s-phrases may include conflicting alignment information. Given n_m different alignments seen for w_m, and n_h different alignments seen for w_h, we create n_m \u00d7 n_h training examples, which include all possible combinations of alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note, however, that the derivation step shown in figure 4 will be considered in the search, because if discrimination takes additional modifiers, and thereby forms a \u03c0-constituent that dominates a contiguous sub-string in the German, then the resulting derivation will be valid.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Both Pharaoh and our system have weights trained using MERT (Och, 2003) on sentences of length 30 words or less, to ensure that training and test conditions are matched.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Head automata and bilingual tiling: Translation with minimal representations", "authors": [ { "first": "H", "middle": [], "last": "Alshawi", "suffix": "" } ], "year": 1996, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "167--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Alshawi. 1996. Head automata and bilingual tiling: Translation with minimal representations. 
In Pro- ceedings of ACL, pages 167-176.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "TAG, dynamic programming and the perceptron for efficient, feature-rich parsing", "authors": [ { "first": "X", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "T", "middle": [], "last": "Koo", "suffix": "" } ], "year": 2008, "venue": "Proc. of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Carreras, M. Collins, and T. Koo. 2008. TAG, dy- namic programming and the perceptron for efficient, feature-rich parsing. In Proc. of CoNLL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Syntax-based language models for machine translation", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "K", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2003, "venue": "Proceedings of MT Summit IX", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, K. Knight, and K. Yamada. 2003. Syntax-based language models for machine transla- tion. In Proceedings of MT Summit IX.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Immediate-head parsing for language models", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 2001. Immediate-head parsing for lan- guage models. 
In Proceedings of ACL 2001.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Cohesive phrase-based decoding for statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "72--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Cherry. 2008. Cohesive phrase-based decoding for statistical machine translation. In Proceedings of ACL-08: HLT, pages 72-80, Columbus, Ohio, June. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Clause restructuring for statistical machine translation", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "I", "middle": [], "last": "Kucerova", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins, P. Koehn, and I. Kucerova. 2005. Clause restructuring for statistical machine translation. 
In Proceedings of ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Three generative, lexicalised models for statistical parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins. 1997. Three generative, lexicalised mod- els for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Com- putational Linguistics, pages 16-23, Madrid, Spain, July. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bilexical grammars and their cubictime parsing algorithms", "authors": [ { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2000, "venue": "New Developments in Natural Language Parsing", "volume": "", "issue": "", "pages": "29--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Eisner. 2000. Bilexical grammars and their cubic- time parsing algorithms. In H. C. Bunt and A. Ni- jholt, editors, New Developments in Natural Lan- guage Parsing, pages 29-62. Kluwer Academic Publishers.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning non-isomorphic tree mappings for machine translation", "authors": [ { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Eisner. 2003. Learning non-isomorphic tree map- pings for machine translation. 
In Proceedings of ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Tree-adjoining grammars", "authors": [ { "first": "A", "middle": [ "K" ], "last": "Joshi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1997, "venue": "Handbook of Formal Languages", "volume": "3", "issue": "", "pages": "169--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.K. Joshi and Y. Schabes. 1997. Tree-adjoining grammars. In G. Rozenberg and K. Salomaa, ed- itors, Handbook of Formal Languages, volume 3, pages 169-124. Springer.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Statistical phrase-based translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F.J. Och, and D. Marcu. 2003. Statis- tical phrase-based translation. In Proceedings of HLT/NAACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP 2004", "volume": "", "issue": "", "pages": "388--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2004. Statistical significance tests for ma- chine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388-395, Barcelona, Spain, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Proceedings of MT Summit", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2005. Europarl: A parallel corpus for sta- tistical machine translation. In Proceedings of MT Summit.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Spmt: Statistical machine translation with syntactified target language phrases", "authors": [ { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "A", "middle": [], "last": "Echihabi", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Marcu, W. Wang, A. Echihabi, and K. Knight. 2006. Spmt: Statistical machine translation with syntac- tified target language phrases. In Proceedings of EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Online large-margin training of dependency parsers", "authors": [ { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "K", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005. On- line large-margin training of dependency parsers. 
In Proceedings of ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Statistical machine translation by parsing", "authors": [ { "first": "D", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Melamed. 2004. Statistical machine translation by parsing. In Proceedings of ACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Forest-based translation", "authors": [ { "first": "H", "middle": [], "last": "Mi", "suffix": "" }, { "first": "L", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Mi, L. Huang, and Q. Liu. 2008. Forest-based translation. In Proceedings of ACL-08: HLT, pages 192-199. Association for Computational Linguis- tics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Induction of probabilistic synchronous tree-insertion grammars for machine translation", "authors": [ { "first": "R", "middle": [], "last": "Nesson", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "Shieber", "suffix": "" }, { "first": "A", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 7th AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Nesson, S.M. Shieber, and A. Rush. 2006. In- duction of probabilistic synchronous tree-insertion grammars for machine translation. 
In Proceedings of the 7th AMTA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Minimum error rate training for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.J. Och. 2003. Minimum error rate training for statis- tical machine translation. In Proceedings of ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Dependency tree translation: Syntactically informed phrasal smt", "authors": [ { "first": "C", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "A", "middle": [], "last": "Menezes", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Quirk, A. Menezes, and Colin Cherry. 2005. De- pendency tree translation: Syntactically informed phrasal smt. 
In Proceedings of ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "D-tree grammars", "authors": [ { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "K", "middle": [], "last": "Vijay-Shanker", "suffix": "" }, { "first": "D", "middle": [], "last": "Weir", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "151--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Rambow, K. Vijay-Shanker, and D. Weir. 1995. D-tree grammars. In Proceedings of the 33rd Annual Meeting of the Association for Computa- tional Linguistics, pages 151-158, Cambridge, Mas- sachusetts, USA, June. Association for Computa- tional Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A new string-to-dependency machine translation algorithm with a target dependency language model", "authors": [ { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "J", "middle": [], "last": "Xu", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Shen, J. Xu, and R. Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Pro- ceedings of ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Wu. 1997. Stochastic inversion transduction gram- mars and bilingual parsing of parallel corpora. 
Computational Linguistics, 23(3):377-404.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A syntax-based statistical translation model", "authors": [ { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Yamada and K. Knight. 2001. A syntax-based statistical translation model. In Proceedings of ACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Stochastic lexicalized inversion transduction grammar for alignment", "authors": [ { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "473--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Zhang and D. Gildea. 2005. Stochastic lexicalized inversion transduction grammar for alignment. In Proceedings of ACL, pages 473-482.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Syntax augmented machine translation via chart parsing", "authors": [ { "first": "A", "middle": [], "last": "Zollmann", "suffix": "" }, { "first": "A", "middle": [], "last": "Venugopal", "suffix": "" } ], "year": 2006, "venue": "Proceedings of NAACL 2006 Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Zollmann and A. Venugopal. 2006. Syntax augmented machine translation via chart parsing. 
In Proceedings of the NAACL 2006 Workshop on Statistical Machine Translation.", "links": null } }, "ref_entries": { "FIGREF3": { "num": null, "uris": null, "text": "A training example consisting of an English (target language) tree and a German (source language) sentence.", "type_str": "figure" }, "FIGREF5": { "num": null, "uris": null, "text": "Example syntactic phrase entries. We show German sub-strings above their associated sequence of treelets.4", "type_str": "figure" }, "FIGREF6": { "num": null, "uris": null, "text": "derivation step seriously is adjoined to the right of take, giving the following treelets:", "type_str": "figure" }, "FIGREF7": { "num": null, "uris": null, "text": "A spurious derivation step. The treelets arise from [keine] [hierarchie der] [diskriminierung].", "type_str": "figure" }, "FIGREF9": { "num": null, "uris": null, "text": "An adjunction operation that involves the modifier criticisms and the head take. The phrases involved are underlined; the dotted lines show alignments within s-phrases between English words and positions in the German string. The \u0393-dependency in this case includes the head and modifier words, together with their spines, and their alignments to positions in the German string (kritik and nehmen).", "type_str": "figure" }, "FIGREF10": { "num": null, "uris": null, "text": "A beam search algorithm. A dynamic-programming signature consists of the regular dynamic-programming state for the parsing algorithm, together with the span (bit-string) associated with a constituent. segment the German input into [wir m\u00fcssen auch] [diese kritik] [ernst]", "type_str": "figure" }, "TABREF2": { "html": null, "type_str": "table", "num": null, "text": "Development set results showing the effect of removing ScoreR or the \u03c0-constituent constraint.", "content": "