{
"paper_id": "I11-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:32:19.880641Z"
},
"title": "Context-Sensitive Syntactic Source-Reordering by Statistical Transduction",
"authors": [
{
"first": "Maxim",
"middle": [],
"last": "Khalilov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"postBox": "P.O. Box 94242",
"postCode": "1090 GE",
"settlement": "Amsterdam",
"country": "The Netherlands"
}
},
"email": "m.khalilov@uva.nl"
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"postBox": "P.O. Box 94242",
"postCode": "1090 GE",
"settlement": "Amsterdam",
"country": "The Netherlands"
}
},
"email": "k.simaan@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "How well can a phrase translation model perform if we permute the source words to fit target word order as perfectly as word alignment might allow? And how well would it perform if we limit the allowed permutations to ITG-like tree-transduction operations on the source parse tree? First we contribute oracle results showing great potential for performance improvement by source-reordering, ranging from 1.5 to 4 BLEU points depending on language pair. Although less pronounced, the potential of tree-based source-reordering is also significant. Our second contribution is a source reordering model that works with two kinds of tree transductions: the one permutes the order of sibling subtrees under a node, and the other first deletes layers in the parse tree in order to exploit sibling permutation at the remaining levels. The statistical parameters of the model we introduce concern individual tree transductions conditioned on contextual features of the tree resulting from all preceding transductions. Experiments in translating from English to Spanish/Dutch/Chinese show significant improvements of respectively 0.6/1.2/2.0 BLEU points. 1 Motivation Word order differences between languages are a major challenge in Machine Translation (MT).",
"pdf_parse": {
"paper_id": "I11-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "How well can a phrase translation model perform if we permute the source words to fit target word order as perfectly as word alignment might allow? And how well would it perform if we limit the allowed permutations to ITG-like tree-transduction operations on the source parse tree? First we contribute oracle results showing great potential for performance improvement by source-reordering, ranging from 1.5 to 4 BLEU points depending on language pair. Although less pronounced, the potential of tree-based source-reordering is also significant. Our second contribution is a source reordering model that works with two kinds of tree transductions: the one permutes the order of sibling subtrees under a node, and the other first deletes layers in the parse tree in order to exploit sibling permutation at the remaining levels. The statistical parameters of the model we introduce concern individual tree transductions conditioned on contextual features of the tree resulting from all preceding transductions. Experiments in translating from English to Spanish/Dutch/Chinese show significant improvements of respectively 0.6/1.2/2.0 BLEU points. 1 Motivation Word order differences between languages are a major challenge in Machine Translation (MT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "deals with word order differences in two subcomponents of a translation model. Firstly, using the local word reordering implicitly encoded in phrase pairs. Secondly, using an explicit reordering model which may reorder target phrases relative to their source sides, e.g., as a monotone phrase sequence generation process with the possibility of swapping neighboring phrases (Tillman, 2004) .",
"cite_spans": [
{
"start": 374,
"end": 389,
"text": "(Tillman, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Arguably, local phrase reordering models cannot account for long-range reordering phenomena, e.g., (Chiang, 2005; Chiang, 2007) . Hierarchical models of phrase reordering employ synchronous grammars or tree transducers, e.g., (Wu and Wong, 1998; Chiang, 2005) . These models explore a more varied range of reordering phenomena, e.g., defined by at most inverting the order of sibling subtrees under each node in binary source/target trees (akin to ITG (Wu and Wong, 1998) ).",
"cite_spans": [
{
"start": 99,
"end": 113,
"text": "(Chiang, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 114,
"end": 127,
"text": "Chiang, 2007)",
"ref_id": "BIBREF2"
},
{
"start": 226,
"end": 245,
"text": "(Wu and Wong, 1998;",
"ref_id": "BIBREF24"
},
{
"start": 246,
"end": 259,
"text": "Chiang, 2005)",
"ref_id": "BIBREF1"
},
{
"start": 452,
"end": 471,
"text": "(Wu and Wong, 1998)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Undoubtedly, the word order of source and target sentences is intertwined with the lexical choices on both sides. Statistically speaking, however, one may first select a target word order given the source only, and then choose target words given the selected target word order and source words. One application of this idea is known as source reordering (or -permutation), e.g., (Collins et al., 2005; Xia and McCord, 2004; Wang et al., 2007; Li et al., 2007; Khalilov and Sima'an, 2010). Briefly, the words of the source string s are reordered to minimize word order differences with the target string t, leading to the source permuted string s̃. Presumably, a standard PBSMT system trained to translate from s̃ to t should have an easier task than translating directly from s to t. The source reordering part, s to s̃, can be realized in various ways and may manipulate morpho-syntactic parse trees of s, e.g., (Collins et al., 2005; Xia and McCord, 2004; Li et al., 2007).",
"cite_spans": [
{
"start": 378,
"end": 400,
"text": "(Collins et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 401,
"end": 422,
"text": "Xia and McCord, 2004;",
"ref_id": "BIBREF25"
},
{
"start": 423,
"end": 441,
"text": "Wang et al., 2007;",
"ref_id": "BIBREF22"
},
{
"start": 442,
"end": 458,
"text": "Li et al., 2007;",
"ref_id": "BIBREF11"
},
{
"start": 459,
"end": 486,
"text": "Khalilov and Sima'an, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 907,
"end": 929,
"text": "(Collins et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 930,
"end": 951,
"text": "Xia and McCord, 2004;",
"ref_id": "BIBREF25"
},
{
"start": 952,
"end": 968,
"text": "Li et al., 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It may seem that source reordering should provide only limited improvement over the standard PBSMT approach. The literature reports mixed performance improvements for different language pairs, e.g., (Collins et al., 2005; Xia and McCord, 2004; Wang et al., 2007; Li et al., 2007; Khalilov and Sima'an, 2010). But what is the potential improvement of source reordering? We contribute experiments measuring oracle performance improvement for English to Dutch/Spanish/Chinese translations. Besides string-driven oracles, we report results using ITG-like transductions over a single syntactic parse tree of s. Our results confirm that reordering a single syntactic tree could be insufficient (e.g., (Huang et al., 2009)), yet they show substantial potential.",
"cite_spans": [
{
"start": 200,
"end": 222,
"text": "(Collins et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 223,
"end": 244,
"text": "Xia and McCord, 2004;",
"ref_id": "BIBREF25"
},
{
"start": 245,
"end": 263,
"text": "Wang et al., 2007;",
"ref_id": "BIBREF22"
},
{
"start": 264,
"end": 280,
"text": "Li et al., 2007;",
"ref_id": "BIBREF11"
},
{
"start": 281,
"end": 308,
"text": "Khalilov and Sima'an, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 696,
"end": 716,
"text": "(Huang et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our second contribution is a novel source reordering model that manipulates the source parse tree with two kinds of tree transduction operators: the one permutes the order of sibling subtrees under a node, and the other first deletes layers in the parse tree in order to exploit sibling permutation at the remaining levels. The latter is the opposite of parse binarization using Expectation-Maximization (Huang et al., 2009). We use Maximum-Entropy training (Berger et al., 1996) to learn a sequence of tree transductions, each conditioned on contextual features of the tree resulting from the outcome of the preceding transduction. The conditioning on the outcome of preceding transductions is a departure from earlier approaches to learning independent source permutation steps, e.g., (Tromble and Eisner, 2009; Visweswariah et al., 2010).",
"cite_spans": [
{
"start": 406,
"end": 426,
"text": "(Huang et al., 2009)",
"ref_id": "BIBREF6"
},
{
"start": 461,
"end": 482,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 782,
"end": 808,
"text": "(Tromble and Eisner, 2009;",
"ref_id": "BIBREF20"
},
{
"start": 809,
"end": 835,
"text": "Visweswariah et al., 2010)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The aim for the rest of this paper is, firstly, to quantify the potential performance improvement of a standard PBSMT system if preceded by source reordering and, secondly, to show that a statistical Markov approach to tree transduction, where the probability of each transduction step is conditioned on the outcome of preceding steps, can improve the quality of PBSMT output significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We start out from a word-aligned parallel corpus consisting of triples ⟨s, a, t⟩: a source s, a target t and a word alignment a. Source reordering assumes that a permutation of s, called s̃, is first generated with a model P_r(s̃ | s), followed by a phrase translation model P_t(t | s̃). The desired permutation s̃ is one that has minimum word order divergence from t, i.e., one that, when word-aligned again with t, would have the least number of crossing alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-Reordering: Framework",
"sec_num": "2"
},
{
"text": "Practically, the original parallel corpus {⟨s, a, t⟩} is split into two parallel corpora: (1) a source-to-permutation parallel corpus (consisting of ⟨s, a, s̃⟩) and (2) a permutation-to-target parallel corpus (consisting of ⟨gs, ã, t⟩), where gs is the output of a source reordering model (guessing at s̃), and ã results from automatically word aligning ⟨gs, t⟩. The latter parallel corpus is used for training a phrase-based translation system P_t(t | gs), while the former corpus is used for training a source reordering model P_r(s̃ | s). The problem of permuting the source string to unfold the crossing alignments is computationally intractable (see (Tromble and Eisner, 2009)). However, various constraints can be imposed on unfolding the crossing alignments in a. A common approach is to assume a binary parse tree for the source string, and to define the set of eligible permutations by binary ITG transductions. This defines permutations resulting from at most inverting pairs of children under nodes of the source tree. Figure 1 exhibits a long-range reordering of the verb in English-to-Dutch translation: inverting the order of the children of the VP node would unfold the crossing alignment. However, crossing alignments that span non-constituents cannot be resolved. This difficulty can be circumvented by employing multiple alternative parse trees, by applying heuristic transforms (e.g., binarization) to the tree to fit the alignments (Wang et al., June 2010), or by defining new local transductions on top of child permutation (ITG), as we do next.",
"cite_spans": [
{
"start": 643,
"end": 669,
"text": "(Tromble and Eisner, 2009)",
"ref_id": "BIBREF20"
},
{
"start": 1438,
"end": 1462,
"text": "(Wang et al., June 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 1011,
"end": 1017,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Source-Reordering: Framework",
"sec_num": "2"
},
{
"text": "Source reordering has been shown useful for PB-SMT for a wide variety of language pairs with high mutual word order disparity (Collins et al., 2005; Popovic' and Ney, 2006; Zwarts and Dras, 2007; Xia and McCord, 2004) . In Costa-juss\u00e0 and Fonollosa (2006) statistical word classes as well as POS tags are used as patterns for reordering the input sentences and producing a new bilingual pair.",
"cite_spans": [
{
"start": 126,
"end": 148,
"text": "(Collins et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 149,
"end": 172,
"text": "Popovic' and Ney, 2006;",
"ref_id": "BIBREF16"
},
{
"start": 173,
"end": 195,
"text": "Zwarts and Dras, 2007;",
"ref_id": "BIBREF27"
},
{
"start": 196,
"end": 217,
"text": "Xia and McCord, 2004)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing work on source permutation",
"sec_num": "3"
},
{
"text": "A rather popular class of source reordering algorithms involves syntactic information and aims at minimizing the need for reordering during translation by permuting the source sentence (Collins et al., 2005; Wang et al., 2007; Khalilov and Sima'an, 2010; Li et al., 2007) . Some systems perform source permutation using a set of handcrafted rules (Collins et al., 2005; Wang et al., 2007; Ramanathan et al., 2008) , others make use of automatically learned reordering patterns extracted from the plain training data, the corresponding parse or dependency trees and the alignment matrix (Visweswariah et al., 2010) .",
"cite_spans": [
{
"start": 185,
"end": 207,
"text": "(Collins et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 208,
"end": 226,
"text": "Wang et al., 2007;",
"ref_id": "BIBREF22"
},
{
"start": 227,
"end": 254,
"text": "Khalilov and Sima'an, 2010;",
"ref_id": "BIBREF7"
},
{
"start": 255,
"end": 271,
"text": "Li et al., 2007)",
"ref_id": "BIBREF11"
},
{
"start": 347,
"end": 369,
"text": "(Collins et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 370,
"end": 388,
"text": "Wang et al., 2007;",
"ref_id": "BIBREF22"
},
{
"start": 389,
"end": 413,
"text": "Ramanathan et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 586,
"end": 613,
"text": "(Visweswariah et al., 2010)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing work on source permutation",
"sec_num": "3"
},
{
"text": "Inspiring this work, source reordering as a pre-translation step is viewed as a word permutation learning problem in Tromble and Eisner (2009) and Li et al. (2007). The space of permutations is searched efficiently using a binary ITG-like synchronous context-free grammar defined over the parallel data. Similarly, a local ITG-based tree transducer with contextual conditioning is used in Khalilov and Sima'an (2010) and Li et al. (2007), and preliminary experiments on a single language pair show improved performance.",
"cite_spans": [
{
"start": 116,
"end": 141,
"text": "Tromble and Eisner (2009)",
"ref_id": "BIBREF20"
},
{
"start": 146,
"end": 162,
"text": "Li et al. (2007)",
"ref_id": "BIBREF11"
},
{
"start": 385,
"end": 412,
"text": "Khalilov and Sima'an (2010)",
"ref_id": "BIBREF7"
},
{
"start": 417,
"end": 433,
"text": "Li et al. (2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing work on source permutation",
"sec_num": "3"
},
{
"text": "Particularly, the model in (Li et al., 2007) is explicitly aimed at long-distance reorderings (English-Chinese), prunes the alignment matrix gradually to fit the source syntactic parse and employs Maximum-Entropy modeling to choose the optimal local ITG-like permutation step of sister subtrees but interleaves that step with a translation step. The model which we present in Section 2 differs substantially from (Li et al., 2007) and other earlier work because it (1) incorporates other kinds of tree transduction operations than those promoted by ITG, and",
"cite_spans": [
{
"start": 27,
"end": 44,
"text": "(Li et al., 2007)",
"ref_id": "BIBREF11"
},
{
"start": 413,
"end": 430,
"text": "(Li et al., 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing work on source permutation",
"sec_num": "3"
},
{
"text": "(2) works with the unmodified alignment matrix but learns reorderings only from those alignments that are consistent with the tree, thereby avoiding the effects of heuristics for pruning alignments to fit the tree-structure, e.g., (Li et al., 2007) .",
"cite_spans": [
{
"start": 231,
"end": 248,
"text": "(Li et al., 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing work on source permutation",
"sec_num": "3"
},
{
"text": "In this paper we take the idea of learning source permutation one step further along a few dimensions. We show the utility of other kinds of tree transduction operations, besides those promoted by ITG, stress the importance of using a wide range of conditioning context features during learning, and report oracle and test results on three language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Existing work on source permutation",
"sec_num": "3"
},
{
"text": "The majority of existing work reports encouraging performance improvements by source reordering. Next we aim at quantifying the potential improvement by oracle source reordering at the string level, if all permutations were to be allowed, and at the source syntactic tree level, by limiting the permutations with two kinds of local transductions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Existing work on source permutation",
"sec_num": "3"
},
{
"text": "Source reordering for PBSMT assumes that permuting the source words to minimize the order differences with the target sentence could improve translation performance. However, the question \"how much?\" is rarely asked. Here, we attempt to answer this question with a set of oracle systems 1 , in which we perform unfolding operations on the crossing links in alignment a (estimated between corpora s and t) that lead to a more monotone alignment ã (between s̃, which is a permutation of s, and t). We introduce a set of tree-based constraints that control the unfolding of alignment crossings. We measure the impact of (un)folded alignment crossings on the performance of the PBSMT system (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 690,
"end": 697,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Oracle source reordering results",
"sec_num": "4"
},
{
"text": "Oracle String. This method scans the alignment a from left to right and unfolds all the crossing links between bilingual phrases (Oracle string). Figure 2 shows an example of word reordering done at the string level. NULL-aligned words do not move from their positions.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Oracle source reordering results",
"sec_num": "4"
},
{
"text": "Oracle parse tree with permute siblings. The oracle system unfolds an alignment crossing if and only if the source side of the alignment crossing is covered by the same node in the syntactic source tree, and the alignment pair subject to crossing can be unfolded by permuting the order of the sibling nodes. NULL-aligned words do not prevent unfolding crossings because we include them with the adjacent words that are involved in the crossings. We call this configuration Oracle tree. Figure 1 shows an example. According to the Oracle tree constraint, the word \"went\" can be placed at the end of the sentence since this can be achieved by swapping the \"VBP\" and \"PP\" categories. The same happens for the word \"reflect\" swapping with the \"S\" constituent in Figure 1 , but not for the chunks \"the positions\" and \"not properly\": this crossing cannot be resolved under the tree constraint since they are not dominated by sibling syntactic categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 486,
"end": 494,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 762,
"end": 770,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Oracle source reordering results",
"sec_num": "4"
},
{
"text": "Oracle tree with delete descendants, permute siblings. This oracle implements an additional mechanism of tree modification to increase the number of reordering permutations in comparison with the Oracle tree algorithm. Here we allow for an additional tree transduction operation that deletes intervening layers before applying sibling permutation. This is illustrated in Figure 3 . The word \"must\" cannot be moved to the beginning of the sentence in Figure 3a by Oracle tree. Instead, this is done in two steps. Firstly, the VP dominating the words \"must\" and \"apply\" is deleted under the current node S; the transformed tree is shown in Figure 3b . Subsequently, the siblings under S in the resulting tree are permuted: \"must\" is reordered across the whole clause and placed in the first position (see Figure 3c) . We call this system Oracle mod. Figure 4 shows an example in which crossing alignment links cannot be unfolded without deleting 4 intervening layers. Table 1 contrasts the oracle results with the performance shown by standard PBSMT systems. The experimental setup is detailed in Section 6. We consider the following baseline configurations: PBSMT, Moses-based PBSMT with a distance-based reordering model; PBSMT+MSD, Moses-based PBSMT with a distance-based reordering model plus the MSD lexicalized reordering model; and Moses-chart, hierarchical Moses-chart-based PBSMT.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 375,
"text": "Figure 3",
"ref_id": "FIGREF4"
},
{
"start": 447,
"end": 456,
"text": "Figure 3a",
"ref_id": "FIGREF4"
},
{
"start": 635,
"end": 644,
"text": "Figure 3b",
"ref_id": "FIGREF4"
},
{
"start": 800,
"end": 810,
"text": "Figure 3c)",
"ref_id": "FIGREF4"
},
{
"start": 845,
"end": 853,
"text": "Figure 4",
"ref_id": "FIGREF5"
},
{
"start": 963,
"end": 970,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Oracle source reordering results",
"sec_num": "4"
},
{
"text": "Depending on the number of parse tree levels allowed to be deleted, we consider three Oracle mod systems: with two (2lt), three (3lt) and five (5lt) levels of descendants allowed to be deleted for a flatter parse tree structure before sibling permutation. The impact of corpus monotonization on translation system performance is measured using the final point of weight optimization on the development set (Dev BLEU), as well as on the test set (Test BLEU/NIST).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle results",
"sec_num": null
},
{
"text": "The major conclusion that can be drawn from the oracle results is that the source reordering defined in terms of parse tree transduction can potentially lead to increased translation quality (up to 1.2 BLEU points for English-Dutch, 0.5 for English-Spanish and 1.7 for English-Chinese). At the same time, a huge gap between performance shown by Oracle string and tree oracle systems (\u22482.2 BLEU points for English-Dutch, \u22481.3 for English-Spanish and \u22482.5 for English-Chinese) shows that there are many crossing alignments which cannot be unfolded with simple, local transductions over a single source-side syntactic tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle results",
"sec_num": null
},
{
"text": "Our model aims at learning from the source permuted parallel corpus (containing tuples ⟨s, a, s̃⟩) a probabilistic optimization arg max_{π(s)} P_r(π(s) | s, τ_s), where τ_s is the source parse and π(s) is some eligible permutation of s. We view the permutations leading from s to s̃ as a sequence of local tree transductions τ_s̃0 → . . . → τ_s̃n, where s̃0 = s and s̃n = s̃, and each transduction τ_s̃i−1 → τ_s̃i is defined using either of the two kinds of local tree transduction operations used in Section 4, or alternatively NOP (No Operation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "The sequence τ_s̃0 → . . . → τ_s̃n is obtained by taking the next node in a top-down tree traversal, then statistically selecting the most likely of the three transduction operations and applying the selected operation to the current node. If the current tree is τ_s̃i−1, and the current node has address x, is syntactically labeled N_x, and directly dominates α_x (the ordered sequence of node labels under x), we approximate the conditional probability P(τ_s̃i | τ_s̃i−1) with the transduction operation it employs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "• Permute the children of x in τ_s̃i−1 with the probability given in Equation (1), where π(α_x) is a permutation of α_x (the ordered sequence of node labels under x) and C_x is a local tree context of node x in tree τ_s̃i−1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2248 P (\u03c0(\u03b1 x ) | N x \u2192 \u03b1 x , C x )",
"eq_num": "(1)"
}
],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "• Select a child of x to delete, pull its children up directly under x, effectively changing α_x to some α_x^d, and then permute the children of the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "≈ P(α_x ; α_x^d, π(α_x^d) | N_x → α_x, C_x) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "where (α_x ; α_x^d) symbolizes the result of deleting a subtree under a child of x. This operation applies also to subtrees of depth n ∈ {1, 2, 3, 5} under x, i.e., a child is depth 1, a child with its children is depth 2, and so on. Obviously, the number of possible permutations of α_x is factorial in the length of α_x. Fortunately, the source permuted training data exhibits only a fraction of the possible permutations even for longer α_x sequences. Furthermore, by conditioning the probability on the local context C_x, the number of permutations is limited to a handful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "Theoretically, we could define the probability of the sequence of local tree transductions τ_s̃0 → . . . → τ_s̃n as in Equation (3). However, unlike earlier work (e.g., (Tromble and Eisner, 2009)), we cannot afford to do so, because every local transduction conditions on the context C_x of an intermediate tree, which quickly risks becoming intractable (even when we use packed forests). Furthermore, the problem of calculating the most likely permutation under such a model is made difficult by the fact that different transduction sequences may lead to the same permutation, which demands summing over these sequences (another intractable summation). Earlier work has avoided conditioning context, effectively assuming that each intermediate permutation is independent of the preceding ones. Instead, we take a pragmatic approach and greedily select at every intermediate point τ_s̃i−1 → τ_s̃i the single most likely local transduction that can be applied to a node in the current intermediate tree τ_s̃i−1, using an interpolation of the terms in Equations 1 and 2 with probability ratios of a language model over s̃, as follows:",
"cite_spans": [
{
"start": 150,
"end": 175,
"text": "(Tromble and Eisner, 2009",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (\u03c4s 0 \u2192 . . . \u2192 \u03c4s n ) = n i=1 P (\u03c4s i | \u03c4s i\u22121 )",
"eq_num": "(3)"
}
],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "P(TRANS_i | N_x → α_x, C_x) × P_lm(s̃i−1) / P_lm(s̃i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "where TRANS_i is any of the two transduction operations or NOP, and P_lm is a language model trained on the s̃ side of the corpus {⟨s, a, s̃⟩}. The rationale behind this log-linear interpolation is that our source permutation approach aims at finding the optimal permutations of s that can serve as input for a subsequent translation model. Hence, we aim at tree transductions that are syntactically motivated and that also lead to improved string permutation. In this sense, the tree transduction definitions can be seen as an efficient and syntactically informed way to define the space of possible permutations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "Estimates. We estimate the string probabilities P_lm(·) using 3-gram language models trained on the s̃ side of the source permuted parallel corpus {⟨s, a, s̃⟩}. We estimate the conditional probability P_r(TRANS | N_x → α_x, C_x) using a Maximum-Entropy framework, where feature functions are defined to capture the permutation as a class, the node label N_x and its head POS tag, the child sequence α_x together with the corresponding sequence of head POS tags, and other features corresponding to different contextual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "Features in use. We used a set of 15 features to capture reordering permutations from the syntactic and linguistic perspectives. Local tree topology: sub-tree instances that include the parent node and the ordered sequence of child node labels (1). Dependency features: features that determine the POS tag of the head word of the current node (2), together with the sequence of POS tags of the head words of its child nodes (3) and the POS tag of the head word of the parent (4) and grandparent nodes (5). Syntactic features: apart from the whole path from the current node to the tree root node (6), we used three binary features from this class describing: (7) whether the parent node is a child of a node annotated with the same syntactic category, (8) whether the parent node is a descendant of a node annotated with the same syntactic category, and (9) whether the current subtree is embedded in a \"S-SBAR\" sub-tree 2 . POS lexical features: bi- and tri-grams of POS tags of the left- and right-hand-side neighboring words (10-13). Counters: the number of words covered by a given constituent (14) and the number of children of the given node (15).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Tree-Transductions",
"sec_num": "5"
},
{
"text": "Data. In our experiments we used English-Dutch and English-Spanish European Parliament data and an extraction from the English-Chinese Hong Kong Parallel Corpus. All the sets were provided with one reference translation. Basic statistics of the training data can be found in Table 2 ; the development datasets contained 0.5K, 1.9K and 0.5K lines, and the test datasets contained 1K, 1.9K and 0.5K lines, for Dutch, Spanish and Chinese, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Translation and reordering experiments",
"sec_num": "6"
},
{
"text": "Experimental setup. Word alignment was computed with GIZA++ 3 (Och, 2003) , supported by the mkcls 4 (Och, 1999) tool. The PBSMT systems we consider in this study are based on the Moses toolkit . We followed the guidelines provided on the Moses web page 5 . Two phrase reordering methods are widely used in phrase-based systems. The first, a distance-based reordering model, provides the decoder with a cost linear in the distance between the words being reordered; this model is the default in the Moses system. The second, a lexicalized block-oriented data-driven reordering model (Tillman, 2004) , considers three orientations: monotone (M), swap (S), and discontinuous (D), with the reordering probabilities conditioned on the lexical context of each phrase pair.",
"cite_spans": [
{
"start": 60,
"end": 71,
"text": "(Och, 2003)",
"ref_id": "BIBREF14"
},
{
"start": 95,
"end": 106,
"text": "(Och, 1999)",
"ref_id": "BIBREF13"
},
{
"start": 569,
"end": 584,
"text": "(Tillman, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation and reordering experiments",
"sec_num": "6"
},
{
"text": "All language models were trained with the SRILM toolkit (Stolcke, 2002) . Language models for Dutch, Spanish and Chinese use 5-grams, while the idealized English (s) is modeled using 3-grams.",
"cite_spans": [
{
"start": 53,
"end": 68,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation and reordering experiments",
"sec_num": "6"
},
{
"text": "We used the Stanford parser 6 (Klein and Manning, 2003) as the source-side parsing engine. The parser was trained on the WSJ Penn treebank, which provides 14 syntactic categories and 48 POS tags. The evaluation conditions were case-sensitive and included punctuation marks. For Maximum-Entropy modeling we used the maxent toolkit 7 .",
"cite_spans": [
{
"start": 25,
"end": 50,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation and reordering experiments",
"sec_num": "6"
},
{
"text": "Translation scores. Table 3 shows the results of automatic evaluation using BLEU (Papineni et al., 2002) and NIST (Doddington, 2002) metrics.",
"cite_spans": [
{
"start": 81,
"end": 104,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF15"
},
{
"start": 114,
"end": 132,
"text": "(Doddington, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Translation and reordering experiments",
"sec_num": "6"
},
{
"text": "The MERrd configuration corresponds to the PBSMT system with the source side of the parallel corpus reordered using our Maximum-Entropy model, with the transduction operations limited to permutation of the children only. The MERrd+xlt configurations refer to the systems which, besides child permutation, include a deletion operation with the maximum number of tree layers that can be deleted set to x. All reordered systems include an MSD model as a supporting reordering mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation and reordering experiments",
"sec_num": "6"
},
{
"text": "BLEU scores measured on the test data that are statistically significantly different from the best PBSMT results are marked in bold. The statistical significance calculations were done for a 95% confidence interval and 1000 resamples, following the guidelines in Koehn (2004) .",
"cite_spans": [
{
"start": 260,
"end": 272,
"text": "Koehn (2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation and reordering experiments",
"sec_num": "6"
},
{
"text": "Our results show that source-reordering is beneficial for language pairs with high mutual word order disparity. In contrast to the English-Dutch and English-Chinese translation tasks, the statistical significance test reveals that, with the exception of MERrd+5lt, the English-Spanish PBSMT systems with rearranged input do not differ from the translation quality delivered by Moses. This disappointing result for the English-to-Spanish translation task may be explained by the fact that many reordering differences are already resolved by the standard reordering models (distance-based and MSD). Table 3 shows the results of automatic translation quality evaluation. A gap between the maximum reachable performance shown by the tree transduction systems and the translation quality delivered ",
"cite_spans": [],
"ref_spans": [
{
"start": 573,
"end": 580,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis.",
"sec_num": null
},
{
"text": "We present a source reordering system for PBSMT, in which the reordering decisions are conditioned on features from the source parse tree. Our system allows for two operations over the parse tree: permuting the order of sibling nodes and deleting child nodes in order to make the tree flatter and exploit sibling permutations at the remaining layers. Our contributions can be summarized as follows: (1) we report detailed results on the maximum potential performance that can be achieved with source reordering under different constraints, (2) we define a source-reordering process through an efficient sequence of greedy, context-conditioned transduction operations over the source parse trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "The method was tested on three different translation tasks. The results show that our approach is more effective for language pairs with a significant difference in word order. Another important observation is that our model demonstrates translation quality comparable to that delivered by SMT systems based on hierarchical phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "The introduced reordering algorithm and the results obtained present many opportunities for future work. We plan to perform a detailed analysis of the structure of the extracted phrases to find out the particular cases where the improvement comes from. We also plan to explore other possible transduction operations to better cover the reordering space. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "All the source permutation methods presented in this section are based on automatic alignments, which inevitably contain wrong links. In the future we plan to use manual alignments in the computation of oracle permutations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The latter feature is intended to model the divergence in word order in relative clauses between Dutch and English, which is illustrated in Figure 1 . 3 code.google.com/p/giza-pp/ 4 http://www.fjoch.com/mkcls.html 5 http://www.statmt.org/moses/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "6 http://nlp.stanford.edu/software/lex-parser.shtml 7 http://homepages.inf.ed.ac.uk/lzhang10/maxent_toolkit.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "1",
"issue": "22",
"pages": "39--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language pro- cessing. Computational Linguistics, 1(22):39-72.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL'05",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL'05, pages 263-270, Ann Arbor, MI, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "2",
"issue": "33",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 2(33):201-228.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Clause restructuring for statistical machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Ku\u010derov\u00e1",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL'05",
"volume": "",
"issue": "",
"pages": "531--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins, P. Koehn, and I. Ku\u010derov\u00e1. 2005. Clause re- structuring for statistical machine translation. In Pro- ceedings of ACL'05, pages 531-540, Ann Arbor, MI, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical machine reordering",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "J",
"middle": [
"A R"
],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT/EMNLP'06",
"volume": "",
"issue": "",
"pages": "70--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. R. Costa-juss\u00e0 and J. A. R. Fonollosa. 2006. Statistical machine reordering. In Proceedings of HLT/EMNLP'06, pages 70-76, New York, NY, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic evaluation of machine translation quality using n-grams co-occurrence statistics",
"authors": [
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of HLT'02",
"volume": "",
"issue": "",
"pages": "128--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Doddington. 2002. Automatic evaluation of machine translation quality using n-grams co-occurrence statis- tics. In Proceedings of HLT'02, pages 128-132, San Diego, CA, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Binarization of synchronous context-free grammars",
"authors": [
{
"first": "H",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "559--595",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, H. Zhang, D. Gildea, and K. Knight. 2009. Binarization of synchronous context-free grammars. Computational Linguistics, 35(4):559-595.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A discriminative syntactic model for source permutation via tree transduction",
"authors": [
{
"first": "M",
"middle": [],
"last": "Khalilov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of SSST-4 workshop at COL-ING'10",
"volume": "",
"issue": "",
"pages": "92--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Khalilov and K. Sima'an. 2010. A discriminative syntactic model for source permutation via tree trans- duction. In Proceedings of SSST-4 workshop at COL- ING'10, pages 92-100, Beijing, China.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL'03",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL'03, pages 423-430, Sapporo, Japan.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Moses: open-source toolkit for statistical machine translation",
"authors": [
{
"first": "",
"middle": [],
"last": "Ph",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ph",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the HLT-NAACL'03",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ph. Koehn, F. Och, and D. Marcu. 2003. Statistical phrase-based machine translation. In Proceedings of the HLT-NAACL'03, pages 48-54, Edmonton, Canada. Ph. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: open-source toolkit for statistical machine translation. In Proceedings of ACL'07, pages 177-180, Prague, Czech Republic.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "",
"middle": [],
"last": "Ph",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP'04",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ph. Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP'04, pages 388-395, Barcelona, Spain.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A probabilistic approach to syntaxbased reordering for statistical machine translation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Minghui",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL'07",
"volume": "",
"issue": "",
"pages": "720--727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, L. Minghui, D. Zhang, M. Li, M. Zhou, and Y. Guan. 2007. A probabilistic approach to syntax- based reordering for statistical machine translation. In Proceedings of ACL'07, pages 720-727, Prague, Czech Republic.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och and H. Ney. 2004. The alignment template ap- proach to statistical machine translation. Computa- tional Linguistics, 30(4):417-449.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An efficient method for determining bilingual word classes",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of ACL'99",
"volume": "",
"issue": "",
"pages": "71--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och. 1999. An efficient method for determining bilin- gual word classes. In Proceedings of ACL'99, pages 71-76, Maryland, MD, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL'03",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL'03, pages 160-167, Sapporo, Japan.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL'02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL'02, pages 311- 318, Philadelphia, PA, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "POS-based word reorderings for statistical machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC'06",
"volume": "",
"issue": "",
"pages": "1278--1283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Popovi\u0107 and H. Ney. 2006. POS-based word re- orderings for statistical machine translation. In Pro- ceedings of LREC'06, pages 1278-1283, Genoa, Italy, May.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Simple syntactic and morphological processing can help english-hindi statistical machine translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ramanathan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hegde",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Shah",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sasikumar",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of IJCNLP'08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ramanathan, P. Bhattacharyya, J. Hegde, R.M. Shah, and M. Sasikumar. 2008. Simple syntactic and mor- phological processing can help english-hindi statistical machine translation. In In Proceedings of IJCNLP'08, Hyderabad, India.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SRILM: an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of SLP'02",
"volume": "",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke. 2002. SRILM: an extensible language mod- eling toolkit. In Proceedings of SLP'02, pages 901- 904.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A unigram orientation model for statistical machine translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Tillman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL'04",
"volume": "",
"issue": "",
"pages": "101--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Tillman. 2004. A unigram orientation model for sta- tistical machine translation. In Proceedings of HLT- NAACL'04, pages 101-104, Boston, MA, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning linear ordering problems for better translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP'09",
"volume": "",
"issue": "",
"pages": "1007--1016",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Tromble and J. Eisner. 2009. Learning linear order- ing problems for better translation. In Proceedings of EMNLP'09, pages 1007-1016, Singapore.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Syntax based reordering with automatically derived rules for improved statistical machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Visweswariah",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Navratil",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Chenthamarakshan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kambhatla",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceeding of COL-ING'10",
"volume": "",
"issue": "",
"pages": "1119--1127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Visweswariah, J. Navratil, J. Sorensen, V. Chenthama- rakshan, and N. Kambhatla. 2010. Syntax based re- ordering with automatically derived rules for improved statistical machine translation. In Proceeding of COL- ING'10, pages 1119-1127, Beijing, China.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Chinese syntactic reordering for statistical machine translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ph",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL'07",
"volume": "",
"issue": "",
"pages": "737--745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, M. Collins, and Ph. Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proceedings of EMNLP-CoNLL'07, pages 737-745, Prague, Czech Republic.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Re-structuring, re-labeling, and re-aligning for syntaxbased machine translation",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "",
"pages": "247--277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wang, J. May, K. Knight, and D. Marcu. June 2010. Re-structuring, re-labeling, and re-aligning for syntax- based machine translation. Computational Linguis- tics, 36:247-277.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Machine translation wih a stochastic grammatical channel",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ACL-COLING'98",
"volume": "",
"issue": "",
"pages": "1408--1415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu and H. Wong. 1998. Machine translation wih a stochastic grammatical channel. In Proceedings of ACL-COLING'98, pages 1408-1415, Columbus, OH, USA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving a statistical MT system with automatically learned rewrite patterns",
"authors": [
{
"first": "F",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mccord",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING'04",
"volume": "",
"issue": "",
"pages": "508--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Xia and M. McCord. 2004. Improving a statistical MT system with automatically learned rewrite patterns. In Proceedings of COLING'04, pages 508-514, Geneva, Switzerland.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Phrase-based statistical machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of KI: Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "18--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Zens, F. Och, and H. Ney. 2002. Phrase-based sta- tistical machine translation. In Proceedings of KI: Ad- vances in Artificial Intelligence, pages 18-32.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Syntax-based word reordering in phrase-based statistical machine translation: Why does it work?",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zwarts",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dras",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the MT Summit XI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Zwarts and M. Dras. 2007. Syntax-based word re- ordering in phrase-based statistical machine transla- tion: Why does it work? In Proceedings of the MT Summit XI, Copenhagen, Denmark.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Example crossing alignments and long-distance reordering using a source parse tree.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "(a) Original bilingual phrase. (b) Reordered bilingual phrase.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Example of Oracle string unfolding.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "(a) Original parse tree. (b) Parse tree with deleted VP category. (c) Reordered parse tree.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Example of text monotonization with tree transformation.",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "Example of unfoldable alignment crossings.",
"num": null
},
"TABREF1": {
"text": "Summary of oracle results.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF3": {
"text": "Statistics of the training, development and test corpora.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>by our model is 0.05-0.29 BLEU points for English-</td></tr><tr><td>Dutch, 0.01-0.09 for English-Spanish and 0.27-</td></tr><tr><td>0.76 for English-Chinese. These numbers demon-</td></tr><tr><td>strate that there are some potentially usable regu-</td></tr><tr><td>larities not captured by our current conditional tree-</td></tr><tr><td>transduction model.</td></tr></table>"
},
"TABREF4": {
"text": "This work is supported by The Netherlands Organization for Scientific Research (NWO) under VIDI grant (nr. 639.022.604).",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td/><td>EnNl</td><td/><td/><td>EnEs</td><td/><td/><td>EnZh</td><td/></tr><tr><td>System</td><td>Dev</td><td>Test</td><td/><td>Dev</td><td>Test</td><td/><td>Dev</td><td>Test</td><td/></tr><tr><td/><td colspan=\"9\">BLEU BLEU NIST BLEU BLEU NIST BLEU BLEU NIST</td></tr><tr><td/><td/><td/><td/><td>Baselines</td><td/><td/><td/><td/><td/></tr><tr><td>PBSMT</td><td>23.88</td><td>24.04</td><td>6.29</td><td>32.31</td><td>31.70</td><td>7.48</td><td>18.71</td><td>22.21</td><td>5.28</td></tr><tr><td>PBSMT+MSD</td><td>24.07</td><td>24.04</td><td>6.28</td><td>32.45</td><td>31.85</td><td>7.47</td><td>18.99</td><td>21.18</td><td>5.30</td></tr><tr><td>Moses-chart</td><td>23.94</td><td>24.93</td><td>6.39</td><td>30.58</td><td>31.80</td><td>7.41</td><td>19.93</td><td>23.90</td><td>5.41</td></tr><tr><td/><td/><td/><td colspan=\"3\">Reordering systems</td><td/><td/><td/><td/></tr><tr><td>MERrd</td><td>24.64</td><td>24.72</td><td>6.33</td><td>31.97</td><td>32.19</td><td>7.52</td><td>19.82</td><td>23.17</td><td>5.33</td></tr><tr><td>MERrd+2lt</td><td>24.61</td><td>24.99</td><td>6.35</td><td>31.70</td><td>32.11</td><td>7.50</td><td>20.02</td><td>23.01</td><td>5.33</td></tr><tr><td>MERrd+3lt</td><td>24.82</td><td>24.98</td><td>6.34</td><td>31.65</td><td>32.25</td><td>7.52</td><td>20.21</td><td>23.14</td><td>5.34</td></tr><tr><td>MERrd+5lt</td><td>24.78</td><td>25.12</td><td>6.37</td><td>31.99</td><td>32.38</td><td>7.52</td><td>20.29</td><td>23.17</td><td>5.35</td></tr></table>"
},
"TABREF5": {
"text": "Experimental results.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}