{ "paper_id": "D09-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:39:51.552784Z" }, "title": "Effective Use of Linguistic and Contextual Information for Statistical Machine Translation", "authors": [ { "first": "Libin", "middle": [], "last": "Shen", "suffix": "", "affiliation": { "laboratory": "", "institution": "BBN Technologies Cambridge", "location": { "postCode": "02138", "region": "MA", "country": "USA" } }, "email": "lshen@bbn.com" }, { "first": "Jinxi", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "BBN Technologies Cambridge", "location": { "postCode": "02138", "region": "MA", "country": "USA" } }, "email": "" }, { "first": "Bing", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "BBN Technologies Cambridge", "location": { "postCode": "02138", "region": "MA", "country": "USA" } }, "email": "bzhang@bbn.com" }, { "first": "Spyros", "middle": [], "last": "Matsoukas", "suffix": "", "affiliation": { "laboratory": "", "institution": "BBN Technologies Cambridge", "location": { "postCode": "02138", "region": "MA", "country": "USA" } }, "email": "smatsouk@bbn.com" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "", "affiliation": { "laboratory": "", "institution": "BBN Technologies Cambridge", "location": { "postCode": "02138", "region": "MA", "country": "USA" } }, "email": "weisched@bbn.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Current methods of using lexical features in machine translation have difficulty in scaling up to realistic MT tasks due to a prohibitively large number of parameters involved. In this paper, we propose methods of using new linguistic and contextual features that do not suffer from this problem and apply them in a state-ofthe-art hierarchical MT system. The features used in this work are non-terminal labels, non-terminal length distribution, source string context and source dependency LM scores. The effectiveness of our techniques is demonstrated by significant improvements over a strong baseline. On Arabic-to-English translation, improvements in lower-cased BLEU are 2.0 on NIST MT06 and 1.7 on MT08 newswire data on decoding output. On Chinese-to-English translation, the improvements are 1.0 on MT06 and 0.8 on MT08 newswire data.", "pdf_parse": { "paper_id": "D09-1008", "_pdf_hash": "", "abstract": [ { "text": "Current methods of using lexical features in machine translation have difficulty in scaling up to realistic MT tasks due to a prohibitively large number of parameters involved. In this paper, we propose methods of using new linguistic and contextual features that do not suffer from this problem and apply them in a state-ofthe-art hierarchical MT system. The features used in this work are non-terminal labels, non-terminal length distribution, source string context and source dependency LM scores. The effectiveness of our techniques is demonstrated by significant improvements over a strong baseline. On Arabic-to-English translation, improvements in lower-cased BLEU are 2.0 on NIST MT06 and 1.7 on MT08 newswire data on decoding output. On Chinese-to-English translation, the improvements are 1.0 on MT06 and 0.8 on MT08 newswire data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Linguistic and context features, especially sparse lexical features, have been widely used in recent machine translation (MT) research. 
Unfortunately, existing methods of using such features are not ideal for large-scale, practical translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we will propose several probabilistic models to effectively exploit linguistic and contextual information for MT decoding, and these new features do not suffer from the scalability problem. Our new models are tested on NIST MT06 and MT08 data, and they provide significant improvement over a strong baseline system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The ideas of using labels, length preference and source side context in MT decoding were explored previously. Broadly speaking, two approaches were commonly used in existing work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "1.1" }, { "text": "One is to use a stochastic gradient descent (SGD) or Perceptron like online learning algorithm to optimize the weights of these features directly for MT (Shen et al., 2004; Liang et al., 2006; Tillmann and Zhang, 2006) . This method is very attractive, since it opens the door to rich lexical features. However, in order to robustly optimize the feature weights, one has to use a substantially large development set, which results in significantly slower tuning. Alternatively, one needs to carefully select a development set that simulates the test set to reduce the risk of over-fitting, which however is not always realistic for practical use.", "cite_spans": [ { "start": 153, "end": 172, "text": "(Shen et al., 2004;", "ref_id": "BIBREF20" }, { "start": 173, "end": 192, "text": "Liang et al., 2006;", "ref_id": "BIBREF14" }, { "start": 193, "end": 218, "text": "Tillmann and Zhang, 2006)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "1.1" }, { "text": "A remedy is to aggressively limit the feature space, e.g. to syntactic labels or a small fraction of the bi-lingual features available, as in (Chiang et al., 2008; Chiang et al., 2009) , but that reduces the benefit of lexical features. A possible generic solution is to cluster the lexical features in some way. However, how to make it work on such a large space of bi-lingual features is still an open question.", "cite_spans": [ { "start": 142, "end": 163, "text": "(Chiang et al., 2008;", "ref_id": "BIBREF3" }, { "start": 164, "end": 184, "text": "Chiang et al., 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "1.1" }, { "text": "The other approach is to estimate a single score or likelihood of a translation with rich features, for example, with the maximum entropy (Max-Ent) method as in (Carpuat and Wu, 2007; Ittycheriah and Roukos, 2007; He et al., 2008) . This method avoids the over-fitting problem, at the expense of losing the benefit of discriminative training of rich features directly for MT. However, the feature space problem still exists in these published models. He et al. (2008) extended the WSD-like approached proposed in (Carpuat and Wu, 2007) to hierarchical decoders. In (He et al., 2008) , lexical features were limited on each single side due to the feature space problem. In order to further reduce the complexity of MaxEnt training, they \"trained a MaxEnt model for each ambiguous hierarchical LHS\" (left-hand side or source side) of translation rules. 
Different target sides were treated as possible labels. Therefore, the sample sets of each individual MaxEnt model were very small, while the number of features could easily exceed the number of samples. Furthermore, optimizing individual MaxEnt models in this way does not lead to global maximum. In addition, MaxEnt models trained on small sets are unstable.", "cite_spans": [ { "start": 161, "end": 183, "text": "(Carpuat and Wu, 2007;", "ref_id": "BIBREF2" }, { "start": 184, "end": 213, "text": "Ittycheriah and Roukos, 2007;", "ref_id": "BIBREF12" }, { "start": 214, "end": 230, "text": "He et al., 2008)", "ref_id": "BIBREF11" }, { "start": 451, "end": 467, "text": "He et al. (2008)", "ref_id": "BIBREF11" }, { "start": 513, "end": 535, "text": "(Carpuat and Wu, 2007)", "ref_id": "BIBREF2" }, { "start": 565, "end": 582, "text": "(He et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "1.1" }, { "text": "The MaxEnt model in (Ittycheriah and Roukos, 2007) was optimized globally, so that it could better employ the distribution of the training data. However, one has to filter the training data according to the test data to get competitive performance with this model 1 . In addition, the filtering method causes some practical issues. First, such methods are not suitable for real MT tasks, especially for applications with streamed input, since the model has to be retrained with each new input sentence or document and training is slow. Furthermore, the model is ill-posed. The translation of a source sentence depends on other source sentences in the same batch with which the MaxEnt model is trained. If we add one more sentence to the batch, translations of other sentences may become different due to the change of the MaxEnt model.", "cite_spans": [ { "start": 20, "end": 50, "text": "(Ittycheriah and Roukos, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "1.1" }, { "text": "To sum up, the existing models of employing rich bi-lingual lexical information in MT are imperfect. Many of them are not ideal for practical translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "1.1" }, { "text": "As for our approach, we mainly use simple probabilistic models, i.e. Gaussian and n-gram models, which are more robust and suitable for large-scale training of real data, as manifested in state-of-theart systems of speech recognition. The unique contribution of our work is to design effective and efficient statistical models to capture useful linguistic and context information for MT decoding. Feature functions defined in this way are robust and ideal for practical translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "1.2" }, { "text": "In this paper, we will introduce four new linguistic and contextual feature functions. Here, we first provide a high-level description of these features. Details of the features are discussed in Section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "1.2.1" }, { "text": "The first feature is based on non-terminal labels, i.e. POS tags of the head words of target nonterminals in transfer rules. This feature reduces the ambiguity of translation rules. 
The other benefit is that POS tags help to weed out bad target side tree structures, as an enhancement to the target dependency language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "1.2.1" }, { "text": "The second feature is based on the length distribution of non-terminals. In English as well as in other languages, the same deep structure can be represented in different syntactic structures depending on the complexity of its constituents. We model such preferences by associating each nonterminal of a transfer rule with a probability distribution over its length. Similar ideas were explored in (He et al., 2008 ). However their length features only provided insignificant improvement of 0.1 BLEU point. A crucial difference of our approach is how the length preference is modeled. We approximate the length distribution of non-terminals with a smoothed Gaussian, which is more robust and gives rise to much larger improvement consistently.", "cite_spans": [ { "start": 398, "end": 414, "text": "(He et al., 2008", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "1.2.1" }, { "text": "The third feature utilizes source side context information, i.e. the neighboring words of an input span, to influence the selection of the target translation for a span. While the use of context information has been explored in MT, e.g. (Carpuat and Wu, 2007) and (He et al., 2008) , the specific technique we used by means of a context language model is rather different. Our model is trained on the whole training data, and it is not limited by the constraint of MaxEnt training.", "cite_spans": [ { "start": 237, "end": 259, "text": "(Carpuat and Wu, 2007)", "ref_id": "BIBREF2" }, { "start": 264, "end": 281, "text": "(He et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "1.2.1" }, { "text": "The fourth feature exploits structural information on the source side. Specifically, the decoder simultaneously generates both the source and target side dependency trees, and employs two dependency LMs, one for the source and the other for the target, for scoring translation hypotheses. Our intuition is that the likelihood of source structures provides another piece of evidence about the plausibility of a translation hypothesis and as such would help weed out bad ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "1.2.1" }, { "text": "Setup We take BBN's HierDec, a string-to-dependency decoder as described in (Shen et al., 2008) , as our baseline for the following two reasons:", "cite_spans": [ { "start": 76, "end": 95, "text": "(Shen et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline System and Experimental", "sec_num": "1.2.2" }, { "text": "\u2022 It provides a strong baseline, which ensures the validity of the improvement we would obtain. The baseline model used in this paper showed state-of-the-art performance at NIST 2008 MT evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline System and Experimental", "sec_num": "1.2.2" }, { "text": "\u2022 The baseline algorithm can be easily extended to incorporate the features proposed in this paper. 
The use of source dependency structures is a natural extension of the stringto-tree model to a tree-to-tree model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline System and Experimental", "sec_num": "1.2.2" }, { "text": "To ensure the generality of our results, we tested the features on two rather different language pairs, Arabic-to-English and Chinese-to-English, using two metrics, IBM BLEU (Papineni et al., 2001) and TER (Snover et al., 2006) . Our experiments show that each of the first three features: nonterminal labels, length distribution and source side context, improves MT performance. Surprisingly, the source dependency feature does not produce an improvement.", "cite_spans": [ { "start": 174, "end": 197, "text": "(Papineni et al., 2001)", "ref_id": "BIBREF17" }, { "start": 206, "end": 227, "text": "(Snover et al., 2006)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline System and Experimental", "sec_num": "1.2.2" }, { "text": "In the original string-to-dependency model (Shen et al., 2008) , a translation rule is composed of a string of words and non-terminals on the source side and a well-formed dependency structure on the target side. A well-formed dependency structure could be either a single-rooted dependency tree or a set of sibling trees. As in the Hiero system (Chiang, 2007) , there is only one non-terminal X in the string-to-dependency model. Any sub dependency structure can be used to replace a nonterminal in a rule.", "cite_spans": [ { "start": 43, "end": 62, "text": "(Shen et al., 2008)", "ref_id": "BIBREF21" }, { "start": 346, "end": 360, "text": "(Chiang, 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Non-terminal Labels", "sec_num": "2.1" }, { "text": "For example, we have a source sentence in Chinese as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-terminal Labels", "sec_num": "2.1" }, { "text": "The literal translation for individual words is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 jiantao zhuyao baohan liang fangmian", "sec_num": null }, { "text": "\u2022 'review' 'mainly' 'to consist of' 'two' 'part'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 jiantao zhuyao baohan liang fangmian", "sec_num": null }, { "text": "The reference translation is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 jiantao zhuyao baohan liang fangmian", "sec_num": null }, { "text": "\u2022 the review mainly consists of two parts A single source word can be translated into many English words. For example, jiantao can be translated into a review, the review, reviews, the reviews, reviewing, reviewed, etc. Suppose we have source-string-to-target-dependency translation rules as shown in Figure 1 . Since there is no constraint on substitution, any translation for jiantao could replace the X-1 slot.", "cite_spans": [], "ref_spans": [ { "start": 301, "end": 309, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "\u2022 jiantao zhuyao baohan liang fangmian", "sec_num": null }, { "text": "One way to alleviate this problem is to limit the search space by using a label system. We could assign a label to each non-terminal on the target side of the rules. Furthermore, we could assign a label to the whole target dependency structure, as shown in Figure 2 . In decoding, each target dependency sub-structure would be associated with a label. 
Whenever substitution happens, we would check whether the label of the sub-structure and the label of the slot are the same. Substitutions with unmatched labels would be prohibited.", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 265, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "\u2022 jiantao zhuyao baohan liang fangmian", "sec_num": null }, { "text": "In practice, we use a soft constraint by penalizing substitutions with unmatched labels. We introduce a new feature: the number of times substitutions with unmatched labels appear in the derivation of a translation hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 jiantao zhuyao baohan liang fangmian", "sec_num": null }, { "text": "Obviously, to implement this feature we need to associate a label with each non-terminal in the target side of a translation rule. The labels are generated during rule extraction. When we create a rule from a training example, we replace a subtree or dependency structure with a non-terminal and associate it with the POS tag of the head word if the non-terminal corresponds to a single-rooted tree on the target side. Otherwise, it is assigned the generic label X. (In decoding, all substitutions of X are considered unmatched ones and incur a penalty.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 jiantao zhuyao baohan liang fangmian", "sec_num": null }, { "text": "In English, the length of a phrase may determine the syntactic structure of a sentence. For example, possessive relations can be represented either as \"A's B\" or \"B of A\". The former is preferred if A is a short phrase (e.g. \"the boy's mother\") while the latter is preferred if A is a complex structure (e.g. \"the mother of the boy who is sick\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "Our solution is to build a model of length preference for each non-terminal in each translation rule. To address data sparseness, we assume the length distribution of each non-terminal in a transfer rule is a Gaussian, whose mean and variance can be estimated from the training data. In rule extrac- tion, each time a translation rule is generated from a training example, we can record the length of the source span corresponding to a non-terminal. In the end, we have a frequency histogram for each non-terminal in each translation rule. From the histogram, a Gaussian distribution can be easily computed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "In practice, we do not need to collect the frequency histogram. Since all we need to know are the mean and the variance, it is sufficient to collect the sum of the length and the sum of squared length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "Let r be a translation rule that occurs N r times in training. Let x be a specific non-terminal in that rule. Let l(r, x, i) denote the length of the source span corresponding to non-terminal x in the i-th occurrence of rule r in training. 
Then, we can compute the following quantities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m r,x = 1 N r Nr i=1 l(r, x, i) (1) s r,x = 1 N r Nr i=1 l(r, x, i) 2 ,", "eq_num": "(2)" } ], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "which can be subsequently used to estimate the mean \u00b5 r,x and variance \u03c3 2 r,x of x's length distribution in rule r as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u00b5 r,x = m r,x (3) \u03c3 2 r,x = s r,x \u2212 m 2 r,x", "eq_num": "(4)" } ], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "Since many of the translation rules have few occurrences in training, smoothing of the above estimates is necessary. A common smoothing method is based on maximum a posteriori (MAP) estimation as in (Gauvain and Lee, 1994) .", "cite_spans": [ { "start": 199, "end": 222, "text": "(Gauvain and Lee, 1994)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "m r,x = N r N r + \u03c4 m r,x + \u03c4 N r + \u03c4m r,x s r,x = N r N r + \u03c4 s r,x + \u03c4 N r + \u03c4s r,x ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "where\u02c6stands for an MAP distribution and\u02dcrepresents a prior distribution.m r,x ands r,x can be obtained from a prior Gaussian distribution N (\u03bc r,x ,\u03c3 r,x ) via equations 3and 4, and \u03c4 is a weight of smoothing. There are many ways to approximate the prior distribution. For example, we can have one prior for all the non-terminals or one for individual nonterminal type. In practice, we assume\u03bc r,x = \u00b5 r,x , and approximate\u03c3 r,x as (\u03c3 2 r,x + s r,x ) 1 2 . In this way, we do not change the mean, but relax the variance with s r,x . We tried different smoothing methods, but the performance did not change much, therefore we kept this simplest setup. We also tried the Poisson distribution, and the performance is similar to Gaussian distribution, which is about 0.1 point lower in BLEU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "When a rule r is applied during decoding, we compute a penalty for each non-terminal x in r according to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "P (l | r, x) = 1 \u03c3 r,x \u221a 2\u03c0 e \u2212 (l\u2212\u00b5r,x ) 2 2\u03c3 2 r,x ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "where l is length of source span corresponding to x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "Our method to address the problem of length bias in rule selection is very different from the maximum entropy method used in existing studies, e.g. 
(He et al., 2008) .", "cite_spans": [ { "start": 148, "end": 165, "text": "(He et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Length Distribution", "sec_num": "2.2" }, { "text": "In the baseline string-to-dependency system, the probability a translation rule is selected in decoding does not depend on the sentence context. In reality, translation is highly context dependent. To address this defect, we introduce a new feature, called context language model. The motivation of this feature is to exploit surrounding words to influence the selection of the desired transfer rule for a given input span.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "To illustrate the problem, we use the same example mentioned in Section 2.1. Suppose the source span for rule selection is zhuyao baohan, whose literal translation is mainly and to consist of. There are many candidate translations for this phrase, for example, mainly consist of, mainly consists of, mainly including, mainly includes, etc. The surrounding words can help to decide which translation is more appropriate for zhuyao baohan. We compare the following two context-based probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "\u2022 P ( jiantao | mainly consist ) \u2022 P ( jiantao | mainly consists )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "Here, jiantao is the source word preceding the source span zhuyao baohan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "In the training data, jiantao is usually translated into the review, third-person singular, then the probability P ( jiantao | mainly consists ) will be higher than P ( jiantao | mainly consist ), since we have seen more context events like the former in the training data. Now we introduce context LM formally. Let the source words be f 1 f 2 ..f i ..f j ..f n . Suppose source sub-string f i ..f j is translated into e p ..e q . We can define tri-gram probabilities on the left and right sides of the source span:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "\u2022 left : P L (f i\u22121 |e p , e p+1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "\u2022 right :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "P R (f j+1 |e q , e q\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "In our implementation, the left and right context LMs are estimated from the training data as part of the rule extraction procedure. When we exact a rule, we collect two 3-gram events, one for the left side and the other for the right side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "In decoding, whenever a partial hypothesis is generated, we calculate the context LM scores based on the leftmost two words and the rightmost two words of the hypothesis as well as the source context. 
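To make this concrete, the following is a minimal Python sketch of the context LM lookup, under the assumption that the left and right context LMs are represented as simple trigram-to-probability tables with a small floor value for unseen events; the function and table names here are hypothetical, and the actual system uses Witten-Bell smoothed LMs integrated into chart decoding rather than plain dictionaries.

```python
import math

# Hedged sketch of the context LM feature (Section 2.3); the dictionary
# trigram tables and the floor value are illustrative assumptions, not the
# actual Witten-Bell smoothed LMs used in the system.
def context_lm_score(src_words, span, tgt_words, left_lm, right_lm, floor=1e-7):
    i, j = span                                   # 0-based inclusive source span f_i .. f_j
    # Source words immediately before and after the span; fall back to
    # sentence-boundary markers at the edges of the sentence.
    f_prev = src_words[i - 1] if i > 0 else "<s>"
    f_next = src_words[j + 1] if j + 1 < len(src_words) else "</s>"
    # Leftmost two and rightmost two target words: e_p, e_{p+1} and e_q, e_{q-1}.
    e_p = tgt_words[0]
    e_p1 = tgt_words[1] if len(tgt_words) > 1 else "</s>"
    e_q = tgt_words[-1]
    e_q1 = tgt_words[-2] if len(tgt_words) > 1 else "<s>"
    p_left = left_lm.get((f_prev, e_p, e_p1), floor)    # P_L(f_{i-1} | e_p, e_{p+1})
    p_right = right_lm.get((f_next, e_q, e_q1), floor)  # P_R(f_{j+1} | e_q, e_{q-1})
    # Return the product of the two scores in log space, so it can simply be
    # added to the other feature scores in the decoder's scoring function.
    return math.log(p_left) + math.log(p_right)

# Example with the sentence from Section 2.1: span (1, 2) covers "zhuyao baohan",
# so the left context word is "jiantao" and the right context word is "liang".
# The probabilities below are made up for illustration only.
left_lm = {("jiantao", "mainly", "consists"): 0.4}
right_lm = {("liang", "of", "consists"): 0.3}
score = context_lm_score(["jiantao", "zhuyao", "baohan", "liang", "fangmian"],
                         (1, 2), ["mainly", "consists", "of"], left_lm, right_lm)
```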
The product of the left and right context LM scores is used as a new feature in the scoring function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "Please note that our approach is very different from other approaches to context dependent rule selection such as (Ittycheriah and Roukos, 2007) and (He et al., 2008) . Instead of using a large number of fine grained features with weights optimized using the maximum entropy method, we treat context dependency as an ngram LM problem, and it is smoothed with Witten-Bell discounting. The estimation of the context LMs is very efficient and robust.", "cite_spans": [ { "start": 114, "end": 144, "text": "(Ittycheriah and Roukos, 2007)", "ref_id": "BIBREF12" }, { "start": 149, "end": 166, "text": "(He et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "The benefit is two fold. The estimation of the context LMs is very efficient. It adds only one new weight to the scoring function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Language Model", "sec_num": "2.3" }, { "text": "The context LM proposed in the previous section only employs source words immediately before and after the current source span in decoding. To exploit more source context, we use a source side dependency language model as another feature. The motivation is to take advantage of the long distance dependency relations between source words in scoring a translation theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Dependency Language Model", "sec_num": "2.4" }, { "text": "We extended string-to-dependency rules in the baseline system to dependency-to-dependency rules. In each dependency-to-dependency rule, we keep record of the source string as well as the source dependency structure. Figure 3 shows examples of dependency-to-dependency rules.", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 224, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Source Dependency Language Model", "sec_num": "2.4" }, { "text": "We extended the string-to-dependency decoding algorithm in the baseline to accommodate dependency-to-dependency theories. In decoding, we build both the source and the target dependency structures simultaneously in chart parsing over the source string. Thus, we can compute the source dependency LM score in the same way we compute the target side score, using a procedure described in (Shen et al., 2008) .", "cite_spans": [ { "start": 386, "end": 405, "text": "(Shen et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Source Dependency Language Model", "sec_num": "2.4" }, { "text": "We introduce two new features for the source side dependency LM as follows, in a way similar to the target side. Figure 3 : Dependency-to-dependency translation rules dependency tree generated by the decoder. The source dependency tree with the highest score is the one that is most likely to be generated by the dependency model that created the source side of the training data.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 121, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Source Dependency Language Model", "sec_num": "2.4" }, { "text": "Source dependency trees are composed of fragments embedded in the translation rules. 
Therefore, a source dependency LM score can be viewed as a measure whether the translation rules are put together in a way similar to the training data. Therefore, a source dependency LM score serves as a feature to represent structural context information that is capable of modeling longdistance relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Dependency Language Model", "sec_num": "2.4" }, { "text": "However, unlike source context LMs, the structural context information is used only when two partial dependency structures are combined, while source context LMs work as a look-ahead feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Dependency Language Model", "sec_num": "2.4" }, { "text": "We designed our experiments to show the impact of each feature separately as well as their cumulative impact:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "\u2022 BASE: baseline string-to-dependency system All the models were optimized on lower-cased IBM BLEU with Powell's method (Powell, 1964; Brent, 1973) on n-best translations (Ostendorf et al., 1991) , but evaluated on both IBM BLEU and TER. The motivation is to detect if an improvement is artificial, i.e., specific to the tuning metric. For both Arabic-to-English and Chinese-to-English MT, we tuned on NIST MT02-05 and tested on MT06 and MT08 newswire sets.", "cite_spans": [ { "start": 120, "end": 134, "text": "(Powell, 1964;", "ref_id": "BIBREF19" }, { "start": 135, "end": 147, "text": "Brent, 1973)", "ref_id": "BIBREF0" }, { "start": 171, "end": 195, "text": "(Ostendorf et al., 1991)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "The training data are different from what was usd at MT06 or MT08. Our Arabic-to-English data contain 29M Arabic words and 38M English words from 11 corpora: LDC2004T17, LDC2004T18, LDC2005E46, LDC2006E25, LDC2006G05, LDC2005E85, LDC2006E36, LDC2006E82, LDC2006E95, Sakhr-A2E and Sakhr-E2A. The Chinese-to-English data contain 107M Chinese words and 132M English words from eight corpora: LDC2002E18, LDC2005T06, LDC2005T10, LDC2006E26, LDC2006G05, LDC2002L27, LDC2005T34 and LDC2003E07. They are available under the DARPA GALE program. Traditional 3-gram and 5-gram string LMs were trained on the English side of the parallel data plus the English Gigaword corpus V3.0 in a way described in (Bulyko et al., 2007) .", "cite_spans": [ { "start": 692, "end": 713, "text": "(Bulyko et al., 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "The target dependency LMs were trained on the English side of the parallel training data. For that purpose, we parsed the English side of the parallel data. Two separate models were trained: one for Arabic from the Arabic training data and the other for Chinese from the Chinese training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "To compute the source dependency LM for Chinese-to-English MT, we parsed the Chinese side of the Chinese-to-English parallel data. 
Due to the lack of a good Arabic parser compatible with the Sakhr tokenization that we used on the source side, we did not test the source dependency LM for Arabic-to-English MT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "When extracting rules with source dependency structures, we applied the same well-formedness constraint on the source side as we did on the target side, using a procedure described by (Shen et al., 2008) . Some candidate rules were thrown away due to the source side constraint. On the other hand, one string-to-dependency rule may split into several dependency-to-dependency rules due to different source dependency structures. The size of the dependency-to-dependency rule set is slightly smaller than the size of the string-todependency rule set. Tables 1 and 2 show the BLEU and TER percentage scores on MT06 and MT08 for Arabicto-English and Chinese-to-English translation respectively. The context LM feature, the length feature and the syntax label feature all produce a small improvement for most of the conditions. When we combined the three features, we observed significant improvements over the baseline. For Arabic-to-English MT, the LBL+LEN+CLM system improved lower-cased BLEU by 2.0 on MT06 and 1.7 on MT08 on decoding output. For Chinese-to-English MT, the improvements in lower-cased BLEU were 1.0 on MT06 and 0.8 on MT08. After re-scoring, the improvements became smaller, but still noticeable, ranging from 0.7 to 1.4. TER scores were also improved noticeably for all conditions, suggesting there was no metric specific over-tuning.", "cite_spans": [ { "start": 184, "end": 203, "text": "(Shen et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 550, "end": 564, "text": "Tables 1 and 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Surprisingly, source dependency LM did not provide any improvement over the baseline. There are two possible reasons for this. One is that the source and target parse trees were generated by two stand-alone parsers, which may cause incompatible structures on the source and target sides. By applying the well-formed constraints on both sides, a lot of useful transfer rules are discarded. A bi-lingual parser, trained on parallel treebanks recently made available to the NLP community, may overcome this problem. The other is that the search space of dependency-todependency decoding is much larger, since we need to add source dependency information into the chart parsing states. We will explore techniques to address these problems in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Linguistic information has been widely used in SMT. For example, in (Wang et al., 2007) , syntactic structures were employed to reorder the source language as a pre-processing step for phrase-based decoding. 
In (Koehn and Hoang, 2007) , shallow syntactic analysis such as POS tagging and morphological analysis were incorporated in a phrasal decoder.", "cite_spans": [ { "start": 68, "end": 87, "text": "(Wang et al., 2007)", "ref_id": "BIBREF25" }, { "start": 211, "end": 234, "text": "(Koehn and Hoang, 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "In ISI's syntax-based system (Galley et al., 2006 ) and CMU's Hiero extension (Venugopal et al., 2007) , non-terminals in translation rules have labels, which must be respected by substitutions during decoding. In (Post and Gildea, 2008; Shen et al., 2008) , target trees were employed to improve the scoring of translation theories. introduced features defined on constituent labels to improve the Hiero system (Chiang, 2005) . However, due to the limitation of MER training, only part of the feature space could used in the system. This problem was fixed by Table 2 : BLEU and TER percentage scores on MT06 and MT08 Chinese-to-English newswire sets. Chiang et al. (2008) , which used an online learning method (Crammer and Singer, 2003) to handle a large set of features.", "cite_spans": [ { "start": 29, "end": 49, "text": "(Galley et al., 2006", "ref_id": "BIBREF8" }, { "start": 78, "end": 102, "text": "(Venugopal et al., 2007)", "ref_id": "BIBREF24" }, { "start": 214, "end": 237, "text": "(Post and Gildea, 2008;", "ref_id": "BIBREF18" }, { "start": 238, "end": 256, "text": "Shen et al., 2008)", "ref_id": "BIBREF21" }, { "start": 412, "end": 426, "text": "(Chiang, 2005)", "ref_id": "BIBREF5" }, { "start": 652, "end": 672, "text": "Chiang et al. (2008)", "ref_id": "BIBREF3" }, { "start": 712, "end": 738, "text": "(Crammer and Singer, 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 560, "end": 567, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Most SMT systems assume that translation rules can be applied without paying attention to the sentence context. A few studies (Carpuat and Wu, 2007; Ittycheriah and Roukos, 2007; He et al., 2008; Hasan et al., 2008) addressed this defect by selecting the appropriate translation rules for an input span based on its context in the input sentence. The direct translation model in (Ittycheriah and Roukos, 2007) employed syntactic (POS tags) and context information (neighboring words) within a maximum entropy model to predict the correct transfer rules. A similar technique was applied by He et al. (2008) to improve the Hiero system.", "cite_spans": [ { "start": 126, "end": 148, "text": "(Carpuat and Wu, 2007;", "ref_id": "BIBREF2" }, { "start": 149, "end": 178, "text": "Ittycheriah and Roukos, 2007;", "ref_id": "BIBREF12" }, { "start": 179, "end": 195, "text": "He et al., 2008;", "ref_id": "BIBREF11" }, { "start": 196, "end": 215, "text": "Hasan et al., 2008)", "ref_id": "BIBREF10" }, { "start": 379, "end": 409, "text": "(Ittycheriah and Roukos, 2007)", "ref_id": "BIBREF12" }, { "start": 589, "end": 605, "text": "He et al. (2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Our model differs from previous work on the way in which linguistic and contextual information is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "In this paper, we proposed four new linguistic and contextual features for hierarchical decoding. 
The use of non-terminal labels, length distribution and context LM features gave rise to significant improvement on Arabic-to-English and Chineseto-English translation on NIST MT06 and MT08 newswire data over a state-of-the-art string-to-dependency baseline. Unlike previous work, we employed robust probabilistic models to capture useful linguistic and contextual information. Our methods are more suitable for practical translation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "In future, we will continue this work in two directions. We will employ a Gaussian model to unify various linguistic and contextual features. We will also improve the dependency-todependency method with a better bi-lingual parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "According to footnote 2 of(Ittycheriah and Roukos, 2007), test set adaptation by test set sampling of the training corpus showed an advantage of more than 2 BLEU points over a general system trained on all data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by DARPA/IPTO Contract No. HR0011-06-C-0022 under the GALE program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithms for Minimization Without Derivatives", "authors": [ { "first": "R", "middle": [ "P" ], "last": "Brent", "suffix": "" } ], "year": 1973, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. P. Brent. 1973. Algorithms for Minimization With- out Derivatives. Prentice-Hall.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Language model adaptation in machine translation from speech", "authors": [ { "first": "I", "middle": [], "last": "Bulyko", "suffix": "" }, { "first": "S", "middle": [], "last": "Matsoukas", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "L", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 32nd IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Bulyko, S. Matsoukas, R. Schwartz, L. Nguyen, and J. Makhoul. 2007. Language model adaptation in machine translation from speech. In Proceedings of the 32nd IEEE International Conference on Acous- tics, Speech, and Signal Processing (ICASSP).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Context-dependent phrasal translation lexicons for statistical machine translation", "authors": [ { "first": "M", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Machine Translation Summit XI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Carpuat and D. Wu. 2007. Context-dependent phrasal translation lexicons for statistical machine translation. 
In Proceedings of Machine Translation Summit XI.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Online large-margin training of syntactic and structural translation features", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Marton", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang, Y. Marton, and P. Resnik. 2008. On- line large-margin training of syntactic and structural translation features. In Proceedings of the 2008 Conference of Empirical Methods in Natural Lan- guage Processing.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "11,001 new features for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Human Language Technology Conference of the North American Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang, K. Knight, and W. Wang. 2009. 11,001 new features for statistical machine translation. In Proceedings of the 2009 Human Language Technol- ogy Conference of the North American Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43th Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Ultraconservative online algorithms for multiclass problems", "authors": [ { "first": "K", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "951--991", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Crammer and Y. Singer. 2003. Ultraconservative online algorithms for multiclass problems. 
Journal of Machine Learning Research, 3:951-991.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Scalable inference and training of context-rich syntactic models", "authors": [ { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "S", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "I", "middle": [], "last": "Thayer", "suffix": "" } ], "year": 2006, "venue": "COLING-ACL '06: Proceedings of 44th Annual Meeting of the Association for Computational Linguistics and 21st Int. Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Galley, J. Graehl, K. Knight, D. Marcu, S. DeNeefe, W. Wang, and I. Thayer. 2006. Scalable infer- ence and training of context-rich syntactic models. In COLING-ACL '06: Proceedings of 44th Annual Meeting of the Association for Computational Lin- guistics and 21st Int. Conf. on Computational Lin- guistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Maximum a posteriori estimation for multivariate gaussian mixtureobservations of markov chains", "authors": [ { "first": "J.-L", "middle": [], "last": "Gauvain", "suffix": "" }, { "first": "Chin-Hui", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1994, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "2", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-L. Gauvain and Chin-Hui Lee. 1994. Maximum a posteriori estimation for multivariate gaussian mix- tureobservations of markov chains. IEEE Transac- tions on Speech and Audio Processing, 2(2).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Triplet lexicon models for statistical machine translation", "authors": [ { "first": "S", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "J", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "J", "middle": [], "last": "Andr\u00e9s-Ferrer", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Hasan, J. Ganitkevitch, H. Ney, and J. Andr\u00e9s-Ferrer. 2008. Triplet lexicon models for statistical machine translation. In Proceedings of the 2008 Conference of Empirical Methods in Natural Language Process- ing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improving statistical machine translation using lexicalized rule selection", "authors": [ { "first": "Z", "middle": [], "last": "He", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2008, "venue": "Proceedings of COLING '08: The 22nd Int. Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. He, Q. Liu, and S. Lin. 2008. Improving statistical machine translation using lexicalized rule selection. In Proceedings of COLING '08: The 22nd Int. Conf. 
on Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Direct translation model 2", "authors": [ { "first": "A", "middle": [], "last": "Ittycheriah", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Human Language Technology Conference of the North American Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Ittycheriah and S. Roukos. 2007. Direct translation model 2. In Proceedings of the 2007 Human Lan- guage Technology Conference of the North Ameri- can Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Factored translation models", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "H", "middle": [], "last": "Hoang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Conference of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn and H. Hoang. 2007. Factored translation models. In Proceedings of the 2007 Conference of Empirical Methods in Natural Language Process- ing.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An end-to-end discriminative approach to machine translation", "authors": [ { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "A", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "B", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2006, "venue": "COLING-ACL '06: Proceedings of 44th Annual Meeting of the Association for Computational Linguistics and 21st Int. Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liang, A. Bouchard-C\u00f4t\u00e9, D. Klein, and B. Taskar. 2006. An end-to-end discriminative approach to ma- chine translation. In COLING-ACL '06: Proceed- ings of 44th Annual Meeting of the Association for Computational Linguistics and 21st Int. Conf. on Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Soft syntactic constraints for hierarchical phrased-based translation", "authors": [ { "first": "Y", "middle": [], "last": "Marton", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Marton and P. Resnik. 2008. Soft syntactic con- straints for hierarchical phrased-based translation. 
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Integration of diverse recognition methodologies through reevaluation of nbest sentence hypotheses", "authors": [ { "first": "M", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "A", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "S", "middle": [], "last": "Austin", "suffix": "" }, { "first": "O", "middle": [], "last": "Kimball", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Rohlicek", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the DARPA Workshop on Speech and Natural Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Ostendorf, A. Kannan, S. Austin, O. Kimball, R. Schwartz, and J. R. Rohlicek. 1991. Integra- tion of diverse recognition methodologies through reevaluation of nbest sentence hypotheses. In Pro- ceedings of the DARPA Workshop on Speech and Natural Language.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, and T. Ward. 2001. Bleu: a method for automatic evaluation of machine transla- tion. IBM Research Report, RC22176.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Parsers as language models for statistical machine translation", "authors": [ { "first": "M", "middle": [], "last": "Post", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2008, "venue": "The Eighth Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Post and D. Gildea. 2008. Parsers as language models for statistical machine translation. In The Eighth Conference of the Association for Machine Translation in the Americas.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An efficient method for finding the minimum of a function of several variables without calculating derivatives", "authors": [ { "first": "M", "middle": [ "J D" ], "last": "Powell", "suffix": "" } ], "year": 1964, "venue": "The Computer Journal", "volume": "7", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. J. D. Powell. 1964. An efficient method for finding the minimum of a function of several variables with- out calculating derivatives. The Computer Journal, 7(2).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Discriminative reranking for machine translation", "authors": [ { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Human Language Technology Conference of the North American Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Shen, A. Sarkar, and F. J. Och. 2004. Discriminative reranking for machine translation. 
In Proceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A New String-to-Dependency Machine Translation Algorithm with a Target Dependency Language Model", "authors": [ { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "J", "middle": [], "last": "Xu", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Shen, J. Xu, and R. Weischedel. 2008. A New String-to-Dependency Machine Translation Algo- rithm with a Target Dependency Language Model. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "M", "middle": [], "last": "Snover", "suffix": "" }, { "first": "B", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "L", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Ameri- cas.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A discriminative global training algorithm for statistical mt", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "T", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2006, "venue": "COLING-ACL '06: Proceedings of 44th Annual Meeting of the Association for Computational Linguistics and 21st Int. Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Tillmann and T. Zhang. 2006. A discrimina- tive global training algorithm for statistical mt. In COLING-ACL '06: Proceedings of 44th Annual Meeting of the Association for Computational Lin- guistics and 21st Int. Conf. on Computational Lin- guistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An efficient two-pass approach to synchronous-cfg driven statistical mt", "authors": [ { "first": "A", "middle": [], "last": "Venugopal", "suffix": "" }, { "first": "A", "middle": [], "last": "Zollmann", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Venugopal, A. Zollmann, and S. Vogel. 2007. An efficient two-pass approach to synchronous-cfg driven statistical mt. 
In Proceedings of the 2007 Hu- man Language Technology Conference of the North American Chapter of the Association for Computa- tional Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Chinese syntactic reordering for statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Wang", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Conference of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Wang, M. Collins, and P. Koehn. 2007. Chinese syntactic reordering for statistical machine transla- tion. In Proceedings of the 2007 Conference of Em- pirical Methods in Natural Language Processing.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Translation rules with multiple labels", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "Source dependency LM score \u2022 Discount on ill-formed source dependency structuresThe source dependency LM is trained on the source side of the bi-lingual training data with Witten-Bell smoothing. The source dependency LM score represents the likelihood of the source", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "SLM: baseline + source dependency LM \u2022 CLM: baseline + context LM \u2022 LEN: baseline + length distribution \u2022 LBL: baseline + syntactic labels \u2022 LBL+LEN: baseline + syntactic labels + length distribution \u2022 LBL+LEN+CLM: baseline + syntactic labels + length distribution + context LM", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "num": null, "html": null, "type_str": "table", "text": "Decoding (3-gram LM) BASE 37.44 35.62 54.64 56.47 33.05 31.26 56.79 58.69 SLM 37.30 35.48 54.24 55.90 33.03 31.00 56.59 58.46 CLM 37.66 35.81 53.45 55.19 32.97 31.01 55.99 57.77 LEN 38.09 36.26 53.98 55.81 33.23 31.34 56.51 58.41 LBL 38.37 36.53 54.14 55.99 33.25 31.34 56.60 58.49 LBL+LEN 38.36 36.59 53.95 55.60 33.72 31.83 56.79 58.65 LBL+LEN+CLM 38.41 36.57 53.83 55.70 33.83 31.79 56.55 58.51 Rescoring (5-gram LM) BASE 38.91 37.04 53.65 55.45 34.34 32.32 55.60 57.60 SLM 38.27 36.38 53.64 55.29 34.25 32.28 55.35 57.21 CLM 38.79 36.88 53.09 54.80 35.01 32.98 55.39 57.28 LEN 39.22 37.30 53.34 55.06 34.65 32.70 55.61 57.51 LBL 39.11 37.30 53.61 55.29 35.02 33.00 55.39 57.48 LBL+LEN 38.91 37.17 53.56 55.27 35.03 33.08 55.47 57.46 LBL+LEN+CLM 39.58 37.62 53.21 54.94 35.72 33.63 54.88 56.98", "content": "
MT06 | MT08
Model | BLEU | TER | BLEU | TER
(lower-cased scores)
" } } } }