{ "paper_id": "O09-3001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:11:06.691913Z" }, "title": "Fertility-based Source-Language-biased Inversion Transduction Grammar for Word Alignment", "authors": [ { "first": "Chung-Chi", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "NTHU", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "jason.jschang@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a version of Inversion Transduction Grammar (ITG) model with IBM-style notation of fertility to improve word-alignment performance. In our approach, binary context-free grammar rules of the source language, accompanied by orientation preferences of the target language and fertilities of words, are leveraged to construct a syntax-based statistical translation model. Our model, inherently possessing the characteristics of ITG restrictions and allowing for many consecutive words aligned to one and vice-versa, outperforms the Bracketing Transduction Grammar (BTG) model and GIZA++, a state-of-the-art word aligner, not only in alignment error rate (23% and 14% error reduction) but also in consistent phrase error rate (13% and 9% error reduction). Better performance in these two evaluation metrics suggests that, based on our word alignment result, more accurate phrase pairs may be acquired, leading to better machine translation quality.", "pdf_parse": { "paper_id": "O09-3001", "_pdf_hash": "", "abstract": [ { "text": "We propose a version of Inversion Transduction Grammar (ITG) model with IBM-style notation of fertility to improve word-alignment performance. In our approach, binary context-free grammar rules of the source language, accompanied by orientation preferences of the target language and fertilities of words, are leveraged to construct a syntax-based statistical translation model. Our model, inherently possessing the characteristics of ITG restrictions and allowing for many consecutive words aligned to one and vice-versa, outperforms the Bracketing Transduction Grammar (BTG) model and GIZA++, a state-of-the-art word aligner, not only in alignment error rate (23% and 14% error reduction) but also in consistent phrase error rate (13% and 9% error reduction). Better performance in these two evaluation metrics suggests that, based on our word alignment result, more accurate phrase pairs may be acquired, leading to better machine translation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A statistical translation model is a model which detects word correspondences within sentence pairs, whether relying on lexical information or on syntactic aspects of the involved languages or both. In spite of the fact that methodologies vary, the intention is clear: to obtain better word alignment results so that a better translation model implies better performance in different linguistic applications. 
Among the methodologies are phrase-based (Och & Ney, 2004; Chiang, 2005; Liu et al., 2006) and syntax-based machine translation systems (Galley et al., 2004; Galley et al., 2006).", "cite_spans": [ { "start": 450, "end": 467, "text": "(Och & Ney, 2004;", "ref_id": "BIBREF11" }, { "start": 468, "end": 481, "text": "Chiang, 2005;", "ref_id": "BIBREF4" }, { "start": 482, "end": 499, "text": "Liu et al., 2006)", "ref_id": "BIBREF9" }, { "start": 545, "end": 566, "text": "(Galley et al., 2004;", "ref_id": "BIBREF6" }, { "start": 567, "end": 587, "text": "Galley et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "More recently, Zhang and Gildea (2005) presented a lexicalized BTG model where orientation choices also depend on the head words of the structural constituents. They expected that lexical pairs passed up from the bottom (i.e., leaf nodes) of the bilingual parse tree would make BTG models more knowledgeable in determining straight/inverted word order. Nonetheless, they found that lexical information at the lower levels of trees is more deterministic in word orientations than that at the higher levels.", "cite_spans": [ { "start": 15, "end": 38, "text": "Zhang and Gildea (2005)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To explore the power of ITG a little further (and inspired by Zhang et al. (2006) , who suggest that binarized rules improve both the speed and the accuracy of a syntax-based machine translation system), in this paper we describe a version of the ITG model where the binary grammatical rules (e.g., S→NP VP) of the source language (e.g., English) are used as the skeleton of our synchronous rules. Since the rules are biased toward the syntactic labels of the source language, our model is referred to as the bITG model, short for biased ITG model. In our model, based on word-aligned sentence pairs, binary SL CFG rules are automatically annotated with the target language's word orientations, and the associated orientation probabilities are automatically computed via Maximum Likelihood Estimation (MLE).", "cite_spans": [ { "start": 59, "end": 78, "text": "Zhang et al. (2006)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For example, take the languages of English, Chinese, and Japanese. The higher probability of our binary bITG rule VP→[VP NP], where the square brackets denote the same ordering (straight) of the two right-hand-side constituents in both languages when expanding the left-hand-side symbol, indicates that a similar VO construct exists in English (an SVO language) and Chinese (an SVO language). On the contrary, the different VO construct in English and Japanese (an SOV language) is modeled through the high inverted probability of the binary bITG rule VP→⟨VP NP⟩, where the pointed brackets denote that we expand the left-hand-side symbol into the two right-hand-side symbols in reverse orientation in the two languages. Notice that these two bITG rules originate from the same binary CFG rule (VP→VP NP) of the source language, English, only with different ordering tendencies on the TL (i.e., Chinese or Japanese) end.", "cite_spans": [ { "start": 118, "end": 125, "text": "[VP NP]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
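}, { "text": "To make the orientation annotation concrete, here is a minimal sketch (ours, for illustration only; the rule names come from the example above, but the probability values are hypothetical placeholders, not estimates from any corpus) of a binary bITG rule that carries separate straight and inverted probabilities and reorders its TL constituents accordingly:

# A minimal sketch (illustrative only; probabilities are hypothetical placeholders)
# of a binary bITG rule: one SL CFG rule annotated with straight/inverted
# orientation probabilities on the TL end.
class BITGRule:
    def __init__(self, lhs, rhs, p_straight, p_inverted):
        self.lhs = lhs                # e.g., 'VP'
        self.rhs = rhs                # e.g., ('VP', 'NP')
        self.p_straight = p_straight  # P(lhs -> [rhs]): same TL ordering
        self.p_inverted = p_inverted  # P(lhs -> <rhs>): reversed TL ordering

    def expand(self, tl_left, tl_right):
        # Return the more probable TL-side ordering of the two constituents.
        if self.p_straight >= self.p_inverted:
            return self.p_straight, [tl_left, tl_right]
        return self.p_inverted, [tl_right, tl_left]

# English-Chinese (both SVO): the straight orientation dominates.
vp_zh = BITGRule('VP', ('VP', 'NP'), p_straight=0.9, p_inverted=0.1)
# English-Japanese (SOV): the inverted orientation dominates.
vp_ja = BITGRule('VP', ('VP', 'NP'), p_straight=0.2, p_inverted=0.8)
print(vp_zh.expand('play', 'a positive role'))  # keeps the VO order
print(vp_ja.expand('play', 'a positive role'))  # reverses to the OV order

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."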
}, { "text": "In addition, we leverage IBM-style fertility probabilities of words to accommodate many-to-one or one-to-many word alignment links. In other words, in our model, many contiguous words in the source can be aligned to one word in the target and vice-versa. Originally, Wu's BTG model (1997) only allowed for a maximum of one-to-one word correspondences, which may affect the performance on word alignments and the accuracy of the bilingual parse trees. This one-to-one mapping restriction is especially not suitable for a language pair involving a language without clear word delimiters since the tokenization (or segmentation) of sentences of that language (e.g., Chinese) prior to word alignment is independent of words of another (e.g., English), resulting in tokens being under-or over-segmented for the corresponding words and, subsequently, abundant many-to-one/one-to-many word alignments.", "cite_spans": [ { "start": 267, "end": 288, "text": "Wu's BTG model (1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The paper is organized as follows. Sections 2 and 3 describe our model in detail. Section 4 shows empirical results. Discussions are made before the conclusion in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this section, we begin with an example of how bITG rules and fertilities of words are utilized to assist in word-aligning sentence pairs. Thereafter, a more formal description of our model will be discussed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "Once a sentence pair and the part-of-speech (POS) information of the SL sentence are fed into our model, it synchronously parses the sentence pair using unary lexical translation rules (e.g., JJ\u2192positive/\u7a4d\u6975 where / denotes word correspondence in two languages) and binary SL CFG rules attached with orientation preferences in the target language (e.g., VP\u2192[VP NP]). Also, the leaves of the bilingual parse tree are the word alignment results for this sentence pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example sentence pair and its bilingual parse tree", "sec_num": null }, { "text": "During bilingual parsing, the model assigns probabilities to substring pairs of the bitext after each of them is associated with possible syntactic labels on the source side. For example, take the sentence pair and its parse in Figure 1 , where spaces in the Chinese sentence are used to distinguish the boundaries of segments, \u03b5 stands for NULL, and * denotes the inverted orientation of the node's two children on the target. The substring pair (positive role, \u7a4d\u6975 \u4f5c \u7528) associated with linguistic symbol NP will be assigned a probability. In this particular parse, the probability is the product of probabilities of the straight binary bITG rule, NP\u2192[JJ NN], S English sentence: These factors will continue to play a positive role after its return. and the lexical rules of bITG, JJ\u2192positive/\u7a4d\u6975, and NN\u2192role/\u4f5c\u7528. 
In our model, the higher probability of the rule NP→[JJ NN] than that of the corresponding inverted rule NP→⟨JJ NN⟩ does not merely instruct the model to align the two right-hand-side counterparts (i.e., JJ and NN) of the two languages in a straight fashion more often, but also implies that English and Chinese exhibit similar word-order regularity regarding these syntactic constituents.", "cite_spans": [], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Figure 1. An example sentence pair and its bilingual parse tree", "sec_num": null }, { "text": "On the other hand, in the example sentence pair, the beginning half, \"These factors will continue to play a positive role,\" is translated into the latter part of the Chinese sentence, whereas the ending half, \"after its return,\" is translated into the beginning. Inverted rules (e.g., the S-level rule whose inverted orientation is marked with * in Figure 1) are designed to capture such systematic differences in the languages' grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example sentence pair and its bilingual parse tree", "sec_num": null }, { "text": "What is more, since only monolingual information is exploited to segment Chinese sentences, it is likely that the word alignments will not be constrained to one-to-one, one-to-zero, and zero-to-one mappings. For instance, 香港 is often segmented as one word in Chinese but needs to be aligned to two words (Hong and Kong) in English, a case of two-to-one mapping. Therefore, we incorporate the notion of fertility into our model. As for the example of \"Hong Kong\" aligned to \"香港\", three possible word-aligning scenarios concerning fertility will be considered at runtime parsing: zero fertility of Hong and singular fertilities of Kong and 香港, where Hong is aligned to NULL but Kong is aligned to 香港; zero fertility of Kong and singular fertilities of Hong and 香港, where Kong is aligned to NULL but Hong is aligned to 香港; and singular fertilities of Hong and Kong and dual fertility of 香港, where both Hong and Kong are aligned to 香港.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example sentence pair and its bilingual parse tree", "sec_num": null }, { "text": "Taking into account the probabilities of lexical translations, binary grammatical rewrite rules, and fertilities of words, our model manages to find a better parse tree that applies more appropriate synchronous rules to match the structural divergences and more suitable lexical mapping relations (one-to-one, one-to-two, etc.) in the two languages. Better parses are more likely to yield better word alignment results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example sentence pair and its bilingual parse tree", "sec_num": null }, { "text": "We estimate the probabilities of bITG rules, consisting of unary lexical translation rules and binary SL CFG rules with word orientation on the TL, and those of the fertilities of words from a parallel corpus and an SL CFG. We will discuss the training algorithm in more detail in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example sentence pair and its bilingual parse tree", "sec_num": null
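}, { "text": "To make the fertility scenarios discussed above (for \"Hong Kong\" and 香港) concrete, the following sketch (ours; all probability values are hypothetical placeholders, not estimates from the paper) scores the three competing analyses considered at runtime parsing:

# A sketch (illustrative only; probabilities are hypothetical placeholders) of
# the three fertility scenarios considered for aligning 'Hong Kong' / 香港.
lex = {('Hong', '香港'): 0.20, ('Kong', '香港'): 0.25,
       ('Hong', None): 0.01, ('Kong', None): 0.01,  # alignment to NULL
       (('Hong', 'Kong'), '香港'): 0.30}              # two-to-one mapping
fert_en = {('Hong', 0): 0.05, ('Hong', 1): 0.90,
           ('Kong', 0): 0.05, ('Kong', 1): 0.90}
fert_zh = {('香港', 1): 0.60, ('香港', 2): 0.35}

scenarios = {
    # zero fertility of Hong; singular fertilities of Kong and 香港
    'Hong:NULL, Kong:香港': lex[('Hong', None)] * fert_en[('Hong', 0)]
        * lex[('Kong', '香港')] * fert_en[('Kong', 1)] * fert_zh[('香港', 1)],
    # zero fertility of Kong; singular fertilities of Hong and 香港
    'Hong:香港, Kong:NULL': lex[('Hong', '香港')] * fert_en[('Hong', 1)]
        * lex[('Kong', None)] * fert_en[('Kong', 0)] * fert_zh[('香港', 1)],
    # singular fertilities of Hong and Kong; dual fertility of 香港
    'both:香港': lex[(('Hong', 'Kong'), '香港')]
        * fert_en[('Hong', 1)] * fert_en[('Kong', 1)] * fert_zh[('香港', 2)],
}
print(max(scenarios, key=scenarios.get))  # here the dual-fertility analysis wins

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example sentence pair and its bilingual parse tree", "sec_num": null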
}, { "text": "We now formally describe our statistical translation model. To be comparable to previous work, the English-French notation is used throughout this paper. E and F denote the source and target language, respectively; e_i stands for the i-th word in sentence e in language E, and f_j for the j-th word in sentence f in F. Given (e, f) = ((e_1, ..., e_m), (f_1, ..., f_n)) and the pre-determined POS tag sequence τ = (t_1, ..., t_m) of sentence e, our model aims to find the most probable bilingual parse tree B_t* = argmax_{B_t} Pr(B_t | e, f, τ), with the by-product of word-level correspondences. Intuitively, the probability of a bilingual parse tree B_t provided with e, f, and τ is modeled as the product of probabilities associated with grammatical rewrite rules and lexical information: Pr(B_t | e, f, τ) = Pr(D | e, f, τ) × Pr(A | e, f, τ) (1), where, by inspecting the parse tree B_t, D represents the set of its production rules with syntactic labels on the right-hand side (e.g., NP→JJ NN) and A the set of rules with word alignments on the right (e.g., JJ→positive/積極). Nodes of B_t are indexed so that the two children of an internal node α_k are α_{2k} and α_{2k+1}, while each preterminal node α_k ∈ N has one child β_k, a bilingual word pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Description", "sec_num": "2.2" }, { "text": "In our model, the probability of constructing B_t is the product of the probabilities of two sources: the first estimating the probabilities of the applied binary bITG rules; the second estimating those of the unary lexical translation rules and the fertilities of words in the tree. Assuming the applied rules are independent of one another, we rewrite the grammatical-related term in Equation (1) as Pr(D | e, f, τ) ≅ ∏_{α_k ∈ N²} P_λ1(α_k → ⟨α_{2k} α_{2k+1}⟩) (2), where N² denotes the set of internal (binary-branching) nodes and the orientation of ⟨α_{2k} α_{2k+1}⟩ can be straight [α_{2k} α_{2k+1}] or inverted ⟨α_{2k} α_{2k+1}⟩. On the other hand, the lexical-related term in Equation (1) is decomposed into three factors, as shown in Equation (3): one for the product of probabilities of lexical translation rules given τ, another for the product of fertility probabilities of words in e, and the other for the product of fertility probabilities of words in f: Pr(A | e, f, τ) ≅ ∏_{α_k ∈ N} P_λ2(α_k → β_k | τ) × ∏_{i=1..m} P_λ2(Φ_{e_i} = φ_i) × ∏_{j=1..n} P_λ2(Φ_{f_j} = φ_j) (3). In Equation (3), Φ is the random variable for fertilities of words, and φ_i (resp. φ_j) denotes the fertility of e_i (resp. f_j) in the tree. Combining Equations (2) and (3) yields the full tree probability: Pr(B_t | e, f, τ) ≅ ∏_{α_k ∈ N²} P_λ1(α_k → ⟨α_{2k} α_{2k+1}⟩) × ∏_{α_k ∈ N} P_λ2(α_k → β_k | τ) × ∏_{i=1..m} P_λ2(Φ_{e_i} = φ_i) × ∏_{j=1..n} P_λ2(Φ_{f_j} = φ_j) (4), in which λ1 and λ2 weight the grammatical and lexical components and the sum of the weights λ1 and λ2 is one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Description", "sec_num": "2.2" }, { "text": "In this subsection, we depict a CYK-like parsing algorithm for obtaining the most likely bilingual parse tree given the sentence pair (e, f), the pre-determined POS tag sequence (t_1, ..., t_m) of sentence e, and the grammar G in E (i.e., the SL grammar). Notice that our model is a data-driven one, as is Wu (1997) . In other words, it synchronously parses the sentence pair via bITG rules without a monolingual (SL or TL) parse tree. 
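Before turning to the algorithm, the following sketch (ours; all probability tables are hypothetical placeholders, and reading P_λ(x) as P(x) raised to the weight λ is our interpretation of the notation) shows how Equations (1)-(4) combine grammatical, lexical, and fertility factors into one tree score:

from math import prod  # Python 3.8+

# A sketch (ours; probability tables are hypothetical placeholders) of the tree
# score of Equation (1) as the product of Equations (2) and (3), with the
# weights lam1 and lam2 applied as exponents and summing to one.
def tree_score(binary_rules, lexical_rules, fertilities,
               p_rule, p_lex, p_fert, lam1=0.5, lam2=0.5):
    gram = prod(p_rule[r] ** lam1 for r in binary_rules)  # Equation (2)
    lex = prod(p_lex[r] ** lam2 for r in lexical_rules)   # Equation (3), factor 1
    fert = prod(p_fert[w] ** lam2 for w in fertilities)   # Equation (3), factors 2-3
    return gram * lex * fert

p_rule = {('NP', ('JJ', 'NN'), 'straight'): 0.7}
p_lex = {('JJ', 'positive', '積極'): 0.05, ('NN', 'role', '作用'): 0.04}
p_fert = {('positive', 1): 0.9, ('role', 1): 0.9, ('積極', 1): 0.8, ('作用', 1): 0.8}
score = tree_score([('NP', ('JJ', 'NN'), 'straight')],
                   [('JJ', 'positive', '積極'), ('NN', 'role', '作用')],
                   [('positive', 1), ('role', 1), ('積極', 1), ('作用', 1)],
                   p_rule, p_lex, p_fert)
print(score)
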
Figure 2 shows the run-time parsing algorithm.", "cite_spans": [ { "start": 187, "end": 196, "text": "Wu (1997)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 313, "end": 321, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Runtime Parsing", "sec_num": "2.3" }, { "text": "//Initial Step
For 1 ≤ i ≤ m, 1 ≤ j ≤ n:
(1) δ_{t_i,(i-1,i),(j-1,j)} = P_λ2(t_i → e_i/f_j) × P_λ2(Φ_{e_i} = 1) × P_λ2(Φ_{f_j} = 1)
(2) For every L → t_i in G of E:
(3) δ_{L,(i-1,i),(j-1,j)} = P_λ2(L → e_i/f_j) × P_λ2(Φ_{e_i} = 1) × P_λ2(Φ_{f_j} = 1)
For 1 ≤ i ≤ m, 0 ≤ j ≤ n:
(4) δ_{t_i,(i-1,i),(j,j)} = P_λ2(t_i → e_i/ε) × P_λ2(Φ_{e_i} = 0)
(5) For every L → t_i in G of E:
(6) δ_{L,(i-1,i),(j,j)} = P_λ2(L → e_i/ε) × P_λ2(Φ_{e_i} = 0)
For 0 ≤ i ≤ m, 1 ≤ j ≤ n, L ∈ syntactic labels on the E end:
(7) δ_{L,(i,i),(j-1,j)} = P_λ2(L → ε/f_j) × P_λ2(Φ_{f_j} = 0)
//Recurrent Step
For any possible (s,t,u,v) // 1 ≤ s < t ≤ m, 1 ≤ u < v ≤ n
(8) δ_{p,(s,t),(u,v)} = max over q, r ∈ syntax labels on E, s ≤ s' ≤ t, u ≤ u' ≤ v of { P_λ1(p → [q r]) × δ_{q,(s,s'),(u,u')} × δ_{r,(s',t),(u',v)} , P_λ1(p → ⟨q r⟩) × δ_{q,(s,s'),(u',v)} × δ_{r,(s',t),(u,u')} }, with b_{p,(s,t),(u,v)} recording the corresponding argmax //for backtracking
(9) Backtrack()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Algorithm", "sec_num": null }, { "text": "During a parse of a sentence pair in our model, a table of δ_{p,(s,t),(u,v)}, each entry recording the probability of the best parse of the substring pair (e_{s+1} ... e_t, f_{u+1} ... f_v) attached with a syntactic symbol p on the E side, is constructed. In Step (1) of Figure 2 , we compute the probability of a one-to-one word correspondence between e_i and f_j under e_i's pre-determined POS tag t_i, according to the probability of the unary lexical rule t_i → e_i/f_j and the fertility probabilities of the two words (i.e., a one-to-one mapping). Since the POS tag t_i can be derived from some possible phrasal constituents in G (Step (2)) (e.g., NN can be derived from NP), we also compute their associated probabilities (Step (3)). Similarly, in Steps (4) to (7), we calculate the probabilities of the one-to-zero and zero-to-one word correspondences limited to the scope of the sentence pair.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Parsing Algorithm", "sec_num": null }, { "text": "Afterwards, relying on the work done previously, word correspondences and parsing results of longer substring pairs unveil themselves in a bottom-up manner. In Step (8), with q as a possible grammatical symbol of the first part and r as a possible symbol of the second, and with u' splitting the TL span into f_{u+1} ... f_{u'} and f_{u'+1} ... f_v, both the straight and the inverted orientation of the SL CFG rule p → q r ought to be considered. Note that the computation in Step (8) does not properly deal with the cases of many-to-one or one-to-many word-level alignments. For many-to-one alignments, δ_{p,(s,t),(u-1,u)} needs to be constructed from many-to-one or one-to-one word mapping relations, since the words e_{s+1} ... e_t are all aligned to f_u: δ_{p,(s,t),(u-1,u)} = P_λ2(Φ_{f_u} = t-s) × max over q, r ∈ syntax labels on E, s ≤ s' ≤ t of { P_λ1(p → q r) × δ_{q,(s,s'),(u-1,u)} × δ_{r,(s',t),(u-1,u)} / (P_λ2(Φ_{f_u} = s'-s) × P_λ2(Φ_{f_u} = t-s')) }, in which the fertility factors of f_u already accumulated in the two sub-entries are divided out and replaced by the single factor for fertility t-s (the orientation of p → q r is immaterial here, since both constituents map to the same TL word). A similar treatment applies to the one-to-many case δ_{p,(s-1,s),(u,v)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Algorithm", "sec_num": null }, { "text": "Finally, using the standard CYK backtracking technique, we can find the most probable bilingual parse tree of the sentence pair with word alignment results. The integration of fertilities of words into the model aims to improve the parsing and the word-aligning quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Algorithm", "sec_num": null }, { "text": "Although the complexity of the described algorithm is polynomial (proportional to m³n³), the execution time grows rapidly with the increase in the variety of syntactic labels, from three structural labels (Wu, 1997) to the grammatical categories of the source language's syntax in our model. As a result, pruning techniques are essential to reduce the time spent on parsing.", "cite_spans": [ { "start": 212, "end": 222, "text": "(Wu, 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": "2.4"
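}, { "text": "As a reading aid, the recurrent step (8) can be sketched as follows (ours; a bare-bones loop over label pairs and split points that makes the m³n³-style cost visible, omitting the fertility extension, backtracking, and pruning):

# A bare-bones sketch (ours) of the recurrent step (8): for each labeled cell,
# try every label pair (q, r) and every SL/TL split point in both orientations.
def recur(delta, labels, rules, s, t, u, v):
    # delta maps (p, s, t, u, v) -> best inside probability of the cell;
    # rules maps (p, q, r, orientation) -> oriented binary bITG rule probability.
    for p in labels:
        best = 0.0
        for q in labels:
            for r in labels:
                for s2 in range(s, t + 1):
                    for u2 in range(u, v + 1):
                        straight = (rules.get((p, q, r, '[]'), 0.0)
                                    * delta.get((q, s, s2, u, u2), 0.0)
                                    * delta.get((r, s2, t, u2, v), 0.0))
                        inverted = (rules.get((p, q, r, '<>'), 0.0)
                                    * delta.get((q, s, s2, u2, v), 0.0)
                                    * delta.get((r, s2, t, u, u2), 0.0))
                        best = max(best, straight, inverted)
        delta[(p, s, t, u, v)] = best

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": null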
}, { "text": "We adopt pruning in the following two manners. The first pruning technique keeps, for a given SL substring and a given length of the TL substring, only the best σ × N parse trees, where N is the number of candidate parses for that case and σ is a real number between 0 and 1. In other words, we remove inferior parse trees that are not in the set of the best σ × N ones. Since N varies from case to case (depending on the SL substring and the length of the TL substring), only the more probable trees within the ratio (i.e., σ) of N will remain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": "2.4" }, { "text": "The second pruning technique is related to the ratio of the lengths of the SL and TL substrings: δ_{p,(s,t),(u,v)} will not be calculated if the length ratio of the two substrings exceeds a threshold θ in either direction (i.e., if t-s > θ × (v-u) or v-u > θ × (t-s)), since few words will be aligned to more than θ words in another language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": "2.4" }, { "text": "By applying the aforementioned pruning techniques, the time spent on parsing each sentence pair can be reduced by more than half. Empirically, pruning unlikely parses has little effect on the word alignment quality but reduces computational overhead significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning", "sec_num": "2.4" }, { "text": "In this section, we describe how to estimate the probabilities of our unary bITG rules (e.g., JJ→positive/積極) and binary bITG rules (e.g., VP→[VP NP]), which denote the association of bilingual lexical words and model the structural divergences of the two languages, respectively. Figure 3 shows the probabilistic estimation procedure.", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 288, "text": "Figure 3", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Probability Estimation", "sec_num": "3." }, { "text": "(1) H = WA
(2) For every two sections (r, e_{i1}^{i2}, f_{j1}^{j2}, L, ·, ·) and (r, e_{i1'}^{i2'}, f_{j1'}^{j2'}, L', ·, ·) in H that have not yet been paired up:
(3) If i2 = i1' - 1 (the two SL spans are adjacent):
(4) For every L'' → L L' in G of E:
(5) If j2 + 1 ≤ j1' ≤ j2 + δ + 1:
(6) H = H ∪ {(r, e_{i1}^{i2'}, f_{j1}^{j2'}, L'', L L', Straight)}
(7) If j2' + 1 ≤ j1 ≤ j2' + δ + 1:
(8) H = H ∪ {(r, e_{i1}^{i2'}, f_{j1'}^{j2}, L'', L L', Inverted)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Estimation", "sec_num": "3." }, { "text": "In Step (1) of our training procedure, an existing word-aligning strategy or tool (e.g., GIZA++) is employed to obtain the word alignments (i.e., WA) of a parallel corpus C. WA comprises elements of the form (r, e_{i1}^{i2}, f_{j1}^{j2}, L, rhs, rel), which represents that the substring pair (e_{i1} ... e_{i2}, f_{j1} ... f_{j2}) in sentence pair r has L → rhs as the production rule leading to the bilingual structure and has rel (either straight or inverted) as the cross-language word-order relation of the constituents of rhs; rhs denotes either a sequence of syntactic labels or a terminating bilingual word pair. Following this format, the example parses of (positive, 積極) as JJ and (after its return, 香港 回歸 後) as PP in Figure 1 would be denoted by the 6-tuples (193, e_8^8, f_9^9, JJ, positive/積極, don't_care) and (193, e_10^12, f_1^3, PP, IN NP, Inverted), respectively, where 193 is the record number of this sentence pair.", "cite_spans": [], "ref_spans": [ { "start": 397, "end": 405, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Probability Estimation", "sec_num": "3." }, { "text": "Then, we recursively select two sections of a sentence pair that have not yet been paired up from H (Step (2)); if their SL substrings are adjacent (Step (3)), then, based on the word alignment result (Steps (5) and (7)), a new straight-ordered (Step (6)) or inverted-ordered (Step (8)) section representing these two will be added into H. Specifically, once the SL substrings are related to some possible binary SL CFG rules, the right-hand-side constituents of these rules will be associated with an orientation on the TL end based on word alignment links. Since our model is a synchronous bilingual parsing one, without a monolingual parse tree, it enumerates all possible syntactic symbols to derive L and L' in Step (4). Note that, in Steps (5) and (7), δ, a small positive integer, is utilized to tolerate aligning errors introduced by the automatic word aligner or the explicitness issue 1 during translation from one language to another, when determining the cross-language straight/inverted word-order phenomenon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Estimation", "sec_num": "3." }, { "text": "From Step (10) to Step (12), in which |W| stands for the number of entries in set W and count(p;Q) for the frequency of p in set Q, we estimate the probabilities of bITG rules via Maximum Likelihood Estimation, e.g., P_λ1(p → [q r]) = count(p → q r with Straight; H) / count(p; H). In our model, the probabilities of lexical translation rules (e.g., JJ→positive/積極) and binary bITG rules (e.g., VP→[VP NP]) are estimated from the same source (i.e., H). Alternative probabilistic estimation of these two kinds of rules can be adopted. For example, the probabilities of lexical translation rules can be derived from the pure word alignment set WA, while those of binary bITG rules can be derived from set H without word-level alignment links. 
We employ the former estimation approach and, in experiments, it yields satisfactory results (see Section 4), suggesting that the word-order tendencies of the two languages are properly modeled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From", "sec_num": null }, { "text": "Finally, fertility probabilities related to words in both languages are also calculated (Step (13)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From", "sec_num": null }, { "text": "In experiments, we trained our model on a large English-Chinese parallel corpus. We examined word alignments produced by our bITG model using the evaluation metrics proposed by Och and Ney (2000) . For comparison, we also trained GIZA++, a state-of-the-art word-aligning system, on the same parallel corpus.", "cite_spans": [ { "start": 177, "end": 195, "text": "Och and Ney (2000)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4." }, { "text": "We used the news portion of Hong Kong Parallel Text 2 (HKPT) distributed by the Linguistic Data Consortium as our sentence-aligned corpus C, which consisted of 739,919 English-Chinese sentence pairs. The average sentence length was 24.4 words for English and 21.5 words for Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Proposed Model", "sec_num": "4.1" }, { "text": "In our model, English sentences were considered to be the source while Chinese sentences were the target. SL sentences were POS tagged and TL sentences were segmented prior to word alignment. During training (as described in Section 3), we employed a GIZA++ run with default settings to obtain the word alignment set WA, and our binary SL CFG G was based upon the PTB section 23 production rules distributed by Andrew B. Clegg (http://textmining.cryst.bbk.ac.uk/acl05/).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Proposed Model", "sec_num": "4.1" }, { "text": "To evaluate our statistical translation model, 114 sentence pairs were chosen randomly from the news portion of HKPT as our testing data set. For the sake of execution time, we only selected sentence pairs whose SL and TL lengths did not exceed 15. Sentence pairs satisfying such a length constraint covered approximately 40% of the sentence pairs in the news portion of HKPT and were expected to be better word aligned via GIZA++.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "We examined the word-aligning performance using the metric of alignment error rate (AER) proposed by Och and Ney (2000) , in which the quality of a word alignment result A produced by an automatic system is evaluated by AER(S, P; A) = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|), where S (sure) denotes the set whose alignments are not ambiguous and P (possible) denotes the set consisting of alignments that might or might not exist (S ⊆ P). Thus, human annotations may contain many-to-one, one-to-many, or even many-to-many word alignments. Table 1 shows the experimental results of GIZA++, the BTG model (Wu, 1997) , and our fertility-based SL-biased ITG model. In this table 4 , P, R, and F stand for precision, recall, and F-measure 5 , respectively. The performance of the E-to-F alignments (E stands for English and F for Chinese), the F-to-E alignments, and the refined alignments (proposed by Och and Ney (2000) ) from both E-to-F and F-to-E directions of GIZA++ is shown in the first three rows, along with that of BTG, which was also trained on the word-aligning output of GIZA++. 
The results of our translation model without or with the capability of making many-to-one/one-to-many links are listed in the last two rows.", "cite_spans": [ { "start": 102, "end": 120, "text": "Och and Ney (2000)", "ref_id": "BIBREF10" }, { "start": 544, "end": 554, "text": "(Wu, 1997)", "ref_id": "BIBREF16" }, { "start": 839, "end": 857, "text": "Och and Ney (2000)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 480, "end": 487, "text": "Table 1", "ref_id": "TABREF8" }, { "start": 602, "end": 617, "text": "In this table 4", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "Compared with the BTG model that does not distinguish the constituent categories and makes the orientation choices merely on lexical evidence (without the information of languages' grammars), our model without fertility probability which allows for at most one-to-one alignment, as the BTG model does, achieved 9% reduction in the alignment error rate. This indicates that the binary SL CFG rules encoding with TL ordering preference in our model do capture the linguistic information of the languages such as word-order regularities or grammar and do impose more realistic and accurate reordering constraints on word alignment in the language pairs. Furthermore, in comparison to the refined alignments of both word-aligning directions, our model with the concept of fertility (allowing for many-to-one/one-to-many links), which is quite similar to the refined approach accommodating many-to-many word mappings, increased the recall by 9% while maintaining high precision and achieved 14% alignment error reduction overall (increased F-measure by 5%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "As suggested by Table 1 , it is safe to say that the proposed model yields more accurate bilingual parse trees, thus better word alignment quality, by introducing binary CFG rules of a language (i.e., the source language) and fertility notation of IBM models into ITG model.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 23, "text": "Table 1", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "In this section, we examine how the learnt similarities (straight) and differences (inverted) in word orders of two languages aid the word-aligning process of our model by means of the adjacency feature and cohesion constraint, mentioned in Cherry and Lin (2003) . Subsequently, to evaluate the possibility of better machine translation quality by providing our model's output (i.e., word correspondences), we adopt the recently-proposed metric, consistent phrase error rate (CPER) by Ayan and Dorr (2006) . Table 2 shows the accuracy of adjacent alignments made by our model, and the accuracy achieved by the refined approach is shown for comparison. If compared against the gold standard in the sure set (i.e., S in Section 4), our model with bITG rules relatively increased the accuracy by more than 3%, suggesting the similar (or straight) word orientations of the binary syntactic constituents (e.g., JJ and NN) in the languages are better captured in our model than in GIZA++. Note that alignments must have orders before an adjacency feature exists (see Cherry and Lin (2003) ) in them. Therefore, an ordering, depending on the position of the English word in the sentence, was imposed to examine the feature. 
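For reference, here is a small sketch (ours; a simplified reading of the adjacency feature of Cherry and Lin (2003), not their implementation) of the test applied to each link:

# A small sketch (ours; a simplified reading of the adjacency feature of
# Cherry and Lin (2003)): keep links whose left English neighbor aligns to an
# adjacent Chinese position, in either direction.
def adjacent_links(alignment):
    # alignment: a set of (english_pos, chinese_pos) links
    return sorted((i, j) for (i, j) in alignment
                  if (i - 1, j - 1) in alignment or (i - 1, j + 1) in alignment)

links = {(1, 4), (2, 5), (3, 6), (7, 1)}
print(adjacent_links(links))  # [(2, 5), (3, 6)]
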
Additionally, we examined whether or not the inverted binary bITG rules captured the divergences of the two grammars and helped to make correct crossing (or reversed) alignment links. For that purpose, we first acquired the dependency relations of the source (i.e., English) sentences via a Stanford parser, and computed the percentage of links violating the cohesion constraint (see Cherry and Lin (2003) ). The ratios of crossing dependencies in the mapped Chinese dependency trees 6 are summarized in Table 3 (Refined: .044; bITG w/ fertility: .037). As suggested by Table 3 , our model reduced the links violating the cohesion constraint by sixteen percent (compared to the refined approach).", "cite_spans": [ { "start": 241, "end": 262, "text": "Cherry and Lin (2003)", "ref_id": "BIBREF3" }, { "start": 485, "end": 505, "text": "Ayan and Dorr (2006)", "ref_id": "BIBREF0" }, { "start": 1061, "end": 1082, "text": "Cherry and Lin (2003)", "ref_id": "BIBREF3" }, { "start": 1600, "end": 1621, "text": "Cherry and Lin (2003)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 508, "end": 515, "text": "Table 2", "ref_id": "TABREF9" }, { "start": 1727, "end": 1760, "text": "Table 3. As suggested by Table 3", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "The above statistics indicate that the probabilities related to straight and inverted word orders of bITG rules in our model not only impose a more suitable alignment constraint but also properly model the systematic similarities and differences in the two languages' grammars. 6 Chinese dependency trees are mapped from English dependency trees based on word correspondences. ", "cite_spans": [ { "start": 269, "end": 270, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Refined", "sec_num": null }, { "text": "According to Ayan and Dorr (2006) , the intrinsic evaluation metric of AER (Och and Ney, 2000) examines only the quality of word-level alignments and correlates poorly with the MT-community metric, the BLEU score. As a result, we exploited the consistent phrase error rate (CPER) to evaluate word alignments in the context of machine translation. CPER is reported to better correlate with translation quality (the smaller the CPER, the better the translation quality) in that it evaluates phrase-level alignments and in that phrase-level alignments (bilingual phrase pairs) constitute the key components of an MT system.", "cite_spans": [ { "start": 13, "end": 33, "text": "Ayan and Dorr (2006)", "ref_id": "BIBREF0" }, { "start": 75, "end": 94, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "CPER", "sec_num": "5.2" }, { "text": "In Ayan and Dorr (2006) , precision (P), recall (R), and CPER are computed via P = |P_A ∩ P_G| / |P_A|, R = |P_A ∩ P_G| / |P_G|, and CPER = 1 - (2 × P × R) / (P + R), where P_A and P_G stand for the two sets of phrases generated by an automatic alignment A and a manual alignment G, respectively. In Table 4 , the proposed fertility-based source-language-biased ITG model yielded the lowest CPER. This indicates that MT systems accepting our word alignment output are more likely to lead to better translation performance. 
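For reference, here is a short sketch (ours) of this CPER computation on toy phrase pair sets:

# A short sketch (ours) of the CPER computation above, on toy phrase pair sets.
def cper(phrases_auto, phrases_gold):
    inter = len(phrases_auto & phrases_gold)
    if inter == 0:
        return 1.0
    p = inter / len(phrases_auto)
    r = inter / len(phrases_gold)
    return 1.0 - (2 * p * r) / (p + r)

P_A = {('positive role', '積極 作用'), ('its return', '回歸')}
P_G = {('positive role', '積極 作用'), ('after its return', '回歸 後')}
print(cper(P_A, P_G))  # 0.5: half of the phrase pairs are consistent
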
", "cite_spans": [ { "start": 3, "end": 23, "text": "Ayan and Dorr (2006)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 4", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "CPER", "sec_num": "5.2" }, { "text": "To combine the strengths of the competing models, a thought-provoking fusion of IBM-style fertility with syntax-based ITG model is described. In our model, the orientation probabilities of the binary SL-based ITG rules are automatically estimated based on a word-aligned parallel corpus and are devised to better capture structural divergences of the involved languages. The proposed bITG model with fertility reduces AER by 14% and 23%, and reduces CPER by 9% and 13% compared to GIZA++ and Wu's BTG (1997) , respectively. Lower CPER suggests MT systems chained after our statistical translation model are likely to yield better translation quality. In this paper, the performance of ITG models trained on large-scale bitexts is shown for the first time with quite encouraging results.", "cite_spans": [ { "start": 492, "end": 507, "text": "Wu's BTG (1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "As for future work, we would like to explore methods (e.g. (Brown, 1992) ) for partitioning long sentences into shorter ones so that the time spent on bilingual parsing in our model can be reduced. We also like to see whether word-aligning quality can be further improved if our bITG rules are lexicalized, especially when lexical contents play an important role in determining word orders of the languages.", "cite_spans": [ { "start": 59, "end": 72, "text": "(Brown, 1992)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "Some translations may be omitted for conciseness, or some of the function words in one language may have no counterparts in another. 2 LDC2004T08", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Going beyond AER: an extensive analysis of word alignments and their impact on MT", "authors": [ { "first": "N", "middle": [ "F" ], "last": "Ayan", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL-2006", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ayan, N. F. & Dorr, B. J. (2006). Going beyond AER: an extensive analysis of word alignments and their impact on MT. In Proceedings of ACL-2006, 9-16.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dividing and conquering long sentence in a translation system", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A D" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "J D" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" }, { "first": "S", "middle": [], "last": "Mohanty", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Workshop on Speech and Natural Language", "volume": "", "issue": "", "pages": "267--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F., Pietra, S. A. D., Pietra, V. J. D., Mercer, R. L., & Mohanty, S. (1992). Dividing and conquering long sentence in a translation system. 
In Proceedings of the Workshop on Speech and Natural Language, 267-271.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A D" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "J D" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F., Pietra, S. A. D., Pietra, V. J. D., & Mercer, R. L. (1993). The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2), 263-311.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A probability model to improve word alignment", "authors": [ { "first": "C", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cherry, C. & Lin, D. (2003). A probability model to improve word alignment. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, 88-95.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43 rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "263--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chiang, D. (2005). A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43 rd Annual Meeting of the Association for Computational Linguistics, 263-270.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Evaluating and integrating Treebank parsers on a biomedical corpus", "authors": [ { "first": "A", "middle": [ "B" ], "last": "Clegg", "suffix": "" }, { "first": "A", "middle": [], "last": "Shepherd", "suffix": "" } ], "year": 2005, "venue": "Association for Computational Linguistics Workshop on software", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clegg, A. B. & Shepherd, A. (2005). Evaluating and integrating Treebank parsers on a biomedical corpus. In Association for Computational Linguistics Workshop on software 2005.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "What's in a translation rule", "authors": [ { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "M", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT/NAACL-2004", "volume": "", "issue": "", "pages": "273--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galley, M., Hopkins, M., Knight, K., & Marcu, D. (2004). What's in a translation rule? 
In Proceedings of HLT/NAACL-2004, 273-280.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Scalable inference and training of context-rich syntactic translation models", "authors": [ { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "S", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 44 th Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "961--968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galley, M., Graehl, J., Knight, K., Marcu, D., DeNeefe, S., Wang, W. et al. (2006). Scalable inference and training of context-rich syntactic translation models. In Proceedings of the 44 th Annual Conference of the Association for Computational Linguistics, 961-968.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Dependencies vs. constituents for tree-based alignment", "authors": [ { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "214--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gildea, D. (2004). Dependencies vs. constituents for tree-based alignment. In Proceedings of the EMNLP, 214-221.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tree-to-string alignment template for statistical machine translation", "authors": [ { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 44 th Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "609--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Y., Liu, Q., & Lin, S. (2006). Tree-to-string alignment template for statistical machine translation. In Proceedings of the 44 th Annual Conference of the Association for Computational Linguistics, 609-616.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improved statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38 th Annual Conference of ACL-2000", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J. & Ney, H. (2000). Improved statistical alignment models. In Proceedings of the 38 th Annual Conference of ACL-2000, 440-447.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J. & Ney, H. (2004). The alignment template approach to statistical machine translation. 
Computational Linguistics, 30(4), 417-449.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Extensions to HMM-based statistical word alignment models", "authors": [ { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ilhan", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "87--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toutanova, K., Ilhan, H. T., & Manning, C. D. (2002). Extensions to HMM-based statistical word alignment models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 87-94.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "HMM-based word alignment in statistical translation", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th conference on Computational linguistics", "volume": "", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vogel, S., Ney, H., & Tillmann, C. (1996). HMM-based word alignment in statistical translation. In Proceedings of the 16th conference on Computational linguistics, 836-841.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, D. (1997). Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3), 377-403.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A syntax-based statistical translation model", "authors": [ { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39 th Annual Conference of ACL-2001", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yamada, K. & Knight, K. (2001). A syntax-based statistical translation model. 
In Proceedings of the 39 th Annual Conference of ACL-2001, 523-530.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A comparative study on reordering constraints in statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "144--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zens, R. & Ney, H. (2003). A comparative study on reordering constraints in statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 144-151.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Syntax-based alignment: supervised or unsupervised?", "authors": [ { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20 th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "418--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, H. & Gildea, D. (2004). Syntax-based alignment: supervised or unsupervised? In Proceedings of the 20 th International Conference on Computational Linguistics, 418-424.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Stochastic lexicalized inversion transduction grammar for alignment", "authors": [ { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43 rd Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "475--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, H. & Gildea, D. (2005). Stochastic lexicalized inversion transduction grammar for alignment. In Proceedings of the 43 rd Annual Meeting of the ACL, 475-482.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Synchronous binarization for machine translation", "authors": [ { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "L", "middle": [], "last": "Huang", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the NAACL-HLT", "volume": "", "issue": "", "pages": "256--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, H., Huang, L., Gildea, D., & Knight, K. (2006). Synchronous binarization for machine translation. 
In Proceedings of the NAACL-HLT, 256-263.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "f , the pre-determined POS tag sequence, ( ) 1 , , m t t , of sentence e, and the grammar G in E (i.e., SL grammar).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Run-time parsing.During a parse of a sentence pair in our model, a table of ,", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "e i 's pre-determined POS tag i t , according to the probability of the unary", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "a possible grammatical symbol of the first part and r as a possible symbol of the second, while u'", "num": null, "uris": null, "type_str": "figure" }, "FIGREF6": { "text": "The procedure of probabilistic estimation.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF7": { "text": "for Word Alignment paired up, from H (Step (2)). If the SL substring of the first section (i.e.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF9": { "text": "56% in human-annotated test data.5 Calculated using the formula ( )2 P R P+R \u00d7 \u00d7 .", "num": null, "uris": null, "type_str": "figure" }, "FIGREF10": { "text": ", precision (P), recall (R), and CPER are computed via:", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "html": null, "content": "
", "text": "English POS tags: DT NNS MD VB TO VB DT JJ NN IN PRP$ NN Chinese sentence: \u9999\u6e2f \u56de\u6b78 \u5f8c \u9019\u4e9b \u689d\u4ef6 \u5c07\u6703 \u7e7c\u7e8c \u767c\u63ee \u7a4d\u6975 \u4f5c\u7528", "type_str": "table" }, "TABREF4": { "num": null, "html": null, "content": "
where δ_{p,(s,t),(u-1,u)}
", "text": "needs to be constructed from many-to-one or one-to-one word mapping relations, since the words e_{s+1} ... e_t are all aligned to f_u; a similar principle applies to δ_{p,(s-1,s),(u,v)}.", "type_str": "table" },
P R AER F
E to F .891 .385 .459 .537
F to E .882 .533 .333 .664
Refined .879 .635 .261 .737
BTG .844 .610 .290 .708
bITG w/o fertility .866 .638 .263 .735
bITG w/ fertility .878 .692 .224 .774
", "text": "", "type_str": "table" }, "TABREF9": { "num": null, "html": null, "content": "
Compared to sure links Compared to possible links
Refined .835 .869
bITG w/ fertility .863 .881
", "text": "", "type_str": "table" }, "TABREF11": { "num": null, "html": null, "content": "
P R CPER
E to F .479 .383 .574
F to E .544 .518 .470
Refined .573 .606 .411
BTG .569 .569 .431
bITG w/o fertility .598 .597 .402
bITG w/ fertility .624 .626 .375
", "text": "", "type_str": "table" } } } }