{ "paper_id": "I08-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:42:09.298498Z" }, "title": "Improving Word Alignment by Adjusting Chinese Word Segmentation", "authors": [ { "first": "Ming-Hong", "middle": [], "last": "Bai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": {} }, "email": "mhbai@sinica.edu.tw" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": {} }, "email": "kchen@iis.sinica.edu.tw" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing-Hua University", "location": {} }, "email": "jschang@cs.nthu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most of the current Chinese word alignment tasks often adopt word segmentation systems firstly to identify words. However, word-mismatching problems exist between languages and will degrade the performance of word alignment. In this paper, we propose two unsupervised methods to adjust word segmentation to make the tokens 1-to-1 mapping as many as possible between the corresponding sentences. The first method is learning affix rules from a bilingual terminology bank. The second method is using the concept of impurity measure motivated by the decision tree. Our experiments showed that both of the adjusting methods improve the performance of word alignment significantly.", "pdf_parse": { "paper_id": "I08-1033", "_pdf_hash": "", "abstract": [ { "text": "Most of the current Chinese word alignment tasks often adopt word segmentation systems firstly to identify words. However, word-mismatching problems exist between languages and will degrade the performance of word alignment. In this paper, we propose two unsupervised methods to adjust word segmentation to make the tokens 1-to-1 mapping as many as possible between the corresponding sentences. The first method is learning affix rules from a bilingual terminology bank. The second method is using the concept of impurity measure motivated by the decision tree. Our experiments showed that both of the adjusting methods improve the performance of word alignment significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word alignment is an important preprocessing task for statistical machine translation. There have been many statistical word alignment methods proposed since the IBM models have been introduced. Most existing methods treat word tokens as basic alignment units (Brown et al., 1993; Vogel et al., 1996; Deng and Byrne, 2005) , however, many languages have no explicit word boundary markers, such as Chinese and Japanese. In these languages, word segmentation (Chen and Liu, 1992; Chen and Bai, 1998; Chen and Ma, 2002; Ma and Chen, 2003; Gao et al., 2005) is often carried out firstly to identify words before word alignment (Wu and Xia, 1994) . However, the differences in lexicalization may degrade word alignment performance, for different languages may realize the same concept using different numbers of words (Ma et al., 2007; Wu, 1997) . For instance, Chinese multi-syllabic words composed of more than one meaningful morpheme which may be translated to several English words. 
For example, the Chinese word \u6559\u80b2\u7f72 is composed of two meaning units, \u6559\u80b2 and \u7f72, and is translated to Department of Education in English. The morphemes \u6559\u80b2 and \u7f72 have their own meanings and are translated to Education and Department respectively. The phenomenon of lexicalization mismatch will degrade the performance of word alignment for several reasons. The first reason is that it will reduce the cooccurrence counts of Chinese and English tokens. Consider the previous example.", "cite_spans": [ { "start": 260, "end": 280, "text": "(Brown et al., 1993;", "ref_id": "BIBREF2" }, { "start": 281, "end": 300, "text": "Vogel et al., 1996;", "ref_id": "BIBREF21" }, { "start": 301, "end": 322, "text": "Deng and Byrne, 2005)", "ref_id": "BIBREF8" }, { "start": 457, "end": 477, "text": "(Chen and Liu, 1992;", "ref_id": "BIBREF6" }, { "start": 478, "end": 497, "text": "Chen and Bai, 1998;", "ref_id": "BIBREF4" }, { "start": 498, "end": 516, "text": "Chen and Ma, 2002;", "ref_id": "BIBREF5" }, { "start": 517, "end": 535, "text": "Ma and Chen, 2003;", "ref_id": "BIBREF16" }, { "start": 536, "end": 553, "text": "Gao et al., 2005)", "ref_id": "BIBREF10" }, { "start": 623, "end": 641, "text": "(Wu and Xia, 1994)", "ref_id": "BIBREF22" }, { "start": 813, "end": 830, "text": "(Ma et al., 2007;", "ref_id": "BIBREF17" }, { "start": 831, "end": 840, "text": "Wu, 1997)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since \u6559\u80b2\u7f72 is treated as a single unit, it does not contribute to the occurrence counts of Education/ \u6559\u80b2 and Department/\u7f72 token pairs. Secondly, the rarely occurring compound word may cause the garbage collectors effect (Moore, 2004; Liang et al., 2006) , aligning a rare word in source language to too many words in the target language, due to the frequency imbalance with the corresponding translation words in English (Lee, 2004) . Finally, the IBM models (Moore, 2004) impose the limitation that each word in the target sentence can be generated by at most one word in the source sentence. In this case, a many-to-one alignment, links a phrase in the source sentence to a single token in the target sentence, is not allowed, forcing most links of a phrase in the source sentence to be abolished. As in the previous example, when aligning from English to Chinese, \u6559\u80b2\u7f72 can only be linked to one of the English words, say Education, because of the limitation of the IBM model. However for remedy, many of the current word alignment methods combine the results of both alignment directions, via intersection or grow-diag-final heuristic, to improve the alignment reliability (Koehn et al., 2003; Liang et al., 2006; Ayan et al., 2006; DeNero et al., 2007) . 
However the many-to-one link limitation will undermine the reliability due to the fact that some links are not allowed in one of the directions.", "cite_spans": [ { "start": 219, "end": 232, "text": "(Moore, 2004;", "ref_id": "BIBREF18" }, { "start": 233, "end": 252, "text": "Liang et al., 2006)", "ref_id": "BIBREF15" }, { "start": 420, "end": 431, "text": "(Lee, 2004)", "ref_id": "BIBREF13" }, { "start": 458, "end": 471, "text": "(Moore, 2004)", "ref_id": "BIBREF18" }, { "start": 1174, "end": 1194, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF12" }, { "start": 1195, "end": 1214, "text": "Liang et al., 2006;", "ref_id": "BIBREF15" }, { "start": 1215, "end": 1233, "text": "Ayan et al., 2006;", "ref_id": "BIBREF0" }, { "start": 1234, "end": 1254, "text": "DeNero et al., 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose two novel methods to adjust word segmentation so as to decrease the effect of lexicalization differences to improve word alignment performance. The main idea of our methods is to adjust Chinese word segmentation according to their translation derived from parallel sentences in order to make the tokens compatible to 1-to-1 mapping between the corresponding sentences. The first method is based on learning a set of affix rules from bilingual terminology bank, and adjusting the segmentation according to these affix rules when preprocessing the Chinese part of the parallel corpus. The second method is based on the so-called impurity measure, which was motivated by the decision tree (Duda et al., 2001 ).", "cite_spans": [ { "start": 712, "end": 730, "text": "(Duda et al., 2001", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our methods are motivated by the translationdriven segmentation method proposed by Wu (1997) to segment words in a way to improve word alignment. However, Wu's method needs a translation lexicon to filter out the links which were not in the lexicon and the result was only evaluated on the sentence pairs which were covered by the lexicon.", "cite_spans": [ { "start": 83, "end": 92, "text": "Wu (1997)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "A word packing method has been proposed by Ma et al. (2007) to improve the word alignment task. Before carrying out word alignment, this method packs several consecutive words together when those words believed to correspond to a single word in the other language. Our basic idea is similar to this, but on the contrary, we try to unpack words which are translations of several words in the other language. Since the word packing method treats the packed consecutive words as a single token, as we mentioned in the previous section, it weakens the association strength of translation pairs of their morphemes while applying the IBM word alignment model.", "cite_spans": [ { "start": 43, "end": 59, "text": "Ma et al. (2007)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "A lot of morphological analysis methods have been proposed to improve the performance of word alignment for inflectional language (Lee et al., 2003; Lee, 2004; Goldwater, 2005) . They proposed to split a word into a morpheme sequence of the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme). 
Their experiments showed that morphological analysis can improve the quality of machine translation by reducing data sparseness and by making the tokens in two languages correspond more 1-to-1. However, these segmentation methods were developed from the monolingual perspective.", "cite_spans": [ { "start": 130, "end": 148, "text": "(Lee et al., 2003;", "ref_id": "BIBREF14" }, { "start": 149, "end": 159, "text": "Lee, 2004;", "ref_id": "BIBREF13" }, { "start": 160, "end": 176, "text": "Goldwater, 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "The goal of word segmentation adjustment is to adjust the segmentation of Chinese words such that we have as many 1-to-1 links to the English words as possible. In this task, we will face the problem of finding the proper morpheme boundaries for Chinese words. The challenge is that almost all characters of Chinese are morphemes and therefore almost every character boundary in a word could be the boundary of a morpheme, there is no simple rules to find the suitable boundaries of morphemes. Furthermore, not all meaningful morphemes need to be segmented to meet the requirement of 1-to-1 mapping. For example, washing machine/\u6d17\u8863\u6a5f can be segmented into \u6d17\u8863 and \u6a5f corresponding to washing and machine while heater/\u6696\u6c23\u6a5f does not need, it depends on their translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adjusting Word Segmentation", "sec_num": "3" }, { "text": "In this paper, we have proposed two different methods to solve this problem: 1. learning affix rules from terminology bank to segment morphemes and 2. using impurity measure to finding the morpheme boundaries. The detail of these methods will be described in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adjusting Word Segmentation", "sec_num": "3" }, { "text": "The main idea of this method is to segment a Chinese word according to some properly designed conditional dependent affix rules. As shown in Figure 1 , each rule is composed of three conditional constraints, a) affix condition, b) English word condition and c) exception condition. In the affix condition, we place a underscore on the left of a morpheme, such as _\u6a5f, to denote a suffix and on the right, such as \u526f_, to denote a prefix.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 149, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Affix Rule Method", "sec_num": "4" }, { "text": "The affix rules are applied to each word by checking the following three conditions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affix Rule Method", "sec_num": "4" }, { "text": "1. The target word has the affix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affix Rule Method", "sec_num": "4" }, { "text": "2. The English word which is the target of translation exists in the parallel sentence. 3. The target word does not contain the morphemes in the exception list (The morpheme in the exception list shows an alternative segmentation.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affix Rule Method", "sec_num": "4" }, { "text": "If the target word satisfies all of the above conditions of any rule, then the morpheme should be separated from the word. The remaining problem will be how to derive the set of affix rules. 
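As a concrete illustration, the three checks above can be sketched as a small Python function. This is a minimal sketch under our own naming assumptions: a rule is represented here as an (affix, English word, exception list) triple, and the function and variable names are hypothetical, not part of the described system.

    def apply_affix_rule(word, eng_tokens, rule):
        # rule = (affix, eng_word, exceptions); "_X" marks a suffix, "X_" a prefix
        affix, eng_word, exceptions = rule
        is_suffix = affix.startswith("_")
        morpheme = affix.strip("_")
        # condition 1: the target word carries the affix (and is longer than it)
        has_affix = len(word) > len(morpheme) and (
            word.endswith(morpheme) if is_suffix else word.startswith(morpheme))
        # condition 2: the English translation word occurs in the parallel sentence
        # condition 3: the word contains no morpheme listed as an exception
        if has_affix and eng_word in eng_tokens and not any(e in word for e in exceptions):
            return [word[:-len(morpheme)], morpheme] if is_suffix \
                   else [morpheme, word[len(morpheme):]]
        return [word]  # leave the segmentation unchanged

For instance, a hypothetical rule such as (_署, department, []) would split 署 off 教育署 only when department actually occurs in the English side of the sentence pair.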
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affix Rule Method", "sec_num": "4" }, { "text": "We use an unsupervised method to extract affix rules from a Chinese-English terminology bank 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "The bilingual terminology bank a total of 1,046,058 English terms with Chinese translations in 63 categories. Among them, 60% or 629,352 terms are compounds. We take the advantage of the terminology bank, that all terminologies are 1-to-1 well translated, to find the best morpheme segmentation from ambiguous segmentations of a Chinese word according to its English counterpart. Then we extracted affix rules from the word-to-morpheme alignment results of terms and translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "4.1" }, { "text": "The training phase of word-to-morpheme alignment is based loosely on word-to-word alignment of the IBM model 1. Instead of using Chinese words, we considered all the possible morphemes. For example, consider the task of aligning Department of Education and \u6559\u80b2\u7f72 as shown as Figure 2 . We use the EM algorithm to train the translation probabilities of wordmorpheme pairs based on IBM model 1. Figure 2 . Example of word-to-morpheme alignment.", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 281, "text": "Figure 2", "ref_id": null }, { "start": 391, "end": 399, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Word-to-Morpheme Alignment", "sec_num": "4.2" }, { "text": "In the aligning phase, the original IBM model 1 does not work properly as we expected. Because the English words prefer to link to single character and it results that some correct Chinese translations will not be linked. The reason is that the probability of a morpheme, say p(\u6559\u80b2|education), is always less than its substring, p(\u6559|education), since whatever \u6559\u80b2 occurs \u6559 and \u80b2 always occur but not vice versa. So the aligning result will be \u6559 /Education and \u7f72 /Department, \u80b2 is abandoned. To overcome this problem, a constraint of alignment is imposed to the model to ensure that the aligning result covers every Chinese characters of a target word and no overlapped characters in the result morpheme sequence. For instances, both", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-to-Morpheme Alignment", "sec_num": "4.2" }, { "text": "\u6559 /Education \u7f72 /Department and \u6559 \u80b2 /Education", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-to-Morpheme Alignment", "sec_num": "4.2" }, { "text": "\u80b2\u7f72/Department are not allowed alignment sequences. The constraint is applied to each possible aligning result. If the alignment violates the constraint, it will be rejected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-to-Morpheme Alignment", "sec_num": "4.2" }, { "text": "Since the new alignment algorithm must enumerate all of the possible alignments, the process is very time consuming. Therefore, it is advantageous to use a bilingual terminology bank rather than a parallel corpus. The average length of terminologies is short and much shorter than a typical sentence in a parallel corpus. This makes words to morphemes alignment computationally feasible and the results highly accurate (Chang et al., 2001; Bai et al., 2006) . 
This makes it possible to use the result as pseudo gold standards to evaluate affix rules as described in section 4.3.", "cite_spans": [ { "start": 419, "end": 439, "text": "(Chang et al., 2001;", "ref_id": "BIBREF3" }, { "start": 440, "end": 457, "text": "Bai et al., 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word-to-Morpheme Alignment", "sec_num": "4.2" }, { "text": "air|\u7a7a\u6c23 refrigeration|\uf92e\u51cd machine|\u6a5f building|\u5efa\u7bc9 industry|\u696d compound|\u8907\u5f0f steam|\u84b8\u6c7d engine|\u6a5f electronics|\u96fb\u5b50 industry|\u696d vice|\u526f chancellor|\u6821\u9577 Figure 3 . Sample of word-to-morpheme alignment.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 142, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Word-to-Morpheme Alignment", "sec_num": "4.2" }, { "text": "After the alignment task, we will get a word-tomorpheme aligned terminology bank as shown in Figure 3 . We can subsequently extract affix rules from the aligned terminology bank by the following steps:", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Rule Extraction", "sec_num": "4.3" }, { "text": "For each alignment, we produce all alignment links as affix rules. For instance, with (electronics| \u96fb \u5b50 industry| \u696d ), we would produce two rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1) Generate candidates of affix rule:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(a) \u96fb\u5b50_, electronics (b) _\u696d,", "eq_num": "industry" } ], "section": "1) Generate candidates of affix rule:", "sec_num": null }, { "text": "2) Evaluate the rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1) Generate candidates of affix rule:", "sec_num": null }, { "text": "The precision of each candidate rule is estimated by applying the rule to segment the Chinese terms. If a Chinese term contains the affix shown in the rule, the affix will be segmented. The results of segmentation are then to compare with the segmentation results of the alignments done by the algorithm of the section 4.2 as pseudo gold standards. Some example results of rule evaluations are shown in Figure 4 . ", "cite_spans": [], "ref_spans": [ { "start": 403, "end": 411, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "1) Generate candidates of affix rule:", "sec_num": null }, { "text": "In the third step, we sort the rules according to their precision rates in descending order, resulting in rules R 1 ..R n . And then for each R i , we scan R 1 to R i-1 , if there is a rule, R j , have the same English word condition and the affix condition of R i subsume that of R j , then we add affix condition of R j as exception condition of R i . For example, _\u696d, industry and _\u5de5\u696d, industry are rule candidates in the sorted table and have the same English word condition. 
Furthermore, the condition _ \u696d subsumes that of \u5de5\u696d, we add \u5de5\u696d to the exception condition of the rule with a shorter affix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3) Adding exception condition:", "sec_num": null }, { "text": "After adding the exception conditions, the rules are reevaluated with considering the exception condition to get new evaluation scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reevaluate the rules with exception condition:", "sec_num": null }, { "text": "Finally, filter out the rules with scores lower than a threshold 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5) Select rules by scores:", "sec_num": null }, { "text": "The reason of using exception condition is that an affix is usually an abbreviation of a word, such as _\u696d is an abbreviation of \u5de5\u696d. In general, a full morpheme is preferred to be segmented than its abbreviation while both occurred in a target word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5) Select rules by scores:", "sec_num": null }, { "text": "For example, when applying rules to \u96fb\u5b50\u5de5\u696d /electronic industry, _ \u5de5 \u696d ,industry is preferred than _\u696d,industry. However, in the evaluation step, precision rate of _\u696d,industry will be reduced when applying to full morphemes, such as \u96fb\u5b50\u5de5\u696d /electronic industry, and then could be filtered out if the precision is lower than the threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5) Select rules by scores:", "sec_num": null }, { "text": "The impurity measure was used by decision tree (Duda et al., 2001) to split the training examples into smaller and smaller subsets progressively according to features and hope that all the samples in each subset is as pure as possible. For convenient, they define the impurity function rather than the purity function of a subset as follows: Where P(w j ) is the fraction of examples at set S that are in category w j . By the well-known properties of entropy if all the examples are of the same category the impurity is 0; otherwise it is positive, with the greatest value occurring when the different classes are equal likely.", "cite_spans": [ { "start": 47, "end": 66, "text": "(Duda et al., 2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Impurity Measure Method", "sec_num": "5" }, { "text": "\u2211 \u2212 = j j j w P w P S impurity ) ( log ) ( ) (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impurity Measure Method", "sec_num": "5" }, { "text": "In our experiment, the impurity measure is used to split a Chinese word into two substrings and hope that all the characters in a substring are generated by the parallel English words as pure as possible. Here, we treat a Chinese word as a set of characters, the parallel English words as categories and the fraction of examples is redefined by the expected fraction number of characters that are generated by each English word. So we redefine the entropy impurity as follows: For example, as shown in Figure 5 , the impurity value of \u5916\u4ea4\u90e8\u9577, Figure 5 .(a), is much higher than values of \u5916\u4ea4 and \u90e8\u9577, Figure 5.(b) . 
Which means that the generating relations from English to Chinese tokens are purified by breaking \u5916\u4ea4\u90e8\u9577 into \u5916\u4ea4 and \u90e8\u9577.", "cite_spans": [], "ref_spans": [ { "start": 502, "end": 510, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 541, "end": 549, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 597, "end": 609, "text": "Figure 5.(b)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Impurity Measure of Translation", "sec_num": "5.1" }, { "text": "The translation probabilities between Chinese characters and English word can be trained using IBM model 1 by treating Chinese characters as tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impurity Measure of Translation", "sec_num": "5.1" }, { "text": "In this experiment, we treat the Chinese words which can be segmented into morphemes and linked to different English words as target words. In order to speedup our impurity method only target words will be segmented during the process. Therefore we investigate the actual distribution of target words first, we have tagged 1,573 Chinese words manually with target and non-target. It turns out that only 6.87% of the Chinese words are tagged as target and 94.4% of target words are nouns. The results show that most of the Chinese words do not need to be re-segmented and their POS distribution is very unbalanced. The results show that we can filter out the non-target words by simple clues. In our experiment, we use three features to filter out non-target words: 1) POS: Since 94.4% of the target words are nouns, we focus our experiment on nouns and filter out words with other POS. 2) One-to-many alignment in GIZA++: Only Chinese words which are linked to multiple English words in the result of GIZA++ are considered to be target words. 3) Impurity measure: the target words are expected to have high impurity values. So the words with a impurity values larger than a threshold are selected as target words 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Word Selection", "sec_num": "5.2" }, { "text": "and we used these annotated data as our gold standard in testing. The goal of segmentation adjustment using impurity is to find the best breaking point of a Chinese word according to parallel English words. When a word is broken into two substrings, the new substrings can be compared to original word by the information gain which is defined in terms of impurity as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": "Because of the modification of Chinese tokens caused by the word segmentation adjustment, a problem has been created when we wanted to compare the results to the copy which did not undergo adjustment. Therefore, after the alignment was done, we merged the alignment links related to tokens that were split up during adjustment. For example, the two links of foreign/\u5916\u4ea4 minister/\u90e8 \u9577 were merged as foreign minister/\u5916\u4ea4\u90e8\u9577. ) , ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": "( 2 1 Table 1 , including precision-recall and AER evaluation methods. In which the baseline is alignment result of the unadjusted data. 
The table shows that after the adjustment of word segmentation, both methods obtain significant improvement over the baseline, especially for the English-Chinese direction and the intersection results of both directions. The impurity method in particular improves alignment in both English-Chinese and Chinese-English directions.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 13, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": "Where i denotes a break point in f, denotes first i characters of f, and denotes last n-i characters of f. If the information gain of a breaking point is positive, the result substrings are considered to be better, i.e. more pure than original word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": "i f 1 n i f 1 +", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": "The goal of finding the best breaking point can be achieved by finding the point which maximizes the information gain as the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": "The improvement of intersection of both directions is important for machine translation. Because the intersection result has higher precision, a lot of machine translation method relies on intersecting the alignment results. The phrasebased machine translation (Koehn et al., 2003) uses the grow-diag-final heuristic to extend the word alignment to phrase alignment by using the intersection result. Liang (Liang et al., 2006) has proposed a symmetric word alignment model that merges two simple asymmetric models into a symmetric model by maximizing a combination of likelihood and agreement between the models. This method uses the intersection as the agreement of both models in the training time. The method has reduced the alignment error significantly over the traditional asymmetric models.", "cite_spans": [ { "start": 261, "end": 281, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF12" }, { "start": 406, "end": 426, "text": "(Liang et al., 2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": ") , , ( max arg 1 1 1 n i i n i f f f IG + < \u2264", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": "Note that a word can be separated into two substrings each time. If we want to segment a complex word composed of many morphemes, just split the word again and again like the construction of decision tree, until the information gain is negative or less than a threshold 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Breaking Point", "sec_num": "5.3" }, { "text": "In order to evaluate the effect of our methods on the word alignment task, we preprocessed parallel corpus in three ways: First we use a state-of-the-art word segmenter to tokenize the Chinese part of the corpus. Then, we used the affix rules to adjust word segmentation. Finally, we do the same but by using the impurity measure method. 
We used the GIZA++ package (Och and Ney, 2003) as the word alignment tool to align tokens on the three copies of preprocessed parallel corpora.", "cite_spans": [ { "start": 365, "end": 384, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "In order to analyze the adjustment results, we also manually segment and link the words of Chinese sentences to make the alignments 1-to-1 mapping as many as possible according to their translations for the 112 gold standard sentences. Table 2 shows the results of our analysis, the performance of impurity measure method is also slightly better than the affix rules in both recall and precision measure.", "cite_spans": [], "ref_spans": [ { "start": 236, "end": 243, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We used the first 100,000 sentences of Hong Kong News parallel corpus from LDC as our training data. And 112 randomly selected parallel sentences were aligned manually with sure and possible tags, as described in (Och and Ney, 2000) , ", "cite_spans": [ { "start": 213, "end": 232, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "In this paper, we have proposed two Chinese word segmentation adjustment methods to improve word alignment. The first method uses the affix rules learned from a bilingual terminology bank and then applies the rules to the parallel corpus to split the compound Chinese words into morphemes according to its counterpart parallel sentence. The second method uses the impurity method, which was motivated by the method of decision tree. The experimental results show that both methods lead to significant improvement in word alignment performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The bilingual terminology bank was compiled by the National Institute for Compilation and Translation. It is freely download at http://terms.nict.gov.tw by registering your information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We set the threshold as 0.7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our experiment, we use 0.3 as our threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our experiment, we set 0 as the threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the National Science Council of Taiwan under NSC Grants: NSC95-2422-H-001-031.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements:", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Going Beyond AER: An Extensive Analysis of Word Alignments and Their Impact on MT", "authors": [ { "first": "Bonnie", "middle": [ "J" ], "last": "Necip Fazil Ayan", "suffix": "" }, { "first": "", "middle": [], "last": "Dorr", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL 2006", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Necip Fazil Ayan and Bonnie J. Dorr. 2006. Going Beyond AER: An Extensive Analysis of Word Alignments and Their Impact on MT. 
In Proceedings of ACL 2006, pages 9-16, Sydney, Australia.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Sense Extraction and Disambiguation for Chinese Words from Bilingual Terminology Bank", "authors": [], "year": 2006, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "11", "issue": "3", "pages": "223--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Hong Bai, Keh-Jiann Chen and Jason S. Chang. 2006. Sense Extraction and Disambiguation for Chinese Words from Bilingual Terminology Bank. Computational Linguistics and Chinese Language Processing, 11(3):223-244.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Mathematics of Machine Translation: Parameter Estimation", "authors": [ { "first": "F", "middle": [], "last": "Petter", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "J Della" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Robert L. Mercer. 1993. The Mathematics of Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263- 311.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical Translation Model for Phrases", "authors": [ { "first": "S", "middle": [], "last": "Jason", "suffix": "" }, { "first": "David", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2001, "venue": "Chinese). Computational Linguistics and Chinese Language Processing", "volume": "6", "issue": "", "pages": "43--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason S Chang, David Yu, Chun-Jun Lee. 2001. Statisti- cal Translation Model for Phrases(in Chinese). Com- putational Linguistics and Chinese Language Proc- essing, 6(2):43-64.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unknown Word Detection for Chinese by a Corpus-based Learning Method", "authors": [ { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ming-Hong", "middle": [], "last": "Bai", "suffix": "" } ], "year": 1998, "venue": "International Journal of Computational linguistics and Chinese Language Processing", "volume": "3", "issue": "1", "pages": "27--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keh-Jiann Chen, Ming-Hong Bai. 1998. Unknown Word Detection for Chinese by a Corpus-based Learning Method. International Journal of Computational linguistics and Chinese Language Processing, 1998, Vol.3, #1, pages 27-44.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unknown Word Extraction for Chinese Documents", "authors": [ { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wei-Yun", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2002, "venue": "Proceedings of COLING 2002", "volume": "", "issue": "", "pages": "169--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keh-Jiann Chen, Wei-Yun Ma. 2002. Unknown Word Extraction for Chinese Documents. 
In Proceedings of COLING 2002, pages 169-175, Taipei, Taiwan.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Word Identification for Mandarin Chinese Sentences", "authors": [ { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shing-Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1992, "venue": "Proceedings of 14th COLING", "volume": "", "issue": "", "pages": "101--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keh-Jiann Chen, Shing-Huan Liu. 1992. Word Identification for Mandarin Chinese Sentences. In Proceedings of 14th COLING, pages 101-107.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Tailoring Word Alignments to Syntactic Machine Translation", "authors": [ { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL 2007", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "John DeNero, Dan Klein. 2007. Tailoring Word Alignments to Syntactic Machine Translation. In Proceedings of ACL 2007, pages 17-24, Prague, Czech Republic.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "HMM word and phrase alignment for statistical machine translation", "authors": [ { "first": "Yonggang", "middle": [], "last": "Deng", "suffix": "" }, { "first": "William", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT-EMNLP 2005", "volume": "", "issue": "", "pages": "169--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonggang Deng, William Byrne. 2005. HMM word and phrase alignment for statistical machine translation. In Proceedings of HLT-EMNLP 2005, pages 169-176, Vancouver, Canada.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Chinese word segmentation and named entity recognition: a pragmatic approach", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Andi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chang-Ning", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianfeng Gao, Mu Li, Andi Wu and Chang-Ning Huang. 2005. Chinese word segmentation and named entity recognition: a pragmatic approach. Computational Linguistics, 31(4)", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improving Statistical MT through Morphological Analysis", "authors": [ { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP 2005", "volume": "", "issue": "", "pages": "676--683", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon Goldwater, David McClosky. 2005. Improving Statistical MT through Morphological Analysis. 
In Proceedings of HLT/EMNLP 2005, pages 676-683, Vancouver, Canada.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Statistical Phrase-Based Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/NAACL 2003", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz J. Och, Daniel Marcu. 2003. Sta- tistical Phrase-Based Translation. In Proceedings of HLT/NAACL 2003, pages 48-54, Edmonton, Canada.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Morphological Analysis for Statistical Machine Translation", "authors": [ { "first": "Young-Suk", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT-NAACL 2004", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Young-Suk Lee. 2004. Morphological Analysis for Statistical Machine Translation. In Proceedings of HLT-NAACL 2004, pages 57-60, Boston, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Language Model Based Arabic Word Segmentation", "authors": [ { "first": "Young-Suk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL 2003", "volume": "", "issue": "", "pages": "399--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Young-Suk Lee, Kishore Papineni, Salim Roukos. 2003. Language Model Based Arabic Word Segmentation. In Proceedings of ACL 2003, pages 399-406, Sapporo, Japan.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Alignment by Agreement", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT-NAACL 2006", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Ben Taskar, Dan Klein. 2006. Alignment by Agreement. In Proceedings of HLT-NAACL 2006, pages 104-111, New York, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A Bottom-up Merging Algorithm for Chinese Unknown Word Extraction", "authors": [ { "first": "Wei-Yun", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL 2003, Second SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "31--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei-Yun Ma, Keh-Jiann Chen. 2003. A Bottom-up Merging Algorithm for Chinese Unknown Word Extraction. 
In Proceedings of ACL 2003, Second SIGHAN Workshop on Chinese Language Processing, pp31-38, Sapporo, Japan.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bootstrapping Word Alignment via Word Packing", "authors": [ { "first": "Yanjun", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Stroppa", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL 2007", "volume": "", "issue": "", "pages": "304--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanjun Ma, Nicolas Stroppa, Andy Way. 2007. Bootstrapping Word Alignment via Word Packing. In Proceedings of ACL 2007, pages 304-311, Prague, Czech Republic.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improving IBM Word-Alignment Model 1", "authors": [ { "first": "Robert", "middle": [ "C" ], "last": "Moore", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL 2004", "volume": "", "issue": "", "pages": "519--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert C. Moore. 2004. Improving IBM Word- Alignment Model 1. In Proceedings of ACL 2004, pages 519-526, Barcelona, Spain.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A Systematic Comparison of Various Statistical Alignment Models, Computational Linguistics", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "", "volume": "29", "issue": "", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och, Hermann Ney. A Systematic Comparison of Various Statistical Alignment Models, Computational Linguistics, volume 29, number 1, pp. 19-51 March 2003.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improved Statistical Alignment Models", "authors": [ { "first": "Franz", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz J. Och, Hermann Ney., Improved Statistical Alignment Models, In Proceedings of the 38th An- nual Meeting of the Association for Computational Linguistics, 2000, Hong Kong, pp. 440-447.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "HMM-based word alignment in statistical translation", "authors": [ { "first": "Stefan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of COLING 1996", "volume": "", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Vogel, Hermann Ney, Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. 
In Proceedings of COLING 1996, pages 836-841, Copenhagen, Denmark.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning an English-Chinese Lexicon from a Parallel Corpus", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xuanyin", "middle": [], "last": "Xia", "suffix": "" } ], "year": 1994, "venue": "Proceedings of AMTA 1994", "volume": "", "issue": "", "pages": "206--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu, Xuanyin Xia. 1994. Learning an English- Chinese Lexicon from a Parallel Corpus. In Proceedings of AMTA 1994, pages 206-213, Columbia, MD.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "377--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3):377-403.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Samples of affix rules." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Sample evaluations of candidate rules." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "impurity value of \u5916\u4ea4\u90e8\u9577. (b) impurity values of \u5916\u4ea4 and \u90e8\u9577." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Examples of impurity values." }, "FIGREF4": { "uris": null, "type_str": "figure", "num": null, "text": "In which f denotes the target Chinese word, e and f denote the parallel English and Chinese sentence that f belongs to and is the expected fraction number of characters in f that are generated by word e. The expected fraction number can be defined as follows: c | e) denotes the translation probability of Chinese character c given English word e." }, "FIGREF5": { "uris": null, "type_str": "figure", "num": null, "text": "word alignment results are shown in" }, "TABREF2": { "content": "
             direction         Recall   Precision   F-score   AER
baseline     English-Chinese   68.3     61.2        64.6      35.7
             Chinese-English   79.6     67.0        72.8      27.8
             intersection      59.9     92.0        72.6      26.6
affix rules  English-Chinese   78.2     64.6        70.8      29.8
             Chinese-English   80.2     68.0        73.6      27.0
             intersection      69.1     92.3        79.0      20.2
impurity     English-Chinese   78.1     64.9        70.9      29.7
             Chinese-English   81.4     70.4        75.5      25.0
             intersection      70.2     91.9        79.6      19.8
Table 1. Alignment results based on the standard word segmentation data.

             Recall   Precision
affix rules  82.35    66.66
impurity     84.31    67.72
Table 2. Alignment results based on the manual word segmentation data.
", "html": null, "text": "Alignment results based on the standard word segmentation data.", "num": null, "type_str": "table" } } } }