{ "paper_id": "I08-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:58.844624Z" }, "title": "Name Origin Recognition Using Maximum Entropy Model and Diverse Features", "authors": [ { "first": "Min", "middle": [], "last": "Zhang", "suffix": "", "affiliation": {}, "email": "mzhang@i2r.a-star.edu.sg" }, { "first": "Chengjie", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "cjsun@insun.hit.edu.cn" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Aiti", "middle": [], "last": "Aw", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Chew", "middle": [ "Lim" ], "last": "Tan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "country": "China" } }, "email": "wangxl@insun.hit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Name origin recognition is to identify the source language of a personal or location name. Some early work used either rulebased or statistical methods with single knowledge source. In this paper, we cast the name origin recognition as a multi-class classification problem and approach the problem using Maximum Entropy method. In doing so, we investigate the use of different features, including phonetic rules, ngram statistics and character position information for name origin recognition. Experiments on a publicly available personal name database show that the proposed approach achieves an overall accuracy of 98.44% for names written in English and 98.10% for names written in Chinese, which are significantly and consistently better than those in reported work.", "pdf_parse": { "paper_id": "I08-1008", "_pdf_hash": "", "abstract": [ { "text": "Name origin recognition is to identify the source language of a personal or location name. Some early work used either rulebased or statistical methods with single knowledge source. In this paper, we cast the name origin recognition as a multi-class classification problem and approach the problem using Maximum Entropy method. In doing so, we investigate the use of different features, including phonetic rules, ngram statistics and character position information for name origin recognition. Experiments on a publicly available personal name database show that the proposed approach achieves an overall accuracy of 98.44% for names written in English and 98.10% for names written in Chinese, which are significantly and consistently better than those in reported work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many technical terms and proper names, such as personal, location and organization names, are translated from one language into another with approximate phonetic equivalents. The phonetic translation practice is referred to as transliteration; conversely, the process of recovering a word in its native language from a transliteration is called as back-transliteration (Zhang et al, 2004; Knight and Graehl, 1998) . 
For example, English name \"Smith\" and \" \u53f2\u5bc6\u65af (Pinyin 1 : Shi-Mi-Si)\" in 1 Hanyu Pinyin, or Pinyin in short, is the standard romanization system of Chinese. In this paper, Pinyin is given next to Chinese form a pair of transliteration and backtransliteration. In many natural language processing tasks, such as machine translation and crosslingual information retrieval, automatic name transliteration has become an indispensable component.", "cite_spans": [ { "start": 369, "end": 388, "text": "(Zhang et al, 2004;", "ref_id": null }, { "start": 389, "end": 413, "text": "Knight and Graehl, 1998)", "ref_id": null }, { "start": 487, "end": 488, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Name origin refers to the source language of a name where it originates from. For example, the origin of the English name \"Smith\" and its Chinese transliteration \"\u53f2\u5bc6\u65af (Shi-Mi-Si)\" is English, while both \"Tokyo\" and \"\u4e1c\u4eac (Dong-Jing)\" are of Japanese origin. Following are examples of different origins of a collection of English-Chinese a English-Chinese dictionary, we first have to decide whether the name is of Chinese, Japanese, Korean or some European/English origins. Then we follow the transliteration rules implied by the origin of the source name. Although all English personal names are rendered in 26 letters, they may come from different romanization systems. Each romanization system has its own rewriting rules. English name \"Smith\" could be directly transliterated into Chinese as \"\u53f2\u5bc6\u65af(Shi-Mi-Si)\" since it follows the English phonetic rules, while the Chinese translation of Japanese name \"Koizumi\" becomes \"\u5c0f\u6cc9(Xiao-Quan)\" following the Japanese phonetic rules. The name origins are equally important in back-transliteration practice. Li et al. (2007) incorporated name origin recognition to improve the performance of personal name transliteration. Besides multilingual processing, the name origin also provides useful semantic information (regional and language information) for common NLP tasks, such as co-reference resolution and name entity recognition.", "cite_spans": [ { "start": 1049, "end": 1065, "text": "Li et al. (2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unfortunately, little attention has been given to name origin recognition (NOR) so far in the literature. In this paper, we are interested in two kinds of name origin recognition: the origin of names written in English (ENOR) and the origin of names written in Chinese (CNOR). For ENOR, the origins include English (Eng), Japanese (Jap), Chinese Mandarin Pinyin (Man) and Chinese Cantonese Jyutping (Can). For CNOR, they include three origins: Chinese (Chi, for both Mandarin and Cantonese), Japanese and English (refer to Latinscripted language).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unlike previous work (Qu and Grefenstette, 2004; Li et al., 2006; Li et al., 2007) where NOR was formulated with a generative model, we regard the NOR task as a classification problem. We further propose using a discriminative learning algorithm (Maximum Entropy model: MaxEnt) to solve the problem. To draw direct comparison, we conduct experiments on the same personal name corpora as that in the previous work by Li et al. (2006) . 
We show that the MaxEnt method effectively incorporates diverse features and outperforms previous methods consistently across all test cases.", "cite_spans": [ { "start": 21, "end": 48, "text": "(Qu and Grefenstette, 2004;", "ref_id": "BIBREF7" }, { "start": 49, "end": 65, "text": "Li et al., 2006;", "ref_id": "BIBREF4" }, { "start": 66, "end": 82, "text": "Li et al., 2007)", "ref_id": "BIBREF5" }, { "start": 416, "end": 432, "text": "Li et al. (2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows: in section 2, we review the previous work. Section 3 elaborates our proposed approach and the features. Section 4 presents our experimental setup and reports our experimental results. Finally, we conclude the work in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most of previous work focuses mainly on ENOR although same methods can be extended to CNOR. We notice that there are two informative clues that used in previous work in ENOR. One is the lexical structure of a romanization system, for example, Hanyu Pinyin, Mandarin Wade-Giles, Japanese Hepbrun or Korean Yale, each has a finite set of syllable inventory (Li et al., 2006) . Another is the phonetic and phonotactic structure of a language, such as phonetic composition, syllable structure. For example, English has unique consonant clusters such as /str/ and /ks/ which Chinese, Japanese and Korean (CJK) do not have. Considering the NOR solutions by the use of these two clues, we can roughly group them into two categories: rule-based methods (for solutions based on lexical structures) and statistical methods (for solutions based on phonotactic structures). Kuo and Yang (2004) proposed using a rulebased method to recognize different romanization system for Chinese only. The left-to-right longest match-based lexical segmentation was used to parse a test word. The romanization system is confirmed if it gives rise to a successful parse of the test word. This kind of approach (Qu and Grefenstette, 2004) is suitable for romanization systems that have a finite set of discriminative syllable inventory, such as Pinyin for Chinese Mandarin. For the general tasks of identifying the language origin and romanization system, rule based approach sounds less attractive because not all languages have a finite set of discriminative syllable inventory.", "cite_spans": [ { "start": 355, "end": 372, "text": "(Li et al., 2006)", "ref_id": "BIBREF4" }, { "start": 862, "end": 881, "text": "Kuo and Yang (2004)", "ref_id": null }, { "start": 1183, "end": 1210, "text": "(Qu and Grefenstette, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Qu and Grefenstette (2004) proposed a NOR identifier using a trigram language model (Cavnar and Trenkle, 1994) to distinguish personal names of three language origins, namely Chinese, Japanese and English. In their work, the training set includes 11,416 Chinese name entries, 83,295 Japanese name entries and 88,000 English name entries. 
However, the trigram is defined as the joint probabil-", "cite_spans": [ { "start": 84, "end": 110, "text": "(Cavnar and Trenkle, 1994)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "ity 1 2 ( ) i i i p c c c \u2212 \u2212 for 3-character 1 2 i i i c c c \u2212 \u2212", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "rather than the commonly used conditional probability 1 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "( | ) i i i p c c c \u2212 \u2212", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": ". Therefore, the so-called trigram in Qu and Grefenstette (2004) is basically a substring unigram probability, which we refer to as the n-gram (n-character) sum model (SUM) in this paper. Suppose that we have the unigram count", "cite_spans": [ { "start": 38, "end": 64, "text": "Qu and Grefenstette (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "1 2 ( ) i i i C c c c \u2212 \u2212 for character substring 1 2 i i i c c c \u2212 \u2212", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": ", the unigram is then computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "1 2 1 2 1 2 1 2 , ( ) ( ) ( ) i i i i i i i i i i i i i c c c C c c c p c c c C c c c \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 = \u2211 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "which is the count of character substring", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "1 2 i i i c c c \u2212 \u2212", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "normalized by the sum of all 3-character string counts in the name list for the language of interest. For origin recognition of Japanese names, this method works well with an accuracy of 92%. However, for English and Chinese, the results are far behind with a reported accuracy of 87% and 70% respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "2) N-gram Perplexity Method (PP): Li et al. (2006) proposed using n-gram character perplexity c PP to identify the origin of a Latin-scripted name. Using bigram, the c PP is defined as:", "cite_spans": [ { "start": 34, "end": 50, "text": "Li et al. 
(2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "1 1 log ( | ) 2 N c i i 1 i c p c c N c PP \u2212 = \u2212 \u2211 = (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "where c N is the total number of characters in the test name, i c is the i th character in the test name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Method 1) N-gram Sum Method (SUM):", "sec_num": null }, { "text": "i i p c c \u2212 is the bigram probability which is learned from each name list respectively. As a function of model, c PP measures how good the model matches the test data. Therefore, c PP can be used to measure how good a test name matches a training set. A test name is identified to belong to a language if the language model gives rise to the minimum perplexity. Li et al. (2006) shown that the PP method gives much better performance than the SUM method. This may be due to the fact that the PP measures the normalized conditional probability rather than the sum of joint probability. Thus, the PP method has a clearer mathematical interpretation than the SUM method.", "cite_spans": [ { "start": 363, "end": 379, "text": "Li et al. (2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "( | )", "sec_num": "1" }, { "text": "The statistical methods attempt to overcome the shortcoming of rule-based method, but they suffer from data sparseness, especially when dealing with a large character set, such as in Chinese (our experiments will demonstrate this point empirically). In this paper, we propose using Maximum Entropy (MaxEnt) model as a general framework for both ENOR and CNOR. We explore and integrate multiple features into the discriminative classifier and use a common dataset for benchmarking. Experimental results show that the MaxEnt model effectively incorporates diverse features to demonstrate competitive performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "( | )", "sec_num": "1" }, { "text": "The principle of maximum entropy (MaxEnt) model is that given a collection of facts, choose a model consistent with all the facts, but otherwise as uniform as possible (Berger et al., 1996) . Max-Ent model is known to easily combine diverse features. For this reason, it has been widely adopted in many natural language processing tasks. The MaxEnt model is defined as:", "cite_spans": [ { "start": 168, "end": 189, "text": "(Berger et al., 1996)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "MaxEnt Model for NOR", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( , ) 1 1 ( | ) j i K f c x i j j p c x Z \u03b1 = = \u220f (3) ( , ) 1 1 1 ( | ) j i K N N f c x i j i i j Z p c x \u03b1 = = = = = \u2211 \u2211\u220f", "eq_num": "(4)" } ], "section": "MaxEnt Model for NOR", "sec_num": "3.1" }, { "text": "where i c is the outcome label, x is the given observation, also referred to as an instance. Z is a normalization factor. N is the number of outcome labels, the number of language origins in our case. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MaxEnt Model for NOR", "sec_num": "3.1" }, { "text": "= \u23a7 = \u23a8 \u23a9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MaxEnt Model for NOR", "sec_num": "3.1" }, { "text": "In our implementation, we used Zhang's maximum entropy package 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MaxEnt Model for NOR", "sec_num": "3.1" }, { "text": "Let us use English name \"Smith\" to illustrate the features that we define. All characters in a name are first converted into upper case for ENOR before feature extraction. N-gram Features: N-gram features are designed to capture both phonetic and orthographic structure information for ENOR and orthographic information only for CNOR. This is motivated by the facts that: 1) names written in English but from non-English origins follow different phonetic rules from the English one; they also manifest different character usage in orthographic form; 2) names written in Chinese follows the same pronunciation rules (Pinyin), but the usage of Chinese characters is distinguishable between different language origins as reported in Table 2 of (Li et al., 2007) . We include position information into the n-gram features. This is mainly to differentiate surname from given name in recognizing the origin of CJK personal names written in Chinese. For example, the position specific n-gram features of a Chinese name \"\u6e29\u5bb6\u5b9d(Wen-Jia-Bao)\" are as follows:", "cite_spans": [ { "start": 741, "end": 758, "text": "(Li et al., 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 730, "end": 737, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "1) FPUni: position specific unigram <0 \u6e29(Wen), 1 \u5bb6(Jia), 2 \u5b9d(Bao)> 2) FPBi: position specific bigram <0 \u6e29\u5bb6(Wen-Jia), 1 \u5bb6\u5b9d(Jia-Bao)> 3) FPTri: position specific trigram <0 \u6e29\u5bb6\u5b9d(Wen-Jia-Bao)>", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "Phonetic Rule-based Features: These features are inspired by the rule-based methods (Kuo and Yang, 2004; Qu and Grefenstette, 2004) that check whether an English name is a sequence of syllables of CJK languages in ENOR task. We use the following two features in ENOR task as well.", "cite_spans": [ { "start": 84, "end": 104, "text": "(Kuo and Yang, 2004;", "ref_id": null }, { "start": 105, "end": 131, "text": "Qu and Grefenstette, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "1) FMan: a Boolean feature to indicate whether a name is a sequence of Chinese Mandarin Pinyin. 2) FCan: a Boolean feature to indicate whether a name is a sequence of Cantonese Jyutping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "Other Features: 1) FLen: the number of Chinese characters in a given name. This feature is for CNOR only. The numbers of Chinese characters in personal names vary with their origins. For example, Chinese and Korean names usually consist of 2 to 3 Chinese characters while Japanese names can have up to 4 or 5 Chinese characters 2) FFre: the frequency of n-gram in a given name. This feature is for ENOR only. In CJK names, some consonants or vowels usually repeat in a name as the result of the regular syllable structure. 
For example, in the Chinese name \"Zhang Wanxiang\", the bigram \"an\" appears three times Please note that the trigram and position specific trigram features are not used in CNOR due to anticipated data sparseness in CNOR 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "We conduct the experiments to validate the effectiveness of the proposed method for both ENOR and CNOR tasks. We prepare two data sets which are collected from publicly accessible sources: D E and D C for the ENOR and CNOR experiment respectively. D E is the one used in (Li et al., 2006) , consisting of personal names of Japanese (Jap), Chinese (Man), Cantonese (Can) and English (Eng) origins. D C consists of personal names of Japanese (Jap), Chinese (Chi, including both Mandarin and Cantonese) and English (Eng) origins. Table 1 and Table 2 list their details. In the experiments, 90% of entries in Table 1 (D E ) and Evaluation Methods: Accuracy is usually used to evaluate the recognition performance (Qu and Gregory, 2004; Li et al., 2006; Li et al., 2007) . However, as we know, the individual accuracy used before only reflects the performance of recall and does not give a whole picture about a multiclass classification task. Instead, we use precision (P), recall (R) and F-measure (F) to evaluate the performance of each origin. In addition, an overall accuracy (Acc) is also given to describe the whole performance. The P, R, F and Acc are calculated as following: Table 5 : Contribution of each feature for CNOR Table 4 reports the feature weights of two features \"FMan\" and \"FCan\" with regard to different origins in ENOR task. It shows that \"FCan\" has positive weight only for origin \"Can\" while \"FMan\" has positive weights for both origins \"Man\" and \"Jap\", although the weight for \"Man\" is higher. This agrees with our observation that the two features favor origins \"Man\" or \"Can\". The feature weights also reflect the fact that some Japanese names can be successfully parsed by the Chinese Mandarin Pinyin system due to their similar syllable structure. For example, the Japanese name \"Tanaka Miho\" is also a sequence of Chinese Pinyin: \"Ta-na-ka Mi-ho\". Table 5 reports the contributions of different features in CNOR task by gradually incorporating the feature set. It shows that: 1) Unigram features are the most informative 2) Bigram features degrade performance. This is largely due to the data sparseness problem as discussed in Section 3.2. 3) FLen is also useful that confirms our intuition about name length. 
Finally the combination of the above three useful features achieves the best performance of 98.10% in overall accuracy for CNOR as in the last row of Table 5 .", "cite_spans": [ { "start": 271, "end": 288, "text": "(Li et al., 2006)", "ref_id": "BIBREF4" }, { "start": 710, "end": 732, "text": "(Qu and Gregory, 2004;", "ref_id": "BIBREF7" }, { "start": 733, "end": 749, "text": "Li et al., 2006;", "ref_id": "BIBREF4" }, { "start": 750, "end": 766, "text": "Li et al., 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 527, "end": 547, "text": "Table 1 and Table 2", "ref_id": "TABREF1" }, { "start": 606, "end": 613, "text": "Table 1", "ref_id": null }, { "start": 1181, "end": 1188, "text": "Table 5", "ref_id": null }, { "start": 1229, "end": 1236, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1877, "end": 1884, "text": "Table 5", "ref_id": null }, { "start": 2390, "end": 2397, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In Tables 3 and 5 , the effectiveness of each feature may be affected by the order in which the features are incorporated, i.e., the features that are added at a later stage may be underestimated. Thus, we conduct another experiment using \"allbut-one\" strategy to further examine the effectiveness of each kind of features. Each time, one type of the n-gram (n=1, 2, 3) features (including orthographic n-gram, position-specific and n-gram frequency features) is removed from the whole feature set. The results are shown in Table 6 . Table 6 : Effect of n-gram feature for ENOR Table 6 reveals that removing trigram features affects the performance most. This suggests that trigram features are much more effective for ENOR than other two types of features. It also shows that trigram features in ENOR does not suffer from the data sparseness issue.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 17, "text": "Tables 3 and 5", "ref_id": "TABREF3" }, { "start": 524, "end": 531, "text": "Table 6", "ref_id": null }, { "start": 534, "end": 541, "text": "Table 6", "ref_id": null }, { "start": 578, "end": 585, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "4.2" }, { "text": "As observed in Table 5 , in CNOR task, 93.96% accuracy is obtained when removing unigram features, which is much lower than 98.10% when bigram features are removed. This suggests that unigram features are very useful in CNOR, which is mainly due to the data sparseness problem that bigram features may have encountered. (Qu and Gregory, 2004) and the PP model (Li et al., 2006) .", "cite_spans": [ { "start": 320, "end": 342, "text": "(Qu and Gregory, 2004)", "ref_id": "BIBREF7" }, { "start": 360, "end": 377, "text": "(Li et al., 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Features", "sec_num": null }, { "text": "All the experiments are conducted on the same data sets as described in section 4.1. Tables 7 and 8 show that the proposed MaxEnt model outperforms other models. The results are statistically significant ( 2 \u03c7 test with p<0.01) and consistent across all tests.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 100, "text": "Tables 7 and 8", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Model Complexity and Data Sparseness", "sec_num": "4.3" }, { "text": "We look into the complexity of the models and their effects. 
Tables 7 and 8 summarize the overall accuracy of three models. Table 9 reports the numbers of parameters in each of the models. We are especially interested in a comparison between the MaxEnt and PP models because their performance is close. We observe that, using trigram features, the MaxEnt model has many more parameters than the PP model does. Therefore, it is not surprising if the MaxEnt model outperforms when more training data are available. However, the experiment results also show that the MaxEnt model consistently outperforms the PP model even with the same size of training data. This is largely attributed to the fact that MaxEnt incorporates more robust features than the PP model does, such as rule-based, length of names features.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 9", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Model Complexity:", "sec_num": null }, { "text": "One also notices that PP clearly outperforms SUM by using the same number of parameters in ENOR and shows comparable performance in CNOR tasks. Note that SUM and PP are different in two areas: one is the PP model employs word length normalization while SUM doesn't; another that the PP model uses n-gram conditional probability while SUM uses n-character joint probability. We believe that the improved performance of PP model can be attributed to the effect of usage of conditional probability, rather than length normalization since length normalization does not change the order of probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Complexity:", "sec_num": null }, { "text": "We understand that we can only assess the effectiveness of a feature when sufficient statistics is available. In CNOR (see Table 8 ), we note that the Chinese transliterations of English origin use only 377 Chinese characters, so data sparseness is not a big issue. Therefore, bigram SUM and bigram PP methods easily achieve good performance for English origin. However, for Japanese origin (represented by 1413 Chinese characters) and Chinese origin (represented by 2319 Chinese characters), the data sparseness becomes acute and causes performance degradation in SUM and PP models. We are glad to find that MaxEnt still maintains a good performance benefiting from other robust features. Table 10 compares the overall accuracy of the three methods using unigram and bigram features in CNOR task, respectively. It shows that the MaxEnt method achieves best performance. Another interesting finding is that unigram features perform better than bigram features for PP and MaxEnt models, which shows that data sparseness remains an issue even for MaxEnt model.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 130, "text": "Table 8", "ref_id": "TABREF10" }, { "start": 690, "end": 698, "text": "Table 10", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Data Sparesness:", "sec_num": null }, { "text": "We propose using MaxEnt model to explore diverse features for name origin recognition. Experiment results show that our method is more effective than previously reported methods. 
Our contributions include: 1) Cast the name origin recognition problem as a multi-class classification task and propose a MaxEnt solution to it; 2) Explore and integrate diverse features for name origin recognition and propose the most effective feature sets for ENOR and for CNOR In the future, we hope to integrate our name origin recognition method with a machine transliteration engine to further improve transliteration performance. We also hope to study the issue of name origin recognition in context of sentence and use contextual words as additional features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Min Zhang, Jian Su and Haizhou Li. 2004 ", "cite_spans": [ { "start": 31, "end": 39, "text": "Li. 2004", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "http://homepages.inf.ed.ac.uk/s0450736/maxent.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the test set of CNOR, 1080 out of 2980 names of Chinese origin do not consist of any bigrams learnt from training data, while 2888 out of 2980 names do not consist of any learnt trigrams. This is not surprising as most of Chinese names only have two or three Chinese characters and in our open testing, the train set is exclusive of all entries in the test set. 4 http://www.census.gov/genealogy/names/ 5 http://technology.chtsai.org/namelist/ 6 http://www.csse.monash.edu.au/~jwb/enamdict_doc.html 7 Xinhua News Agency (1992) 8 http://www.ldc.upenn.edu LDC2005T34 9 www.cjk.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Maximum Entropy Approach to Natural Language Processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "J Della" ], "last": "Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L. Berger, Stephen A. Della Pietra and Vincent J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Lin- guistics. 22(1):39-71.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Ngram based text categorization", "authors": [ { "first": "B", "middle": [], "last": "William", "suffix": "" }, { "first": "John", "middle": [ "M" ], "last": "Cavnar", "suffix": "" }, { "first": "", "middle": [], "last": "Trenkle", "suffix": "" } ], "year": 1994, "venue": "3rd Annual Symposium on Document Analysis and Information Retrieval", "volume": "", "issue": "", "pages": "275--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "William B. Cavnar and John M. Trenkle. 1994. Ngram based text categorization. In 3rd Annual Symposium on Document Analysis and Information Retrieval, 275-282.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generating Paired Transliterated-Cognates Using Multiple Pronunciation Characteristics from Web Corpora. 
PA-CLIC 18", "authors": [ { "first": "Jin-Shea", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "Ying-Kuei", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "275--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin-Shea Kuo and Ying-Kuei Yan. 2004. Generating Paired Transliterated-Cognates Using Multiple Pro- nunciation Characteristics from Web Corpora. PA- CLIC 18, December 8th-10th, Waseda University, Tokyo, Japan, 275-282.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Transliteration. Advances in Chinese Spoken Language Processing", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shuanhu", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Jin-Shea", "middle": [], "last": "Kuo", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "341--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, Shuanhu Bai and Jin-Shea Kuo. 2006. Transliteration. Advances in Chinese Spoken Lan- guage Processing. World Scientific Publishing Com- pany, USA, 341-364.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Semantic Transliteration of Personal Names", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Khe Chai", "middle": [], "last": "Sim", "suffix": "" }, { "first": "Jin-Shea", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "Minghui", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "120--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, Khe Chai Sim, Jin-Shea Kuo and Minghui Dong. 2007. Semantic Transliteration of Personal Names. ACL-2007. 120-127.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Chinese Transliteration of Foreign Personal Names", "authors": [ { "first": "Xinhua", "middle": [], "last": "News Agency", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinhua News Agency. 1992. Chinese Transliteration of Foreign Personal Names. The Commercial Press", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Finding ideographic representations of Japanese names written in Latin script via language identification and corpus validation", "authors": [ { "first": "Yan", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Qu and Gregory Grefenstette. 2004. Finding ideo- graphic representations of Japanese names written in Latin script via language identification and corpus validation. ACL-2004. 183-190.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "parameters. Each parameter corresponds to exactly one feature and can be viewed as a \"weight\" for the corresponding feature.In the NOR task, c is the name origin label; x is a personal name, i f is a feature function. All features used in the MaxEnt model in this paper are binary. 
For example:", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "The N-gram related features include: 1) FUni: character unigram 2) FBi: character bigram 3) FTri: character trigram Position Specific n-gram Features:", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "type_str": "table", "content": "", "num": null, "text": "D", "html": null }, "TABREF2": { "type_str": "table", "content": "
(D C ) are randomly selected for training and the remaining 10% are kept for testing for each language origin. Columns 2 and 3 in Tables 7 and 8 list the numbers of entries in the training and test sets.
", "num": null, "text": "", "html": null }, "TABREF3": { "type_str": "table", "content": "
It shows that:
1) All individual features are useful since the performance increases consistently when more features are being introduced.
2) Bigram feature presents the most informative feature that gives rise to the highest
", "num": null, "text": "reports the experimental results of ENOR. It shows that the MaxEnt approach achieves the best result of 98.44% in overall accuracy when combining all the diverse features as listed in Subsection 3.2.Table 3 also measures the contributions of different features for ENOR by gradually incorporating the feature set. MaxEnt method can integrate the advantages of previous rule-based and statistical methods and easily integrate other features.", "html": null }, "TABREF4": { "type_str": "table", "content": "
Features   Eng      Jap      Man      Can
FMan       -0.357   0.069    0.072    -0.709
FCan       -0.424   -0.062   -0.775   0.066
", "num": null, "text": "Contribution of each feature for ENOR", "html": null }, "TABREF5": { "type_str": "table", "content": "
Feature               Origin   P(%)    R(%)    F       Acc(%)
FUni                  Eng      97.89   98.43   98.16
                      Chi      95.80   95.03   95.42   96.97
                      Jap      96.96   97.05   97.00
+FBi                  Eng      96.99   98.27   97.63
                      Chi      96.86   92.11   94.43   96.28
                      Jap      95.04   97.73   96.36
+FLen                 Eng      97.35   98.38   97.86
                      Chi      97.29   95.00   96.13   97.14
                      Jap      96.78   97.64   97.21
+FPUni                Eng      97.74   98.65   98.19
                      Chi      97.65   96.34   96.99   97.77
                      Jap      97.91   98.05   97.98
+FPBi                 Eng      97.50   98.43   97.96
                      Chi      97.61   96.04   96.82   97.56
                      Jap      97.59   97.94   97.76
FUni + FLen + FPUni   Eng      98.08   99.04   98.56
                      Chi      97.57   96.88   97.22   98.10
                      Jap      98.58   98.11   98.34
", "num": null, "text": "Features weights in ENOR task.", "html": null }, "TABREF7": { "type_str": "table", "content": "", "num": null, "text": "ENOR) and Table 8 (CNOR) compare our MaxEnt model with the SUM model", "html": null }, "TABREF9": { "type_str": "table", "content": "
Origin   # training entries   # test entries   Bigram SUM (P% / R% / F)   Bigram PP (P% / R% / F)   MaxEnt (P% / R% / F)
Eng      37,644               3,765            95.94 / 98.65 / 97.28      97.58 / 97.61 / 97.60     98.08 / 99.04 / 98.56
Chi      29,795               2,980            96.26 / 87.35 / 91.59      95.10 / 87.35 / 91.06     97.57 / 96.88 / 97.22
Jap      33,897               3,390            93.01 / 97.67 / 95.28      90.94 / 97.43 / 94.07     98.58 / 98.11 / 98.34
Overall Acc(%)                                 95.00                      94.53                     98.10
", "num": null, "text": "Benchmarking different methods in ENOR task", "html": null }, "TABREF10": { "type_str": "table", "content": "
MaxEnt   124,692   13,496   182,116
PP        16,851    4,045    86,490
SUM       16,851    4,045    86,490
", "num": null, "text": "Benchmarking different methods in CNOR task", "html": null }, "TABREF11": { "type_str": "table", "content": "
                   SUM     PP      MaxEnt
Unigram Features   90.55   97.09   98.10
Bigram Features    95.00   94.53   97.56
", "num": null, "text": "Numbers of parameters used in different methods", "html": null }, "TABREF12": { "type_str": "table", "content": "", "num": null, "text": "Overall accuracy using unigram and bigram features in CNOR task", "html": null } } } }