{ "paper_id": "I08-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:41:29.346305Z" }, "title": "Orthographic Disambiguation Incorporating Transliterated Probability", "authors": [ { "first": "Eiji", "middle": [], "last": "Aramaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1 Hongo, Bunkyo-ku", "postCode": "113-8655", "settlement": "Tokyo", "country": "Japan" } }, "email": "aramaki@hcc.h.u-tokyo.ac.jp" }, { "first": "Takeshi", "middle": [], "last": "Imai", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1 Hongo, Bunkyo-ku", "postCode": "113-8655", "settlement": "Tokyo", "country": "Japan" } }, "email": "" }, { "first": "Kengo", "middle": [], "last": "Miyo", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1 Hongo, Bunkyo-ku", "postCode": "113-8655", "settlement": "Tokyo", "country": "Japan" } }, "email": "" }, { "first": "Kazuhiko", "middle": [], "last": "Ohe", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1 Hongo, Bunkyo-ku", "postCode": "113-8655", "settlement": "Tokyo", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Orthographic variance is a fundamental problem for many natural language processing applications. The Japanese language, in particular, contains many orthographic variants for two main reasons: (1) transliterated words allow many possible spelling variations, and (2) many characters in Japanese nouns can be omitted or substituted. Previous studies have mainly focused on the former problem; in contrast, this study has addressed both problems using the same framework. 
First, we automatically collected both positive examples (sets of equivalent term pairs) and negative examples (sets of inequivalent term pairs). Then, by using both sets of examples, a support vector machine based classifier determined whether two terms (t 1 and t 2) were equivalent. To boost accuracy, we added a transliterated probability P (t 1 |s)P (t 2 |s), which is the probability that both terms (t 1 and t 2) were transliterated from the same source term (s), to the machine learning features. Experimental results yielded high levels of accuracy, demonstrating the feasibility of the proposed approach.", "pdf_parse": { "paper_id": "I08-1007", "_pdf_hash": "", "abstract": [ { "text": "Orthographic variance is a fundamental problem for many natural language processing applications. The Japanese language, in particular, contains many orthographic variants for two main reasons: (1) transliterated words allow many possible spelling variations, and (2) many characters in Japanese nouns can be omitted or substituted. Previous studies have mainly focused on the former problem; in contrast, this study has addressed both problems using the same framework. First, we automatically collected both positive examples (sets of equivalent term pairs) and negative examples (sets of inequivalent term pairs). Then, by using both sets of examples, a support vector machine based classifier determined whether two terms (t 1 and t 2) were equivalent. To boost accuracy, we added a transliterated probability P (t 1 |s)P (t 2 |s), which is the probability that both terms (t 1 and t 2) were transliterated from the same source term (s), to the machine learning features. 
Experimental results yielded high levels of accuracy, demonstrating the feasibility of the proposed approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Spelling variations, such as \"center\" and \"centre\", which have different spellings but identical meanings, are problematic for many NLP applications including information extraction (IE), question answering (QA), and machine transliteration (MT). In this paper, we term these variations orthographic variants. The Japanese language, in particular, contains many orthographic variants, for two main reasons:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. It imports many words from other languages using transliteration, resulting in many possible spelling variations. For example, Masuyama et al. (2004) found at least six different spellings for spaghetti in newspaper articles (Table 1 Left).", "cite_spans": [ { "start": 130, "end": 152, "text": "Masuyama et al. (2004)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 228, "end": 236, "text": "(Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Many characters in Japanese nouns can be omitted or substituted, leading to numerous insertion variations (Daille et al., 1996) (Table 1 Right).", "cite_spans": [ { "start": 109, "end": 130, "text": "(Daille et al., 1996)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 131, "end": 139, "text": "(Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these problems, this study developed a support vector machine (SVM)-based classifier that can determine whether two terms are equivalent. 
Because an SVM-based approach requires positive and negative examples, we also developed a method to automatically generate both types of examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our proposed method differs from previously developed methods in two ways.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Previous studies have focused solely on the former problem (transliteration); our target scope is wider. We addressed both transliteration and character omissions/substitutions using the same framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Most previous studies have focused on back-transliteration (Knight and Graehl, 1998; Goto et al., 2004) , which has the goal of generating a source word (s) for a Japanese term (t). In contrast, we employed a discriminative approach, which has the goal of determining whether two terms (t 1 and t 2 ) are equivalent. These two goals are related. For example, if two terms (t 1 and t 2 ) were transliterated from the same word (s), they should be orthographic variants. To exploit this information, we incorporated the transliterated probability (P (t 1 |s) \u00d7 P (t 2 |s)) into the SVM features.", "cite_spans": [ { "start": 62, "end": 87, "text": "(Knight and Graehl, 1998;", "ref_id": "BIBREF9" }, { "start": 88, "end": 106, "text": "Goto et al., 2004)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although we investigated performance using medical terms, our proposed method does not depend on a target domain 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Before developing our methodology, we examined problems related to orthographic variance. 
First, we investigated the amount of orthographic variance between two dictionaries' entries (DIC1 (Ito et al., 2003) , totaling 69,604 entries, and DIC2 (Nanzando, 2001) , totaling 27,971 entries).", "cite_spans": [ { "start": 189, "end": 207, "text": "(Ito et al., 2003)", "ref_id": null }, { "start": 244, "end": 260, "text": "(Nanzando, 2001)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Orthographic Variance in Dictionary Entries", "sec_num": "2" }, { "text": "Exact matches between entries only occurred for 10,577 terms (15.1% of DIC1, and 37.8% of DIC2). From other entries, we extracted orthographic variants as follows. We extracted term pairs with similar spelling (t 1 and t 2 ) using edit distance-based similarity (defined by Table 2 ). We then retained term pairs with SIM ed > 0.8, obtaining 5,064 term pairs with similar spellings.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Orthographic Variance in Dictionary Entries", "sec_num": "2" }, { "text": "We then manually judged whether each term pair was composed of orthographic variants (whether or not they had the same meaning).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STEP 2: Judging Orthographic Variance", "sec_num": null }, { "text": "Our results indicated that 1,889 (37.3%) of the term pairs were orthographic variants. Figure 1 presents the relation between the orthographic variation ratio and similarity threshold (0.8-1.0). As the figure shows, even a high similarity (e.g., SIM ed = 0.96-0.97) does not always indicate that two terms are orthographic variants.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "STEP 2: Judging Orthographic Variance", "sec_num": null }, { "text": "The following term pair is a typical example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STEP 2: Judging Orthographic Variance", "sec_num": null }, { "text": "1. (mutated hepatitis type B virus),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STEP 2: Judging Orthographic Variance", "sec_num": null }, { "text": "They differ by only one character (\"B\" and \"C\"), resulting in high spelling similarity, but their meanings are not equivalent. This type of limitation, intrinsic to measurements of spelling similarity, motivated us to develop an SVM-based classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(mutated hepatitis type C virus).", "sec_num": "2." }, { "text": "We developed an SVM-based classifier that determines whether two terms are equivalent. 
Section 3.1 Table 2 : Edit Distance-based Similarity (SIM ed ).", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "The edit distance-based similarity (SIM ed ) between two terms (t 1 , t 2 ) is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "SIM ed (t 1 , t 2 ) = 1 \u2212 (2 \u00d7 EditDistance(t 1 , t 2 )) / (len(t 1 ) + len(t 2 )),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "where len(t 1 ) is the number of characters of t 1 , len(t 2 ) is the number of characters of t 2 , and EditDistance(t 1 , t 2 ) is the minimum number of point mutations required to change t 1 into t 2 , where a point mutation is one of: (1) the substitution of a character, (2) the insertion of a character, and (3) the deletion of a character. For details, see (Levenshtein, 1965) .", "cite_spans": [ { "start": 363, "end": 382, "text": "(Levenshtein, 1965)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "will describe the method we used to build training data, and Section 3.2 will introduce the classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "Our method uses a straightforward approach to extract positive examples. 
The basic idea is that orthographic variants should have (1) similar spelling, and (2) the same English translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Building of Examples Positive Examples", "sec_num": "3.1" }, { "text": "The method consists of the following two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Building of Examples Positive Examples", "sec_num": "3.1" }, { "text": "STEP 1: First, using two or more translation dictionaries, extract a set of Japanese terms with the same English translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Building of Examples Positive Examples", "sec_num": "3.1" }, { "text": "STEP 2: Then, for each extracted set, generate all possible term pairs (t 1 and t 2 ) and calculate the spelling similarity between them. Spelling similarity is measured by edit distance-based similarity (see Section 2). Any term pair whose similarity exceeds a threshold (SIM ed (t 1 , t 2 ) > 0.8) is considered a positive example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Building of Examples Positive Examples", "sec_num": "3.1" }, { "text": "We based our method of extracting negative examples on the same dictionary-based approach. 
As with positive examples, we collected term pairs with similar spellings (SIM ed (t 1 , t 2 ) > 0.8), but differing English translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": null }, { "text": "However, the above heuristic is not sufficient to extract negative examples; different English terms might have the same meaning, which would yield unsuitable negative examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": null }, { "text": "For example, t 1 \" (stomach cancer)\" and t 2 \" (stomach carcinoma)\": although these words have differing English translations, they are not a suitable negative example (\"cancer\" and \"carcinoma\" are synonymous).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": null }, { "text": "To address this problem, we employed a corpus-based approach, hypothesizing that if two terms are orthographic variants, they should rarely both appear in the same document. Conversely, if both terms appear together in many documents, they are unlikely to be orthographic variants (negative examples).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": null }, { "text": "Based on this assumption, we defined the following scoring method:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": null }, { "text": "Score(t 1 , t 2 ) = log(HIT (t 1 , t 2 )) / max(log(HIT (t 1 )), log(HIT (t 2 )))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": null }, { "text": ",", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": null }, { "text": "where HIT (t) is the number of Google hits for a query t. 
We used only the K negative examples with the highest scores, and discarded the others 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Examples", "sec_num": null }, { "text": "The next problem was how to convert the training data into machine learning features. We used two types of features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM-Based Classifier", "sec_num": "3.2" }, { "text": "We expressed different characters between two terms and their context (window size \u00b11) as features, shown in Table 3 . To represent an omission, \"\u03c6 (null)\" is treated as a character. Two examples are provided in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Character-Based Features", "sec_num": null }, { "text": "Note that if terms contain two or more differing parts, all the differing parts are converted into features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-Based Features", "sec_num": null }, { "text": "Another type of feature is the similarity between two terms (t 1 and t 2 ). We employed two similarities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity-based Features", "sec_num": null }, { "text": "1. Edit distance-based similarity SIM ed (t 1 , t 2 ) (see Section 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity-based Features", "sec_num": null }, { "text": "2. Transliterated similarity, which is the probability that two terms (t 1 and t 2 ) were transliterated ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity-based Features", "sec_num": null }, { "text": "Differing characters between two terms, consisting of a pair of n : m characters (n > 0 and m > 0). 
For example, we regard \" (t)\u2192 \u03c6\" as LEX-DIFF in Figure 2 TOP.", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "LEX-DIFF", "sec_num": null }, { "text": "Previous character of DIFF. We regard \" (ge)\" as LEX-PRE in Figure 2 TOP.", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 68, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "LEX-PRE", "sec_num": null }, { "text": "Subsequent character of DIFF. We regard \" (te)\" as LEX-POST in Figure 2 TOP.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 71, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "LEX-POST", "sec_num": null }, { "text": "A script type of differing characters between two terms, classified into four categories: (1) HIRAGANA-script, (2) KATAKANA-script, (3) Chinese-character script, or (4) others (symbols, numeric expressions etc.). We regard \"KATAKANA\u2192 \u03c6\" as TYPE-DIFF in Figure 2 TOP.", "cite_spans": [], "ref_spans": [ { "start": 253, "end": 261, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "TYPE-DIFF", "sec_num": null }, { "text": "The script type of the previous character of DIFF. We regard \"KATAKANA\" as TYPE-PRE in Figure 2 TOP.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "TYPE-PRE", "sec_num": null }, { "text": "The script type of the subsequent character of DIFF. We regard \"KATAKANA\" as TYPE-POST in Figure 2 TOP.", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 98, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "TYPE-POST", "sec_num": null }, { "text": "A length (the number of characters) of differing parts. 
from the same source word (s) (defined in Table 4 ).", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 106, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "LEN-DIFF", "sec_num": null }, { "text": "Note that the latter, transliterated similarity, is applicable only when the input pair consists of transliterated terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LEN-DIFF", "sec_num": null }, { "text": "To evaluate the performance of our system, we used judged term pairs, as discussed in Section 2 (ALL-SET). We also extracted a subset of these pairs in order to focus on the transliteration problem (TRANS-SET).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test-Set", "sec_num": "4.1" }, { "text": "(1,889 orthographic variants of 5,064 pairs)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ALL-SET: This set consisted of all examples", "sec_num": "1." }, { "text": "2. TRANS-SET: This set contained only examples of transliteration (543 orthographic variants of 1,111 pairs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ALL-SET: This set consisted of all examples", "sec_num": "1." 
}, { "text": "Using the proposed method set out in Section 3, we automatically constructed a training-set from two translation dictionaries (Japan Medical Terminology English-Japanese (Nanzando, 2001 ) and 25-Thousand-Term Medical Dictionary(MEID, 2005)).", "cite_spans": [ { "start": 170, "end": 185, "text": "(Nanzando, 2001", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Training-Set", "sec_num": "4.2" }, { "text": "The resulting training-set consisted of 82,240 examples (41,120 positive examples and 41,120 negative examples).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training-Set", "sec_num": "4.2" }, { "text": "We compared the following methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Methods", "sec_num": "4.3" }, { "text": "1. SIM-ED: An edit distance-based method, which regards an input with a similarity SIM ed (t 1 , t 2 ) > T H as an orthographic variant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Methods", "sec_num": "4.3" }, { "text": "2. SIM-TR: A transliterated based method, which regards an input with a spelling similarity SIM tr (t 1 , t 2 ) > T H as an orthographic variant (TRANS-SET only).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Methods", "sec_num": "4.3" }, { "text": "3. PROPOSED: Our proposed method without SIM tr features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparative Methods", "sec_num": "4.3" }, { "text": "Our proposed method with SIM tr features. (TRANS-SET only).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PROPOSED+TR:", "sec_num": "4." }, { "text": "For SVM learning, we used TinySVM 3 with polynomial kernel (d=2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PROPOSED+TR:", "sec_num": "4." }, { "text": "We used the three following measures to evaluate our method: Table 5 presents the performance of all methods. 
The accuracy of similarity-based methods (SIM-ED and SIM-TR) varied depending on the threshold (T H). Figure 3 is a precision-recall graph of all methods in TRANS-SET. In ALL-SET, PROPOSED outperformed a similarity-based method (SIM-ED) in F \u03b2=1 , demonstrating the feasibility of the proposed discriminative approach. In TRANS-SET, PROPOSED also outperformed two similarity-based methods (SIM-ED and SIM-TR). In addition, PROPOSED+TR yielded higher levels of accuracy than PROPOSED. Based on this result, we can conclude that adding transliterated probability improved accuracy.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 5", "ref_id": "TABREF2" }, { "start": 212, "end": 220, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "Precision = #", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "It was difficult to compare accuracy between the results of our study and previous studies. Previous studies used different corpora, and also focused on (back-) transliteration. However, our accuracy levels were at least as good as those in previous studies (64% by (Knight and Graehl, 1998) and 87.7% by (Goto et al., 2004) ).", "cite_spans": [ { "start": 266, "end": 291, "text": "(Knight and Graehl, 1998)", "ref_id": "BIBREF9" }, { "start": 305, "end": 324, "text": "(Goto et al., 2004)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.5" }, { "text": "We investigated errors from PROPOSED and PROPOSED+TR, and found two main types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.6" }, { "text": "The Japanese language can be expressed using three types of script: KANJI (Chinese characters), KATAKANA, and HIRAGANA. 
Although each of these scripts can be converted to another (such as \" \" (\"epilepsia\" in KANJI script) and \" \" (\"epilepsia\" in HIRAGANA script)), our method cannot deal with this phenomenon. Future research will need to add steps to solve this problem. 2. Transliterations from non-English languages: While our experimental set consisted of medical terms, including a few transliterations from Latin or German, the transliteration probability was trained using transliterations from the English language (using a general dictionary). Therefore, PROPOSED+TR results are inferior when inputs are from non-English languages. In a general domain, SIM-TR and PROPOSED+TR would probably yield higher accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Different Script Types", "sec_num": "1." }, { "text": "As noted in Section 1, transliteration is the most relevant field to our work, because it results in many orthographic variations. Most previous transliteration studies have focused on finding the most suitable back-transliteration of a term. For example, proposed a probabilistic model for transliteration. 
Goto et al.(2004) proposed a similar method, utilizing surrounding characters.", "cite_spans": [ { "start": 308, "end": 325, "text": "Goto et al.(2004)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "5" }, { "text": "Their method is not only applicable to Japanese; it has already been used for Korean (Oh and Choi, 2002; Oh and Choi, 2005; Oh and Isahara, 2007) , Arabic(Stalls and Sherif and Kondrak, 2007) , Chinese (Li et al., 2007) , and Persian (Karimi et al., 2007) .", "cite_spans": [ { "start": 85, "end": 104, "text": "(Oh and Choi, 2002;", "ref_id": "BIBREF16" }, { "start": 105, "end": 123, "text": "Oh and Choi, 2005;", "ref_id": "BIBREF17" }, { "start": 124, "end": 145, "text": "Oh and Isahara, 2007)", "ref_id": "BIBREF18" }, { "start": 166, "end": 191, "text": "Sherif and Kondrak, 2007)", "ref_id": "BIBREF19" }, { "start": 202, "end": 219, "text": "(Li et al., 2007)", "ref_id": "BIBREF12" }, { "start": 234, "end": 255, "text": "(Karimi et al., 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "5" }, { "text": "Our method uses a different kind of task-setting, compared to previous methods. It is based on determining whether two terms within the same language are equivalent. It provides high levels of accuracy, which should be practical for many applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "5" }, { "text": "Another issue is that of how to represent transliteration phenomena. Methods can be classified into three main types: grapheme-based (Li et al., 2004) ; phoneme-based (Knight and Graehl, 1998) ; and combinations of both these methods( hybrid-model (Bilac and Tanaka, 2004) and correspondence-based model (Oh and Choi, 2002; Oh and Choi, 2005) ). Our proposed method employed a grapheme-based approach. 
We selected this kind of approach because it allows us to handle not only transliteration but also character omissions/substitutions, which we would not be able to address using a phoneme-based approach (and a combination approach). Yoon et al. (2007) also proposed a discriminative transliteration method, but their system was based on determining whether a target term was transliterated from a source term. Bergsma and Kondrak (2007) and Aramaki et al. (2007) proposed a discriminative method for similar spelling terms. However, they did not deal with a transliterated probability. Masuyama et al. (2004) collected 178,569 Japanese transliteration variants (positive examples) from a large corpus. In contrast, we collected both positive and negative examples in order to train the classifier.", "cite_spans": [ { "start": 133, "end": 150, "text": "(Li et al., 2004)", "ref_id": "BIBREF11" }, { "start": 167, "end": 192, "text": "(Knight and Graehl, 1998)", "ref_id": "BIBREF9" }, { "start": 248, "end": 272, "text": "(Bilac and Tanaka, 2004)", "ref_id": "BIBREF3" }, { "start": 304, "end": 323, "text": "(Oh and Choi, 2002;", "ref_id": "BIBREF16" }, { "start": 324, "end": 342, "text": "Oh and Choi, 2005)", "ref_id": "BIBREF17" }, { "start": 635, "end": 653, "text": "Yoon et al. (2007)", "ref_id": "BIBREF21" }, { "start": 812, "end": 838, "text": "Bergsma and Kondrak (2007)", "ref_id": null }, { "start": 843, "end": 864, "text": "Aramaki et al. (2007)", "ref_id": "BIBREF0" }, { "start": 988, "end": 1010, "text": "Masuyama et al. (2004)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "5" }, { "text": "We developed an SVM-based orthographic disambiguation classifier, incorporating transliteration probability. We also developed a method for collecting both positive and negative examples. Experimental results yielded high levels of accuracy, demonstrating the feasibility of the proposed approach. 
Our proposed classifier could become a fundamental technology for many NLP applications. The transliterated similarity (SIM tr ) between two terms (t 1 , t 2 ) is defined as follows a :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "SIM tr (t 1 , t 2 ) = \u2211 s\u2208S P (t 1 |s)P (t 2 |s),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "where S is the set of back-transliterations that can be generated from both t 1 and t 2 , and P (t|s) is the probability that a Japanese term (t) comes from a source term (s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "P (t|s) = \u220f k=1..|K| P (t k |s k ), P (t k |s k ) = (frequency of s k \u2192 t k ) / (frequency of s k ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "where |K| is the number of characters in a term t, t k is the k-th character of a term t, s k is the k-th character sequence of a term s, \"frequency of s k \u2192 t k \" is the number of occurrences of the alignment, and \"frequency of s k \" is the number of occurrences of the character s k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "To get alignment, we extracted 100,128 transliterated term pairs from a transliteration dictionary (EDP, 2005) , and estimated the alignments by using GIZA++ b . We aligned in the Japanese-to-English direction, and got 1 : m alignments (one Japanese character : m alphabetical characters) to calculate P (t k |s k ). These formulas follow (Karimi et al., 2007 ).", "cite_spans": [ { "start": 99, "end": 110, "text": "(EDP, 2005)", "ref_id": "BIBREF5" }, { "start": 339, "end": 359, "text": "(Karimi et al., 2007", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "a SIMtr(t1, t2) is a similarity (not a probability) b http://www.fjoch.com/GIZA++.html ciety for the Promotion of Science (Project Number:16200039, F.Y.2004 and 18700133, F.Y.2006 and the Research Collaboration Project (#047100001247) with Japan Anatomy Laboratory Co.Ltd.", "cite_spans": [ { "start": 122, "end": 156, "text": "(Project Number:16200039, F.Y.2004", "ref_id": null }, { "start": 157, "end": 179, "text": "and 18700133, F.Y.2006", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The domain could affect the performance, because most medical terms are imported from other languages, leading to many orthographic variants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the experiments in Section 4, we set K to 41,120, which is equal to the number of positive examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Part of this research was supported by Grant-in-Aid for Scientific Research of Japan So-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Support vector machine based orthographic disambiguation", "authors": [ { "first": "Eiji", "middle": [], "last": "Aramaki", "suffix": "" }, { "first": "Takeshi", "middle": [], "last": "Imai", "suffix": "" }, { "first": "Kengo", "middle": [], "last": "Miyo", "suffix": "" }, { "first": "Kazuhiko", "middle": [], "last": "Ohe", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Conference on Theoretical and 
Methodological Issues in Machine Translation (TMI2007)", "volume": "", "issue": "", "pages": "21--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eiji Aramaki, Takeshi Imai, Kengo Miyo, and Kazuhiko Ohe. 2007. Support vector machine based ortho- graphic disambiguation. In Proceedings of the Con- ference on Theoretical and Methodological Issues in Machine Translation (TMI2007), pages 21-30.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Alignment-based discriminative string similarity", "authors": [], "year": null, "venue": "Proceedings of the Association for Computational Linguistics (ACL2007)", "volume": "", "issue": "", "pages": "656--663", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alignment-based discriminative string similarity. In Proceedings of the Association for Computational Lin- guistics (ACL2007), pages 656-663.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A hybrid backtransliteration system for Japanese", "authors": [ { "first": "Slaven", "middle": [], "last": "Bilac", "suffix": "" }, { "first": "Hozumi", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 2004, "venue": "Proceedings of The 20th International Conference on Computational Linguistics (COLING2004)", "volume": "", "issue": "", "pages": "597--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slaven Bilac and Hozumi Tanaka. 2004. A hybrid back- transliteration system for Japanese. 
In Proceedings of The 20th International Conference on Computational Linguistics (COLING2004), pages 597-603.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Empirical observation of term variations and principles for their description", "authors": [ { "first": "B", "middle": [], "last": "Daille", "suffix": "" }, { "first": "B", "middle": [], "last": "Habert", "suffix": "" }, { "first": "C", "middle": [], "last": "Jacquemin", "suffix": "" }, { "first": "J", "middle": [], "last": "Royaut", "suffix": "" } ], "year": 1996, "venue": "Terminology", "volume": "3", "issue": "2", "pages": "197--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Daille, B. Habert, C. Jacquemin, and J. Royaut. 1996. Empirical observation of term variations and principles for their description. Terminology, 3(2):197-258.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Eijiro Japanese-English dictionary, electronic dictionary project", "authors": [ { "first": "", "middle": [], "last": "Edp", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "EDP. 2005. Eijiro Japanese-English dictionary, electronic dictionary project.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Back transliteration from Japanese to English using target English context", "authors": [ { "first": "Isao", "middle": [], "last": "Goto", "suffix": "" }, { "first": "Naoto", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Terumasa", "middle": [], "last": "Ehara", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 2004, "venue": "Proceedings of The 20th International Conference on Computational Linguistics (COLING2004)", "volume": "", "issue": "", "pages": "827--833", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isao Goto, Naoto Kato, Terumasa Ehara, and Hideki Tanaka. 2004.
Back transliteration from Japanese to English using target English context. In Proceedings of The 20th International Conference on Computational Linguistics (COLING2004), pages 827-833.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Collapsed consonant and vowel models: New approaches for English-Persian transliteration and back-transliteration", "authors": [ { "first": "Sarvnaz", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "Falk", "middle": [], "last": "Scholer", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Turpin", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL2007)", "volume": "", "issue": "", "pages": "648--655", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarvnaz Karimi, Falk Scholer, and Andrew Turpin. 2007. Collapsed consonant and vowel models: New approaches for English-Persian transliteration and back-transliteration. In Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL2007), pages 648-655.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Machine transliteration", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "4", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599-612.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Binary codes capable of correcting deletions, insertions and reversals", "authors": [ { "first": "V", "middle": [ "I" ], "last": "Levenshtein", "suffix": "" } ], "year": 1965, "venue": "Doklady Akademii Nauk SSSR", "volume": "163", "issue": "4", "pages": "845--848", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. I. Levenshtein.
1965. Binary codes capable of correcting deletions, insertions and reversals. Doklady Akademii Nauk SSSR, 163(4):845-848.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A joint source-channel model for machine transliteration", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Meeting of the Association for Computational Linguistics (ACL2004)", "volume": "", "issue": "", "pages": "159--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, Min Zhang, and Jian Su. 2004. A joint source-channel model for machine transliteration. In Proceedings of the Meeting of the Association for Computational Linguistics (ACL2004), pages 159-166.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Semantic transliteration of personal names", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Khe Chai", "middle": [], "last": "Sim", "suffix": "" }, { "first": "Jin-Shea", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "Minghui", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL2007)", "volume": "", "issue": "", "pages": "120--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, Khe Chai Sim, Jin-Shea Kuo, and Minghui Dong. 2007. Semantic transliteration of personal names.
In Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL2007), pages 120-127.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic construction of Japanese KATAKANA variant list from large corpus", "authors": [ { "first": "Takeshi", "middle": [], "last": "Masuyama", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2004, "venue": "Proceedings of The 20th International Conference on Computational Linguistics (COLING2004)", "volume": "", "issue": "", "pages": "1214--1219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takeshi Masuyama, Satoshi Sekine, and Hiroshi Nakagawa. 2004. Automatic construction of Japanese KATAKANA variant list from large corpus. In Proceedings of The 20th International Conference on Computational Linguistics (COLING2004), pages 1214-1219.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "25-Mango Medical Dictionary", "authors": [ { "first": "", "middle": [], "last": "Meid", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MEID. 2005. 25-Mango Medical Dictionary. Nichigai Associates, Inc.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Japan Medical Terminology English-Japanese 2nd Edition. Committee of Medical Terminology", "authors": [ { "first": "", "middle": [], "last": "Nanzando", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nanzando. 2001. Japan Medical Terminology English-Japanese 2nd Edition.
Committee of Medical Terminology, NANZANDO Co., Ltd.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An English-Korean transliteration model using pronunciation and contextual rules", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of The 19th International Conference on Computational Linguistics (COLING2002)", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Hoon Oh and Key-Sun Choi. 2002. An English-Korean transliteration model using pronunciation and contextual rules. In Proceedings of The 19th International Conference on Computational Linguistics (COLING2002), pages 758-764.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An ensemble of grapheme and phoneme for machine transliteration", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Second International Joint Conference on Natural Language Processing (IJCNLP2005)", "volume": "", "issue": "", "pages": "450--461", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Hoon Oh and Key-Sun Choi. 2005. An ensemble of grapheme and phoneme for machine transliteration.
In Proceedings of Second International Joint Conference on Natural Language Processing (IJCNLP2005), pages 450-461.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Machine transliteration using multiple transliteration engines and hypothesis re-ranking", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2007, "venue": "Proceedings of MT Summit XI", "volume": "", "issue": "", "pages": "353--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Hoon Oh and Hitoshi Isahara. 2007. Machine transliteration using multiple transliteration engines and hypothesis re-ranking. In Proceedings of MT Summit XI, pages 353-360.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Substring-based transliteration", "authors": [ { "first": "Tarek", "middle": [], "last": "Sherif", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL2007)", "volume": "", "issue": "", "pages": "944--951", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tarek Sherif and Grzegorz Kondrak. 2007. Substring-based transliteration.
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL2007), pages 944-951.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Translating names and technical terms in Arabic text", "authors": [ { "first": "Bonnie", "middle": [], "last": "Glover Stalls", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "Proceedings of The International Conference on Computational Linguistics and the 36th Annual Meeting of the Association of Computational Linguistics (COLING-ACL1998) Workshop on Computational Approaches to Semitic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie Glover Stalls and Kevin Knight. 1998. Translating names and technical terms in Arabic text. In Proceedings of The International Conference on Computational Linguistics and the 36th Annual Meeting of the Association of Computational Linguistics (COLING-ACL1998) Workshop on Computational Approaches to Semitic Languages.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Multilingual transliteration using feature based phonetic method", "authors": [ { "first": "Su-Youn", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Kyoung-Young", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL2007)", "volume": "", "issue": "", "pages": "112--119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su-Youn Yoon, Kyoung-Young Kim, and Richard Sproat. 2007. Multilingual transliteration using feature based phonetic method.
In Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL2007), pages 112-119.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Figure 1: Similarity Threshold and Orthographic Variants Ratio.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "A Positive Example (TOP) and A Negative Example (BOTTOM).", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Precision = # of pairs found and correct / total # of pairs found, Recall = # of pairs found and correct / total # of pairs correct, F \u03b2=1 = (2 \u00d7 Recall \u00d7 Precision) / (Recall + Precision).", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "3 http://chasen.org/~taku/software/TinySVM/ SIM and orthographic variants ratio.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "num": null, "content": "
spaghetti    Thompson operation
", "html": null, "type_str": "table", "text": "Examples of Orthographic Variants." }, "TABREF1": { "num": null, "content": "", "html": null, "type_str": "table", "text": "Character-based Features." }, "TABREF2": { "num": null, "content": "
: Results
             ALL-SET                        TRANS-SET
             Precision  Recall  F \u03b2=1    Precision  Recall  F \u03b2=1
SIM-ED       65.2%      64.6%   0.65       91.2%      36.3%   0.51
SIM-TR       -          -       -          92.6%      43.9%   0.59
PROPOSED     78.2%      70.2%   0.73       81.9%      75.6%   0.78
PROPOSED+TR  -          -       -          81.7%      82.7%   0.82
", "html": null, "type_str": "table", "text": "The performance in SIM-ED and SIM-TR showed the highest F \u03b2=1 values." }, "TABREF3": { "num": null, "content": "", "html": null, "type_str": "table", "text": "Transliterated Similarity (SIM tr )." } } } }