{ "paper_id": "I08-1031", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:30.760022Z" }, "title": "Hypothesis Selection in Machine Transliteration: A Web Mining Approach", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Group National Institute of Information and Communications Technology", "institution": "", "location": { "addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "" }, { "first": "Hitoshi", "middle": [], "last": "Isahara", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Group National Institute of Information and Communications Technology", "institution": "", "location": { "addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "isahara@nict.go.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a new method of selecting hypotheses for machine transliteration. We generate a set of Chinese, Japanese, and Korean transliteration hypotheses for a given English word. We then use the set of transliteration hypotheses as a guide to finding relevant Web pages and mining contextual information for the transliteration hypotheses from the Web page. Finally, we use the mined information for machine-learning algorithms including support vector machines and maximum entropy model designed to select the correct transliteration hypothesis. In our experiments, our proposed method based on Web mining consistently outperformed systems based on simple Web counts used in previous work, regardless of the language. 1)\u00b8X/W 1 \u00ff=\u00ff 2 (Adrienne 1 Clarkson 2) 2)\u00b0\u00eb\u00b3-\u00b9 1 \u00aa\u2022\u00c0-\u00bc 2 (glucose 1 oxidase 2) 3) n Z t 1 6 \u00a4r ] j 2 (diphenol 1 oxidase 2) Note that the subscripted numbers in all examples represent the correspondence between the English word and its CJK counterpart. These parenthetical expressions are very useful in selecting translit", "pdf_parse": { "paper_id": "I08-1031", "_pdf_hash": "", "abstract": [ { "text": "We propose a new method of selecting hypotheses for machine transliteration. We generate a set of Chinese, Japanese, and Korean transliteration hypotheses for a given English word. We then use the set of transliteration hypotheses as a guide to finding relevant Web pages and mining contextual information for the transliteration hypotheses from the Web page. Finally, we use the mined information for machine-learning algorithms including support vector machines and maximum entropy model designed to select the correct transliteration hypothesis. In our experiments, our proposed method based on Web mining consistently outperformed systems based on simple Web counts used in previous work, regardless of the language. 1)\u00b8X/W 1 \u00ff=\u00ff 2 (Adrienne 1 Clarkson 2) 2)\u00b0\u00eb\u00b3-\u00b9 1 \u00aa\u2022\u00c0-\u00bc 2 (glucose 1 oxidase 2) 3) n Z t 1 6 \u00a4r ] j 2 (diphenol 1 oxidase 2) Note that the subscripted numbers in all examples represent the correspondence between the English word and its CJK counterpart. 
These parenthetical expressions are very useful in selecting transliteration hypotheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine transliteration has been a great challenge for cross-lingual information retrieval and machine translation systems. Many researchers have developed machine transliteration systems that accept a source language term as input and then output its transliteration in a target language (Al-Onaizan and Knight, 2002; Goto et al., 2003; Kang and Kim, 2000; Li et al., 2004; Meng et al., 2001; Oh and Choi, 2002). Some of these have used the Web to select machine-generated transliteration hypotheses and have obtained promising results (Al-Onaizan and Knight, 2002). More precisely, they used simple Web counts, estimated as the number of hits (Web pages) retrieved by a Web search engine.", "cite_spans": [ { "start": 289, "end": 318, "text": "(Al-Onaizan and Knight, 2002;", "ref_id": "BIBREF0" }, { "start": 319, "end": 337, "text": "Goto et al., 2003;", "ref_id": "BIBREF2" }, { "start": 338, "end": 357, "text": "Kang and Kim, 2000;", "ref_id": "BIBREF6" }, { "start": 358, "end": 374, "text": "Li et al., 2004;", "ref_id": "BIBREF8" }, { "start": 375, "end": 393, "text": "Meng et al., 2001;", "ref_id": "BIBREF9" }, { "start": 394, "end": 412, "text": "Oh and Choi, 2002;", "ref_id": "BIBREF11" }, { "start": 538, "end": 567, "text": "(Al-Onaizan and Knight, 2002;", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, there are several limitations on the ability of Web counts to select a correct transliteration hypothesis. First, the assumption that hit counts approximate the Web frequency of a given query usually introduces noise (Lapata and Keller, 2005). Moreover, some Web search engines disregard punctuation and capitalization when matching search terms (Lapata and Keller, 2005). This can cause errors if such Web counts are relied on to select transliteration hypotheses. Second, it is not easy to consider the contexts of transliteration hypotheses with Web counts, because Web counts are estimated from the number of retrieved Web pages. However, as our preliminary work showed, transliteration or translation pairs often appear as parenthetical expressions or tend to be in close proximity in texts; thus context can play an important role in selecting transliteration hypotheses. For example, Chinese, Japanese, and Korean (CJK) transliterations often appear alongside their counterparts in parenthetical expressions, as in the examples above, where English terms such as \"Adrienne Clarkson\", \"glucose oxidase\", and \"diphenol oxidase\" are paired with their CJK transliterations. These parenthetical expressions are very useful in selecting transliteration hypotheses because it is apparent that they are translation pairs or transliteration pairs. However, we cannot fully use such information with Web counts.", "cite_spans": [ { "start": 234, "end": 259, "text": "(Lapata and Keller, 2005)", "ref_id": "BIBREF7" }, { "start": 364, "end": 389, "text": "(Lapata and Keller, 2005)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these problems, we propose a new method of selecting transliteration hypotheses. We were interested in how to mine information relevant to the selection of hypotheses and how to select correct transliteration hypotheses using the mined information. To do this, we generated a set of CJK transliteration hypotheses for a given English word. 
We then used the set of transliteration hypotheses as a guide to finding relevant Web pages and mining contextual information for the transliteration hypotheses from those pages. Finally, we used the mined information in machine-learning algorithms, including support vector machines (SVMs) and a maximum entropy model, designed to select the correct transliteration hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows. Section 2 describes previous work based on simple Web counts. Section 3 describes a way of generating transliteration hypotheses. Sections 4 and 5 introduce our methods of Web mining and selecting transliteration hypotheses. Sections 6 and 7 deal with our experiments and the discussion. Conclusions are drawn and future work is discussed in Section 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Web counts have been used for selecting transliteration hypotheses in several previous studies (Al-Onaizan and Knight, 2002). Because the Web counts are estimated as the number of hits returned by a Web search engine, they greatly depend on the queries sent to the search engine. Previous work has used three types of queries: monolingual queries (MQs) (Al-Onaizan and Knight, 2002), bilingual simple queries (BSQs), and bilingual bigram queries (BBQs). If we let S be a source language term and H = {h_1, ..., h_r} be a set of machine-generated transliteration hypotheses of S, the three types of queries can be defined as follows. MQ: h_i alone (e.g., 克林顿, クリントン, and 클린턴 for Clinton). BSQ: S and h_i without quotation marks (e.g., Clinton 克林顿, Clinton クリントン, and Clinton 클린턴).", "cite_spans": [ { "start": 92, "end": 121, "text": "(Al-Onaizan and Knight, 2002;", "ref_id": "BIBREF0" }, { "start": 335, "end": 364, "text": "(Al-Onaizan and Knight, 2002;", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Quoted bigrams composed of S and h_i (e.g., \"Clinton 克林顿\", \"Clinton クリントン\", and \"Clinton 클린턴\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BBQ:", "sec_num": null }, { "text": "MQ cannot determine whether h_i is a counterpart of S; it only indicates whether h_i is a frequently used term in target-language texts. BSQ retrieves Web pages in which S and h_i are present in the same document, but it does not take the distance between S and h_i into consideration. BBQ retrieves Web pages where \"S h_i\" or \"h_i S\" is present as a bigram. The relative order of Web counts over H makes it possible to select transliteration hypotheses in the previous work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BBQ:", "sec_num": null }, 
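{ "text": "To make the three query types concrete, here is a minimal Python sketch (ours, not from the original systems; the helper name is hypothetical) that builds the query strings for a source term S and a hypothesis h_i:

def make_queries(s, h):
    mq = h                            # MQ: the hypothesis alone
    bsq = s + ' ' + h                 # BSQ: two unquoted terms
    bbq = '\"' + s + ' ' + h + '\"'     # BBQ: the pair quoted as one bigram
    return mq, bsq, bbq

mq, bsq, bbq = make_queries('Clinton', '克林顿')", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, 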
{ "text": "Let S be an English word, P be a pronunciation of S, and T be a target language transliteration corresponding to S. We implement English-to-CJK transliteration systems based on three different transliteration models -a grapheme-based model (S → T), a phoneme-based model (S → P and P → T), and a correspondence-based model (S → P and (S, P) → T) -as described in our preliminary work. P and T are segmented into a series of sub-strings, each of which corresponds to a source grapheme. We can thus write S = s_1, ..., s_n = s_1^n, P = p_1, ..., p_n = p_1^n, and T = t_1, ..., t_n = t_1^n, where s_i, p_i, and t_i represent the i-th English grapheme, the English phonemes corresponding to s_i, and the target language graphemes corresponding to s_i, respectively. Given S, our transliteration systems generate a sequence of t_i corresponding to either s_i (in Eq. (1)) or p_i (in Eq. (2)) or both of them (in Eq. (3)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Transliteration Hypotheses", "sec_num": "3" }, { "text": "Pr_G(T|S) = Pr(t_1^n | s_1^n) (1); Pr_P(T|S) = Pr(p_1^n | s_1^n) × Pr(t_1^n | p_1^n) (2); Pr_C(T|S) = Pr(p_1^n | s_1^n) × Pr(t_1^n | s_1^n, p_1^n) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Transliteration Hypotheses", "sec_num": "3" }, { "text": "The maximum entropy model was used to estimate the probabilities in Eqs. (1)-(3). We produced the n-best transliteration hypotheses using a stack decoder (Schwartz and Chow, 1990). We then created a set of transliteration hypotheses comprising the n-best transliteration hypotheses.", "cite_spans": [ { "start": 151, "end": 176, "text": "(Schwartz and Chow, 1990)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Transliteration Hypotheses", "sec_num": "3" }, 
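{ "text": "As an illustration of Eqs. (1)-(3) and the n-best generation, the following Python sketch (ours; the probability tables and the exhaustive n-best search are hypothetical stand-ins for the maximum entropy estimates and the stack decoder) scores one segmented candidate under each model:

import heapq

# Hypothetical probability tables, keyed by (target, source) etc.;
# the paper estimates these with the maximum entropy model.
pr_t_given_s, pr_p_given_s, pr_t_given_p, pr_t_given_sp = {}, {}, {}, {}

def score_candidate(model, segments):
    # segments: aligned (s_i, p_i, t_i) units of one candidate T
    score = 1.0
    for s, p, t in segments:
        if model == 'G':    # Eq. (1): Pr(t|s)
            score *= pr_t_given_s[(t, s)]
        elif model == 'P':  # Eq. (2): Pr(p|s) * Pr(t|p)
            score *= pr_p_given_s[(p, s)] * pr_t_given_p[(t, p)]
        else:               # Eq. (3): Pr(p|s) * Pr(t|s, p)
            score *= pr_p_given_s[(p, s)] * pr_t_given_sp[(t, s, p)]
    return score

def n_best(candidates, model, n=10):
    # The paper uses a stack decoder; exhaustive search is a stand-in here.
    return heapq.nlargest(n, candidates, key=lambda c: score_candidate(model, c))

The union of the n-best lists of the three models gives the hypothesis set H, so n ≤ r ≤ 3 × n (see footnote 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Transliteration Hypotheses", "sec_num": "3" }, 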
{ "text": "Let S be an English word and H = {h_1, ..., h_r} be its machine-generated set of transliteration hypotheses. We use S and H to generate queries sent to a search engine 1 to retrieve the top-100 snippets. A correct transliteration and its counterpart tend to be in close proximity on CJK Web pages. Our goal in Web mining was to find such Web pages and mine information that would help to select transliteration hypotheses from these pages. To find these Web pages, we used three kinds of queries, Q_1 = (S and h_i), Q_2 = S, and Q_3 = h_i, where Q_1 is the same as BSQ's query and Q_3 is the same as MQ's. The three queries usually result in different sets of Web pages. We categorize the Web pages retrieved by Q_1, Q_2, and Q_3 into W_1, W_2, and W_3. We extract three kinds of features from W_l as follows, where l = 1, 2, 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, { "text": "• Freq(h_i, W_l): the number of occurrences of h_i in W_l. • DFreq_k(h_i, W_l): co-occurrence of S and h_i with distance d_k ∈ D in the same snippet of W_l. • PFreq_k(h_i, W_l): co-occurrence of S and h_i as parenthetical expressions with distance d_k ∈ D in the same snippet of W_l. Parenthetical expressions are detected when either S or h_i is in parentheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, { "text": "We define D = {d_1, d_2, d_3} with three ranges of distances between S and h_i, where d_1 (d < 5), d_2 (5 ≤ d < 10), and d_3 (10 ≤ d ≤ 15). We counted the distance d as the total number of characters (or words) 2 between S and h_i. Here, we can take the contexts of transliteration hypotheses into account using DFreq and PFreq, while Freq is counted regardless of the contexts of the transliteration hypotheses. Figure 1 shows examples of how to calculate Freq, DFreq_k, and PFreq_k, where S = Clinton and h_i = 克林顿 in W_1 collected by Q_1 = (Clinton 克林顿).", "cite_spans": [], "ref_spans": [ { "start": 412, "end": 420, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, { "text": "Snippet 1: 美国前总统克林顿 1 (Bill Clinton 1 )日获得他生平第二座葛莱美奖，而为他夺得葛莱美诵读类奖的正是他的畅销回忆录《我的人生》(My Life)。克林顿 2 去年也曾获得葛莱美奖的最佳儿童诵读奖项，其妻希拉蕊克林顿 3 (Hillary Rodham Clinton 2 )则在1997年以自己的 ... Snippet 2: ::克林顿 4 (Clinton 3 )立竿见影帮助克 1 里(Kerry):: 克 2 里(John Kerry)身边的选民，他们试图把未作决定的选民从投票站吓跑，克林顿 5 (Clinton 4 )说，他还计划于星期一在佛罗里达州有一个单独的选事。他批评了布什(Bush)的\"老一套\"坏政策。克林顿 6 (Clinton 5 )和克 3 里(Kerry) ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, { "text": "Figure 1: Web corpora collected by Clinton and 克林顿 (W_1: Q_1 = (Clinton 克林顿))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, { "text": "Table 1: Distance between Clinton and 克林顿 within each snippet of W_1. Snippet 1 - Clinton_1: 1, 41, 68; Clinton_2: 72, 29, 2 (columns 克林顿_1 to 克林顿_3). Snippet 2 - Clinton_3: 0, 36, 81; Clinton_4: 40, 0, 37; Clinton_5: 85, 41, 0 (columns 克林顿_4 to 克林顿_6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, { "text": "The subscripted numbers of Clinton and 克林顿 indicate how many times each occurred in W_1. In Fig. 1, 克林顿 occurs six times, thus Freq(h_i, W_1) = 6. Table 1 lists the distances between Clinton and 克林顿 within each snippet of W_1. We can obtain DFreq_1(h_i, W_1) = 5. PFreq_1(h_i, W_l) is calculated by detecting parenthetical expressions between S and h_i when DFreq_1(h_i, W_l) is counted. Because all S in W_1 (Clinton_1 to Clinton_5) are in parentheses, PFreq_1(h_i, W_1) is the same as DFreq_1(h_i, W_1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, { "text": "We ignore Freq, DFreq_k, and PFreq_k when h_i is a substring of other transliteration hypotheses, because h_i usually has higher Freq, DFreq_k, and PFreq_k than h_j if h_i is a substring of h_j. Let the set of transliteration hypotheses for S = Clinton be H = {h_1 = 克林顿, h_2 = 克}. Here, h_2 is a substring of h_1. In Fig. 1, h_2 appears six times as a substring of h_1 and three times independently in Snippet 2. Moreover, the independently used h_2 (克_1, 克_2, and 克_3) and S (Clinton_3 and Clinton_5) are sufficiently close to count DFreq_k and PFreq_k. Therefore, the Freq, DFreq_k, and PFreq_k of h_1 would be lower than those of h_2 if we did not take the substring relation between h_1 and h_2 into account. Considering the substring relation, we obtain Freq(h_2, W_1) = 3, DFreq_1(h_2, W_1) = 1, DFreq_2(h_2, W_1) = 2, PFreq_1(h_2, W_1) = 1, and PFreq_2(h_2, W_1) = 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, 
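{ "text": "A minimal sketch of this feature mining (ours; tokenization, the parenthesis test, and the masking-based substring filter are simplified, hypothetical stand-ins for the actual implementation):

import re

BINS = [(0, 5), (5, 10), (10, 16)]  # d_1: d < 5, d_2: 5 <= d < 10, d_3: 10 <= d <= 15

def mine_features(s, h, snippets, other_hyps):
    # Returns Freq(h, W_l), DFreq_k(h, W_l), and PFreq_k(h, W_l) for one corpus W_l.
    freq, dfreq, pfreq = 0, [0, 0, 0], [0, 0, 0]
    longer = [o for o in other_hyps if h in o and h != o]
    for snip in snippets:
        # Substring filter: hide occurrences of h inside a longer hypothesis.
        masked = snip
        for o in longer:
            masked = masked.replace(o, '#' * len(o))  # length-preserving mask
        h_occ = [(m.start(), m.end()) for m in re.finditer(re.escape(h), masked)]
        s_occ = [(m.start(), m.end()) for m in re.finditer(re.escape(s), snip)]
        freq += len(h_occ)
        for hs, he in h_occ:
            for ss, se in s_occ:
                d = hs - se if hs >= se else ss - he  # characters between the terms
                for k, (lo, hi) in enumerate(BINS):
                    if lo <= d < hi:
                        dfreq[k] += 1
                        # Crude parenthetical test: either term directly follows '('.
                        if snip[max(0, hs - 1)] == '(' or snip[max(0, ss - 1)] == '(':
                            pfreq[k] += 1
    return freq, dfreq, pfreq", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Web Mining", "sec_num": "4" }, 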
{ "text": "We select transliteration hypotheses by ranking them. A set of transliteration hypotheses, H = {h_1, h_2, ..., h_r}, is ranked to enable a correct hypothesis to be identified. We devise a rank function, g(h_i) in Eq. (4), that ranks a correct transliteration hypothesis higher and the others lower: g(h_i) : H → {R : R is an ordering of h_i ∈ H} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis Selection", "sec_num": "5" }, { "text": "Let x_i ∈ X be the feature vector of h_i ∈ H, y_i ∈ {+1, -1} be the training label for x_i, and TD = {td_1 = <x_1, y_1>, ..., td_z = <x_z, y_z>} be the training data for g(h_i). We prepare the training data for g(h_i) as follows. 1. Given each English word S in the training set, generate transliteration hypotheses H. 2. Given h_i ∈ H, assign y_i by looking for S and h_i in the training set: y_i = +1 if h_i is a correct transliteration hypothesis corresponding to S, otherwise y_i = -1. 3. For each pair (S, h_i), generate its feature vector x_i. 4. Construct the training data set TD = TD+ ∪ TD-, where TD+ contains the td_i with y_i = +1 and TD- contains the td_j with y_j = -1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis Selection", "sec_num": "5" }, { "text": "We used two machine-learning algorithms, support vector machines (SVMs) 3 and the maximum entropy model 4 , for our implementation of g(h_i). The SVMs assign a value to each transliteration hypothesis h_i using g_SVM(h_i) = w • x_i + b (5), where w denotes a weight vector and b a bias term. Here, we use the predicted value of g_SVM(h_i) rather than the predicted class of h_i given by the SVMs, because our ranking function, as represented by Eq. (4), determines the relative ordering between h_i and h_j in H. A ranking function based on the maximum entropy model assigns a probability to h_i using g_MEM(h_i) = Pr(y_i = +1 | x_i) (6). We can finally obtain a ranked list for the given H: the higher the g(h_i) value, the better the h_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis Selection", "sec_num": "5" }, 
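{ "text": "A minimal sketch of this training and ranking step (ours; scikit-learn is a hypothetical stand-in for the SVM-light and maximum entropy toolkit implementations cited in the footnotes):

from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

def train_ranker(X, y, kind='svm'):
    # X: feature vectors x_i; y: labels y_i in {+1, -1} from the training set
    if kind == 'svm':
        return SVC(kernel='linear').fit(X, y)
    # logistic regression as a maximum-entropy-style binary model
    return LogisticRegression().fit(X, y)

def rank_hypotheses(model, H, X, use_proba=False):
    # Eq. (5): SVM decision value; Eq. (6): Pr(y = +1 | x) from the maxent-style model
    scores = model.predict_proba(X)[:, 1] if use_proba else model.decision_function(X)
    return sorted(zip(H, scores), key=lambda pair: -pair[1])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis Selection", "sec_num": "5" }, 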
{ "text": "We represent the feature vector, x_i, with two types of features. The first is the confidence scores of h_i given by Eqs. (1)-(3), and the second is the Web-based features -Freq, DFreq_k, and PFreq_k. To normalize Freq, DFreq_k, and PFreq_k, we use their relative frequency over H as in Eqs. (7)-(9), where k = 1, 2, 3 and l = 1, 2, 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.1" }, { "text": "RF(h_i, W_l) = Freq(h_i, W_l) / Σ_{h_j ∈ H} Freq(h_j, W_l) (7); RDF_k(h_i, W_l) = DFreq_k(h_i, W_l) / Σ_{h_j ∈ H} DFreq_k(h_j, W_l) (8); RPF_k(h_i, W_l) = PFreq_k(h_i, W_l) / Σ_{h_j ∈ H} PFreq_k(h_j, W_l) (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.1" }, { "text": "Figure 2 shows how to construct the feature vector x_i from a given English word, Rachel, and its Chinese hypotheses, H, generated by our transliteration systems. We can obtain r Chinese transliteration hypotheses and classify them into positive and negative samples according to y_i. Note that y_i = +1 if and only if h_i is registered as a counterpart of S in the training data. The bottom of Fig. 2 shows our feature set representing x_i. There are three confidence scores in Pr(h_i|S) according to the transliteration models, and three sets of Web-based features, Web(W_1), Web(W_2), and Web(W_3).", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null }, { "start": 394, "end": 400, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Features", "sec_num": "5.1" }, 
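{ "text": "A short sketch of the normalization in Eqs. (7)-(9) and of assembling x_i (ours; the dictionary layout is illustrative, and the feature ordering follows Figure 2 below):

def relative(counts):
    # counts: {hypothesis: raw Freq, DFreq_k, or PFreq_k over H}; Eqs. (7)-(9)
    total = sum(counts.values())
    return {h: c / total if total else 0.0 for h, c in counts.items()}

def feature_vector(h, conf, web):
    # conf: [Pr_G, Pr_P, Pr_C] for h; web[l]: normalized RF, RDF_1..3, RPF_1..3 for W_l
    x = list(conf)
    for l in (1, 2, 3):
        x.append(web[l]['RF'][h])
        x += [web[l]['RDF'][k][h] for k in (1, 2, 3)]
        x += [web[l]['RPF'][k][h] for k in (1, 2, 3)]
    return x  # 3 confidence scores + 3 * 7 Web features = 24 dimensions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.1" }, 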
2", "ref_id": null }, { "start": 599, "end": 607, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Features", "sec_num": "5.1" }, { "text": "\u96f7\u5947\u5c14 \u62c9\u8d6b\u5c14 \u62c9\u5207\u5c14 \u96f7\u8d6b\u5c14 \u96f7\u514b\u5c14 \u96f7\u5207\u5c14 h r \u2026 h 5 h 4 h 3 h 2 h 1 H -1 -1 -1 -1 -1 +1 y r \u2026 y 5 y 4 y 3 y 2 y 1 Y Rachel RF(h i ,W 1 ) RDF 1 (h i ,W 1 ) RDF 2 (h i ,W 1 ) RDF 3 (h i ,W 1 ) RPF 1 (h i ,W 1 ) RPF 2 (h i ,W 1 ) RPF 3 (h i ,W 1 ) Web (W 1 ) RF(W 3 ) RDF 1 (h i ,W 3 ) RDF 2 (h i ,W 3 ) RDF 3 (h i ,W 3 ) RPF 1 (h i ,W 3 ) RPF 2 (h i ,W 3 ) RPF 3 (h i ,W 3 ) RF(h i ,W 2 ) RDF 1 (h i ,W 2 ) RDF 2 (h i ,W 2 ) RDF 3 (h i ,W 2 ) RPF 1 (h i ,W 2 ) RPF 2 (h i ,W 2 ) RPF 3 (h i ,W 2 ) Pr G (h i |S) Pr P (h i |S) Pr C (h i |S) Web (W 3 ) Web (W 2 ) Pr(h i |S) x i td 1 \u2208 TD + td 2 , td 3 , td 4 , td 5 ,\u2026,td r \u2208 TD - x r \u2026 x 5 x 4 x 3 x 2 x 1 X", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.1" }, { "text": "We evaluated the effectiveness of our system in selecting CJK transliteration hypotheses. We used the same test set used in Li et al. (2004) (ECSet) for Chinese transliterations (Xinhua News Agency, 1992) and those used in for Japanese and Korean transliterations -EJSET and EK-SET (Breen, 2003; Nam, 1997 We compared our transliteration system with three previous ones, all of which were based on a grapheme-based model (Goto et al., 2003; Kang and Kim, 2000; Li et al., 2004) . LI04 6 is an Englishto-Chinese transliteration system, which simultaneously takes English and Chinese contexts into consideration (Li et al., 2004) . KANG00 is an Englishto-Korean transliteration system and GOTO03 is an English-to-Japanese one -they segment a chunk of English graphemes and identify the most relevant sequence of target graphemes corresponding to the chunk (Goto et al., 2003; Kang and Kim, 2000) 7 . GM, PM, and CM, which are respectively based on Eqs. (1)-(3), are the transliteration systems we used for generating transliteration hypotheses. Our transliteration systems showed comparable or better performance than the previous ones regardless of the language. We compared simple Web counts with our Web mining for hypothesis selection. We used the same set of transliteration hypotheses H then compared their performance in hypothesis selection with two measures, relative frequency and g(h i ). Tables 4 and 5 list the results. Here, \"Upper bound\" is a system that always selects the correct transliteration hypothesis if there is a correct one in H. also be regarded as the \"Coverage\" of H generated by our transliteration systems. MQ, BSQ, and BBQ in the upper section of Table 4 , represent hypothesis selection systems based on the relative frequency of Web counts over H, the same measure used in :", "cite_spans": [ { "start": 124, "end": 140, "text": "Li et al. 
(2004)", "ref_id": "BIBREF8" }, { "start": 178, "end": 204, "text": "(Xinhua News Agency, 1992)", "ref_id": "BIBREF16" }, { "start": 282, "end": 295, "text": "(Breen, 2003;", "ref_id": "BIBREF1" }, { "start": 296, "end": 305, "text": "Nam, 1997", "ref_id": "BIBREF10" }, { "start": 421, "end": 440, "text": "(Goto et al., 2003;", "ref_id": "BIBREF2" }, { "start": 441, "end": 460, "text": "Kang and Kim, 2000;", "ref_id": "BIBREF6" }, { "start": 461, "end": 477, "text": "Li et al., 2004)", "ref_id": "BIBREF8" }, { "start": 610, "end": 627, "text": "(Li et al., 2004)", "ref_id": "BIBREF8" }, { "start": 854, "end": 873, "text": "(Goto et al., 2003;", "ref_id": "BIBREF2" }, { "start": 874, "end": 893, "text": "Kang and Kim, 2000)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 141, "end": 148, "text": "(ECSet)", "ref_id": null }, { "start": 1677, "end": 1684, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W ebCounts x (h i ) h j \u2208H W ebCounts x (h j )", "eq_num": "(10)" } ], "section": "Experiments", "sec_num": "6" }, { "text": "where W ebCounts x (h i ) is a function returning Web counts retrieved by x \u2208 {MQ, BSQ, BBQ} RF (W l ), RDF (W l ), and RP F (W l ) in Table 4 represent hypothesis selection systems with their relative frequency, where RDF (W l ) and RP F (W l ) use", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "3 k=1 RDF k (h j , W l ) and 3 k=1 RP F k (h j , W l ), respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "The comparison in Table 4 shows which is best for selecting transliteration hypotheses when each relative frequency is used alone. Table 5 compares Web counts with features mined from the Web when they are used as features in", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 131, "end": 138, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "g(h i ) -{P r(h i |S), W eb(W l )} in MEM W M and SV M W M (our proposed method), while {P r(h i |S), W ebCounts x (h i )} in MEM W C and SV M W C .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Here, W eb(W l ) is a set of mined features from W l as described in Fig .2 (1976). 1987--1991. \u8def\u6613\u65af\u2022 \u8d1d\u5c14\u7eb3\u591a\u2022\u7fc1\u74e6\u7eb3\u5148\u751f. 2001--2005. \u7eb3\u7c73\u6bd4\u4e9a. 1993--1997 (1976). 1987--1991. \u8def\u6613\u65af\u2022 \u8d1d\u5c14\u7eb3\u591a\u2022\u7fc1\u74e6\u7eb3\u5148\u751f. 2001--2005. \u7eb3\u7c73\u6bd4\u4e9a. 1993--1997.. ..", "cite_spans": [ { "start": 76, "end": 141, "text": "(1976). 1987--1991. \u8def\u6613\u65af\u2022 \u8d1d\u5c14\u7eb3\u591a\u2022\u7fc1\u74e6\u7eb3\u5148\u751f. 2001--2005. \u7eb3\u7c73\u6bd4\u4e9a. 1993--1997", "ref_id": null }, { "start": 142, "end": 209, "text": "(1976). 1987--1991. \u8def\u6613\u65af\u2022 \u8d1d\u5c14\u7eb3\u591a\u2022\u7fc1\u74e6\u7eb3\u5148\u751f. 2001--2005. \u7eb3\u7c73\u6bd4\u4e9a. 
1993--1997..", "ref_id": null } ], "ref_spans": [ { "start": 69, "end": 75, "text": "Fig .2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "\"\u7f57\u514b\u5229\u592b \u7f57\u514b\u5229\u592b\" \" (meaning (meaning Rawcliffe Rawcliffe) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Snippet 3 retrieved by MQ: \"", "sec_num": null }, { "text": "Snippet 4 retrieved by MQ: \" \"\u5965\u5c14\u5fb7\u897f \u5965\u5c14\u5fb7\u897f\" \" (meaning (meaning A Aldersey ldersey) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Snippet 3 retrieved by MQ: \"", "sec_num": null }, { "text": "The results in the tables show that our systems consistently outperformed systems based on Web counts, especially for Chinese. This was due to the difference between languages. Japanese and Chinese do not use spaces between words. However, Japanese is written using three different alphabet systems, called Hiragana, Katakana, and Kanji, that assist word segmentation. Moreover, words written in Katakana are usually Japanese transliterations of foreign words. This makes it possible for a Web search engine to effectively retrieve Web pages containing given Japanese transliterations. Like English, Korean has spaces between words (or word phrases). As the spaces in the languages reduce ambiguity in segmenting words, a Web search engine can correctly identify Web pages containing given Korean transliterations. In contrast, there is a severe word-segmentation problem with Chinese that causes Chinese Web search engines to incorrectly retrieve Web pages, as shown in Fig. 3 . For example, Snippet 1 is not related to \"Aman\" but to \"a man\". Snippet 2 contains a super-string of a given Chinese query, which corresponds to \"Academy\" rather than to \"Agard\", which is the English counterpart of the Chinese transliteration\u00b8A. Moreover, Web search engines ignore punctuation marks in Chinese. In Snippet 3 and Snippet 4 , \",\" and \"\u2022\" in the underlined terms are disregarded, so the Web counts based on such Web documents are noisy. Thus, noise in the Chinese Web counts causes systems based on Web counts to produce more errors than our systems do. Our proposed method can filter out such noise because our systems take punctuation marks and the contexts of transliterations in Web mining into consideration. Thus, our systems based on features mined from the Web were able to achieve the best performance. The results revealed that our systems based on the Web-mining technique can effectively be used to select transliteration hypotheses regardless of the language. In Web mining, we used W 1 , W 2 , and W 3 , collected by respective queries Q 1 =(S and h i ), Q 2 =S, and Q 3 =h i . To investigate their contribution, we tested our proposed method with different combinations of Web corpora. \"Base\" is a baseline system that only uses P r(h i |S) as features but does not use features mined from the Web. We added features mined from different combinations of Web corpora to \"Base\" from W 1 to W All .", "cite_spans": [], "ref_spans": [ { "start": 971, "end": 977, "text": "Fig. 
3", "ref_id": null } ], "eq_spans": [], "section": "Figure 3: Snippets causing errors in Web counts", "sec_num": null }, { "text": "In Table 6 , we can see that W 1 , a set of Web pages retrieved by Q 1 , tends to give more relevant information than W 2 and W 3 , because Q 1 can search more Web pages containing both S and h i in the top-100 snippets if S and h i are a correct transliteration pair. Therefore, its performance tends to be superior in Table 6 if W 1 is used, especially for ECSet. However, as W 1 occasionally retrieves few snippets, it is not able to provide sufficient information. Using W 2 or W 3 , we can address the problem. Thus, combinations of W 1 and others (W 1+2 , W 1+3 , W All ) provided better W A than W 1 .", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 6", "ref_id": null }, { "start": 320, "end": 327, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Contribution of Web corpora", "sec_num": "6.2" }, { "text": "Several Web mining techniques for transliteration lexicons have been developed in the last few years (Jiang et al., 2007; . The main difference between ours and those previous ones is in the way a set of transliteration hypotheses (or candidates) is created. Jiang et al. (2007) generated Chinese transliterations for given English words and searched the Web using the transliterations. They generated only the best transliteration hypothesis and focused on Web mining to select transliteration lexicons rather than selecting transliteration hypotheses. The best transliteration hypothesis was used to guide Web searches. Then, transliteration candidates were mined from the retrieved Web pages. Therefore, their performance greatly depended on their ability to mine transliteration candidates from the Web. However, this system might create errors if it cannot find a correct transliteration candidate from the retrieved Web pages. Because of this, their system's coverage and W A were relatively poor than ours 8 . However, our transliteration process was able to generate a set of transliteration hypotheses with excellent coverage and could thus achieve superior W A. searched the Web using given source words and mined the retrieved Web pages to find target-language transliteration candidates. They extracted all possible sequences of target-language characters from the retrieved Web snippets as transliteration candidates for which the beginnings and endings of the given source word and the extracted transliteration candidate were phonetically similar. However, while this can exponentially increase the number of transliteration candidates, ours used the n-best transliteration hypotheses but still achieved excellent coverage.", "cite_spans": [ { "start": 101, "end": 121, "text": "(Jiang et al., 2007;", "ref_id": "BIBREF4" }, { "start": 259, "end": 278, "text": "Jiang et al. (2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "We have described a novel approach to selecting transliteration hypotheses based on Web mining. We first generated CJK transliteration hypotheses for a given English word and retrieved Web pages using the transliteration hypotheses and the given English word as queries for a Web search engine. We then mined features from the retrieved Web pages and trained machine-learning algorithms using the mined features. Finally, we selected transliteration hypotheses by ranking them. 
Our experiments revealed that our proposed method worked well regardless of the language, while simple Web counts were not effective, especially for Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Because our method was very effective in selecting transliteration pairs, we expect that it will also be useful for selecting translation pairs. We plan to extend our method to the selection of translation pairs in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "1 We used Google (http://www.google.com). 2 Depending on whether the languages had spacing units, words (for English and Korean) or characters (for Chinese and Japanese) were chosen to calculate d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "3 SVM-light (Joachims, 2002). 4 \"Maximum Entropy Modeling Toolkit\" (Zhang, 2004).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "5 We set n = 10 for the n-best. Thus, n ≤ r ≤ 3 × n, where H = {h_1, h_2, ..., h_r}. 6 The WA of LI04 was taken from the literature, where the training data were the same as the union of our training set and the development set, while the test data were the same as in our test set. In other words, LI04 used more training data than ours did. With the same setting as LI04, our GM, PM, and CM produced respective WAs of 70.0, 57.7, and 71.7. 7 We implemented KANG00 (Kang and Kim, 2000) and GOTO03 (Goto et al., 2003) and tested them with the same data as ours.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "8 Since both Jiang et al.'s (2007) work and ours used Chinese transliterations of personal names as a test set, we can indirectly compare our coverage and WA with theirs (Jiang et al., 2007). Jiang et al. (2007) achieved a 74.5% coverage of transliteration candidates and 47.5% WA, while ours achieved a 94.6% coverage of transliteration hypotheses and 82.0-83.9% WA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Translating named entities using monolingual and bilingual resources", "authors": [ { "first": "Yaser", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL '02", "volume": "", "issue": "", "pages": "400--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaser Al-Onaizan and Kevin Knight. 2002. Translating named entities using monolingual and bilingual resources. In Proc. of ACL '02, pages 400-408.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "EDICT Japanese/English dictionary file. The Electronic Dictionary Research and Development Group", "authors": [ { "first": "J", "middle": [], "last": "Breen", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Breen. 2003. EDICT Japanese/English dictionary file. The Electronic Dictionary Research and Development Group, Monash University. http://www.csse. 
monash.edu.au/\u02dcjwb/edict.html.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Transliteration considering context information based on the maximum entropy method", "authors": [ { "first": "I", "middle": [], "last": "Goto", "suffix": "" }, { "first": "N", "middle": [], "last": "Kato", "suffix": "" }, { "first": "N", "middle": [], "last": "Uratani", "suffix": "" }, { "first": "T", "middle": [], "last": "Ehara", "suffix": "" } ], "year": 2003, "venue": "Proc. of MT-Summit IX", "volume": "", "issue": "", "pages": "125--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Goto, N. Kato, N. Uratani, and T. Ehara. 2003. Transliteration considering context information based on the maximum entropy method. In Proc. of MT- Summit IX, pages 125-132.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mining the Web to create a language model for mapping between English names and phrases and Japanese", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Qu", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Evans", "suffix": "" } ], "year": 2004, "venue": "Proc. of Web Intelligence", "volume": "", "issue": "", "pages": "110--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Grefenstette, Yan Qu, and David A. Evans. 2004. Mining the Web to create a language model for mapping between English names and phrases and Japanese. In Proc. of Web Intelligence, pages 110- 116.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Named entity translation with Web mining and transliteration", "authors": [ { "first": "Long", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Lee-Feng", "middle": [], "last": "Chien", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "" } ], "year": 2007, "venue": "Proc. of IJCAI", "volume": "", "issue": "", "pages": "1629--1634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Jiang, Ming Zhou, Lee-Feng Chien, and Cheng Niu. 2007. Named entity translation with Web min- ing and transliteration. In Proc. of IJCAI, pages 1629- 1634.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning to Classify Text Using Support Vector Machines: Methods, Theory and Algorithms", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 2002. Learning to Classify Text Us- ing Support Vector Machines: Methods, Theory and Algorithms. Kluwer Academic Publishers.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "English-to-Korean transliteration using multiple unbounded overlapping phoneme chunks", "authors": [ { "first": "I", "middle": [ "H" ], "last": "Kang", "suffix": "" }, { "first": "G", "middle": [ "C" ], "last": "Kim", "suffix": "" } ], "year": 2000, "venue": "Proc. of COLING '00", "volume": "", "issue": "", "pages": "418--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. H. Kang and G. C. Kim. 2000. English-to-Korean transliteration using multiple unbounded overlapping phoneme chunks. In Proc. 
of COLING '00, pages 418-424.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Web-based models for natural language processing", "authors": [ { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2005, "venue": "ACM Trans. Speech Lang. Process", "volume": "2", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mirella Lapata and Frank Keller. 2005. Web-based models for natural language processing. ACM Trans. Speech Lang. Process., 2(1):3.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A joint source-channel model for machine transliteration", "authors": [ { "first": "H", "middle": [], "last": "Li", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Su", "suffix": "" } ], "year": 2004, "venue": "Proc. of ACL '04", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Li, M. Zhang, and J. Su. 2004. A joint source-channel model for machine transliteration. In Proc. of ACL '04, pages 160-167.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Generating phonetic cognates to handle named entities in English-Chinese cross-language spoken document retrieval", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Meng", "suffix": "" }, { "first": "Wai-Kit", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "K", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2001, "venue": "Proc. of Automatic Speech Recognition and Understanding, 2001. ASRU '01", "volume": "", "issue": "", "pages": "311--314", "other_ids": {}, "num": null, "urls": [], "raw_text": "H.M. Meng, Wai-Kit Lo, Berlin Chen, and K. Tang. 2001. Generating phonetic cognates to handle named entities in English-Chinese cross-language spoken document retrieval. In Proc. of Automatic Speech Recognition and Understanding, 2001. ASRU '01, pages 311-314.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Foreign dictionary. Sung An Dang", "authors": [ { "first": "Y", "middle": [ "S" ], "last": "Nam", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. S. Nam. 1997. Foreign dictionary. Sung An Dang.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An English-Korean transliteration model using pronunciation and contextual rules", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2002, "venue": "Proc. of COLING2002", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Hoon Oh and Key-Sun Choi. 2002. An English- Korean transliteration model using pronunciation and contextual rules. In Proc. of COLING2002, pages 758-764.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Mining the Web for transliteration lexicons: Joint-validation approach", "authors": [ { "first": "Hitoshi", "middle": [], "last": "Jong-Hoon Oh", "suffix": "" }, { "first": "", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2006, "venue": "Web Intelligence", "volume": "", "issue": "", "pages": "254--261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Hoon Oh and Hitoshi Isahara. 2006. 
Mining the Web for transliteration lexicons: Joint-validation ap- proach. In Web Intelligence, pages 254-261.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A comparison of different machine transliteration models", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2006, "venue": "Journal of Artificial Intelligence Research (JAIR)", "volume": "27", "issue": "", "pages": "119--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Hoon Oh, Key-Sun Choi, and Hitoshi Isahara. 2006. A comparison of different machine transliter- ation models. Journal of Artificial Intelligence Re- search (JAIR), 27:119-151.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Finding ideographic representations of Japanese names written in Latin script via language identification and corpus validation", "authors": [ { "first": "Yan", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2004, "venue": "Proc. of ACL '04", "volume": "", "issue": "", "pages": "183--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Qu and Gregory Grefenstette. 2004. Finding ideo- graphic representations of Japanese names written in Latin script via language identification and corpus val- idation. In Proc. of ACL '04, pages 183-190.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The N-best algorithm: An efficient and exact procedure for finding the N most likely sentence hypothesis", "authors": [ { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Yen-Lu", "middle": [], "last": "Chow", "suffix": "" } ], "year": 1990, "venue": "Procs. of ICASSP '90", "volume": "", "issue": "", "pages": "81--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Schwartz and Yen-Lu Chow. 1990. The N-best algorithm: An efficient and exact procedure for finding the N most likely sentence hypothesis. In Procs. of ICASSP '90, pages 81-84.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Chinese transliteration of foreign personal names", "authors": [ { "first": "Xinhua", "middle": [], "last": "News Agency", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinhua News Agency. 1992. Chinese transliteration of foreign personal names. The Commercial Press.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Maximum entropy modeling toolkit for python and C++", "authors": [ { "first": "L", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Zhang. 2004. Maximum entropy model- ing toolkit for python and C++. http: //homepages.inf.ed.ac.uk/s0450736/ software/maxent/manual.pdf.", "links": null } }, "ref_entries": { "TABREF2": { "content": "
Table 2: Test data sets
The data were divided into training, development, and blind
test sets as in Table 2. The training set was used to train
our three transliteration models to generate the n-best
transliteration hypotheses 5 . The development set was used
to train hypothesis selection based on support vector
machines and the maximum entropy model. We used the blind
test set for evaluation. The evaluation was done in terms of
word accuracy (WA). WA is the proportion of words in the
blind test set for which the system's best hypothesis is a
correct transliteration.
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF3": { "content": "
Table 3: WA of individual transliteration systems (%)
6.1 Results: Web counts vs. Web mining
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF5": { "content": "
Table 4: Web counts (WC) vs. Web mining (WM):
hypothesis selection by relative frequency (%)

Table 5: Web counts (WC) vs. Web mining (WM):
hypothesis selection by g(h_i) (%)
System            ECSet   EJSet   EKSet
WC   MEM_WC       74.7    86.1    85.6
     SVM_WC       74.8    86.9    86.5
WM   MEM_WM       82.0    88.2    85.8
     SVM_WM       83.9    88.5    86.7
Upper bound       94.6    93.5    93.2
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF6": { "content": "
Snippet 1 retrieved by BSQ: Aman \"\u963f\u66fc\"
叫我自己的一个人(a Man To Call My Own) 概要 一本书的概要摘要撰写人-叫我自己的一个人(a Man To Call My Own), 故事，在ranchhouse的集合，二个主演，孪生阿曼达， 并且圣母玛丽亚是远离家和舒适。 温暖心主要的浪漫史，传说 也是一个成长，学会和了解，...
Snippet 2 retrieved by MQ: \"\u963f\u52a0\" (meaning Agard)
外国雕塑欣赏(4/03)-持矛者 在古代雅典城外,有两个著名的运动场,一个叫阿加德米,一个叫卢基厄模。那两处运动场受到政府的保护,那里常年碧树成荫,绿茵铺地。 ... 运动场阿加德米(Academy)由于经常展开学术活动, 渐渐演变成名词\"学院\"专称了。...
", "html": null, "type_str": "table", "num": null, "text": ". \u514b\u5229\u592b\u5fb7\u626c|Cliff De Young| \u751f\u5e73| \u4f5c\u54c1| \u5199\u771f| EO\u5f71\u89c6\u9891\u9053 \u5c11\u5973\u4e0a\u4e86\u763e | The Secret Life of Zoey (TV) \u53d1\u5e03\u5e74\u4ee3\uff1a2002 \u5bfc\u6f14\uff1a \u7f57\u4f2f\u7279\u66fc\u5fb7\u5c14 \u6f14\u5458\uff1a\u7c73\u4e9a\u6cd5\u7f57 , \u514b\u5229\u592b\u5fb7\u626c , \u5361\u7f57\u7433\u963f\u4f26 , \u5b89\u5fb7 \u9c81\u9ea6\u5361\u9521 , Avery Raskin. \u5728\u7247\u4e2d\u9970\u6f14\uff1aLarry Carter. \u8bc4\u5206\uff1a4.92\u2026 \u514b\u5229\u592b\u5fb7\u626c|Cliff De Young| \u751f\u5e73| \u4f5c\u54c1| \u5199\u771f| EO\u5f71\u89c6\u9891\u9053 \u5c11\u5973\u4e0a\u4e86\u763e | The Secret Life of Zoey (TV) \u53d1\u5e03\u5e74\u4ee3\uff1a2002 \u5bfc\u6f14\uff1a \u7f57\u4f2f\u7279\u66fc\u5fb7\u5c14 \u6f14\u5458\uff1a\u7c73\u4e9a\u6cd5\u7f57 \u7f57 , , \u514b\u5229\u592b \u514b\u5229\u592b\u5fb7\u626c , \u5361\u7f57\u7433\u963f\u4f26 , \u5b89\u5fb7 \u9c81\u9ea6\u5361\u9521 , Avery Raskin. \u5728\u7247\u4e2d\u9970\u6f14\uff1aLarry Carter. \u8bc4\u5206\uff1a4.92\u2026 UNESCO. General Conference; 32nd; Election of member \u963f\u8d6b\u8fc8\u5fb7\u2022\u5965\u5c14\u5fb7\u2022\u897f\u8fea\u2022\u5df4\u5df4\u5148\u751f. \u662f. 1987--1991. \u7a46\u54c8\u8fc8\u5fb7\u2022\u9a6c\u8d6b \u7a46\u5fb7\u2022\u4e4c\u5c14\u5fb7\u2022\u97e6\u8fbe\u8fea\u5148\u751f. \u83ab\u6851\u6bd4\u514b." }, "TABREF7": { "content": "", "html": null, "type_str": "table", "num": null, "text": ".... UNESCO. General Conference; 32nd; Election of member \u963f\u8d6b\u8fc8\u5fb7\u2022\u5965\u5c14\u5fb7 \u5965\u5c14\u5fb7\u2022 \u2022\u897f \u897f\u8fea\u2022\u5df4\u5df4\u5148\u751f. \u662f. 1987--1991. \u7a46\u54c8\u8fc8\u5fb7\u2022\u9a6c\u8d6b \u7a46\u5fb7\u2022\u4e4c\u5c14\u5fb7\u2022\u97e6\u8fbe\u8fea\u5148\u751f. \u83ab\u6851\u6bd4\u514b." } } } }