{
"paper_id": "I13-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:09.136339Z"
},
"title": "Robust Transliteration Mining from Comparable Corpora with Bilingual Topic Models",
"authors": [
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University",
"location": {
"postCode": "606-8501",
"settlement": "Kyoto"
}
},
"email": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Japan Science and Technology Agency",
"location": {
"addrLine": "Kawaguchi-shi",
"postCode": "332-0012",
"settlement": "Saitama"
}
},
"email": "nakazawa@pa.jst.jp"
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University",
"location": {
"postCode": "606-8501",
"settlement": "Kyoto"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a high-precision, languageindependent transliteration framework applicable to bilingual lexicon extraction. Our approach is to employ a bilingual topic model to enhance the output of a state-of-the-art graphemebased transliteration baseline. We demonstrate that this method is able to extract a high-quality bilingual lexicon from a comparable corpus, and we extend the topic model to propose a solution to the out-of-domain problem.",
"pdf_parse": {
"paper_id": "I13-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a high-precision, languageindependent transliteration framework applicable to bilingual lexicon extraction. Our approach is to employ a bilingual topic model to enhance the output of a state-of-the-art graphemebased transliteration baseline. We demonstrate that this method is able to extract a high-quality bilingual lexicon from a comparable corpus, and we extend the topic model to propose a solution to the out-of-domain problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A large, high-quality bilingual lexicon is of great utility to any dictionary-based system that processes bilingual data. The ability to automatically generate such a lexicon without relying on expensive training data or preexisting lexical resources allows us to find translations for rare and unknown words with high efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transliteration 1 is particularly important as new words are often created by importing words from other languages, especially English. It would be an almost impossible task to create and maintain a dictionary of such words by hand, as new words appear rapidly, especially in online texts, and word usage can vary over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we construct a languageindependent transliteration framework. Our model builds on previous transliteration work, improving extraction and generation precision by including semantic as well as purely lexical features. The proposed model can be trained on comparable corpora, thereby not relying on expensive or often unavailable parallel data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The motivation behind the approach of combining lexical and semantic features is that these two components are largely independent, greatly improving the effectiveness of their combination. This is particularly important for word-sense disambiguation. For example, a purely lexical approach is not sufficient to transliterate the Japanese \u30bd\u30fc\u30b9 (soosu), as it can mean either 'sauce' or 'source' depending on the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work has considered various methods for transliteration, ranging from simple edit distance and noisy-channel models (Brill et al., 2001) to conditional random fields (Ganesh et al., 2008) and finite state automata (Noeman and Madkour, 2010) . We construct a baseline by modelling transliteration as a Phrase-Based Statistical Machine Translation (PB-SMT) task, a popular and well-studied approach (Matthews, 2007; Hong et al., 2009; Antony et al., 2010) .",
"cite_spans": [
{
"start": 125,
"end": 145,
"text": "(Brill et al., 2001)",
"ref_id": "BIBREF2"
},
{
"start": 175,
"end": 196,
"text": "(Ganesh et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 223,
"end": 249,
"text": "(Noeman and Madkour, 2010)",
"ref_id": "BIBREF18"
},
{
"start": 406,
"end": 422,
"text": "(Matthews, 2007;",
"ref_id": "BIBREF14"
},
{
"start": 423,
"end": 441,
"text": "Hong et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 442,
"end": 462,
"text": "Antony et al., 2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "The vast majority of previous work on transliteration has considered only lexical features, for example spelling similarity and transliteration symbol mapping, however we build on the inspiration of Li et al. (2007) and later Hagiwara and Sekine (2012) , who introduced semantic features to a transliteration model. Li et al. (2007) proposed the concept of 'semantic transliteration', which is the consideration of inherent semantic information in transliterations. Their example is the influence of the source language and gender of foreign names on their transliterations into Chinese. Hagiwara and Sekine (2012) expanded upon this idea by considering a 'latent class' transliteration model considering transliterations to be grouped into categories, such as language of origin, which can give additional information about their formation. For example, if we know that a transliteration is of Italian origin, we are more likely to recover the letter sequence 'gli' than if it were originally French.",
"cite_spans": [
{
"start": 199,
"end": 215,
"text": "Li et al. (2007)",
"ref_id": "BIBREF13"
},
{
"start": 226,
"end": 252,
"text": "Hagiwara and Sekine (2012)",
"ref_id": "BIBREF6"
},
{
"start": 316,
"end": 332,
"text": "Li et al. (2007)",
"ref_id": "BIBREF13"
},
{
"start": 588,
"end": 614,
"text": "Hagiwara and Sekine (2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "While these methods consider limited semantic features, they do not make use of the rich contextual information available from comparable corpora. We show such contextual information, in the form of bilingual topic distributions, to be highly effective in generating transliterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Bilingual lexicon mining from non-parallel data has been tackled in recent research such as Tamura et al. (2012) and Haghighi et al. (2008) , and we build upon the techniques of multilingual topic extraction from Wikipedia pioneered by Ni et al. (2009) . Previous research in lexicon mining has tended to focus on semantic features, such as context similarity vectors and topic models, but these have yet to be applied to the task of transliteration mining. We use the word-topic distribution similarities explored in Vuli\u0107 et al. (2011) as baseline word similarity measures.",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "Tamura et al. (2012)",
"ref_id": "BIBREF25"
},
{
"start": 117,
"end": 139,
"text": "Haghighi et al. (2008)",
"ref_id": "BIBREF4"
},
{
"start": 236,
"end": 252,
"text": "Ni et al. (2009)",
"ref_id": "BIBREF17"
},
{
"start": 518,
"end": 537,
"text": "Vuli\u0107 et al. (2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "In some cases it is possible to use monolingual corpora for transliteration mining, as English is often written alongside transliterations (Kaji et al., 2011) , however we consider the more general setting where such information is unavailable.",
"cite_spans": [
{
"start": 139,
"end": 158,
"text": "(Kaji et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "We begin by constructing a baseline transliteration system trained only on lexical features. This baseline system will allow us to compare directly the effectiveness of the addition of a semantic model to a traditional transliteration framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Transliteration Model",
"sec_num": "3"
},
{
"text": "Our baseline model is a grapheme-based machine transliteration system. We model transliteration as a machine translation task on a character rather than word level, treating character groups as phrases. The model is trained by learning phrase alignments such as that shown in Figure 1 . The field of phrasebased SMT has been well studied and there exists a standard toolset enabling the construc- We use the default configuration of Moses (Koehn et al., 2007) to train our baseline system, with the distortion limit set to 1 (as transliteration requires monotonic alignment). Character alignment is performed by GIZA++ (Och and Ney, 2003) with the 'grow-diag-final' heuristic for training. We apply standard tuning with MERT (Och, 2003) on the BLEU (Papineni et al., 2001) score. The language model is built with SRILM (Stolcke, 2002) using Kneser-Ney smoothing (Kneser and Ney, 1995) .",
"cite_spans": [
{
"start": 439,
"end": 459,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF12"
},
{
"start": 619,
"end": 638,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF20"
},
{
"start": 725,
"end": 736,
"text": "(Och, 2003)",
"ref_id": "BIBREF19"
},
{
"start": 749,
"end": 772,
"text": "(Papineni et al., 2001)",
"ref_id": "BIBREF21"
},
{
"start": 819,
"end": 834,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF24"
},
{
"start": 862,
"end": 884,
"text": "(Kneser and Ney, 1995)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 276,
"end": 284,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Baseline Transliteration Model",
"sec_num": "3"
},
{
"text": "\u30b3 \u30f3 \u30d4 \u30e5 \u30fc \u30bf \u30fc ko n p yu u ta a co m p u t e r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Transliteration Model",
"sec_num": "3"
},
{
"text": "The system described above has been implemented as specified in previous work such as Matthews (2007) . We demonstrate that this standard, highly-regarded baseline can be greatly improved with our proposed method.",
"cite_spans": [
{
"start": 86,
"end": 101,
"text": "Matthews (2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Transliteration Model",
"sec_num": "3"
},
{
"text": "Having set up the baseline system, we turn to the task of combining a semantic model with our transliteration engine. We employ the method of bilingual LDA (Mimno et al., 2009) , an extension of monolingual Latent Dirichlet Allocation (LDA) (Blei et al., 2003) as the semantic model.",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "(Mimno et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 241,
"end": 260,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Model",
"sec_num": "4"
},
{
"text": "Monolingual LDA takes as its input a set of monolingual documents and generates a wordtopic distribution \u03d5 classifying words appearing in these documents into semantically similar topics. Bilingual LDA extends this by considering pairs of comparable documents in each of two languages, and outputs a pair of word-topic distributions \u03d5 and \u03c8, one for each input language. The graphical model for bilingual LDA is illustrated in Figure 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 427,
"end": 435,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Semantic Model",
"sec_num": "4"
},
{
"text": "We choose to employ a bilingual topic model to measure semantic similarity (i.e. topic similarity) of word pairs rather than the more intuitive method of comparing monolingual context similarity vectors (Rapp, 1995) for reasons of robustness and scalability.",
"cite_spans": [
{
"start": 203,
"end": 215,
"text": "(Rapp, 1995)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation for Bilingual LDA",
"sec_num": "4.1"
},
{
"text": "Measuring context similarity on a word level requires a bilingual lexicon to match crosslanguage word pairs and such bilingual data is often expensive or unavailable. There are also problems with directly comparing collocations and word concurrence of distant language pairs as they do not always correspond predictably. Therefore our proposed method provides a more robust approach using coarser semantic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation for Bilingual LDA",
"sec_num": "4.1"
},
{
"text": "The use of topic models as a semantic similarity measure is a scalable method because document-aligned bilingual training data is growing ever more widely available. Examples of such sources are Wikipedia, multilingual newspaper articles and mined Web data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation for Bilingual LDA",
"sec_num": "4.1"
},
{
"text": "In order to apply bilingual topic models to a transliteration task, we must construct an effective word similarity measure for source and target transliteration candidates. We improve upon three natural similarity measures, Cos, Cue and KL, based on those considered in Vuli\u0107 et al. (2011) , by proposing two methods of feature combination: reordering and SVM combination.",
"cite_spans": [
{
"start": 270,
"end": 289,
"text": "Vuli\u0107 et al. (2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Measures",
"sec_num": "4.2"
},
{
"text": "The reranking method considers hybrid scores Base+Cos, Base+Cue and Base+KL. These are generated by reranking the top-10 baseline (Base) transliteration candidates by their respective semantic scores (Cos, Cue or KL). We used 10 candidates for filtering as we found this gave the best balance between volume and accuracy in preliminary experiments. Approximately 75-85% of correct transliterations (depending on language pair) were within the top-10 candidates and this is therefore an upper bound for the hybrid model accuracy. As a comparison, the top-100 candidates contained roughly 80-85% of correct transliterations, the remainder failing to be identified by the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Measures",
"sec_num": "4.2"
},
{
"text": "We additionally consider the combination of all three semantic features with the baseline (Moses) transliteration scores using a Support Vector Machine (SVM) (Vapnik, 1995) . The SVM is used to classify candidate pairs as 'transliteration' (positive) or 'not transliteration' (negative), and we rerank the candidates by SVM predicted values. The features used for SVM training are baseline, Cos, Cue and KL scores.",
"cite_spans": [
{
"start": 158,
"end": 172,
"text": "(Vapnik, 1995)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Measures",
"sec_num": "4.2"
},
{
"text": "The similarity measures Cos, Cue and KL are defined below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Measures",
"sec_num": "4.2"
},
{
"text": "The Cos method calculates the cosine similarity of the topic distribution vectors \u03c8 k,we and \u03d5 k,w f for transliteration pair candidates w e and w f .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cos Similarity",
"sec_num": "4.2.1"
},
{
"text": "Cos(w e , w f ) = \u2211 K k=1 \u03c8 k,we \u03d5 k,w f \u221a \u2211 K k=1 \u03c8 2 k,we \u221a \u2211 K k=1 \u03d5 2 k,w f (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cos Similarity",
"sec_num": "4.2.1"
},
{
"text": "The Cue method expresses the mean of the two probabilities P (w e | w f ) of a transliteration w e given some source language string w f and P (w f | w e ) of the reverse. We define:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Similarity",
"sec_num": "4.2.2"
},
{
"text": "P (w e | w f ) = K \u2211 k=1 \u03c8 k,we \u03d5 k,w f N orm \u03d5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Similarity",
"sec_num": "4.2.2"
},
{
"text": "and likewise for P (w e | w f ), with the normalization factors given by N orm \u03d5 = \u2211 K k=1 \u03d5 k,w f and N orm \u03c8 = \u2211 K k=1 \u03c8 k,we . Finally, we consider:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Similarity",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Cue(w e , w f ) = 1 2 (P (w e | w f ) + P (w f | w e ))",
"eq_num": "(2)"
}
],
"section": "Cue Similarity",
"sec_num": "4.2.2"
},
{
"text": "The KL method considers the averaged Kullback-Leibler divergence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KL Similarity",
"sec_num": "4.2.3"
},
{
"text": "KL(w e , w f ) = 1 2 (KL e,f + KL f,e ) (3) KL e,f = K \u2211 k=1 \u03d5 k,we N orm \u03d5 log \u03d5 k,we /N orm \u03d5 \u03c8 k,w f /N orm \u03c8 KL f,e = K \u2211 k=1 \u03c8 k,w f N orm \u03c8 log \u03c8 k,w f /N orm \u03c8 \u03d5 k,we /N orm \u03d5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KL Similarity",
"sec_num": "4.2.3"
},
{
"text": "using the same normalization factors as for Cue similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KL Similarity",
"sec_num": "4.2.3"
},
{
"text": "In order to demonstrate the effectiveness of our proposed model, we constructed an evaluation framework for a transliteration extraction task. The language pairs English-Japanese (EN-JA), Japanese-English (JA-EN), English-Korean (EN-KO) and Korean-English (KO-EN) were chosen to verify that this method is effective for a variety of languages and in both transliteration directions. Indeed, the methods introduced in this paper could also be applied directly to other languages with many transliterations, such as Chinese, Arabic and Hindi. While it is possible to make languagespecific optimizations, we decided only to preprocess the data minimally (such as removing punctuation) in order to demonstrate that our model works effectively in a languageindependent setting. Examples of languagespecific preprocessing techniques that we did not perform include segmentation of Japanese compound nouns (Nakazawa et al., 2005) and splitting of Korean syllabic blocks (eumjeols) into smaller components (jamo) (Hong et al., 2009) . Test EN-JA/JA-EN 59K 1K 1K KO-EN/EN-KO 21K 1K 1K Table 1 : Number of aligned word pairs in each fold of data.",
"cite_spans": [
{
"start": 899,
"end": 922,
"text": "(Nakazawa et al., 2005)",
"ref_id": "BIBREF16"
},
{
"start": 1005,
"end": 1024,
"text": "(Hong et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 1027,
"end": 1074,
"text": "Test EN-JA/JA-EN 59K 1K 1K KO-EN/EN-KO 21K",
"ref_id": "TABREF0"
},
{
"start": 1081,
"end": 1088,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We chose to build our data set from Wikipedia articles, as they provide document-aligned comparable data across a variety of languages. Figure 3 shows how the Wikipedia data was split.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "5.1"
},
{
"text": "We trained our baseline system on aligned Wikipedia page titles. This data consisted of pairs of English and Japanese/Korean words extracted from the freely available Wikipedia XML dumps. The aligned titles were filtered with hand-written rules 2 to extract only transliteration pairs, and the test data was verified for correctness by hand. This data will be made available to encourage comparison for future transliteration research 3 . The composition of this data is shown in Table 1 . Aligned word pairs were shuffled randomly before splitting into the three folds to ensure an even topic distribution across each of 'Train', 'Tune' and 'Test'.",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 487,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Training Data",
"sec_num": "5.1.1"
},
{
"text": "The bilingual topic model was trained on the body text of Wikipedia articles aligned with Wikipedia inter-language links. These correspond to articles covering the same content, however they are rarely of similar length and not necessarily close transliterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Topic Model",
"sec_num": "5.1.2"
},
{
"text": "We first pre-processed the most recent Wikipedia XML dumps to remove all tags and data other than plain text sentences, then aligned articles with language links to generate comparable document pairs. Words occurring fewer than 10 or more than 100K times were also removed to reduce noise and computation time. ), the topic model was trained on document pairs and the SVM was trained on the title pairs 'Tune' fold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Topic Model",
"sec_num": "5.1.2"
},
{
"text": "The training data for the proposed SVM hybrid model was built from the same data used for the baseline (tuning fold). We first generated the top-10 distinct transliteration candidates for the tuning data using the 'nbest-list' option in Moses. These candidates were then labeled as 'transliteration' or 'nottransliteration' and feature scores (Base, Cos, Cue, KL) were generated for each candidate. The SVM model was trained using these labels and feature scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SVM Hybrid Model",
"sec_num": "5.1.3"
},
{
"text": "PolyLDA++, our implementation of multilingual LDA, was based on GibbsLDA++ (Phan et al., 2007) , a toolkit for monolingual LDA. This software is available for free 4 .",
"cite_spans": [
{
"start": 75,
"end": 94,
"text": "(Phan et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LDA Implementation Details",
"sec_num": "5.2"
},
{
"text": "Each topic model was trained over 1000 iterations, and the standard Dirichlet prior hyperparameters for the LDA model were set as \u03b1 = 50/K for K topics and \u03b2 = 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LDA Implementation Details",
"sec_num": "5.2"
},
{
"text": "The choice of number of topics is important, as demonstrated in Figure 4 , which shows the top-1 accuracy of the SVM hybrid model using various numbers of topics K. The optimal value of K seems to be between around 100 for this data.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "LDA Implementation Details",
"sec_num": "5.2"
},
{
"text": "The model accuracy gradually decreases with adding more than 100 topics. We believe that this is because the granularity of the topics becomes too fine to accommodate for the wide differences in semantic usage of English and Japanese/Korean transliteration pairs. A higher number of topics could be more suitable for more closely related language pairs, such as Italian and English (Vuli\u0107 et al., 2011) , because the higher similarity of word usage would allow for topics of more limited semantic scope. Such experiments are to be considered in future work. The results below are for K = 100. Table 2 : Top-1 accuracy of proposed model for each hybrid scoring method. mance. The SVM hybrid model outperformed the baseline for every language pair, by as much as 0.276 for JA-EN. This suggests that the addition of a bilingual topic model significantly improves transliteration accuracy. In general the SVM was the most effective hybrid score, outperforming Base+Cos, Base+Cue and Base+KL in all but KO-EN, where it performed very slightly worse than Base+Cue. Figure 5 shows the precision-recall curve for the SVM hybrid model over the test set. We vary recall by ranking the hybrid model scores for all test pairs and selecting only the highest scoring fraction to evaluate. This simulates a lexicon extraction task where we wish to sacrifice recall for precision. The results demonstrate that it is possible to improve significantly the precision of a set of extracted transliterations by reducing the recall. This large improvement is made possible because the topic similarity scores are particularly effective at measuring confidence in each transliteration candidate, allowing effective selection of the correct transliterations.",
"cite_spans": [
{
"start": 382,
"end": 402,
"text": "(Vuli\u0107 et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 593,
"end": 600,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 1059,
"end": 1067,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "LDA Implementation Details",
"sec_num": "5.2"
},
{
"text": "The results compare favorably to the top-1 accuracy of similar existing systems, such as Di-recTL+ (Jiampojamarn et al., 2010) , which also used Wikipedia titles (EN-JA 0.398), and Hagiwara and Sekine (2012) (EN-JA 0.349).",
"cite_spans": [
{
"start": 99,
"end": 126,
"text": "(Jiampojamarn et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Previous Work",
"sec_num": "5.4"
},
{
"text": "Our baseline transliteration system can be measured against previous work using Moses and GIZA++ alignment, such as Matthews 2007 While it is difficult to compare directly the accuracy of transliteration systems across different languages and data sets, especially since we use additional data to train the semantic model, the results above show that our model has made a considerable improvement over the state-of-the-art baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Previous Work",
"sec_num": "5.4"
},
{
"text": "The model described in this paper revolves around the use of a bilingual topic model to improve transliteration quality. What happens then when a source word is not covered by the topic model? This is a very important problem in a practical setting, and we show that even in such cases our model can improve considerably upon the baseline system. We define 'out-of-domain' words as source language words that did not appear in the topic model training data and hence do not have a known topic distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Out-of-Domain Words",
"sec_num": "6"
},
{
"text": "Our proposed approach is to consider not the word-topic distribution of the source word w e itself, but rather that of the words in the surrounding context. We consider two methods for calculating the modified topic similarity scores over the set of words W e in the same context as the source word. Let S(w e , w f ) be a basic topic similarity score Cos, Cue or KL, then we define the extended scores ExtM ean(W e , w f ) and ExtW eight(W e , w f ) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Details",
"sec_num": "6.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ExtM ean(W e , w f ) = \u2211 we\u2208We S(w e , w f ) |W e |",
"eq_num": "(4)"
}
],
"section": "Model Details",
"sec_num": "6.1"
},
{
"text": "ExtWeight(W_e, w_f) = \u2211_{w_e \u2208 W_e} c\u2032_{w_e} S(w_e, w_f) / \u2211_{w_e \u2208 W_e} c\u2032_{w_e} (5), where c\u2032_{w_e} = (log c_{w_e})^{-1} for the frequency c_{w_e} of w_e appearing in the semantic model training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Details",
"sec_num": "6.1"
},
{
"text": "ExtMean corresponds to the mean topic similarity for each word in the context W e . ExtWeight is weighted by the inverse log frequency of each word, allowing consideration of their semantic importance. These extended scores are used to train the SVM in place of the original scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Details",
"sec_num": "6.1"
},
{
"text": "We performed an additional experiment where we transliterated a set of 25 Japanese words unknown to the topic model into English. These words appeared in Wikipedia fewer than 10 times and therefore were not included in our training data. We extracted the sentences and documents in which these words occurred, and back-transliterated the Japanese words into English by hand. We considered both sentence-level and document-level contexts for W e , and evaluated each extended metric ExtMean and ExtWeight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain Experiment",
"sec_num": "6.2"
},
{
"text": "The results of the out-of-domain experiment are shown in Table 3 , which gives the top-1 accuracy of the SVM hybrid model trained on the ExtMean and ExtWeight counterparts of Cos, Cue and KL similarities. Base is the top-1 accuracy using only the Moses baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Out-of-Domain Experiment",
"sec_num": "6.2"
},
{
"text": "The most effective settings were to use Ex-tWeight on a sentence level context. There is a balance between size and relevance of context, with document-level context containing too many misleading words. The improvement of ExtWeight over ExtMean shows the impor-Base ExtMean ExtWeight Document 0.27 0.44 0.48 Sentence 0.48 0.52 Table 3 : Top-1 accuracy for out-of-domain model extension (JA-EN).",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Out-of-Domain Experiment",
"sec_num": "6.2"
},
{
"text": "tance of weighting contextual words based on their importance (i.e. inverse log frequency).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain Experiment",
"sec_num": "6.2"
},
{
"text": "The results show a large improvement (+0.25) over the baseline scores that is comparable to that of the in-domain model (+0.28, see Table 2 ). This suggests that the proposed model is an effective solution to the out-ofdomain problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Out-of-Domain Experiment",
"sec_num": "6.2"
},
{
"text": "An example of the top candidates for a successful and an incorrect transliteration are given in Tables 4 and 5 respectively. We can see that the topic model has succeeded in finding the correct transliteration of 'batik', a traditional Javanese fabric, however a low score was given to the Korean transliteration of the name 'Bernard' appearing in the training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 110,
"text": "Tables 4 and 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Error Analysis",
"sec_num": "7"
},
{
"text": "The benefits of the addition of a topic model is made clear with the example of 'batik' in Table 4. The semantic similarity measures give a higher score to 'batik' than 'Batic', a Slavic surname, despite 'Batic' being the more likely transliteration according to the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Error Analysis",
"sec_num": "7"
},
{
"text": "The improvement over the baseline for backtransliteration (XX-EN), on average +0.24, was considerably greater than that for transliteration (EN-XX), on average +0.17. We believe that this is due to the vast range of transliteration spelling variations in the non-English target languages. Since there is only one correct spelling variation defined in our test data and the topic distributions for each spelling variation are very similar, it is not possible to guess correctly. For an example of this problem, see Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 514,
"end": 521,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Error Analysis",
"sec_num": "7"
},
{
"text": "The majority of transliteration errors were caused by unsuccessful topic alignment between the source and target words. This was partly caused by the differences in usage of the original English words and the transliterated Japanese or Korean. For example, the Table 5 : An incorrect transliteration -bernard \u2192 \ubca0\ub974\ub098\ub974\ud2b8 (bereunareuteu).",
"cite_spans": [],
"ref_spans": [
{
"start": 261,
"end": 268,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topic Alignment Difficulties",
"sec_num": "7.1"
},
{
"text": "Japanese \u30d0\u30a4\u30ad\u30f3\u30b0 (baikingu) is a transliteration of 'Viking', however it is almost always used to mean 'buffet', deriving from the Scandinavian smorgasbord. In this case, we can expect the Japanese to be associated with foodrelated topics, quite different from 'Viking'. There are also many cases where words that do not clearly fit into one topic have unclear distributions across many groups. For example, the word \ub85c\ub9c8 (roma / 'Rome') could be more strongly categorized with 'cities' and 'sightseeing' in English but 'history' and 'classical civilization' in Korean, giving a low overall topic correlation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Alignment Difficulties",
"sec_num": "7.1"
},
{
"text": "We found that our model was more successful at finding the correct transliteration of longer words, as smaller words tend to have more spelling variations and are orthographically more similar to other words. By removing words of length 5 characters or less from the test data, we were able to improve the top-1 accuracy (SVM) to 0.593 (KO-EN, +0.089) and 0.721 (JA-EN, +0.111). In a practical lexicon extraction task over the entirety of Wikipedia this would cover roughly 35-45% of words (depending on language). There was almost no variation in transliteration accuracy based on word frequency. The baseline is relatively unaffected by word frequency, with the exception of finding very rare character phrases not in the training data, and the topic model proved to be robust across words of both high and low frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Word Length and Frequency",
"sec_num": "7.2"
},
{
"text": "In this paper we demonstrated that the addition of semantic features can significantly improve transliteration accuracy. Specifically, it is possible to outperform the top-1 accuracy of a state-of-the-art phrase-based SMT transliteration baseline through the addition of a bilingual topic model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Furthermore, our extended model is able to produce a considerable improvement in accuracy even for out-of-domain source words that have an unknown topic distribution. The experimental data set was constructed to simulate the task of extracting unknown word pairs from a comparable corpus, however our extension model results suggest that it will be possible to extract high-quality transliterations from larger and less comparable corpora than ever before.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "In the future we would like to explore in depth the improvements to machine translation made possible by this approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "This paper considers both 'transliteration' (EN-XX) and 'back-transliteration' (XX-EN). For simplicity we refer to both tasks as 'transliteration'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Heuristic rules included extraction of Japanese katakana, a script used primarily for transliterations, and words aligned with proper nouns as defined in a name dictionary.3 http://orchid.kuee.kyoto-u.ac.jp/~john",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their feedback. The first author is supported by a Japanese Government (MEXT) research scholarship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Statistical Method for English to Kannada Transliteration",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Antony",
"suffix": ""
},
{
"first": "V",
"middle": [
"P"
],
"last": "Ajith",
"suffix": ""
},
{
"first": "K",
"middle": [
"P"
],
"last": "Soman",
"suffix": ""
}
],
"year": 2010,
"venue": "BAIP",
"volume": "2010",
"issue": "",
"pages": "356--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.J. Antony, V.P. Ajith, and K.P. Soman 2010. Statistical Method for English to Kannada Transliteration. BAIP 2010, CCIS 70, pp. 356- 362.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent Dirichlet Allocation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Blei, Andrew Ng and Michael Jordan. 2003. Latent Dirichlet Allocation. In The Journal of Machine Learning Research, Volume 3.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatically Harvesting Katakana-English Term Pairs from Search Engine Query Logs",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Kacmarcik",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Sixth Natural Language Processing Pacific Rim Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill, Gary Kacmarcik and Chris Brock- ett. 2001. Automatically Harvesting Katakana- English Term Pairs from Search Engine Query Logs In Proceedings of the Sixth Natural Lan- guage Processing Pacific Rim Symposium, 2001.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Statistical Transliteration for Cross Language Information Retrieval using HMM alignment model and CRF",
"authors": [
{
"first": "Surya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Sree",
"middle": [],
"last": "Harsha",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Pingali",
"suffix": ""
}
],
"year": 2008,
"venue": "2nd International Workshop on Cross Language Information Access",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Surya Ganesh, Sree Harsha, Prasad Pingali, Va- sudeva Varma. 2008. Statistical Transliteration for Cross Language Information Retrieval using HMM alignment model and CRF. In 2nd Inter- national Workshop on Cross Language Informa- tion Access, IJCNLP 2008.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning Bilingual Lexicons from Monolingual Corpora",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi, Percy Liang, Taylor Berg- Kirkpatrick and Dan Klein. 2008. Learning Bilingual Lexicons from Monolingual Corpora. In ACL 2008.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent Class Transliteration based on Source Language Origin",
"authors": [
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masato Hagiwara and Satoshi Sekine. 2011. La- tent Class Transliteration based on Source Lan- guage Origin. In ACL 2011.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Latent Semantic Transliteration using Dirichlet Mixture",
"authors": [
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masato Hagiwara and Satoshi Sekine. 2012. La- tent Semantic Transliteration using Dirichlet Mixture. In ACL 2012.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Hybrid Approach to English-Korean Name Transliteration",
"authors": [
{
"first": "Gumwon",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Min-Jeong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Do-Gil",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hae-Chang",
"middle": [],
"last": "Rim",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of 2009 Named Entities Workshop, ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gumwon Hong, Min-Jeong Kim, Do-Gil Lee and Hae-Chang Rim. 2009. A Hybrid Approach to English-Korean Name Transliteration. In Proceedings of 2009 Named Entities Workshop, ACL-IJCNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Transliteration Generation and Mining with Limited Training Resources",
"authors": [],
"year": 2010,
"venue": "Proceedings of the 2010 Named Entities Workshop, ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Transliteration Generation and Mining with Limited Training Resources. In Proceedings of the 2010 Named Entities Workshop, ACL 2010.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Splitting Noun Compounds via Monolingual and Bilingual Paraphrasing: A Study on Japanese Katakana Words",
"authors": [
{
"first": "Nobuhiro",
"middle": [],
"last": "Kaji",
"suffix": ""
},
{
"first": "Masaru",
"middle": [],
"last": "Kitsuregawa",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobuhiro Kaji and Masaru Kitsuregawa. 2011. Splitting Noun Compounds via Monolingual and Bilingual Paraphrasing: A Study on Japanese Katakana Words. In EMNLP 2011.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improved backing-off for m-gram language modeling",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Kneser",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Kneser and Hermann Ney. 1995. Im- proved backing-off for m-gram language model- ing. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume 1.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Ma- chine Translation. In ACL 2007.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semantic Transliteration of Personal Names",
"authors": [
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Khe Chai",
"middle": [],
"last": "Sim",
"suffix": ""
},
{
"first": "Jin-Shea",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Minghui",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haizhou Li, Khe Chai Sim, Jin-Shea Kuo, Minghui Dong. 2007. Semantic Transliteration of Per- sonal Names. In ACL 2007.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Machine Transliteration of Proper Names",
"authors": [
{
"first": "David",
"middle": [],
"last": "Matthews",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Matthews. 2007. Machine Transliteration of Proper Names. Masters Thesis, School of In- formatics, University of Edinburgh.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Polylingual topic models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mimno, Hanna Wallach, Jason Naradowsky, David Smith and Andrew McCallum. 2009. Polylingual topic models. In EMNLP 2009.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic Acquisition of Basic Katakana Lexicon from a Given Corpus",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Daisuke Kawahara and Sadao Kurohashi. 2005. Automatic Acquisition of Ba- sic Katakana Lexicon from a Given Corpus. In IJCNLP 2005.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mining Multilingual Topics from Wikipedia",
"authors": [
{
"first": "Xiaochuan",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Jian-Tao",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaochuan Ni, Jian-Tao Sun, Jian Hu, Zheng Chen. 2009. Mining Multilingual Topics from Wikipedia. In WWW 2009.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language independent transliteration mining system using finite state automata framework",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Noeman",
"suffix": ""
},
{
"first": "Amgad",
"middle": [],
"last": "Madkour",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Noeman and Amgad Madkour. 2010. Lan- guage independent transliteration mining sys- tem using finite state automata framework. In Proceedings of the 2010 Named Entities Work- shop.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Minimum Error Rate Training in Statistical Machine Translation",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In ACL 2003.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Systematic Comparison of Various Statistical Alignment Models",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Och and Hermann Ney. 2003. A System- atic Comparison of Various Statistical Align- ment Models. In Computational Linguistics 2003.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BLEU: A method for automatic evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. BLEU: A method for auto- matic evaluation of Machine Translation. Tech- nical Report RC22176, IBM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "GibbsLDA++: A C/C++ implementation of latent Dirichlet allocation (LDA)",
"authors": [
{
"first": "Xuan-Hieu",
"middle": [],
"last": "Phan",
"suffix": ""
},
{
"first": "Cam-Tu",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuan-Hieu Phan and Cam-Tu Nguyen. 2007. GibbsLDA++: A C/C++ implementation of la- tent Dirichlet allocation (LDA).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Identifying Word Translations in Non-Parallel Texts",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 1995,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 1995. Identifying Word Transla- tions in Non-Parallel Texts. In ACL 1995.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SRILM -An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICSLP",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -An Extensible Language Modeling Toolkit. In Proceedings of ICSLP, Volume 2.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bilingual Lexicon Extraction from Comparable Corpora Using Label Propagation",
"authors": [
{
"first": "Akihiro",
"middle": [],
"last": "Tamura",
"suffix": ""
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akihiro Tamura, Taro Watanabe and Eiichiro Sumita. 2012. Bilingual Lexicon Extraction from Comparable Corpora Using Label Propa- gation. In EMNLP-CoNLL 2012.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The Nature of Statistical Learning Theory",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag, New York.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Identifying Word Translations from Comparable Corpora Using Latent Topic Models",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "De",
"middle": [],
"last": "Wim",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Smet",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Wim De Smet and Marie-Francine Moens. 2011. Identifying Word Translations from Comparable Corpora Using Latent Topic Models. In ACL 2011.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Example of Japanese-English transliteration phrase alignment. tion of an easily reproducable baseline system.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "(Chinese and Arabic), Hong et al. (2009) (Korean), and Antony et al. (2010) (Kannada)",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Graphical model for Bilingual LDA with K topics, D document pairs and hyperparameters \u03b1 and \u03b2. Topics for each document are sampled from the common distribution \u03b8, and the two languages have word-topic distributions \u03d5 and \u03c8.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "We extracted aligned title pairs (only transliterations) and aligned document pairs from Wikipedia using inter-language links. The baseline was trained and tuned on title pairs('Train' and 'Tune'",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "http://orchid.kuee.kyoto-u.ac.jp/~john",
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"text": "Top-1 accuracy of SVM for various K.",
"uris": null,
"type_str": "figure"
},
"FIGREF7": {
"num": null,
"text": "Precision-recall curve for SVM hybrid model.",
"uris": null,
"type_str": "figure"
},
"FIGREF8": {
"num": null,
"text": "(EN-AR 0.43, AR-EN 0.39, EN-ZH 0.38, ZH-EN 0.35) and Hong et al. (2009) (EN-KO 0.45). These scores are consistent with our baseline results.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>compares the top-1 accuracy of our</td></tr><tr><td>proposed hybrid models to the baseline perfor-</td></tr></table>",
"html": null,
"text": ""
}
}
}
}