{ "paper_id": "I08-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:47.220430Z" }, "title": "Multi-View Co-training of Transliteration Model", "authors": [ { "first": "Jin-Shea", "middle": [], "last": "Kuo", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper discusses a new approach to training of transliteration model from unlabeled data for transliteration extraction. We start with an inquiry into the formulation of transliteration model by considering different transliteration strategies as a multi-view problem, where each view exploits a natural division of transliteration features, such as phonemebased, grapheme-based or hybrid features. Then we introduce a multi-view Cotraining algorithm, which leverages compatible and partially uncorrelated information across different views to effectively boost the model from unlabeled data. Applying this algorithm to transliteration extraction, the results show that it not only circumvents the need of data labeling, but also achieves performance close to that of supervised learning, where manual labeling is required for all training samples.", "pdf_parse": { "paper_id": "I08-1049", "_pdf_hash": "", "abstract": [ { "text": "This paper discusses a new approach to training of transliteration model from unlabeled data for transliteration extraction. We start with an inquiry into the formulation of transliteration model by considering different transliteration strategies as a multi-view problem, where each view exploits a natural division of transliteration features, such as phonemebased, grapheme-based or hybrid features. Then we introduce a multi-view Cotraining algorithm, which leverages compatible and partially uncorrelated information across different views to effectively boost the model from unlabeled data. Applying this algorithm to transliteration extraction, the results show that it not only circumvents the need of data labeling, but also achieves performance close to that of supervised learning, where manual labeling is required for all training samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Named entities are important content words in text documents. In many applications, such as crosslanguage information retrieval (Meng et al., 2001; Virga and Khudanpur, 2003) and machine translation (Knight and Graehl, 1998; Chen et al., 2006) , one of the fundamental tasks is to identify these words. Imported foreign proper names constitute a good portion of such words, which are newly translated into Chinese by transliteration. Transliteration is a process of translating a foreign word into the native language by preserving its pronunciation in the original language, otherwise known as translation-by-sound.", "cite_spans": [ { "start": 128, "end": 147, "text": "(Meng et al., 2001;", "ref_id": "BIBREF13" }, { "start": 148, "end": 174, "text": "Virga and Khudanpur, 2003)", "ref_id": "BIBREF22" }, { "start": 199, "end": 224, "text": "(Knight and Graehl, 1998;", "ref_id": "BIBREF6" }, { "start": 225, "end": 243, "text": "Chen et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As new words emerge everyday, no lexicon is able to cover all transliterations. 
It is desirable to find ways to harvest transliterations from real world corpora. In this paper, we are interested in the learning of English to Chinese (E-C) transliteration model for transliteration extraction from the Web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A statistical transliteration model is typically trained on a large amount of transliteration pairs, also referred to a bilingual corpus. The correspondence between a transliteration pair may be described by the mapping of different basic pronunciation units (BPUs) such as phonemebased 1 , or grapheme-based one, or both. We can see each type of BPU mapping as a natural division of transliteration features, which represents a view to the phonetic mapping problem. By using different BPUs, we approach the transliteration modeling and extraction problems from different views.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows. In Section 2, we briefly introduce previous work. In Section 3, we conduct an inquiry into the formulation of transliteration model or phonetic similarity model (PSM) and consider it as a multi-view problem. In Section 4, we propose a multi-view Co-training strategy for PSM training and transliteration extraction. In Section 5, we study the effectiveness of proposed algorithms. Finally, we conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Studies on transliteration have been focused on transliteration modeling and transliteration extraction. The transliteration modeling approach deduces either phoneme-based or grapheme-based mapping rules using a generative model that is trained from a large bilingual corpus. Most of the works are devoted to phoneme-based transliteration modeling (Knight and Graehl, 1998; Lee, 1999) . Suppose that EW is an English word and CW is its Chinese transliteration. EW and CW form an E-C transliteration pair. The phoneme-based approach first converts EW into an intermediate phonemic representation p, and then converts p into its Chinese counterpart CW. The idea is to transform both source and target words into comparable phonemes so that the phonetic similarity between two words can be measured easily.", "cite_spans": [ { "start": 348, "end": 373, "text": "(Knight and Graehl, 1998;", "ref_id": "BIBREF6" }, { "start": 374, "end": 384, "text": "Lee, 1999)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently the grapheme-based approach has attracted much attention. It was proposed by Jeong et al. (1999) , Li et al. (2004) and many others (Oh et al., 2006b) , which is also known as direct orthography mapping. It treats the transliteration as a statistical machine translation problem under monotonic constraint. The idea is to obtain the bilingual orthographical correspondence directly to reduce the possible errors introduced in multiple conversions. However, the grapheme-based transliteration model has more parameters than phoneme-based one does, thus expects a larger training corpus.", "cite_spans": [ { "start": 86, "end": 105, "text": "Jeong et al. (1999)", "ref_id": "BIBREF5" }, { "start": 108, "end": 124, "text": "Li et al. 
(2004)", "ref_id": "BIBREF11" }, { "start": 141, "end": 159, "text": "(Oh et al., 2006b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Most of the reported works have been focused on either phoneme-or grapheme-based approaches. Bilac and Tanaka (2004) and Oh et al. (2006a; 2006b ) recently proposed using a mix of phoneme and grapheme features, where both features are fused into a single learning process. The feature fusion was shown to be effective. However, their methods hinge on the availability of a labeled bilingual corpus.", "cite_spans": [ { "start": 93, "end": 116, "text": "Bilac and Tanaka (2004)", "ref_id": "BIBREF0" }, { "start": 121, "end": 138, "text": "Oh et al. (2006a;", "ref_id": "BIBREF17" }, { "start": 139, "end": 144, "text": "2006b", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In transliteration extraction, mining translations or transliterations from the ever-growing multilingual Web has become an active research topic, for example, by exploring query logs (Brill et al., 2001 ) and parallel (Nie et al., 1999) or comparable corpora (Sproat et al., 2006) . Transliterations in such a live corpus are typically unlabeled.", "cite_spans": [ { "start": 184, "end": 203, "text": "(Brill et al., 2001", "ref_id": "BIBREF2" }, { "start": 219, "end": 237, "text": "(Nie et al., 1999)", "ref_id": "BIBREF15" }, { "start": 260, "end": 281, "text": "(Sproat et al., 2006)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For model-based transliteration extraction, recent progress in machine learning offers different options to exploit unlabeled data, that include active learning (Lewis and Catlett, 1994) and Co-training (Nigam and Ghani, 2000; T\u00fcr et al. 2005) .", "cite_spans": [ { "start": 161, "end": 186, "text": "(Lewis and Catlett, 1994)", "ref_id": "BIBREF10" }, { "start": 203, "end": 226, "text": "(Nigam and Ghani, 2000;", "ref_id": "BIBREF16" }, { "start": 227, "end": 243, "text": "T\u00fcr et al. 2005)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Taking the prior work a step forward, this paper explores a new way of fusing phoneme and grapheme features through a multi-view Cotraining algorithm (Blum and Mitchell, 1998) , which starts with a small number of labeled data to bootstrap a transliteration model to automatically harvest transliterations from the Web.", "cite_spans": [ { "start": 150, "end": 175, "text": "(Blum and Mitchell, 1998)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Machine transliteration can be formulated as a generative process, which takes a character string in source language as input and generates a character string in the target language as output. Conceptually, this process can be regarded as a 3step decoding: segmentation of both source and target strings into basic pronunciation units (BPUs), relating the source BPUs with target units by resolving different combinations of alignments and unit mappings in finding the most probable BPU pairs. A BPU can be defined as a phoneme sequence, a grapheme sequence, or a part of them. 
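As a minimal illustration of how the choice of BPU changes the view, the sketch below segments one E-C pair under a phoneme-based view and a grapheme-based view; the pair and its segmentations are assumed here purely for exposition and are not drawn from the paper's data.

# Illustrative only: one hypothetical E-C pair segmented under two views.
# The phoneme-based view aligns English phone clusters with Chinese syllables (pinyin);
# the grapheme-based view aligns letter substrings with Chinese characters.
bpu_views = {
    'phoneme-based (V1)': [('L AH N', 'lun'), ('D AH N', 'dun')],
    'grapheme-based (V4)': [('Lon', '\u4f26'), ('don', '\u6566')],
}
for view, alignment in bpu_views.items():
    # each aligned BPU pair is one mapping feature visible to that view
    print(view, alignment)

Each view thus exposes a different, largely independent set of mapping features for the same underlying pair.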
A transliteration model establishes the phonetic relationship between BPUs in two languages to measure their similarity; it is therefore also known as the phonetic similarity model (PSM).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "To introduce the multi-view concept, we illustrate the BPU transfers in Figure 1 , where each transfer is represented by a direct path with a different line style. There are altogether four different paths: the phoneme-based path V 1 (T 1 \u2192T 2 \u2192T 3 ), the grapheme-based path V 4 (T 4 ), and their variants, V 2 (T 1 \u2192T 5 ) and V 3 (T 6 \u2192T 3 ). The last two paths make use of the intermediate BPU mappings between phonemes and graphemes. Each of the paths represents a view of the mapping problem. Given a labeled bilingual corpus, we are able to train a transliteration model for each view easily. E-C transliteration has been studied extensively in the paradigm of the noisy channel model (Manning and Scheutze, 1999) , with EW as the observation and CW as the input to be recovered. Applying Bayes' rule, the transliteration can be described by Eq. (1),", "cite_spans": [ { "start": 730, "end": 758, "text": "(Manning and Scheutze, 1999)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 72, "end": 80, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "P(CW|EW) = P(EW|CW) \u00d7 P(CW) / P(EW) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "where we need to deal with two probability distributions: P(EW|CW), the probability of transliterating CW to EW, also known as the unit mapping rules, and P(CW), the probability distribution of CW, known as the target language model. Taking the phoneme-based path as an example, with EP and CP denoting the English and Chinese phonemic representations, a typical transliteration probability can be expressed as,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "P(EW|CW) \u2248 P(EW|EP) \u00d7 P(EP|CP) \u00d7 P(CP|CW). (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "The language model, P(CW), can be represented by Chinese character n-gram statistics (Manning and Scheutze, 1999) and expressed in Eq. (3). In the case of bigrams, we have,", "cite_spans": [ { "start": 86, "end": 114, "text": "(Manning and Scheutze, 1999)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(CW) \u2248 P(c_1) \u220f_{n=2}^{N} P(c_n | c_{n-1})", "eq_num": "(3)" } ], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "We next rewrite Eq.
(2) for the four different views depicted in Figure 1 in a systematic manner.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 73, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phonetic Similarity Model with Multiple Views", "sec_num": "3" }, { "text": "The phoneme-based approach approximates the transliteration probability distribution by introducing an intermediate phonemic representation. In this way, we convert the words in the source language, say EW = e_1 e_2 ... e_K, into English syllables ES, then Chinese syllables CS, and finally the target language, say Chinese CW = c_1 c_2 ... c_N, in sequence. Eq. (2) can be rewritten by replacing EP and CP with ES and CS, respectively, and expressed by Eq. (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phoneme-based Approach", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(EW|CW) \u2248 P(EW|ES) \u00d7 P(ES|CS) \u00d7 P(CS|CW)", "eq_num": "(4)" } ], "section": "Phoneme-based Approach", "sec_num": "3.1" }, { "text": "The three probabilities correspond to the three-step mapping along the V 1 path. The phoneme-based approach suffers from its multiple mapping steps. This could compromise overall performance because none of the three steps guarantees a perfect conversion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phoneme-based Approach", "sec_num": "3.1" }, { "text": "The grapheme-based approach is inspired by the transfer model (Vauqois, 1988) in machine translation, which estimates P(EW|CW) directly without an interlingua representation. This method aims to alleviate the imprecision introduced by the multiple transfers in the phoneme-based approach.", "cite_spans": [ { "start": 62, "end": 77, "text": "(Vauqois, 1988)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Grapheme-based Approach", "sec_num": "3.2" }, { "text": "In practice, a grapheme-based approach converts the English graphemes to Chinese graphemes in one single step. Suppose that we have ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme-based Approach", "sec_num": "3.2" }, { "text": "Eq. (5) is a grapheme-based alternative to Eq. (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme-based Approach", "sec_num": "3.2" }, { "text": "A tradeoff between the phoneme- and grapheme-based approaches is to take shortcuts to the mapping between phonemes and graphemes of the two languages via V 2 or V 3 , where only two mapping steps are involved. For V 3 , we rewrite Eq. (2) as Eq. (6):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": "P(EW|CW) = P(EW|CS) \u00d7 P(CS|CW), (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": "where P(EW|CS) translates Chinese sounds into English words. For V 2 , we rewrite Eq. (2) as Eq.
(7):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": "P(EW|CW) = P(EW|ES) \u00d7 P(ES|CW), (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": "where P(ES|CW) translates Chinese words into English sounds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": "Eqs. (4)-(7) describe the four transliteration paths. In a multi-view problem, one partitions the domain's features into subsets, each of which is sufficient for learning the target concept. Here the target concept is the label of a transliteration pair. Given a collection of E-C pair candidates, the transliteration extraction task can be formulated as a hypothesis test, which makes a binary decision as to whether a candidate E-C pair is a genuine transliteration pair or not. Given an E-C pair X={EW,CW}, we have H_0, which hypothesizes that EW and CW form a genuine E-C pair, and H_1, which hypothesizes otherwise. The likelihood ratio is given as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": "\u03c3 = P(X|H_0) / P(X|H_1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": ", where P(X|H_0) and P(X|H_1) are derived from P(EW|CW). By comparing \u03c3 with a threshold \u03c4 , we make the binary decision as in (Kuo et al., 2007) .", "cite_spans": [ { "start": 142, "end": 160, "text": "(Kuo et al., 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": "As discussed, each view takes a distinct path that has its own advantages and disadvantages in terms of model expressiveness and complexity. Each view represents a weak learner achieving moderately good performance towards the target concept. Next, we study a multi-view Co-training process that leverages data across different views in order to boost the accuracy of the PSM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Approach", "sec_num": "3.3" }, { "text": "The PSM can be trained in a supervised manner using a manually labeled corpus. The advantage of supervised learning is that we can establish a model quickly as long as labeled data are available. However, this method suffers from some practical constraints. First, the derived model can only be as good as the data it sees. Second, the labeling of a corpus is labor-intensive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-View Learning Framework", "sec_num": "4" }, { "text": "To circumvent the need of manual labeling, here we study three adaptive strategies cast in the machine learning framework, namely unsupervised learning, Co-training and Co-EM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-View Learning Framework", "sec_num": "4" }, { "text": "Unsupervised learning minimizes human supervision by probabilistically labeling data through an Expectation-Maximization (EM) (Dempster et al., 1977) process. The unsupervised learning strategy is depicted by the dotted path in Figure 2 , where the extraction process accumulates all the acquired transliteration pairs in a repository for training a new PSM. The new PSM is in turn used to extract new transliteration pairs. 
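A minimal sketch of this bootstrap loop is given below; train_psm, score_pair, and the confidence threshold are hypothetical placeholders of ours, not components defined in this paper.

def bootstrap_psm(seed_pairs, candidate_pairs, train_psm, score_pair,
                  iterations=6, threshold=0.5):
    # seed_pairs: the few manually labeled E-C pairs used to bootstrap the model
    # candidate_pairs: unlabeled E-C pair candidates collected from the corpus
    pool = list(seed_pairs)
    psm = train_psm(pool)                 # initial PSM from the seed set
    for _ in range(iterations):
        # keep the candidates the current PSM scores above the threshold ...
        extracted = [p for p in candidate_pairs if score_pair(psm, p) > threshold]
        # ... accumulate them in the repository and re-estimate the PSM
        pool = list(seed_pairs) + extracted
        psm = train_psm(pool)
    return psm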
The unsupervised learning approach only needs a few labeled samples to bootstrap the initial model for further extraction. Note that the training samples are noisy and hence the quality of initial PSM therefore has a direct impact on the final performance.", "cite_spans": [ { "start": 130, "end": 153, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 217, "end": 225, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Unsupervised Learning", "sec_num": "4.1" }, { "text": "The multi-view setting (Muslea et al., 2002) applies to learning problems that have a natural way to divide their features into different views, each of which is sufficient to learn the target concept. Blum and Mitchell (1998) proved that for a problem with two views, the target concept can be learned based on a few labeled and many unlabeled examples, provided that the views are compatible and uncorrelated. Intuitively, the transliteration problem has compatible views. If an E-C pair forms a transliteration, then this is true across all different views. However, it is arguable that the four views in Figure 1 are uncorrelated. Studies (Nigam and Ghani, 2000; Muslea et al., 2002) shown that the views do not have to be entirely uncorrelated for Co-training to take effect. This motivates our attempt to explore multi-view Co-training for learning models in transliteration extraction. To simplify the discussion, here we take a twoview (V 1 and V 2 ) example to show how Cotraining can potentially help. To start with, one can learn a weak hypothesis PSM 1 using V 1 based on a few labeled examples and then apply PSM 1 to all unlabeled examples. If the views are uncorrelated, or at least partially uncorrelated, these newly labeled examples seen from V 1 augment the training set for V 2 . These newly labeled examples present new information from the V 2 point of view, from which one can in turn update the PSM 2 . As the views are compatible, both V 1 and V 2 label the samples consistently according to the same probabilistic transliteration criteria. In this way, PSMs are boosted each other through such an iterative process between two different views. Table 1 . Co-training with two learners.", "cite_spans": [ { "start": 23, "end": 44, "text": "(Muslea et al., 2002)", "ref_id": "BIBREF14" }, { "start": 202, "end": 226, "text": "Blum and Mitchell (1998)", "ref_id": "BIBREF1" }, { "start": 643, "end": 666, "text": "(Nigam and Ghani, 2000;", "ref_id": "BIBREF16" }, { "start": 667, "end": 687, "text": "Muslea et al., 2002)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 608, "end": 616, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1670, "end": 1677, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Co-training and Co-EM", "sec_num": "4.2" }, { "text": "Extending the two-view to multi-view, one can develop multiple learners from several subsets of features, each of which approaches the problem from a unique perspective, called a view when taking the Co-training path in Figure 2 . Finally, we use outputs from multi-view learners to approximate the manual labeling. The multi-view learning is similar to unsupervised learning in the sense that the learning alleviates the need of labeling and starts with very few labeled data. However, it is also different from the unsupervised learning because the latter does not leverage the natural split of compatible and uncorrelated features. 
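The cross-view boosting just described can be sketched as one iteration of the following loop; this is an illustrative sketch with hypothetical helper names (label_with, train_psm), not the exact procedures of Tables 1 and 2 summarized next.

def co_training_round(psm_1, psm_2, labeled, unlabeled, train_psm, label_with):
    # label_with(psm, unlabeled) returns candidate pairs that one view labels
    # with high confidence; train_psm(pairs) re-estimates a PSM from them
    new_from_v1 = label_with(psm_1, unlabeled)   # view V1 labels new examples,
    psm_2 = train_psm(labeled + new_from_v1)     # which augment training for V2
    new_from_v2 = label_with(psm_2, unlabeled)   # V2 then labels examples,
    psm_1 = train_psm(labeled + new_from_v2)     # which update the V1 model
    return psm_1, psm_2

Only pairs labeled with high confidence would be exchanged in practice, so the pool of training pairs grows gradually over the iterations.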
Two variants of two-view learning strategy can be summarized in Table 1 and Table 2 , where the algorithm in Table 1 is referred to as Cotraining and the one in Table 2 as Co-EM (Nigam and Ghani. 2000; Muslea et al., 2002) .", "cite_spans": [ { "start": 813, "end": 836, "text": "(Nigam and Ghani. 2000;", "ref_id": "BIBREF16" }, { "start": 837, "end": 857, "text": "Muslea et al., 2002)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 220, "end": 228, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 699, "end": 718, "text": "Table 1 and Table 2", "ref_id": null }, { "start": 744, "end": 751, "text": "Table 1", "ref_id": null }, { "start": 796, "end": 803, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Co-training and Co-EM", "sec_num": "4.2" }, { "text": "In Co-training, Learners A and B are trained on the same training data and updated simultaneously. In Co-EM, Learners A and B are trained on labeled set predicted by each other's view, with their models being updated in sequence. In other words, the Co-EM algorithm interchanges the probabilistic labels generated in the view of each other before a new EM iteration. In both cases, the unsupervised, multi-view algorithms use the hypotheses learned to probabilistically label the examples. Table 2 . Co-EM with two learners.", "cite_spans": [], "ref_spans": [ { "start": 490, "end": 497, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Co-training and Co-EM", "sec_num": "4.2" }, { "text": "The extension of algorithms in Table 1 and 2 to the multi-view transliteration problem is straightforward. After an ensemble of learners are trained, the overall PSM can be expressed as a linear combination of the learners,", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 38, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Co-training and Co-EM", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 ( | ) ( | ), n i i i P EW CW w P EW CW = = \u2211", "eq_num": "(8)" } ], "section": "Co-training and Co-EM", "sec_num": "4.2" }, { "text": "where i w is the weight of i th learner (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-training and Co-EM", "sec_num": "4.2" }, { "text": "i P EW CW , which can be learnt by using a development corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "| )", "sec_num": null }, { "text": "To validate the effectiveness of the learning framework, we conduct a series of experiments in transliteration extraction on a development corpus described later. First, we repeat the experiment in (Kuo et al., 2006) to train a PSM using PSA and GSA feature fusion in a supervised manner, which serves as the upper bound of Co-training or Co-EM system performance. We then train the PSMs with single view V1, V2, V3 and V4 alone in an unsupervised manner. The performance achieved by each view alone can be considered as the baseline for multi-view benchmarking. Then, we run two-view Co-training for different combinations of views on the same development corpus. We expect to see positive effects with the multi-view training. 
Finally, we run the experiments using two-view Co-training and Co-EM and compare the results.", "cite_spans": [ { "start": 198, "end": 216, "text": "(Kuo et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "A 500 MB development corpus is constructed by crawling pages from the Web for the experiments. We first establish a gold standard for performance evaluation by manually labeling the corpus based on the following criteria: (i) if an EW is partly Given a). A small set of labeled samples and a set of unlabeled samples. b). Learner A is trained on a labeled set to predict the labels of the unlabeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "1) Loop for k iterations a). Learner B is trained on data labeled by Learner A to predict the labels of the unlabeled data; b). Learner A is trained on data labeled by Learner B to predict the labels of the unlabeled data; 2) Combine models from Learners A and B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Given: a). A small set of labeled samples and a set of unlabeled samples. b). Two learners A and B are trained on the labeled set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "1) Loop for k iterations: a). Learners A and B predict the labels of the unlabeled data to augment the labeled set; b). Learners A and B are trained on the augmented labeled set. 2) Combine models from Learners A and B. translated phonetically and partly translated semantically, only the phonetic transliteration constituent is extracted to form a transliteration pair; (ii) multiple E-C pairs can appear in one sentence; (iii) an EW can have multiple valid Chinese transliterations and vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We first derive 80,094 E-C pair candidates from the 500 MB corpus by spotting the co-occurrence of English and Chinese words in the same sentences. This can be done automatically without human intervention. Then, the manual labeling process results in 8,898 qualified E-C pairs, also referred to as Distinct Qualified Transliteration Pairs (DQTPs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "To establish comparison, we first train a PSM using all 8,898 DQTPs in a supervised manner and conduct a closed test as reported in Table 3 . We further implement three PSM learning strategies and conduct a systematic series of experiments by following the recognition followed by validation strategy proposed in (Kuo et al., 2007) .", "cite_spans": [ { "start": 313, "end": 331, "text": "(Kuo et al., 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 132, "end": 139, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "F-measure Closed test 0.834 0.663 0.739 Table 3 . 
Performance with PSM trained in the supervised manner.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 47, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Precision Recall", "sec_num": null }, { "text": "For performance benchmarking, we define the precision as the ratio of extracted number of DQTPs over that of total extracted pairs, recall as the ratio of extracted number of DQTPs over that of total DQTPs, and F-measure as in Eq. (9). They are collectively referred to as extraction performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Precision Recall", "sec_num": null }, { "text": "2 recall precision F measure recall precision", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Precision Recall", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u00d7 \u00d7 \u2212 = +", "eq_num": "(9)" } ], "section": "Precision Recall", "sec_num": null }, { "text": "As formulated in Section 4.1, first, we derive an initial PSM using randomly selected 100 seed DQTPs for each learner and simulate the Webbased learning process: (i) extract E-C pairs using the PSM; (ii) add all of the extracted E-C pairs to the DQTP pool; (iii) re-estimate the PSM for each view by using the updated DQTP pool. This process is also known as semi-supervised EM (Muslea et al., 2002) .", "cite_spans": [ { "start": 378, "end": 399, "text": "(Muslea et al., 2002)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Learning", "sec_num": "5.1" }, { "text": "As shown in Figure 3 , the unsupervised learning algorithm consistently improves the initial PSM using in all four views. To appreciate the effectiveness of each view, we report the Fmeasures on each individual view V 1 , V 2 , V 3 and V 4, as 0.680, 0.620, 0.541 and 0.520, respectively at the 6 th iteration. We observe that V 1 , the phonemebased path, achieves the best result. Figure 3. F-measure over iterations using unsupervised learning with individual view.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Unsupervised Learning", "sec_num": "5.1" }, { "text": "We report three typical combinations of two coworking learners or two-view Co-training. Like in unsupervised learning, we start with the same 100 seed DQTPs and an initial PSM model by following the algorithm in Table 1 over 6 iterations.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Co-training (CT)", "sec_num": "5.2" }, { "text": "With two-view Co-training, we obtain 0.726, 0.705, 0.590 and 0.716 in terms of F-measures for V 1 +V 2 , V 2 +V 3 , V 3 +V 4 and V 1 +V 4 at the 6 th iteration, as shown in Figure 4 . Comparing Figure 3 and 4, we find that Co-training consistently outperforms unsupervised learning by exploiting compatible information across different views. The V 1 +V 2 Co-training outperforms other Co-training combinations, and surprisingly achieves close performance to that of supervised learning. Next we start with the same 100 seed DQTPs by initializing the training pool and carry out Co-EM on the same corpus. We build PSM 1 for Learner A and PSM 2 for Learner B. To start with, PSM 1 is learnt from the initial labeled set. 
We then follow the algorithm in Table 2 by looping in the following two steps over 6 iterations: (i) estimate the PSM 2 from the samples labeled by Learner A (V 1 ) to extract the high confident E-C pairs and augment the DQTP pool with the probabilistically labeled E-C pairs; (ii) estimate the PSM 1 from the samples labeled by Learner B (V 2 ) to extract the high confident E-C pairs and augment the DQTP pool with the probabilistically labeled E-C pairs. We report the results in Figure 5 . To summarize, we compare the performance of six learning methods studied in this paper in Table 4 . The Co-training and Co-EM learning approaches have alleviated the need of manual labeling, yet achieving performance close to supervised learning. The multi-view learning effectively leverages multiple compatible and partially uncorrelated views. It reduces the need of labeled samples from 80,094 to just 100.", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 181, "text": "Figure 4", "ref_id": "FIGREF5" }, { "start": 194, "end": 203, "text": "Figure 3", "ref_id": null }, { "start": 753, "end": 760, "text": "Table 2", "ref_id": null }, { "start": 1204, "end": 1212, "text": "Figure 5", "ref_id": "FIGREF6" }, { "start": 1305, "end": 1313, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Co-training (CT)", "sec_num": "5.2" }, { "text": "We also compare the multi-view learning algorithm with active learning on the same development corpus using same features. We include the results from previously reported work (Kuo et al., 2006) into Table 4 (see Exp. 2) where multiple features are fused in a single active learning process. In Exp. 2, PSA feature is the equivalent of V1 feature in Exp. 4; GSA feature is the equivalent of V4 feature in Exp. 4. In Exp. 4, we carry out V1+V4 two-view Co-training. It is interesting to find that the multi-view learning in this paper achieves better results than active learning in terms of F-measure while reducing the need of manual labeling from 8,191 samples to just 100.", "cite_spans": [ { "start": 176, "end": 194, "text": "(Kuo et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 200, "end": 207, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Co-training (CT)", "sec_num": "5.2" }, { "text": "Exp. Learning algorithm Fmeasure # of samples to label 1 Supervised 0.739 80,094 2 Active Learning (Kuo et al., 2006) 0.710 8,191 3 Unsupervised (V 1 ) 0.680 100 4", "cite_spans": [ { "start": 99, "end": 131, "text": "(Kuo et al., 2006) 0.710 8,191 3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Co-training (CT)", "sec_num": "5.2" }, { "text": "Co-training (V 1 +V 4 ) 0.716 100 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-training (CT)", "sec_num": "5.2" }, { "text": "Co-training (V 1 +V 2 ) 0.726 100 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-training (CT)", "sec_num": "5.2" }, { "text": "Co-EM (V 1 +V 2 ) 0.725 100 Table 4 . Comparison of six learning strategies.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 35, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Co-training (CT)", "sec_num": "5.2" }, { "text": "Fusion of phoneme and grapheme features in transliteration modeling was studied in many previous works. However, it was done through the combination of phoneme and grapheme similarity scores (Bilac and Tanaka, 2004) , or by pooling phoneme and grapheme features together into a single-view training process (Oh and Choi, 2006b) . 
This paper presents a new approach that leverages the information across different views to effectively boost the learning from unlabeled data. We have shown that both Co-training and Co-EM not only outperform the unsupervised learning of single view, but also alleviate the need of data labeling. This reaffirms that multi-view is a viable solution to the learning of transliteration model and hence transliteration extraction. Moving forward, we believe that contextual feature in documents presents another compatible, uncorrelated, and complementary view to the four views.", "cite_spans": [ { "start": 191, "end": 215, "text": "(Bilac and Tanaka, 2004)", "ref_id": "BIBREF0" }, { "start": 307, "end": 327, "text": "(Oh and Choi, 2006b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We validate the effectiveness of the proposed algorithms by conducting experiments on transliteration extraction. We hope to extend the work further by investigating the possibility of applying the multi-view learning algorithms to machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Both phoneme and syllable based approaches are referred to as phoneme-based in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Improving backtransliteration by combining information sources", "authors": [ { "first": "S", "middle": [], "last": "Bilac", "suffix": "" }, { "first": "H", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 2004, "venue": "Proc. of Int'l Joint Conf. on Natural Language Processing", "volume": "", "issue": "", "pages": "542--547", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bilac and H. Tanaka. 2004. Improving back- transliteration by combining information sources, In Proc. of Int'l Joint Conf. on Natural Language Processing, pp. 542-547.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Combining Labeled and Unlabeled Data with Co-training", "authors": [ { "first": "S", "middle": [], "last": "Blum", "suffix": "" }, { "first": "T", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 1998, "venue": "Proc. of 11 th Conference on Computational Learning Theory", "volume": "", "issue": "", "pages": "92--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Blum and T. Mitchell. 1998. Combining Labeled and Unlabeled Data with Co-training, In Proc. of 11 th Conference on Computational Learning Theory, pp. 92-100.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatically Harvesting Katakana-English Term Pairs from Search Engine Query Logs", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" }, { "first": "G", "middle": [], "last": "Kacmarcik", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2001, "venue": "Proc. of Natural Language Processing Pacific Rim Symposium (NLPPRS)", "volume": "", "issue": "", "pages": "393--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Brill, G. Kacmarcik and C. Brockett. 2001. Automatically Harvesting Katakana-English Term Pairs from Search Engine Query Logs, In Proc. of Natural Language Processing Pacific Rim Symposium (NLPPRS), pp. 
393-399.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Translating-Transliterating Named Entities for Multilingual Information Access", "authors": [ { "first": "H.-H", "middle": [], "last": "Chen", "suffix": "" }, { "first": "W.-C", "middle": [], "last": "Lin", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Yang", "suffix": "" }, { "first": "W.-H", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Journal of the American Society for Information Science and Technology", "volume": "57", "issue": "5", "pages": "645--659", "other_ids": {}, "num": null, "urls": [], "raw_text": "H.-H. Chen, W.-C. Lin, C.-H. Yang and W.-H. Lin. 2006, Translating-Transliterating Named Entities for Multilingual Information Access, Journal of the American Society for Information Science and Technology, 57(5), pp. 645-659.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Maximum Likelihood from Incomplete Data via the EM Algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society, Ser. B", "volume": "39", "issue": "", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird and D. B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm, Journal of the Royal Statistical Society, Ser. B. Vol. 39, pp. 1-38.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic Identification and Backtransliteration of Foreign Words for Information Retrieval", "authors": [ { "first": "K", "middle": [ "S" ], "last": "Jeong", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Myaeng", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Lee", "suffix": "" }, { "first": "K.-S", "middle": [], "last": "Choi", "suffix": "" } ], "year": 1999, "venue": "Information Processing and Management", "volume": "35", "issue": "", "pages": "523--540", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. S. Jeong, S. H. Myaeng, J. S. Lee and K.-S. Choi. 1999. Automatic Identification and Back- transliteration of Foreign Words for Information Retrieval, Information Processing and Management, Vol. 35, pp. 523-540.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Machine Transliteration", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "", "volume": "24", "issue": "", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Knight and J. Graehl. 1998. Machine Transliteration, Computational Linguistics, Vol. 24, No. 4, pp. 599- 612.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning Transliteration Lexicons from the Web", "authors": [ { "first": "J.-S", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y.-K", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2006, "venue": "Proc. of 44 th ACL", "volume": "", "issue": "", "pages": "1129--1136", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-S. Kuo, H. Li and Y.-K. Yang. 2006. Learning Transliteration Lexicons from the Web, In Proc. of 44 th ACL, pp. 
1129-1136.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Phonetic Similarity Model for Automatic Extraction of Transliteration Pairs", "authors": [ { "first": "J.-S", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y.-K", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2007, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "6", "issue": "2", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-S. Kuo, H. Li and Y.-K. Yang. 2007. A Phonetic Similarity Model for Automatic Extraction of Transliteration Pairs, ACM Transactions on Asian Language Information Processing. 6(2), pp. 1-24.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An English-Korean Transliteration and Retransliteration Model for Cross-Lingual Information Retrieval", "authors": [ { "first": "J.-S", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-S. Lee. 1999. An English-Korean Transliteration and Retransliteration Model for Cross-Lingual Information Retrieval, PhD Thesis, Department of Computer Science, KAIST.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Heterogeneous Uncertainty Sampling for Supervised Learning", "authors": [ { "first": "D", "middle": [ "D" ], "last": "Lewis", "suffix": "" }, { "first": "J", "middle": [], "last": "Catlett", "suffix": "" } ], "year": 1994, "venue": "Proc. of Int'l Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "148--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. D. Lewis and J. Catlett. 1994. Heterogeneous Uncertainty Sampling for Supervised Learning, In Proc. of Int'l Conference on Machine Learning (ICML), pp. 148-156.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Joint Source Channel Model for Machine Transliteration", "authors": [ { "first": "H", "middle": [], "last": "Li", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Su", "suffix": "" } ], "year": 2004, "venue": "Proc. of 42 nd ACL", "volume": "", "issue": "", "pages": "159--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Li, M. Zhang and J. Su. 2004. A Joint Source Channel Model for Machine Transliteration, In Proc. of 42 nd ACL, pp. 159-166.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Fundamentals of Statistical Natural Language Processing", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Scheutze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. D. Manning and H. Scheutze. 1999. 
Fundamentals of Statistical Natural Language Processing, The MIT Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Generate Phonetic Cognates to Handle Name Entities in English-Chinese Cross-Language Spoken Document Retrieval", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Meng", "suffix": "" }, { "first": "W.-K", "middle": [], "last": "Lo", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "T", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2001, "venue": "Proceedings of Automatic Speech Recognition Understanding (ASRU)", "volume": "", "issue": "", "pages": "311--314", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. M. Meng, W.-K. Lo, B. Chen and T. Tang. 2001. Generate Phonetic Cognates to Handle Name Entities in English-Chinese Cross-Language Spoken Document Retrieval, In Proceedings of Automatic Speech Recognition Understanding (ASRU), pp. 311- 314.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Active + Semi-supervised learning = Robust Multi-View Learning", "authors": [ { "first": "I", "middle": [], "last": "Muslea", "suffix": "" }, { "first": "S", "middle": [], "last": "Minton", "suffix": "" }, { "first": "C", "middle": [ "A" ], "last": "Knoblock", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 9 th Int'l Conference on Machine Learning", "volume": "", "issue": "", "pages": "435--442", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Muslea, S. Minton and C. A. Knoblock. 2002. Active + Semi-supervised learning = Robust Multi-View Learning, In Proc. of the 9 th Int'l Conference on Machine Learning, pp. 435-442.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Cross-language Information Retrieval based on Parallel Texts and Automatic Mining of Parallel Text from the Web", "authors": [ { "first": "J.-Y", "middle": [], "last": "Nie", "suffix": "" }, { "first": "P", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "M", "middle": [], "last": "Simard", "suffix": "" }, { "first": "R", "middle": [], "last": "Durand", "suffix": "" } ], "year": 1999, "venue": "Proc. of 22 nd ACM SIGIR", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-Y. Nie, P. Isabelle, M. Simard and R. Durand. 1999. Cross-language Information Retrieval based on Parallel Texts and Automatic Mining of Parallel Text from the Web, In Proc. of 22 nd ACM SIGIR, pp 74-81.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Analyzing the Effectiveness and Applicability of Co-training", "authors": [ { "first": "K", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "R", "middle": [], "last": "Ghani", "suffix": "" } ], "year": 2000, "venue": "Proc. of the 9 th Conference in Information and Knowledge and Management", "volume": "", "issue": "", "pages": "86--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Nigam and R. Ghani. 2000. Analyzing the Effectiveness and Applicability of Co-training, In Proc. of the 9 th Conference in Information and Knowledge and Management, pp. 
86-93.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Machine Transliteration Model based on Graphemes and Phonemes", "authors": [ { "first": "J.-H", "middle": [], "last": "Oh", "suffix": "" }, { "first": "K.-S", "middle": [], "last": "Choi", "suffix": "" }, { "first": "H", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2006, "venue": "ACM TALIP", "volume": "5", "issue": "3", "pages": "185--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-H. Oh, K.-S. Choi and H. Isahara. 2006a. A Machine Transliteration Model based on Graphemes and Phonemes, ACM TALIP, Vol. 5, No. 3, pp. 185-208.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An Ensemble of Transliteration Models for Information Retrieval", "authors": [ { "first": "J.-H", "middle": [], "last": "Oh", "suffix": "" }, { "first": "K.-S", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2006, "venue": "In Information Processing and Management", "volume": "42", "issue": "", "pages": "980--1002", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-H. Oh and K.-S. Choi. 2006b. An Ensemble of Transliteration Models for Information Retrieval, In Information Processing and Management, Vol. 42, pp. 980-1002.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Named Entity Transliteration with Comparable Corpora", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "T", "middle": [], "last": "Tao", "suffix": "" }, { "first": "C", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2006, "venue": "Proc. of 44 th ACL", "volume": "", "issue": "", "pages": "73--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Sproat, T. Tao and C. Zhai. 2006. Named Entity Transliteration with Comparable Corpora, In Proc. of 44 th ACL, pp. 73-80.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Combining Active and Semi-supervised Learning for Spoken Language Understanding", "authors": [ { "first": "G", "middle": [], "last": "T\u00fcr", "suffix": "" }, { "first": "D", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "R", "middle": [ "E" ], "last": "Schapire", "suffix": "" } ], "year": 2005, "venue": "Speech Communication", "volume": "45", "issue": "", "pages": "171--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. T\u00fcr, D. Hakkani-T\u00fcr and R. E. Schapire. 2005. Combining Active and Semi-supervised Learning for Spoken Language Understanding, Speech Communication, 45, pp. 171-186.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A Survey of Formal Grammars and Algorithms for Recognition and Transformation in Machine Translation, IFIP Congress-68, reprinted TAO: Vingtcinq Ans de Traduction Automatique -Analectes in", "authors": [ { "first": "B", "middle": [], "last": "Vauqois", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "201--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Vauqois. 1988. A Survey of Formal Grammars and Algorithms for Recognition and Transformation in Machine Translation, IFIP Congress-68, reprinted TAO: Vingtcinq Ans de Traduction Automatique - Analectes in C. 
Boitet, Ed., Association Champollin, Grenoble, pp.201-213", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Transliteration of Proper Names in Cross-Lingual Information Retrieval", "authors": [ { "first": "P", "middle": [], "last": "Virga", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 41 st ACL Workshop on Multilingual and Mixed Language Named Entity Recognition", "volume": "", "issue": "", "pages": "57--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Virga and S. Khudanpur. 2003. Transliteration of Proper Names in Cross-Lingual Information Retrieval, In Proceedings of 41 st ACL Workshop on Multilingual and Mixed Language Named Entity Recognition, pp. 57-64.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Multiple views for establishing transliteration correspondence.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Diagram of unsupervised/multi-view Cotraining for transliteration extraction.", "uris": null, "type_str": "figure" }, "FIGREF5": { "num": null, "text": "F-measure over iterations using Cotraining algorithm5.3 Co-EM (CE)", "uris": null, "type_str": "figure" }, "FIGREF6": { "num": null, "text": "Comparing F-measure over iterations between Co-training (CT) and Co-EM (CE).", "uris": null, "type_str": "figure" } } } }