{
"paper_id": "I11-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:32:39.238152Z"
},
"title": "Labeling Unlabeled Data using Cross-Language Guided Clustering",
"authors": [
{
"first": "Sachindra",
"middle": [],
"last": "Joshi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Negi",
"suffix": "",
"affiliation": {},
"email": "sumitneg@in.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The effort required to build a classifier for a task in a target language can be significantly reduced by utilizing the knowledge gained during an earlier effort of model building in a source language for a similar task. In this paper, we investigate whether unlabeled data in the target language can be labeled given the availability of labeled data for a similar domain in the source language. We view the problem of labeling unlabeled documents in the target language as that of clustering them such that the resulting partitioning has the best alignment with the classes provided in the source language. We develop a cross language guided clustering (CLGC) method to achieve this. We also propose a method to discover concept mapping between languages which is utilized by CLGC to transfer supervision across languages. Our experimental results show significant gains in the accuracy of labeling documents over the baseline methods.",
"pdf_parse": {
"paper_id": "I11-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "The effort required to build a classifier for a task in a target language can be significantly reduced by utilizing the knowledge gained during an earlier effort of model building in a source language for a similar task. In this paper, we investigate whether unlabeled data in the target language can be labeled given the availability of labeled data for a similar domain in the source language. We view the problem of labeling unlabeled documents in the target language as that of clustering them such that the resulting partitioning has the best alignment with the classes provided in the source language. We develop a cross language guided clustering (CLGC) method to achieve this. We also propose a method to discover concept mapping between languages which is utilized by CLGC to transfer supervision across languages. Our experimental results show significant gains in the accuracy of labeling documents over the baseline methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The last few years have seen a rapid growth in the development of machine learning applications for non-English languages. This growth can be attributed to several factors such as increased Internet penetration (especially in non-English speaking countries) and wide adoption of Unicode standards that allow people to generate content in their own language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A key guiding principal in the development of such applications for a new language (referred to as the target or resource-poor language) has been to leverage the existing models and linguistic re-sources available for a popular language such as English (also called source or resource-rich language). Existing literature examines two ways of utilizing this knowledge. The first way is to adapt an existing statistical model for a new target language. Examples of this is the problem of cross-lingual sentiment classification (Xiaojun Wan 2009) , or in a more general setting for cross language domain adaptation for classification (Peter Prettenhofer and Benno Stein 2010) . The second way is to develop linguistic resources for a target or resource-poor language by leveraging the resources available in a source or resource-rich language. An example of this is the work done for automatically transferring syntactic relations (in WordNet) from a source language (English) into a target language (Romanian) (Verginica Barbu Mititelu and Radu Ion 2005) .",
"cite_spans": [
{
"start": 525,
"end": 543,
"text": "(Xiaojun Wan 2009)",
"ref_id": "BIBREF0"
},
{
"start": 638,
"end": 672,
"text": "Prettenhofer and Benno Stein 2010)",
"ref_id": "BIBREF1"
},
{
"start": 1008,
"end": 1052,
"text": "(Verginica Barbu Mititelu and Radu Ion 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate another way of utilizing the knowledge gained in one language for building machine learning applications in an another language. Our work focuses on generating training data (in contrast to adapting models and language resources) in the target language, given in-domain training data for the source language. The labeled data in the source language could be used to guide the grouping of unlabeled data in the target language, where each group aligns to a class label from the source language. We assume that the domain for both the source and target language data is similar and therefore the set of class labels across the two languages will be shared (but may not be exactly the same). As an example consider a real world scenario from a call routing application. A call routing application maps natural language utterances (typically a caller's response to an open ended question such as \"how may I help you\") to one of a given set of classes also called call types. terances (in English) along with associated class labels from the banking domain. These labeled utterances could be used as training data for building a call-routing classifier for the two class labels namely \"Balance-Enquiry\" and \"Credit-Card-Enquiry\". Let us assume that we now have utterances in a new language (in Hindi) which are unlabeled. Given that these utterances belong to the same domain, they can be labeled using the same label set as the one used for the source language. This is shown in the Figure 1 where utterances h.1 and h.2 are grouped together and labeled as \"Balance-Enquiry\" and utterance h.3 and h.4 is labeled as \"Credit-Card-Enquiry\". The labeled data can then be used to train a classifier in the target language.",
"cite_spans": [],
"ref_spans": [
{
"start": 1509,
"end": 1517,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To label the target language documents automatically we propose a method called crosslanguage guided clustering (CLGC). This method is built upon a recently proposed approach called cross guided clustering (CGC). CGC guides clustering of documents in a target domain given clusters/classes in a source domain (Bhattacharya et al. 2009) . This is achieved by discovering a partitioning in the target domain that is most \"similar\" or \"aligned\" to a given partitioning in the source domain. In CLGC we view the problem of labeling unlabeled documents in the target language as that of clustering them such that the resulting partitioning has the best alignment with the classes provided in the source language. Since in our case the source and target data are in different languages, we extend the CGC framework to transfer supervi-sion across different languages. We develop cross language similarity measures that use word level and concept level mappings to guide the clustering across languages. We also develop methods to discover concept level mapping between languages. Our experimental results show significant gains in the accuracy of labeling documents over the baseline methods.",
"cite_spans": [
{
"start": 309,
"end": 335,
"text": "(Bhattacharya et al. 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One could argue that if the final goal is to classify documents in the target language, this could be achieved by either of the following approaches -(1) by adapting the source language classifier (Peter Prettenhofer and Benno Stein 2010) or (2) by translating unlabeled documents from the target language to the source language and then applying a source language classifier (Mckeown et al. 2003) . We claim that our approach is more general and has several advantages over both these approaches. First, building a classifier given a training dataset is a well studied and understood problem. Several off-the-shelf machine learning tools exist that can readily be used for tasks such as feature construction, and building classifiers, provided a training dataset is available (Hall et al. 2009) . Our approach can be used to generate a training dataset for the target language which enables use of existing approaches not only for building classifiers, but also for feature engineering tasks such as feature construction and feature selection. This cannot be done using either of the above mentioned approaches.",
"cite_spans": [
{
"start": 204,
"end": 238,
"text": "Prettenhofer and Benno Stein 2010)",
"ref_id": "BIBREF1"
},
{
"start": 376,
"end": 397,
"text": "(Mckeown et al. 2003)",
"ref_id": null
},
{
"start": 777,
"end": 795,
"text": "(Hall et al. 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, a key assumption made in both these approaches is that the class labels across languages are completely shared. This may not be true in several cases as there could be categories that are specific to the target language dataset. As an example, while most of the Hindi utterances in the Figure 1 can be grouped and aligned with a class label in the source language, there exist utterances (h.5,h.6) which do not belong to any of the existing labels in the source language. Our method allows such groupings to be discovered which can then be used to build target language specific class labels. Moreover, it is worth mentioning that apart from these advantages our proposed method is more efficient than machine translation based methods as it does not require a complete machine translation system.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 302,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The specific contributions made by us in this paper are two fold. First, we introduce the problem of labeling documents in one language using the set of labeled documents in another language and show that it is not only feasible but also better than other competitor techniques. Second, we extend the CGC framework to transfer supervision across languages. For this we develop methods to discover concept level mapping between languages that is utilized to guide the clustering across languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. In Section 2 we present related work. We formulate the problem in Section 3. We describe the cross-language guided clustering framework in Section 4. In Section 5, we describe the cross language similarity measure that is used in the CLGC framework. We provide the experimental results in Section 6 and conclude in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The two research areas that are related to our work are, (1) cross lingual classification and clustering, and (2) semi-supervised clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "2"
},
{
"text": "Cross Lingual Classification and Clustering : Traditional approaches to cross language text classification use linguistic resources such as bilingual dictionaries or parallel corpora to induce correspondences between two languages (Olsson 2005) . Some of these methods employ latent semantic analysis (LSA) (Dumais et.al. 1997) or kernel canonical correlation analysis, CCA (Fortuna and Shawe-Taylor 2005). The major limitations of these approaches are their computational complexity and dependence on a parallel corpus. Cross-lingual clustering aims to cluster a heterogeneous (a collection of documents from different languages) document collection. Initial work done in cross-lingual document clustering employed an expensive machine translation (MT) system to fill the gap between two languages (Mckeown et al. 2003) . Later work (Wu 2007) done in this area demonstrated that it was possible to achieve comparable performance to the direct MT method using simple linguistic resource such as bilingual dictionaries.",
"cite_spans": [
{
"start": 231,
"end": 244,
"text": "(Olsson 2005)",
"ref_id": "BIBREF12"
},
{
"start": 307,
"end": 327,
"text": "(Dumais et.al. 1997)",
"ref_id": null
},
{
"start": 799,
"end": 820,
"text": "(Mckeown et al. 2003)",
"ref_id": null
},
{
"start": 834,
"end": 843,
"text": "(Wu 2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "2"
},
{
"text": "Semi-supervised clustering: Semi-supervised clustering aims to improve clustering performance by limited supervision in the form of a small set of labeled instances. Alternatively, a small set of labeled instances can be used to learn a parameterized distance function (M. Bilenko and R. J. Mooney 2003) , (Klein et al. 2002) . The coclustering approach (Dhillon et al. 2003) , (N. Slonim and N. Tishby 2000) clusters related dimensions simultaneously through explicitly provided relations between them, such as words and documents, or people and reviews.",
"cite_spans": [
{
"start": 273,
"end": 303,
"text": "Bilenko and R. J. Mooney 2003)",
"ref_id": "BIBREF6"
},
{
"start": 306,
"end": 325,
"text": "(Klein et al. 2002)",
"ref_id": "BIBREF7"
},
{
"start": 354,
"end": 375,
"text": "(Dhillon et al. 2003)",
"ref_id": "BIBREF8"
},
{
"start": 382,
"end": 408,
"text": "Slonim and N. Tishby 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "2"
},
{
"text": "The problem that we address in this paper differs significantly from the above mentioned work. Unlike others, our objective is to cluster target language documents such that the resulting clusters are most 'similar' or best 'aligned' to the given source language classes. This problem is an instance of semi-supervised clustering in a bilingual setting, which to the best of our best knowledge has received very little attention. Our work builds upon Cross Guided Clustering (CGC) work (Bhattacharya et al. 2009) where supervision is discovered in the form of cluster level similarities obtained from labeled instances from a different domain, having different but related labels. In our work we extend the CGC framework to transfer supervision across different languages.",
"cite_spans": [
{
"start": 486,
"end": 512,
"text": "(Bhattacharya et al. 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "2"
},
{
"text": "Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "T S = {< d S 1 , l S 1 >, < d S 2 , l S 2 >, . . . , < d S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "n , l S n >} denote a training dataset in the source language S for a classification task \u03b3. Here",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "d S i \u2208 D S denotes a document that has an associated class label l S i \u2208 L S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "where, L S denotes the set of class labels used in T S . Note, that L S induces a partitioning of D S , where each class label l S i can be seen as a cluster containing documents d S i that have l S i as the class label. We are also given a set of unlabeled documents",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "D T = {d T 1 , d T 2 , . . . , d T m }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "where all the documents are from a similar domain as in T S but are from a different language T . Our objective is to generate a training dataset using D T for the classification task \u03b3. We pose this as a clustering problem over document set D T , where the resulting clusters are aligned with the given classes in the source language dataset. The alignment is achieved by taking the supervision from the partitioning of D S , which is induced by the label set L S , to guide the clustering of document set D T . We refer to this clustering method as crosslanguage guided clustering. In the next section, we describe cross-language guided clustering in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "In this section, we modify the cross guided clustering framework as described in (Bhattacharya et al. 2009) to transfer supervision across languages.",
"cite_spans": [
{
"start": 81,
"end": 107,
"text": "(Bhattacharya et al. 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "Let Dis(d T i , d T j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "provide a distance measure between documents d T i and d T j in the target language T . A clustering method partitions the given document set into k clusters denoted by centroids",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "C T = {C T 1 , C T 2 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": ". . , C T k } such that the total divergence Div(C T ) also referred to as target only divergence is minimized. This is defined as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "Div T (C T ) = C T i d T j \u03b4(C T i , d T j )Dis(C T i , d T j ) 2 (1) Here \u03b4(C T i , d T j ) returns 1 if d T j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "is assigned to the centroid C T i else returns 0. This is a standard formulation used in the K-Means algorithm (Hall et al. 2009) .",
"cite_spans": [
{
"start": 111,
"end": 129,
"text": "(Hall et al. 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "In our problem setting, we are additionally provided with a labeled dataset in the source language where the label set induces a partitioning",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "C S = {C S 1 , C S 2 , . . . , C S l } of D S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "in the source language. Our objective is to discover partitioning of D T such that each resulting cluster is aligned with at most one class label from the source language and vice-versa. This enables discovery of clusters in the target language that are aligned with the classes in the source language while simultaneously allowing for discovery of any additional concept in the target language. To do this, we require a cross-language similarity function Sim X (..) that given two documents from different languages, returns a similarity score. This is non-trivial as documents in different languages are represented in entirely separate attribute/feature space. We develop a cross-language similarity measure to achieve this in Section 5. For now, we assume that we have access to such a measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "To find a cross-language alignment between the source partition and the target partition we construct a bipartite cross language graph G x that has one set of vertices C S corresponding to source centroids, and another set C T corresponding to target centroids. An edge is added between every pair of vertices (C S i , C T j ) where the weight of the edge is given by Sim X (C S i , C T j ). Now finding the best cross language alignment is equivalent to finding the maximum weighted bipartite match in the graph G x . Recall that a matching is a subset of the edges such that any vertex is spanned by at most one edge. The score of a matching is the sum of the weights of all the edges in it. In our implementation, we use the 'Hungarian method' to determine the matching (Kuhn 1955) .",
"cite_spans": [
{
"start": 773,
"end": 784,
"text": "(Kuhn 1955)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
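As a rough sketch of this matching step (not the paper's code), the Hungarian method is available in scipy as linear_sum_assignment, which minimizes cost, so the similarity weights are negated; the matrix sim and the threshold value below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_centroids(sim, threshold=0.0):
    """Maximum-weight bipartite matching between source classes (rows) and
    target clusters (columns); edges with weight <= threshold are dropped."""
    rows, cols = linear_sum_assignment(-sim)   # negate: the solver minimizes cost
    matching = {}
    for i, j in zip(rows, cols):
        if sim[i, j] > threshold:
            matching[i] = j                    # source class i <-> target cluster j
    return matching

# Toy example: 3 source classes, 4 target clusters.
sim = np.array([[0.8, 0.1, 0.2, 0.0],
                [0.2, 0.7, 0.1, 0.1],
                [0.1, 0.2, 0.6, 0.2]])
print(match_centroids(sim, threshold=0.3))     # {0: 0, 1: 1, 2: 2}
```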
{
"text": "The matching provides an alignment between the source classes and the target clusters. We only consider those edges in the matching whose weight is more than some predefined threshold. To measure the goodness of cross-language alignment we define a cross-language divergence measure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "Div X (C S , C T ) = C S i C T j \u03b4 X (C S i , C T j )(1\u2212Sim X (C S i , C T j )) 2 |C T j | (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "Here, \u03b4 X (C S i , C T j ) returns the weight of the edge between node C S i and node C T j if these nodes are matched, else it returns 0. Here |C T j | denotes the size of the cluster for which C T j is the centroid. The weighing by |C T j | is done to make Div X (C S , C T ) comparable to Div(C T ). Now the combined divergence between the source partition and the target partition is computed by taking a weighted sum of target-only divergence and cross-language divergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "Div(C S , C T ) = \u03b1 * Div T (C T ) + (1 \u2212 \u03b1) * Div X (C S , C T ) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "Here \u03b1 captures the relative importance of the two divergences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
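The divergence computations above are simple enough to sketch in code. The following is a minimal illustration of Equations 1-3 and is not the authors' implementation; the names target_docs, assignments, sim_x, matching and cluster_sizes are hypothetical, and matching is assumed to map a target cluster index to its matched source class index.

```python
import numpy as np

def target_only_divergence(target_docs, centroids, assignments):
    """Equation 1: squared distance of every document to its assigned centroid."""
    div = 0.0
    for j, doc in enumerate(target_docs):
        c = centroids[assignments[j]]
        div += float(np.sum((doc - c) ** 2))
    return div

def cross_language_divergence(sim_x, matching, cluster_sizes):
    """Equation 2: for every matched (source class, target cluster) pair, add
    delta_X * (1 - Sim_X)^2 * |C_T|, where delta_X is the matched edge weight."""
    div = 0.0
    for t_idx, s_idx in matching.items():        # target cluster -> source class
        w = sim_x[s_idx, t_idx]                  # edge weight (Sim_X of the pair)
        div += w * (1.0 - w) ** 2 * cluster_sizes[t_idx]
    return div

def combined_divergence(alpha, div_t, div_x):
    """Equation 3: weighted sum of the two divergences."""
    return alpha * div_t + (1.0 - alpha) * div_x
```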
{
"text": "We now provide an algorithm (see Figure 2 ) that minimizes the objective function given in Equation 3. The algorithm starts by selecting k random data points as centroids from the target language and then executes the following two steps in each iteration. It first assigns points to their nearest centroids and then re-estimates the target centroids to minimize cross-language divergence as given in Equation 3. This is achieved by the ",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 41,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T i = \u03b1 d T i \u2208C T i d T i + (1 \u2212 \u03b1) j \u03b4 x (C T i , C S j )\u03c6(C S j ) \u03b1|C T i | + (1 \u2212 \u03b1)|C T i | j \u03b4 x (C T i , C S j )\u03c6(C S j )",
"eq_num": "(4)"
}
],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
{
"text": "Here the \u03b4 X function captures the current matching of target clusters with source classes. Intuitively, there are two factors contributing to the update rule. The first factor tries to move the current target centroid towards the center of the cluster computed using the currently assigned data points. This is similar to the standard K-means approach. The second factor that arises due to crosslanguage alignment tries to move the centroid towards the currently matched source class. Since the feature space used to represent source classes and target centroids are different, we use the function \u03c6 that projects source classes in the feature space used by the target language. We provide more details regarding the projection function and cross-language similarity in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Language Guided Clustering",
"sec_num": "4"
},
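Because Equation 4 is difficult to read in this extraction, the following sketch implements one plausible reading of the update: the new centroid is a weighted combination of the mean of the currently assigned points and the projection phi of the matched source class, with the cross-language term scaled by the matched edge weight delta_x. All names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def update_centroid(assigned_docs, phi_source, edge_weight, alpha):
    """One plausible reading of Equation 4 for a single target centroid.

    assigned_docs : (n_i, d) array of documents currently assigned to the cluster
    phi_source    : (d,) projection of the matched source class into the target
                    feature space, or None if the cluster is unmatched
    edge_weight   : delta_x, the weight of the matched edge (0 if unmatched)
    alpha         : trade-off between target-only and cross-language divergence
    """
    n_i = assigned_docs.shape[0]
    doc_sum = assigned_docs.sum(axis=0)
    if phi_source is None or edge_weight == 0.0:
        return doc_sum / n_i                     # falls back to plain K-means
    numer = alpha * doc_sum + (1.0 - alpha) * n_i * edge_weight * phi_source
    denom = alpha * n_i + (1.0 - alpha) * n_i * edge_weight
    return numer / denom
```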
{
"text": "In order to perform cross language guided clustering we need a similarity function Sim X that given two documents d S i and d T j from source and target languages, computes a similarity score. Let V S and V T be the vocabularies used to represent documents in source and target language respectively. Given a word w S i \u2208 V S , let the function proj(w S i ) return a probability distribution P = {p 1 , p 2 , . . . , p |V T | } where p j represents the probability of the word w S i being translated to the word w T j in target dictionary. The function proj(..) has access to a statistical dictionary D S T for doing this. The dictionary could be constructed using some large general purpose parallel corpus. We now present three different methods to compute the similarity function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": "Sim X (d S i , d T j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": ". Projection based Method: Let M represent a matrix of dimension |V S | * |V T | where each i th row contains the probability distribution returned by proj(w S i ) for 1 \u2264 i \u2264 |V S |. Given a source document d S i , letd S i refer to its vector representation using the feature space V S . Then the projection function \u03c6(d S i ) = (d S i ) M and the similarity function Sim X can be defined as follows, where denotes transpose of a matrix:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim X (d S i , d T j ) = \u03c6(d S i )d T j = (d S i ) Md T j",
"eq_num": "(5)"
}
],
"section": "Cross Language Similarity",
"sec_num": "5"
},
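A small sketch of the projection-based measure, under the assumption that documents are bag-of-words vectors over their own vocabularies and M is the |V^S| x |V^T| translation matrix described above; the toy numbers are invented.

```python
import numpy as np

def sim_projection(d_src, d_tgt, M):
    """Equation 5: project the source vector through M, then take the
    inner product with the target vector."""
    return float(d_src @ M @ d_tgt)

# Toy example: 3-word source vocabulary, 2-word target vocabulary.
M = np.array([[0.9, 0.1],      # row i = translation distribution of source word i
              [0.2, 0.8],
              [0.5, 0.5]])
d_src = np.array([1.0, 0.0, 2.0])   # source document as term counts
d_tgt = np.array([0.0, 1.0])        # target document as term counts
print(sim_projection(d_src, d_tgt, M))   # 1*0.1 + 2*0.5 = 1.1
```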
{
"text": "Weighted Projection based Method: The function proj(w S i ) returns a probability distribution that captures the likelihood that w S i gets translated to a word w T j in the target dictionary. Since, this function uses a general purpose bi-lingual statistical dictionary it does not capture domain specific translations. For example, the English word \"bank\" may have equal probabilities for being translated as \"\u00ba \" or \" \" however, given a corpus from the banking domain, it is more likely that the word \"bank\" translates to \"\u00ba \". Therefore, given a source term we weigh the probability values of the target terms that it translates to, by the frequency of the target terms computed over the target corpus. We then normalize these values again to obtain a probability distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
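A minimal sketch of this re-weighting, assuming M is the translation matrix from the previous method and target_freq holds term frequencies computed over the target corpus (both names hypothetical):

```python
import numpy as np

def domain_weight_translation_matrix(M, target_freq):
    """Scale each source word's translation distribution by the frequency of the
    target terms in the target corpus, then renormalize each row to sum to 1."""
    weighted = M * target_freq                      # broadcasts over rows
    row_sums = weighted.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0.0] = 1.0                 # leave all-zero rows unchanged
    return weighted / row_sums
```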
{
"text": "Semantic Mapping based Method: There are multiple words that are synonymous to each other and can be used to represent the same meaning. For example, the word \"games\" and \"sports\" are synonymous English words and can be used to represent the same meaning as \" \" or \" \". The matrix M used in the previous methods, captures the translation probabilities at the word level. In this method we first discover the concepts in each language and then find translation probabilities at the concept level. We refer to this as semantic mapping between the two languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": "To discover the concepts, words from the source and target vocabulary are clustered into term clusters based on the words that occur in its context. For this a word-by-word co-occurrence matrix is built for the given language. The entry (i, j) in the matrix contains the number of times the word w i and w j occur within a fixed window of L words in the corpus. Thus, each word is represented by a vector called \"context vector\" that captures words occurring in the context of the given word. We then use an off-the-shelf clustering algorithm (Hall et al. 2009) to obtain term clusters in a language. These term clusters are referred to as concepts. The Figure 3 shows examples of concepts identified in English and Hindi languages. Let",
"cite_spans": [
{
"start": 543,
"end": 561,
"text": "(Hall et al. 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 654,
"end": 662,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
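A rough sketch of this concept-discovery step, assuming a tokenized corpus; scikit-learn's KMeans stands in for the off-the-shelf clustering tool (the paper uses WEKA), and the function and variable names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_context_vectors(sentences, vocab, window=5):
    """Word-by-word co-occurrence counts within a fixed window of `window` words."""
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for sent in sentences:                          # sent is a list of tokens
        for i, w in enumerate(sent):
            if w not in idx:
                continue
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i and sent[j] in idx:
                    C[idx[w], idx[sent[j]]] += 1
    return C

def discover_concepts(context_vectors, k):
    """Cluster words by their context vectors; each cluster is a 'concept'."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(context_vectors)
    return labels                                   # labels[i] = concept id of vocab[i]
```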
{
"text": "G S = {G S 1 , G S 2 , . . . , G S l } and G T = {G T 1 , G T 2 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": ". . , G T m } be the source and target concepts obtained by clustering. To find the semantic relationship across concepts from different languages, we construct a bipartite graph that has one set of vertices G S corresponding to the source concepts, and another set G T corresponding to the target concepts. Now for each word w S \u2208 G S i , we determine the set of target words T w S that it translates to along with the corresponding translation probabilities. For each word w T \u2208 T w S , we find the concept G T j that contains w T and add a weight p on the edge between the vertex G S i and G T j , where p is the probability of w S being translated to w T . After repeating this process for all the source concepts, we normalize the edge weights such that for each G S i , the sum of weights corresponding to the edges connecting G S i and any concept in the target language equals to 1. Thus for each source concept the normalized bipartite graph contains a distribution over the target concepts. We call this normalized bipartite graph as the semantic mapping between the two languages. Note, that the normalized bipartite graph can be seen as a matrix M map where the rows and columns correspond to source and target concepts respectively and the entry (i, j) denotes the probability that the i th source concept corresponds to j th target concept. Now using the matrix M map , the similarity function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": "Sim X (d S i , d T j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": "can be defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim X (d S i , d T j ) = (c S i ) M mapc T j",
"eq_num": "(6)"
}
],
"section": "Cross Language Similarity",
"sec_num": "5"
},
{
"text": "Here,c S i andc T i denote the concept vector representation of d S i and d T j respectively. The concept vector for a document is obtained by replacing the occurrence of each word w i in the document by its concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross Language Similarity",
"sec_num": "5"
},
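To make the semantic mapping concrete, the sketch below builds M_map from word-to-concept assignments and a word-level translation table, and then applies Equation 6; src_concept_of, tgt_concept_of and translations are hypothetical data structures, not the paper's code.

```python
import numpy as np
from collections import Counter

def build_concept_mapping(src_concept_of, tgt_concept_of, translations,
                          n_src_concepts, n_tgt_concepts):
    """Accumulate word-level translation probabilities at the concept level, then
    row-normalize so each source concept has a distribution over target concepts."""
    M_map = np.zeros((n_src_concepts, n_tgt_concepts))
    for w_s, s_cid in src_concept_of.items():
        for w_t, p in translations.get(w_s, []):
            if w_t in tgt_concept_of:
                M_map[s_cid, tgt_concept_of[w_t]] += p
    rows = M_map.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0
    return M_map / rows

def concept_vector(tokens, concept_of, n_concepts):
    """Replace every word by its concept id and count concept occurrences."""
    counts = Counter(concept_of[w] for w in tokens if w in concept_of)
    v = np.zeros(n_concepts)
    for cid, c in counts.items():
        v[cid] = c
    return v

def sim_semantic(src_tokens, tgt_tokens, M_map, src_concept_of, tgt_concept_of):
    """Equation 6: similarity of the two concept vectors through M_map."""
    c_s = concept_vector(src_tokens, src_concept_of, M_map.shape[0])
    c_t = concept_vector(tgt_tokens, tgt_concept_of, M_map.shape[1])
    return float(c_s @ M_map @ c_t)
```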
{
"text": "There are three key questions for which we seek an answer through our experimental evaluation. First, whether the availability of labeled data in a source language is helpful for labeling unlabeled documents in the target language. Second, Figure 3 : Vocabulary after Semantic Projection whether discovery of concepts and concept mapping between languages improves the CLGC performance. Third, given that the target language contains exactly the same classes as the source language (which is not an assumption for CLGC), whether labeling documents using CLGC gives comparable performance to computationally more expensive method that uses a machine translation system. We next describe the dataset, baselines and evaluation metrics that we use to answer these questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 248,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "6"
},
{
"text": "To evaluate the performance of our method, we constructed a dataset of news articles by crawling an English and a Hindi news site. The crawled news articles are from a four month period and belong to the following five categories, viz, (1) Economy and Finance -these are news reports on macro-economic events (such as cuts in interest rates, stock market and increase in taxes), (2) Healthcare and BioTech -these are business reports from the Healthcare and Biotechnology industry (mergers and acquisition, patents lawsuits , expansion etc), (3) Energy -these are news reports from the energy and utility sector, (4) sports and (5) Auto. The number of documents for each language and category are shown in Table 1. As mentioned earlier, the CLGC method does not assume that the same set of categories are present in both the languages, to verify this claim we have an additional category, viz, \"Auto\" in our Hindi dataset which is absent in the English dataset. Even though both English and Hindi news articles are from the same time frame these articles are not aligned. In our experiments, we use an English-Hindi statistical dictionary which was built using the Moses toolkit (Koehn 2007) . The training data for the dictionary was a collection of 150,000 English and Hindi parallel sentences sourced from a general corpus. The dictionary built using this corpus is referred to as a \"general dictionary\" (GD). We further collected 10,000 parallel sentences on the topics present in our news dataset. These were then used along with the earlier set of parallel sentence to learn a dictionary that contains domain specific words and their translations. We refer to this dictionary as a \"domain dictionary\" (DD). The statistics for these dictionaries in terms of word coverage is shown in Table 2 . The objective of creating these two dictionaries is to observe the performance of CLGC when a general purpose dictionary is used in contrast to a domain specific dictionary.",
"cite_spans": [
{
"start": 1179,
"end": 1191,
"text": "(Koehn 2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 1789,
"end": 1796,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "Baselines: One of the objective of experimental evaluation is to see if the availability of source classes helps in clustering documents in the target language. In order to measure gains achieved by the availability of source class information, we compare the performance of CLGC against the standard k-means algorithm. We refer to this as k-means baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "Another objective of the experimental evaluation is to see whether labeling documents using CLGC gives comparable performance to computationally more expensive method that uses a machine translation system. For this we train a classifier using the English news articles referred to as source classifier. We then translate Hindi new articles into English using Google's machine translation system and then label them using the source classifier. We refer to this as NB baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "Evaluation Metric The objective of the CLGC approach is to label the unlabeled target dataset. We use the following approach for evaluating this. As the true class-labels for the target news articles are known we assign to each cluster the class-label which is the most frequent in the cluster. All articles in the cluster are now labeled with the corresponding cluster-label. Based on this labeling strategy and the available ground truth we report the accuracy/purity measure which is computed by dividing the correctly labelled documents by the total number of documents. We also evaluate clustering quality by considering the correctness of clustering decisions over all document pairs. We report the standard F1 measure over the pairwise clustering decisions. The F1 measure is the harmonic mean of precision and recall over pairwise decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
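For reference, a minimal sketch of the two evaluation measures described above (majority-label purity/accuracy and pairwise F1), assuming integer cluster assignments and gold labels; this is an illustration, not the evaluation script used in the paper.

```python
from collections import Counter
from itertools import combinations

def purity(clusters, labels):
    """Label each cluster with its most frequent gold label; return accuracy."""
    correct = 0
    for c in set(clusters):
        members = [labels[i] for i, ci in enumerate(clusters) if ci == c]
        correct += Counter(members).most_common(1)[0][1]
    return correct / len(labels)

def pairwise_f1(clusters, labels):
    """F1 over all document pairs; a pair is positive if it shares a gold label."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels)), 2):
        same_cluster = clusters[i] == clusters[j]
        same_label = labels[i] == labels[j]
        if same_cluster and same_label:
            tp += 1
        elif same_cluster:
            fp += 1
        elif same_label:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```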
{
"text": "Experiment 1: In our first experiment, We compare the performance of k-means with the projection based method, referred to as PB, weighted projection based method referred to as WPB and semantic mapping based method, referred to as SM. For this experiment we use the English dataset as the source dataset and Hindi dataset as the target dataset with 4 and 5 categories respectively. For the semantic mapping based method, we discover concepts using the word clustering. The word clustering algorithm uses k-means algorithm. We set k to a large value (we set it to 1000) and use only the first 100 best clusters where goodness of a cluster is measured in terms of its divergence. For each word that is not covered by the first best 100 clusters, we create singleton clusters for the word. We use this procedure for both the source and target dataset. We then use the method described in Section 5 to discover concept mappings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "Since the results obtained for both the k means and all the variations of CLGC depends on the choice of initial centroids, in each experimental run all the methods are seeded with the same set of centroids. The reported results are averaged over 10 runs with random initialization. We set the value of k equal to the actual number of categories in each dataset for both k-means as well as for CLGC. The value of \u03b1 in Equation 4 is set to 0.5 and value of n and m in the procedure given in the Figure 2 is kept 20 and 5 respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 493,
"end": 501,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "The results are reported in Table 3 . The results show that there is a significant gain that is achieved by CLGC methods over K-means. This shows that the presence of labeled data in the source language helps in the clustering of documents in the target language. We further note that the SM methods, both using \"general dictionary\" (GD) and \"domain dictionary\" (DD) outperforms all other methods in their class. This happens because words that do not get translated using the statistical dictionary, are taken into account as they become part of concept mappings that have correspondence across languages. Thus, these terms get accounted in the computation of the SM similarity measure. These terms were not being considered in the PB and WPB similarity computations. As an example the statistical dictionary did not have the translation for the word \"bharti\" , which is the name of a company from the telecommunication and retail sector. However the word \"bharti\" mapped to a concept from the source language which contained words such as \"communication\", \"retail\" and \"ipo\". This cluster mapped to a concept in Hindi which had words such as \" \u00ca \", \" \" and \" \" where the first two words are translations for the words \"communication\" and \"retail\" respectively. As a result of this correspondence between the two concepts the words \"bharti\" and \"",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "\" get associated. Another key point to note is that the performance of Semantic Mapping using General Dictionary is only slightly worse than Semantic Mapping using the Domain Dictionary. This shows that the semantic mapping based method is able to achieve good performance even when it does not have access to a domain specific dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "Experiment 2: In our second experiment, we compare the performance of SM method which is the best performing CLGC method with the NB baseline. We use the rainbow package (McCallum 1996) to train a na\u00efve Bayes classifier using the English dataset. For translating Hindi documents to English, we use Google 1 translation engine. The accuracy results for this experiment are provided in Table 4. 1 http://code.google.com/p/google-api-translate-java Method Accuracy NB 0.71 SM 0.73 Table 4 : Comparison of na\u00efve Bayes with CLGC (SM using General Dictionary)",
"cite_spans": [
{
"start": 170,
"end": 185,
"text": "(McCallum 1996)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 384,
"end": 392,
"text": "Table 4.",
"ref_id": null
},
{
"start": 478,
"end": 485,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "We note that the performance of SM is slightly higher than the na\u00efve Bayes approach. We investigated the reasons behind this and found that there are a few important features that are specific to the Hindi dataset. As the na\u00efve Bayes classifier is trained using the English dataset only, it does not have access to these features and therefore incorrectly classifies the documents that contain such features. While classification techniques such as those based on Support Vector Machines can be expected to perform better than simple NB, our aim here is only to demonstrate that in a resource poor language, where building such classifiers may not be possible (due to the lack of a good machine translation system etc), CLGC can prove to be a useful method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Resources:",
"sec_num": null
},
{
"text": "In this paper, we presented cross language guided clustering (CLGC) that utilizes the labeled data from a source language to label unlabeled data from a target language. CLGC tries to cluster unlabeled target language documents such that the resulting clusters are most 'similar' or best 'aligned' to the given source language classes. To achieve this alignment we defined a crosslanguage similarity measures that returns a similarity score between two documents in different languages. We presented and compared three cross-language similarity measure namely Projection Based, Weighted Projection Based and Semantic Mapping and demonstrate their effectiveness on real-world data-sets. Our Semantic Mapping method, which discovers concepts and their associated mapping across languages, shows the maximum gain in the accuracy of labeling documents over the baseline methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Co-Training for Cross-Lingual Sentiment Classification",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "235--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan. 2009. Co-Training for Cross- Lingual Sentiment Classification, Proceed- ings of the 47th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 235-243.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Cross-Language Text Classification using Structural Correspondence Learning",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1118--1127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Prettenhofer and Benno Stein. 2010. Cross- Language Text Classification using Struc- tural Correspondence Learning. Proceedings of the 48th Annual Meeting of the Associ- ation for Computational Linguistics, pages 1118-1127.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Import of Verbal Syntactic Relations Using Parallel Corpora. Cross-Language Knowledge Induction Workshop",
"authors": [
{
"first": "Barbu",
"middle": [],
"last": "Verginica",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Mititelu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ion",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Verginica Barbu Mititelu and Radu Ion. 2005. Au- tomatic Import of Verbal Syntactic Relations Using Parallel Corpora. Cross-Language Knowledge Induction Workshop.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cross-Guided Clustering: Transfer of Relevant Supervision across Domains for Improved Clustering",
"authors": [
{
"first": "Indrajit",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Shantanu",
"middle": [],
"last": "Godbole",
"suffix": ""
},
{
"first": "Sachindra",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Verma",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Internationl Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "41--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Indrajit Bhattacharya and Shantanu Godbole and Sachindra Joshi and Ashish Verma. 2009. Cross-Guided Clustering: Transfer of Rel- evant Supervision across Domains for Im- proved Clustering. Proceedings of the In- ternationl Conference on Data Mining, pages 41-50.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The WEKA Data Mining Software: An Update. SIGKDD Explorations",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall and Eibe Frank and Geoffrey Holmes and Bernhard Pfahringer and Peter Reute- mann and Ian H. Witten. 2009. The WEKA Data Mining Software: An Update. SIGKDD Explorations, Volume 11, Issue 1.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Columbias newsblaster: New features and future directions",
"authors": [],
"year": 2003,
"venue": "Proceedings of NAACL-HLT03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen Mckeown and Regina Barzilay and John Chen and David Elson and David Evans and Judith Klavans and Ani Nenkova and Barry Schiffman and Sergey Sigelman. 2003. Columbias newsblaster: New features and future directions. In Proceedings of NAACL- HLT03.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adaptive duplicate detection using learnable string similarity measures",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bilenko",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2003,
"venue": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bilenko and R. J. Mooney. 2003 Adaptive du- plicate detection using learnable string sim- ilarity measures In ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, 2003.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "From instance level constraints to space-level constraints: Making the most of prior knowledge in data clustering",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "S",
"middle": [
"D"
],
"last": "Kamvar",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and S. D. Kamvar and C. Manning. 2002 From instance level constraints to space-level constraints: Making the most of prior knowl- edge in data clustering In International Con- ference on Machine Learning, 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Information theoretic co-clustering On ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mallela",
"suffix": ""
},
{
"first": "D",
"middle": [
"S"
],
"last": "Modha",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dhillon and S. Mallela and D. S. Modha. 2003 Information theoretic co-clustering On ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2003.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Document clustering using word clusters via the information bottleneck method",
"authors": [
{
"first": "N",
"middle": [],
"last": "Slonim",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tishby",
"suffix": ""
}
],
"year": 2000,
"venue": "The Annual International ACM SIGIR Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Slonim and N. Tishby. 2000 Document clus- tering using word clusters via the informa- tion bottleneck method In The Annual Inter- national ACM SIGIR Conference, 2000.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Collective entity resolution in relational data",
"authors": [
{
"first": "I",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Transactions on Knowledge Discovery from Data",
"volume": "1",
"issue": "1",
"pages": "1--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Bhattacharya and L. Getoor. 2007 Collective en- tity resolution in relational data ACM Trans- actions on Knowledge Discovery from Data, vol. 1, no. 1, pp. 1-36, March 2007.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The hungarian method for the assignment problem Naval Research Logistics Quarterly",
"authors": [
{
"first": "H",
"middle": [
"W"
],
"last": "Kuhn",
"suffix": ""
}
],
"year": 1955,
"venue": "",
"volume": "2",
"issue": "",
"pages": "83--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. W. Kuhn. 1955. The hungarian method for the assignment problem Naval Research Logis- tics Quarterly, vol. 2, pp. 83-97, 1955.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Cross language text classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Olsson",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of SIGIR-05",
"volume": "",
"issue": "",
"pages": "645--646",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Scott Olsson and Douglas W. Oard and Jan Ha- jic. 2005. Cross language text classification In Proceedings of SIGIR-05, pages 645-646.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum 1996. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/ mccallum/bow.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic cross-language retrieval using latent semantic indexing",
"authors": [
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "Todd",
"middle": [
"A"
],
"last": "Letsche",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Littman",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
}
],
"year": 1997,
"venue": "AAAI Symposium on Cross-Language Text and Speech Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan T. Dumais, Todd A. Letsche, Michael L. Littman, and Thomas K. Landauer. 1997. Au- tomatic cross-language retrieval using latent semantic indexing In AAAI Symposium on Cross-Language Text and Speech Retrieval.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The use of machine translation tools for crosslingual text mining",
"authors": [
{
"first": "Blaz",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ICML Workshop on Learning with Multiple Views",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blaz Fortuna and John Shawe-Taylor. 2005. The use of machine translation tools for cross- lingual text mining. In Proceedings of the ICML Workshop on Learning with Multiple Views.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Cross-Lingual Document Clustering",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2007,
"venue": "Lecture Notes in Computer Science",
"volume": "4426",
"issue": "",
"pages": "956--963",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Wu and Bao-Liang Lu. 2007. Cross-Lingual Document Clustering . In Lecture Notes in Computer Science, Volume 4426/2007, 956- 963,",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [
"Birch"
],
"last": "Mayne",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2007,
"venue": "Annual Meeting of the Association for Computation Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch Mayne, Christopher Callison-Burch, Mar- cello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, Evan Herbst 2007. Open source toolkit for statistical machine translation. Annual Meeting of the Asso- ciation for Computation Linguistics (ACL), Demonstration Session",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 1shows examples of a few ut-. Utterances and class labels in source and target languages",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Procedure for Cross Language Guided Clustering following update rule that is obtained by differentiating the divergence function in Equation 3 with respect to the current target centroids.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"html": null,
"text": "Procedure CrossLanguageGuidedClustering Select k centroids randomly from D T % Initialize target clusters Iterate n times or until convergence Iterate m times Assign each d T i \u2208 D T to the nearest centroid Recompute the centroids % Start CLGC Create cross language similarity graph Gx using Sim X Compute maximum bipartite graph matching over Gx Iterate over k target centroids in C T Update centroid using the cross language update rule Assign each d T i \u2208 D T to the nearest centroid Return k centroids",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"html": null,
"text": "News Dataset used for Experimentation",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Number of Unique Words General Dictionary coverage Domain Dictionary coverage</td><td>English 18128 11061 (61%) 14969 (82.5%)</td><td>Hindi 14521 9344 (64%) 11767 (81%)</td></tr></table>"
},
"TABREF3": {
"html": null,
"text": "Dictionary Statistics",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF5": {
"html": null,
"text": "Comparison of k means with CLGC using different cross lingual similarity measures",
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}