{
"paper_id": "D09-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:39:52.292036Z"
},
"title": "Projecting Parameters for Multilingual Word Sense Disambiguation",
"authors": [
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology",
"location": {
"addrLine": "Bombay Powai, Mumbai -400076",
"region": "Maharashtra",
"country": "India"
}
},
"email": "miteshk@cse.iitb.ac.in"
},
{
"first": "Sapan",
"middle": [],
"last": "Shah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology",
"location": {
"addrLine": "Bombay Powai, Mumbai -400076",
"region": "Maharashtra",
"country": "India"
}
},
"email": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Kedia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology",
"location": {
"addrLine": "Bombay Powai, Mumbai -400076",
"region": "Maharashtra",
"country": "India"
}
},
"email": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology",
"location": {
"addrLine": "Bombay Powai, Mumbai -400076",
"region": "Maharashtra",
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We report in this paper a way of doing Word Sense Disambiguation (WSD) that has its origin in multilingual MT and that is cognizant of the fact that parallel corpora, wordnets and sense annotated corpora are scarce resources. With respect to these resources, languages show different levels of readiness; however a more resource fortunate language can help a less resource fortunate language. Our WSD method can be applied to a language even when no sense tagged corpora for that language is available. This is achieved by projecting wordnet and corpus parameters from another language to the language in question. The approach is centered around a novel synset based multilingual dictionary and the empirical observation that within a domain the distribution of senses remains more or less invariant across languages. The effectiveness of our approach is verified by doing parameter projection and then running two different WSD algorithms. The accuracy values of approximately 75% (F1-score) for three languages in two different domains establish the fact that within a domain it is possible to circumvent the problem of scarcity of resources by projecting parameters like sense distributions, corpus-co-occurrences, conceptual distance, etc. from one language to another.",
"pdf_parse": {
"paper_id": "D09-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "We report in this paper a way of doing Word Sense Disambiguation (WSD) that has its origin in multilingual MT and that is cognizant of the fact that parallel corpora, wordnets and sense annotated corpora are scarce resources. With respect to these resources, languages show different levels of readiness; however a more resource fortunate language can help a less resource fortunate language. Our WSD method can be applied to a language even when no sense tagged corpora for that language is available. This is achieved by projecting wordnet and corpus parameters from another language to the language in question. The approach is centered around a novel synset based multilingual dictionary and the empirical observation that within a domain the distribution of senses remains more or less invariant across languages. The effectiveness of our approach is verified by doing parameter projection and then running two different WSD algorithms. The accuracy values of approximately 75% (F1-score) for three languages in two different domains establish the fact that within a domain it is possible to circumvent the problem of scarcity of resources by projecting parameters like sense distributions, corpus-co-occurrences, conceptual distance, etc. from one language to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Currently efforts are on in India to build large scale Machine Translation and Cross Lingual Search systems in consortia mode. These efforts are large, in the sense that 10-11 institutes and 6-7 languages spanning the length and breadth of the country are involved. The approach taken for translation is transfer based which needs to tackle the problem of word sense disambiguation (WSD) (Sergei et. al., 2003) . Since 90s machine learning based approaches to WSD using sense marked corpora have gained ground (Eneko Agirre & Philip Edmonds, 2007) . However, the creation of sense marked corpora has always remained a costly proposition. Statistical MT has obviated the need for elaborate resources for WSD, because WSD in SMT happens implicitly through parallel corpora (Brown et. al., 1993) . But parallel corpora too are a very costly resource.",
"cite_spans": [
{
"start": 388,
"end": 410,
"text": "(Sergei et. al., 2003)",
"ref_id": "BIBREF16"
},
{
"start": 517,
"end": 547,
"text": "Agirre & Philip Edmonds, 2007)",
"ref_id": "BIBREF2"
},
{
"start": 771,
"end": 792,
"text": "(Brown et. al., 1993)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The above situation brings out the challenges involved in Indian language MT and CLIR. Lack of resources coupled with the multiplicity of Indian languages severely affects the performance of several NLP tasks. In the light of this, we focus on the problem of developing methodologies that reuse resources. The idea is to do the annotation work for one language and find ways of using them for another language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work on WSD takes place in a multilingual setting involving Hindi (national language of India; 500 million speaker base), Marathi (20 million speaker base), Bengali (185 million speaker base) and Tamil (74 million speaker base). The wordnet of Hindi and sense marked corpora of Hindi are used for all these languages. Our methodology rests on a novel multilingual dictionary organization and on the idea of \"parameter projection\" from Hindi to the other languages. Also the domains of interest are tourism and health.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The roadmap of the paper is as follows. Section 2 describes related work. In section 3 we introduce the parameters essential for domain-specific WSD. Section 4 builds the case for parameter projection. Section 5 introduces the Multilingual Dictionary Framework which plays a key role in parameter projection. Section 6 is the core of the work, where we present parameter projection from one language to another. Section 7 describes two WSD algorithms which combine various parameters for do-main-specific WSD. Experiments and results are presented in sections 8 and 9. Section 10 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Knowledge based approaches to WSD such as Lesk\"s algorithm (Michael Lesk, 1986 ), Walker\"s algorithm (Walker D. & Amsler R., 1986) , conceptual density (Agirre Eneko & German Rigau, 1996) and random walk algorithm (Mihalcea Rada, 2005) essentially do Machine Readable Dictionary lookup. However, these are fundamentally overlap based algorithms which suffer from overlap sparsity, dictionary definitions being generally small in length.",
"cite_spans": [
{
"start": 42,
"end": 78,
"text": "Lesk\"s algorithm (Michael Lesk, 1986",
"ref_id": null
},
{
"start": 101,
"end": 130,
"text": "(Walker D. & Amsler R., 1986)",
"ref_id": "BIBREF18"
},
{
"start": 152,
"end": 187,
"text": "(Agirre Eneko & German Rigau, 1996)",
"ref_id": "BIBREF0"
},
{
"start": 224,
"end": 235,
"text": "Rada, 2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Supervised learning algorithms for WSD are mostly word specific classifiers, e.g., WSD using SVM (Lee et. al., 2004) , Exemplar based WSD (Ng Hwee T. & Hian B. Lee, 1996) and decision list based algorithm (Yarowsky, 1994) . The requirement of a large training corpus renders these algorithms unsuitable for resource scarce languages.",
"cite_spans": [
{
"start": 83,
"end": 116,
"text": "WSD using SVM (Lee et. al., 2004)",
"ref_id": null
},
{
"start": 160,
"end": 170,
"text": "Lee, 1996)",
"ref_id": "BIBREF10"
},
{
"start": 205,
"end": 221,
"text": "(Yarowsky, 1994)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Semi-supervised and unsupervised algorithms do not need large amount of annotated corpora, but are again word specific classifiers, e.g., semisupervised decision list algorithm (Yarowsky, 1995) and Hyperlex (V\u00e9ronis Jean, 2004) ). Hybrid approaches like WSD using Structural Semantic Interconnections (Roberto Navigli & Paolo Velardi, 2005 ) use combinations of more than one knowledge sources (wordnet as well as a small amount of tagged corpora). This allows them to capture important information encoded in wordnet (Fellbaum, 1998) as well as draw syntactic generalizations from minimally tagged corpora.",
"cite_spans": [
{
"start": 177,
"end": 193,
"text": "(Yarowsky, 1995)",
"ref_id": "BIBREF20"
},
{
"start": 216,
"end": 227,
"text": "Jean, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 310,
"end": 339,
"text": "Navigli & Paolo Velardi, 2005",
"ref_id": "BIBREF15"
},
{
"start": 518,
"end": 534,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "At this point we state that no single existing solution to WSD completely meets our requirements of multilinguality, high domain accuracy and good performance in the face of not-so-large annotated corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We discuss a number of parameters that play a crucial role in WSD. To appreciate this, consider the following example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
{
"text": "The river flows through this region to meet the sea.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
{
"text": "The word sea is ambiguous and has three senses as given in the Princeton Wordnet (PWN): S1: (n) sea (a division of an ocean or a large body of salt water partially enclosed by land) S2: (n) ocean, sea (anything apparently limitless in quantity or volume) S3: (n) sea (turbulent water with swells of considerable size) \"heavy seas\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
{
"text": "Our first parameter is obtained from Domain specific sense distributions. In the above example, the first sense is more frequent in the tourism domain (verified from manually sense marked tourism corpora). Domain specific sense distribution information should be harnessed in the WSD task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
{
"text": "The second parameter arises from the dominance of senses in the domain. Senses are expressed by synsets, and we define a dominant sense as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
{
"text": "A few dominant senses in the Tourism domain are {place, country, city, area}, {body of water}, {flora, fauna}, {mode of transport} and {fine arts}. In disambiguating a word, that sense which belongs to the sub-tree of a domain-specific dominant sense should be given a higher score than other senses. The value of this parameter (\u03b8) is decided as follows: \u03b8 = 1; if the candidate synset is a dominant synset \u03b8 = 0.5; if the candidate synset belongs to the subtree of a dominant synset \u03b8 = 0.001; if the candidate synset is neither a dominant synset nor belongs to the sub-tree of a dominant synset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
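{
"text": "As an illustration, here is a minimal Python sketch (ours, not the authors' implementation) of how \u03b8 could be assigned; 'dominant' is the set of domain-specific dominant synsets and 'parent' is an assumed map from a synset to its hypernym:\n\ndef theta(synset, dominant, parent):\n    if synset in dominant:\n        return 1.0\n    node = parent.get(synset)\n    while node is not None:  # walk up the hypernymy hierarchy\n        if node in dominant:\n            return 0.5  # inside the sub-tree of a dominant synset\n        node = parent.get(node)\n    return 0.001  # unrelated to any dominant synset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},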
{
"text": "Our third parameter comes from Corpus cooccurrence. Co-occurring monosemous words as well as already disambiguated words in the context help in disambiguation. For example, the word river appearing in the context of sea is a monosemous word. The frequency of co-occurrence of river with the \"water body\" sense of sea is high in the tourism domain. Corpus co-occurrence is cal-A synset node in the wordnet hypernymy hierarchy is called Dominant if the synsets in the sub-tree below the synset are frequently occurring in the domain corpora. culated by considering the senses which occur in a window of 10 words around a sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
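{
"text": "A minimal sketch of this computation, assuming a sense-tagged sentence is represented as a list of (word, sense_id) pairs (a representation we choose purely for illustration):\n\nfrom collections import Counter\n\ndef cooccurrence_counts(tagged_sentences, window=10):\n    counts = Counter()\n    for sent in tagged_sentences:\n        for i, (_, s_i) in enumerate(sent):\n            # count sense pairs within a 10-word window to the right\n            for _, s_j in sent[i + 1:i + 1 + window]:\n                counts[frozenset((s_i, s_j))] += 1\n    return counts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},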
{
"text": "Our fourth parameter is based on the semantic distance between any pair of synsets in terms of the shortest path length between two synsets in the wordnet graph. An edge in the shortest path can be any semantic relation from the wordnet relation repository (e.g., hypernymy, hyponymy, meronymy, holonymy, troponymy etc.) .",
"cite_spans": [
{
"start": 257,
"end": 320,
"text": "(e.g., hypernymy, hyponymy, meronymy, holonymy, troponymy etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
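{
"text": "Semantic distance can be computed with a breadth-first search over the wordnet graph; in this sketch, 'graph' is an assumed adjacency map from a synset id to its neighbours across all relation types:\n\nfrom collections import deque\n\ndef semantic_distance(graph, s1, s2):\n    seen, queue = {s1}, deque([(s1, 0)])\n    while queue:\n        node, d = queue.popleft()\n        if node == s2:\n            return d  # shortest path length, edges of any relation\n        for nbr in graph.get(node, ()):\n            if nbr not in seen:\n                seen.add(nbr)\n                queue.append((nbr, d + 1))\n    return float('inf')  # synsets not connected",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},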
{
"text": "For nouns we do something additional over and above the semantic distance. We take advantage of the deeper hierarchy of noun senses in the wordnet structure. This gives rise to our fifth and final parameter which arises out of the conceptual distance between a pair of senses. Conceptual distance between two synsets S 1 and S 2 is calculated using Equation 1, motivated by Agirre Eneko & German Rigau (1996) .",
"cite_spans": [
{
"start": 374,
"end": 408,
"text": "Agirre Eneko & German Rigau (1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for WSD",
"sec_num": "3"
},
{
"text": "Length of the path between (S1, S2) in terms of hypernymy hierarchy Height of the lowest common ancestor of S1 and S2 in the wordnet hierarchy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Distance (S1, S2) =",
"sec_num": null
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Distance (S1, S2) =",
"sec_num": null
},
{
"text": "The conceptual distance is proportional to the path length between the synsets, as it should be. The distance is also inversely proportional to the height of the common ancestor of two sense nodes, because as the common ancestor becomes more and more general the conceptual relatedness tends to get vacuous (e.g., two nodes being related through entity which is the common ancestor of EVERYTHING, does not really say anything about the relatedness).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Distance (S1, S2) =",
"sec_num": null
},
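{
"text": "A sketch of Equation (1), assuming 'parent' maps a synset to its hypernym (the root has parent None); we take the height of the lowest common ancestor as its distance from the root, which is our reading of the equation: a very general ancestor then gives a small denominator and hence a large distance, matching the discussion above.\n\ndef hypernym_chain(parent, s):\n    chain = [s]\n    while parent.get(chain[-1]) is not None:\n        chain.append(parent[chain[-1]])\n    return chain  # s, its hypernym, ..., root\n\ndef conceptual_distance(parent, s1, s2):\n    c1 = hypernym_chain(parent, s1)\n    c2 = hypernym_chain(parent, s2)\n    lca = next(n for n in c1 if n in c2)  # lowest common ancestor\n    path_len = c1.index(lca) + c2.index(lca)  # hypernymy path via the LCA\n    lca_height = len(c1) - c1.index(lca)  # nodes from the LCA up to the root\n    return path_len / lca_height",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Distance (S1, S2) =",
"sec_num": null
},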
{
"text": "To summarize, our various parameters used for domain-specific WSD are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Distance (S1, S2) =",
"sec_num": null
},
{
"text": "Wordnet-dependent parameters \uf0b7 belongingness-to-dominant-concept \uf0b7 conceptual-distance \uf0b7 semantic-distance Corpus-dependent parameters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Distance (S1, S2) =",
"sec_num": null
},
{
"text": "\uf0b7 sense distributions \uf0b7 corpus co-occurrence. In section 7 we show how these parameters are used to come up with a scoring function for WSD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual Distance (S1, S2) =",
"sec_num": null
},
{
"text": "Wordnet-dependent parameters depend on the graph based structure of Wordnet whereas the Corpus-dependent parameters depend on various statistics learnt from a sense marked corpora. Both the tasks of (a) constructing a wordnet from scratch and (b) collecting sense marked corpora for multiple languages are tedious and expensive. An important question being addressed in this paper is: whether the effort required in constructing semantic graphs for multiple wordnets and collecting sense marked corpora can be avoided? Our findings seem to suggest that by projecting relations from the wordnet of a language and by projecting corpus statistics from the sense marked corpora of the language we can achieve this end. Before we proceed to discuss the way to realize parameter projection, we present a novel dictionary which facilitates this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building a case for Parameter Projection",
"sec_num": "4"
},
{
"text": "Parameter projection as described in section 4 rests on a novel and effective method of storage and use of dictionary in a multilingual setting proposed by Mohanty et. al. (2008) . For the purpose of current discussion, we will call this multilingual dictionary framework MultiDict. One important departure from traditional dictionary is that synsets are linked, and after that the words inside the synsets are linked. The basic mapping is thus between synsets and thereafter between the words. Table 1 shows the structure of MultiDict, with one example row standing for the concept of boy. The first column is the pivot describing a concept with a unique ID. The subsequent columns show the words expressing the concept in respective languages (in the example table above, English, Hindi and Marathi). Thus to express the concept \"04321: a youthful male person\", there are two lexical elements in English, which constitute a synset. Correspondingly, the Hindi and Marathi synsets contain 3 words each.",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "Mohanty et. al. (2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 495,
"end": 502,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Synset based multilingual dictionary",
"sec_num": "5"
},
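{
"text": "Conceptually, MultiDict can be pictured as a map from a pivot concept id to per-language synsets; the following Python sketch (field names are ours, words transliterated from the example row) is only illustrative:\n\nmultidict = {\n    '04321': {\n        'concept': 'a youthful male person',\n        'english': ['male-child', 'boy'],\n        'hindi': ['ladakaa', 'baalak', 'bachcha'],\n        'marathi': ['mulagaa', 'poragaa', 'pora'],\n    },\n}\n# Semantic relations are stored once, over the Hindi (pivot) synsets,\n# and are inherited by the linked synsets of the other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synset based multilingual dictionary",
"sec_num": "5"
},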
{
"text": "It may be noted that the central language whose synsets the synsets of other languages link to is Hindi. This way of linking synsets-more popularly known as the expansion approach-has several advantages as discussed in (Mohanty et. al., 2008) . One advantage germane to the point of this paper is that the synsets in a particular column automatically inherit the various semantic relations of the Hindi wordnet (Dipak Narayan et. al., 2000) , which saves the effort involved in reconstructing these relations for multiple languages.",
"cite_spans": [
{
"start": 219,
"end": 242,
"text": "(Mohanty et. al., 2008)",
"ref_id": "BIBREF13"
},
{
"start": 411,
"end": 440,
"text": "(Dipak Narayan et. al., 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synset based multilingual dictionary",
"sec_num": "5"
},
{
"text": "After the synsets are linked, cross linkages are set up manually from the words of a synset to the words of a linked synset of the central language. The average number of such links per synset per language pair is approximately 3. These crosslinkages actually solve the problem of lexical choice in translating from text of one language to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synset based multilingual dictionary",
"sec_num": "5"
},
{
"text": "Thus for the Marathi word \u092e\u0941 \u0932\u0917\u093e {mulagaa} denoting \"a youthful male person\", the correct lexical substitute from the corresponding Hindi synset is \u0932\u0921\u093c\u0915\u093e {ladakaa} (Figure 1 ). One might argue that any word within the synset could serve the purpose of translation. However, the exact lexical substitution has to respect native speaker acceptability. We put these cross linkages to another use, as described later.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 173,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Synset based multilingual dictionary",
"sec_num": "5"
},
{
"text": "Since it is the MultiDict which is at the heart of parameter projection, we would like to summarize the main points of this section. (1) By linking with the synsets of Hindi, the cost of building wordnets of other languages is partly reduced (semantic relations are inherited). The wordnet parameters of Hindi wordnet now become projectable to other languages. (2) By using the cross linked words in the synsets, corpus parameters become projectable (vide next section).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synset based multilingual dictionary",
"sec_num": "5"
},
{
"text": "6 Parameter projection using MultDict",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synset based multilingual dictionary",
"sec_num": "5"
},
{
"text": "Suppose a word (say, W) in language L 1 (say, Marathi) has k senses. For each of these k senses we are interested in finding the parameter P(S i |W)which is the probability of sense S i given the word W expressed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
{
"text": ") = #( , ) #( , )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
{
"text": "where \"#\" indicates \"count-of\". Consider the example of two senses of the Marathi word \u0938\u093e\u0917\u0930 {saagar}, viz., sea and abundance and the corresponding cross-linked words in Hindi (Figure 2 below): Marathi Hindi Figure 2 : Two senses of the Marathi word \u0938\u093e\u0917\u0930 (saagar), viz., {water body} and {abundance}, and the corresponding cross-linked words in Hindi 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 185,
"text": "(Figure 2",
"ref_id": null
},
{
"start": 208,
"end": 216,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
{
"text": "The probability P({water body}|saagar) for Marathi is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
{
"text": "#({ }, ) #({ }, ) + #({ }, )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
{
"text": "We propose that this can be approximated by the counts from Hindi sense marked corpora by replacing saagar with the cross linked Hindi words samudra and saagar, as per Figure 2 : Thus, the following formula is used for calculating the sense distributions of Marathi words using the sense marked Hindi corpus from the same domain:",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 176,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ") = #( , _ _ _ ) #( , _ _ _ )",
"eq_num": "(2)"
}
],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
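{
"text": "A minimal sketch of Equation (2); 'crosslink' (mapping a Marathi word and sense to its cross-linked Hindi word) and 'hindi_counts' (sense-word counts from the Hindi sense marked corpus) are assumed inputs:\n\ndef projected_sense_distribution(word, senses, crosslink, hindi_counts):\n    # count each sense via the cross-linked Hindi word\n    raw = {s: hindi_counts.get((s, crosslink[(word, s)]), 0) for s in senses}\n    total = sum(raw.values()) or 1\n    return {s: c / total for s, c in raw.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},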
{
"text": "Note that we are not interested in the exact sense distribution of the words, but only in the relative sense distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
{
"text": "To prove that the projected relative distribution is faithful to the actual relative distribution of senses, we obtained the sense distribution statistics of a set of Marathi words from a sense tagged Marathi corpus (we call the sense marked corpora of a language its self corpora). These sense distribution statistics were compared with the statistics for these same words obtained by projecting from a sense tagged Hindi corpus using Equation (2). The results are summarized in Table 2 shows that whenever \u0938\u093e\u0917\u0930 (saagar) (sea) appears in the Marathi tourism corpus there is a 100% chance that it will appear in the \"water body\" sense and 0% chance that it will appear in the sense of \"abundance\". Column 5 shows that the same probability values are obtained using projections from Hindi tourism cor-pus. Taking another example, the third row shows that whenever \u0920\u093f\u0915\u093e\u0923 (thikaan) (place, home) appears in the Marathi tourism corpus there is a much higher chance of it appearing in the sense of \"place\" (96.2%) then in the sense of \"home\" (3.7%). Column 5 shows that the relative probabilities of the two senses remain the same even when using projections from Hindi tourism corpus (i.e. by using the corresponding cross-linked words in Hindi). To quantify these observations, we calculated the average KL divergence and Spearman\"s correlation co-efficient between the two distributions. The KL divergence is 0.766 and Spearman\"s correlation co-efficient is 0.299. Both these values indicate that there is a high degree of similarity between the distributions learnt using projection and those learnt from the self corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 487,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},
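{
"text": "The kind of comparison reported above can be reproduced with standard tools; the sketch below uses the Table 2 rows (it is illustrative and will not yield the exact figures reported, which were computed over a larger word set):\n\nfrom scipy.stats import entropy, spearmanr\n\n# (self-corpus, projected) sense distributions from Table 2\npairs = [([0.684, 0.315], [0.714, 0.285]),  # kimat\n         ([0.164, 0.835], [0.209, 0.770]),  # rasta\n         ([0.962, 0.037], [0.878, 0.120])]  # thikaan\navg_kl = sum(entropy(p, q) for p, q in pairs) / len(pairs)  # KL divergence\nrho, _ = spearmanr([x for p, _ in pairs for x in p],\n                   [x for _, q in pairs for x in q])  # rank correlation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P(Sense|Word) parameter",
"sec_num": "6.1"
},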
{
"text": "Similarly, within a domain, the statistics of cooccurrence of senses remain the same across languages. For example, the co-occurrence of the Marathi synsets {\u0906\u0915\u093e\u0935 (akash) (sky), \u0905\u093f\u0902 \u092c\u0930 (ambar) (sky)} and {\u092e\u0947 \u0918 (megh) (cloud), \u0905\u092d\u094d\u0930 (abhra) (cloud)} in the Marathi corpus remains more or less same as (or proportional to) the co-occurrence between the corresponding Hindi synsets in the Hindi corpus. Table 3 shows a few examples depicting similarity between co-occurrence statistics learnt from Marathi tourism corpus and Hindi tourism corpus. Note that we are talking about co-occurrence of synsets and not words. For example, the second row shows that the probability of co-occurrence of the synsets {cloud} and {sky} is almost same in the Marathi and Hindi corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 399,
"end": 406,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Co-occurrence parameter",
"sec_num": "6.2"
},
{
"text": "We describe two algorithms to establish the usefulness of the idea of parameter projection. The first algorithm-called iterative WSD (IWSD-) is greedy, and the second based on PageRank algorithm is exhaustive. Both use scoring functions that make use of the parameters detailed in the previous sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our algorithms for WSD",
"sec_num": "7"
},
{
"text": "We have been motivated by the Energy expression in Hopfield network (Hopfield, 1982) in formulating a scoring function for ranking the senses. Hopfield Network is a fully connected bidirectional symmetric network of bi-polar (0/1 or +1/-1) neurons. We consider the asynchronous Hopfield Network. At any instant, a randomly chosen neuron (a) examines the weighted sum of the input, (b) compares this value with a threshold and (c) gets to the state of 1 or 0, depending on whether the input is greater than or less than or equal to the threshold. The assembly of 0/1 states of individual neurons defines a state of the whole network. Each state has associated with it an energy, E, given by the following expression",
"cite_spans": [
{
"start": 68,
"end": 84,
"text": "(Hopfield, 1982)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative WSD (IWSD)",
"sec_num": "7.1"
},
{
"text": "= \u2212 + > =1 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative WSD (IWSD)",
"sec_num": "7.1"
},
{
"text": "where, N is the total number of neurons in the network, and are the activations of neurons i and j respectively and is the weight of the connection between neurons i and j. Energy is a fundamental property of Hopfield networks, providing the necessary machinery for discussing convergence, stability and such other considerations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative WSD (IWSD)",
"sec_num": "7.1"
},
{
"text": "The energy expression as given above cleanly separates the influence of self-activations of neurons and that of interactions amongst neurons to the global macroscopic property of energy of the network. This fact has been the primary insight for equation 4which was proposed to score the most appropriate synset in the given context. The correspondences are as follows: The component * of the energy due to the self activation of a neuron can be compared to the corpus specific sense of a word in a domain. The other component * * coming from the interaction of activations can be compared to the score of a sense due to its interaction in the form of corpus co-occurrence, conceptual distance, and wordnetbased semantic distance with the senses of other words in the sentence. The first component thus captures the rather static corpus sense, whereas the second expression brings in the sentential context. 1. Tag all monosemous words in the sentence. 2. Iteratively disambiguate the remaining words in the sentence in increasing order of their degree of polysemy. 3. At each stage select that sense for a word which maximizes the score given by Equation 4Algorithm1: Iterative WSD IWSD is clearly a greedy algorithm. It bases its decisions on already disambiguated words, and ignores words with higher degree of polysemy. For example, while disambiguating bisemous words, the algorithm uses only the monosemous words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative WSD (IWSD)",
"sec_num": "7.1"
},
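{
"text": "A compact sketch of Algorithm 1; 'senses(w)' (candidate senses of a word) and 'score(s, ctx)' (the scoring function of Equation (4), combining the wordnet and corpus parameters) are assumed helpers:\n\ndef iterative_wsd(words, senses, score):\n    # step 1: monosemous words are tagged directly\n    assignment = {w: senses(w)[0] for w in words if len(senses(w)) == 1}\n    # step 2: remaining words in increasing order of polysemy\n    for w in sorted((w for w in words if w not in assignment),\n                    key=lambda w: len(senses(w))):\n        ctx = list(assignment.values())  # only already-fixed senses\n        # step 3: pick the sense maximizing the score in this context\n        assignment[w] = max(senses(w), key=lambda s: score(s, ctx))\n    return assignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative WSD (IWSD)",
"sec_num": "7.1"
},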
{
"text": "Rada Mihalcea (2005) proposed the idea of using PageRank algorithm to find the best combination of senses in a sense graph. The nodes in a sense graph correspond to the senses of all the words in a sentence and the edges depict the strength of interaction between senses. The score of each node in the graph is then calculated using the following recursive formula:",
"cite_spans": [
{
"start": 5,
"end": 20,
"text": "Mihalcea (2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modified PageRank algorithm",
"sec_num": "7.2"
},
{
"text": "= 1 \u2212 d + d * W ij W jk S k \u2208Out S i * Score S j S j \u2208In S i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified PageRank algorithm",
"sec_num": "7.2"
},
{
"text": "Instead of calculating W ij based on the overlap between the definition of senses S i and S as proposed by Rada Mihalcea (2005) , we calculate the edge weights using the following formula:",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "Rada Mihalcea (2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modified PageRank algorithm",
"sec_num": "7.2"
},
{
"text": "= , * 1 , * 1 , * | * | = 0.85",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified PageRank algorithm",
"sec_num": "7.2"
},
{
"text": "This formula helps capture the edge weights in terms of the corpus bias as well as the interaction between the senses in the corpus and wordnet. It should be noted that this algorithm is not greedy. Unlike IWSD, this algorithm allows all the senses of all words to play a role in the disambiguation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified PageRank algorithm",
"sec_num": "7.2"
},
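{
"text": "A sketch of the modified PageRank iteration over the sense graph; 'W' holds the edge weights defined above, keyed by sense pairs, and 'nodes' is the set of candidate senses of all words in the sentence:\n\ndef sense_page_rank(nodes, W, d=0.85, iters=30):\n    score = {n: 1.0 for n in nodes}\n    # total outgoing weight of each node, for normalisation\n    out_wt = {j: sum(W.get((j, k), 0.0) for k in nodes) for j in nodes}\n    for _ in range(iters):\n        score = {i: (1 - d) + d * sum(W.get((j, i), 0.0) / out_wt[j] * score[j]\n                                      for j in nodes if out_wt[j] > 0)\n                 for i in nodes}\n    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified PageRank algorithm",
"sec_num": "7.2"
},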
{
"text": "We tested our algorithm on tourism corpora in 3 languages (viz., Marathi, Bengali and Tamil) and health corpora in 1 language (Marathi) using projections from Hindi. The corpora for both the domains were manually sense tagged. A 4-fold cross validation was done for all the languages in both the domains. The size of the corpus for each language is described in Table 4 . Table 6 shows the results of disambiguation (precision, recall and F-score). We give values for two algorithms in the tourism domain: IWSD and Pa-geRank. In each case figures are given for both with and without parameter projection. The wordnet baseline figures too are presented for the sake of grounding the results. Note the lines of numbers in bold, and compare them with the numbers in the preceding line. This shows the fall in accuracy value when one tries the parameter projection approach in place of self corpora. For example, consider the F-score as given by IWSD for Marathi. It degrades from about 81% to 72% in using parameter projection in place of self corpora. Still, the value is much more than the baseline, viz., the wordnet first sense (a typically reported baseline).",
"cite_spans": [],
"ref_spans": [
{
"start": 362,
"end": 369,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 372,
"end": 379,
"text": "Table 6",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Experimental Setup:",
"sec_num": "8"
},
{
"text": "Coming to PageRank for Marathi, the fall in accuracy is about 8%. Appendix A shows the corresponding figure for Tamil with IWSD as 10%. Appendix B reports the fall to be 11% for a different domain-Health-for Marathi (using IWSD) .",
"cite_spans": [
{
"start": 216,
"end": 228,
"text": "(using IWSD)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "9"
},
{
"text": "In all these cases, even after degradation the performance is far above the wordnet baseline. This shows that one could trade accuracy with the cost of creating sense annotated corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "9"
},
{
"text": "Based on our study for 3 languages and 2 domains, we conclude the following: (i) Domain specific sense distributions-if obtainable-can be exploited to advantage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work:",
"sec_num": "10"
},
{
"text": "(ii) Since sense distributions remain same across languages, it is possible to create a disambiguation engine that will work even in the absence of sense tagged corpus for some resource deprived language, provided (a) there are aligned and cross linked sense dictionaries for the language in question and another resource rich language, (b) the domain in which disambiguation needs to be performed for the resource deprived language is the same as the domain for which sense tagged corpora is available for the resource rich language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work:",
"sec_num": "10"
},
{
"text": "(iii) Provided the accuracy reduction is not drastic, it may make sense to trade high accuracy for the effort in collecting sense marked corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work:",
"sec_num": "10"
},
{
"text": "It would be interesting to test our algorithm on other domains and other languages to conclusively establish the effectiveness of parameter projection for multilingual WSD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work:",
"sec_num": "10"
},
{
"text": "It would also be interesting to analyze the contribution of corpus and wordnet parameters independently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work:",
"sec_num": "10"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word sense disambiguation using conceptual density",
"authors": [
{
"first": "Agirre",
"middle": [],
"last": "Eneko & German Rigau",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agirre Eneko & German Rigau. 1996. Word sense dis- ambiguation using conceptual density. In Proceed- ings of the 16th International Conference on Computational Linguistics (COLING), Copenhagen, Denmark.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Experience in Building the Indo WordNet -a WordNet for Hindi. First International Conference on Global WordNet, Mysore",
"authors": [
{
"first": "Dipak",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Debasri",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Pande",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipak Narayan, Debasri Chakrabarti, Prabhakar Pande and P. Bhattacharyya. 2002. An Experience in Build- ing the Indo WordNet -a WordNet for Hindi. First International Conference on Global WordNet, My- sore, India.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Word Sense Disambiguation Algorithms and Applications",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "& Philip",
"middle": [],
"last": "Edmonds",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre & Philip Edmonds. 2007. Word Sense Disambiguation Algorithms and Applications. Sprin- ger Publications.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, C. 1998. WordNet: An Electronic Lexical Database. The MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neural networks and physical systems with emergent collective computational abilities",
"authors": [
{
"first": "J",
"middle": [
"J"
],
"last": "Hopfield",
"suffix": ""
}
],
"year": 1982,
"venue": "Proceedings of the National Academy of Sciences of the USA",
"volume": "79",
"issue": "",
"pages": "2554--2558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. J. Hopfield. April 1982. \"Neural networks and physi- cal systems with emergent collective computational abilities\", Proceedings of the National Academy of Sciences of the USA, vol. 79 no. 8 pp. 2554-2558.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Supervised word sense disambiguation with support vector machines and multiple knowledge sources",
"authors": [
{
"first": "Lee",
"middle": [],
"last": "Yoong",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hwee",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ng & Tee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chia",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text",
"volume": "",
"issue": "",
"pages": "137--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee Yoong K., Hwee T. Ng & Tee K. Chia. 2004. Su- pervised word sense disambiguation with support vector machines and multiple knowledge sources. Proceedings of Senseval-3: Third International Workshop on the Evaluation of Systems for the Se- mantic Analysis of Text, Barcelona, Spain, 137-140.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using syntactic dependency as local context to resolve word sense ambiguity",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Dekang",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "64--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Dekang. 1997. Using syntactic dependency as local context to resolve word sense ambiguity. In Proceed- ings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL), Madrid, 64-71.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Lesk",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 5th annual international conference on Systems documentation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th annual international conference on Systems documentation, Toronto, Ontario, Canada.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Large vocabulary unsupervised word sense disambiguation with graph-based algorithms for sequence data labeling",
"authors": [
{
"first": "Mihalcea",
"middle": [],
"last": "Rada",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Joint Human Language Technology and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP)",
"volume": "",
"issue": "",
"pages": "411--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea Rada. 2005. Large vocabulary unsupervised word sense disambiguation with graph-based algo- rithms for sequence data labeling. In Proceedings of the Joint Human Language Technology and Empiri- cal Methods in Natural Language Processing Confe- rence (HLT/EMNLP), Vancouver, Canada, 411-418.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Integrating multiple knowledge sources to disambiguate word senses: An exemplar-based approach",
"authors": [
{
"first": "T",
"middle": [],
"last": "Ng Hwee",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Hian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ng Hwee T. & Hian B. Lee. 1996. Integrating multiple knowledge sources to disambiguate word senses: An exemplar-based approach. In Proceedings of the 34th",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computation- al Linguistics (ACL), Santa Cruz, U.S.A., 40-47.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown and Vincent J.Della Pietra and Stephen A. Della Pietra and Robert. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Pa- rameter Estimation. Computational Linguistics Vol 19, 263-311.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Synset Based Multilingual Dictionary: Insights, Applications and Challenges",
"authors": [
{
"first": "Rajat",
"middle": [],
"last": "Mohanty",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Pande",
"suffix": ""
},
{
"first": "Shraddha",
"middle": [],
"last": "Kalele",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2008,
"venue": "Global Wordnet Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajat Mohanty, Pushpak Bhattacharyya, Prabhakar Pande, Shraddha Kalele, Mitesh Khapra and Aditya Sharma. 2008. Synset Based Multilingual Dictionary: Insights, Applications and Challenges. Global Word- net Conference, Szeged, Hungary, January 22-25.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Selectional preference and sense disambiguation",
"authors": [
{
"first": "Resnik",
"middle": [],
"last": "Philip",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ACL Workshop on Tagging Text with Lexical Semantics, Why, What and How",
"volume": "",
"issue": "",
"pages": "52--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik Philip. 1997. Selectional preference and sense disambiguation. In Proceedings of ACL Workshop on Tagging Text with Lexical Semantics, Why, What and How? Washington, U.S.A., 52-57.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Structural Semantic Interconnections: A Knowledge-Based Approach to Word Sense Disambiguation",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions On Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli, Paolo Velardi. 2005. Structural Se- mantic Interconnections: A Knowledge-Based Ap- proach to Word Sense Disambiguation. IEEE Transactions On Pattern Analysis and Machine Intel- ligence.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Readings in Machine Translation",
"authors": [
{
"first": "Sergei",
"middle": [],
"last": "Nirenburg",
"suffix": ""
},
{
"first": "Harold",
"middle": [],
"last": "Somers",
"suffix": ""
},
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergei Nirenburg, Harold Somers, and Yorick Wilks. 2003. Readings in Machine Translation. Cambridge, MA: MIT Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "HyperLex: Lexical cartography for information retrieval",
"authors": [
{
"first": "V\u00e9ronis",
"middle": [],
"last": "Jean",
"suffix": ""
}
],
"year": 2004,
"venue": "Computer Speech & Language",
"volume": "18",
"issue": "3",
"pages": "223--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V\u00e9ronis Jean. 2004. HyperLex: Lexical cartography for information retrieval. Computer Speech & Language, 18(3):223-252.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Use of Machine Readable Dictionaries in Sublanguage Analysis",
"authors": [
{
"first": "D",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Amsler",
"suffix": ""
}
],
"year": 1986,
"venue": "Analyzing Language in Restricted Domains",
"volume": "",
"issue": "",
"pages": "69--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walker D. and Amsler R. 1986. The Use of Machine Readable Dictionaries in Sublanguage Analysis. In Analyzing Language in Restricted Domains, Grish- man and Kittredge (eds), LEA Press, pp. 69-83.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French",
"authors": [
{
"first": "Yarowsky",
"middle": [],
"last": "David",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting of the association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarowsky David. 1994. Decision lists for lexical ambi- guity resolution: Application to accent restoration in Spanish and French. In Proceedings of the 32nd An- nual Meeting of the association for Computational Linguistics (ACL), Las Cruces, U.S.A., 88-95.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "Yarowsky",
"middle": [],
"last": "David",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarowsky David. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In Pro- ceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL), Cambridge, MA, 189-196.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Cross linked synset members for the concept: a youthful male person",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "performIterativeWSD(sentence)",
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "Multilingual Dictionary Framework",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "Sense_8231 shows the same word saagar for both Marathi and Hindi. This is not uncommon, since Marathi and Hindi are sister languages.",
"num": null,
"content": "<table><tr><td/><td/><td/><td>saagar (sea)</td><td>Sense_2650</td><td>samudra (sea)</td></tr><tr><td>Marathi Synset</td><td colspan=\"2\">Hindi Synset English Synset</td><td>{water body}</td><td/><td>{water body}</td></tr><tr><td/><td/><td/><td>saagar (sea)</td><td>Sense_8231</td><td>saagar (sea)</td></tr><tr><td>mulagaa, \u092a\u094b\u0930\u0917\u093e /MW2 poragaa, pora \u092a\u094b\u0930 /MW3 /MW1 \u092e\u0941 \u0932\u0917\u093e</td><td>\u091b\u094b\u0930\u093e /HW4 bachcha, \u092c\u091a\u094d\u091a\u093e /HW3 baalak, /HW2 \u092c\u093e\u0932\u0915 ladakaa, /HW1 \u0932\u0921\u093c\u0915\u093e</td><td>/HW2 boy /HW1, male-child</td><td>{abundance}</td><td/><td>{abundance}</td></tr><tr><td/><td>choraa</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">{water body}, samudra)</td></tr><tr><td/><td/><td/><td colspan=\"3\">#({water body}, samudra) + #({abundance}, saagar)</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>Sr.</td><td>Marathi</td><td>Synset</td><td>P(S|word)</td><td>P(S|word) as</td></tr><tr><td>No</td><td>Word</td><td/><td>as learnt</td><td>projected</td></tr><tr><td/><td/><td/><td>from</td><td>from sense</td></tr><tr><td/><td/><td/><td>sense</td><td>tagged</td></tr><tr><td/><td/><td/><td>tagged</td><td>Hindi cor-</td></tr><tr><td/><td/><td/><td>Marathi</td><td>pus</td></tr><tr><td/><td/><td/><td>corpus</td><td/></tr><tr><td>1</td><td>\u0915\u0915\u093f\u0902 \u092e\u0924</td><td>{ worth }</td><td>0.684</td><td>0.714</td></tr><tr><td/><td>(kimat)</td><td>{ price }</td><td>0.315</td><td>0.285</td></tr><tr><td>2</td><td>\u0930\u0938\u094d\u0924\u093e</td><td>{ roadway }</td><td>0.164</td><td>0.209</td></tr><tr><td/><td>(rasta)</td><td>route} {road,</td><td>0.835</td><td>0.770</td></tr><tr><td>3</td><td>(thikan) \u0920\u093f\u0915\u093e\u0923</td><td>place} { land site,</td><td>0.962</td><td>0.878</td></tr><tr><td/><td/><td>{ home }</td><td>0.037</td><td>0.12</td></tr><tr><td>4</td><td>(saagar) \u0938\u093e\u0917\u0930</td><td>body} {water</td><td>1.00</td><td>1.00</td></tr><tr><td/><td/><td>{abun-</td><td>0</td><td>0</td></tr><tr><td/><td/><td>dance}</td><td/><td/></tr></table>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "Comparison of the sense distributions of some Marathi words learnt from Marathi sense tagged corpus with those projected from Hindi sense tagged corpus.",
"num": null,
"content": "<table/>"
},
"TABREF6": {
"type_str": "table",
"html": null,
"text": "Comparison of the corpus co-occurrence statistics learnt from Marathi and Hindi Tourism corpus.",
"num": null,
"content": "<table/>"
},
"TABREF8": {
"type_str": "table",
"html": null,
"text": "Size of manually sense tagged corpora for different languages.",
"num": null,
"content": "<table/>"
},
"TABREF9": {
"type_str": "table",
"html": null,
"text": "shows the number of synsets in MultiDict for each language.",
"num": null,
"content": "<table><tr><td>Language</td><td># of synsets in</td></tr><tr><td/><td>MultiDict</td></tr><tr><td>Hindi</td><td>29833</td></tr><tr><td>Marathi</td><td>16600</td></tr><tr><td>Bengali</td><td>10732</td></tr><tr><td>Tamil</td><td>5727</td></tr></table>"
},
"TABREF10": {
"type_str": "table",
"html": null,
"text": "Number of synsets for each language",
"num": null,
"content": "<table><tr><td>Algorithm</td><td/><td colspan=\"2\">Language</td></tr><tr><td/><td>Marathi</td><td/><td>Bengali</td></tr><tr><td/><td>P % R %</td><td>F %</td><td>P % R % F %</td></tr><tr><td>IWSD (training on self corpora; no parameter pro-</td><td/><td/></tr><tr><td>jection)</td><td colspan=\"3\">81.29 80.42 80.85 81.62 78.75 79.94</td></tr><tr><td>IWSD (training on Hindi and reusing parameters</td><td/><td/></tr><tr><td>for another language)</td><td/><td/></tr><tr><td>(training on self corpora; no parameter</td><td/><td/></tr><tr><td>projection)</td><td colspan=\"3\">79.61 79.61 79.61 76.41 76.41 76.41</td></tr><tr><td>PageRank (training on Hindi and reusing parame-</td><td/><td/></tr><tr><td>ters for another language)</td><td/><td/></tr><tr><td>Wordnet Baseline</td><td colspan=\"3\">58.07 58.07 58.07 52.25 52.25 52.25</td></tr></table>"
},
"TABREF11": {
"type_str": "table",
"html": null,
"text": "Precision, Recall and F-scores of IWSD, PageRank and Wordnet Baseline. Values are reported with and without parameter projection.",
"num": null,
"content": "<table/>"
}
}
}
}