{ "paper_id": "D07-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:19:23.611627Z" }, "title": "LEDIR: An Unsupervised Algorithm for Learning Directionality of Inference Rules", "authors": [ { "first": "Rahul", "middle": [], "last": "Bhagat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California Marina del Rey", "location": { "region": "CA" } }, "email": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California Marina del Rey", "location": { "region": "CA" } }, "email": "pantel@isi.edu" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California Marina del Rey", "location": { "region": "CA" } }, "email": "hovy@isi.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Semantic inference is a core component of many natural language applications. In response, several researchers have developed algorithms for automatically learning inference rules from textual corpora. However, these rules are often either imprecise or underspecified in directionality. In this paper we propose an algorithm called LEDIR that filters incorrect inference rules and identifies the directionality of correct ones. Based on an extension to Harris's distributional hypothesis, we use selectional preferences to gather evidence of inference directionality and plausibility. Experiments show empirical evidence that our approach can classify inference rules significantly better than several baselines.", "pdf_parse": { "paper_id": "D07-1017", "_pdf_hash": "", "abstract": [ { "text": "Semantic inference is a core component of many natural language applications. In response, several researchers have developed algorithms for automatically learning inference rules from textual corpora. However, these rules are often either imprecise or underspecified in directionality. In this paper we propose an algorithm called LEDIR that filters incorrect inference rules and identifies the directionality of correct ones. Based on an extension to Harris's distributional hypothesis, we use selectional preferences to gather evidence of inference directionality and plausibility. Experiments show empirical evidence that our approach can classify inference rules significantly better than several baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Paraphrases are textual expressions that convey the same meaning using different surface forms. Textual entailment is a similar phenomenon, in which the presence of one expression licenses the validity of another. Paraphrases and inference rules are known to improve performance in various NLP applications like Question Answering (Harabagiu and Hickl 2006) , summarization (Barzilay et al. 1999) and Information Retrieval (Anick and Tipirneni 1999) .", "cite_spans": [ { "start": 331, "end": 357, "text": "(Harabagiu and Hickl 2006)", "ref_id": "BIBREF8" }, { "start": 374, "end": 396, "text": "(Barzilay et al. 1999)", "ref_id": "BIBREF2" }, { "start": 423, "end": 449, "text": "(Anick and Tipirneni 1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Paraphrase and entailment involve inference rules that license a conclusion when a premise is given. 
Deciding whether a proposed inference rule is fully valid is difficult, however, and most NL systems instead focus on plausible inference. In this case, one statement has some likelihood of being identical in meaning to, or derivable from, the other. In the rest of this paper we discuss plausible inference only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given the importance of inference, several researchers have developed inference rule collections. While manually built resources like Word-Net (Fellbaum 1998) and Cyc (Lenat 1995) have been around for years, for coverage and domain adaptability reasons many recent approaches have focused on automatic acquisition of paraphrases (Barzilay and McKeown 2001) and inference rules (Lin and Pantel 2001; Szpektor et al. 2004) . The downside of these approaches is that they often result in incorrect inference rules or in inference rules that are underspecified in directionality (i.e. asymmetric but are wrongly considered symmetric). For example, consider an inference rule from DIRT (Lin and Pantel 2001) :", "cite_spans": [ { "start": 143, "end": 158, "text": "(Fellbaum 1998)", "ref_id": "BIBREF6" }, { "start": 167, "end": 179, "text": "(Lenat 1995)", "ref_id": "BIBREF10" }, { "start": 329, "end": 356, "text": "(Barzilay and McKeown 2001)", "ref_id": "BIBREF1" }, { "start": 377, "end": 398, "text": "(Lin and Pantel 2001;", "ref_id": "BIBREF12" }, { "start": 399, "end": 420, "text": "Szpektor et al. 2004)", "ref_id": "BIBREF20" }, { "start": 681, "end": 702, "text": "(Lin and Pantel 2001)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X eats Y \u21d4 X likes Y", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "All rules in DIRT are considered symmetric. Though here, one is most likely to infer that \"X eats Y\" \u21d2 \"X likes Y\", because if someone eats something, he most probably likes it 1 , but if he likes something he might not necessarily be able to eat it. So for example, given the sentence \"I eat spicy food\", one is mostly likely to infer that \"I like spicy food\". On the other hand, given the sentence \"I like rollerblading\", one cannot infer that \"I eat rollerblading\". In this paper, we propose an algorithm called LEDIR (pronounced \"leader\") for LEarning Directionality of Inference Rules. Our algorithm filters incorrect inference rules and identifies the directionality of the correct ones. Our algorithm works with any resource that produces inference rules of the form shown in example (1). We use both the distributional hypothesis and selectional preferences as the basis for our algorithm. We provide empirical evidence to validate the following main contribution: Claim: Relational selectional preferences can be used to automatically determine the plausibility and directionality of an inference rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we describe applications that can benefit by using inference rules and their directionality. 
We then talk about some previous work in this area.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Open domain question answering approaches often cast QA as the problem of finding some kind of semantic inference between a question and its answer(s) (Moldovan et al. 2003; Echiabi and Marcu 2003) . Harabagiu and Hickl (2006) recently demonstrated that textual entailment inference information, which in this system is a set of directional inference relations, improves the performance of a QA system significantly even without using any other form of semantic inference. This evidence supports the idea that learning the directionality of other sets of inference rules may improve QA performance.", "cite_spans": [ { "start": 151, "end": 173, "text": "(Moldovan et al. 2003;", "ref_id": "BIBREF15" }, { "start": 174, "end": 197, "text": "Echiabi and Marcu 2003)", "ref_id": null }, { "start": 200, "end": 226, "text": "Harabagiu and Hickl (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "2.1" }, { "text": "In Multi-Document Summarization (MDS), paraphrasing is useful for determining sentences that have similar meanings (Barzilay et al. 1999) . Knowing the directionality between the inference rules here could allow the MDS system to choose either the more specific or general sentence depending on the purpose of the summary.", "cite_spans": [ { "start": 115, "end": 137, "text": "(Barzilay et al. 1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "2.1" }, { "text": "In IR, paraphrases have been used for query expansion, which is known to promote effective retrieval (Anick and Tipirneni 1999) . Knowing the directionality of rules here could help in making a query more general or specific depending on the user needs.", "cite_spans": [ { "start": 101, "end": 127, "text": "(Anick and Tipirneni 1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "2.1" }, { "text": "Automatically learning paraphrases and inference rules from text is a topic that has received much attention lately. Barzilay and McKeown (2001) for paraphrases, DIRT (Lin and Pantel 2001) and TEASE (Szpektor et al. 2004) for inference rules, are recent approaches that have achieved promising results. While all these approaches produce collections of inference rules that have good recall, they suffer from the complementary problem of low precision. They also make no attempt to distinguish between symmetric and asymmetric inference rules. Given the potential positive impact shown in Section 2.1 of learning the directionality of inference rules, there is a need for methods, such as the one we present, to improve existing automatically created resources.", "cite_spans": [ { "start": 117, "end": 144, "text": "Barzilay and McKeown (2001)", "ref_id": "BIBREF1" }, { "start": 167, "end": 188, "text": "(Lin and Pantel 2001)", "ref_id": "BIBREF12" }, { "start": 199, "end": 221, "text": "(Szpektor et al. 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Inference Rules", "sec_num": "2.2" }, { "text": "There have been a few approaches at learning the directionality of restricted sets of semantic relations, mostly between verbs. Chklovski and Pantel (2004) used lexico-syntactic patterns over the Web to detect certain types of symmetric and asymmetric relations between verbs. 
They manually examined and obtained lexico-syntactic patterns that help identify the types of relations they considered and used these lexico-syntactic patterns over the Web to detect these relations among a set of candidate verb pairs. Their approach however is limited only to verbs and to specific types of verb-verb relations. Zanzotto et al. (2006) explored a selectional preference-based approach to learn asymmetric inference rules between verbs. They used the selectional preferences of a single verb, i.e. the semantic types of a verb's arguments, to infer an asymmetric inference between the verb and the verb form of its argument type. Their approach however applies also only to verbs and is limited to some specific types of verb-argument pairs. Torisawa (2006) presented a method to acquire inference rules with temporal constraints, between verbs. They used co-occurrences between verbs in Japanese coordinated sentences and co-occurrences between verbs and nouns to learn the verb-verb inference rules. Like the previous two methods, their approach too deals only with verbs and is limited to learning inference rules that are temporal in nature. Geffet and Dagan (2005) proposed an extension to the distributional hypothesis to discover entailment relation between words. They model the context of a word using its syntactic features and compare the contexts of two words for strict inclusion to infer lexical entailment. In principle, their work is the most similar to ours. Their method however is limited to lexical entailment and they show its effectiveness for nouns. Our method on the other hand deals with inference rules between binary relations and includes inference rules between verbal relations, non-verbal relations and multi-word relations. Our definition of context and the methodology for obtaining context similarity and overlap is also much different from theirs.", "cite_spans": [ { "start": 128, "end": 155, "text": "Chklovski and Pantel (2004)", "ref_id": "BIBREF3" }, { "start": 608, "end": 630, "text": "Zanzotto et al. (2006)", "ref_id": "BIBREF23" }, { "start": 1036, "end": 1051, "text": "Torisawa (2006)", "ref_id": "BIBREF21" }, { "start": 1440, "end": 1463, "text": "Geffet and Dagan (2005)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Directionality", "sec_num": "2.3" }, { "text": "The aim of this paper is to filter out incorrect inference rules and to identify the directionality of the correct ones. Let p i \u21d4 p j be an inference rule where each p is a binary semantic relation between two entities x and y. Let be an instance of relation p. Formal problem definition: Given the inference rule p i \u21d4 p j , we want to conclude which one of the following is more appropriate:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Directionality of Inference Rules", "sec_num": "3" }, { "text": "1. p i \u21d4 p j 2. p i \u21d2 p j 3. p i \u21d0 p j 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Directionality of Inference Rules", "sec_num": "3" }, { "text": "Consider the example (1) from section 1. There, it is most plausible to conclude \"X eats Y\" \u21d2 \"X likes Y\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". No plausible inference", "sec_num": null }, { "text": "Our algorithm LEDIR uses selectional preferences along the lines of Resnik (1996) and Pantel et al. 
(2007) to determine the plausibility and directionality of inference rules.", "cite_spans": [ { "start": 68, "end": 81, "text": "Resnik (1996)", "ref_id": "BIBREF18" }, { "start": 86, "end": 106, "text": "Pantel et al. (2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": ". No plausible inference", "sec_num": null }, { "text": "Many approaches to modeling lexical semantics have relied on the distributional hypothesis (Harris 1954) , which states that words that appear in the same contexts tend to have similar meanings. The idea is that context is a good indicator of a word meaning. Lin and Pantel (2001) proposed an extension to the distributional hypothesis and applied it to paths in dependency trees, where if two paths tend to occur in similar contexts it is hypothesized that the meanings of the paths tend to be similar.", "cite_spans": [ { "start": 91, "end": 104, "text": "(Harris 1954)", "ref_id": null }, { "start": 259, "end": 280, "text": "Lin and Pantel (2001)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Underlying Assumption", "sec_num": "3.1" }, { "text": "In this paper, we assume and propose a further extension to the distributional hypothesis and call it the \"Directionality Hypothesis\". Directionality Hypothesis: If two binary semantic relations tend to occur in similar contexts and the first one occurs in significantly more contexts than the second, then the second most likely implies the first and not vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Underlying Assumption", "sec_num": "3.1" }, { "text": "The intuition here is that of generality. The more general a relation, more the types (and number) of contexts in which it is likely to appear. Consider the example (1) from section 1. The fact is that there are many more things that someone might like than those that someone might eat. Hence, by applying the directionality hypothesis, one can infer that \"X eats Y\" \u21d2 \"X likes Y\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Underlying Assumption", "sec_num": "3.1" }, { "text": "The key to applying the distributional hypothesis to the problem at hand is to model the contexts appropriately and to introduce a measure for calculating context similarity. Concepts in semantic space, due to their abstractive power, are much richer for reasoning about inferences than simple surface words. Hence, we model the context of a relation p of the form by using the semantic classes C(x) and C(y) of words that can be instantiated for x and y respectively. To measure context similarity of two relations, we calculate the overlap coefficient (Manning and Sch\u00fctze, 1999) between their contexts.", "cite_spans": [ { "start": 564, "end": 591, "text": "(Manning and Sch\u00fctze, 1999)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Underlying Assumption", "sec_num": "3.1" }, { "text": "The selectional preferences of a predicate is the set of semantic classes that its arguments can belong to (Wilks 1975) . Resnik (1996) gave an information theoretical formulation of the idea. Pantel et al. 
(2007) extended this idea to non-verbal relations by defining the relational selectional preferences (RSPs) of a binary relation p as the sets of semantic classes C(x) and C(y) of words that can occur in positions x and y respectively.", "cite_spans": [ { "start": 107, "end": 119, "text": "(Wilks 1975)", "ref_id": "BIBREF22" }, { "start": 122, "end": 135, "text": "Resnik (1996)", "ref_id": "BIBREF18" }, { "start": 193, "end": 213, "text": "Pantel et al. (2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Preferences", "sec_num": "3.2" }, { "text": "The sets of semantic classes C(x) and C(y) can be obtained either from a manually created taxonomy like WordNet, as proposed in the previous approaches above, or from automatically generated classes produced by a word clustering algorithm, as proposed in Pantel et al. (2007). For example, given a relation like \"X likes Y\", its RSPs from WordNet could be {individual, social_group\u2026} for X and {individual, food, activity\u2026} for Y.", "cite_spans": [ { "start": 262, "end": 282, "text": "Pantel et al. (2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Preferences", "sec_num": "3.2" }, { "text": "In this paper, we deployed both the Joint Relational Model (JRM) and the Independent Relational Model (IRM) proposed by Pantel et al. (2007) to obtain the selectional preferences for a relation p.", "cite_spans": [ { "start": 116, "end": 136, "text": "Pantel et al. (2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Selectional Preferences", "sec_num": "3.2" }, { "text": "The JRM uses a large corpus to learn the selectional preferences of a binary semantic relation by considering its arguments jointly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }, { "text": "Given a relation p and a large corpus of English text, we first find all occurrences of relation p in the corpus. For every instance <x, p, y> in the corpus, we obtain the sets C(x) and C(y) of the semantic classes that x and y belong to. We then accumulate the frequencies of the triples <c(x), p, c(y)> by assuming that every c(x) \u2208 C(x) can co-occur with every c(y) \u2208 C(y) and vice versa. Every triple obtained in this manner is a candidate selectional preference for p. Following Pantel et al. (2007), we rank these candidates using pointwise mutual information (Cover and Thomas 1991). The ranking function is defined as the strength of association between two semantic classes, c_x and c_y 2 , given the relation p:", "cite_spans": [ { "start": 488, "end": 518, "text": "Following Pantel et al. (2007)", "ref_id": null }, { "start": 581, "end": 604, "text": "(Cover and Thomas 1991)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }, { "text": "pmi(c_x; c_y | p) = log [ P(c_x, c_y | p) / ( P(c_x | p) P(c_y | p) ) ] (3.1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }, { "text": "Let |c_x, p, c_y| denote the frequency of observing the instance <c(x), p, c(y)>. We estimate the probabilities of Equation 3.1 using maximum likelihood estimates over our corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }, { "text": "P(c_x | p) = |c_x, p, *| / |*, p, *|    P(c_y | p) = |*, p, c_y| / |*, p, *|    P(c_x, c_y | p) = |c_x, p, c_y| / |*, p, *| (3.2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }, { "text": "We estimate the above frequencies using:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }, { "text": "|c_x, p, *| = \u03a3_{w \u2208 c_x} |w, p, *| / |C(w)|    |*, p, c_y| = \u03a3_{w \u2208 c_y} |*, p, w| / |C(w)|    |c_x, p, c_y| = \u03a3_{w1 \u2208 c_x, w2 \u2208 c_y} |w1, p, w2| / ( |C(w1)| \u00d7 |C(w2)| ) (3.3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }, { "text": "where |x, p, y| denotes the frequency of observing the instance <x, p, y> and |C(w)| denotes the number of classes to which word w belongs; dividing by |C(w)| distributes w's mass equally among all of its senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }
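, { "text": "For concreteness, here is a minimal Python sketch of the JRM counting and ranking steps. It assumes instances of p are given as (x, y) word pairs and that classes_of(w) returns the set of semantic classes C(w) of a word; the function names and data layout are our illustration, not the paper's implementation.

import math
from collections import Counter

def jrm_rsps(instances, classes_of):
    # Accumulate the fractional counts |c_x, p, c_y|, |c_x, p, *|, |*, p, c_y|
    # and |*, p, *| of Equations 3.2 and 3.3, splitting each word's mass
    # equally over its |C(w)| classes.
    joint, left, right, total = Counter(), Counter(), Counter(), 0.0
    for x, y in instances:
        cxs, cys = classes_of(x), classes_of(y)
        if not cxs or not cys:
            continue
        for cx in cxs:
            left[cx] += 1.0 / len(cxs)
        for cy in cys:
            right[cy] += 1.0 / len(cys)
        for cx in cxs:
            for cy in cys:
                joint[(cx, cy)] += 1.0 / (len(cxs) * len(cys))
        total += 1.0
    if total == 0:
        return []
    # Rank the candidate RSPs by pmi (Equation 3.1), highest first.
    pmi = { (cx, cy): math.log((n / total) / ((left[cx] / total) * (right[cy] / total)))
            for (cx, cy), n in joint.items() }
    return sorted(pmi, key=pmi.get, reverse=True)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1: Joint Relational Model (JRM)", "sec_num": null }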
, { "text": "Due to sparse data, the JRM is likely to miss some pairs of valid relational selectional preferences. Hence we also use the IRM, which models the arguments of a binary semantic relation independently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }, { "text": "As in the JRM, we find all instances of the form <x, p, y> for a relation p. We then find the sets C(x) and C(y) of the semantic classes that x and y belong to and accumulate the frequencies of the triples <c(x), p, *> and <*, p, c(y)>, where c(x) \u2208 C(x) and c(y) \u2208 C(y).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }, { "text": "All the tuples <c(x), p, *> and <*, p, c(y)> are the independent candidate RSPs for a relation p, and we rank them according to Equation 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }, { "text": "Once we have the independently learnt RSPs, we need to convert them into a joint representation for use by the inference plausibility and directionality model. To do this, we obtain the Cartesian product between the sets <C(x), p, *> and <*, p, C(y)> for a relation p. The Cartesian product between two sets A and B is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }, { "text": "A \u00d7 B = { (a, b) : a \u2208 A and b \u2208 B } (3.4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }, { "text": "Similarly we obtain:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }, { "text": "<C(x), p, *> \u00d7 <*, p, C(y)> = { <c_x, p, c_y> : <c_x, p, *> \u2208 <C(x), p, *> and <*, p, c_y> \u2208 <*, p, C(y)> } (3.5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }, { "text": "The Cartesian product in Equation 3.5 gives the joint representation of the RSPs of the relation p learnt using the IRM. In the joint representation, the IRM RSPs have the form <c(x), p, c(y)>, which is the same form as the JRM RSPs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }
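, { "text": "As a minimal Python sketch of this conversion step, assuming the independent RSPs are given as plain sets of class labels (the names irm_joint, x_classes and y_classes are ours, not the paper's):

from itertools import product

def irm_joint(x_classes, y_classes):
    # Cartesian product of the independently learnt argument classes
    # (Equations 3.4 and 3.5): every <c_x, p, *> is paired with every <*, p, c_y>.
    return set(product(x_classes, y_classes))

# For 'X likes Y', irm_joint({'individual', 'social_group'},
#                            {'individual', 'food', 'activity'})
# yields the 6 joint RSPs of the form <c(x), likes, c(y)>.

The product deliberately overgenerates relative to the JRM; that is the intended trade-off against sparse data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2: Independent Relational Model (IRM)", "sec_num": null }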
, { "text": "Our model for determining inference plausibility and directionality is based on the intuition that for an inference to hold between two semantic relations there must be sufficient overlap between their contexts, and that the directionality of the inference depends on a quantitative comparison of those contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }, { "text": "Here we model the context of a relation by the selectional preferences of that relation. We determine the plausibility of an inference based on the overlap coefficient (Manning and Sch\u00fctze, 1999) between the selectional preferences of the two paths. We determine the directionality based on the difference in the number of selectional preferences of the relations when the inference seems plausible.", "cite_spans": [ { "start": 168, "end": 195, "text": "(Manning and Sch\u00fctze, 1999)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }, { "text": "Given a candidate inference rule p_i \u21d4 p_j, we first obtain the RSPs for p_i and for p_j. We then calculate the overlap coefficient between their respective RSPs. The overlap coefficient is one of the many distributional similarity measures used to calculate the similarity between two sets A and B:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }, { "text": "sim(A, B) = |A \u2229 B| / min(|A|, |B|) (3.6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }, { "text": "The overlap coefficient between the selectional preferences of p_i and p_j is calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }, { "text": "sim(p_i, p_j) = |<C(x), p_i, C(y)> \u2229 <C(x), p_j, C(y)>| / min(|<C(x), p_i, C(y)>|, |<C(x), p_j, C(y)>|) (3.7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }, { "text": "If sim(p_i, p_j) \u2265 \u03b1, an empirically determined threshold (\u22641), we conclude that the inference is plausible; otherwise we conclude that it is not plausible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }, { "text": "For a plausible inference, we then compute the ratio between the number of selectional preferences |<C(x), p_i, C(y)>| for p_i and |<C(x), p_j, C(y)>| for p_j and compare it against an empirically determined threshold \u03b2 (\u22651) to determine the direction of the inference. So the algorithm is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }, { "text": "If |<C(x), p_i, C(y)>| / |<C(x), p_j, C(y)>| \u2265 \u03b2, we conclude p_i \u21d0 p_j; else if |<C(x), p_i, C(y)>| / |<C(x), p_j, C(y)>| \u2264 1/\u03b2, we conclude p_i \u21d2 p_j; else we conclude p_i \u21d4 p_j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }
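, { "text": "A minimal Python sketch of this decision procedure, assuming each relation's joint RSPs are given as a set of (c_x, c_y) class pairs (the function name classify_rule and the returned labels are ours, not the paper's):

def classify_rule(rsp_i, rsp_j, alpha=0.15, beta=3.0):
    # Plausibility: overlap coefficient of the two RSP sets (Equations 3.6, 3.7).
    if not rsp_i or not rsp_j:
        return 'no plausible inference'
    sim = len(rsp_i & rsp_j) / min(len(rsp_i), len(rsp_j))
    if sim < alpha:
        return 'no plausible inference'
    # Directionality: ratio of the RSP counts, compared against beta.
    ratio = len(rsp_i) / len(rsp_j)
    if ratio >= beta:
        return 'p_i <= p_j'    # p_i is more general, so p_j implies p_i
    if ratio <= 1.0 / beta:
        return 'p_i => p_j'    # p_j is more general, so p_i implies p_j
    return 'p_i <=> p_j'       # comparable generality: bidirectional

The default thresholds \u03b1=0.15 and \u03b2=3 are the best parameters reported for the IRM with CBC classes in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference plausibility and directionality model", "sec_num": "3.3" }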
, { "text": "In this section, we describe our experimental setup to validate our claim that LEDIR can be used to determine the plausibility and directionality of an inference rule. Given an inference rule of the form p_i \u21d4 p_j, we want to use automatically learned relational selectional preferences to determine whether the inference rule is valid and, if it is, what its directionality is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "LEDIR can work with any set of binary semantic inference rules. For the purpose of this paper, we chose the inference rules from the DIRT resource (Lin and Pantel 2001). DIRT consists of 12 million rules extracted from 1GB of newspaper text (AP Newswire, San Jose Mercury and Wall Street Journal). For example, \"X eats Y\" \u21d4 \"X likes Y\" is an inference rule from DIRT.", "cite_spans": [ { "start": 147, "end": 168, "text": "(Lin and Pantel 2001)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": "4.1" }, { "text": "Appropriate choice of semantic classes is crucial for learning relational selectional preferences. The ideal set should have semantic classes that strike the right balance between abstraction and discrimination, two important characteristics that are often in conflict. A very general class has limited discriminative power, while a very specific class has limited abstractive power. Finding the right balance here is a separate research problem of its own.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Classes", "sec_num": "4.2" }, { "text": "Since an ideal set of universally acceptable semantic classes is unavailable, we adopted the Pantel et al. (2007) approach of using two sets of semantic classes. This let us experiment with sets of classes that differ greatly in how they are generated while keeping the granularity comparable, since the two sets contain approximately the same number of classes.", "cite_spans": [ { "start": 101, "end": 121, "text": "Pantel et al. (2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Classes", "sec_num": "4.2" }, { "text": "The first set of semantic classes was obtained by running the CBC clustering algorithm (Pantel and Lin, 2002) on the TREC-9 and TREC-2002 newswire collections consisting of over 600 million words. This resulted in 1628 clusters, each representing a semantic class.", "cite_spans": [ { "start": 87, "end": 109, "text": "(Pantel and Lin, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Classes", "sec_num": "4.2" }, { "text": "The second set of semantic classes was obtained by using WordNet 2.1 (Fellbaum 1998). We obtained a cut in the WordNet noun hierarchy 3 by manual inspection and used each synset below a cut point, together with everything under it, as the semantic class at that node. Our inspection showed that the synsets at depth four formed the most natural semantic classes 4 . A cut at depth four resulted in a set of 1287 semantic classes, a set that is much coarser grained than full WordNet, which has an average depth of 12. This seems to be a depth that gives reasonable abstraction while maintaining good discriminative power. It would however be interesting to experiment with more sophisticated algorithms for extracting semantic classes from WordNet and to study their effect on the relational selectional preferences, something we do not address in this paper.", "cite_spans": [ { "start": 69, "end": 84, "text": "(Fellbaum 1998)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Classes", "sec_num": "4.2" }
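, { "text": "As a rough illustration of such a depth cut, here is a sketch using NLTK's WordNet interface. Note the assumptions: NLTK ships WordNet 3.0 rather than the 2.1 used in the paper, and exactly how depth is counted from the root is our approximation.

from nltk.corpus import wordnet as wn  # assumes NLTK and its WordNet data are installed

def wordnet_classes(word, depth=4):
    # Map each noun sense of the word to its ancestor at the cut depth;
    # that ancestor's subtree acts as one of the word's semantic classes C(w).
    classes = set()
    for synset in wn.synsets(word, pos=wn.NOUN):
        for path in synset.hypernym_paths():  # each path runs root -> ... -> synset
            classes.add(path[min(depth, len(path) - 1)].name())
    return classes

# wordnet_classes('spaghetti') would return class labels such as 'food.n.01',
# depending on the WordNet version and the exact depth convention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Classes", "sec_num": "4.2" }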
, { "text": "We implemented LEDIR with both the JRM and the IRM using inference rules from DIRT and semantic classes from both CBC and WordNet. We parsed the 1999 AP newswire collection, consisting of 31 million words, with Minipar (Lin 1993) and used this to obtain the probability statistics for the models (as described in Section 3.2).", "cite_spans": [ { "start": 220, "end": 230, "text": "(Lin 1993)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "4.3" }, { "text": "We performed both system-wide evaluations and intrinsic evaluations with different values of the \u03b1 and \u03b2 parameters. Section 5 presents these results and our error analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "4.3" }, { "text": "In order to evaluate the performance of the different systems, we compare their outputs against a manually annotated gold standard. To create this gold standard, we randomly sampled 160 inference rules of the form p_i \u21d4 p_j from DIRT. We discarded three rules since they contained nominalizations 5 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Standard Construction", "sec_num": "4.4" }, { "text": "For every inference rule of the form p_i \u21d4 p_j, the annotation guideline asked annotators (in this paper we used two annotators) to choose the most appropriate of the four options:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Standard Construction", "sec_num": "4.4" }, { "text": "1. p_i \u21d4 p_j 2. p_i \u21d2 p_j 3. p_i \u21d0 p_j 4. No plausible inference", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Standard Construction", "sec_num": "4.4" }, { "text": "To help the annotators with their decisions, the annotators were provided with 10 randomly chosen instances for each inference rule. These instances, extracted from DIRT, provided the annotators with contexts in which the inference could hold. For example, for the inference rule \"X eats Y\" \u21d4 \"X likes Y\", an example instance would be \"I eat spicy food\" \u21d4 \"I like spicy food\". The annotation guideline, however, gave the annotators the freedom to think of examples other than the ones provided to make their decisions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Standard Construction", "sec_num": "4.4" }, { "text": "The annotators found that while some decisions were quite easy to make, the more complex ones often involved the choice between bidirectionality and one of the directions. To minimize disagreements and to get a better understanding of the task, the annotators trained themselves by annotating several samples together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Standard Construction", "sec_num": "4.4" }
, { "text": "We divided the set of 157 inference rules into a development set of 57 inference rules and a blind test set of 100 inference rules. Our two annotators annotated the development set together to train themselves. The blind test set was then annotated individually to test whether the task is well defined. We used the kappa statistic (Siegel and Castellan Jr. 1988) to calculate the inter-annotator agreement, resulting in \u03ba=0.63. The annotators then looked at the disagreements together to build the final gold standard.", "cite_spans": [ { "start": 338, "end": 369, "text": "(Siegel and Castellan Jr. 1988)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Gold Standard Construction", "sec_num": "4.4" }, { "text": "All this resulted in a final gold standard of 100 annotated DIRT rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Standard Construction", "sec_num": "4.4" }, { "text": "To get an objective assessment of the quality of the results obtained using our models, we compared the output of our systems against three baselines. B-random: randomly assigns one of the four possible tags to each candidate inference rule. B-frequent: assigns the most frequently occurring tag in the gold standard to each candidate inference rule. B-DIRT: assumes each inference rule is bidirectional and assigns the bidirectional tag to each candidate inference rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.5" }, { "text": "In this section, we provide empirical evidence to validate our claim that the plausibility and directionality of an inference rule can be determined using LEDIR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "We want to measure the effectiveness of LEDIR for the task of determining the validity and directionality of a set of inference rules. We follow the standard approach of reporting system accuracy by comparing system outputs on a test set with a manually created gold standard. Using the gold standard described in Section 4.4, we measure the accuracy of our systems as the percentage of inference rules for which the system tag matches the gold-standard tag: Accuracy = 100 \u00d7 (number of correctly tagged rules) / (total number of rules).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criterion", "sec_num": "5.1" }
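, { "text": "A minimal sketch of this evaluation, assuming the system and gold tags are given as parallel lists over the four classes (the function name is ours):

def accuracy(system_tags, gold_tags):
    # Percentage of inference rules whose system tag matches the gold tag.
    assert len(system_tags) == len(gold_tags) and gold_tags
    correct = sum(s == g for s, g in zip(system_tags, gold_tags))
    return 100.0 * correct / len(gold_tags)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criterion", "sec_num": "5.1" }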
, { "text": "We ran all our algorithms with different parameter combinations on the development set (the 57 DIRT rules described in Section 4.4). This resulted in a total of 420 experiments on the development set. Based on these experiments, we used the accuracy statistic to obtain the best parameter combination for each of our four systems. We then used these parameter values to obtain the corresponding percentage accuracies on the test set for each of the four systems. Table 1 summarizes the results obtained on the test set for the three baselines and for each of the four systems using the best parameter combinations obtained as described above. The overall best performing system uses the IRM algorithm with RSPs from CBC. Its performance is found to be significantly better than all three baselines using the Student's paired t-test (Manning and Sch\u00fctze, 1999) at p<0.05. The differences between this system and the other LEDIR implementations (the JRM systems and the IRM with WordNet), however, are not statistically significant.", "cite_spans": [ { "start": 836, "end": 863, "text": "(Manning and Sch\u00fctze, 1999)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 463, "end": 470, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Result Summary", "sec_num": "5.2" }, { "text": "The best performing system selected using the development set is the IRM system using CBC with the parameters \u03b1=0.15 and \u03b2=3. In general, the results obtained on the test set show that the IRM tends to perform better than the JRM. This observation points at the sparseness of data available for learning RSPs for the more restrictive JRM, the reason why we introduced the IRM in the first place. A much larger corpus would be needed to obtain good enough coverage for the JRM. Table 2 shows the confusion matrix for the overall best performing system as selected using the development set (results are taken from the test set); rows are the system tags and columns are the gold-standard tags:

          \u21d4    \u21d2    \u21d0    NO
\u21d4        16     1     3     7
\u21d2         0     3     1     3
\u21d0         7     4    22    15
NO         2     3     4     9

Table 2: Confusion matrix for the best performing system, IRM using CBC with \u03b1=0.15 and \u03b2=3. The confusion matrix indicates that the system does a very good job of identifying the directionality of the correct inference rules, but takes a big performance hit from its inability to identify the incorrect inference rules accurately. We will analyze this observation in more detail below. Figure 1 plots the variation in accuracy of the IRM with different RSPs and different values of \u03b1 and \u03b2. The figure shows a very interesting trend. It is clear that for all values of \u03b2, the systems for IRM using CBC tend to reach their peak in the range 0.15 \u2264 \u03b1 \u2264 0.25, whereas the systems for IRM using WordNet (WN) tend to reach their peak in the range 0.4 \u2264 \u03b1 \u2264 0.6. This variation indicates the kind of impact the selection of semantic classes can have on the overall performance of the system. This is not hard evidence, but it does suggest that finding the right set of semantic classes could be one big step towards improving system accuracy. Two other factors that have a big impact on the performance of our systems are the values of the system parameters \u03b1 and \u03b2, which decide the plausibility and directionality of an inference rule, respectively. To better study their effect on system performance, we studied the two parameters independently. Figure 2 shows the variation in the accuracy for the task of predicting the correct and incorrect inference rules for the different systems when varying the value of \u03b1. To obtain this graph, we classified the inference rules in the test set only as correct and incorrect, without further classification based on directionality. All four of our systems obtained accuracy scores in the range of 68-70%, showing good performance on the task of determining plausibility. This, however, is only a small improvement over the baseline score of 66% obtained by assuming every inference to be plausible (as will be shown below, our system has most impact not on determining plausibility but on determining directionality). Manual inspection of some system errors showed that the most common errors were due to the well-known 'problem of antonymy' when applying the distributional hypothesis. In DIRT, one can learn rules like "X loves Y" \u21d4 "X hates Y". 
Since the plausibility of inference rules is determined by applying the distributional hypothesis and the antonym paths tend to take the same set of classes for X and Y, our models find it difficult to filter out the incorrect inference rules which DIRT ends up learning for this very same reason. To improve our system, one avenue of research is to focus specifically on filtering incorrect inference rules involving antonyms (perhaps using methods similar to (Lin et al. 2003) ). Figure 3 shows the variation in the accuracy for the task of predicting the directionality of the correct inference rules for the different systems when varying the value of \u03b2. To obtain this graph, we separated the correct inference rules form the incorrect ones and ran all the systems on only the correct ones, predicting only the directionality of each rule for different values of \u03b2. Too low a value of \u03b2 means that the algorithms tend to predict most things as unidirectional and too high a value means that the algorithms tend to predict everything as bidirectional. It is clear from the figure that the performance of all the systems reach their peak performance in the range 2 \u2264 \u03b2 \u2264 4, which agrees with our intuition of obtaining the best system accuracy in a medium range. It is also seen that the best accuracy for each of the models goes up as compared to the corresponding values obtained in the general framework. The best performing system, IRM using CBC RSPs, reaches a peak accuracy of 63.64%, a much higher score than its accuracy score of 48% under the general framework and also a significant improvement over the baseline score of 48.48% for this task. Paired t-test shows that the difference is statistically significant at p<0.05. The baseline score for this task is obtained by assigning the most frequently occurring direction to all the correct inference rules. This paints a very encouraging picture about the ability of the algorithm to identify the directionality much more accurately if it can be provided with a cleaner set of inference rules.", "cite_spans": [ { "start": 3464, "end": 3481, "text": "(Lin et al. 2003)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 477, "end": 571, "text": "\u21d4 \u21d2 \u21d0 NO \u21d4 16 1 3 7 \u21d2 0 3 1 3 \u21d0 7 4 22 15 SYSTEM NO 2 3 4 9 Table 2", "ref_id": "TABREF0" }, { "start": 658, "end": 665, "text": "Table 2", "ref_id": null }, { "start": 1102, "end": 1110, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 2060, "end": 2068, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 3485, "end": 3493, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Performance and Error Analysis", "sec_num": "5.3" }, { "text": "Semantic inferences are fundamental to understanding natural language and are an integral part of many natural language applications such as question answering, summarization and textual entailment. Given the availability of large amounts of text and with the increase in computation power, learning them automatically from large text corpora has become increasingly feasible and popular. We introduced the Directionality Hypothesis, which states that if two paths share a significant number of relational selectional preferences (RSPs) and if the first has many more RSPs than the second, then the second path implies the first. 
Our experiments show empirical evidence that the Directionality Hypothesis with RSPs can indeed be used to filter incorrect inference rules and find the directionality of correct ones. We believe that this result is one step in the direction of solving the basic problem of semantic inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Several questions must still be addressed. The models need to be improved in order to address the problem of incorrect inference rules. The distributional hypothesis does not provide a framework to address the issue with antonymy relations like \"X loves Y\" \u21d4 \"X hates Y\" and hence other ideas need to be investigated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Ultimately, our goal is to improve the performance of NLP applications with better inferencing capabilities. Several recent data points, such as (Harabagiu and Hickl 2006) , and others discussed in Section 2.1, give promise that refined inference rules for directionality may indeed improve question answering, textual entailment and multidocument summarization accuracies. It is our hope that methods such as the one proposed in this paper may one day be used to harness the richness of automatically created inference rule resources within large-scale NLP applications.", "cite_spans": [ { "start": 145, "end": 171, "text": "(Harabagiu and Hickl 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "There could be certain usages of \"X eats Y\" where, one might not be able to infer \"X likes Y\" (for example metaphorical). But, in most cases, this inference holds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "c x and c y are shorthand for c(x) and c(y) in our equations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since we are dealing with only noun binary relations, we use only WordNet noun Hierarchy.4 By natural, here, we simply mean that a manual inspection by the authors showed that, at depth four, the resulting clusters had struck a better granularity balance than other cutoff points. We acknowledge that this is a very coarse way of extracting concepts from WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For the purpose of simplicity, we in our experiments did not use DIRT rules containing nominalizations. The algorithm however can be applied without change to inference rules containing nominalization. In fact, in the resource that we plan to release soon, we have applied the algorithm without change to DIRT rules containing nominalizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Paraphrase Search Assistant: Terminology Feedback for Iterative Information Seeking", "authors": [ { "first": "P", "middle": [ "G" ], "last": "Anick", "suffix": "" }, { "first": "S", "middle": [], "last": "Tipirneni", "suffix": "" } ], "year": 1999, "venue": "Proceedings of SIGIR 1999", "volume": "", "issue": "", "pages": "53--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anick, P.G. and Tipirneni, S. 1999. The Paraphrase Search Assistant: Terminology Feedback for Iterative Information Seeking. In Proceedings of SIGIR 1999. pp. 
53-159. Berkeley, CA", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Extracting Paraphrases from a Parallel Corpus", "authors": [ { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL 2001", "volume": "", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barzilay, R. and McKeown, K.R. 2001.Extracting Para- phrases from a Parallel Corpus. In Proceedings of ACL 2001. pp. 50-57. Toulose, France.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Information Fusion in the Context of Multi-Document Summarization", "authors": [ { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "Mckeown", "suffix": "" }, { "first": "M", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL 1999", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barzilay, R.; McKeown, K.R. and Elhadad, M. 1999. Information Fusion in the Context of Multi- Document Summarization. In Proceedings of ACL 1999. College Park, Maryland.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "VerbOCEAN: Mining the Web for Fine-Grained Semantic Verb Relations", "authors": [ { "first": "T", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "P", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP 2004", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chklovski, T. and Pantel, P. 2004. VerbOCEAN: Min- ing the Web for Fine-Grained Semantic Verb Rela- tions. In Proceedings of EMNLP 2004. Barcellona Spain.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Elements of Information Theory", "authors": [ { "first": "T", "middle": [ "M" ], "last": "Cover", "suffix": "" }, { "first": "J", "middle": [ "A" ], "last": "Thomas", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cover, T.M. and Thomas, J.A. 1991. Elements of Infor- mation Theory. John Wiley & Sons.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Noisy-Channel Approach to Question Answering", "authors": [ { "first": "A", "middle": [], "last": "Echihabi", "suffix": "" }, { "first": "", "middle": [ "D" ], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL 2003", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Echihabi, A. and Marcu. D. 2003. A Noisy-Channel Approach to Question Answering. In Proceedings of ACL 2003. Sapporo, Japan.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. 1998. WordNet: An Electronic Lexical Database. 
MIT Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Distributional Inclusion Hypothesis and Lexical Entailment", "authors": [ { "first": "M", "middle": [], "last": "Geffet", "suffix": "" }, { "first": "I", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL 2005", "volume": "", "issue": "", "pages": "107--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geffet, M.; Dagan, I. 2005. The Distributional Inclusion Hypothesis and Lexical Entailment. In Proceedings of ACL 2005. pp. 107-114. Ann Arbor, Michigan.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Methods for Using Textual Entailment in Open-Domain Question Answering", "authors": [ { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "A", "middle": [], "last": "Hickl", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL 2006", "volume": "", "issue": "", "pages": "905--912", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harabagiu, S.; and Hickl, A. 2006. Methods for Using Textual Entailment in Open-Domain Question An- swering. In Proceedings of ACL 2006. pp. 905-912. Sydney, Australia.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "CYC: A large-scale investment in knowledge infrastructure", "authors": [ { "first": "D", "middle": [], "last": "Lenat", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "33--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lenat, D. 1995. CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33-38.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Parsing Without OverGeneration", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1993, "venue": "Proceedings of ACL 1993", "volume": "", "issue": "", "pages": "112--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. 1993. Parsing Without OverGeneration. In Pro- ceedings of ACL 1993. pp. 112-120. Columbus, OH.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Discovery of Inference Rules for Question Answering", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" }, { "first": "P", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "4", "pages": "343--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. and Pantel, P. 2001. Discovery of Inference Rules for Question Answering. Natural Language Engineering 7(4):343-360.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Identifying Synonyms among Distributionally Similar Words", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" }, { "first": "S", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "L", "middle": [], "last": "Qin", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IJCAI 2003", "volume": "", "issue": "", "pages": "1492--1493", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D.; Zhao, S.; Qin, L. and Zhou, M. 2003. Identify- ing Synonyms among Distributionally Similar Words. In Proceedings of IJCAI 2003, pp. 1492- 1493. 
Acapulco, Mexico.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C.D. and Sch\u00fctze, H. 1999. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "COGEX: A Logic Prover for Question Answering", "authors": [ { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "C", "middle": [], "last": "Clark", "suffix": "" }, { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "S", "middle": [], "last": "Maiorano", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/NAACL 2003", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moldovan, D.; Clark, C.; Harabagiu, S. and Maiorano S. 2003. COGEX: A Logic Prover for Question An- swering. In Proceedings of HLT/NAACL 2003. Ed- monton, Canada.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ISP: Learning Inferential Selectional Preferences", "authors": [ { "first": "P", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "R", "middle": [], "last": "Bhagat", "suffix": "" }, { "first": "B", "middle": [], "last": "Coppola", "suffix": "" }, { "first": "T", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2007, "venue": "Proceedings of HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pantel, P.; Bhagat, R.; Coppola, B.; Chklovski, T. and Hovy, E. 2007. ISP: Learning Inferential Selectional Preferences. In Proceedings of HLT/NAACL 2007. Rochester, NY.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Discovering Word Senses from Text", "authors": [ { "first": "P", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2002, "venue": "Proceedings of KDD 2002", "volume": "", "issue": "", "pages": "613--619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pantel, P. and Lin, D. 2002. Discovering Word Senses from Text. In Proceedings of KDD 2002. pp. 613- 619. Edmonton, Canada.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Selectional Constraints: An Information-Theoretic Model and its Computational Realization", "authors": [ { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1996, "venue": "Cognition", "volume": "61", "issue": "", "pages": "127--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, P. 1996. Selectional Constraints: An Informa- tion-Theoretic Model and its Computational Realiza- tion. Cognition, 61:127-159.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Nonparametric Statistics for the Behavioral Sciences", "authors": [ { "first": "S", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "N", "middle": [ "J" ], "last": "Castellan", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siegel, S. and Castellan Jr., N. J. 1988. Nonparametric Statistics for the Behavioral Sciences. 
McGraw-Hill.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Scaling web-based acquisition of entailment relations", "authors": [ { "first": "I", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "H", "middle": [], "last": "Tanev", "suffix": "" }, { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "B", "middle": [], "last": "Coppola", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP 2004", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Szpektor, I.; Tanev, H.; Dagan, I.; and Coppola, B. 2004. Scaling web-based acquisition of entailment relations. In Proceedings of EMNLP 2004. pp. 41-48. Barce- lona, Spain.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Acquiring Inference Rules with Temporal Constraints by Using Japanese Coordinated Sentences and Noun-Verb Co-occurances", "authors": [ { "first": "K", "middle": [], "last": "Torisawa", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT/NAACL 2006", "volume": "", "issue": "", "pages": "57--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Torisawa, K. 2006. Acquiring Inference Rules with Temporal Constraints by Using Japanese Coordi- nated Sentences and Noun-Verb Co-occurances. In Proceedings of HLT/NAACL 2006. pp. 57-64. New York, New York.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Preference Semantics", "authors": [ { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1975, "venue": "Formal Semantics of Natural Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilks, Y. 1975. Preference Semantics. In E.L. Keenan (ed.), Formal Semantics of Natural Language. Cam- bridge: Cambridge University Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Discovering Asymmetric Entailment Relations between Verbs using Selectional Preferences", "authors": [ { "first": "F", "middle": [ "M" ], "last": "Zanzotto", "suffix": "" }, { "first": "M", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "M", "middle": [ "T" ], "last": "Pazienza", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING/ACL", "volume": "", "issue": "", "pages": "849--856", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zanzotto, F.M.; Pennacchiotti, M.; Pazienza, M.T. 2006. Discovering Asymmetric Entailment Relations between Verbs using Selectional Preferences. In Pro- ceedings of COLING/ACL 2006. pp. 849-856. Syd- ney, Australia.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "text": "Accuracy variation for IRM with different values of \u03b1 and \u03b2.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Accuracy variation in predicting correct versus incorrect inference rules for different values of \u03b1.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "Accuracy variation in predicting directionality of correct inference rules for different values of \u03b2.", "uris": null, "num": null }, "TABREF0": { "text": "Summary of results on the test set", "num": null, "html": null, "type_str": "table", "content": "
Model      | RSP classes | \u03b1    | \u03b2 | Accuracy (%)
B-random   | -           | -    | - | 25
B-frequent | -           | -    | - | 34
B-DIRT     | -           | -    | - | 25
JRM        | CBC         | 0.15 | 2 | 38
JRM        | WN          | 0.55 | 2 | 38
IRM        | CBC         | 0.15 | 3 | 48
IRM        | WN          | 0.45 | 2 | 43
" } } } }