{
"paper_id": "I17-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:37:29.782522Z"
},
"title": "Enabling Transitivity for Lexical Inference on Chinese Verbs Using Probabilistic Soft Logic",
"authors": [
{
"first": "Wei-Chung",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"addrLine": "128 Academia Road, Section2 Nankang",
"postCode": "11529",
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Lun-Wei",
"middle": [],
"last": "Ku",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"addrLine": "128 Academia Road, Section2 Nankang",
"postCode": "11529",
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "lwku@iis.sinica.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To learn more knowledge, enabling transitivity is a vital step for lexical inference. However, most of the lexical inference models with good performance are for nouns or noun phrases, which cannot be directly applied to the inference on events or states. In this paper, we construct the largest Chinese verb lexical inference dataset containing 18,029 verb pairs, where for each pair one of four inference relations are annotated. We further build a probabilistic soft logic (PSL) model to infer verb lexicons using the logic language. With PSL, we easily enable transitivity in two layers, the observed layer and the feature layer, which are included in the knowledge base. We further discuss the effect of transitives within and between these layers. Results show the performance of the proposed PSL model can be improved at least 3.5% (relative) when the transitivity is enabled. Furthermore, experiments show that enabling transitivity in the observed layer benefits the most. 'buy' entails the word 'have'. With the help of lexical inference system, we can know \"Mom has ap",
"pdf_parse": {
"paper_id": "I17-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "To learn more knowledge, enabling transitivity is a vital step for lexical inference. However, most of the lexical inference models with good performance are for nouns or noun phrases, which cannot be directly applied to the inference on events or states. In this paper, we construct the largest Chinese verb lexical inference dataset containing 18,029 verb pairs, where for each pair one of four inference relations are annotated. We further build a probabilistic soft logic (PSL) model to infer verb lexicons using the logic language. With PSL, we easily enable transitivity in two layers, the observed layer and the feature layer, which are included in the knowledge base. We further discuss the effect of transitives within and between these layers. Results show the performance of the proposed PSL model can be improved at least 3.5% (relative) when the transitivity is enabled. Furthermore, experiments show that enabling transitivity in the observed layer benefits the most. 'buy' entails the word 'have'. With the help of lexical inference system, we can know \"Mom has ap",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Lexical inference is an important component of natural language understanding for NLP tasks such as textual entailment (Garrette et al., 2011) , metaphor detection (Mohler et al., 2013) , and text generation (Biran and McKeown, 2013) to acquire implications not explicitly written in context. Given two words, the goal of lexical inferences is to detect whether there is an inference relation between the lexicon pair. For example, the word ples\" from the ground truth \"Mom buys apples\"to answer the question \"Who has apples?\" without explicitly mentioning it.",
"cite_spans": [
{
"start": 119,
"end": 142,
"text": "(Garrette et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 164,
"end": 185,
"text": "(Mohler et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 208,
"end": 233,
"text": "(Biran and McKeown, 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An intuitive solution to this problem is to first represent the sense of words in the lexicon to calculate the confidence of inferences from one sense to another, or to build a classifier to distinguish inference relations from other relations. Most related research is of one of these two types (Szpektor and Dagan, 2008a; Kiela et al., 2015) . However, for this problem it is difficult for these models to take into account transitivity. In the framework of a lexical inference system, transitivity can be included in three layers: the observed layer, the feature layer, and the prediction layer. Figure 1 illustrates these layers and the corresponding transitives. The observed layer includes inference relations we already know, e.g., true inferences from the gold labels or ontologies; the feature layer includes the observed features for all lexicon pairs to be predicted,i.e.,features for the testing data, and the predicted layer saves the predicted inference pairs, i.e., the relations of pairs in the testing data, predicted by the model. As inference usually involves available knowledge, the knowledge base (KB) is shown in Figure 1 as well. KB contains known information for the models. Therefore, in this system, it includes the observed layer and the feature layer which contain gold relations and the features for the testing data respectively.",
"cite_spans": [
{
"start": 296,
"end": 323,
"text": "(Szpektor and Dagan, 2008a;",
"ref_id": "BIBREF30"
},
{
"start": 324,
"end": 343,
"text": "Kiela et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 599,
"end": 607,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1136,
"end": 1144,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been several new rising research directions involving lexical inference. The most representative ones are the automatic problem solvers and the open-domain question answering systems, where inferring between events or states like Some animals grow thick fur effecting Some animals stay warm is critical (Clark et al., 2016) . However, many recent works of lexical inference are only designed for or being tested on nouns or noun phrases (Jiang and Conrath, 1997; Kiela et al., Figure 1: Three-layer lexical inference system. Points of the same shape in each layer are the same verbs; the solid arrow indicates the known inference relation; the dotted arrow indicates the hidden inference relation which can be inferred by the known inference relations.",
"cite_spans": [
{
"start": 313,
"end": 333,
"text": "(Clark et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 447,
"end": 472,
"text": "(Jiang and Conrath, 1997;",
"ref_id": "BIBREF16"
},
{
"start": 473,
"end": 486,
"text": "Kiela et al.,",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2015; Shwartz et al., 2016) , which makes them limited or not capable for these newly proposed research problems.",
"cite_spans": [
{
"start": 6,
"end": 27,
"text": "Shwartz et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we adopt the probabilistic soft logic (PSL) model to find lexical inference on Chinese verbs toward the math word problem solver. The contributions of this paper are listed as follows: (1) We build the largest Chinese verb lexical inference dataset with four types of inference relations as a potential testbed in the future. 2We show that in the proposed PSL model the transitivity is easy to enabled and can benefit the lexical inference on Chinese verbs. (3) We implement and discuss the transitivity inter-and intra-layers and conclude the transitivity within the observed layer brings the most performance gain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One mainstream lexical inference extracts either explicit or implicit features from the manually constructed lexical knowledge. Szpektor (2009) constructs a WordNet inference chain through substitution relations (synonyms and hypernyms) defined in WordNet. Aharon (2010) proposed a FrameNet Entailment-rule Derivation (FRED) algorithm to inference on the framework of FrameNet. FrameNet models the semantic argument structure of predicates in terms of prototypical situation, which is called frames. Predicates belong to the same frames are highly related to a specific situation defined for the frame. Therefore, it is intuitive to acquire lexical inference pairs from predicates in the same frame. However, no matter WordNet or FrameNet was used, the cov-erage problem was always an issue when leveraging handcraft resources. Moreover, the relations of verbs in WordNet are rather flat compared to nouns, which brings problems when directly adopting approaches utilizing WordNet to detect the inference between verbs.",
"cite_spans": [
{
"start": 128,
"end": 143,
"text": "Szpektor (2009)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An unsupervised concept, distributional similarity, for measuring relations between words was proposed to overcome the coverage problem. Distributional similarity related algorithms utilized a large, unstructured corpus to learn lexical entailment relations by assuming that semantically similar lexicons appear with similar context (Harris, 1954) . Various implementations were proposed to assess contextual similarity between two lexicons, including (Berant et al., 2010; Lin and Pantel, 2001; Weeds et al., 2004) . Lin Similarity, or known as DIRT, is one commonly adopted method to measure the lexical context similarity (Lin and Pantel, 2001) . Instead of applying the Distributional Hypothesis to verbs, Lin applied this hypothesis to the paths in dependency trees. They hypothesize that the meaning of two phrases is similar, if their paths tend to link the same sets of words in a dependency tree. Later, Weeds and Weir (2004) proposed a general framework for directional similarity measurement. The measurement examined the coverage of word w l 's features against those of w r 's, and more coverage indicated more similarity.",
"cite_spans": [
{
"start": 333,
"end": 347,
"text": "(Harris, 1954)",
"ref_id": null
},
{
"start": 452,
"end": 473,
"text": "(Berant et al., 2010;",
"ref_id": "BIBREF4"
},
{
"start": 474,
"end": 495,
"text": "Lin and Pantel, 2001;",
"ref_id": "BIBREF21"
},
{
"start": 496,
"end": 515,
"text": "Weeds et al., 2004)",
"ref_id": "BIBREF33"
},
{
"start": 625,
"end": 647,
"text": "(Lin and Pantel, 2001)",
"ref_id": "BIBREF21"
},
{
"start": 913,
"end": 934,
"text": "Weeds and Weir (2004)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Lin Similarity generates errors as its symmetric structure cannot tell the difference between w l \u2192 w r and w r \u2192 w l . That is, it makes errors on non-symmetric examples, like buy \u2192 take. Moreover, Weeds' method generates high score when an infrequent lexicon has features similar to those of another lexicon, which harms the performance as it happens a lot for non-entailed lexicons. Therefore, Szpektor and Dagan (2008a) proposed a hybrid method Balanced-Inclusion, BInc, and it was proved to outperform methods proposed prior to it. In this paper, we adopt BInc measurement and complement with lexical resource method to construct a hybrid model, which was proved to outperform both methods separately on our dataset.",
"cite_spans": [
{
"start": 397,
"end": 423,
"text": "Szpektor and Dagan (2008a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recent research is exploiting the effect of transitivity during model training. The intuition is that some implicit entailment relation is difficult to be identified when there is no direct features supporting it. Sometimes previous work could find the entailment pairs w 1 \u2192 w 2 and w 2 \u2192 w 3 , but failed to answer distant entailment relation like w 1 \u2192 w 3 . Skeptor and Dagan (2009) first applied transitive chaining in the knowledge provided by the lexical ontology Wordnet (Miller, 1995) in the feature layer. Berant et al. (2011) built a lexical entailment knowledge graph given the predicted results from the base classifier. They used integer linear programming (ILP) to find the latent entailment in the prediction cascade, which transits in the prediction layer. Kloetzer et al. (2015) , whose system outperformed Berant et al.'s on their own corpus, further use cascade entailment inference in the feature layer. They applied short transitivity optimization by a two-layered SVM classifier (Kloetzer et al., 2015) . A set of candidate transitivity paths were created by concatenating two identified inference pairs from the first SVM classifier, e.g., w 1 \u2192 w 2 and w 2 \u2192 w 3 result in a candidate path w 1 \u2192 w 2 \u2192 w 3 . Then the two-layered SVM classifier re-predicted whether there was an inference relation for the lexical pair w 1 \u2192 w 3 . However, none of these models takes into account transitivity in the observed layer or transitivity between two layers.",
"cite_spans": [
{
"start": 479,
"end": 493,
"text": "(Miller, 1995)",
"ref_id": "BIBREF24"
},
{
"start": 516,
"end": 536,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF5"
},
{
"start": 774,
"end": 796,
"text": "Kloetzer et al. (2015)",
"ref_id": "BIBREF20"
},
{
"start": 1002,
"end": 1025,
"text": "(Kloetzer et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We select probabilistic soft logic (PSL) to model the lexical inference problem. PSL is a recently proposed alternative framework for probabilistic logic (Bach et al., 2015). It was first applied to the category prediction and similarity propagation on Wikipedia documents to align ontologies on a standard corpus of bibliographic ontology (Brocheler et al., 2012) . It has been adopted in social network analysis, including social group modeling and social trust analysis (Huang et al., 2013) . For natural language processing, recently, Dhanya Sridhar (2014) applied the PSL model to stance classification of on-line debates. Islam Beltagy (2014) approached the textual problem by transforming sentences into their logic representations and applying a PSL model to analyze word-to-word semantic coverage between the hypothesis and the premise. All these show that PSL is good at capturing relations. However, PSL has not been utilized yet in the lexical inference problem, and its power to provide lexical transitivity has not been tested, either. Thus in this paper, we explore its ability on detecting verb lexical inference and on enabling the transitivity.",
"cite_spans": [
{
"start": 340,
"end": 364,
"text": "(Brocheler et al., 2012)",
"ref_id": "BIBREF8"
},
{
"start": 473,
"end": 493,
"text": "(Huang et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 634,
"end": 648,
"text": "Beltagy (2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We start from describing the features for each lexicon pair. To use PSL, we define atoms and design rules to enable the inter-and intra-layer transitives. Finally, PSL will automatically learn the rule weights by MLE to yield the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "3.1.1 Lexical ontology features E-HowNet is a large Chinese lexical resource extended from HowNet (Dong and Dong, 2006) . Manually constructed by several linguistic experts, it contains 93,953 Chinese words and 9,197 semantic types (concepts; some are sememes). It was designed as an ontology of semantic types, each is listed in both Chinese and in English. For example, one semantic type is (Give|\u7d66). Each semantic type has some instances which inherit the concept of it. Lexical relations are also defined. In addition to hypernym-hyponym pairs, E-Hownet contains conflation pairs, including preconditions like (Divorce|\u96e2\u5a5a) is to (GetMarried|\u7d50\u5a5a), consequences like (Labor|\u81e8 \u7522) is to (Pregnant|\u61f7 \u5b55), and same-events like (Sell|\u8ce3) is to (Buy|\u8cb7). The hypernym-hyponym relation and the conflation relation are two features that we use to represent a lexicon pair.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Dong and Dong, 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Pair Features",
"sec_num": "3.1"
},
{
"text": "Given two semantically related words, a key aspect of detecting lexical inference is the generality of the hypothesis compared to the premise. Though we have a lexical ontology to tell us explicitly the hypernym-hyponym relations, a score to estimate the degree of this compared generality is still necessary for model learning. Therefore, We define the cohesion score of a semantic type with E-Hownet to model the generality. For each semantic type s i \u2208 S which has a set of instantiate words V si , the cohesion score of s i is calculated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},
{
"text": "Coh (s i ) = 1 N v 1 =v 2 sim (v 1 , v 2 ) ; v 1 , v 2 \u2208 V s i (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},
{
"text": "sim(v 1 , v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},
{
"text": "2 ) is the word-embedding cosine similarity of words v 1 and v 2 . We construct a graph by considering hypernym, hyponym, and conflation relations in E-HowNet where nodes are semantic types and instantiate words, and where edges are relations. Given a word pair (v l , v r ), a set of paths P from v l to v r can be found by traversing this graph, each of which is denoted as p with edges in the edge set E. Each of these edges in E is represented by the triple e(n 1 , n 2 , type e ), where node n 2 is of type type e to node n 1 . Nodes here can be a word or a semantic type. The P athScore(p) is defined as: P athScore(p) = e\u2208Ep coh(s e ), type e = Hyponym 1, otherwise 2The idea of P athScore(p) is to calculate the generality lost, which is caused by hyponym relations, of each step of inference. The hypernym or conflation relation does not lose generality, so the P athScore(p) is always 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},
{
"text": "Empirically, those path p whose length exceed 10 are dropped as the inference chain is too long. Finally, the cohesion path score of word pair",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(v 1 , v 2 ) is defined as: CohP athScore(v 1 , v 2 ) = ln(max p\u2208P P athScore(p)) \u2212 ln(m) ln(M ) \u2212 ln(m)",
"eq_num": "(3)"
}
],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},
{
"text": "while M and m are the Maximum and Minimum PathScore respectively. The cohesion path score also serves as a feature to build the PSL model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},
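{
"text": "To make Eqs. 1-3 concrete, the following is a minimal Python sketch (not the released implementation) of the cohesion score, the path score, and the normalized cohesion path score; the toy embeddings, the edge triples of the E-HowNet-style graph, and the semantic-type name are illustrative assumptions only.

import numpy as np
from itertools import combinations

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cohesion(instance_words, embeddings):
    # Eq. 1: average pairwise word-embedding cosine similarity over the words
    # instantiating one semantic type (N = number of pairs).
    pairs = list(combinations(instance_words, 2))
    if not pairs:
        return 1.0
    return sum(cosine(embeddings[a], embeddings[b]) for a, b in pairs) / len(pairs)

def path_score(path_edges, coh_of_type):
    # Eq. 2: every Hyponym step multiplies in the cohesion of its semantic type;
    # hypernym and conflation steps contribute a factor of 1 (no generality lost).
    score = 1.0
    for (_n1, n2, edge_type) in path_edges:
        if edge_type == 'Hyponym':
            score *= coh_of_type[n2]
    return score

def coh_path_score(best_path_score, m, M):
    # Eq. 3: log-normalise the best path score using the minimum m and maximum M.
    return (np.log(best_path_score) - np.log(m)) / (np.log(M) - np.log(m))

# Toy usage with made-up embeddings for two instance words of one semantic type.
emb = {'buy': np.array([1.0, 0.2]), 'trade': np.array([0.9, 0.3])}
coh = {'Buy|buy': cohesion(['buy', 'trade'], emb)}
score = coh_path_score(path_score([('take', 'Buy|buy', 'Hyponym')], coh), m=0.01, M=1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cohesion path score",
"sec_num": "3.1.2"
},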
{
"text": "Distributional semantics has been used to exploit the semantic similarities of the linguistic items through large language data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "We applied the CKIP parser 1 , a well-known Chinese text parser, to raw sentences. Context of words are extracted as features f s of words, according to parsed sentence trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "Some pre-prosessing steps are performed. Words appearing only once in the corpus are dropped to reduce Chinese segmentation error. For each Word v, we retrieve all the words that share at least one feature with w and call them candidate words. Drop the candidate word if it shares less than 1 percent features, counted by frequency, with word w. We then calculate the distributional similarity score between w and its candidate words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
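{
"text": "A minimal sketch of one reading of the candidate pruning described above, where a candidate is kept only if the features it shares with the target word account for at least 1 percent of the target word's feature frequency mass; the count dictionaries and the exact interpretation of the threshold are assumptions.

def prune_candidates(word, word_feature_counts, min_share=0.01):
    # Candidate words share at least one context feature with `word`; a candidate
    # is dropped if the shared features cover less than 1% of `word`'s feature mass.
    target = word_feature_counts[word]
    total = sum(target.values())
    kept = []
    for cand, feats in word_feature_counts.items():
        if cand == word:
            continue
        shared = set(target) & set(feats)
        if shared and sum(target[f] for f in shared) / total >= min_share:
            kept.append(cand)
    return kept

# Toy usage with invented parse-derived context features.
counts = {'buy': {'obj:apple': 5, 'subj:mom': 3}, 'purchase': {'obj:apple': 2}, 'run': {'obj:race': 4}}
print(prune_candidates('buy', counts))  # -> ['purchase']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},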
{
"text": "Balanced-inclusion (BInc, (Szpektor and Dagan, 2008a) ) is a well-known scoring function for 1 CKIP parser : http://parser.iis.sinica.edu.tw/ determining lexical entailment. It contains two parts, one is semantic similarity measurement, and one is semantic coverage direction measurement. Given two words w l , w r and their feature sets F l , F r , the semantic similarity between w l and w r is calculated by Lin similarity (Lin and Pantel, 2001 ):",
"cite_spans": [
{
"start": 26,
"end": 53,
"text": "(Szpektor and Dagan, 2008a)",
"ref_id": "BIBREF30"
},
{
"start": 426,
"end": 447,
"text": "(Lin and Pantel, 2001",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Lin(v l , v r ) = f \u2208F l \u2229Fr [w vl (f ) + w vr (f )] f \u2208F l w vl (f ) + f \u2208Fr w vr (f )",
"eq_num": "(4)"
}
],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "The coverage direction measurement, which provides clues of direction of entailment relation, is calculated by Weed's (Weeds et al., 2004) coverage measurement:",
"cite_spans": [
{
"start": 118,
"end": 138,
"text": "(Weeds et al., 2004)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "weed(v l , v r ) = f \u2208F l \u2229Fr w vl (f ) f \u2208F l w vl (f )",
"eq_num": "(5)"
}
],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "The weight of each feature w(f ) is the Pointwise Mutual Information (PMI) between the word v and the feature f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w v (f ) = log[ pr(f |v) pr(f ) ]",
"eq_num": "(6)"
}
],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "where pr(f ) is probability of feature f . BInc is defined as geometric mean of the above two:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "BInc(v l , v r ) = Lin(v l , v r ) \u2022 W eed(v l , v r )",
"eq_num": "(7)"
}
],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
{
"text": "To compare BInc's performance to the proposed PSL model and utilize it as a feature, we implemented it on the Chinese experimental dataset to calculate the BInc score of each lexicon pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},
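{
"text": "A minimal sketch of the scores in Eqs. 4-7, computed from raw (word, feature) co-occurrence counts; the toy counts are invented, and keeping only positive PMI weights is an added assumption (not stated above) made so that the resulting scores stay in [0, 1].

import math

def pmi_weights(word, word_feature_counts, feature_totals, grand_total):
    # Eq. 6: w_v(f) = log[ pr(f|v) / pr(f) ]; only positive weights are kept (assumption).
    counts = word_feature_counts[word]
    total_v = sum(counts.values())
    weights = {}
    for f, c in counts.items():
        w = math.log((c / total_v) / (feature_totals[f] / grand_total))
        if w > 0:
            weights[f] = w
    return weights

def lin(wl, wr):
    # Eq. 4: symmetric Lin similarity over the shared features.
    shared = set(wl) & set(wr)
    den = sum(wl.values()) + sum(wr.values())
    return sum(wl[f] + wr[f] for f in shared) / den if den else 0.0

def weeds(wl, wr):
    # Eq. 5: how much of the left word's feature mass is covered by the right word.
    den = sum(wl.values())
    return sum(wl[f] for f in set(wl) & set(wr)) / den if den else 0.0

def binc(wl, wr):
    # Eq. 7: geometric mean of the symmetric and the directional score.
    return math.sqrt(lin(wl, wr) * weeds(wl, wr))

# Toy counts standing in for the parsed-corpus features of this subsection.
wfc = {'buy': {'obj:apple': 4, 'subj:mom': 2},
       'have': {'obj:apple': 4, 'obj:car': 2},
       'run': {'obj:race': 8}}
ft = {'obj:apple': 8, 'subj:mom': 2, 'obj:car': 2, 'obj:race': 8}
score = binc(pmi_weights('buy', wfc, ft, 20), pmi_weights('have', wfc, ft, 20))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional similarity",
"sec_num": "3.1.3"
},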
{
"text": "Previous work has shown that word embeddings work well on entailment relation recognition of noun-noun pairs and (adj+noun)-noun pairs (Baroni et al., 2012; Roller et al., 2014) . We choose glove (Pennington et al., 2014) to train embeddings of each word, and concatenate the embedding of two words to create the embedding for each word pair. This embedding then serves as the feature in the rbf-kernel SVM classifier to predict the entailment relation of the corresponding word pair.",
"cite_spans": [
{
"start": 135,
"end": 156,
"text": "(Baroni et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 157,
"end": 177,
"text": "Roller et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 196,
"end": 221,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": "3.1.4"
},
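{
"text": "A minimal sketch of this word-embedding baseline: the pre-trained vectors of the two verbs are concatenated and fed to an RBF-kernel SVM; the random toy vectors, the toy labels, and the scikit-learn calls are illustrative assumptions rather than the actual experimental pipeline.

import numpy as np
from sklearn.svm import SVC

def pair_feature(premise, hypothesis, embeddings):
    # Concatenate the two verb embeddings to represent the directed pair.
    return np.concatenate([embeddings[premise], embeddings[hypothesis]])

# Random toy vectors standing in for GloVe embeddings trained on the corpus.
emb = {v: np.random.rand(50) for v in ['buy', 'have', 'sell', 'pay']}

train_pairs = [('buy', 'have'), ('sell', 'buy'), ('pay', 'have'), ('have', 'pay')]
train_labels = [1, 1, 1, 0]  # toy labels: 1 = entails, 0 = does not entail

X = np.stack([pair_feature(p, h, emb) for p, h in train_pairs])
clf = SVC(kernel='rbf').fit(X, train_labels)

# The signed margin can be turned into a soft value for the Svm atom (an assumption).
margin = clf.decision_function(pair_feature('sell', 'pay', emb).reshape(1, -1))[0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": "3.1.4"
},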
{
"text": "We use the PSL model to find the latent inference relations by enabling the transitivity of lex-ical relations. The lexical relations include features described in Section 3.1, and the known inference relations in the observed layer. In PSL, each relation of the lexicon pair v l , v r is written as a (ground) atom a(v l , v r ) in the logic language. The description of the transitivity of atoms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "a i (v 1 , v 2 ), a j (v 2 , v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "3 ) and its latent inference relation, Etl(v 1 , v 3 ) is written as a rule in the logic language:",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 54,
"text": "Etl(v 1 , v 3 )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i (v 1 , v 2 ) \u2227 a j (v 2 , v 3 ) \u2192 Etl(v 1 , v 3 )",
"eq_num": "(8)"
}
],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "Each rule is assigned a weight to denote the reliability of the hypothesis that given",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "a i (v 1 , v 2 ), a j (v 2 , v 3 ) are true, Etl(v 1 , v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "3 ) is also true. The PSL model learns the rule weights by the training set. We encode the transitivity inter-(i = j) and intra-(i = j) different types of relations and their resulting latent inference relation to construct the experimental rule set. Given a set of (ground) atoms a = {a 1 , ..., a n }, we denote an interpretation the mapping I : a \u2192 [0, 1] n from ground atoms to soft truth value. The distance to satisfaction of each ground rule is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d(r, I) = max{0, I(r antecedent ) \u2212 I(r consequent )}",
"eq_num": "(9)"
}
],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "The PSL model learns the weights \u03bb r of these rules and optimizes the most probable interpretation of entailment relations, through the probability density function f over I:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "f (I) = 1 Z exp[\u2212 r\u2208R \u03bb r (d(r, I)) p ]; (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "where Z is the normalization term, \u03bb r is the weight of rule r, R is the set of all ground rules, and p \u2208 {1, 2}. In this paper, we set p to 2, indicating a squared function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
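{
"text": "A minimal sketch of Eqs. 9-10 for ground rules of the form body \u2192 head under soft truth values; the Lukasiewicz-style conjunction for the rule body and the toy grounding are assumptions consistent with hinge-loss MRFs, not an excerpt from the PSL implementation used here.

import math

def lukasiewicz_and(*truth_values):
    # Soft conjunction of the rule body (an assumed t-norm, as in hinge-loss MRFs).
    return max(0.0, sum(truth_values) - (len(truth_values) - 1))

def distance_to_satisfaction(body_values, head_value):
    # Eq. 9: how far the ground rule body -> head is from being satisfied.
    return max(0.0, lukasiewicz_and(*body_values) - head_value)

def unnormalised_density(ground_rules, p=2):
    # Eq. 10 without the normalisation term Z; each rule is (weight, body_values, head_value).
    total = sum(w * distance_to_satisfaction(body, head) ** p
                for w, body, head in ground_rules)
    return math.exp(-total)

# Toy grounding of rule (13): Obv(v1, v2) AND Obv(v2, v3) -> Etl(v1, v3).
rules = [(1.5, [1.0, 0.8], 0.7)]
print(unnormalised_density(rules))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},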
{
"text": "In the following section, we are going to describe the atoms defined in our lexical inference model in Section 3.2.1. Then rules are defined in Section 3.2.2. Last, weight learning is described in Section 3.2.3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Soft Logic (PSL)",
"sec_num": "3.2"
},
{
"text": "Atoms are types of information provided in Knowledge base in PSL model, Table 1 lists all atoms defined in our lexical inference model. Etl denotes the entailment relation serving as the prediction target. It is the only unknown atom. In PSL model the number of prediction target grows quadratically with the number of the entities (verbs), if no limitation is provided, which is not desired and is time consuming. Thus Cdd indicates canopies (McCallum et al., 2000) over the prediction target. Hypr, Con, Coh, and BInc are the hypernym, conflation, cohesion path score, and distributional similarity score BInc features described in Section 3.1. Svm is the prediction of SVM classifier which takes concatenation of word embeddings as feature. Obv represents the knowledge of observed entailment lexical pairs for the training phase. Note that the set of pairs with Obv = true must not overlap with the testing set.",
"cite_spans": [
{
"start": 443,
"end": 466,
"text": "(McCallum et al., 2000)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Atoms for PSL",
"sec_num": "3.2.1"
},
{
"text": "Having defined the atoms, the five features Hypr, Con, BInc, Coh, and Svm are used in the design of five basic rules in Eq. 11. We further apply the inference chain by concatenating two atoms to create 25 rules shown as Eq.12 for feature-layer transitivity. For transitivity in the observed layer, we concatenate Obv atoms as shown in Eq.13. Then we concatenate Obv with other features and vice versa to add 10 additional rules shown as in Eq.14,15 for bidirectional transitives between the feature and the observed layers. Finally, the rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference rules for PSL",
"sec_num": "3.2.2"
},
{
"text": "\u00acEtl(v 1 , v 2 ) states that v 1 does not entail v 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference rules for PSL",
"sec_num": "3.2.2"
},
{
"text": "if the previous rules are not applicable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference rules for PSL",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Rel(v 1 , v 2 ) \u2192 Etl(v 1 , v 2 ); Rel \u2208 {Hypr, Con, BInc, Coh, Svm} (11) Rel(v 1 , v 2 ) \u2227 Rel(v 2 , v 3 ) \u2192 Etl(v 1 , v 3 ) (12) Obv(v 1 , v 2 ) \u2227 Obv(v 2 , v 3 ) \u2192 Etl(v 1 , v 3 ) (13) Obv(v 1 , v 2 ) \u2227 Rel(v 2 , v 3 ) \u2192 Etl(v 1 , v 3 )",
"eq_num": "(14)"
}
],
"section": "Inference rules for PSL",
"sec_num": "3.2.2"
},
{
"text": "Rel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference rules for PSL",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(v 1 , v 2 ) \u2227 Obv(v 2 , v 3 ) \u2192 Etl(v 1 , v 3 )",
"eq_num": "(15)"
}
],
"section": "Inference rules for PSL",
"sec_num": "3.2.2"
},
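{
"text": "To make the rule counts above concrete (5 + 25 + 1 + 5 + 5 rules plus the negative prior), a minimal sketch that enumerates the rule templates of Eqs. 11-15 as strings; the string syntax is illustrative and is not the exact PSL rule syntax.

FEATURES = ['Hypr', 'Con', 'BInc', 'Coh', 'Svm']

def base_rules():
    # Eq. 11: each feature relation alone implies entailment (5 rules).
    return [f'{r}(V1, V2) -> Etl(V1, V2)' for r in FEATURES]

def feature_transitivity_rules():
    # Eq. 12: chains of two feature relations (5 x 5 = 25 rules).
    return [f'{a}(V1, V2) & {b}(V2, V3) -> Etl(V1, V3)'
            for a in FEATURES for b in FEATURES]

def observed_transitivity_rules():
    # Eqs. 13-15: transitivity within the observed layer and between layers (1 + 5 + 5 rules).
    rules = ['Obv(V1, V2) & Obv(V2, V3) -> Etl(V1, V3)']
    rules += [f'Obv(V1, V2) & {r}(V2, V3) -> Etl(V1, V3)' for r in FEATURES]
    rules += [f'{r}(V1, V2) & Obv(V2, V3) -> Etl(V1, V3)' for r in FEATURES]
    return rules

all_rules = base_rules() + feature_transitivity_rules() + observed_transitivity_rules()
all_rules.append('!Etl(V1, V2)')  # negative prior when no other rule fires
print(len(all_rules))  # 5 + 25 + 1 + 5 + 5 + 1 = 42",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference rules for PSL",
"sec_num": "3.2.2"
},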
{
"text": "The rule weights(\u03bb r ) are determined using maximum-likelihood estimation. \u2202 \u2202\u03bb r log p(I) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning inference rule weights",
"sec_num": "3.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 r\u2208R i (d(r, I)) + E r\u2208R i (d(r, I))",
"eq_num": "(16)"
}
],
"section": "Learning inference rule weights",
"sec_num": "3.2.3"
},
{
"text": "Table 1. Atoms defined in our lexical inference model.
Atom Name | Description
Cdd(v_1, v_2) | Canopies over the prediction targets. Returns 1 if (v_1, v_2) is a prediction target in the task.
Etl(v_1, v_2) | Entailment statement, which is the prediction target.
Hypr(s_1, s_2) | Hypernym relation between two semantic concepts: s_1 is a hypernym of s_2.
Con(s_1, s_2) | Conflation relation between two semantic types.
Ehow(v_1, v_2) | E-HowNet algorithm.
Dis(v_1, v_2) | BInc between v_1 and v_2.
Svm(v_1, v_2) | SVM prediction featured by word embeddings.
Obv(v_1, v_2) | Observed entailment statement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning inference rule weights",
"sec_num": "3.2.3"
},
{
"text": "Thus it is approximated via \\sum_{r \\in R_i} d_r(I^*)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning inference rule weights",
"sec_num": "3.2.3"
},
{
"text": ", where I * is the most probable interpretation given the current weight (Kimmig et al., 2012) .",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Kimmig et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning inference rule weights",
"sec_num": "3.2.3"
},
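{
"text": "A minimal sketch of a gradient-ascent weight update built on Eq. 16 and its MPE approximation; the learning rate, the non-negativity clipping, and the toy distances are assumptions for illustration, not the optimizer actually used.

def approx_gradient(rule_distances_observed, rule_distances_mpe):
    # Eq. 16 with the expectation replaced by distances under the MPE state I*:
    # d log p / d lambda_r  ~=  -sum_r d(r, I_observed) + sum_r d(r, I*).
    return -sum(rule_distances_observed) + sum(rule_distances_mpe)

def update_weight(weight, grad, lr=0.1, min_weight=0.0):
    # Gradient-ascent step on the log-likelihood, keeping rule weights non-negative.
    return max(min_weight, weight + lr * grad)

# Toy numbers: distances of one rule's groundings under the training labels vs. under I*.
lam = update_weight(1.0, approx_gradient([0.1, 0.0, 0.2], [0.4, 0.3, 0.2]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning inference rule weights",
"sec_num": "3.2.3"
},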
{
"text": "There are some of entailment dataset open to research utility, but the Chinese Verb entailment dataset (CVED) is special in some way. First, most of the open entailment dataset include the entailment between noun-noun pairs, adjective noun-noun pairs, and quantity nounquantity noun pairs, but none of them consider the entailment between verb-verb pairs like CVED. Second, in my knowledge, our CVED is the largest Chinese entailment dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Dataset",
"sec_num": "4.1"
},
{
"text": "To get more verb lexical inference pairs for our experiments, we collected verb pairs from math application problems, which usually contain logical relations in the descriptions for each problem. A total of 995 verbs and 18,029 verb pairs were extracted from 20,000 Chinese elementary math problems, where the verbs in each pair are from the same problem. Few types of verb are discarded, including V 1, V 2, VH, VI, VJ, VK and VL ,which are adjective 2 and statement associated verbs defined in CKIP 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Dataset",
"sec_num": "4.1"
},
{
"text": "Given a set of verbs extracted from a math problem, every possible directed verb pair was labeled. If there were n verbs, n \u00d7 (n \u2212 1) directed verb pairs (v i \u2192 v j ) were collected, where v i is the premise and v j is the hypothesis. For example, if we extracted \"sell\", \"buy\", and 2 Adjective words are seen as kind of verbs in CKIP 3 http://rocling.iis.sinica.edu.tw/CKIP/tr/ 9305 2013%20revision.pdf \"pay\" from the descriptions of the problem, we added six directed verb pairs to the annotation set: {(sell, buy), (sell, pay), (buy, pay), (buy, sell), (pay, sell), (pay, buy)} We provide four types of entailment label in CVED. One is commonly seen hypernym relation. The same-event relations are verb pairs related to same thing but in different point of view Some examples are (sell, buy) and (give, got). These are used by most earlier research or in small-scale experiments (Szpektor and Dagan, 2008b; Kiela et al., 2015) . Another two are casual relations, as premises in the precondition and consequence relations are likely to be true given their hypothesis in our daily life, and because these relations are more useful in real applications, we further consider these relations as entailment relations. These relations are usually selected for web-scale experiments (Aharon et al., 2010; Berant et al., 2011; Kloetzer et al., 2015) . Among all experimental verb pairs, 10% were used for testing, 10% were used for developing and the remaining dataset was for training. A five-fold training process was performed to learn the best parameters for the testing model.",
"cite_spans": [
{
"start": 882,
"end": 909,
"text": "(Szpektor and Dagan, 2008b;",
"ref_id": "BIBREF31"
},
{
"start": 910,
"end": 929,
"text": "Kiela et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 1278,
"end": 1299,
"text": "(Aharon et al., 2010;",
"ref_id": "BIBREF0"
},
{
"start": 1300,
"end": 1320,
"text": "Berant et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 1321,
"end": 1343,
"text": "Kloetzer et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Dataset",
"sec_num": "4.1"
},
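{
"text": "A minimal sketch of enumerating the n \u00d7 (n \u2212 1) directed candidate pairs per problem, following the (sell, buy, pay) example above; the CKIP tag 'VC' and the placeholder set of discarded classes are assumptions for illustration.

from itertools import permutations

DISCARDED_TYPES = {'V1', 'V2', 'VH', 'VI', 'VJ', 'VK', 'VL'}  # adjective/statement verb classes

def candidate_pairs(verbs, pos_tags):
    # Keep verbs whose CKIP class is not discarded, then emit every ordered pair
    # (premise -> hypothesis) of verbs extracted from the same math problem.
    kept = [v for v in verbs if pos_tags.get(v) not in DISCARDED_TYPES]
    return list(permutations(kept, 2))

pairs = candidate_pairs(['sell', 'buy', 'pay'], {'sell': 'VC', 'buy': 'VC', 'pay': 'VC'})
# -> six directed pairs: (sell, buy), (sell, pay), (buy, sell), (buy, pay), (pay, sell), (pay, buy)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Dataset",
"sec_num": "4.1"
},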
{
"text": "To achieve better performance, weights are randomly initialized and retrained 10 times for each fold. The best combination is derived by averaging the five best weight sets obtained in the five-fold cross-validation process. Two baselines are provided for the evaluation of the models with transitivity disabled. Hyper+Conf is the ontology-based baseline. In this setting, verb pairs with hypernym and conflation relations found in E-Hownet are reported as entailment pairs. BInc is the distributional similarity baseline, where we set a best threshold for the development set and apply it To discuss the effect of transitivity within (intra-) and between (inter-) different layers, we build three additional models for PSL. PSL TrFeat allows transitivity within the feature layer, PSL TrObv allows transitivity within the observed layer on top of PSL TrFeat, and PSL TrFeatObv allows transitivity betwen the observed layer and the feature layer on top of PSL TrObv. Here we set the degree of transitivity to 2, and leave the determination of the best transitivity degree as future work. For comparison, we implement a SVM baseline ,the state-of-the-art entailment classifier (Kloetzer(base)), and its transitivity framework (Kloetzer(TrFeatPred)) (Kloetzer et al., 2015) . We use rbf-kernel SVM and the other hyper-parameters are selected from the 5fold training. Table 2 shows the performance of the proposed PSL model when transitivity is disabled (PSL). Unsurprisingly, Hyper+Conf achieves the highest precision as the relations found in E-Hownet are built manually. False alarms come from pairs that contain various unknown Chinese compound words that E-Hownet does not include, e.g., \u5206 \u7d66(distribute to) is composed of \u5206(issue) and \u7d66(give). We attempt to find its head to determine its sense, which sometimes causes errors. Compared to BInc, though in general distributional approaches may outperform ontology-based approaches at least in recall, Hyper+Conf still performs much better. We think the reason is that E-Hownet already contains a large number of words and adopting the heuristic of finding the head for compound words which could mitigates the coverage problem. Table 3 shows the performance of various PSL models when transitivity is enabled. We conduct a SVM baseline, SVM(w2v), by concatenating the word embeddings of two verbs as the features of the verb pair and it performs comparably well, indicating word embeddings are strong features. Therefore, we discuss the effect of the strong and the weak base settings here. The strong base setting involves the prediction of SVM by word embeddings (relation SVM), while the weak base setting involves the rest relations Hypr, Con, BInc and Coh. The SVM model from Kloetzer serves as the second baseline. It involves more than 100 features but does not include word embeddings, and hence we compare it with the PSL models of the weak base setting. For the weak base setting, the performance of PSL cannot beat that of Kloetzer's SVM in the very beginning, as SVM is generally considered a more powerful classifier and the Kloetzer's SVM model involves comparably more features. Surprisingly, this state-ofthe-art model from Kloetzer does not improve its F1 score after enabling the transitivity in the feature layer by their transitivity framework. (Kloetzer(TrFeatPred) vs. Kloetzer(base): they report a 2% improvement in average precision in their paper.) For the proposed PSL models, enabling transitivity in the feature layer (PSL(TrFeat) vs. 
PSL(base)) does improve the F1 score from the gain of recall. The reason for this could be that the transitivities of Kloetzer's features depend on the transitivities of the prediction results. If the predictions don't indicate a path to transit, their features will not be combined together for the next prediction. Therefore, their transitivity framework may involve the noise from the first prediction. On the contrary, in our PSL models, all possible feature-layered transitivities between pairs are explored. Hence, our feature-layered transitivity models have the capabilities to improve the recall.",
"cite_spans": [
{
"start": 1248,
"end": 1271,
"text": "(Kloetzer et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 1365,
"end": 1372,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 2179,
"end": 2186,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.2"
},
{
"text": "A significant improvement comes from enabling transitivity in the observed layer, that is, if we know w 1 \u2192 w 2 and w 2 \u2192 w 3 , we add w 1 \u2192 w 3 to the gold labels. As the relations in the observed layer constitute prior knowledge (known from the training data and saved in the PSL knowledge base), inferring from one relation to the other involves less uncertainty. Therefore, compared to ) shows a great improvement in both precision and F1. For recall, the feature-layer transitivity (PSL(WeakBase TrFeat)) enables the model to reach more words for a better recall, while the enrichment of the prior knowledge in PSL(WeakBase TrObv) helps to eliminate uncertainty but decreases recall. If we go further to enable transitivity between the observed layer and the feature layer using model PSL(WeakBase TrFeatObV), it begins to suffer from the lower precision caused by longer transitivity. Overall, PSL(WeakBase TrObV) achieves best among all PSL(WeakBase) models, with improvements of 21.7% over the transitivity-disabled PSL model. Compared to the models of the weak base setting, the PSL model of the strong base setting without transitivity enabled has achieved good performance in the very beginning (F1=0.66).",
"cite_spans": [],
"ref_spans": [
{
"start": 390,
"end": 391,
"text": ")",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.3"
},
{
"text": "Its performance is better than 3 baselines, SVM(w2v), Kloetzer(base) and Kloetzer(TrFeatPred) . It also performs better than the best PSL model of the weak base setting, PSL(WeakBase TrObv). The great thing is, enabling transitivity achieves even better performance in PSL(StrongBase TrObv) and PSL(StrongBase TrFeatObv).",
"cite_spans": [
{
"start": 73,
"end": 93,
"text": "Kloetzer(TrFeatPred)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.3"
},
{
"text": "For all models of the strong base settings, only enabling the transitivity in the feature layer does not benefit the performance as this decreases the precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.3"
},
{
"text": "From all the experiment results, we can conclude the followings. First, enabling transitivities help to find more inference pairs no matter the initial model is strong or weak. Second, for a general model, transitivities inter-or intra-layers both help it become stronger; however, for a strong model, only the transitivities intra-or inter the observed layer, i.e., involving the gold labels, contribute to the performance gain. In other words, only solid knowledge can make a strong model even stronger through transitivities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.3"
},
{
"text": "We have proposed a PSL model to explore the power of transitivity. In this process, the easy and straightforward nature of PSL in considering transitives for lexical inference is demonstrated. Results show that the best PSL model achieves the F1 score 0.684. Moreover, the proposed base PSL model has already achieved well and models with transitivity enabled achieve even better, which confirms the power of transitivity for solving the lexical inference problem on verbs. We will release the current experimental dataset. Future goals include enlarging our dataset by including web word pairs and applied the predicted results in textual entailment tasks. The constructed CVED dataset can be found in the NLPSA lab webpage 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "sinica treeback: http://rocling.iis.sinica.edu.tw/CKIP/ engversion/treebank.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://academiasinicanlplab.github.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Research of this paper was partially supported by Ministry of Science and Technology, Taiwan, under the contract 105-2221-E-001-007-MY3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generating entailment rules from framenet",
"authors": [
{
"first": "Roni",
"middle": [],
"last": "Ben Aharon",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "241--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roni Ben Aharon, Idan Szpektor, and Ido Dagan. 2010. Generating entailment rules from framenet. In Pro- ceedings of the ACL 2010 Conference Short Papers. Association for Computational Linguistics, pages 241-246.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hinge-loss markov random fields and probabilistic soft logic",
"authors": [
{
"first": "H",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Bert",
"middle": [],
"last": "Broecheler",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.04406"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen H Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2015. Hinge-loss markov random fields and probabilistic soft logic. arXiv preprint arXiv:1505.04406 .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Entailment above the word level in distributional semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Ngoc-Quynh",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Chung-Chieh",
"middle": [],
"last": "Shan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "23--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Pro- ceedings of the 13th Conference of the European Chapter of the Association for Computational Lin- guistics. Association for Computational Linguistics, pages 23-32.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Probabilistic soft logic for semantic textual similarity",
"authors": [
{
"first": "Islam",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Raymond J",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "1210--1219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Islam Beltagy, Katrin Erk, and Raymond J Mooney. 2014. Probabilistic soft logic for semantic textual similarity. In ACL (1). pages 1210-1219.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Global learning of focused entailment graphs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1220--1229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2010. Global learning of focused entailment graphs. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Associ- ation for Computational Linguistics, pages 1220- 1229.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Global learning of typed entailment rules",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "610--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies-Volume 1. Association for Com- putational Linguistics, pages 610-619.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ledir: An unsupervised algorithm for learning directionality of inference rules",
"authors": [
{
"first": "Rahul",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Eduard",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rey",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "161--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahul Bhagat, Patrick Pantel, Eduard H Hovy, and Marina Rey. 2007. Ledir: An unsupervised algo- rithm for learning directionality of inference rules. In EMNLP-CoNLL. pages 161-170.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Classifying taxonomic relations between pairs of wikipedia articles",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2013,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "788--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Biran and Kathleen McKeown. 2013. Classifying taxonomic relations between pairs of wikipedia arti- cles. In IJCNLP. pages 788-794.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Probabilistic similarity logic",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Brocheler",
"suffix": ""
},
{
"first": "Lilyana",
"middle": [],
"last": "Mihalkova",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1203.3469"
]
},
"num": null,
"urls": [],
"raw_text": "Matthias Brocheler, Lilyana Mihalkova, and Lise Getoor. 2012. Probabilistic similarity logic. arXiv preprint arXiv:1203.3469 .",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Combining retrieval, statistics, and inference to answer elementary science questions",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sab- harwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions .",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "HowNet and the Computation of Meaning",
"authors": [
{
"first": "Zhendong",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhendong Dong and Qiang Dong. 2006. HowNet and the Computation of Meaning. World Scientific.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Integrating logical representations with probabilistic information using markov logic",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Ninth International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "105--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Garrette, Katrin Erk, and Raymond Mooney. 2011. Integrating logical representations with probabilis- tic information using markov logic. In Proceedings of the Ninth International Conference on Compu- tational Semantics. Association for Computational Linguistics, pages 105-114.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Fuzzy sets and fuzzy logic, theory and applications",
"authors": [
{
"first": "Klir",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Bo",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klir George J and Yuan Bo. 2008. Fuzzy sets and fuzzy logic, theory and applications. -.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Social group modeling with probabilistic soft logic",
"authors": [
{
"first": "Bert",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"H"
],
"last": "Bach",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Norris",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Pujara",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2012,
"venue": "NIPS Workshop on Social Network and Social Media Analysis: Methods, Models, and Applications",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bert Huang, Stephen H Bach, Eric Norris, Jay Pujara, and Lise Getoor. 2012. Social group modeling with probabilistic soft logic. In NIPS Workshop on So- cial Network and Social Media Analysis: Methods, Models, and Applications. volume 7.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A flexible framework for probabilistic models of social trust",
"authors": [
{
"first": "Bert",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Angelika",
"middle": [],
"last": "Kimmig",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Golbeck",
"suffix": ""
}
],
"year": 2013,
"venue": "Social Computing, Behavioral-Cultural Modeling and Prediction",
"volume": "",
"issue": "",
"pages": "265--273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bert Huang, Angelika Kimmig, Lise Getoor, and Jen- nifer Golbeck. 2013. A flexible framework for prob- abilistic models of social trust. In Social Comput- ing, Behavioral-Cultural Modeling and Prediction, Springer, pages 265-273.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semantic similarity based on corpus statistics and lexical taxonomy",
"authors": [
{
"first": "Jay",
"middle": [
"J"
],
"last": "Jiang",
"suffix": ""
},
{
"first": "David",
"middle": [
"W"
],
"last": "Conrath",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay J Jiang and David W Conrath. 1997. Semantic sim- ilarity based on corpus statistics and lexical taxon- omy. arXiv preprint cmp-lg/9709008 .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploiting image generality for lexical entailment detection",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vulic",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015). ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Laura Rimell, Ivan Vulic, and Stephen Clark. 2015. Exploiting image generality for lexical entailment detection. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics (ACL 2015). ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A short introduction to probabilistic soft logic",
"authors": [
{
"first": "Angelika",
"middle": [],
"last": "Kimmig",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Broecheler",
"suffix": ""
},
{
"first": "Bert",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angelika Kimmig, Stephen Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short intro- duction to probabilistic soft logic. In Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications. pages 1-4.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Large-scale acquisition of entailment pattern pairs",
"authors": [
{
"first": "Julien",
"middle": [],
"last": "Kloetzer",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Chikara",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
}
],
"year": 2013,
"venue": "Information Processing Society of Japan (IPSJ) Kansai-Branch Convention",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julien Kloetzer, Kentaro Torisawa, Chikara Hashimoto, and Jong-hoon Oh. 2013. Large-scale acquisition of entailment pattern pairs. In In Information Process- ing Society of Japan (IPSJ) Kansai-Branch Conven- tion. Citeseer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Large-scale acquisition of entailment pattern pairs by exploiting transitivity",
"authors": [
{
"first": "Julien",
"middle": [],
"last": "Kloetzer",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Chikara",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julien Kloetzer, Kentaro Torisawa, Chikara Hashimoto, and Jong-Hoon Oh. 2015. Large-scale acquisition of entailment pattern pairs by exploiting transitivity .",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Discovery of inference rules for question-answering",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Natural Language Engineering",
"volume": "7",
"issue": "04",
"pages": "343--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. Discovery of in- ference rules for question-answering. Natural Lan- guage Engineering 7(04):343-360.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Efficient clustering of high-dimensional data sets with application to reference matching",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "Lyle",
"middle": [
"H"
],
"last": "Ungar",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "169--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum, Kamal Nigam, and Lyle H Ungar. 2000. Efficient clustering of high-dimensional data sets with application to reference matching. In Pro- ceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data min- ing. ACM, pages 169-178.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 .",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39- 41.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semantic signatures for example-based linguistic metaphor detection",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Mohler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bracewell",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hinote",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Tomlinson",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the First Workshop on Metaphor in NLP",
"volume": "",
"issue": "",
"pages": "27--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Mohler, David Bracewell, David Hinote, and Marc Tomlinson. 2013. Semantic signatures for example-based linguistic metaphor detection. In Proceedings of the First Workshop on Metaphor in NLP. pages 27-35.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532- 43.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Inclusive yet selective: Supervised distributional hypernymy detection",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "1025--1036",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hy- pernymy detection. In COLING. pages 1025-1036.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Improving hypernymy detection with an integrated path-based and distributional method",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.06076"
]
},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an inte- grated path-based and distributional method. arXiv preprint arXiv:1603.06076 .",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Collective stance classification of posts in online debate forums",
"authors": [
{
"first": "Dhanya",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhanya Sridhar, Lise Getoor, and Marilyn Walker. 2014. Collective stance classification of posts in on- line debate forums. ACL 2014 page 109.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning entailment rules for unary templates",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "849--856",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor and Ido Dagan. 2008a. Learning en- tailment rules for unary templates. In Proceedings of the 22nd International Conference on Computa- tional Linguistics-Volume 1. Association for Com- putational Linguistics, pages 849-856.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning entailment rules for unary templates",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "849--856",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor and Ido Dagan. 2008b. Learning en- tailment rules for unary templates. In Proceedings of the 22nd International Conference on Computa- tional Linguistics-Volume 1. Association for Com- putational Linguistics, pages 849-856.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Augmenting wordnet-based inference with argument mapping",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on Applied Textual Inference. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "27--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor and Ido Dagan. 2009. Augmenting wordnet-based inference with argument mapping. In Proceedings of the 2009 Workshop on Applied Textual Inference. Association for Computational Linguistics, pages 27-35.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Characterising measures of lexical distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of the 20th international conference on Computational Linguistics. Associa- tion for Computational Linguistics, page 1015.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "
The expected value E intractable. | r\u2208R i (d(r, I)) is |
",
"html": null,
"num": null,
"type_str": "table",
"text": "List of atoms in lexical inference model"
},
"TABREF2": {
"content": "to the testing set to identify the entailment rela- |
tion. The 20,000 elementary math problems to- |
gether with 61,777 sentences from Sinica Tree- |
bank 4 are utilized to calculate the BInc score of |
each verb pair. A set of 300 dimensional word |
embedding representation is trained by a hybrid of |
Sinica Treebank, elementary math problems and |
Chinese Wikipedia. |
",
"html": null,
"num": null,
"type_str": "table",
"text": "Model performance: transitivity disabled."
},
"TABREF4": {
"content": "",
"html": null,
"num": null,
"type_str": "table",
"text": "Model performance: transitivity enabled. PSL(StrongBase TrObv) is significantly better than all the other models with p-value < 0.001."
}
}
}
}