{
"paper_id": "D19-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:07:28.501320Z"
},
"title": "CaRe: Open Knowledge Graph Embeddings",
"authors": [
{
"first": "Swapnil",
"middle": [],
"last": "Gupta",
"suffix": "",
"affiliation": {},
"email": "swapnilgupta.229@gmail.com"
},
{
"first": "Sreyash",
"middle": [],
"last": "Kenkre",
"suffix": "",
"affiliation": {},
"email": "sreyash@gmail.com"
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Open Information Extraction (OpenIE) methods are effective at extracting (noun phrase, relation phrase, noun phrase) triples from text, e.g., (Barack Obama, took birth in, Honolulu). Organization of such triples in the form of a graph with noun phrases (NPs) as nodes and relation phrases (RPs) as edges results in the construction of Open Knowledge Graphs (OpenKGs). In order to use such OpenKGs in downstream tasks, it is often desirable to learn embeddings of the NPs and RPs present in the graph. Even though several Knowledge Graph (KG) embedding methods have been recently proposed, all of those methods have targeted Ontological KGs, as opposed to OpenKGs. Straightforward application of existing Ontological KG embedding methods to OpenKGs is challenging, as unlike Ontological KGs, OpenKGs are not canonicalized, i.e., a real-world entity may be represented using multiple nodes in the OpenKG, with each node corresponding to a different NP referring to the entity. For example, nodes with labels Barack Obama, Obama, and President Obama may refer to the same real-world entity Barack Obama. Even though canonicalization of OpenKGs has received some attention lately, output of such methods has not been used to improve OpenKG embeddings. We fill this gap in the paper and propose Canonicalization-infused Representations (CaRe) for OpenKGs. Through extensive experiments, we observe that CaRe enables existing models to adapt to the challenges in OpenKGs and achieve substantial improvements for the link prediction task.",
"pdf_parse": {
"paper_id": "D19-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "Open Information Extraction (OpenIE) methods are effective at extracting (noun phrase, relation phrase, noun phrase) triples from text, e.g., (Barack Obama, took birth in, Honolulu). Organization of such triples in the form of a graph with noun phrases (NPs) as nodes and relation phrases (RPs) as edges results in the construction of Open Knowledge Graphs (OpenKGs). In order to use such OpenKGs in downstream tasks, it is often desirable to learn embeddings of the NPs and RPs present in the graph. Even though several Knowledge Graph (KG) embedding methods have been recently proposed, all of those methods have targeted Ontological KGs, as opposed to OpenKGs. Straightforward application of existing Ontological KG embedding methods to OpenKGs is challenging, as unlike Ontological KGs, OpenKGs are not canonicalized, i.e., a real-world entity may be represented using multiple nodes in the OpenKG, with each node corresponding to a different NP referring to the entity. For example, nodes with labels Barack Obama, Obama, and President Obama may refer to the same real-world entity Barack Obama. Even though canonicalization of OpenKGs has received some attention lately, output of such methods has not been used to improve OpenKG embeddings. We fill this gap in the paper and propose Canonicalization-infused Representations (CaRe) for OpenKGs. Through extensive experiments, we observe that CaRe enables existing models to adapt to the challenges in OpenKGs and achieve substantial improvements for the link prediction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Open Information Extraction (OpenIE) methods such as ReVerb (Fader et al., 2011) , OLLIE (Mausam et al., 2012) , BONIE (Saha et al., 2017) and CALMIE (Saha and Mausam, 2018) can automatically extract (noun phrase, relation phrase, Sentences in Text Corpus:",
"cite_spans": [
{
"start": 60,
"end": 80,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 89,
"end": 110,
"text": "(Mausam et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 119,
"end": 138,
"text": "(Saha et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 150,
"end": 173,
"text": "(Saha and Mausam, 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Michelle Obama wife of Barack was born in Chicago. noun phrase) triples from text, e.g., (Barack Obama, took birth in, Honolulu), (Michelle Obama, wife of, Barack), etc. An Open Knowledge Graph (OpenKG) can be constructed out of such triples by representing noun phrases (NPs) as nodes and relation phrases (RPs) as edges connecting them. Example of an OpenKG is shown in Figure 1. OpenKGs do not require pre-specified ontology, making them highly adaptable. In order to use OpenKGs in downstream tasks such as Question Answering, Document Classification, etc., it is often necessary to learn embeddings of NPs and RPs present as nodes and edges in an OpenKG.",
"cite_spans": [],
"ref_spans": [
{
"start": 374,
"end": 380,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Even though Knowledge Graph (KG) embed-ding has been an active area of research (Bordes et al., 2013; Yang et al., 2014) , all the proposed KG embedding methods have focused on embedding Ontological KGs, such as WikiData (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) , DBpedia (Auer et al., 2007) , YAGO (Suchanek et al., 2007) , NELL (Mitchell et al., 2018) , and Freebase (Bollacker et al., 2008) . Existing KG embedding models train representation of each node and edge label based on the context of triples they are present in. Doing this is suitable for ontological KGs as they are canonicalized. However, in OpenKGs, the same latent entity may be represented in different nodes labelled with different NPs. Similarly, the same latent relation can be represented with different RPs. For example, in Figure 1 , Barack Obama the entity is represented using two NP nodes: Barack Obama and Barack. Similarly, the two RPs -took birth in and was born in -refer to the same underlying relation. Hence, the paradigm of learning embeddings for each node and edge label only from the context of the triples they appear in is ineffective for OpenKGs. A possible solution is to canonicalize the OpenKGs. This involves identifying NPs and RPs that refer to the same entity and relation, and assigning them unique IDs. Nodes in the OpenKG having the same ID are merged, leading to a clean and canonicalized graph. Recent works that automatically canonicalize OpenKGs, including (Gal\u00e1rraga et al., 2014) and CESI (Vashishth et al., 2018) , pose canonicalization as a clustering task of the NPs and RPs. However, due to automatic generation, the output clusters often contain some incorrectly canonicalized elements. Thus, directly merging nodes or relations with the same IDs would result in the propagation of errors in the canonicalization step to down-stream tasks.",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "(Bordes et al., 2013;",
"ref_id": "BIBREF3"
},
{
"start": 102,
"end": 120,
"text": "Yang et al., 2014)",
"ref_id": "BIBREF33"
},
{
"start": 221,
"end": 251,
"text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)",
"ref_id": "BIBREF31"
},
{
"start": 262,
"end": 281,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 284,
"end": 312,
"text": "YAGO (Suchanek et al., 2007)",
"ref_id": null
},
{
"start": 320,
"end": 343,
"text": "(Mitchell et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 359,
"end": 383,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF2"
},
{
"start": 1454,
"end": 1478,
"text": "(Gal\u00e1rraga et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 1488,
"end": 1512,
"text": "(Vashishth et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 789,
"end": 797,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "OpenKG",
"sec_num": null
},
{
"text": "Our premise is that the output of these automatic canonicalization models can be utilized to improve OpenKG embedding. Instead of explicitly merging nodes with common IDs, KG embedding models can be designed to judiciously account for mistakes during the canonicalization step. Towards establishing this premise, we propose a flexible OpenKG embedding approach to integrate and utilize the output of a canonicalization model in an error-conscious manner. Our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OpenKG",
"sec_num": null
},
{
"text": "\u2022 We draw attention to an important but relatively unexplored problem of learning repre-sentations for OpenKGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OpenKG",
"sec_num": null
},
{
"text": "\u2022 We propose Canonicalization-infused Representations (CaRe) for Open KGs -a novel approach to enrich OpenKG embedding models with the output of a canonicalization model. To the best of our knowledge, this is the first model of its kind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OpenKG",
"sec_num": null
},
{
"text": "\u2022 Through extensive experiments on real-world datasets, we establish CaRe's effectiveness in embedding OpenKGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OpenKG",
"sec_num": null
},
{
"text": "CaRe source code is available at https:// github.com/malllabiisc/CaRE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OpenKG",
"sec_num": null
},
{
"text": "OpenKG extraction and canonicalization: Multiple OpenIE systems have been developed over the years. These include TextRunner (Yates et al., 2007) , (Angeli et al., 2015) , ReVerb (Fader et al., 2011) , OLLIE (Mausam et al., 2012) , SR-LIE (Christensen et al., 2011) , RelNoun (Pal and Mausam, 2016) and ClauseIE (Del Corro and Gemulla, 2013) . A recent survey on the progress in OpenIE systems is presented in (Mausam, 2016) .",
"cite_spans": [
{
"start": 125,
"end": 145,
"text": "(Yates et al., 2007)",
"ref_id": "BIBREF34"
},
{
"start": 148,
"end": 169,
"text": "(Angeli et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 179,
"end": 199,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 208,
"end": 229,
"text": "(Mausam et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 239,
"end": 265,
"text": "(Christensen et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 276,
"end": 298,
"text": "(Pal and Mausam, 2016)",
"ref_id": "BIBREF21"
},
{
"start": 303,
"end": 341,
"text": "ClauseIE (Del Corro and Gemulla, 2013)",
"ref_id": null
},
{
"start": 410,
"end": 424,
"text": "(Mausam, 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To perform automatic canonicalization of OpenKGs, (Gal\u00e1rraga et al., 2014) first cluster NPs over manually defined feature spaces. These are then passed to AMIE (Gal\u00e1rraga et al., 2013) for RP clustering. Whereas, CESI (Vashishth et al., 2018) jointly learns vector representations of NPs and RPs by infusing side information in KG embedding models which are then used to cluster NPs and RPs. In this work, we use CESI to generate canonicalization clusters for our datasets. KG Embedding Methods: KG embedding methods aim to learn low dimensional vector representations for the nodes and edge labels encoding the graph topology. All these methods train the embeddings by optimizing a link prediction based objective. They primarily differ in their way of mathematically modelling the likelihood of a triple being true. Translation-based hypothesis which regards relation vector for any (subject, relation, object) triple as a translation from the subject vector to the object vector is used in methods like TransE (Bordes et al., 2013) and TransH (Wang et al., 2014) . Semantic matching models, such as DistMult (Yang et al., 2014) , ComplEx (Trouillon et al., 2016) and HolE (Nickel et al., 2016) , use similarity-based scoring functions to measure the likelihood of a fact. Multi-Layer neural network models, ConvE (Dettmers et al., 2018) and R-GCN (Schlichtkrull et al., 2018) have shown better expressive strength. R-GCN adapts graph convolutional network (GCN) (Kipf and Welling, 2016) to a relational graph proposing an auto-encoder model for the link prediction task. Implicit in all these approaches is the assumption that each node in the graph is a different entity, and distinct edge labels refer to distinct relations. This assumption does not hold in OpenKGs. Existing KG embeddings are thus unsuitable for a task like link prediction on OpenKGs. CaRe addresses these limitations by infusing the output of a canonicalization model with the KG embedding models.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "(Gal\u00e1rraga et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 161,
"end": 185,
"text": "(Gal\u00e1rraga et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 219,
"end": 243,
"text": "(Vashishth et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 1014,
"end": 1035,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 1047,
"end": 1066,
"text": "(Wang et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 1112,
"end": 1131,
"text": "(Yang et al., 2014)",
"ref_id": "BIBREF33"
},
{
"start": 1142,
"end": 1166,
"text": "(Trouillon et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 1176,
"end": 1197,
"text": "(Nickel et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 1317,
"end": 1340,
"text": "(Dettmers et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 1345,
"end": 1379,
"text": "R-GCN (Schlichtkrull et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we present a quick overview of a few basic methods which are useful for understanding the rest of the paper, especially the experiments section. We first start with the notations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "Notations: OpenKG is denoted as G = (N, R, T + ), where N and R are the set of NPs and RPs, respectively, and Here, . denotes norm and p is either 1 or 2. For training parameters, TransE uses margin-based pairwise ranking loss with negative sampling. ConvE: ConvE (Dettmers et al., 2018) first reshapes e s and r r to\u0113 s andr r , respectively, and passes them through a 2D convolution layer to compute the score corresponding to a triple:",
"cite_spans": [
{
"start": 264,
"end": 287,
"text": "(Dettmers et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
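The TransE score and margin-based ranking loss summarized above can be sketched as follows (a minimal numpy illustration; the function names and toy vectors are ours, not from the paper):

```python
import numpy as np

def transe_score(e_s, r_r, e_o, p=1):
    # TransE treats r_r as a translation: the score is the negative
    # p-norm of (e_s + r_r - e_o); higher means more plausible.
    return -np.linalg.norm(e_s + r_r - e_o, ord=p)

def margin_ranking_loss(pos, neg, margin=1.0):
    # Margin-based pairwise ranking loss with a negative sample:
    # push the positive triple's score above the negative's by `margin`.
    return max(0.0, margin - pos + neg)

# Toy embeddings: subject + relation lands exactly on the object.
e_s, r_r, e_o = np.array([1., 2.]), np.array([3., 1.]), np.array([4., 3.])
pos = transe_score(e_s, r_r, e_o)                  # 0.0 (perfect translation)
neg = transe_score(e_s, r_r, np.array([0., 0.]))   # -7.0 (far away)
loss = margin_ranking_loss(pos, neg)
```

In training, `neg` would come from a corrupted triple produced by negative sampling.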
{
"text": "T + = {(s, r, o)|s \u2208 N, r \u2208 R, o \u2208 N}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "\u03c8(s, r, o) = f (vec(f ([\u0113 s ;r r ] * w))W )e o .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "Here, * and w denote the convolution operator and convolution filters, f represents an activation function and W , the weights of the final linear layer. For training, ConvE uses binary cross-entropy loss with correct samples considered as positive instances while negative instances are generated through negative sampling. Graph Neural Networks (GNN): GNNs were introduced in (Gori et al., 2005) and (Scarselli et al., 2009) as a generalization of recursive neural networks. Later, the generalization of CNN to graphstructured data, popularly known as Graph Convolution Network (GCN), was proposed in (Bruna et al., 2014) . A first-order formulation of GCN as proposed in (Kipf and Welling, 2016) . Under GCN formulation, the representation of a node n after l th layer is defined as follows.",
"cite_spans": [
{
"start": 378,
"end": 397,
"text": "(Gori et al., 2005)",
"ref_id": "BIBREF14"
},
{
"start": 402,
"end": 426,
"text": "(Scarselli et al., 2009)",
"ref_id": "BIBREF25"
},
{
"start": 603,
"end": 623,
"text": "(Bruna et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 674,
"end": 698,
"text": "(Kipf and Welling, 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "e l+1 n = f \uf8eb \uf8ed i\u2208N (n) W l e l i + b l \uf8f6 \uf8f8 , \u2200n \u2208 N (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "Here, W l , b l are layer parameters, N (n) corresponds to the immediate neighborhood of n and f denotes an activation function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
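A minimal numpy sketch of the GCN layer in Eq. (1); the toy graph, the `gcn_layer` name, and the identity activation are illustrative assumptions, not from the paper:

```python
import numpy as np

def gcn_layer(E, neighbors, W, b, f=np.tanh):
    # Eq. (1): e_n^{l+1} = f( sum_{i in N(n)} W e_i^l + b ),
    # with the weight matrix W and bias b shared across nodes.
    out = np.zeros((E.shape[0], W.shape[1]))
    for n, nbrs in neighbors.items():
        agg = sum(E[i] @ W for i in nbrs) + b
        out[n] = f(agg)
    return out

# Toy graph: 3 nodes; neighbors given as an adjacency dictionary.
E = np.array([[1., 0.], [0., 1.], [1., 1.]])
neighbors = {0: [1, 2], 1: [0], 2: [0, 1]}
W, b = np.eye(2), np.zeros(2)
H = gcn_layer(E, neighbors, W, b, f=lambda x: x)  # identity activation
```

With identity weights and activation, each row of `H` is simply the sum of the node's neighbor embeddings, which makes the aggregation step easy to check by hand.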
{
"text": "GAT: GAT uses the attention mechanism to determine the weights of a node's neighbors (Veli\u010dkovi\u0107 et al., 2018) . Further, it uses multihead attentions and defines the representation of S e s < l a t e x i t s h a 1 _ b a s e 6 4 = \" ",
"cite_spans": [
{
"start": 85,
"end": 110,
"text": "(Veli\u010dkovi\u0107 et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "H e n Y 9 5 6 4 q T z x z B H z i f P 0 A y j c U = < / l a t e x i t > Figure 3 : CaRe Step 2: In this step, CaRe learns KG embeddings from the augmented OpenKG (Figure 2 ). Base model can be any existing KG embedding model (e.g., TransE, ConvE). RP embeddings are parameterized by encoding vector representations of the word sequence composing them. This enables CaRe to capture semantic similarity of RPs. Embeddings of NPs are made more context rich by updating them with the represenations of canonical NPs (connected with dotted lines). Please see Section 3 and 4 for details.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 3",
"ref_id": null
},
{
"start": 162,
"end": 171,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "n th the node as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "e l+1 n = || K k=1 f \uf8eb \uf8ed i\u2208N (n) \u03b1 k (e l n , e l i )W l k e l i \uf8f6 \uf8f8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "(2) Here, \u03b1(.) denotes attention weights and || denotes concatenation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
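Eq. (2) can be sketched as below; note that the attention function is replaced here by a simple dot-product stand-in, not the exact learned parameterization of Veličković et al. (2018), and the toy graph is ours:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def gat_layer(E, neighbors, Ws, f=np.tanh):
    # Eq. (2): per head k, aggregate neighbor messages W_k e_i with
    # attention weights alpha_k(e_n, e_i), then concatenate (||) the
    # K head outputs. alpha is a dot-product stand-in here.
    outs = []
    for n in range(E.shape[0]):
        heads = []
        for W_k in Ws:
            nbrs = neighbors[n]
            alpha = softmax(np.array([E[n] @ E[i] for i in nbrs]))
            msg = sum(a * (E[i] @ W_k) for a, i in zip(alpha, nbrs))
            heads.append(f(msg))
        outs.append(np.concatenate(heads))
    return np.array(outs)

E = np.array([[1., 0.], [0., 1.], [1., 1.]])
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
Ws = [np.eye(2), 0.5 * np.eye(2)]   # K = 2 attention heads
H = gat_layer(E, neighbors, Ws)     # shape (3, 4) after concatenation
```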
{
"text": "4 CaRe: Proposed Method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "CaRe consists of two steps. We first give an overview of the two steps below and then provide a detailed description of each step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "4.1"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "4.1"
},
{
"text": "Step 1: In this step, the given OpenKG is augmented by adding edges from a canonicalization model. This step is outlined in Figure 2 and described in detail in Section 4.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 132,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "4.1"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "4.1"
},
{
"text": "Step 2: In this step, embeddings of nodes and edges present in the augmented OpenKG obtained in Step 1 are learned. This step is outlined in Figure 3 . The system consists of three components: Base Model, Phrase Encoder Network, and Canonical Cluster Encoder Network. We describe these components in Section 4.3, Section 4.4, and Section 4.5, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 149,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "4.1"
},
{
"text": "CaRe architecture and its components are depicted in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 61,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "4.1"
},
{
"text": "As discussed in Section 1, based on the automatic clusters generated by a canonicalization model, merging NPs with a single ID can lead to the propagation of errors in the graph. Hence, we propose a soft integration scheme by adding an unlabeled and undirected edge between any two NPs which are canonical as per the canonicalization model as shown in Figure 2 . More formally, suppose n 1 , n 2 are two NPs that have been identified to be the same in the canonicalization step. We add an undirected and unlabelled edge (n 1 , n 2 ) between the nodes n 1 and n 2 , and call them canonicalization edges. If the canonicalization step produces a confidence score for n 1 and n 2 being the same, then this score can be incorporated as a weight on the canonicalization edge. If no such score is produced, then the edges are kept unweighted. We then collect all these canonicalization edges into the set C. Finally, these edges are added to the original OpenKG G to get a canonicalization augmented OpenKG, G = (N, R, T + \u222a C)",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Step 1: Canonicalization Augmented OpenKG",
"sec_num": "4.2"
},
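Step 1 can be sketched as follows; the `clusters` input is a hypothetical output of a canonicalization model such as CESI, and the function name is ours:

```python
from itertools import combinations

def canonicalization_edges(clusters):
    # Step 1: for every pair of NPs that the canonicalization model
    # placed in one cluster, add an undirected, unlabelled edge.
    # The relational triples T+ are left untouched; the augmented
    # graph is G' = (N, R, T+ union C).
    C = set()
    for cluster in clusters:
        for n1, n2 in combinations(sorted(cluster), 2):
            C.add(frozenset((n1, n2)))  # frozenset: undirected edge
    return C

# Hypothetical canonicalization output:
clusters = [{"Barack Obama", "Barack", "President Obama"}, {"Honolulu"}]
C = canonicalization_edges(clusters)  # 3 edges within the first cluster
```

If the model also emits confidence scores, each edge in `C` could instead carry that score as a weight, as described above.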
{
"text": "This component decides the way parameters are trained in CaRe based on the relational edges T + . While CaRe is flexible to accommodate any KG embedding model as the base model, for the experiments in paper, we used TransE and ConvE (please see Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Base Model (B)",
"sec_num": "4.3"
},
{
"text": "Step 2: Phrase Encoder Network (PN)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.4",
"sec_num": null
},
{
"text": "We generate embeddings of RPs by encoding the vector representations of the words composing them. Consider an RP as a sequence of T words (w 1 , w 2 , ..., w T ) and their corresponding word vectors as (x 1 , x 2 , ..., x T ). They are passed through a Phrase Encoder Network module (Figure 3) for which we use a bidirectional GRU (Cho et al., 2014) model with last pooling as described below.",
"cite_spans": [
{
"start": 331,
"end": 349,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 283,
"end": 293,
"text": "(Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "4.4",
"sec_num": null
},
{
"text": "( \u2212 \u2192 h 1 , \u2212 \u2192 h 2 , ..., \u2212 \u2192 h T ) = \u2212 \u2212\u2212 \u2192 GRU (x 1 , x 2 , ..., x T ) ( \u2190 \u2212 h 1 , \u2190 \u2212 h 2 , ..., \u2190 \u2212 h T ) = \u2190 \u2212\u2212 \u2212 GRU (x 1 , x 2 , ..., x T ). Finally, r r = [ \u2212 \u2192 h T : \u2190 \u2212 h 1 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.4",
"sec_num": null
},
{
"text": "is concatenation of final hidden states in both the directions. This approach allows parameter sharing across RPs with word overlaps while leveraging the rich semantic information from pre-trained word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.4",
"sec_num": null
},
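Assuming the forward and backward hidden-state sequences have already been computed and are stored position-aligned, the last-pooling step reduces to a simple concatenation (numpy sketch; the array shapes are illustrative):

```python
import numpy as np

def last_pool(h_fwd, h_bwd):
    """RP embedding from a Bi-GRU with last pooling: concatenate the
    final forward state h_T with the final backward state h_1 (the
    backward GRU reads right-to-left, so its last computed state is
    the one aligned with position 1, i.e., index 0)."""
    # h_fwd, h_bwd: (T, d) arrays of hidden states, position-aligned
    return np.concatenate([h_fwd[-1], h_bwd[0]])

T, d = 4, 3  # a 4-word RP, hidden size 3
h_fwd = np.random.randn(T, d)
h_bwd = np.random.randn(T, d)
r = last_pool(h_fwd, h_bwd)  # shape (2 * d,)
```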
{
"text": "The canonicalization information is incorporated in the NP embeddings by utilizing the canonicalization-induced edges C. Each NP n \u2208 N is assigned a vector e n . As NP canonicalization is expressed as a clustering step, for each NP, its canonical NPs are a single edge away in C (see Figure 3) . Hence, a single layer of network aggregation is sufficient. We propose a non-parametric message passing and update network which works in the following two steps: Context vector: First for each NP, n \u2208 N a context vector e c n is generated by the following message passing scheme from its canonical neighbors.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 293,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Step 2: Canonical Cluster Encoder Network (CN)",
"sec_num": "4.5"
},
{
"text": "e c n = \uf8eb \uf8ed i\u2208N (n) 1 |N (n)| e i \uf8f6 \uf8f8 , \u2200n \u2208 N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Canonical Cluster Encoder Network (CN)",
"sec_num": "4.5"
},
{
"text": "Here, N (n) = {i|i \u2208 N, (n, i) \u2208 C}, is the canonical neighborhood of n. Updating NP embeddings: The updated embeddings for each NP is computed as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Canonical Cluster Encoder Network (CN)",
"sec_num": "4.5"
},
{
"text": "e n = e n 2 + e c n 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Canonical Cluster Encoder Network (CN)",
"sec_num": "4.5"
},
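The two-step Local Averaging Network described above can be sketched in a few lines of numpy — a minimal illustration for the unweighted-edge case, with hypothetical NP names; NPs with no canonical neighbors simply keep their embedding:

```python
import numpy as np

def lan_update(E, neighbors):
    """Local Averaging Network: for each NP n, average the embeddings
    of its canonical neighbors into a context vector e_c, then set the
    updated embedding to the mean of e_n and e_c."""
    E_new = {}
    for n, e_n in E.items():
        nbrs = neighbors.get(n, [])
        if nbrs:
            # context vector: mean over the canonical neighborhood N(n)
            e_c = np.mean([E[i] for i in nbrs], axis=0)
            E_new[n] = e_n / 2 + e_c / 2
        else:
            # no canonical neighbors: embedding is left unchanged
            E_new[n] = e_n
    return E_new

E = {"Obama": np.array([0.0, 0.0]), "Barack Obama": np.array([2.0, 2.0])}
neighbors = {"Obama": ["Barack Obama"], "Barack Obama": ["Obama"]}
E2 = lan_update(E, neighbors)
# Both canonical NPs are pulled toward their common midpoint [1.0, 1.0].
```

With weighted canonicalization edges, the `np.mean` would become a weighted average over the neighbors.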
{
"text": "which are passed to the decoding stage. For the case of weighted canonicalization edges, the weights can be used while generating the context vector of an NP, by making the contribution of its canonical NPs proportional to the edge weights joining them. The proposed approach can be interpreted as a local embedding smoothening of canonical NPs and we call this network as Local Averaging Network (LAN). We also experimented with more sophisticated attention-based context vector generation and gating mechanism for embedding update stage, but they resulted in a performance drop. Note that, our approach to embed NPs is inspired by the auto-encoder framework used in R- GCN (Schlichtkrull et al., 2018) but with a key difference. In R-GCN, the same set of edges are utilized at both the encoding and decoding stages. Whereas in CaRe, the canonicalization edges C are used at the encoder (Canonical Cluster Encoder Network) while the decoder (Base Model) operates on the relational edges T + . CaRe nomenclature: We introduce a generic nomenclature for the possible variants of the CaRe framework based on the choice of the Base Model (B), Phrase Encoder Network (PN) and Canonical Cluster Encoder Network (CN) as CaRe(B, PN, CN). For e.g., CaRe(B=ConvE, PN=Bi-GRU, CN=LAN) corresponds to a CaRe model with ConvE as base model, Bi-GRU network with last pooling (Section 4.4) as the Phrase Encoder Network and Local Averaging Network (Section 4.5) as the Canonical Cluster Encoder Network. We define Bi-GRU and LAN as default values for the PN and CN arguments respectively. In case, values for these arguments are not explicitly mentioned, they take the default values. Thus, CaRe(B=ConvE) represents the same network as CaRe(B=ConvE, PN=Bi-GRU, CN=LAN).",
"cite_spans": [
{
"start": 671,
"end": 703,
"text": "GCN (Schlichtkrull et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Canonical Cluster Encoder Network (CN)",
"sec_num": "4.5"
},
{
"text": "Statistics of the two datasets used in the experiments of this paper are summarized in Table 1 . In order to build these datasets, we first obtained the three graph datasets -Base, Ambiguous and Re-Verb45K. While Base and Ambiguous datasets are created in (Gal\u00e1rraga et al., 2014) introduced in (Vashishth et al., 2018) . ReVerb20K is created by combining the two smaller datasets Base and Ambiguous. All the datasets are constructed through ReVerb Open KB (Fader et al., 2011) . We refer the readers to the respective papers for the construction details.",
"cite_spans": [
{
"start": 256,
"end": 280,
"text": "(Gal\u00e1rraga et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 295,
"end": 319,
"text": "(Vashishth et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 457,
"end": 477,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "To generate the train, validation and test triples the following procedure is adopted. First, the entire set of triples is divided in 80 : 20 ratio ensuring that each NP and RP in the triples of the smaller set is at least present once in the triples of the bigger set. This consideration is essential because of the transductive nature of existing KG embedding models. The bigger set is considered as the train dataset and the smaller set is further randomly divided in 30 : 70 to get the validation and test datasets respectively. Both datasets contain gold canonicalization clusters for the NPs extracted through the Freebase entity linking information (Gabrilovich et al., 2013) which enables automatic evaluation in the canonicalization task.",
"cite_spans": [
{
"start": 656,
"end": 682,
"text": "(Gabrilovich et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "In the typical link prediction evaluation, an unseen triple (s, r, o) is taken and partial triples (s, r, ?) and (?, r, o) are shown to the model. It ranks all the entities in the graph for their likelihood to be the missing entity and the rank assigned to the true missing entity is considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open KG Link Prediction Evaluation",
"sec_num": "5.2"
},
{
"text": "However, while this is suitable for ontological KGs, it is not valid for our setting. In OpenKGs, instead of entities, NPs are present and several of them can refer to the same entity. This means that, even when predicting correct entity, a model will be unfairly penalized if the prediction is a different canonical form of the entity than the one present in the considered triple. We, therefore propose to rank gold NP clusters, available in the dataset, instead of ranking each NP. We do this in the following manner:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open KG Link Prediction Evaluation",
"sec_num": "5.2"
},
{
"text": "\u2022 List all the NPs in decreasing order of their likelihood to be the missing part of the triple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open KG Link Prediction Evaluation",
"sec_num": "5.2"
},
{
"text": "\u2022 Prune this list by keeping the best ranked NPs for each gold cluster. This gives us the ranked list of gold clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open KG Link Prediction Evaluation",
"sec_num": "5.2"
},
{
"text": "\u2022 Consider the rank of the cluster to which the true missing NP belongs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open KG Link Prediction Evaluation",
"sec_num": "5.2"
},
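The three-step cluster ranking above can be sketched as follows — a toy illustration with hypothetical NP names and cluster IDs:

```python
def cluster_rank(ranked_nps, np_to_cluster, true_np):
    """Prune a ranked NP list to the best-ranked NP per gold cluster,
    then return the 1-based rank of the true NP's cluster."""
    seen, ranked_clusters = set(), []
    for np_ in ranked_nps:  # NPs in decreasing order of likelihood
        c = np_to_cluster[np_]
        if c not in seen:   # keep only the best-ranked NP per cluster
            seen.add(c)
            ranked_clusters.append(c)
    return ranked_clusters.index(np_to_cluster[true_np]) + 1

np_to_cluster = {"Obama": 0, "Barack Obama": 0, "Honolulu": 1, "USA": 2}
ranked_nps = ["Obama", "Honolulu", "Barack Obama", "USA"]
rank = cluster_rank(ranked_nps, np_to_cluster, "Barack Obama")
# "Barack Obama" shares cluster 0 with the top-ranked "Obama", so its
# cluster rank is 1 even though the NP itself is ranked third.
```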
{
"text": "We provide results using three commonly used evaluation metrics: mean rank (MR), mean reciprocal rank (MRR) and Hits@n with n = {10, 30, 50}. Filtered setting as introduced in (Bordes et al., 2013) is followed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open KG Link Prediction Evaluation",
"sec_num": "5.2"
},
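Given a list of gold-cluster ranks collected over the test triples, the three metrics follow directly (a minimal sketch; the function name is illustrative):

```python
def link_prediction_metrics(ranks, ns=(10, 30, 50)):
    """MR, MRR and Hits@n from a list of 1-based gold-cluster ranks."""
    mr = sum(ranks) / len(ranks)                      # mean rank
    mrr = sum(1.0 / r for r in ranks) / len(ranks)    # mean reciprocal rank
    # Hits@n: fraction of test cases whose rank falls within the top n
    hits = {n: sum(r <= n for r in ranks) / len(ranks) for n in ns}
    return mr, mrr, hits

mr, mrr, hits = link_prediction_metrics([1, 5, 40])
# mr = 46/3, mrr = (1 + 1/5 + 1/40)/3, hits[10] = 2/3, hits[50] = 1.0
```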
{
"text": "We run CESI on both the datasets to generate the NP canonical clusters. Note that, none of the existing automatic canonicalization models output edge weights. Hence, for all the experiments, the canonicalization-induced edges are kept unweighted. For Phrase Encoder Network, we found single layer in the Bi-GRU model works best. CaRe allows the use of any pre-trained embeddings. However, (Vashishth et al., 2018) demonstrates the effectiveness of pre-trained GloVE (Pennington et al., 2014) vectors for initializing representations for the same datasets. Hence, the word vectors are initialized with 300-dimensional pre-trained GloVE embeddings and are kept trainable. We use PyTorch Geometric library (Fey and Lenssen, 2019) for the Canonical Cluster Encoder Network module.",
"cite_spans": [
{
"start": 389,
"end": 413,
"text": "(Vashishth et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 466,
"end": 491,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 703,
"end": 726,
"text": "(Fey and Lenssen, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.3"
},
{
"text": "The choice of optimizer and regularization based hyper-parameters is directly adopted from the ones proposed in the original work of the base models. Both the NP and RP embedding size is kept fixed at 300, while the learning rate is selected through a grid search over {0.1,0.01,0.001}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.3"
},
{
"text": "Models like ConvE introduce inverse relations. For our experiments we generate inverse phrase for an RP by adding a phrase \"inverse of\" to that RP. In ComplEx the embeddings have both real and imaginary parts. For this, we use two separate Phrase Encoder Network and Graph Neural Network modules for the real and imaginary parts of RP and NP embeddings respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.3"
},
{
"text": "In this section we evaluate the following questions: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "As shown in Figure 3 , any existing KG embedding model can be used as the base model in the CaRe framework. We experimented with several KG embedding models and found the CaRe achieved substantial improvements in comparison to standalone use of the KG embedding model in each case. We achieved overall best performance working with ConvE as the base model, CaRe(B=ConvE). The experimental results are presented in Table 2 . Table 1 shows that in both the datasets, on an average, the number of train triples for each NP and RP is less than 2. In contrast, FB15k (Bordes et al., 2013) , an ontological KG, has on an average 32 triples for each entity and 360 triples for each relation. This highlights the extremely sparse and fragmented nature of OpenKGs. Hence, the superior performance of CaRe supports the hypothesis that the flow of information while learning the representations of canonical NPs and RPs in OpenKGs is beneficial.",
"cite_spans": [
{
"start": 562,
"end": 583,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 3",
"ref_id": null
},
{
"start": 414,
"end": 421,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 424,
"end": 431,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "6.1"
},
{
"text": "Also, a comparison of the number of unique NPs and gold NP clusters for the two datasets (Table 1) shows that the number of NPs canonical to each other is more significant in ReVerb45K than in ReVerb20K. Hence, a more prominent improvement due to CaRe in ReVerb45K as compared to ReVerb20K proves the effectiveness of CaRe in utilizing the canonicalization information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "6.1"
},
{
"text": "As described in Section 4.4, a Phrase Encoder Network is used to parametrize RP embeddings, which allows parameter sharing while learning embeddings. Table 3 provides a quantitative anal- ysis of the impact of parameterizing RP embeddings across several KG embedding models. In these experiments, the CN argument of CaReis kept \u03c6, which implies no canonicalization information is used. The NP embeddings are trained in the same manner as the base KG embedding model. Comparing the performance of CaRe(B=ConvE,CN=\u03c6) in Table 3 with the performance of CaRe(B=ConvE) in Table 2 , it can be noticed that a major part of the performance boost of CaRe can be attributed to parameterization of RP embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 157,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 518,
"end": 525,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 567,
"end": 574,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Impact of parameterizing RP embeddings",
"sec_num": "6.2"
},
{
"text": "There can be several methods through which the canonicalization edges integrated into the graph (as described in Figure 2 ), can be utilized in the model. In Section 4.5, we described a local embedding smoothening method adopted in CaRe. In this section, we present a comparative analysis with the following competitive baselines:",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 121,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Different ways to utilize Canonicalization edges",
"sec_num": "6.3"
},
{
"text": "\u2022 CaRe(CN=GCN): A single layer of GCN (Kipf and Welling, 2016) for the Canonical Cluster Encoder Network. Refer to Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different ways to utilize Canonicalization edges",
"sec_num": "6.3"
},
{
"text": "\u2022 CaRe(CN=GAT): A single layer of GAT (Veli\u010dkovi\u0107 et al., 2018) for the Canonical Cluster Encoder Network. Refer to Equation 2.",
"cite_spans": [
{
"start": 38,
"end": 63,
"text": "(Veli\u010dkovi\u0107 et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Different ways to utilize Canonicalization edges",
"sec_num": "6.3"
},
{
"text": "\u2022 CaRe(CN=edge): Here the canonicalization edges in G are labelled with a symmetric RP \"is canonical to\". This adds a new edge type in the graph and is treated by KG embedding models like any other edge type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different ways to utilize Canonicalization edges",
"sec_num": "6.3"
},
{
"text": "We show these comparisons using both TransE and ConvE as base models in Figure 4 . The GCN and GAT architectures have an adverse effect on the performance. We believe this can be attributed to the complex nature of these architectures in an already over-parameterized and noisy setting. In CaRe(CN=edge), the model is expected to learn the meaning of two NPs being canonical and encode it in the vector embedding of the new edge type. The local averaging network CaRe(CN=LAN) used in CaRe is a GNN architecture with fixed weights. The choice of weights follows from the prior belief that canonical NP embeddings should be close in the vector space, inducing a useful inductive bias. The above results indicate that this leads to better performance. Additionally, unlike the GNN based methods, CaRe(CN=edge) has a limitation in its modelling capacity as it provides no way to handle the case where canonicalization edges are weighted. Figure 5 demonstrates a qualitative comparison of RP embeddings between ConvE and CaRe(B=ConvE). For this experiment, we selected seven RPs, and for each RP, through human judgement, we selected two more RPs with similar meaning. Thus, there are seven different clusters. The figure shows t-SNE (van der Maaten and Hinton, 2008) visualization of the embeddings learnt for these RPs by CaRe(B=ConvE) and ConvE. t-SNE is a non-linear transformation which tries to map points in high dimensional space to a lower dimension preserving the local relationships between the points. The figure verifies the hypothesis that due to the parameterization of RP embeddings and utilizing pre-trained word embeddings, CaRe is able to better capture the semantic similarity of the RPs in comparison to the base models. Due to the explicit integration of NP canonicalization in CaRe, we observed a similar desirable impact on the embeddings learned for NPs as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 4",
"ref_id": null
},
{
"start": 934,
"end": 942,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Different ways to utilize Canonicalization edges",
"sec_num": "6.3"
},
{
"text": "Open Information Extraction (OpenIE) provides an effective way to bootstrap Open Knowledge Graphs (OpenKGs) from text corpus. OpenKGs consist of noun phrases (NPs) as nodes and relations phrases (RPs) as edges. In spite of this advantage, OpenKGs are often sparse and non-canonicalized, i.e., the same entity could be expressed using multiple nodes in the graph (and similarly for relations). This renders existing Ontological KG embedding methods ineffective in learning embeddings of NPs and RPs in OpenKGs. In spite of this limitation, there has been no prior work which has focused on OpenKG embedding. We fill this gap in the paper and propose CaRe. CaRe infuses canonicalization information combined with the neighborhood graph structure to learn rich representations of NPs. Further, it captures the semantic similarity of RPs by utilizing the word sequence information in these relation phrases to parameterize the RP embeddings. Through extensive experiments on realworld datasets, we demonstrate the effectiveness of embeddings learned by CaRe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "As part of future work, we hope to extend CaRe to also utilize RP canonicalization information. Utilizing OpenKG embeddings in tasks beyond link prediction is another avenue of further work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their constructive comments. This work is supported by the Ministry of Human Resource Development (MHRD), Government of India. Finally, we thank all the members of MALL Lab, IISc for their invaluable suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Leveraging linguistic structure for open domain information extraction",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Melvin Jose Johnson",
"middle": [],
"last": "Premkumar",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "344--354",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1034"
]
},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguis- tic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 344-354, Beijing, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dbpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "The Semantic Web",
"volume": "",
"issue": "",
"pages": "722--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The Semantic Web, pages 722-735, Berlin, Hei- delberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Freebase: A collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD '08",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {
"DOI": [
"10.1145/1376616.1376746"
]
},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A col- laboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Man- agement of Data, SIGMOD '08, pages 1247-1250, New York, NY, USA. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Dur\u00e1n",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Dur\u00e1n, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Proceedings of the 26th Interna- tional Conference on Neural Information Process- ing Systems -Volume 2, NIPS'13, pages 2787-2795, USA. Curran Associates Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Spectral networks and locally connected networks on graphs",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Learning Representations (ICLR2014)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann Lecun. 2014. Spectral networks and lo- cally connected networks on graphs. In Inter- national Conference on Learning Representations (ICLR2014), CBLS, April 2014.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An analysis of open information extraction based on semantic role labeling",
"authors": [
{
"first": "Janara",
"middle": [],
"last": "Christensen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth International Conference on Knowledge Capture, K-CAP '11",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {
"DOI": [
"10.1145/1999676.1999697"
]
},
"num": null,
"urls": [],
"raw_text": "Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2011. An analysis of open informa- tion extraction based on semantic role labeling. In Proceedings of the Sixth International Conference on Knowledge Capture, K-CAP '11, pages 113-120, New York, NY, USA. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Clausie: Clause-based open information extraction",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Del Corro",
"suffix": ""
},
{
"first": "Rainer",
"middle": [],
"last": "Gemulla",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22Nd International Conference on World Wide Web, WWW '13",
"volume": "",
"issue": "",
"pages": "355--366",
"other_ids": {
"DOI": [
"10.1145/2488388.2488420"
]
},
"num": null,
"urls": [],
"raw_text": "Luciano Del Corro and Rainer Gemulla. 2013. Clausie: Clause-based open information extraction. In Pro- ceedings of the 22Nd International Conference on World Wide Web, WWW '13, pages 355-366, New York, NY, USA. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Convolutional 2d knowledge graph embeddings",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Dettmers",
"suffix": ""
},
{
"first": "Minervini",
"middle": [],
"last": "Pasquale",
"suffix": ""
},
{
"first": "Stenetorp",
"middle": [],
"last": "Pontus",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1811--1818",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Dettmers, Minervini Pasquale, Stenetorp Pon- tus, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the 32th AAAI Conference on Artificial Intelligence, pages 1811-1818.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Process- ing, pages 1535-1545, Edinburgh, Scotland, UK. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fast graph representation learning with PyTorch Geometric",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Fey",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"E"
],
"last": "Lenssen",
"suffix": ""
}
],
"year": 2019,
"venue": "ICLR Workshop on Representation Learning on Graphs and Manifolds",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Fey and Jan E. Lenssen. 2019. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Facc1: Freebase annotation of clueweb corpora, version 1 (release date 2013-06-26, format version 1, correction level 0)",
"authors": [
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ringgaard",
"suffix": ""
},
{
"first": "Amarnag",
"middle": [],
"last": "Subramanya",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeniy Gabrilovich, Michael Ringgaard, and Amar- nag Subramanya. 2013. Facc1: Freebase an- notation of clueweb corpora, version 1 (re- lease date 2013-06-26, format version 1, cor- rection level 0). Note: http://lemurproject. org/clueweb09/FACC1/Cited by, 5.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Canonicalizing open knowledge bases",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Gal\u00e1rraga",
"suffix": ""
},
{
"first": "Geremy",
"middle": [],
"last": "Heitz",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 23rd acm international conference on conference on information and knowledge management",
"volume": "",
"issue": "",
"pages": "1679--1688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Gal\u00e1rraga, Geremy Heitz, Kevin Murphy, and Fabian M Suchanek. 2014. Canonicalizing open knowledge bases. In Proceedings of the 23rd acm international conference on conference on informa- tion and knowledge management, pages 1679-1688. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Amie: Association rule mining under incomplete evidence in ontological knowledge bases",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Antonio Gal\u00e1rraga",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Teflioudi",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Hose",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22Nd International Conference on World Wide Web, WWW '13",
"volume": "",
"issue": "",
"pages": "413--422",
"other_ids": {
"DOI": [
"10.1145/2488388.2488425"
]
},
"num": null,
"urls": [],
"raw_text": "Luis Antonio Gal\u00e1rraga, Christina Teflioudi, Katja Hose, and Fabian Suchanek. 2013. Amie: Associ- ation rule mining under incomplete evidence in on- tological knowledge bases. In Proceedings of the 22Nd International Conference on World Wide Web, WWW '13, pages 413-422, New York, NY, USA. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A new model for learning in graph domains",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Gori",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Monfardini",
"suffix": ""
},
{
"first": "Franco",
"middle": [],
"last": "Scarselli",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings. 2005 IEEE International Joint Conference on Neural Networks",
"volume": "",
"issue": "",
"pages": "729--734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Gori, Gabriele Monfardini, and Franco Scarselli. 2005. A new model for learning in graph domains. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., vol- ume 2, pages 729-734. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semisupervised classification with graph convolutional networks",
"authors": [
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.02907"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas N Kipf and Max Welling. 2016. Semi- supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Visualizing data using t-SNE",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Open language learning for information extraction",
"authors": [
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bart",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "523--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learn- ing for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523-534, Jeju Island, Korea. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Open information extraction systems and downstream applications",
"authors": [
{
"first": "Mausam",
"middle": [],
"last": "Mausam",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "4074--4077",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mausam Mausam. 2016. Open information extraction systems and downstream applications. In Proceed- ings of the Twenty-Fifth International Joint Con- ference on Artificial Intelligence, pages 4074-4077. AAAI Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Never-ending learning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hruschka",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Platanios",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Samadi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wijaya",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saparov",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Greaves",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "Commun. ACM",
"volume": "",
"issue": "5",
"pages": "103--115",
"other_ids": {
"DOI": [
"10.1145/3191513"
]
},
"num": null,
"urls": [],
"raw_text": "T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, B. Yang, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Pla- tanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2018. Never-ending learning. Commun. ACM, 61(5):103-115.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Holographic embeddings of knowledge graphs",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Rosasco",
"suffix": ""
},
{
"first": "Tomaso",
"middle": [],
"last": "Poggio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16",
"volume": "",
"issue": "",
"pages": "1955--1961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowl- edge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 1955-1961. AAAI Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Demonyms and compound relational nouns in nominal open IE",
"authors": [
{
"first": "Harinder",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 5th Workshop on Automated Knowledge Base Construction",
"volume": "",
"issue": "",
"pages": "35--39",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1307"
]
},
"num": null,
"urls": [],
"raw_text": "Harinder Pal and Mausam. 2016. Demonyms and com- pound relational nouns in nominal open IE. In Pro- ceedings of the 5th Workshop on Automated Knowl- edge Base Construction, pages 35-39, San Diego, CA. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Open information extraction from conjunctive sentences",
"authors": [
{
"first": "Swarnadeep",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2288--2299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swarnadeep Saha and Mausam. 2018. Open informa- tion extraction from conjunctive sentences. In Pro- ceedings of the 27th International Conference on Computational Linguistics, pages 2288-2299, Santa Fe, New Mexico, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bootstrapping for numerical open IE",
"authors": [
{
"first": "Swarnadeep",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Harinder",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "317--323",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2050"
]
},
"num": null,
"urls": [],
"raw_text": "Swarnadeep Saha, Harinder Pal, and Mausam. 2017. Bootstrapping for numerical open IE. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 317-323, Vancouver, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The graph neural network model",
"authors": [
{
"first": "Franco",
"middle": [],
"last": "Scarselli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Gori",
"suffix": ""
},
{
"first": "Ah",
"middle": [],
"last": "Chung Tsoi",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Hagenbuchner",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Monfardini",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Transactions on Neural Networks",
"volume": "20",
"issue": "1",
"pages": "61--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Modeling relational data with graph convolutional networks",
"authors": [
{
"first": "Michael",
"middle": [
"Sejr"
],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "Rianne",
"middle": [],
"last": "van den Berg",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "ESWC",
"volume": "10843",
"issue": "",
"pages": "593--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In ESWC, volume 10843 of Lecture Notes in Computer Science, pages 593-607. Springer.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Yago: a core of semantic knowledge",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "697--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowl- edge. In Proceedings of the 16th international con- ference on World Wide Web, pages 697-706. ACM.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Complex embeddings for simple link prediction",
"authors": [
{
"first": "Th\u00e9o",
"middle": [],
"last": "Trouillon",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Bouchard",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2071--2080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel,\u00c9ric Gaussier, and Guillaume Bouchard. 2016. Com- plex embeddings for simple link prediction. In In- ternational Conference on Machine Learning, pages 2071-2080.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Cesi: Canonicalizing open knowledge bases using embeddings and side information",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Prince",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 World Wide Web Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1317--1327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Prince Jain, and Partha Talukdar. 2018. Cesi: Canonicalizing open knowledge bases using embeddings and side information. In Pro- ceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1317-1327. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Graph Attention Networks. International Conference on Learning Representations",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Veli\u010dkovi\u0107",
"suffix": ""
},
{
"first": "Guillem",
"middle": [],
"last": "Cucurull",
"suffix": ""
},
{
"first": "Arantxa",
"middle": [],
"last": "Casanova",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Romero",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Li\u00f2",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations. Accepted as poster.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Wikidata: A free collaborative knowledgebase. Commun",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "57",
"issue": "",
"pages": "78--85",
"other_ids": {
"DOI": [
"10.1145/2629489"
]
},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: A free collaborative knowledgebase. Com- mun. ACM, 57(10):78-85.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Knowledge graph embedding by translating on hyperplanes",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Twenty-Eighth AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In Twenty-Eighth AAAI con- ference on artificial intelligence.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Embedding entities and relations for learning and inference in knowledge bases",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6575"
]
},
"num": null,
"urls": [],
"raw_text": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Textrunner: open information extraction on the web",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "25--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. Textrunner: open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 25-26. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Challenges in OpenKG. KG embedding methods are effective in downstream tasks such as Link Prediction. Existing KG embedding methods are ineffective on OpenKGs as they are not canonicalized. CaRe can effectively utilize canonicalization information to learn better embeddings in OpenKGs. Green dotted line represents the missing NP canonicalization information. Please see Section 1 for more details."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "CaRe Step 1: First CaRe augments the original OpenKG with the output of a canonicalization model. (a) OpenKG and NP clusters from a canonicalization model. (b) Augmented OpenKG by adding undirected edges between canonical NPs (represented as dotted lines). Please refer to Section 4.2 for more details."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "is the set of observed triples. Here s, o are the subject and object phrase, respectively. TransE: Given a triple (s, r, o), consider e s , r r , e o \u2208 R d as the d-dimensional vector representations of subject (s), relation (r) and object (o) respectively. TransE follows a translation based triple scoring function \u03c8(.): \u03c8(s, r, o) = \u2212 e s + r r \u2212 e o p ."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "z s e i t e D k M 8 f w B 8 7 n D 1 P C j d I = < / l a t e x i t > t e x i t s h a 1 _ b a s e 6 4 = \" W 4 b N W a 5 s d Y Y 2 p m g h k c 5 o D U L u I y A = \" > A A A B 6 n i c b"
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "j n J e n H f n Y 9 a 6 5 O Q z B / A H z u c P + M S Q g w = = < / l a t e x i t > e 0 o < l a t e x i t s h a 1 _ b a s e 6 4 = \" E L m Z"
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "4 c 5 b w 4 7 8 7 H o r X k F D P H 8 A f O 5 w 8 q 5 o 7 q < / l a t e x i t > t e x i t s h a 1 _ b a s e 6 4 = \" m x /"
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "e 8 e 4 u y r X r P I 4 C H M M J n I E H l 1 C D W 6 h D A x g M 4 B l e 4 c 2 R z o v z 7 n z M W 1 e c f O Y I / s D 5 / A E M q o 2 j < / l a t e x i t > w T < l a t e x i t s h a 1 _ b a s e 6 4 = \""
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Performance comparison of CaRe framework for differnt values of the CN (as described in Section 6.3). B=ConvE (left group) and B=TransE (right group) in both the plots. PN=Bi-GRU. Mean Reciprocal Rank (MRR) is plotted on the y axis (higher is better). t-SNE visualization of RP embeddings. RP embeddings learned by CaRe(B=ConvE) are able to capture the semantic similarity of the RPs whereas in ConvE this information is lost. Please refer to Section 6.4."
},
"TABREF2": {
"num": null,
"text": "Link Prediction results. CaRe(B=ConvE) substantially outperforms all the existing KG embedding models. For all the experiments, evaluation strategy described in Section 5.2 is followed. B, CN, PN are the arguments of CaRe framework (Section 4 andFigure 3). For the reported results: PN=Bi-GRU and CN=LAN.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Method</td><td/><td/><td>ReVerb45K</td><td/><td/><td/><td/><td>ReVerb20K</td><td/></tr><tr><td/><td>MR</td><td colspan=\"4\">MRR Hits@10 Hits@30 Hits@50</td><td>MR</td><td colspan=\"4\">MRR Hits@10 Hits@30 Hits@50</td></tr><tr><td>TransE</td><td colspan=\"2\">2955.8 .193</td><td>.361</td><td>.446</td><td>.478</td><td colspan=\"2\">1425.8 .126</td><td>.299</td><td>.411</td><td>.468</td></tr><tr><td>CaRe(B=TransE, CN=\u03c6)</td><td colspan=\"2\">2522.3 .195</td><td>.378</td><td>.457</td><td>.488</td><td>978.2</td><td>.286</td><td>.411</td><td>.515</td><td>.565</td></tr><tr><td>ComplEx</td><td colspan=\"2\">7786.5 .047</td><td>.047</td><td>.048</td><td>.073</td><td colspan=\"2\">5502.2 .037</td><td>.058</td><td>.075</td><td>.085</td></tr><tr><td colspan=\"3\">CaRe(B=ComplEx, CN=\u03c6) 5205.4 .185</td><td>.225</td><td>.235</td><td>.325</td><td colspan=\"2\">3010.0 .216</td><td>.288</td><td>.345</td><td>.376</td></tr><tr><td>R-GCN</td><td colspan=\"2\">2866.8 .042</td><td>.046</td><td>.091</td><td>.113</td><td colspan=\"2\">1204.3 .122</td><td>.187</td><td>.263</td><td>.305</td></tr><tr><td>CaRe(B=R-GCN, CN=\u03c6)</td><td colspan=\"2\">2508.1 .145</td><td>.203</td><td>.260</td><td>.305</td><td colspan=\"2\">1210.3 .195</td><td>.275</td><td>.340</td><td>.370</td></tr><tr><td>ConvE</td><td colspan=\"2\">2650.8 .233</td><td>.338</td><td>.401</td><td>.429</td><td colspan=\"2\">1014.5 .294</td><td>.402</td><td>.491</td><td>.541</td></tr><tr><td>CaRe(B=ConvE, CN=\u03c6)</td><td colspan=\"2\">1656.1 .293</td><td>.401</td><td>.477</td><td>.509</td><td>966.9</td><td>.307</td><td>.419</td><td>.514</td><td>.556</td></tr></table>"
},
"TABREF3": {
"num": null,
"text": "Impact of parameterizing RP embeddings. B, CN, PN are the arguments of CaRe framework (Section 4 andFigure 3). \u03c6 value for CN argument implies that Canonical Cluster Encoder Network module is not used in these experiments. PN=Bi-GRU in all these experiments. Please refer to Section 6.2.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Q1. Is CaRe effective for the link prediction task</td></tr><tr><td>in OpenKGs? (Section 6.1)</td></tr><tr><td>Q2. What is the quantitave and qualitative impact</td></tr><tr><td>of parameterizing RP embeddings in CaRe?</td></tr><tr><td>(Section 6.2 and Section 6.4)</td></tr><tr><td>Q3. How does the local embedding smoothening</td></tr><tr><td>approach adopted in CaRe compare against</td></tr><tr><td>other competitive baselines? (Section 6.3)</td></tr></table>"
}
}
}
}