{
"paper_id": "D19-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:05:51.787729Z"
},
"title": "Jointly Learning Entity and Relation Representations for Entity Alignment",
"authors": [
{
"first": "Yuting",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {
"country": "China"
}
},
"email": "fengyansong@pku.edu.cn"
},
{
"first": "Zheng",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Leeds",
"location": {
"country": "U.K"
}
},
"email": "z.wang5@leeds.ac.uk"
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {
"country": "China"
}
},
"email": "zhaodongyan@pku.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Entity alignment is a viable means for integrating heterogeneous knowledge among different knowledge graphs (KGs). Recent developments in the field often take an embeddingbased approach to model the structural information of KGs so that entity alignment can be easily performed in the embedding space. However, most existing works do not explicitly utilize useful relation representations to assist in entity alignment, which, as we will show in the paper, is a simple yet effective way for improving entity alignment. This paper presents a novel joint learning framework for entity alignment. At the core of our approach is a Graph Convolutional Network (GCN) based framework for learning both entity and relation representations. Rather than relying on pre-aligned relation seeds to learn relation representations, we first approximate them using entity embeddings learned by the GCN. We then incorporate the relation approximation into entities to iteratively learn better representations for both. Experiments performed on three real-world cross-lingual datasets show that our approach substantially outperforms state-of-the-art entity alignment methods.",
"pdf_parse": {
"paper_id": "D19-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "Entity alignment is a viable means for integrating heterogeneous knowledge among different knowledge graphs (KGs). Recent developments in the field often take an embeddingbased approach to model the structural information of KGs so that entity alignment can be easily performed in the embedding space. However, most existing works do not explicitly utilize useful relation representations to assist in entity alignment, which, as we will show in the paper, is a simple yet effective way for improving entity alignment. This paper presents a novel joint learning framework for entity alignment. At the core of our approach is a Graph Convolutional Network (GCN) based framework for learning both entity and relation representations. Rather than relying on pre-aligned relation seeds to learn relation representations, we first approximate them using entity embeddings learned by the GCN. We then incorporate the relation approximation into entities to iteratively learn better representations for both. Experiments performed on three real-world cross-lingual datasets show that our approach substantially outperforms state-of-the-art entity alignment methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Knowledge graphs (KGs) transform unstructured knowledge into simple and clear triples of <head entity, relation, tail entity> for rapid response and reasoning of knowledge. They are an effective way for supporting various NLP-enabled tasks like machine reading (Yang and Mitchell, 2017) , information extraction (Wang et al., 2018a) , and question-answering .",
"cite_spans": [
{
"start": 261,
"end": 286,
"text": "(Yang and Mitchell, 2017)",
"ref_id": "BIBREF27"
},
{
"start": 312,
"end": 332,
"text": "(Wang et al., 2018a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Even though many KGs originate from the same resource, e.g., Wikipedia, they are usually created independently. Therefore, different KGs often use different expressions and surface forms to indicate equivalent entities and relations -let alone those built from different resources or languages. This common problem of heterogeneity makes it difficult to integrate knowledge among different KGs. A powerful technique to address this issue is Entity Alignment, the task of linking entities with the same real-world identity from different KGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Classical methods for entity alignment typically involve a labor-intensive and time-consuming process of feature construction (Mahdisoltani et al., 2013) or rely on external information constructed by others (Suchanek et al., 2011) . Recently, efforts have been devoted to the so-called embeddingbased approaches. Representative works of this direction include JE (Hao et al., 2016) , MTransE (Chen et al., 2017) , JAPE , IP-TransE (Zhu et al., 2017) , and BootEA (Sun et al., 2018) . More recent work (Wang et al., 2018b) uses the Graph Convolutional Network (GCN) (Kipf and Welling, 2017) to jointly embed multiple KGs.",
"cite_spans": [
{
"start": 126,
"end": 153,
"text": "(Mahdisoltani et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 208,
"end": 231,
"text": "(Suchanek et al., 2011)",
"ref_id": "BIBREF19"
},
{
"start": 364,
"end": 382,
"text": "(Hao et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 393,
"end": 412,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 432,
"end": 450,
"text": "(Zhu et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 464,
"end": 482,
"text": "(Sun et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 502,
"end": 522,
"text": "(Wang et al., 2018b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the recent works (e.g., JE, MTransE, JAPE, IPTransE and BootEA) rely on the translation-based models, such as TransE (Bordes et al., 2013) , which enable these approaches to encode both entities and relations of KGs. These methods often put more emphasis on the entity embeddings, but do not explicitly utilize relation embeddings to help with entity alignment. Another drawback of such approaches is that they usually rely on pre-aligned relations (JAPE and IPTransE) or triples (MTransE) . This limits the scale at which the model can be effectively performed due to the overhead for constructing seed alignments for large KGs. Alternative methods like GCN-based models, unfortunately, cannot directly obtain relation representations, leaving much room for improvement.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 457,
"end": 476,
"text": "(JAPE and IPTransE)",
"ref_id": null
},
{
"start": 488,
"end": 497,
"text": "(MTransE)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent studies have shown that jointly modeling entities and relations in a single framework can improve tasks like information extraction (Miwa and Bansal, 2016; Bekoulis et al., 2018) . We hypothesize that this will be the case for entity alignment too; that is, the rich relation information could be useful for improving entity alignment as entities and their relations are usually closely related. Our experiments show that this is even a conservative target: by jointly learning entity and relation representations, we can promote the results of both entity and relation alignment.",
"cite_spans": [
{
"start": 139,
"end": 162,
"text": "(Miwa and Bansal, 2016;",
"ref_id": "BIBREF13"
},
{
"start": 163,
"end": 185,
"text": "Bekoulis et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we aim to build a learning framework that jointly learns entity and relation representations for entity alignment; and we want to achieve this with only a small set of pre-aligned entities but not relations. Doing so will allow us to utilize relation information to improve entity alignment without paying extra cost for constructing seed relation alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work is enabled by the recent breakthrough effectiveness of GCNs (Kipf and Welling, 2017) in extracting useful representations from graph structures. Although GCNs provide a good starting point, applying it to develop a practical and efficient framework to accurately capture relation information across KGs is not trivial. Because a vanilla GCN operates on the undirected and unlabeled graphs, a GCN-based model like (Wang et al., 2018b) would ignore the useful relation information of KGs. While the Relational Graph Convolutional Network (R-GCN) (Schlichtkrull et al., 2018) can model multi-relational graphs, existing R-GCNs use a weight matrix for each relation. This means that an R-GCN would require an excessive set of parameters to model thousands of relations in a typical real-world KG, making it difficult to learn an effective model on large KGs.",
"cite_spans": [
{
"start": 422,
"end": 442,
"text": "(Wang et al., 2018b)",
"ref_id": "BIBREF25"
},
{
"start": 553,
"end": 581,
"text": "(Schlichtkrull et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A key challenge of our joint learning framework is how to generate useful relation representations at the absence of seed relation alignments, and to ensure the framework can scale to a large number of types of relations. We achieve this by first approximating the relation representations using entity embeddings learned through a small amount of seed entity alignments. We go further by constructing a new joint entity representation consisting of both relation information and neighboring structural information of an entity. The joint representations allow us to iteratively improve the model's capability of generating better entity and relation representations, which lead to not only better entity alignment, but also more accurate re-lation alignment as a by-product.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our approach by applying it to three real-world datasets. Experimental results show that our approach delivers better and more robust results when compared with state-of-the-art methods for entity and relation alignments. The key contribution of this paper is a novel joint learning model for entity and relation alignments. Our approach reduces the human involvement and the associated cost in constructing seed alignments, but yields better performance over prior works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Until recently, entity alignment would require intensive human participation (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) to design hand-crafted features (Mahdisoltani et al., 2013) , rules, or rely on external sources (Wang et al., 2017) . In a broader context, works in schema and ontology matching also seek help from additional information by using e.g., extra data sources (Nguyen et al., 2011) , entity descriptions (Lacoste-Julien et al., 2013; Yang et al., 2015) , or semantics of the web ontology language (Hu et al., 2011) . Performance of such schemes is bounded by the quality and availability of the extra information about the target KG, but obtaining sufficiently good-quality annotated data could be difficult for large KGs.",
"cite_spans": [
{
"start": 77,
"end": 107,
"text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 140,
"end": 167,
"text": "(Mahdisoltani et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 205,
"end": 224,
"text": "(Wang et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 364,
"end": 385,
"text": "(Nguyen et al., 2011)",
"ref_id": "BIBREF14"
},
{
"start": 408,
"end": 437,
"text": "(Lacoste-Julien et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 438,
"end": 456,
"text": "Yang et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 501,
"end": 518,
"text": "(Hu et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Alignment",
"sec_num": "2.1"
},
{
"text": "Recently, embedding-based entity alignment methods were proposed to reduce human involvement. JE (Hao et al., 2016) was among the first attempts in this direction. It learns embeddings of different KGs in a uniform vector space where entity alignment can be performed. MTransE (Chen et al., 2017) encodes KGs in independent embeddings and learns transformation between KGs. BootEA (Sun et al., 2018 ) exploits a bootstrapping process to learn KG embeddings. SEA (Pei et al., 2019) proposes a degree-aware KG embedding model to embed KGs. KDCoE (Chen et al., 2018 ) is a semi-supervised learning approach for co-training embeddings for multilingual KGs and entity descriptions. They all use translation-based models as the backbone to embed KGs.",
"cite_spans": [
{
"start": 97,
"end": 115,
"text": "(Hao et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 277,
"end": 296,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 381,
"end": 398,
"text": "(Sun et al., 2018",
"ref_id": "BIBREF21"
},
{
"start": 462,
"end": 480,
"text": "(Pei et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 544,
"end": 562,
"text": "(Chen et al., 2018",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Alignment",
"sec_num": "2.1"
},
{
"text": "Non-translational embedding-based methods include recent works on a GCN-based model (Wang et al., 2018b) and NTAM (Li et al., 2018) . Additionally, most recent work, RDGCN (Wu et al., 2019) , introduces the dual relation graph to model the relation information of KGs. Through multiple rounds of interactions between the primal and dual graphs, RDGCN can effectively incorporate more complex relation information into entity representations and achieve promising results for entity alignment. However, existing methods only focus on entity embeddings and ignore the help that relation representations can provide on this task.",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "(Wang et al., 2018b)",
"ref_id": "BIBREF25"
},
{
"start": 114,
"end": 131,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 172,
"end": 189,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Alignment",
"sec_num": "2.1"
},
{
"text": "MTransE and NTAM are two of a few methods that try to perform both relation and entity alignments. However, both approaches require high-quality seed alignments, such as pre-aligned triples or relations, for relation alignment. Our approach advances prior works by jointly modeling entities and relations by using only a small set of pre-aligned entities (but not relations) to simultaneously perform entity and relation alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Alignment",
"sec_num": "2.1"
},
{
"text": "GCNs (Duvenaud et al., 2015; Kearnes et al., 2016; Kipf and Welling, 2017) are neural networks operating on unlabeled graphs and inducing features of nodes based on the structures of their neighborhoods. Recently, GCNs have demonstrated promising performance in tasks like node classification (Kipf and Welling, 2017), relation extraction , semantic role labeling (Marcheggiani and Titov, 2017) , etc. As an extension of GCNs, the R-GCNs (Schlichtkrull et al., 2018) have recently been proposed to model relational data for link prediction and entity classification. However, R-GCNs usually require a large number of parameters that are often hard to train, when applied to multi-relational graphs.",
"cite_spans": [
{
"start": 5,
"end": 28,
"text": "(Duvenaud et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 29,
"end": 50,
"text": "Kearnes et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 51,
"end": 74,
"text": "Kipf and Welling, 2017)",
"ref_id": "BIBREF8"
},
{
"start": 364,
"end": 394,
"text": "(Marcheggiani and Titov, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 438,
"end": 466,
"text": "(Schlichtkrull et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Networks",
"sec_num": "2.2"
},
{
"text": "In this work, we choose to use GCNs to first encode KG entities and to approximate relation representations based on entity embeddings. Our work is the first to utilize GCNs for jointly aligning entities and relations for heterogeneous KGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Networks",
"sec_num": "2.2"
},
{
"text": "We now introduce the notations used in this paper and define the scope of this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "A KG is formalized as G = (E, R, T ), where E, R, T are the sets of entities, relations and triples, respectively. Let G 1 = (E 1 , R 1 , T 1 ) and G 2 = (E 2 , R 2 , T 2 ) be two different KGs. Usually, some equivalent entities between KGs are already known, defined as alignment seeds",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "L = {(e i 1 , e i 2 )|e i 1 \u2208 E 1 , e i 2 \u2208 E 2 }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "We define the task of entity or relation align-ment as automatically finding more equivalent entities or relations based on known alignment seeds. In our model, we only use known aligned entity pairs as training data for both entity and relation alignments. The process of relation alignment in our framework is unsupervised, which does not need pre-aligned relation pairs for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "Given two target KGs, G 1 and G 2 , and a set of known aligned entity pairs L, our approach uses GCNs (Kipf and Welling, 2017) with highway network (Srivastava et al., 2015) gates to embed entities of the two KGs and approximate relation semantics based on entity representations. By linking entity representations with relation representations, they promote each other in our framework and ultimately achieve better alignment results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "4"
},
{
"text": "As illustrated in Figure 1 , our approach consists of three stages: (1) preliminary entity alignment, (2) approximating relation representations, and 3joint entity and relation alignment.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "4.1"
},
{
"text": "In the first stage, we utilize GCNs to embed entities of various KGs in a unified vector space for preliminary entity alignment. Next, we use the entity embeddings to approximate relation representations which can be used to align relations across KGs. In the third stage, we incorporate the relation representations into entity embeddings to obtain the joint entity representations, and continue using GCNs to iteratively integrate neighboring structural information to achieve better entity and relation representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Architecture",
"sec_num": "4.1"
},
{
"text": "As shown in Figure 1 , we put G 1 and G 2 in one graph G a = (E a , R a , T a ) to form our model's input. We utilize pre-aligned entity pairs to train our model and then discover latent aligned entities.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preliminary Entity Alignment",
"sec_num": "4.2"
},
{
"text": "Our entity alignment model utilizes GCNs to embed entities in G a . Our model consists of multiple stacked GCN layers so that it can incorporate higher degree neighborhoods. The input for GCN layer l is a node feature matrix,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "X (l) = {x (l) 1 , x (l) 2 , ..., x (l) n |x (l) i \u2208 R d (l) },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "where n is the number of nodes (entities) of G a , and d (l) is the number of features in layer l. X (l) is updated us- Figure 1 : Overall architecture of our model. The blue dotted lines denote the process of preliminary entity alignment and preliminary relation alignment using approximate relation representations, and the black solid lines denote the process of continuing using GCNs to iteratively learn better entity and relation representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "ing forward propagation as: 1where\u00c3 = A + I is the adjacency matrix of G a with self-connections, I is an identity matrix, D jj = k\u00c3 jk , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "X (l+1) = ReLU(D \u2212 1 2\u00c3D \u2212 1 2 X (l) W (l) ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "W (l) \u2208 R d (l) \u00d7d (l+1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "is a layerspecific trainable weight matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
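The forward propagation rule above can be illustrated in a few lines. The following is a minimal NumPy sketch, not the authors' released implementation; the function name `gcn_layer` and the toy graph are ours:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: X' = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_tilde = A + np.eye(A.shape[0])            # add self-connections
    d = A_tilde.sum(axis=1)                     # D_jj = sum_k A_tilde[j, k]
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(0.0, A_hat @ X @ W)       # ReLU

# Toy graph with 3 entity nodes and a single edge 0-1.
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
X = np.eye(3)        # one-hot node features, d^{(l)} = 3
W = np.ones((3, 2))  # trainable weights, d^{(l)} = 3 -> d^{(l+1)} = 2
H = gcn_layer(A, X, W)
```

Stacking several such layers lets each entity aggregate features from increasingly distant neighbors.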
{
"text": "Inspired by (Rahimi et al., 2018 ) that uses highway gates (Srivastava et al., 2015) to control the noise propagation in GCNs for geographic localization, we also employ layer-wise highway gates to build a Highway-GCN (HGCN) model. Our layer-wise gates work as follow:",
"cite_spans": [
{
"start": 12,
"end": 32,
"text": "(Rahimi et al., 2018",
"ref_id": "BIBREF16"
},
{
"start": 59,
"end": 84,
"text": "(Srivastava et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T (X (l) ) = \u03c3(X (l) W (l) T + b (l) T ),",
"eq_num": "(2)"
}
],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "X (l+1) = T (X (l) )\u2022X (l+1) +(1\u2212T (X (l) ))\u2022X (l) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "where X^{(l)} is the input to layer l + 1; \u03c3 is a sigmoid function; \u2022 is element-wise multiplication; W_T^{(l)} and b_T^{(l)} are the weight matrix and bias vector for the transform gate T(X^{(l)}), respectively. Alignment. In our work, entity alignment is performed by simply measuring the distance between two entity nodes in their embedding space. With the output entity representations X = {x_1, x_2, ..., x_n | x_i \u2208 R^d}, for entities e_1 from G_1 and e_2 from G_2, their distance is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
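The highway gate of Eqs. (2) and (3) can be sketched as follows. This is our own minimal NumPy illustration; the shapes and bias values are only chosen to make the gating behavior visible:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway(X_in, X_out, W_T, b_T):
    """Eqs. 2-3: T(X) = sigmoid(X W_T + b_T); the output mixes the new
    GCN features X_out with the layer input X_in, element-wise."""
    T = sigmoid(X_in @ W_T + b_T)
    return T * X_out + (1.0 - T) * X_in

X_in = np.zeros((2, 3))    # layer input
X_out = np.ones((2, 3))    # GCN-transformed features
W_T = np.zeros((3, 3))
# With zero input the gate reduces to sigmoid(b_T): a large negative bias
# keeps mostly the input, a large positive bias passes the GCN output.
mostly_in = highway(X_in, X_out, W_T, b_T=-10.0 * np.ones(3))
mostly_out = highway(X_in, X_out, W_T, b_T=10.0 * np.ones(3))
```

The gate thus lets each layer suppress noisy GCN updates and carry useful features through unchanged.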
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d(e 1 , e 2 ) = x e 1 \u2212 x e 2 L 1 .",
"eq_num": "(4)"
}
],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "Training. We use a margin-based scoring function as the training objective, to make the distance between aligned entity pairs to be as close as possible, and the distance between positive and negative alignment pairs to be as large as possible. The loss function is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = (p,q)\u2208L (p ,q )\u2208L max{0, d(p, q)\u2212d(p , q )+\u03b3},",
"eq_num": "(5)"
}
],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "where \u03b3 > 0 is a margin hyper-parameter; L stands for the negative alignment set of L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
{
"text": "Rather than simply random sampling for negative instances, we look for more challenging negative samples, e.g., those with subtle differences from the positive ones, to train our model. Given a positive aligned pair (p, q), we choose the Knearest entities of p (or q) according to Eq. 4 in the embedding space to replace q (or p) as the negative instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Convolutional Layers.",
"sec_num": null
},
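The L1 distance of Eq. 4, the margin loss of Eq. 5, and the K-nearest negative sampling strategy can be sketched together. This is a NumPy sketch under our own naming (`l1_dist`, `margin_loss`, `k_nearest_negatives`); the toy embeddings are illustrative:

```python
import numpy as np

def l1_dist(x, y):
    """Eq. 4: L1 distance between embeddings; broadcasts over rows."""
    return np.abs(x - y).sum(axis=-1)

def margin_loss(emb1, emb2, pos_pairs, neg_pairs, gamma=1.0):
    """Eq. 5: sum of max(0, d(p, q) - d(p', q') + gamma) over each
    positive pair (p, q) and its negative replacement (p', q')."""
    loss = 0.0
    for (p, q), (pn, qn) in zip(pos_pairs, neg_pairs):
        loss += max(0.0, l1_dist(emb1[p], emb2[q])
                    - l1_dist(emb1[pn], emb2[qn]) + gamma)
    return loss

def k_nearest_negatives(emb1, emb2, p, q, K=2):
    """Pick the K entities of G_2 closest to p in embedding space,
    excluding its true counterpart q: harder than random negatives."""
    d = l1_dist(emb2, emb1[p]).astype(float)
    d[q] = np.inf                       # exclude the aligned entity
    return np.argsort(d)[:K]

emb1 = np.array([[0.0, 0.0], [1.0, 1.0]])              # entities of G_1
emb2 = np.array([[0.0, 0.0], [0.9, 1.0], [5.0, 5.0]])  # entities of G_2
negs = k_nearest_negatives(emb1, emb2, p=1, q=1, K=1)
loss = margin_loss(emb1, emb2, pos_pairs=[(0, 0)], neg_pairs=[(0, 2)])
```

Picking nearest neighbors as negatives forces the model to separate entities that are genuinely confusable, rather than trivially distant ones.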
{
"text": "At this stage, we expect to obtain relation representations, which can be used in the next stage for constructing joint representations and can also be used for preliminary relation alignment. Since we are unable to explicitly modeling relations within our GCN-based framework, we thus approximate the relation representations based on their head and tail entity representations produced by the entity alignment model described in Section 4.2. This strategy is based on our observation that the statistical information of the head and tail entities of a relation can more or less reflect the shallow semantics of the relation itself, such as the head or tail entities' type requirements of a relation. Our experiments in Section 6 suggest that this is a reasonable assumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
{
"text": "Given a relation r \u2208 R a , there is a set of triples of r,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
{
"text": "T r = {(h i , r, t j )|h i \u2208 H r , t j \u2208 T r },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
{
"text": "where H r and T r are the sets of head entities and tail entities of relation r, respectively. For a relation r, its representation can be approximated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r = f (H r , T r ),",
"eq_num": "(6)"
}
],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
{
"text": "where r is the approximated representation of relation r. H r and T r are the sets of HGCN-output embeddings of head entities and tail entities of relation r. f (\u2022) is a function to produce relation representations with input entity vectors, which can take many forms such as mean, adding, concatenation or more complex models. In our model, we compute the relation representation for r by first concatenating its averaged head and tail entity representations, and then introducing a matrix W R \u2208 R 2d\u00d7m as a learnable shared linear transformation on relation vectors. Here,d is the number of features in each HGCN-output entity embedding and m is the number of features in each relation representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
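Concretely, the approximation of Eq. 6 amounts to averaging head and tail embeddings, concatenating, and applying the shared transform W_R. A minimal NumPy sketch (our own; `approximate_relation`, the toy embeddings and the identity-like W_R are illustrative):

```python
import numpy as np

def approximate_relation(ent_emb, heads, tails, W_R):
    """Eq. 6, r = f(H_r, T_r): concatenate the averaged head and tail
    entity embeddings, then apply the shared linear transform W_R."""
    h = ent_emb[heads].mean(axis=0)   # averaged head representations
    t = ent_emb[tails].mean(axis=0)   # averaged tail representations
    return np.concatenate([h, t]) @ W_R   # (2d,) @ (2d, m) -> (m,)

ent_emb = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])  # d = 2
W_R = np.eye(4, 3)   # toy shared transform, 2d = 4 -> m = 3
r_vec = approximate_relation(ent_emb, heads=[0, 1], tails=[2], W_R=W_R)
```

Because W_R is shared across all relations, the number of parameters does not grow with the number of relation types, unlike per-relation weight matrices in R-GCNs.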
{
"text": "With the relation representations in place, relation alignment can be performed by measuring the similarity between two relation vectors. For relation r 1 from G 1 and r 2 from G 2 , their similarity is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(r 1 , r 2 ) = r 1 \u2212r 2 L 1 \u2212\u03b2 |P r 1 r 2 | |HT r 1 \u222a HT r 2 | ,",
"eq_num": "(7)"
}
],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
{
"text": "where r 1 and r 2 are the relation representations for r 1 and r 2 . In addition to calculating the distance between the two relation vectors, we believe that the more aligned entities exist in the entities that are connected to the two relations, the more likely the two relations are equivalent. Thus, for r 1 and r 2 , we collect the pre-aligned entities existing in the head/tail entities of these two relations as the set P r 1 r 2 = {(e i 1 , e i 2 )|e i 1 \u2208 HT r 1 , e i 2 \u2208 HT r 2 , (e i 1 , e i 2 ) \u2208 L}. HT r 1 and HT r 2 are the sets of head/tail entities for relation r 1 and r 2 respectively. \u03b2 is a hyper-parameter for balance. In our framework, relation alignment is explored in an unsupervised fashion, in which we do not have any pre-aligned relations as training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximating Relation Representations",
"sec_num": "4.3"
},
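The scoring of Eq. 7 can be sketched as follows. This is our own NumPy illustration; the entity ids, set sizes and beta value are invented for the example:

```python
import numpy as np

def relation_similarity(r1_vec, r2_vec, ht1, ht2, aligned, beta=0.5):
    """Eq. 7: L1 distance between relation vectors, minus a bonus
    proportional to the fraction of pre-aligned entity pairs among the
    head/tail entities of the two relations (lower = more similar)."""
    shared = sum(1 for (e1, e2) in aligned if e1 in ht1 and e2 in ht2)
    return np.abs(r1_vec - r2_vec).sum() - beta * shared / len(ht1 | ht2)

r1_vec = np.array([1.0, 0.0])
r2_vec = np.array([1.0, 0.5])
# ht1/ht2: head/tail entity ids of r_1 and r_2; 'aligned' plays the role of L.
score = relation_similarity(r1_vec, r2_vec, ht1={1, 2}, ht2={10, 11},
                            aligned={(1, 10), (2, 11)})
```

Since no relation seeds are needed, this scoring is computed directly from entity-level evidence, keeping relation alignment unsupervised.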
{
"text": "The first two stages of our approach could already produce a set of entity and relation alignments, but we do not stop here. Instead, we attentively fuse the entity and relation representations and further jointly optimize them using the seed entity alignments. Our key insight is that entity and relation alignment tasks are inherently closely related. This is because aligned entities tend to have some relations in common, and similar relations should have similar categories of head and tail entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Entity and Relation Alignment",
"sec_num": "4.4"
},
{
"text": "Specifically, we first pre-train the entity alignment model (Section 4.2) until its entity alignment performance has converged to be stable. We assume that both the pre-trained entity and approximate relation representations can provide rich information for themselves. Next, for each entity, we aggregate the representations of its relevant relations into a relation context vector, which is further combined with its pre-trained entity representation to form a new joint entity representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Entity and Relation Alignment",
"sec_num": "4.4"
},
{
"text": "Formally, for each entity e \u2208 E a , its new joint representation e joint can be calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Entity and Relation Alignment",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e joint = g(e, R e ),",
"eq_num": "(8)"
}
],
"section": "Joint Entity and Relation Alignment",
"sec_num": "4.4"
},
{
"text": "where e is the HGCN-output representation of entity e. R e is the set of relation representations of e's relevant relations. g(\u2022) is a function to produce the new joint entity representation by taking e and R e as input, which can also take many forms of operations. In our model, we calculate e joint by first summing all relation representations in R e and then concatenating e with the summed relation context vector. After getting the new joint entity representations, X joint , we continue optimizing our model against the seed entity alignments, where we use the joint entity representations to calculate the training loss according to Eq. 5 to continue updating HGCNs 1 . Note that the joint entity representations are composed of entity embeddings and relation representations, while the relation representations are also constructed based on the entity embeddings. Hence, after backpropagation of the loss calculated using the joint entity representations, we optimize the entity embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Entity and Relation Alignment",
"sec_num": "4.4"
},
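The fusion function g(\u2022) described above — sum the relation vectors, then concatenate with the entity embedding — can be sketched in a few lines. This is a minimal illustration of that specific sum-then-concatenate choice; the helper name is ours.

```python
import numpy as np

def joint_entity_representation(entity_vec, relation_vecs):
    """Form the joint representation: [entity ; sum of relation vectors].

    entity_vec: HGCN-output embedding of the entity (the vector e).
    relation_vecs: list of vectors for the entity's relevant relations (R_e).
    """
    # Aggregate all relevant relation vectors into one relation context vector.
    relation_context = np.sum(relation_vecs, axis=0)
    # Concatenation doubles the dimensionality when entity and relation
    # vectors have the same size.
    return np.concatenate([entity_vec, relation_context])
```

Because the result is built from both entity embeddings and (entity-derived) relation vectors, a loss computed on it backpropagates into the entity embeddings, as the text notes.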
{
"text": "We use DBP15K datasets from to evaluate our approach. DBP15K contains three",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "#Ent. #Rel. #Rel tr. Alignments #Ent. #Rel. ZH 66,469 2,830 153,929 1,5000 890 EN 98,125 2,317 237,674 JA-EN JA 65,744 2,043 164,373 1,5000 529 EN 95,680 2,096 233,319 FR-EN FR 66,858 1,379 192,191 1,5000 212 EN 105,889 2,209 278,590 cross-lingual datasets that were built from the English version to Chinese, Japanese and French versions of DBpedia. Each contains data from two KGs in different languages and provides 15K prealigned entity pairs. Besides, each dataset also provides some pre-aligned relations. We manually aligned more relations from the three datasets and removed the ambiguously aligned relation pairs to construct the test sets for relation alignment. Table 1 shows the statistics of the three datasets. We stress that our approach achieves entity and relation alignments simultaneously using only a small number of pre-aligned entities, and relation alignments are only used for testing. Following the previous works Wang et al., 2018b; Sun et al., 2018) , we use 30% of the pre-aligned entity pairs as training data and 70% for testing. Our source code and datasets are freely available online 2 .",
"cite_spans": [
{
"start": 971,
"end": 990,
"text": "Wang et al., 2018b;",
"ref_id": "BIBREF25"
},
{
"start": 991,
"end": 1008,
"text": "Sun et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 44,
"end": 257,
"text": "ZH 66,469 2,830 153,929 1,5000 890 EN 98,125 2,317 237,674 JA-EN JA 65,744 2,043 164,373 1,5000 529 EN 95,680 2,096 233,319 FR-EN FR 66,858 1,379 192,191 1,5000 212 EN 105,889 2,209",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "DBP15K",
"sec_num": null
},
{
"text": "We set \u03b3 = 1, \u03b2 = 20, and learning rate to 0.001. We sample K = 125 negative pairs every 50 epochs. We use entity names in different KGs for better model initialization. We translate non-English entity names to English via Google Translate, and the entity features are initialized with pretrained English word vectors glove.840B.300d 3 in our model. Note that Google Translate does not always give accurate translations for named entities. We inspected 100 English translations for Japanese and Chinese entity names, and discovered that around 20% of the translations are wrong. The errors are mainly attributed to the missing of titles/modifications and wrong interpretations for person/location names. The inaccurate translation poses further challenges for our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.2"
},
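The name-based initialization described above can be realized by pooling pre-trained GloVe vectors over the tokens of each (translated) entity name. The sketch below assumes a dict of token vectors has already been loaded from glove.840B.300d; mean pooling and the zero-vector fallback for out-of-vocabulary names are our own assumptions, since the paper does not prescribe the exact pooling.

```python
import numpy as np

def init_entity_feature(name, word_vectors, dim=300):
    """Initialize an entity's feature vector from its (translated) name.

    word_vectors: dict mapping a lowercase token to its GloVe vector.
    Tokens missing from the vocabulary are skipped; names with no known
    tokens fall back to a zero vector.
    """
    vecs = [word_vectors[t] for t in name.lower().split() if t in word_vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```

Even with the roughly 20% translation errors reported above, many name tokens still overlap across languages after translation, which is why this initialization helps.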
{
"text": "Entity Alignment. For entity alignment, we compare our approach against six embedding-based entity alignment methods discussed in Section 1: JE (Hao et al., 2016) , MTransE (Chen et al., 2017) , JAPE 4 , IPTransE (Zhu et al., 2017) , BootEA (Sun et al., 2018) and GCN (Wang et al., 2018b) . Among those, BootEA is the bestperforming model on DBP15K. Relation Alignment. For relation alignment, we compare our approach with the state-of-the-art BootEA (denoted by BootEA-R), and MTransE (denoted by MTransE-R). Note that MTransE provides five implementation variants for its alignment model. To provide a fair comparison, we choose the one that does not use pre-aligned relations but gives the best performance for a triplewise alignment verification (Chen et al., 2017 ) -a closely related task for relation alignment. Since BootEA and MTransE are translation-based models that encode both entities and relations, relation alignment can be done by measuring the similarities between two relation representations. Furthermore, to evaluate the effectiveness of our proposed relation approximation method, we also build BootEA-PR and MTransE-PR for relation alignment according to Section 4.3.",
"cite_spans": [
{
"start": 144,
"end": 162,
"text": "(Hao et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 173,
"end": 192,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 213,
"end": 231,
"text": "(Zhu et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 241,
"end": 259,
"text": "(Sun et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 268,
"end": 288,
"text": "(Wang et al., 2018b)",
"ref_id": "BIBREF25"
},
{
"start": 750,
"end": 768,
"text": "(Chen et al., 2017",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Competitive Approaches",
"sec_num": "5.3"
},
{
"text": "Model Variants. To evaluate our design choices, we provide different implementation variants with the following denotations. HGCN is our base GCN model with highway gates and entity name initialization. It has several variants, described as follows. HGCN-PE (Section 4.2) and HGCN-PR (Section 4.3) are our preliminary models for entity and relation alignments, respectively. HGCN-JE and HGCN-JR are our complete models that use joint representations to further improve entity alignment and relation alignment (Section 4.4). Finally, GCN-PE and GCN-PR are the preliminary GCN-based models for entity and relation alignments respectively, which use entity name initialization but no highway gates; GCN-JE and GCN-JR are the corresponding joint learning models; and GCN-JE-r is the randomly initialized version of GCN-JE without entity name initialization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "5.4"
},
{
"text": "Metrics. Like prior works Wang et al., 2018b; Sun et al., 2018) , we use Hits@k as our evaluation metric. A Hits@k score is computed by measuring the proportion of correctly aligned entities ranked in the top k list. Hence, we prefer higher Hits@k scores that indicate better performance.",
"cite_spans": [
{
"start": 26,
"end": 45,
"text": "Wang et al., 2018b;",
"ref_id": "BIBREF25"
},
{
"start": 46,
"end": 63,
"text": "Sun et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "5.4"
},
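Hits@k as defined above is a one-line computation over the ranks of the correct counterparts. A minimal sketch (1-based ranks assumed):

```python
def hits_at_k(ranks, k):
    """Hits@k: fraction of test entities whose correct counterpart
    appears within the top-k candidates. ranks are 1-based positions
    of the correct alignment in each entity's sorted candidate list."""
    return sum(1 for r in ranks if r <= k) / len(ranks)
```

For example, with ranks [1, 3, 12, 2], three of four correct counterparts fall in the top 10, giving Hits@10 = 0.75.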
{
"text": "In this section, we first show that our complete model consistently outperforms all alternative methods across datasets, metrics and alignment tasks. We then analyze the impact of prior alignment data size on model performance, showing that our approach requires significantly less training data but achieves better performance over the best-performing prior method. Finally, we use a concrete example to discuss how jointly learned entity and relation representations can be used to improve both entity and relation alignments. Table 2 reports the performance for entity alignment of all compared approaches. The top part of the table shows the performance of prior approaches. By using a bootstrapping process to expand the training data, BootEA clearly outperforms all prior methods. By capturing the rich neighboring structural information, GCN outperforms all other translation-based models on Hits@1, and over IPTransE, MTransE and JE on Hits@10. The bottom part of Table 2 shows how our proposed techniques, i.e., entity name initialization, joint embeddings and layer-wise highway gates, can be used within a GCN framework to improve entity alignment. After initialized with the machine-translated entity names, GCN-PE considerably improves GCN on all datasets. The improvement suggests that even rough translations of entity names (see Section 5.2) can still provide important evidence for entity alignment and finally boost the performance. By employing layer-wise highway gates, HGCN-PE further improves GCN-PE, giving a 34.31% improvement on Hits@1 on DBP15K F R\u2212EN , and also outperforms the strongest baseline BootEA. This substantial improvement indicates that highway gates can effectively control the propagation of noisy information. Our complete framework HGCN-JE gives the best performance across all metrics and datasets. 
Comparing HGCN-JE with HGCN-PE and GCN-JE with GCN-PE (2.36% and 4.19% improvements of Hits@1 on DBP15K ZH\u2212EN respectively), we observe that joining entity and relation alignments improves the model performance. Even without entity name initialization, GCN-JE-r still has obvious advantages over JE, MTransE, JAPE, IPTransE and GCN. The results reinforce our claim that merging the relation information into entities can produce better entity representations. We stress that our proposed methods are not restricted to GCNs or HGCNs, but can be flexibly integrated with other KG representation models as well. Table 3 reports the results of relation alignment. Directly using the relation embeddings learned by MTransE to perform relation alignment leads to rather poor performance for MTransE-R, less than 4% for Hits@1 for all datasets. This is because the translation assumption, head + relation \u2248 tail, used by MTransE focuses on modeling the overall relationship among heads, tails, and relations, but capturing little neighboring information and relation semantics. After approximating the relation representations using entity embeddings according to Eq 6, MTransE-PR substantially improves MTransE-R. This confirms our assumption that it is feasible to approximate a relation using the in- formation of its head and tail entities.",
"cite_spans": [],
"ref_spans": [
{
"start": 529,
"end": 536,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 972,
"end": 979,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 2452,
"end": 2459,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6"
},
{
"text": "The strong entity alignment model BootEA also performs well for relation alignment. Using the relation embeddings from BootEA, BootEA-R delivers the best Hits@1 in MTransE and BootEA variants. Using our approximation strategy hurts BootEA-R in Hits@1, but we see improvements on Hits@10 across all datasets. This suggests that our approximation method can bring more related candidates, but may lack precision to select topranked candidates, comparing to explicitly relation modeling in translation-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Alignment",
"sec_num": "6.2"
},
{
"text": "Our framework, HGCN-JR, delivers the best relation alignment results across datasets and metrics, except for Hits@10 on DBP15K F R\u2212EN . Like entity alignment, we also observe that joining entity and relation alignments improves relation alignment, as evidenced by the better performance of HGCN-JR and GCN-JR over HGCN-PR and GCN-PR, respectively. That is, joint modeling produces better entity representations, which in turn provide better relation approximations. This can promote the results of both alignment tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Alignment",
"sec_num": "6.2"
},
{
"text": "Impact of Available Seed Alignments. To explore the impact of the size of seed alignments on our model, we compare our HGCN with BootEA by varying the proportion of pre-aligned entities from 10% to 40% with a step of 10%. Figure 2 (a-c) illustrate the Hits@1 for entity alignment of HGCN-JE and BootEA on three datasets. As the amount of seed alignments increases, the performances of both models on all three data sets gradually improve. HGCN-JE consistently obtains superior results compared to BootEA, and seems to be insensitive to the proportion of seed alignments. For example, HGCN-JE still achieves 86.40% for Hits@1 on DBP15K F R\u2212EN when only using 10% of training data. This Hits@1 score is 17.84% higher than that of BootEA when BootEA uses 40% of seed alignments. Figure 2 (d-f) show the Hits@1 for relation alignment of HGCN-JR and BootEA-R. HGCN-JR also consistently outperforms BootEA-R, and gives more stable results with different ratios of seed entity alignments. These results further confirm the robustness of our model, especially with limited seed entity alignments. Case Study. Figure 3 shows an example from DBP15K F R\u2212EN . In the stages of preliminary entity alignment and relation alignment, our model correctly predicts the aligned entity pair (v 2 , v 5 ) and relation pair (r 2 , r 5 ). After examining the full experimental data 5 , we find that the entities with more neighbors, such as v 2 and v 5 (indicating Norway), and the high-frequency relations, such as r 2 and r 5 (indicating country), are easier to align, since such entities and relations have rich structural information that can be exploited by a GCN. After jointly learning entity and relation representations, the extra neighboring relation information (e.g., the aligned relations (r 2 , r 5 )) enables our model to successfully align v F R and v EN . 
If we keep updating the model to learn better entity and relation representations, our alignment framework can successfully uncover more entity and relation alignments such as (v 1 , v 4 ) and (r 1 , r 4 ). This shows that joint representations can improve both entity and relation alignments.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 230,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 776,
"end": 790,
"text": "Figure 2 (d-f)",
"ref_id": "FIGREF1"
},
{
"start": 1101,
"end": 1109,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6.3"
},
{
"text": "This paper presents a novel framework for entity alignment by jointly modeling entities and relations of KGs. Our approach does not require prealigned relations as training data, yet it can simultaneously align entities and relations of heterogeneous KGs. We achieve this by employing gated GCNs to automatically learn high-quality entity and relation representations. As a departure from prior work, our approach constructs joint entity representations that contain both relation information and entity information. We demonstrate that the whole is greater than the sum of its parts, as the joint representations allow our model to iteratively improve the learned representations for both entities and relations. Extensive experiments on three real-world datasets show that our approach delivers better and more robust performance when compared to state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The training procedure is detailed in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/StephanieWyt/HGCN-JE-JR 3 http://nlp.stanford.edu/projects/glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that also provides analysis by considering the outputs of a machine translator and JAPE, and using a theoretically perfect oracle predictor to correctly choose in between the results given by the machine translator and JAPE. As this only serves as an interesting up-bound analysis, but does not reflect the capability of JAPE (because it is impossible to build such a perfect predictor in the first place), we do not compare to this oracle implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A more detailed analysis of our experimental results can be found in Appendix B in the supplementary material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported in part by the National Hi-Tech R&D Program of China (No. 2018YFB1005100), the NSFC Grants (No. 61672057, 61672058, 61872294), and a UK Royal Society International Collaboration Grant (IE161012). For any correspondence, please contact Yansong Feng.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adversarial training for multi-context joint entity and relation extraction",
"authors": [
{
"first": "Giannis",
"middle": [],
"last": "Bekoulis",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Deleu",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Develder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2830--2836",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2830-2836.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in neural information processing systems, pages 2787-2795.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment",
"authors": [
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yingtao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Zaniolo",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training em- beddings of knowledge graphs and entity descrip- tions for cross-lingual entity alignment. In Proceed- ings of the Twenty-Seventh International Joint Con- ference on Artificial Intelligence, IJCAI-18.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multilingual knowledge graph embeddings for cross-lingual knowledge alignment",
"authors": [
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yingtao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Mohan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Zaniolo",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph em- beddings for cross-lingual knowledge alignment. In Proceedings of the 26th International Joint Confer- ence on Artificial Intelligence (IJCAI).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Convolutional networks on graphs for learning molecular fingerprints",
"authors": [
{
"first": "Dougal",
"middle": [],
"last": "David K Duvenaud",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Maclaurin",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Iparraguirre",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Bombarell",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Hirzel",
"suffix": ""
},
{
"first": "Ryan P",
"middle": [],
"last": "Aspuru-Guzik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Adams",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "2224--2232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David K Duvenaud, Dougal Maclaurin, Jorge Ipar- raguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. 2015. Convolu- tional networks on graphs for learning molecular fin- gerprints. In Advances in Neural Information Pro- cessing Systems 28, pages 2224-2232.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A joint embedding method for entity alignment of knowledge bases",
"authors": [
{
"first": "Yanchao",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Yuanzhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of China Conference on Knowledge Graph and Semantic Computing (CCKS2016)",
"volume": "",
"issue": "",
"pages": "3--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanchao Hao, Yuanzhe Zhang, Shizhu He, Kang Liu, and Jian Zhao. 2016. A joint embedding method for entity alignment of knowledge bases. In Proceed- ings of China Conference on Knowledge Graph and Semantic Computing (CCKS2016), pages 3-14.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A self-training approach for resolving object coreference on the semantic web",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuzhong",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th international conference on World wide web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Hu, Jianfeng Chen, and Yuzhong Qu. 2011. A self-training approach for resolving object corefer- ence on the semantic web. In Proceedings of the 20th international conference on World wide web.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Molecular graph convolutions: moving beyond fingerprints",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Kearnes",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Mccloskey",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Berndl",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Pande",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Computer-Aided Molecular Design",
"volume": "30",
"issue": "8",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Kearnes, Kevin Mccloskey, Marc Berndl, Vi- jay Pande, and Patrick Riley. 2016. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 30(8):1-14.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semisupervised classification with graph convolutional networks",
"authors": [
{
"first": "N",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kipf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sigma: Simple greedy matching for aligning large knowledge bases",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Lacoste-Julien",
"suffix": ""
},
{
"first": "Konstantina",
"middle": [],
"last": "Palla",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Davies",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Thore",
"middle": [],
"last": "Graepel",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Lacoste-Julien, Konstantina Palla, Alex Davies, Gjergji Kasneci, Thore Graepel, and Zoubin Ghahramani. 2013. Sigma: Simple greedy matching for aligning large knowledge bases. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Non-translational alignment for multi-relational networks",
"authors": [
{
"first": "Shengnan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Mingzhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haiping",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yingzi",
"middle": [],
"last": "Ou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18",
"volume": "",
"issue": "",
"pages": "4180--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shengnan Li, Xin Li, Rui Ye, Mingzhong Wang, Haip- ing Su, and Yingzi Ou. 2018. Non-translational alignment for multi-relational networks. In Pro- ceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4180-4186.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Yago3: A knowledge base from multilingual wikipedias",
"authors": [
{
"first": "Farzaneh",
"middle": [],
"last": "Mahdisoltani",
"suffix": ""
},
{
"first": "Joanna",
"middle": [],
"last": "Biega",
"suffix": ""
},
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2013,
"venue": "CIDR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farzaneh Mahdisoltani, Joanna Biega, and Fabian M Suchanek. 2013. Yago3: A knowledge base from multilingual wikipedias. In CIDR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Encoding sentences with graph convolutional networks for semantic role labeling",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1506--1515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for se- mantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1506-1515.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "End-to-end relation extraction using LSTMs on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1105--1116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, pages 1105-1116.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multilingual schema matching for wikipedia infoboxes",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Viviane",
"middle": [],
"last": "Moreira",
"suffix": ""
},
{
"first": "Huong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Hoa",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Juliana",
"middle": [],
"last": "Freire",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. VLDB Endow",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh Nguyen, Viviane Moreira, Huong Nguyen, Hoa Nguyen, and Juliana Freire. 2011. Multilingual schema matching for wikipedia infoboxes. Proc. VLDB Endow.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference",
"authors": [
{
"first": "Shichao",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Hoehndorf",
"suffix": ""
},
{
"first": "Xiangliang",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "The World Wide Web Conference, WWW '19",
"volume": "",
"issue": "",
"pages": "3130--3136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shichao Pei, Lu Yu, Robert Hoehndorf, and Xiangliang Zhang. 2019. Semi-supervised entity alignment via knowledge graph embedding with awareness of de- gree difference. In The World Wide Web Conference, WWW '19, pages 3130-3136.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semi-supervised user geolocation via graph convolutional networks",
"authors": [
{
"first": "Afshin",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2009--2019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2018. Semi-supervised user geolocation via graph convolutional networks. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics, pages 2009-2019.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Modeling relational data with graph convolutional networks",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "Rianne",
"middle": [],
"last": "Van Den Berg",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "European Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "593--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolu- tional networks. In European Semantic Web Confer- ence, pages 593-607.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Paris: Probabilistic alignment of relations, instances, and schema",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Abiteboul",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Senellart",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. VLDB Endow",
"volume": "5",
"issue": "3",
"pages": "157--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Serge Abiteboul, and Pierre Senellart. 2011. Paris: Probabilistic alignment of re- lations, instances, and schema. Proc. VLDB Endow., 5(3):157-168.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Cross-lingual entity alignment via joint attributepreserving embedding",
"authors": [
{
"first": "Zequn",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Chengkai",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "628--644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attribute- preserving embedding. In International Semantic Web Conference, pages 628-644.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bootstrapping entity alignment with knowledge graph embedding",
"authors": [
{
"first": "Zequn",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qingheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuzhong",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18",
"volume": "",
"issue": "",
"pages": "4396--4402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4396-4402.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Wikidata: A free collaborative knowledgebase",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "Communications of the ACM",
"volume": "57",
"issue": "",
"pages": "78--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: A free collaborative knowledgebase. Commu- nications of the ACM, 57:78-85.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Label-free distant supervision for relation extraction via knowledge graph embedding",
"authors": [
{
"first": "Guanying",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ruoxu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yalin",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Huajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2246--2255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guanying Wang, Wen Zhang, Ruoxu Wang, Yalin Zhou, Xi Chen, Wei Zhang, Hai Zhu, and Huajun Chen. 2018a. Label-free distant supervision for re- lation extraction via knowledge graph embedding. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2246-2255.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multisource knowledge bases entity alignment by leveraging semantic tags",
"authors": [
{
"first": "Xuepeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Shulin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuanzhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Chinese Journal of Computers",
"volume": "40",
"issue": "3",
"pages": "701--711",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuepeng Wang, Kang Liu, Shizhu He, Shulin Liu, Yuanzhe Zhang, and Jun Zhao. 2017. Multi- source knowledge bases entity alignment by lever- aging semantic tags. Chinese Journal of Computers, 40(3):701-711.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cross-lingual knowledge graph alignment via graph convolutional networks",
"authors": [
{
"first": "Zhichun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qingsong",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Xiaohan",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "349--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018b. Cross-lingual knowledge graph alignment via graph convolutional networks. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 349- 357.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Relation-aware entity alignment for heterogeneous knowledge graphs",
"authors": [
{
"first": "Yuting",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19",
"volume": "",
"issue": "",
"pages": "5278--5284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019. Relation-aware en- tity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI- 19, pages 5278-5284.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Leveraging knowledge bases in lstms for improving machine reading",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Entity matching across heterogeneous sources",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yizhou",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Yang, Yizhou Sun, Jie Tang, Bo Ma, and Juanzi Li. 2015. Entity matching across heterogeneous sources. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Graph convolution over pruned dependency trees improves relation extraction",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Man- ning. 2018a. Graph convolution over pruned de- pendency trees improves relation extraction. In Em- pirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Variational reasoning for question answering with knowledge graph",
"authors": [
{
"first": "Yuyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hanjun",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexan- der J Smola, and Le Song. 2018b. Variational reasoning for question answering with knowledge graph. In Thirty-Second AAAI Conference on Ar- tificial Intelligence.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Iterative entity alignment via joint knowledge embeddings",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17",
"volume": "",
"issue": "",
"pages": "4258--4264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via joint knowledge embeddings. In Proceedings of the Twenty-Sixth International Joint Conference on Ar- tificial Intelligence, IJCAI-17, pages 4258-4264.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "(a)-(c) report the performance for entity alignment of HGCN-JE and BootEA when they are trained with different proportions of seed entity alignments on the three DBP15K datasets. (d)-(f) show the relation alignment performance of HGCN-JR and BootEA-R under corresponding conditions. The x-axes are the proportions of seed alignments, and the y-axes are Hits@1 scores.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "world example from DBP15K F R\u2212EN .[v 2 ; v 5 ] and [r 2 ; r 5 ] are respectively the aligned entities and aligned relations after performing preliminary entity and relation alignments.[v F R ; v EN ] and [v 1 ; v 4 ]are the newly aligned entity pairs, and r 1 and r 4 are the newly aligned relations, which are discovered using jointly learned entity and relation representations. Jointly optimizing alignment tasks leads to the sucessful discovery of new aligned relation and entity pairs.",
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"text": "",
"content": "<table/>",
"type_str": "table"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Summary of the DBP15K datasets.",
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"text": "Performance on entity alignment.",
"content": "<table><tr><td>Models</td><td colspan=\"6\">ZH-EN Hits@1 Hits@10 Hits@1 Hits@10 Hits@1 Hits@10 JA-EN FR-EN</td></tr><tr><td>MTransE-R</td><td>3.03</td><td>8.88</td><td>2.65</td><td>10.21</td><td>3.30</td><td>14.62</td></tr><tr><td>MTransE-PR</td><td>32.81</td><td>57.64</td><td>31.00</td><td>56.14</td><td>18.87</td><td>44.34</td></tr><tr><td>BootEA-R</td><td>55.17</td><td>70.00</td><td>47.83</td><td>67.67</td><td>36.79</td><td>58.49</td></tr><tr><td>BootEA-PR</td><td>45.28</td><td>85.37</td><td>41.40</td><td>79.77</td><td>30.19</td><td>60.38</td></tr><tr><td>GCN-PR</td><td>66.18</td><td>82.81</td><td>60.87</td><td>81.47</td><td>38.21</td><td>52.83</td></tr><tr><td>GCN-JR</td><td>70.22</td><td>82.81</td><td>63.89</td><td>81.10</td><td>41.98</td><td>53.77</td></tr><tr><td>HGCN-PR</td><td>69.33</td><td>84.49</td><td>63.14</td><td>81.26</td><td>41.51</td><td>54.25</td></tr><tr><td>HGCN-JR</td><td>70.34</td><td>85.39</td><td>65.03</td><td>83.55</td><td>42.45</td><td>56.60</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"text": "Performance on relation alignment.",
"content": "<table/>",
"type_str": "table"
}
}
}
}