{
"paper_id": "D19-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:58:06.237366Z"
},
"title": "Leveraging 2-hop Distant Supervision from Table Entity Pairs for Relation Extraction",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Deng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Distant supervision (DS) has been widely used to automatically construct (noisy) labeled data for relation extraction (RE). Given two entities, distant supervision exploits sentences that directly mention them for predicting their semantic relation. We refer to this strategy as 1-hop DS, which unfortunately may not work well for long-tail entities with few supporting sentences. In this paper, we introduce a new strategy named 2-hop DS to enhance distantly supervised RE, based on the observation that there exist a large number of relational tables on the Web which contain entity pairs that share common relations. We refer to such entity pairs as anchors for each other, and collect all sentences that mention the anchor entity pairs of a given target entity pair to help relation prediction. We develop a new neural RE method REDS2 in the multi-instance learning paradigm, which adopts a hierarchical model structure to fuse information respectively from 1-hop DS and 2-hop DS. Extensive experimental results on a benchmark dataset show that REDS2 can consistently outperform various baselines across different settings by a substantial margin. 1",
"pdf_parse": {
"paper_id": "D19-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "Distant supervision (DS) has been widely used to automatically construct (noisy) labeled data for relation extraction (RE). Given two entities, distant supervision exploits sentences that directly mention them for predicting their semantic relation. We refer to this strategy as 1-hop DS, which unfortunately may not work well for long-tail entities with few supporting sentences. In this paper, we introduce a new strategy named 2-hop DS to enhance distantly supervised RE, based on the observation that there exist a large number of relational tables on the Web which contain entity pairs that share common relations. We refer to such entity pairs as anchors for each other, and collect all sentences that mention the anchor entity pairs of a given target entity pair to help relation prediction. We develop a new neural RE method REDS2 in the multi-instance learning paradigm, which adopts a hierarchical model structure to fuse information respectively from 1-hop DS and 2-hop DS. Extensive experimental results on a benchmark dataset show that REDS2 can consistently outperform various baselines across different settings by a substantial margin. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation extraction (RE) aims to extract semantic relations between two entities from unstructured text and is an important task in natural language processing (NLP). Formally, given an entity pair (e 1 , e 2 ) from a knowledge base (KB) and a sentence (instance) that mentions them, RE tries to predict if a relation r from a predefined relation set exists between e 1 and e 2 . A special relation NA is used if none of the predefined relations holds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given that it is costly to construct large-scale labeled instances for RE, distant supervision (DS) Figure 1 : Illustration of 2-hop distant supervision. The top panel shows a target entity pair, one sentence that mentions it, and the relation under study which cannot be inferred from the sentence. The middle gives part of a table from Wikipedia page \"Mr. Basketball USA\", where we can extract anchors for the target entity pair. The bottom shows some sentences that are associated with the anchors, which more clearly indicate the underinvestigated relation and can be utilized to extract relations between the target entity pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "has been a popular strategy to automatically construct (noisy) training data. It assumes that if two entities hold a relation in a KB, all sentences mentioning them express the same relation. Noticing that the DS assumption does not always hold and has the wrong labeling problem, many efforts including (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) have adopted the multiinstance learning paradigm to tackle the challenge, and more recently, neural models with attention mechanism have been proposed to de-emphasize the noisy instances (Lin et al., 2016; Ji et al., 2017; Han et al., 2018) . Such models tend to work well when there are a large number of sentences talking about the target entity pair (Lin et al., 2016) .",
"cite_spans": [
{
"start": 304,
"end": 325,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF15"
},
{
"start": 326,
"end": 348,
"text": "Hoffmann et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 349,
"end": 371,
"text": "Surdeanu et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 559,
"end": 577,
"text": "(Lin et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 578,
"end": 594,
"text": "Ji et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 595,
"end": 612,
"text": "Han et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 725,
"end": 743,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, we observe that there can be a large portion of entity pairs that have very few supporting sentences (e.g., nearly 75% of entity pairs in the Riedel et al. (2010) dataset only have one single sentence mentioning them), which makes distantly supervised RE even more challenging.",
"cite_spans": [
{
"start": 151,
"end": 171,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The conventional distant supervision strategy only exploits instances that directly mention a target entity pair, and because of this, we refer to it as 1-hop distant supervision. On the other hand, there are a large number of Web tables that contain relational facts about entities (Cafarella et al., 2008; Venetis et al., 2011; Wang et al., 2012) . Owing to the semi-structured nature of tables, we can extract from them sets of entity pairs that share common relations, and sentences mentioning these entity pairs often have similar semantic meanings. Under this observation, we introduce a new strategy named 2-hop distant supervision: We define entity pairs that potentially have the same relation with a given target entity pair as anchors, which can be found through Web tables, and aim to fully exploit the sentences that mention those anchor entity pairs to augment RE for the target entity pair. Figure 1 illustrates the 2-hop DS strategy.",
"cite_spans": [
{
"start": 283,
"end": 307,
"text": "(Cafarella et al., 2008;",
"ref_id": "BIBREF2"
},
{
"start": 308,
"end": 329,
"text": "Venetis et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 330,
"end": 348,
"text": "Wang et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 906,
"end": 914,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The intuition behind 2-hop DS is if the target entity pair holds a certain relation, one of its anchors is likely to have that relation too and at least one sentence mentioning the anchors should express the relation. Despite being noisy, the 2-hop DS can provide extra, informative supporting sentences for the target entity pair. One straightforward approach is to merge the two bags of sentences respectively derived from 1-hop and 2-hop DS as one single set and apply existing multiinstance learning models. However, the 2-hop DS strategy also has the wrong labeling problem that already exists in 1-hop DS. Simply mixing the two sets of sentences together may mislead the prediction, especially when there is a great disparity in their size. In this paper, we propose REDS2 2 , a new neural relation extraction method in the multiinstance learning paradigm, and design a hierarchical model structure to fuse information from 1-hop and 2-hop DS. We evaluate REDS2 on a widely used benchmark dataset and show that it consistently outperforms various baseline models by a large margin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We summarize our contributions as three-fold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce 2-hop distant supervision as an 2 stands for relation extraction with 2-hop DS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "extension to the conventional distant supervision, and leverage entity pairs in Web tables as anchors to find additional supporting sentences to further improve RE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose REDS2, a new neural relation extraction method based on 2-hop DS and has achieved new state-of-the-art performance in the benchmark dataset (Riedel et al., 2010) .",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We release both our source code and an augmented benchmark dataset that has entity pairs aligned with those in Web tables, to facilitate future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Distant Supervision. One main drawback of traditional supervised relation extraction models (Zelenko et al., 2003; Mooney and Bunescu, 2006) is they require adequate amounts of annotated training data, which is time consuming and labor intensive. To address this issue, Mintz et al. (2009) proposes distant supervision (DS) to automatically label data by aligning plain text with Freebase. However, DS inevitably accompanies with the wrong labeling problem. To alleviate the noise brought by DS, Riedel et al. (2010) and Hoffmann et al. (2011) introduce multi-instance learning mechanism, which is originally used to combat the problem of ambiguously-labeled training data when predicting the activity of different drugs (Dietterich et al., 1997) . Neural Relation Extraction. Early stage relation extraction (RE) methods use features extracted by NLP tools and strongly rely on the quality of features. Due to the recent success of neural models in different NLP tasks, many researchers have investigated the possibility of using neural networks to build end-to-end relation extraction models. Zeng et al. (2014) uses convolutional neural network (CNN) to encode sentences, which is further improved through piecewise-pooling (Zeng et al., 2015) . Adel and Sch\u00fctze (2017) and Gupta et al. (2016) use neural networks for joint entity and relation extraction. More advanced network architectures like Tree-LSTM (Miwa and Bansal, 2016) and Graph Convolution Network (Vashishth et al., 2018) are also adopted to learn better representations by using syntactic features like dependency trees. Most recent models also incorporate neural attention technology (Lin et al., 2016) as an (Zeng et al., 2015) . We then use selective attention and bag aggregation to get the final representation, based on which a classifier predicts scores for each candidate relation.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Zelenko et al., 2003;",
"ref_id": "BIBREF23"
},
{
"start": 115,
"end": 140,
"text": "Mooney and Bunescu, 2006)",
"ref_id": "BIBREF13"
},
{
"start": 270,
"end": 289,
"text": "Mintz et al. (2009)",
"ref_id": "BIBREF11"
},
{
"start": 496,
"end": 516,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF15"
},
{
"start": 521,
"end": 543,
"text": "Hoffmann et al. (2011)",
"ref_id": "BIBREF8"
},
{
"start": 721,
"end": 746,
"text": "(Dietterich et al., 1997)",
"ref_id": "BIBREF4"
},
{
"start": 1095,
"end": 1113,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF25"
},
{
"start": 1227,
"end": 1246,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 1249,
"end": 1272,
"text": "Adel and Sch\u00fctze (2017)",
"ref_id": "BIBREF0"
},
{
"start": 1277,
"end": 1296,
"text": "Gupta et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 1410,
"end": 1433,
"text": "(Miwa and Bansal, 2016)",
"ref_id": "BIBREF12"
},
{
"start": 1464,
"end": 1488,
"text": "(Vashishth et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 1653,
"end": 1671,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 1678,
"end": 1697,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "improvement to at-least-one multi-instance learning (Zeng et al., 2015) . Han et al. (2018) further develops a hierarchical attention scheme to utilize the relation correlations and help predictions for long-tail relations.",
"cite_spans": [
{
"start": 52,
"end": 71,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 74,
"end": 91,
"text": "Han et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Web (Venetis et al., 2011; Mu\u00f1oz et al., 2014; Ritze et al., 2015) . Given one table, the main idea is to first link cells to entities in KB. We can then use existing relations between linked entities to infer relations between columns and extract new facts by generalizing to all rows. However, this method requires a high overlap between table and KB, which is hampered by KB incompleteness. The other approach tries to leverage features extracted from the table header and column names (Ritze and Bizer, 2017; Cannaviccio et al., 2018) . Unfortunately, a large portion of Web tables miss such metadata or contain limited information, and the second approach will fail in such cases. Although the focus of this paper is the RE task, we believe the idea of connecting Web tables and plain texts using DS can potentially benefit table understanding as well.",
"cite_spans": [
{
"start": 4,
"end": 26,
"text": "(Venetis et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 27,
"end": 46,
"text": "Mu\u00f1oz et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 47,
"end": 66,
"text": "Ritze et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 489,
"end": 512,
"text": "(Ritze and Bizer, 2017;",
"ref_id": "BIBREF16"
},
{
"start": 513,
"end": 538,
"text": "Cannaviccio et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Given a set of sentences S = {s 1 , s 2 , ...} and a target entity pair (h, t), we will leverage the directly associated sentence bag S h,t \u2286 S by 1-hop distant supervision (1-hop DS bag), and the table expanded sentence bag S T h,t \u2286 S by 2-hop distant supervision (2-hop DS bag), for relation extraction. S h,t contains all instances mentioning both h and t, while S T h,t is obtained indirectly through the anchors of (h, t) found in Web tables. Following previous work (Riedel et al., 2010; Hoffmann et al., 2011) , we adopt the multi-instance learning paradigm to measure the probability of (h, t) having relation r. Figure 2 gives an overview of our framework with three major components:",
"cite_spans": [
{
"start": 473,
"end": 494,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF15"
},
{
"start": 495,
"end": 517,
"text": "Hoffmann et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 622,
"end": 630,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022 Table-aided Instance Expansion: Given a target entity pair (h, t), we find its anchor entity pairs {(h_1, t_1), (h_2, t_2), ...} through Web tables. We define an anchor entity pair as two entities co-occurring with (h, t) in some table columns at least once. S^T_{h,t} = S_{h_1,t_1} \u222a S_{h_2,t_2} \u222a ... is then exploited to augment the directly associated bag S_{h,t}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022 Sentence Encoding: For each sentence s in bag S h,t or S T h,t , a sentence encoder is used to obtain its semantic representation s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022 Hierarchical Bag Aggregation: Once the embedding of each sentence is learned, we first use a sentence-level attention mechanism to get bag representation h and h T , and then aggregate them for final relation prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Now we introduce how to construct the table expanded sentence bag S T h,t for a given target entity pair (h, t) by 2-hop distant supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table-aided Instance Expansion",
"sec_num": "3.1"
},
{
"text": "Web tables have been found to contain rich facts of entities and relations. It is estimated that out of a total of 14.1 billion tables on the Web, 154 million tables contain relational data (Cafarella et al., 2008) and Wikipedia alone is the source of nearly 1.6 million relational tables (Bhagavatula et al., 2015) . Columns of a Wikipedia table can be classified into one of the following data types: 'empty', 'named entity', 'number', 'date expression', 'long text' and 'other' (Zhang, 2017) . Here we only focus on named entity columns (NEcolumns) and the Wikipedia page title, which can be easily linked to KB entities. These entities can be further categorized as:",
"cite_spans": [
{
"start": 190,
"end": 214,
"text": "(Cafarella et al., 2008)",
"ref_id": "BIBREF2"
},
{
"start": 289,
"end": 315,
"text": "(Bhagavatula et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 481,
"end": 494,
"text": "(Zhang, 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Web Tables",
"sec_num": "3.1.1"
},
{
"text": "A topic entity e t that the table is centered around. We refer to the Wikipedia article where the table is found and take the entity it describes as e t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web Tables",
"sec_num": "3.1.1"
},
{
"text": "Subject entities E s = {e s 1 , e s 2 , ...} that can act as primary keys of the table. Following previous work on Web table analysis (Venetis et al., 2011) , we select the leftmost NE-column as subject column and its entities as E s .",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "(Venetis et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Web Tables",
"sec_num": "3.1.1"
},
{
"text": "Body entities E = {e 1,1 , e 1,2 , ...} that compose the rest of the table. All entities in nonsubject NE-columns are considered as E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web Tables",
"sec_num": "3.1.1"
},
{
"text": "In the conventional distant supervision setting, each entity pair (h, t) is associated with a bag of sentences S h,t that directly mention h and t. The intuition behind 2-hop distant supervision is, if (h i , t i ) and (h j , t j ) potentially hold the same relation, we can treat them as anchor entity pairs for each other, and then use the 1-hop DS bag S h j ,t j to help with the prediction for (h i , t i ) and vice versa. In this paper, we extract anchor entity pairs with the help of Web tables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2-hop Distant Supervision",
"sec_num": "3.1.2"
},
{
"text": "We notice that owing to the semi-structured nature of tables, (1) subject entities can usually be connected with the topic entity by the same relation. (2) Non-subject columns of a table usually have binary relationships to or are properties of the subject column. Body entities in the same column share common relations with their corresponding subject entities. For example, in Figure 1 , the topic entity is \"Mr. Basketball USA\"; column 1 is the subject column and contains a list of winners of \"Mr. Basketball USA\"; column 2 and column 3 are high school and city of the subject entity.",
"cite_spans": [],
"ref_spans": [
{
"start": 380,
"end": 388,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "2-hop Distant Supervision",
"sec_num": "3.1.2"
},
{
"text": "Formally, we consider two entity pairs (h i , t i ) and (h j , t j ) as anchored if there exists a Web table such that either criterion below is met:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2-hop Distant Supervision",
"sec_num": "3.1.2"
},
{
"text": "\u2022 h_i = h_j = e_t and t_i, t_j \u2208 E_s. \u2022 h_i \u2208 E_s or t_i \u2208 E_s, (h_i, h_j) is in the same column (and so is (t_i, t_j)), and (h_i, t_i) is in the same row (and so is (h_j, t_j)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2-hop Distant Supervision",
"sec_num": "3.1.2"
},
{
"text": "The 2-hop DS bag S^T_{h,t} is then constructed as the union of the S_{h_i,t_i}'s, where (h_i, t_i) is an anchor entity pair of (h, t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2-hop Distant Supervision",
"sec_num": "3.1.2"
},
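{
"text": "To make the two criteria concrete, the following is a minimal Python sketch of enumerating anchor entity pairs from a single linked table and building the 2-hop DS bag; the table representation and helper names are hypothetical, not the released implementation:\n\ndef anchor_groups(topic_entity, subject_col, body_cols):\n    # Enumerate groups of mutually anchored entity pairs from one linked table.\n    # topic_entity: KB id of the page topic entity e_t\n    # subject_col: list of KB ids, one subject entity per row (E_s)\n    # body_cols: non-subject NE-columns, each a list aligned with subject_col\n    groups = []\n    # Criterion 1: (e_t, e_s) pairs over the subject column anchor each other.\n    groups.append([(topic_entity, e_s) for e_s in subject_col if e_s])\n    # Criterion 2: (subject, body) pairs taken row by row from the same\n    # non-subject column anchor each other.\n    for col in body_cols:\n        groups.append([(e_s, e_b) for e_s, e_b in zip(subject_col, col)\n                       if e_s and e_b])\n    return groups\n\ndef two_hop_bag(target_pair, groups, one_hop_bags):\n    # Union of the 1-hop bags of all anchors of target_pair (the 2-hop DS bag).\n    bag = set()\n    for pairs in groups:\n        if target_pair in pairs:\n            for p in pairs:\n                if p != target_pair:\n                    bag |= one_hop_bags.get(p, set())\n    return bag",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2-hop Distant Supervision",
"sec_num": "3.1.2"
},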
{
"text": "Given a sentence s consisting of n words s = {w 1 , w 2 , ..., w n }, we use a neural network with an embedding layer and an encoding layer to obtain its low-dimensional vector representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Encoding",
"sec_num": "3.2"
},
{
"text": "Each token is first fed into an embedding layer to embed both semantic and positional information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "3.2.1"
},
{
"text": "Word Embedding maps words to vectors of real numbers which preserve syntactic and semantic information of words. Here we get a vector representation w i \u2208 R kw for each word from a pre-trained word embedding matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "3.2.1"
},
{
"text": "Position Embedding was proposed by Zeng et al. (2014) . Position embedding is used to embed the positional information of each word relative to the head and tail mention. A position embedding matrix is learned in training to compute position representation p i \u2208 R kp\u00d72 .",
"cite_spans": [
{
"start": 35,
"end": 53,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "3.2.1"
},
{
"text": "Finally, we concatenate the word representation w_i and the position representation p_i to build the input representation x_i \u2208 R^{k_i} (where k_i = k_w + k_p \u00d7 2) for each word w_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "3.2.1"
},
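{
"text": "As a concrete illustration, below is a minimal PyTorch sketch of such an embedding layer; the module and dimension names mirror the text (k_w, k_p), but the layer itself is an illustrative assumption, not the released code:\n\nimport torch\nimport torch.nn as nn\n\nclass RelEmbedding(nn.Module):\n    # Concatenate word embeddings with head/tail position embeddings.\n    def __init__(self, vocab_size, k_w, max_rel_pos, k_p):\n        super().__init__()\n        self.word = nn.Embedding(vocab_size, k_w)  # pre-trained in practice\n        self.pos_head = nn.Embedding(2 * max_rel_pos + 1, k_p)\n        self.pos_tail = nn.Embedding(2 * max_rel_pos + 1, k_p)\n        self.max_rel_pos = max_rel_pos\n\n    def forward(self, tokens, head_pos, tail_pos):\n        # tokens: (batch, n) word ids; head_pos, tail_pos: (batch,) indices\n        n = tokens.size(1)\n        idx = torch.arange(n, device=tokens.device).unsqueeze(0)  # (1, n)\n        clip = self.max_rel_pos\n        rel_h = (idx - head_pos.unsqueeze(1)).clamp(-clip, clip) + clip\n        rel_t = (idx - tail_pos.unsqueeze(1)).clamp(-clip, clip) + clip\n        x = torch.cat([self.word(tokens), self.pos_head(rel_h),\n                       self.pos_tail(rel_t)], dim=-1)\n        return x  # (batch, n, k_i) with k_i = k_w + k_p * 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "3.2.1"
},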
{
"text": "A sequence of input representations x = {x 1 , x 2 , ...} with a variable length is then fed through the encoding layer and converted to a fixed-sized sentence representation s \u2208 R k h . There are many existing neural architectures that can serve as the encoding layer, such as CNN (Zeng et al., 2014) , PCNN (Zeng et al., 2015) and LSTM-RNN (Miwa and Bansal, 2016) . We simply adopt PCNN here, which has been shown very powerful and efficient by a number of previous RE works.",
"cite_spans": [
{
"start": 282,
"end": 301,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 309,
"end": 328,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 342,
"end": 365,
"text": "(Miwa and Bansal, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.2.2"
},
{
"text": "PCNN is an extension to CNN, which first slides a convolution kernel with a window size m over the input sequence to get the hidden vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = CNN(x i\u2212 m\u22121 2 :i+ m\u22121 2 ),",
"eq_num": "(1)"
}
],
"section": "Encoding Layer",
"sec_num": "3.2.2"
},
{
"text": "A piecewise max-pooling is then applied over the hidden vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[s (1) ] j = max 1\u2264i\u2264i 1 {[h i ] j }, [s (2) ] j = max i i +1\u2264i\u2264i 2 {[h i ] j },",
"eq_num": "(2)"
}
],
"section": "Encoding Layer",
"sec_num": "3.2.2"
},
{
"text": "[s (3) ] j = max i 2 +1\u2264i\u2264n {[h i ] j },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.2.2"
},
{
"text": "where i 1 and i 2 are head and tail positions. The final sentence representation s is composed by concatenating these three pooling results s = [s (1) ; s (2) ; s (3) ].",
"cite_spans": [
{
"start": 147,
"end": 150,
"text": "(1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.2.2"
},
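{
"text": "Below is a minimal PyTorch sketch of the piecewise pooling described above, assuming 0-indexed positions, nonempty segments, and the tanh nonlinearity used by Zeng et al. (2015); it is a hypothetical re-implementation, not the released code:\n\nimport torch\nimport torch.nn as nn\n\nclass PCNN(nn.Module):\n    # 1-D convolution followed by three-segment (piecewise) max-pooling.\n    def __init__(self, k_i, k_h, m=3):\n        super().__init__()\n        self.conv = nn.Conv1d(k_i, k_h, kernel_size=m, padding=(m - 1) // 2)\n\n    def forward(self, x, i1, i2):\n        # x: (batch, n, k_i); i1, i2: (batch,) head/tail positions, i1 < i2\n        h = self.conv(x.transpose(1, 2))  # (batch, k_h, n)\n        idx = torch.arange(h.size(2), device=x.device).unsqueeze(0)  # (1, n)\n        segments = [idx <= i1.unsqueeze(1),  # tokens up to the head\n                    (idx > i1.unsqueeze(1)) & (idx <= i2.unsqueeze(1)),\n                    idx > i2.unsqueeze(1)]  # tokens after the tail\n        pools = []\n        for seg in segments:\n            masked = h.masked_fill(~seg.unsqueeze(1), float('-inf'))\n            pools.append(masked.max(dim=2).values)  # (batch, k_h)\n        return torch.tanh(torch.cat(pools, dim=1))  # s = [s(1); s(2); s(3)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "3.2.2"
},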
{
"text": "After we get sentence representations {s 1 , s 2 , ...} and {s T 1 , s T 2 , ...} for S and S T , to fuse key information from these two bags, we adopt a hierarchical aggregation design to obtain the final representation r for prediction. We first get bag representation h and h T using a sentence-level selective attention, and then employ a bag-level aggregation to compute r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Bag Aggregation",
"sec_num": "3.3"
},
{
"text": "Since the wrong labeling problem inevitably exists in both 1-hop and 2-hop distant supervision, here we use selective attention to assign different weights to different sentences given relation r and de-emphasize the noisy sentences. The attention is caculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level Selective Attention",
"sec_num": "3.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e i = q T r s i , \u03b1 i = exp(e i ) n j=1 exp(e j ) ,",
"eq_num": "(3)"
}
],
"section": "Sentence-level Selective Attention",
"sec_num": "3.3.1"
},
{
"text": "h = n i=1 \u03b1 i s i ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level Selective Attention",
"sec_num": "3.3.1"
},
{
"text": "where q r is a query vector assigned to relation r. h and h T are computed respectively for the two bags S and S T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level Selective Attention",
"sec_num": "3.3.1"
},
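{
"text": "In code, the selective attention above reduces to a few tensor operations; a minimal PyTorch sketch (the function name is ours):\n\nimport torch\n\ndef selective_attention(S, q_r):\n    # S: (n, k) stacked sentence representations of one bag\n    # q_r: (k,) query vector of the candidate relation r\n    e = S @ q_r                      # attention logits e_i = q_r^T s_i\n    alpha = torch.softmax(e, dim=0)  # normalized weights alpha_i\n    return alpha @ S                 # bag representation h = sum_i alpha_i s_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level Selective Attention",
"sec_num": "3.3.1"
},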
{
"text": "Since 2-hop DS bag S T is collected indirectly through anchor entity pairs in Web tables, despite that it brings abundant information, it also contains a massive amount of noise. Thus treating S T equally as S may mislead the prediction, especially when their sizes are extremely imbalanced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "To automatically decide how to balance between S and S T , we utilize information from h, h T and q r to predict a weight \u03b2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 = \u03c3(W[h; h T ; q r ] + b),",
"eq_num": "(4)"
}
],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "where vector W and scalar b are learnable variables and \u03c3 is the sigmoid function. Next, \u03b2 is used as a weight to fuse information from 1-hop DS and 2-hop DS, determined by S and S T of the current target entity pair and relation r. We then obtain the final representation r as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r = \u03b2h + (1 \u2212 \u03b2)h T ,",
"eq_num": "(5)"
}
],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "Finally, we define the conditional probability P (r|S, S T , \u03b8) as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (r|S, S T , \u03b8) = exp(o r ) nr k=1 exp(o k )",
"eq_num": "(6)"
}
],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "where o is the score vector for current target entity pair having each relation,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o = Mr + d,",
"eq_num": "(7)"
}
],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
{
"text": "here M is the representation matrix of relations, which shares weights with q r 's. d is a learnable bias term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},
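{
"text": "Putting Eqs. (4)-(7) together, here is a minimal PyTorch sketch of the bag-level aggregation; module and parameter names are ours, and details such as how the relation query is chosen at training versus test time are simplified:\n\nimport torch\nimport torch.nn as nn\n\nclass BagAggregator(nn.Module):\n    # Gate between 1-hop and 2-hop bag representations, then score relations.\n    def __init__(self, k, n_rel):\n        super().__init__()\n        self.M = nn.Parameter(torch.randn(n_rel, k) * 0.01)  # rows double as q_r\n        self.d = nn.Parameter(torch.zeros(n_rel))            # bias term d\n        self.gate = nn.Linear(3 * k, 1)                      # W[h; h^T; q_r] + b\n\n    def forward(self, h, h_T, r):\n        q_r = self.M[r]                                      # shared query vector\n        beta = torch.sigmoid(self.gate(torch.cat([h, h_T, q_r])))  # Eq. (4)\n        rep = beta * h + (1 - beta) * h_T                    # Eq. (5)\n        o = self.M @ rep + self.d                            # Eq. (7)\n        return o  # softmax over o gives P(r | S, S^T, theta) in Eq. (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-level Aggregation",
"sec_num": "3.3.2"
},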
{
"text": "We adopt the cross-entropy loss as the training objevtive function. Given a set of target entity pairs with relations \u03c0 = {(h 1 , t 1 , r 1 ), (h 2 , t 2 , r 2 ), ...}, we define the loss function as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J(\u03b8) = \u2212 1 |\u03c0| |\u03c0| i=1 logP (r i |S h i ,t i , S T h i ,t i , \u03b8).",
"eq_num": "(8)"
}
],
"section": "Optimization",
"sec_num": "3.4"
},
{
"text": "All models are trained with stochastic gradient descent (SGD) to minimize the objective function. The same sentence encoder is used to encode S and S T . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "3.4"
},
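{
"text": "A minimal sketch of one training step, assuming a model object that wraps the sentence encoder, selective attention and bag-level aggregation above and returns the relation score vector o:\n\nimport torch\nimport torch.nn.functional as F\n\ndef training_step(model, batch, optimizer):\n    # One SGD step minimizing J(theta) in Eq. (8) over a batch of\n    # (1-hop bag, 2-hop bag, relation label) triples.\n    losses = []\n    for S, S_T, r in batch:\n        o = model(S, S_T, r)  # relation scores, shape (n_rel,)\n        losses.append(F.cross_entropy(o.unsqueeze(0), torch.tensor([r])))\n    loss = torch.stack(losses).mean()  # -mean log P(r_i | S, S^T, theta)\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    return loss.item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization",
"sec_num": "3.4"
},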
{
"text": "We evaluate our model on the New York Times (NYT) dataset developed by Riedel et al. (2010) , which is widely used in recent works. The dataset has 53 relations including a special relation NA which indicates none of the other 52 relations exists between the head and tail entity.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Datasets and Evaluation",
"sec_num": "4"
},
{
"text": "We use the WikiTable corpus collected by Bhagavatula et al. 2015as our table source. It originally contains around 1.65M tables extracted from Wikipedia pages. Since the NYT dataset is already linked to Freebase, we perform entity linking on the table cells and the Wikipedia page titles using existing mapping from Wikipedia URL to Freebase MID (Machine Identifier). We then align the table corpus with NYT and construct S T for entity pairs as detailed in section 3.1. For both training and testing, we only use entity pairs and sentences in the original NYT training data for tableaided instance expansion. We set the max size of S T as 300, and randomly sample 300 sentences if |S T | > 300. Statistics of our final dataset is summarized in Table 1 . One can see that 38.18% and 46.79% of relational facts (i.e., entity pairs holding non-NA relations) respectively in the training and testing set can potentially benefit from leveraging 2-hop DS.",
"cite_spans": [],
"ref_spans": [
{
"start": 745,
"end": 752,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments 4.1 Datasets and Evaluation",
"sec_num": "4"
},
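{
"text": "The cap on |S^T| amounts to a simple random subsample; a one-function sketch (the helper name is ours, and the bag is assumed to be a list):\n\nimport random\n\nMAX_2HOP = 300\n\ndef cap_bag(sentences):\n    # Keep at most MAX_2HOP randomly sampled sentences from a 2-hop DS bag.\n    if len(sentences) > MAX_2HOP:\n        return random.sample(sentences, MAX_2HOP)\n    return list(sentences)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Datasets and Evaluation",
"sec_num": "4"
},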
{
"text": "Following prior work (Mintz et al., 2009) , we use the testing set for held-out evaluation, and evaluate models by comparing the predicted relational facts with those in Freebase. For evaluation, we rank the extracted relational facts based on model confidence and plot precision-recall curves. In addition, we also show the area under the curve (AUC) and precision values at specific recall rates to conduct a more comprehensive comparison. ",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Datasets and Evaluation",
"sec_num": "4"
},
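{
"text": "This held-out protocol can be reproduced with standard tooling; a minimal scikit-learn sketch, assuming one confidence score and one Freebase-membership label per extracted fact:\n\nfrom sklearn.metrics import auc, precision_recall_curve\n\ndef held_out_eval(scores, labels):\n    # scores: model confidence for each extracted (h, r, t) fact\n    # labels: 1 if the fact is in Freebase, else 0\n    precision, recall, _ = precision_recall_curve(labels, scores)\n    return auc(recall, precision), precision, recall",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Datasets and Evaluation",
"sec_num": "4"
},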
{
"text": "We compare REDS2 with the following baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "PCNN+ATT (Lin et al., 2016) . This model uses a PCNN encoder combined with selective attention over sentences. Since this is also the base block of our model, we also refer to it as BASE in this paper.",
"cite_spans": [
{
"start": 9,
"end": 27,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "PCNN+HATT (Han et al., 2018) . This is another PCNN based relation extraction model, where the authors use hierarchical attention to model the semantic correlations among relations. RESIDE (Vashishth et al., 2018) . It uses Graph Convolutional Networks (GCN) for sentence encoding, and also leverages relevant side information like relation alias and entity type.",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "(Han et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 189,
"end": 213,
"text": "(Vashishth et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "Results of PCNN+HATT and RESIDE are directly taken from the code repositories released by the authors. For PCNN+ATT, we report results obtained by our reproduced model, which are close to those shown in (Lin et al., 2016) . To simply verify the effectiveness of adding extra supporting sentences from 2-hop DS, we also compare the following vanilla method with PCNN+ATT: BASE+MERGE. For each target entity pair (h, t), we simply merge S and S T as one sentence bag, and apply the trained PCNN+ATT (or, BASE) model.",
"cite_spans": [
{
"start": 203,
"end": 221,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "We preprocess the WikiTable corpus with PySpark to build index for anchor entity pairs. On a single machine with two 8-core E5 CPUs and 256 GB memory, this processing takes around 20 minutes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
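{
"text": "A minimal PySpark sketch of the kind of indexing involved; the input schema and file names are hypothetical, and the released preprocessing may differ:\n\nfrom pyspark.sql import SparkSession, functions as F\n\nspark = SparkSession.builder.appName('anchor-index').getOrCreate()\n\n# Hypothetical input: one row per entity-linked cell pair, with columns\n# (table_id, column_id, subject_mid, body_mid).\ncells = spark.read.parquet('linked_wikitable_cells.parquet')\n\n# Group co-occurring (subject, body) pairs by table and column; every pair\n# in a group is an anchor for every other pair in the same group.\nindex = (cells.groupBy('table_id', 'column_id')\n         .agg(F.collect_set(F.struct('subject_mid', 'body_mid'))\n              .alias('anchor_pairs')))\nindex.write.parquet('anchor_index.parquet')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},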
{
"text": "We use word embeddings from (Lin et al., 2016) for initialization, which are learned by word2vec tool 3 on NYT corpus. The vocabulary is composed of words that appear more than 100 times in the corpus and words in an entity mention are concatenated as a single word. To see the effect of 2-hop DS more directly, we set most parameters in REDS2 following Lin et al. (2016) . Since the original NYT dataset only contains training and testing set, we randomly sample 20% training data for development. We first pre-train a PCNN+ATT model with only S and sentence-level selective attention. This BASE model converges in around 100 epochs. We then fine-tune the entire model with S T and bag-level aggregation added, which can finish within 50 epochs. Some key parameter settings in REDS2 are summarized in Table 2 .",
"cite_spans": [
{
"start": 28,
"end": 46,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 354,
"end": 371,
"text": "Lin et al. (2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 802,
"end": 809,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "In testing phase, inference using 2-hop DS is slower, because the average size of S T is about 100 times that of S. With single 2080ti GPU, one full pass of testing data takes around 37s using REDS2, compared with 12s using BASE model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "Evaluation results on all target entity pairs in testing set are shown in Figure 3 and Table 3 , from which we make the following observations:",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 87,
"end": 94,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4.1"
},
{
"text": "(1) Figure 3 shows all models obtain a reasonable precision when recall is smaller than 0.05. With the recall gradually increasing, the performance of models with 2-hop DS drops slower than those existing methods without. From Figure 3, we can see simply merging S T with S in BASE+MERGE can boost the performance of basic PCNN+ATT model, and even achieves higher precision than state-of-the-art models like PCNN+HATT when recall is greater than 0.3. This demonstrates that models utilizing 2-hop DS are more robust and remain a reasonable precision when including more lower-ranked relational facts which tend to be more challenging to predict because of insufficient evidence.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 12,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 222,
"end": 233,
"text": "From Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4.1"
},
{
"text": "(2) As shown in both Figure 3 and Table 3 , REDS2 achieves the best results among all the models. Even when compared with PCNN+HATT and RESIDE which adopt extra relation hierarchy and side information from KB, our model still enjoys a significant performance gain. This is because our method can take advantage of the rich entity pair correlations in Web tables and leverage the extra information brought by 2-hop DS. We anticipate our REDS2 model can be further improved by using more advanced sentence encoders and extra mechanisms like reinforcement learning (Feng et al., 2018) and adversarial training (Wu et al., 2017 ), which we leave for future work.",
"cite_spans": [
{
"start": 562,
"end": 581,
"text": "(Feng et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 607,
"end": 623,
"text": "(Wu et al., 2017",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 34,
"end": 41,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Overall Evaluation Results",
"sec_num": "4.4.1"
},
{
"text": "To further show the effect of our hierarchical bag aggregation design, here we also plot precisionrecall curves in Figure 4 on a subset of entity pairs in the test set (i.e., 4832 in total according to Table 1 ) whose table expanded sentence bag S T is not empty.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 202,
"end": 210,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Effect of Hierarchical Bag Aggregation",
"sec_num": "4.4.2"
},
{
"text": "One main challenge of using 2-hop DS is it brings more noise. As shown in Table 1 , for en-Test Mode SINGLE MULTIPLE ONE TWO ALL Metric P@0.1 P@0.2 P@0.3 AUC P@0.1 P@0.2 P@0.3 AUC P@0.1 P@0.2 P@0.3 AUC P@0.1 P@0.2 P@0. tity pair with nonempty S T , the size of S T is usually tens of times the size of S. From Figure 4 we can see BASE+MERGE performs much worse compared with PCNN+ATT when recall is smaller than 0.2. This is because 2-hop DS bag tends to be much larger than 1-hop DS bag, and the model has a larger chance to attend to the noisy sentences obtained from 2-hop DS. While ignoring the information in its directly associated sentences. We alleviate this problem by introducing hierarchical structure to first aggregate the two sets separately and then weight and sum them together. The proposed REDS2 model has a comparable precision with PCNN+ATT in the beginning and gradually outperform it.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 310,
"end": 318,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effect of Hierarchical Bag Aggregation",
"sec_num": "4.4.2"
},
{
"text": "Number of sentences from 1-hop DS. In the originally testing set, there are 79176 entity pairs that are associated with only one sentence, out of which 1149 actually have relations. We hope our model can improve performance on these longtail entities. Following Lin et al. (2016) , we design the following test settings to evaluate the effect of sentence number: the \"SINGLE\" test setting contains all entity pairs that correspond to only one sentence; the \"MULTIPLE\" test setting contains the rest of entity pairs that have at least two sentences associated. We further construct the \"ONE\" testing setting where we randomly select one sentence for each entity pair; the \"TWO\" setting where we randomly select two sentences for each entity pair and the \"ALL\" setting where Relation: country.capital 1-hop ... the golden gate bridge and the petronas towers in kuala lumpur, malaysia, was experienced ... 2-hop a friend from cardiff , the capital city of wales , lives for complex ... Table 6 : An example for case study, where the sentence with the highest attention weight is selected respectively from 1-hop and 2-hop sentence bag.",
"cite_spans": [
{
"start": 262,
"end": 279,
"text": "Lin et al. (2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 983,
"end": 990,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of Sentence Number",
"sec_num": "4.4.3"
},
{
"text": "we use all the associated sentences from MUL-TIPLE. We use all sentences in S T for each entity pair if it is nonempty. Results are shown in Table 4 , from which we can see that REDS2 and BASE+MERGE have 25.0% and 18.7% improvements under AUC compared with PCNN+ATT in the SINGLE setting. Although the performance of all models generally improves as the sentence number increases in MULTIPLE setting, models leveraging 2-hop DS are more stable and have smaller changes. These observations indicate that 2-hop DS is helpful when information obtained by 1-hop DS is insufficient.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Effect of Sentence Number",
"sec_num": "4.4.3"
},
{
"text": "Number of sentences from 2-hop DS. We also evaluate how the number of sentences obtained by 2-hop DS will affect the performance of our proposed model. In Table 5 , we show the performance of REDS2 with different numbers of sentences sampled from S T . We observe that: (1) Performance of REDS2 improves as the number of sentences sampled increases. This shows that the selective attention over S T can effectively take advantage of the extra information from 2-hop DS while filtering out noisy sentences. (2) Even with 50 randomly sampled sentences, our model REDS2 still has a higher AUC than all baselines in Table 3 . This indicates information obtained by 2-hop DS is redundant, even a small portion can be beneficial to relation extraction. How to sample a representative set effectively is worth further exploring in future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 612,
"end": 619,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effect of Sentence Number",
"sec_num": "4.4.3"
},
{
"text": "We observe that there are large amounts of entity pairs in the table corpus that have no associated sentences but have anchor entity pairs mentioned in the text corpus. By leveraging 2-hop distant supervision, we can do relation extraction for this set of entity pairs. We extract a total number of 251917 entity pairs from the WikiTable dataset which do not exist in the NYT dataset but have at least one anchor entity pair that appear in the original NYT training data. We randomly sample 10000 examples and evaluate our trained model on them. Surprisingly, the relation extraction result is even better than the result on the NYT test data in Table 3 , with an overall AUC of 54.7 and a P@0.3 of 71.1. This can be explained partly by two observations: (1) The table corpus generates higher-quality entity pairs, 18% of extracted entity pairs have non-NA relations, compared with only 1.8% in NYT test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 646,
"end": 653,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "RE for Entity Pairs with Empty 1-hop Sentence Bag",
"sec_num": "4.5"
},
{
"text": "(2) The newly extracted entity pairs have 14 useful anchor entity pairs and 175 2-hop DS sentences on average, which give ample information for prediction. This study shows that for two entities that have no directly associated sentences, it is possible to utilize the 2-hop DS to predict their relations accurately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RE for Entity Pairs with Empty 1-hop Sentence Bag",
"sec_num": "4.5"
},
{
"text": "In addition to the motivating example from the training set shown in Figure 1 , we also demonstrate how 2-hop DS helped relation extraction using an example from the testing set in Table 6 . As we can see, the sentence with the highest attention weight in 1-hop DS bag does not express the desired relation between the target entity pair whereas that in 2-hop DS bag clearly indicates the country.capital relation. We also conduct an error analysis by analyzing examples where REDS2 gives worse predictions than BASE (e.g., assigns a lower score to a correct relation or a higher score to a wrong relation), and 50 examples with most disparity in the two methods' scores are selected. We find that 29 examples have wrong labels caused by KB incompleteness and our model in fact makes the right prediction. 11 examples are due to errors in column processing (e.g., errors in NE/subject column selection and entity linking), 9 are caused by anchor entity pairs with differet relations (e.g., (Greece, Atlanta) and (Mexico, Xalapa) are in the same table \"National Records in High Jump\" under columns (Nation, Place), but only the latter has relation location.contains), and 1 is because of wrong information in the original table.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 1",
"ref_id": null
},
{
"start": 181,
"end": 188,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study and Error Analysis",
"sec_num": "4.6"
},
{
"text": "This paper introduces 2-hop distant supervision for relation extraction, based on the intuition that entity pairs in relational Web tables often share common relations. Given a target entity pair, we define and find its anchor entity pairs via Web tables and collect all sentences that mention the anchor entity pairs to help relation prediction. We develop a new neural RE method REDS2 in the multi-instance learning paradigm which fuses information from 1-hop DS and 2-hop DS using a hierarchical model structure, and substantially outperforms existing RE methods on a benchmark dataset. Interesting future work includes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "(1) Given that information from 2-hop DS is redundant and noisy, we can explore smarter sampling and/or better bag-level aggregation methods to capture the most representative information. (2) Metadata in Web tables like headers and column names also contain rich information, which can be incorporated to further improve RE performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Our source code and datasets are at https:// github.com/sunlab-osu/REDS2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://code.google.com/archive/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was sponsored in part by the Army Research Office under cooperative agreements NSF Grant IIS1815674, W911NF-17-1-0412, Fujitsu gift grant, and Ohio Supercomputer Center [8]. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Global normalization of convolutional neural networks for joint entity and relation classification",
"authors": [
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1723--1729",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heike Adel and Hinrich Sch\u00fctze. 2017. Global normal- ization of convolutional neural networks for joint en- tity and relation classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1723-1729.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tabel: entity linking in web tables",
"authors": [
{
"first": "Chandra",
"middle": [],
"last": "Sekhar Bhagavatula",
"suffix": ""
},
{
"first": "Thanapon",
"middle": [],
"last": "Noraset",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
}
],
"year": 2015,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "425--441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandra Sekhar Bhagavatula, Thanapon Noraset, and Doug Downey. 2015. Tabel: entity linking in web tables. In International Semantic Web Conference, pages 425-441.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Webtables: exploring the power of tables on the web",
"authors": [
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Daisy",
"middle": [
"Zhe"
],
"last": "Halevy",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the VLDB Endowment",
"volume": "1",
"issue": "",
"pages": "538--549",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J Cafarella, Alon Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. Webtables: ex- ploring the power of tables on the web. Proceedings of the VLDB Endowment, 1(1):538-549.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Leveraging wikipedia table schemas for knowledge graph augmentation",
"authors": [
{
"first": "Matteo",
"middle": [],
"last": "Cannaviccio",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Ariemma",
"suffix": ""
},
{
"first": "Denilson",
"middle": [],
"last": "Barbosa",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 21st International Workshop on the Web and Databases",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matteo Cannaviccio, Lorenzo Ariemma, Denilson Bar- bosa, and Paolo Merialdo. 2018. Leveraging wikipedia table schemas for knowledge graph aug- mentation. In Proceedings of the 21st International Workshop on the Web and Databases, page 5. ACM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Solving the multiple instance problem with axis-parallel rectangles. Artificial intelligence",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Lathrop",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lozano-P\u00e9rez",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "89",
"issue": "",
"pages": "31--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G Dietterich, Richard H Lathrop, and Tom\u00e1s Lozano-P\u00e9rez. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artificial in- telligence, 89(1-2):31-71.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Reinforcement learning for relation classification from noisy data",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xi- aoyan Zhu. 2018. Reinforcement learning for rela- tion classification from noisy data. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Table filling multi-task recurrent neural network for joint entity and relation extraction",
"authors": [
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Andrassy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2537--2547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pankaj Gupta, Hinrich Sch\u00fctze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural net- work for joint entity and relation extraction. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 2537-2547.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hierarchical relation extraction with coarse-to-fine grained attention",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018. Hierarchical relation extraction with coarse-to-fine grained attention. In Proceedings of EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computa- tional Linguistics: Human Language Technologies- Volume 1, pages 541-550. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distant supervision for relation extraction with sentence-level attention and entity descriptions",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Thirty-First AAAI Conference on Artificial Intel- ligence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2124--2133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 2124-2133.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 2-Volume 2, pages 1003-1011. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "End-to-end relation extraction using lstms on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1105--1116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1105- 1116.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Subsequence kernels for relation extraction",
"authors": [
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
},
{
"first": "Razvan",
"middle": [
"C"
],
"last": "Bunescu",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "171--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond J Mooney and Razvan C Bunescu. 2006. Subsequence kernels for relation extraction. In Ad- vances in neural information processing systems, pages 171-178.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using linked data to mine rdf from wikipedia's tables",
"authors": [
{
"first": "Emir",
"middle": [],
"last": "Mu\u00f1oz",
"suffix": ""
},
{
"first": "Aidan",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Alessandra",
"middle": [],
"last": "Mileo",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 7th ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "533--542",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emir Mu\u00f1oz, Aidan Hogan, and Alessandra Mileo. 2014. Using linked data to mine rdf from wikipedia's tables. In Proceedings of the 7th ACM international conference on Web search and data mining, pages 533-542. ACM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148-163. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Matching web tables to dbpedia -A feature utility study",
"authors": [
{
"first": "Dominique",
"middle": [],
"last": "Ritze",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 20th International Conference on Extending Database Technology",
"volume": "",
"issue": "",
"pages": "210--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominique Ritze and Christian Bizer. 2017. Match- ing web tables to dbpedia -A feature utility study. In Proceedings of the 20th International Conference on Extending Database Technology, EDBT 2017, Venice, Italy, March 21-24, 2017., pages 210-221.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Matching html tables to dbpedia",
"authors": [
{
"first": "Dominique",
"middle": [],
"last": "Ritze",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lehmberg",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 5th International Conference on Web Intelligence, Mining and Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominique Ritze, Oliver Lehmberg, and Christian Bizer. 2015. Matching html tables to dbpedia. In Proceedings of the 5th International Conference on Web Intelligence, Mining and Semantics, page 10. ACM.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multi-instance multi-label learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 455- 465.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Reside: Improving distantly-supervised neural relation extraction using side information",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sai Suman",
"middle": [],
"last": "Prayaga",
"suffix": ""
},
{
"first": "Chiranjib",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1257--1266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha Talukdar. 2018. Reside: Improving distantly-supervised neural rela- tion extraction using side information. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1257-1266.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recovering semantics of tables on the web",
"authors": [
{
"first": "Petros",
"middle": [],
"last": "Venetis",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Halevy",
"suffix": ""
},
{
"first": "Jayant",
"middle": [],
"last": "Madhavan",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
},
{
"first": "Warren",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Gengxin",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Chung",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the VLDB Endowment",
"volume": "4",
"issue": "",
"pages": "528--538",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petros Venetis, Alon Halevy, Jayant Madhavan, Mar- ius Pa\u015fca, Warren Shen, Fei Wu, Gengxin Miao, and Chung Wu. 2011. Recovering semantics of tables on the web. Proceedings of the VLDB Endowment, 4(9):528-538.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Understanding tables on the web",
"authors": [
{
"first": "Jingjing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haixun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhongyuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kenny",
"middle": [
"Q"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2012,
"venue": "International Conference on Conceptual Modeling",
"volume": "",
"issue": "",
"pages": "141--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingjing Wang, Haixun Wang, Zhongyuan Wang, and Kenny Q Zhu. 2012. Understanding tables on the web. In International Conference on Conceptual Modeling, pages 141-155. Springer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Adversarial training for relation extraction",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Russell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1778--1783",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Wu, David Bamman, and Stuart Russell. 2017. Ad- versarial training for relation extraction. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1778-1783.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine learning research",
"volume": "3",
"issue": "",
"pages": "1083--1106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation ex- traction. Journal of machine learning research, 3(Feb):1083-1106.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distant supervision for relation extraction via piecewise convolutional neural networks",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1753--1762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 1753- 1762.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via con- volutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335-2344.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Effective and efficient semantic table interpretation using tableminer+. Semantic Web",
"authors": [
{
"first": "Ziqi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "8",
"issue": "",
"pages": "921--957",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqi Zhang. 2017. Effective and efficient semantic ta- ble interpretation using tableminer+. Semantic Web, 8(6):921-957.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Overview of our method REDS2. REDS2 first obtains anchors of the target entity pair and constructs a 2-hop DS bag. Sentences in the 1-hop and 2-hop DS bag are individually encoded with a PCNN sentence encoder"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Precision-recall curves for the proposed model and various baselines."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Precision-recall curves on the subset of test entity pairs whose S T is not empty, to better show the effect of hierarchical bag aggregation design."
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>. Aside from plain</td></tr><tr><td>texts, there are large amounts of factual knowl-</td></tr><tr><td>edge in the Web expressed in hundreds of millions</td></tr><tr><td>of tables and other structured lists (Cafarella et al.,</td></tr><tr><td>2008; Venetis et al., 2011), which have not been</td></tr><tr><td>fully explored yet. Table understanding tries to</td></tr><tr><td>match tables to KB and parse the schemas. Ex-</td></tr><tr><td>isting methods for table understanding mainly fall</td></tr><tr><td>into two categories. One is based on local evi-</td></tr><tr><td>dence</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF2": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Dataset statistics. We show statistics of entity pairs that hold non-NA relations separately from overall, as they are important relational facts to discover. Among non-NA entity pairs, 38.18% in training and 46.79% in testing have nonempty S T , which respectively have 190.61 and 217.23 sentences on average."
},
"TABREF4": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Parameter settings in REDS2."
},
"TABREF6": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF8": {
"num": null,
"content": "<table><tr><td colspan=\"5\">max |S T | P@0.1 P@0.2 P@0.3 AUC</td></tr><tr><td>10</td><td>57.9</td><td>55.8</td><td>51.5</td><td>36.2</td></tr><tr><td>50</td><td>69.4</td><td>65.7</td><td>60.8</td><td>42.2</td></tr><tr><td>100</td><td>70.4</td><td>66.7</td><td>62.3</td><td>43.2</td></tr><tr><td>200</td><td>72.8</td><td>68.4</td><td>63.4</td><td>44.0</td></tr><tr><td>300</td><td>75.9</td><td>70.4</td><td>65.5</td><td>44.7</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Comparison on Precision@recall and AUC under different testing settings, detailed in Section 4.4.3."
},
"TABREF9": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Effect of the table expanded sentence bag size |S"
}
}
}
}