{
"paper_id": "D19-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:03:01.771315Z"
},
"title": "Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data",
"authors": [
{
"first": "Ruidong",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Fen",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Leyu",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Open relation extraction (OpenRE) aims to extract relational facts from the open-domain corpus. To this end, it discovers relation patterns between named entities and then clusters those semantically equivalent patterns into a united relation cluster. Most OpenRE methods typically confine themselves to unsupervised paradigms, without taking advantage of existing relational facts in knowledge bases (KBs) and their high-quality labeled instances. To address this issue, we propose Relational Siamese Networks (RSNs) to learn similarity metrics of relations from labeled data of pre-defined relations, and then transfer the relational knowledge to identify novel relations in unlabeled data. Experiment results on two real-world datasets show that our framework can achieve significant improvements as compared with other state-of-the-art methods. Our code is available at https://github. com/thunlp/RSN.",
"pdf_parse": {
"paper_id": "D19-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "Open relation extraction (OpenRE) aims to extract relational facts from the open-domain corpus. To this end, it discovers relation patterns between named entities and then clusters those semantically equivalent patterns into a united relation cluster. Most OpenRE methods typically confine themselves to unsupervised paradigms, without taking advantage of existing relational facts in knowledge bases (KBs) and their high-quality labeled instances. To address this issue, we propose Relational Siamese Networks (RSNs) to learn similarity metrics of relations from labeled data of pre-defined relations, and then transfer the relational knowledge to identify novel relations in unlabeled data. Experiment results on two real-world datasets show that our framework can achieve significant improvements as compared with other state-of-the-art methods. Our code is available at https://github. com/thunlp/RSN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation extraction (RE) aims to extract relational facts between two entities from plain texts. For example, with the sentence \"Hayao Miyazaki is the director of the film 'The Wind Rises'\", we can extract a relation \"director_of\" between two entities \"Hayao Miyazaki\" and \"The Wind Rises\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent progress in supervised methods to RE has achieved great successes. Supervised methods can effectively learn significant relation semantic patterns based on existing labeled data, but the data constructions are time-consuming and human-intensive. To lower the level of supervision, several semi-supervised approaches have been developed, including bootstrapping, active learning, label propagation (Pawar et al., 2017) . Mintz (2009) also proposes distant supervision to generate training data automatically. It assumes that if two entities have a relation in KBs, all sentences that contain these two entities will express this relation. Still, all these approaches can only extract pre-defined relations that have already appeared either in human-annotated datasets or KBs. It is hard for them to cover the great variety of novel relational facts in the open-domain corpora.",
"cite_spans": [
{
"start": 404,
"end": 424,
"text": "(Pawar et al., 2017)",
"ref_id": null
},
{
"start": 427,
"end": 439,
"text": "Mintz (2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
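{
"text": "As an illustration, the distant supervision assumption above can be sketched in a few lines of Python (a toy sketch with a hypothetical KB entry, not any particular system's implementation):

# Distant supervision: if a KB holds a relation between two entities, every
# sentence mentioning both entities is auto-labeled with that relation.
# The heuristic is noisy, since co-occurrence does not guarantee that the
# sentence actually expresses the relation.
kb = {('Hayao Miyazaki', 'The Wind Rises'): 'director_of'}

def distant_label(sentence, head, tail):
    if head in sentence and tail in sentence:
        return kb.get((head, tail))
    return None

print(distant_label('Hayao Miyazaki directed The Wind Rises.', 'Hayao Miyazaki', 'The Wind Rises'))  # director_of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},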
{
"text": "Open relation extraction (OpenRE) aims to extract relational facts on the open-domain corpus, where the relation types may not be predefined. There are some efforts concentrating on extracting triples with new relation types. Banko (2008) directly extracts words or phrases in sentences to represent new relation types. However, some relations cannot be explicitly represented with tokens in sentences, and it is hard to align different relational tokens that exactly have the same meanings. Yao (2011) consid-ers OpenRE as a clustering task for extracting triples with new relation types. However, previous clustering-based OpenRE methods (Yao et al., 2011 (Yao et al., , 2012 Marcheggiani and Titov, 2016; Elsahar et al., 2017) are mostly unsupervised, and cannot effectively select meaningful relation patterns and discard irrelevant information.",
"cite_spans": [
{
"start": 226,
"end": 238,
"text": "Banko (2008)",
"ref_id": "BIBREF3"
},
{
"start": 492,
"end": 502,
"text": "Yao (2011)",
"ref_id": "BIBREF25"
},
{
"start": 640,
"end": 657,
"text": "(Yao et al., 2011",
"ref_id": "BIBREF25"
},
{
"start": 658,
"end": 677,
"text": "(Yao et al., , 2012",
"ref_id": "BIBREF26"
},
{
"start": 678,
"end": 707,
"text": "Marcheggiani and Titov, 2016;",
"ref_id": "BIBREF18"
},
{
"start": 708,
"end": 729,
"text": "Elsahar et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network",
"sec_num": null
},
{
"text": "In this paper, we propose to take advantage of high-quality supervised data of pre-defined relations for OpenRE. The approach is non-trivial, however, due to the considerable gap between the pre-defined relations and novel relations of interest in open domain. To bridge the gap, we propose Relational Siamese Networks (RSNs) to learn transferable relational knowledge from supervised data for OpenRE. Specifically, RSNs learn relational similarity metrics from labeled data of pre-defined relations, and then transfer the metrics to measure the similarity of unlabeled sentences for open relation clustering. We describe the flowchart of our framework in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 656,
"end": 664,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Relational Siamese Network",
"sec_num": null
},
{
"text": "Moreover, we show that RSNs can also be generalized to various weakly-supervised scenarios. We propose Semi-supervised RSN to learn from both supervised data of pre-defined relations and unsupervised data with novel relations, and Distantly-supervised RSN to learn from distantly-supervised data and unsupervised data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network",
"sec_num": null
},
{
"text": "We conduct experiments on real-world RE datasets, FewRel and FewRel-distant, by splitting relations into seen and unseen set, and evaluate our models in supervised, semi-supervised, and distantly-supervised scenarios. The results demonstrate that our models significantly outperform state-of-the-art baseline methods in all scenarios without using external linguistic tools. To summarize, the main contributions of this work are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network",
"sec_num": null
},
{
"text": "(1) We develop a novel relational knowledge transfer framework RSN for OpenRE, which can effectively transfer existing relational knowledge to novel-relation data and accurately identify novel relations. To the best of our knowledge, RSN is the first model to consider knowledge transfer in clustering-based OpenRE task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network",
"sec_num": null
},
{
"text": "(2) We further propose Semi-supervised RSNs and Distantly-supervised RSNs that can learn from various weakly supervised scenarios. The experimental results show that all these RSN models achieve significant improvements in F-measure compared with state-of-the-art baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network",
"sec_num": null
},
{
"text": "Open Relation Extraction. Relation extraction (RE) is an important task in NLP. Traditional RE methods mainly concentrate on classifying relational facts into pre-defined relation types (Mintz et al., 2009; Yu et al., 2017) . Zeng (2014) utilizes CNN encoders to build sentence representations with the help of position embeddings. Lin (2016) further improves RE performance on distantlysupervised data via instance-level attention. These methods take advantage of supervised or distantlysupervised data to learn neural sentence encoders for distributed representations, and have achieved promising results. However, these methods cannot handle the open-ended growth of new relation types in the open-domain corpora.",
"cite_spans": [
{
"start": 186,
"end": 206,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF19"
},
{
"start": 207,
"end": 223,
"text": "Yu et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 226,
"end": 237,
"text": "Zeng (2014)",
"ref_id": "BIBREF29"
},
{
"start": 332,
"end": 342,
"text": "Lin (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To solve this problem, recently many efforts have been invested in exploring methods for open relation extraction (OpenRE), which aims to discover new relation types from unsupervised open-domain corpora. OpenRE methods can be roughly divided into two categories: taggingbased and clustering-based. Tagging-based methods cast OpenRE as a sequence labeling problem, and extract relational phrases consisting of words from sentences in unsupervised (Banko et al., 2007; Banko and Etzioni, 2008) or supervised paradigms (Jia et al., 2018; Cui et al., 2018; Stanovsky et al., 2018) . However, tagging-based methods often extract multiple overly-specific relational phrases for the same relation type, and cannot be readily utilized for downstream tasks.",
"cite_spans": [
{
"start": 447,
"end": 467,
"text": "(Banko et al., 2007;",
"ref_id": "BIBREF2"
},
{
"start": 468,
"end": 492,
"text": "Banko and Etzioni, 2008)",
"ref_id": "BIBREF3"
},
{
"start": 517,
"end": 535,
"text": "(Jia et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 536,
"end": 553,
"text": "Cui et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 554,
"end": 577,
"text": "Stanovsky et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In comparison, conventional clustering-based OpenRE methods extract rich features for relation instances via external linguistic tools, and cluster semantic patterns into several relation types (Lin and Pantel, 2001; Yao et al., 2011 Yao et al., , 2012 . Marcheggiani (2016) proposes a reconstructionbased model discrete-state variational autoencoder for OpenRE via unlabeled instances. Elsahar (2017) utilizes a clustering algorithm over linguistic features. In this paper, we focus on the clustering-based OpenRE methods, which have the advantage of discovering highly distinguishable relation types.",
"cite_spans": [
{
"start": 194,
"end": 216,
"text": "(Lin and Pantel, 2001;",
"ref_id": "BIBREF14"
},
{
"start": 217,
"end": 233,
"text": "Yao et al., 2011",
"ref_id": "BIBREF25"
},
{
"start": 234,
"end": 252,
"text": "Yao et al., , 2012",
"ref_id": "BIBREF26"
},
{
"start": 255,
"end": 274,
"text": "Marcheggiani (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Few-shot Learning. Few-shot learning aims to classify instances with a handful of labeled samples. Many efforts are devoted to few-shot image classification (Koch et al., 2015) and relation classification (Yuan et al., 2017; Han et al., 2018) . Notably, (Koch et al., 2015) introduces Convolu- tional Siamese Neural Network for image metric learning, which inspires us to learn relational similarity metrics for OpenRE.",
"cite_spans": [
{
"start": 157,
"end": 176,
"text": "(Koch et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 205,
"end": 224,
"text": "(Yuan et al., 2017;",
"ref_id": "BIBREF28"
},
{
"start": 225,
"end": 242,
"text": "Han et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 254,
"end": 273,
"text": "(Koch et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Semi-supervised Clustering. Semi-supervised clustering aims to cluster semantic patterns given instance seeds of target categories (Bair, 2013; Hongtao Lin, 2019) . Differently, our proposed Semi-supervised RSN only leverages labeled instances of pre-defined relations, and does not need any seed of new relations.",
"cite_spans": [
{
"start": 131,
"end": 143,
"text": "(Bair, 2013;",
"ref_id": "BIBREF1"
},
{
"start": 144,
"end": 162,
"text": "Hongtao Lin, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our OpenRE framework mainly consists of two modules, the relation similarity calculation module and the relation clustering module. For relation similarity calculation, we propose Relational Siamese Networks (RSNs), which learn to predict whether two sentences mention the same relation. To utilize large-scale unsupervised data and distantly-supervised data, we further propose Semi-supervised RSN and Distantly-supervised RSN. Finally, in the relation clustering module, with the learned relation metric, we utilize hierarchical agglomerative clustering (HAC) and Louvain clustering algorithms to cluster target relation instances of new relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The architecture of our Relational Siamese Networks is shown in Figure 2 . CNN modules encode a pair of relational instances into vectors, and several shared layers compute their similarity.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "Sentence Encoder. We use a CNN module as the sentence encoder. The CNN module includes an embedding layer, a convolutional layer, a max-pooling layer, and a fully-connected (FC) layer. The embedding layer transforms the words in a sentence x and the positions of entities e head and e tail into pre-trained word embeddings and random-initialized position embeddings. Following (Zeng et al., 2014) , we concatenate these embeddings to form a vector sequence. Next, a one-dimensional convolutional layer and a maxpooling layer transform the vector sequence into features. Finally, an FC layer with sigmoid activation maps features into a relational vector v. To summarize, we obtain a vector representation v for a relational sentence with our CNN module:",
"cite_spans": [
{
"start": 377,
"end": 396,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v = CNN(s),",
"eq_num": "(1)"
}
],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "in which we denote the joint information of a sentence x and two entities in it e head and e tail as a data sample s. And with paired input relational instances, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "vl = CNN(s l ), vr = CNN(sr),",
"eq_num": "(2)"
}
],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "in which the two CNN modules are identical and share all of their parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "Similarity Computation. Next, to measure the similarity of two relational vectors, we calculate their absolute distance and transform it into a real-valued similarity p \u2208 [0, 1]. First, a distance layer computes the element-wise absolute distance of the two vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "vd = |vl \u2212 vr|.",
"eq_num": "(3)"
}
],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "Then, a classifier layer calculates a metric p for relation similarity. The layer is a one-dimensionaloutput FC layer with sigmoid activation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p = \u03c3(kvd + b),",
"eq_num": "(4)"
}
],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "in which \u03c3 denotes the sigmoid function, k and b denote the weights and bias. To summarize, we obtain a good similarity metric p of relational instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "Cross Entropy Loss. The output of RSN p can also be explained as the probability of two sentences mentioning two different relations. Thus, we can use binary labels q and binary cross entropy loss to train our RSN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "L l = E d l \u223cD l [q ln(p \u03b8 (d l )) + (1 \u2212 q) ln(1 \u2212 p \u03b8 (d l ))], (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "in which \u03b8 indicates all the parameters in the RSN. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
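{
"text": "To make Equations (1)-(5) concrete, the following is a minimal PyTorch-style sketch of RSN (an illustrative sketch, not the released implementation at https://github.com/thunlp/RSN; the vocabulary size, dimensions, and sequence length are placeholder assumptions, and the word and position embeddings are folded into a single embedding layer for brevity):

import torch
import torch.nn as nn

class RSN(nn.Module):
    # Shared CNN encoder (Eqs. 1-2), element-wise absolute distance (Eq. 3),
    # and sigmoid similarity classifier (Eq. 4). Dimensions are illustrative.
    def __init__(self, vocab_size=10000, emb_dim=60, hidden=230):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.fc = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden, 1)  # p = sigmoid(k v_d + b)

    def encode(self, tokens):
        x = self.emb(tokens).transpose(1, 2)            # (batch, emb_dim, len)
        h = torch.relu(self.conv(x)).max(dim=2).values  # convolution + max-pooling
        return torch.sigmoid(self.fc(h))                # relational vector v

    def forward(self, s_l, s_r):
        v_l, v_r = self.encode(s_l), self.encode(s_r)   # identical, shared parameters
        v_d = torch.abs(v_l - v_r)                      # Eq. (3)
        return torch.sigmoid(self.classifier(v_d)).squeeze(-1)  # Eq. (4)

model = RSN()
s_l = torch.randint(0, 10000, (8, 120))   # a batch of token-id pairs
s_r = torch.randint(0, 10000, (8, 120))
q = torch.randint(0, 2, (8,)).float()     # binary pair labels
loss = nn.functional.binary_cross_entropy(model(s_l, s_r), q)  # cross entropy of Eq. (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},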
{
"text": "labeled data d l p = 0.7 q = 0 Cross Entropy Relational Siamese Network \u2026 (a) Supervised RSN (auto)-labeled data d l q = 0 Cross Entropy (+VAT) Relational Siamese Network \u2026 unlabeled data d u \u2026 p = 0.7 p = 0.6 Conditional Entropy +VAT (b) Weakly-supervised RSNs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Siamese Network (RSN)",
"sec_num": "3.1"
},
{
"text": "To discover relation clusters in the open-domain corpus, it is beneficial to not only learn from labeled data, but also capture the manifold of unlabeled data in the semantic space. To this end, we need to push the decision boundaries away from high-density areas, which is known as the cluster assumption (Chapelle and Zien, 2005) . We try to achieve this goal with several additional loss functions. In the following paragraphs, we denote the labeled training dataset as D l and a couple of labeled relational instances as d l . Similarly, we denote the unlabeled training dataset as D u and a couple of unlabeled instances as d u .",
"cite_spans": [
{
"start": 306,
"end": 331,
"text": "(Chapelle and Zien, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "Conditional Entropy Loss. In classification problems, a well-classified embedding space usually reserves large margins between different classified clusters, and optimizing margin can be a promising way to facilitate training. However, in clustering problems, type labels are not available during training. To optimize margin without explicit supervision, we can push the data points away from the decision boundaries. Intuitively, when the distance similarity p between two relational instances equals 0.5, there is a high prob-ability that at least one of two instances is near the decision boundary between relation clusters. Thus, we use the conditional entropy loss (Grandvalet and Bengio, 2005) , which reaches the maximum when p = 0.5, to penalize close-boundary distribution of data points:",
"cite_spans": [
{
"start": 671,
"end": 700,
"text": "(Grandvalet and Bengio, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Lu = E du\u223cDu [p \u03b8 (du) ln(p \u03b8 (du))+ (1 \u2212 p \u03b8 (du)) ln(1 \u2212 p \u03b8 (du))].",
"eq_num": "(6)"
}
],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
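{
"text": "A minimal PyTorch-style sketch of this term (an illustrative reading of Equation (6); written here with a leading minus so that the quantity minimized is the binary entropy of p, which peaks at p = 0.5):

import torch

def conditional_entropy_loss(p, eps=1e-8):
    # Binary entropy of the pairwise similarity p: minimizing it pushes
    # unlabeled pairs away from the decision boundary, implementing the
    # cluster assumption.
    return -(p * torch.log(p + eps) + (1 - p) * torch.log(1 - p + eps)).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},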
{
"text": "Virtual Adversarial Loss. Despite its theoretical promise, conditional entropy minimization suffers from shortcomings in practice. Due to neural networks' strong fitting ability, a very complex decision hyperplane might be learned so as to keep away from all the training samples, which lacks generalizability. As a solution, we can smooth the relational representation space with locally-Lipschitz constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "To satisfy this constraint, we introduce virtual adversarial training (Miyato et al., 2016) on both branches of RSN. Virtual adversarial training can search through data point neighborhoods, and penalize most sharp changes in distance prediction. For labeled data, we have",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "(Miyato et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L vl = E d l \u223cD l [DKL(p \u03b8 (d l )||p \u03b8 (d l , t1, t2))],",
"eq_num": "(7)"
}
],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "in which D_KL indicates the Kullback-Leibler divergence, and p_\u03b8(d_l, t_1, t_2) indicates a new distance estimation with perturbations t_1 and t_2 on the two input instances respectively. Specifically, t_1 and t_2 are worst-case perturbations of limited length that maximize the KL divergence between p_\u03b8(d_l) and p_\u03b8(d_l, t_1, t_2). Empirically, we approximate the perturbations in the same way as the original paper (Miyato et al., 2016). Specifically, we first add a random noise to the input, and calculate the gradient of the KL divergence between the outputs of the original input and the noisy input. We then add the normalized gradient to the original input to obtain the perturbed input. For unlabeled data, we have",
"cite_spans": [
{
"start": 418,
"end": 439,
"text": "(Miyato et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Lvu = E du\u223cDu [DKL(p \u03b8 (du)||p \u03b8 (du, t1, t2))],",
"eq_num": "(8)"
}
],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "in which the perturbations t 1 and t 2 are added to word embeddings rather than the words themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
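{
"text": "A minimal PyTorch-style sketch of this perturbation procedure (a single power-iteration approximation as in Miyato et al. (2016); here model is assumed to map a pair of already-embedded inputs to p, and xi and epsilon are the usual VAT hyperparameters):

import torch
import torch.nn.functional as F

def kl_binary(p, q, eps=1e-8):
    # KL divergence between Bernoulli(p) and Bernoulli(q).
    return (p * torch.log((p + eps) / (q + eps))
            + (1 - p) * torch.log((1 - p + eps) / (1 - q + eps))).mean()

def vat_loss(model, emb_l, emb_r, xi=1e-6, epsilon=1.0):
    # Random noise -> gradient of the KL divergence -> normalized gradient
    # as the worst-case perturbations t1, t2 on both branches (Eqs. 7-8).
    # Perturbations act on embeddings, not on the words themselves.
    p = model(emb_l, emb_r).detach()
    d1 = xi * F.normalize(torch.randn_like(emb_l), dim=-1)
    d2 = xi * F.normalize(torch.randn_like(emb_r), dim=-1)
    d1.requires_grad_()
    d2.requires_grad_()
    g1, g2 = torch.autograd.grad(kl_binary(p, model(emb_l + d1, emb_r + d2)), [d1, d2])
    t1 = epsilon * F.normalize(g1, dim=-1)
    t2 = epsilon * F.normalize(g2, dim=-1)
    return kl_binary(p, model(emb_l + t1, emb_r + t2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},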
{
"text": "To summarize, we use the following loss function to train Semi-supervised RSN, which learns from both labeled and unlabeled data:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L all = L l + \u03bbvL vl + \u03bbu(Lu + \u03bbvLvu),",
"eq_num": "(9)"
}
],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "in which \u03bb v and \u03bb u are two hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised RSN",
"sec_num": "3.2"
},
{
"text": "To alleviate the intensive human labor for annotation, the topic of distantly-supervised learning has attracted much attention in RE. Here, we propose Distantly-supervised RSN, which can learn from both distantly-supervised data and unsupervised data for relational knowledge transfer. Specifically, we use the following loss function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distantly-supervised RSN",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L all = L l + \u03bbu(Lu + \u03bbvLvu),",
"eq_num": "(10)"
}
],
"section": "Distantly-supervised RSN",
"sec_num": "3.3"
},
{
"text": "which treats auto-labeled data as labeled data but removes the virtual adversarial loss on the autolabeled data. The reason to remove the loss is simple: virtual adversarial training on auto-labeled data can amplify the noise from false labels. Indeed, we do find that the virtual adversarial loss on autolabeled data can harm our model's performance in experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distantly-supervised RSN",
"sec_num": "3.3"
},
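{
"text": "For clarity, the composition of the training objectives in Equations (9) and (10) can be sketched as follows (the four loss terms are as defined above, and the default \u03bb values follow Section 4.2):

def total_loss(L_l, L_u, L_vl, L_vu, lambda_v=1.0, lambda_u=0.03, distant=False):
    # Equation (9) for Semi-supervised RSN; with distant=True, Equation (10)
    # drops the VAT term on the noisy auto-labeled data so that label noise
    # is not amplified.
    if distant:
        return L_l + lambda_u * (L_u + lambda_v * L_vu)
    return L_l + lambda_v * L_vl + lambda_u * (L_u + lambda_v * L_vu)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distantly-supervised RSN",
"sec_num": "3.3"
},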
{
"text": "We do not use more denoising methods, since we think RSN has some inherent advantages of tolerating such noise. Firstly, the noise will be overwhelmed by the large proportion of negative sampling during training. Secondly, during clustering, the prediction of a new relation cluster is based on areas where the density of relational instances is high. Outliers from noise, as a result, will not influence the prediction process so much.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distantly-supervised RSN",
"sec_num": "3.3"
},
{
"text": "After RSN is learned, we can use RSN to calculate the similarity matrix of testing instances. With this matrix, several clustering methods can be applied to extract new relation clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open Relation Clustering",
"sec_num": "3.4"
},
{
"text": "Hierarchical Agglomerative Clustering. The first clustering method we adopt is hierarchical agglomerative clustering (HAC). HAC is a bottomup clustering algorithm. At the start, every testing instance is regarded as a cluster. For every step, it agglomerates two closest instances. There are several criteria to evaluate the distance between two clusters. Here, we adopt the complete-linkage criterion, which is more robust to extreme instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open Relation Clustering",
"sec_num": "3.4"
},
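{
"text": "A minimal sketch of this step with SciPy (an illustrative implementation, assuming the pairwise distances from RSN and a known cluster count k):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def hac_cluster(dist_matrix, k):
    # dist_matrix: symmetric (n, n) pairwise distances from RSN.
    # Complete linkage merges clusters by their farthest pair, which is
    # more robust to extreme instances; k must be known in advance.
    condensed = squareform(dist_matrix, checks=False)
    return fcluster(linkage(condensed, method='complete'), t=k, criterion='maxclust')

dist = np.random.rand(10, 10)   # placeholder distances for illustration
dist = (dist + dist.T) / 2
np.fill_diagonal(dist, 0)
print(hac_cluster(dist, k=3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open Relation Clustering",
"sec_num": "3.4"
},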
{
"text": "However, there is a significant shortcoming of HAC: it needs the exact number of clusters in advance. A potential solution is to stop agglomerating according to an empirical distance threshold, but it is hard to determine such a threshold. This problem leads us to consider another clustering algorithm Louvain (Blondel et al., 2008) .",
"cite_spans": [
{
"start": 311,
"end": 333,
"text": "(Blondel et al., 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Open Relation Clustering",
"sec_num": "3.4"
},
{
"text": "Louvain. Louvain is a graph-based clustering algorithm traditionally used for detecting communities. To construct the graph, we use the binary approximation of RSN's output, with 0 indicating an edge between two nodes. The advantage of Louvain is that it does not need the number of potential clusters beforehand. It will automatically find proper sizes of clusters by optimizing community modularity. According to the experiments we conduct, Louvain performs better than HAC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open Relation Clustering",
"sec_num": "3.4"
},
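{
"text": "A minimal sketch of the graph construction and clustering step, assuming NetworkX (version 2.8 or later, which ships a Louvain implementation); the 0.5 threshold simply rounds RSN's output:

import numpy as np
import networkx as nx

def louvain_cluster(p_matrix, seed=0):
    # p_matrix: (n, n) RSN outputs, where p near 0 means 'same relation'.
    # Connect two instances iff the binarized (rounded) output is 0.
    adj = (p_matrix < 0.5).astype(int)
    np.fill_diagonal(adj, 0)
    graph = nx.from_numpy_array(adj)
    # Louvain determines the number and sizes of clusters automatically
    # by optimizing community modularity.
    return nx.community.louvain_communities(graph, seed=seed)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open Relation Clustering",
"sec_num": "3.4"
},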
{
"text": "After running, Louvain might produce a number of singleton clusters with few instances. It is not proper to call these clusters new relation types, so we label these instances the same as their closest labeled neighbors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open Relation Clustering",
"sec_num": "3.4"
},
{
"text": "Finally, we want to explain the reason why we do not use some other common clustering methods like K-Means, Mean-Shift and Ward's (Ward Jr, 1963) method of HAC: these methods calculate the centroid of several points during clustering by merely averaging them. However, the relation vectors in our model are high-dimensional, and the distance metric described by RSN is non-linear. Consequently, it is not proper to calculate the centroid by simply averaging the vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open Relation Clustering",
"sec_num": "3.4"
},
{
"text": "In this section, we conduct several experiments on real-world RE datasets to show the effectiveness of our models, and give a detailed analysis to show its advantages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In experiments, we use FewRel (Han et al., 2018) as our first dataset. FewRel is a human-annotated dataset containing 80 types of relations, each with 700 instances. An advantage of FewRel is that every instance contains a unique entity pair, so RE models cannot choose the easy way to memorize the entities.",
"cite_spans": [
{
"start": 30,
"end": 48,
"text": "(Han et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We use the original train set of FewRel, which contains 64 relations, as labeled set with predefined relations, and the original validation set of FewRel, which contains 16 new relations, as the unlabeled set with novel relations to extract. We then randomly choose 1, 600 instances from the unlabeled set as the test set, with the rest labeled and unlabeled instances considered as the train set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "The second dataset we use is FewRel-distant, which contains the distantly-supervised data obtained by the authors of FewRel before human an-notation. We follow the split of FewRel to obtain the auto-labeled train set and unlabeled train set. For evaluation, we use the human-annotated test set of FewRel with 1, 600 instances. Unlabeled instances already existing in this test set are removed from the unlabeled train set of FewRel-distant. Finally, the auto-labeled train set contains 323, 549 relational instances, and the unlabeled train set contains 60, 581 instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "A previous OpenRE work reports performance on an unpublic dataset called NYT-FB (Marcheggiani and Titov, 2016) . However, it has several shortcomings compared with FewRel-distant. First, NTY-FB's test set is distantly-supervised and is noisy for instance-level RE. Moreover, instances in NYT-FB often share entity pairs or relational phrases, which makes it much easier for relation clustering. Therefore, we think the results on FewRel-distant are convincing enough for Distantly-supervised OpenRE.",
"cite_spans": [
{
"start": 73,
"end": 110,
"text": "NYT-FB (Marcheggiani and Titov, 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "Data Sampling. The input of RSN should be a pair of sampled instances. For the unlabeled set, the only possible sampling method is to select two instances randomly. For the labeled set, however, random selection would result in too many different-relation pairs, and cause severe biases for RSN. To solve this problem, we use downsampling. In our experiments, we fix the percentage of same-relation pairs in every labeled data batch as 6%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "Let us denote this percentage number as the sample ratio for convenience. Experimental results show that the sample ratio decides RSN's tendency to predict larger or smaller clusters. In other words, it controls the granularity of the predicted relation types. This phenomenon suggests a potential application of our model in hierarchical relation extraction. However, we leave any serious discussion to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
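{
"text": "A minimal sketch of this down-sampling scheme (an illustrative implementation; by_relation is assumed to map each relation label to its instances, and q = 0 marks a same-relation pair, following the convention above that p estimates the probability of two different relations):

import random

def sample_labeled_batch(by_relation, batch_size=100, same_ratio=0.06):
    # Fix the fraction of same-relation pairs per batch (the sample ratio)
    # instead of sampling uniformly, which would yield almost exclusively
    # different-relation pairs and bias RSN.
    n_same = int(batch_size * same_ratio)
    relations = list(by_relation)
    batch = []
    for _ in range(n_same):               # same-relation pairs, q = 0
        rel = random.choice(relations)
        batch.append((random.choice(by_relation[rel]), random.choice(by_relation[rel]), 0))
    for _ in range(batch_size - n_same):  # different-relation pairs, q = 1
        r1, r2 = random.sample(relations, 2)
        batch.append((random.choice(by_relation[r1]), random.choice(by_relation[r2]), 1))
    random.shuffle(batch)
    return batch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},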
{
"text": "Hyperparameter Settings. Following (Lin et al., 2016) and (Zeng et al., 2014) , we fix the less influencing hyperparameters for sentence encoding as their reported optimal values. For word embeddings, we use pre-trained 50-dimensional Glove (Pennington et al., 2014) word embeddings. For position embeddings, we use randominitialized 5-dimensional position embeddings. During training, all the embeddings are trainable. For the neural network, the number of feature maps in the convolutional layer is 230. The filter length is 3. The activation function after the max-pooling layer is ReLU, and the activation functions after FC layers are sigmoid. Besides, we adopt two regularization methods in the CNN module. We put a dropout layer right after the embedding layer as (Miyato et al., 2016) . The dropout rate is 0.2. We also impose L2 regularization on the convolutional layer and the FC layer, with parameters of 0.0002 and 0.001 respectively. Hyperparameters for virtual adversarial training are just the same as (Miyato et al., 2016) proposed.",
"cite_spans": [
{
"start": 35,
"end": 53,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 58,
"end": 77,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 241,
"end": 266,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 771,
"end": 792,
"text": "(Miyato et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 1018,
"end": 1039,
"text": "(Miyato et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "At the same time, major hyperparameters are selected with grid search according to the model performance on a validation set. Specifically, the validation set contains 10,000 randomly chosen sentence pairs from the unlabeled set (i.e. 16 novel relations) and does not overlap with the test set. The model is evaluated according to the precision of binary classification of sentence pairs on the validation set, which is an estimation for models' clustering ability. We do not use F1 during model validation because the clustering steps are timeconsuming.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "For optimization, we use Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0001, which is selected from {0.1, 0.01, 0.001, 0.0001, 0.00001}. The batch size is 100 selected from {25, 50, 100}. For hyperparameters in Equation 9 and Equation 10, \u03bb v is 1.0 selected from {0.1, 0.5, 1.0, 2.0} and \u03bb u is 0.03 selected from {0.01, 0.02, 0.03, 0.04, 0.05}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "For baseline models, original papers do grid search for all possible hyperparameters and report the best result during testing. We follow their settings and do grid search directly on the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "In this section, we demonstrate the effectiveness of our RSN models by comparing our models with state-of-the-art clustering-based OpenRE methods. We also conduct ablation experiments to detailedly investigate the contributions of different mechanisms of Semi-supervised RSN and Distantly-supervised RSN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "Baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "Conventional clustering-based OpenRE models usually cluster instances by either clustering their linguistic features (Lin and Pantel, 2001; Yao et al., 2012; Elsahar et al., 2017) or imposing reconstruction constraints (Yao et al., 2011; Marcheggiani and Titov, 2016) . To demonstrate the effectiveness of our RSN models, we compare our models with two state-of-the-art models:",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "(Lin and Pantel, 2001;",
"ref_id": "BIBREF14"
},
{
"start": 140,
"end": 157,
"text": "Yao et al., 2012;",
"ref_id": "BIBREF26"
},
{
"start": 158,
"end": 179,
"text": "Elsahar et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 219,
"end": 237,
"text": "(Yao et al., 2011;",
"ref_id": "BIBREF25"
},
{
"start": 238,
"end": 267,
"text": "Marcheggiani and Titov, 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "(1) HAC with re-weighted word embeddings (RW-HAC) (Elsahar et al., 2017) : RW-HAC is the state-of-the-art feature clustering model for OpenRE. The model first extracts KB types and NER tags of entities as well as re-weighted word embeddings from sentences, then adopts principal component analysis (PCA) to reduce feature dimensionality, and finally uses HAC to cluster the concatenation of reduced feature representations.",
"cite_spans": [
{
"start": 50,
"end": 72,
"text": "(Elsahar et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "(2) Discrete-state variational autoencoder (VAE) (Marcheggiani and Titov, 2016) : VAE is the state-of-the-art reconstruction-based model for OpenRE via unlabeled instances. It optimizes a relation classifier by reconstructing entities from pairing entities and predicted relation types. Rich features including entity words, context words, trigger words, dependency paths, and context POS tags are used to predict the relation type.",
"cite_spans": [
{
"start": 49,
"end": 79,
"text": "(Marcheggiani and Titov, 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "RW-HAC and VAE both rely on external linguistic tools to extract rich features from plain texts. Specifically, we first align entities to Wikidata and get their KB types. Next, we preprocess the instances with part-of-speech (POS) tagging, named-entity recognition (NER), and dependency parsing with Stanford CoreNLP . It is worth noting that these features are only used by baseline models. Our models, in contrast, only use sentences and entity pairs as inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "Evaluation Protocol. In evaluation, we use B 3 metric (Bagga and Baldwin, 1998) as the scoring function. B 3 metric is a standard measure to balance the precision and recall of clustering tasks, and is commonly used in previous OpenRE works (Marcheggiani and Titov, 2016; Elsahar et al., 2017) . To be specific, we use F 1 measure, the harmonic mean of precision and recall.",
"cite_spans": [
{
"start": 54,
"end": 79,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF0"
},
{
"start": 241,
"end": 271,
"text": "(Marcheggiani and Titov, 2016;",
"ref_id": "BIBREF18"
},
{
"start": 272,
"end": 293,
"text": "Elsahar et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
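{
"text": "For reference, a minimal sketch of the B^3 precision, recall, and F_1 computation (instance-averaged formulation; pred and gold are per-instance cluster labels):

from collections import defaultdict

def b_cubed(pred, gold):
    # B^3 (Bagga and Baldwin, 1998): average per-instance precision and
    # recall over cluster overlaps; F1 is their harmonic mean.
    by_pred, by_gold = defaultdict(set), defaultdict(set)
    for i, (p, g) in enumerate(zip(pred, gold)):
        by_pred[p].add(i)
        by_gold[g].add(i)
    prec = rec = 0.0
    for i, (p, g) in enumerate(zip(pred, gold)):
        overlap = len(by_pred[p] & by_gold[g])
        prec += overlap / len(by_pred[p])
        rec += overlap / len(by_gold[g])
    prec, rec = prec / len(pred), rec / len(pred)
    return prec, rec, 2 * prec * rec / (prec + rec)

print(b_cubed([0, 0, 1, 1], [0, 0, 0, 1]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},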
{
"text": "First, we report the result of supervised RSN with different clustering methods. Specifically, SN represents the original RSN structure, HAC and L indicate HAC and Louvain clustering introduced in Sec. 3.3. The result shows that Louvain performs better than HAC, so in the following experiments we focus on using Louvain clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "Next, for Semi-supervised and Distantlysupervised RSN, we conduct various combinations of different mechanisms to verify the contribution of each part. (+C) indicates that the model is powered up with conditional entropy minimization, while (+V) indicates that the model is pow- Experimental Result Analysis. Table 1 shows the experimental results, from which we can observe that:",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 1",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "(1) RSN models outperform all baseline models on precision, recall, and F1-score, among which Weakly-supervised RSN (SN-L+CV) achieves state-of-the-art performances. This indicates that RSN is capable of understanding new relations' semantic meanings within sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "(2) Supervised and distantly-supervised relational representations improve clustering performances. Compared with RW-HAC, SN-HAC achieves better clustering results because of its supervised relational representation and similarity metric. Specifically, unsupervised baselines mainly use sparse one-hot features. RW-HAC uses word embeddings, but integrates them in a rulebased way. In contrast, RSN uses distributed feature representations, and can optimize information integration process according to supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "(3) Louvain outperforms HAC for clustering with RSN, comparing SN-HAC with SN-L. One explanation is that our model does not put additional constraints on the prior distribution of relational vectors, and therefore the relation clusters might have odd shapes in violation of HAC's assumption. Moreover, when representations are not distinguishable enough, forcing HAC to find finegrained clusters may harm recall while contributing minimally to precision. In practice, we do observe that the number of relations SN-L extracts is constantly less than the true number 16.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "(4) Both SN-L+V and SN-L+C improve the performance of supervised or distantly-supervised RSN by further utilizing unsupervised corpora. Both semi-supervised approaches bring significant improvements for F 1 scores by increasing the precision and recall, and combining both can further increase the F 1 score. (5) One interesting observation is that SN-L+V does not outperform SN-L so much on FewReldistant. This is probably because VAT on the noisy data might amplify the noise. In further experiments, we perform VAT only on unlabeled set and observe improvements on F 1 , with SN-L+V from 45.8% to 49.2% and SN-L+CV from 52.0% to 52.6%, which proves this conjecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results on OpenRE",
"sec_num": "4.3"
},
{
"text": "In this subsection, we mainly focus on analyzing the influence of pre-defined relation diversity, i.e., the number of relations in the labeled train set. To study this influence, we use FewRel for evaluation and change the number of relations in the labeled train set from 40 to 64 while fixing the total num-ber of labeled instances to 25, 000, and report the clustering results in Figure 5 . Several conclusions can be drawn according to Figure 5 . Firstly, a rich variety of labeled relations do improve the performance of our models, especially RSN. The models trained on 64 relations perform better than those trained on 40 relations constantly. Secondly, while the performance of supervised RSN is very sensitive to pre-defined relation diversity, its semi-supervised counterparts suffer much less from the relation number limit. This phenomenon suggests that Semi-supervised RSNs succeed in learning from unlabeled novelrelation data and are more generalizable to novel relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 383,
"end": 391,
"text": "Figure 5",
"ref_id": null
},
{
"start": 440,
"end": 448,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Influence of Pre-defined Relation Diversity on Generalizability",
"sec_num": "4.4"
},
{
"text": "To intuitively evaluate the knowledge transfer effects of RSN and Semi-supervised RSN, we visualize their relational knowledge representation spaces in the last layer of CNN encoders with t-SNE (Maaten and Hinton, 2008) in Figure 4 . We also compare with a supervised CNN trained on 9, 600 labeled instances of novel relations, which suggests the optimal relational knowledge representation. In each figure, we plot 402 relation instances of 4 randomly-chosen relation types in the test set, and points are colored according to their ground-truth labels.",
"cite_spans": [
{
"start": 194,
"end": 219,
"text": "(Maaten and Hinton, 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 223,
"end": 231,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relational Knowledge Representation Visualization",
"sec_num": "4.5"
},
{
"text": "As we can see from Figure 4 , RSN is able to roughly distinguish different relations, and Semi-supervised RSN further facilitated knowledge transfer by optimizing the margin between potential relation clusters during training. As a result, Semi-supervised RSN can extract more distinguishable novel relations, and gains comparable relational knowledge representation ability with supervised CNN.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relational Knowledge Representation Visualization",
"sec_num": "4.5"
},
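{
"text": "A minimal sketch of this visualization step, assuming scikit-learn and matplotlib (vectors stands for the CNN encoder outputs and labels for the ground-truth relation ids; random placeholders are used here):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

vectors = np.random.rand(402, 230)     # placeholder for CNN encoder outputs
labels = np.random.randint(0, 4, 402)  # placeholder ground-truth relation ids
xy = TSNE(n_components=2, random_state=0).fit_transform(vectors)
plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=8)
plt.savefig('rsn_tsne.png')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Knowledge Representation Visualization",
"sec_num": "4.5"
},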
{
"text": "In this paper, we propose a new model Relational Siamese Network (RSN) for OpenRE. Different from conventional unsupervised models, our model learns to measure relational similarity from supervised/distantly-supervised data of predefined relations, as well as unsupervised data of novel relations. There are mainly two innovative points in our model. First, we propose to transfer relational similarity knowledge with RSN structure. To the best of our knowledge, we are the first to propose knowledge transfer for OpenRE. Second, we propose Semi/Distantly-supervised RSN, to further perform semi-supervised and distantlysupervised transfer learning. Experiments show that our models significantly surpass conventional OpenRE models and achieve new state-of-the-art performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "For future research, we plan to explore the following directions: (1) Besides CNN, there are some other popular sentence encoder structures like piecewise convolutional neural network (PCNN) and Long Short-Term Memory (LSTM) for RE. In the future, we can try different sentence encoders in our model. (2) As mentioned above, our model has the potential ability to discover the hierarchical structure of relations. In the future, we will try to explore this application with additional experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "To highlight our model's ability to extract new relations, testing instances only contain new relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Here for FewRel-distant we use Equation 10 rather than Equation 9 as loss, which corresponds to Distantlysupervised RSN, and this brings a minor improvement on F1 from 52.0% to 52.6%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61572273, 61661146007). Ruidong Wu is also supported by Tsinghua University Initiative Scientific Research Program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the First Iternational Conference on Language Resources and Evaluation Workshop on Linguistics Coreference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the First Iternational Conference on Language Re- sources and Evaluation Workshop on Linguistics Coreference.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semi-supervised clustering methods",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Bair",
"suffix": ""
}
],
"year": 2013,
"venue": "Wiley Interdisciplinary Reviews: Computational Statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Bair. 2013. Semi-supervised clustering meth- ods. Wiley Interdisciplinary Reviews: Computa- tional Statistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Michael J Cafarella, Stephen Soder- land, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Pro- ceedings of IJCAI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The tradeoffs between open and traditional relation extraction",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko and Oren Etzioni. 2008. The tradeoffs between open and traditional relation extraction. In Proceedings of ACL-HLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Fast unfolding of communities in large networks",
"authors": [
{
"first": "Vincent",
"middle": [
"D"
],
"last": "Blondel",
"suffix": ""
},
{
"first": "Jean-Loup",
"middle": [],
"last": "Guillaume",
"suffix": ""
},
{
"first": "Renaud",
"middle": [],
"last": "Lambiotte",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Lefebvre",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Statistical Mechanics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent D Blondel, Jean Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast un- folding of communities in large networks. Journal of Statistical Mechanics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semisupervised classification by low density separation",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Chapelle",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Zien",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of AISTATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Chapelle and Alexander Zien. 2005. Semi- supervised classification by low density separation. In Proceedings of AISTATS.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural open information extraction. arXiv",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. arXiv.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised open relation extraction",
"authors": [
{
"first": "Hady",
"middle": [],
"last": "Elsahar",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Demidova",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Gottschalk",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Gravier",
"suffix": ""
},
{
"first": "Frederique",
"middle": [],
"last": "Laforest",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of European Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hady Elsahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Frederique Laforest. 2017. Unsupervised open relation extraction. In Proceed- ings of European Semantic Web Conference.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semisupervised learning by entropy minimization",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Grandvalet",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Grandvalet and Yoshua Bengio. 2005. Semi- supervised learning by entropy minimization. In Proceedings of NIPS.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ziyun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A large-scale supervised few-shot relation classifica- tion dataset with state-of-the-art evaluation. In Pro- ceedings of EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning dual retrieval module for semi-supervised relation extraction",
"authors": [
{
"first": "Hongtao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Qu Xiang Ren Hongtao Lin, Jun Yan. 2019. Learning dual retrieval module for semi-supervised relation extraction. In Proceedings of WWW.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Supervised neural models revitalize the open relation extraction",
"authors": [
{
"first": "Shengbin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shengbin Jia, Yang Xiang, and Xiaojun Chen. 2018. Supervised neural models revitalize the open rela- tion extraction. arXiv.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Siamese neural networks for one-shot image recognition",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Koch",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICML Deep Learning Workshop",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Koch, Richard Zemel, and Ruslan Salakhut- dinov. 2015. Siamese neural networks for one-shot image recognition. In Proceedings of ICML Deep Learning Workshop, volume 2.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dirt -discovery of inference rules from text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. Dirt -discovery of inference rules from text. In Proceedings of KDD.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Visualizing data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "JMLRJournal of Statistical Mechanics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. JMLRJournal of Sta- tistical Mechanics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Proceedings of ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Discretestate variational autoencoders for joint discovery and factorization of relations",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Ivan Titov. 2016. Discrete- state variational autoencoders for joint discovery and factorization of relations. Transactions of ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of ACL- IJCNLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adversarial training methods for semisupervised text classification",
"authors": [
{
"first": "Takeru",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeru Miyato, Andrew M Dai, and Ian Goodfel- low. 2016. Adversarial training methods for semi- supervised text classification. arXiv.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christoper",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christoper Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Supervised open information extraction",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of NAACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Hierarchical grouping to optimize an objective function",
"authors": [
{
"first": "Joe",
"middle": [
"H"
],
"last": "Ward",
"suffix": "Jr"
}
],
"year": 1963,
"venue": "Journal of the American Statistical Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe H Ward Jr. 1963. Hierarchical grouping to opti- mize an objective function. Journal of the American Statistical Association.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Structured relation discovery using generative models",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, Aria Haghighi, Sebastian Riedel, and An- drew Mccallum. 2011. Structured relation discov- ery using generative models. In Proceedings of EMNLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unsupervised relation discovery with sense disambiguation",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, Sebastian Riedel, and Andrew McCallum. 2012. Unsupervised relation discovery with sense disambiguation. In Proceedings of ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Open relation extraction and grounding",
"authors": [
{
"first": "Dian",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dian Yu, Lifu Huang, and Heng Ji. 2017. Open re- lation extraction and grounding. In Proceedings of IJCNLP.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "One-shot learning for fine-grained relation extraction via convolutional siamese neural network",
"authors": [
{
"first": "Jianbo",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zhiwei",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Xianchao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of BigData",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianbo Yuan, Han Guo, Zhiwei Jin, Hongxia Jin, Xian- chao Zhang, and Jiebo Luo. 2017. One-shot learn- ing for fine-grained relation extraction via convolu- tional siamese neural network. In Proceedings of BigData.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "A flowchart of our framework. Our model RSN learns from both labeled instances of pre-defined relations and unlabeled instances of new relations, and tries to cluster testing instances of new relations. 1",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "The architecture of Relational Siamese Networks. The output is the similarity between two relational instances.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "The t-SNE visualization of the output vectors of CNN modules in our (a) OpenRE model RSN, (b) Semi-supervised RSN facilitated by unlabeled novel-relation data and in (c) a classical RE baseline trained with labeled novel-relation data. All figures visualize the clustering result for 402 instances of 4 novel relations. The clustering results with different numbers of pre-defined training relations.",
"type_str": "figure"
},
"TABREF2": {
"text": "VAT). In figures, p indicates the predicted similarity of two relational sentences, while q indicates the ground-truth label between them.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Figure 3: The comparison of (a) Supervised RSN</td></tr><tr><td>and (b) Weakly-supervised RSNs. Weakly-supervised</td></tr><tr><td>RSNs, including Semi-supervised RSN and Distantly-</td></tr><tr><td>supervised RSN, further learn from unlabeled data with</td></tr><tr><td>conditional entropy minimization and virtual adversar-</td></tr><tr><td>ial training (</td></tr></table>"
},
"TABREF4": {
"text": "Precision, recall and F1 results (%) for different models. The first two models are baselines. The next five models are different variants of our model. ered up with virtual adversarial training.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}