{
"paper_id": "D19-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:05:22.514020Z"
},
"title": "Uncover the Ground-Truth Relations in Distant Supervision: A Neural Expectation-Maximization Framework",
"authors": [
{
"first": "Junfan",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "chenjf@act.buaa.edu.cn"
},
{
"first": "Richong",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "zhangrc@act.buaa.edu.cn"
},
{
"first": "Yongyi",
"middle": [],
"last": "Mao",
"suffix": "",
"affiliation": {},
"email": "ymao@uottawa.ca"
},
{
"first": "Hongyu",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {},
"email": "hongyu.guo@nrc-cnrc.gc.ca"
},
{
"first": "Jie",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {},
"email": "j.xu@leeds.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Distant supervision for relation extraction enables one to effectively acquire structured relations out of very large text corpora with less human efforts. Nevertheless, most of the priorart models for such tasks assume that the given text can be noisy, but their corresponding labels are clean. Such unrealistic assumption is contradictory with the fact that the given labels are often noisy as well, thus leading to significant performance degradation of those models on real-world data. To cope with this challenge, we propose a novel label-denoising framework that combines neural network with probabilistic modelling, which naturally takes into account the noisy labels during learning. We empirically demonstrate that our approach significantly improves the current art in uncovering the ground-truth relation labels.",
"pdf_parse": {
"paper_id": "D19-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "Distant supervision for relation extraction enables one to effectively acquire structured relations out of very large text corpora with less human efforts. Nevertheless, most of the priorart models for such tasks assume that the given text can be noisy, but their corresponding labels are clean. Such unrealistic assumption is contradictory with the fact that the given labels are often noisy as well, thus leading to significant performance degradation of those models on real-world data. To cope with this challenge, we propose a novel label-denoising framework that combines neural network with probabilistic modelling, which naturally takes into account the noisy labels during learning. We empirically demonstrate that our approach significantly improves the current art in uncovering the ground-truth relation labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation extraction aims at automatically extracting semantic relationships from a piece of text. Consider the sentence \"Larry Page, the chief executive officer of Alphabet, Google's parent company, was born in East Lansing, Michigan.\". The knowledge triple (Larry Page, employed by, Google) can be extracted. Despite various efforts in building relation extraction models (Zelenko et al., 2002; Zhou et al., 2005; Bunescu and Mooney, 2005; Zeng et al., 2014; dos Santos et al., 2015; Yang et al., 2016; Xu et al., 2015; Miwa and Bansal, 2016; G\u00e1bor et al., 2018) , the difficulty of obtaining abundant training data with labelled relations remains a challenge, and thus motivates the development of Distant Supervision relation extraction (Mintz et al., 2009; Riedel et al., 2010; Zeng et al., 2015; Lin et al., 2016; Ji et al., 2017; Zeng et al., 2018; Feng et al., * Corresponding author 2018; Wang et al., 2018b,a; Hoffmann et al., 2011; Surdeanu et al., 2012; Jiang et al., 2016; Ye et al., 2017; Su et al., 2018; Qu et al., 2018) . Distant supervision (DS) relation extract methods collect a large dataset with \"distant\" supervision signal and learn a relation predictor from such data. In detail, each example in the dataset contains a collection, or bag, of sentences all involving the same pair of entities extracted from some corpus (e.g., news reports). Although such a dataset is expected to be very noisy, one hopes that when the dataset is large enough, useful correspondence between the semantics of a sentence and the relation label it implies still reinforces and manifests itself. Despite their capability of learning from large scale data, we show in this paper that these DS relation extraction strategies fail to adequately model the characteristic of the noise in the data. Specifically, most of the works fail to recognize that the labels can be noisy in a bag and directly use bag labels as training targets.",
"cite_spans": [
{
"start": 373,
"end": 395,
"text": "(Zelenko et al., 2002;",
"ref_id": "BIBREF27"
},
{
"start": 396,
"end": 414,
"text": "Zhou et al., 2005;",
"ref_id": "BIBREF32"
},
{
"start": 415,
"end": 440,
"text": "Bunescu and Mooney, 2005;",
"ref_id": "BIBREF0"
},
{
"start": 441,
"end": 459,
"text": "Zeng et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 460,
"end": 484,
"text": "dos Santos et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 485,
"end": 503,
"text": "Yang et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 504,
"end": 520,
"text": "Xu et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 521,
"end": 543,
"text": "Miwa and Bansal, 2016;",
"ref_id": "BIBREF14"
},
{
"start": 544,
"end": 563,
"text": "G\u00e1bor et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 740,
"end": 760,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF13"
},
{
"start": 761,
"end": 781,
"text": "Riedel et al., 2010;",
"ref_id": "BIBREF16"
},
{
"start": 782,
"end": 800,
"text": "Zeng et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 801,
"end": 818,
"text": "Lin et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 819,
"end": 835,
"text": "Ji et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 836,
"end": 854,
"text": "Zeng et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 855,
"end": 896,
"text": "Feng et al., * Corresponding author 2018;",
"ref_id": null
},
{
"start": 897,
"end": 918,
"text": "Wang et al., 2018b,a;",
"ref_id": null
},
{
"start": 919,
"end": 941,
"text": "Hoffmann et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 942,
"end": 964,
"text": "Surdeanu et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 965,
"end": 984,
"text": "Jiang et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 985,
"end": 1001,
"text": "Ye et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 1002,
"end": 1018,
"text": "Su et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 1019,
"end": 1035,
"text": "Qu et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This aforementioned observation has inspired us to study a more realistic setting for DS relation extraction. That is, we treat the bag labels as the noisy observations of the ground-truth labels. To that end, we develop a novel framework that jointly models the semantics representation of the bag, the latent ground truth labels, and the noisy observed labels. The framework, probabilistic in nature, allows any neural network, that encodes the bag semantics, to nest within. We show that the well-known Expectation-Maximization (EM) algorithm can be applied to the end-to-end learning of the models built with this framework. As such, we term the framework Neural Expectation-Maximization, or the nEM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since our approach deviates from the conventional models and regards bag labels not as the ground truth, bags with the real ground-truth la- In each matrix, rows correspond to the sentences in a bag, and columns correspond to the labels assigned to the bag. A check mark on (S i , Lj) indicates that the label L j is supported by sentence S i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "bels are required for evaluating our model. To that end, we manually re-annotate a fraction of the testing bags in a standard DS dataset with their real ground-truth labels. We then perform extensive experiments and demonstrate that the proposed nEM framework improves the current state-of-the-art models in uncovering the ground truth relations. To the best of our knowledge, this work is the first that combines a neural network model with EM training under the \"noisy-sentence noisy-label\" assumption. The re-annotated testing dataset 1 , containing the ground-truth relation labels, would also benefit the research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Let set R contain all relation labels of interest. Specifically, each label r in R corresponds to a candidate relation in which any considered pair (e, e ) of entities may participate. Since R cannot contain all relations that are implied in a corpus, we include in R an additional relation label \"NA\", which refers to any relation that cannot be regarded as the other relations in R. Any subset of R will be written as a {0, 1}valued vector of length |R|, for example, z, where each element of the vector z corresponds to a label r \u2208 R. Specifically, if and only if label r is contained in the subset, its corresponding element z[r] of z equals 1. If two entities e and e participate in a relation r, we say that the triple (e, r, e ) is fac-1 Will be released upon the acceptance of the paper tual. Let B be a finite set, in which each b \u2208 B is a pair (e, e ) of entities. Each b = (e, e ) \u2208 B serves as the index for a bag x b of sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "2"
},
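{
"text": "To make the label-vector convention above concrete, here is a minimal sketch (Python; the toy relation set and all names are hypothetical) of encoding a subset of R as a {0, 1}-valued vector indexed by relation.

# Toy illustration of the {0,1}-valued label vector convention (hypothetical relation set).
RELATIONS = ['NA', 'employed_by', 'born_in', 'founder_of']   # the label set R, including NA
REL2IDX = {r: i for i, r in enumerate(RELATIONS)}

def encode_label_set(labels):
    # z[r] = 1 if and only if relation r is contained in the given label subset.
    z = [0] * len(RELATIONS)
    for r in labels:
        z[REL2IDX[r]] = 1
    return z

# Bag indexed by the entity pair (Larry Page, Google):
z = encode_label_set({'employed_by'})   # -> [0, 1, 0, 0]
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "2"
},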
{
"text": "The objective of DS is to use a large but noisy training set to learn a predictor that predicts the relations involving two arbitrary (possibly unseen) entities; the predictor takes as input a bag of sentences each containing the two entities of interest, and hopefully outputs the set of all relations in which the two entities participate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "2"
},
{
"text": "Relation extraction is an important task in natural language processing. Many approaches with supervised methods have been proposed to complete this task. These works, such as (Zelenko et al., 2002; Zhou et al., 2005; Bunescu and Mooney, 2005) , although achieving good performance, rely on carefully selected features and well labelled dataset. Recently, neural network models, have been used in (Zeng et al., 2014; dos Santos et al., 2015; Yang et al., 2016; Xu et al., 2015; Miwa and Bansal, 2016) for supervised relation extraction. These models avoid feature engineering and are shown to improve upon previous models. But having a large number of parameters to estimate, these models rely heavily on costly human-labeled data.",
"cite_spans": [
{
"start": 176,
"end": 198,
"text": "(Zelenko et al., 2002;",
"ref_id": "BIBREF27"
},
{
"start": 199,
"end": 217,
"text": "Zhou et al., 2005;",
"ref_id": "BIBREF32"
},
{
"start": 218,
"end": 243,
"text": "Bunescu and Mooney, 2005)",
"ref_id": "BIBREF0"
},
{
"start": 397,
"end": 416,
"text": "(Zeng et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 417,
"end": 441,
"text": "dos Santos et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 442,
"end": 460,
"text": "Yang et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 461,
"end": 477,
"text": "Xu et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 478,
"end": 500,
"text": "Miwa and Bansal, 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Art and Related Works",
"sec_num": "3"
},
{
"text": "Distant supervision was proposed in (Mintz et al., 2009) to automatically generate large dataset through aligning the given knowledge base to text corpus. However, such dataset can be quite noisy. To articulate the nature of noise in DS dataset, a sentence is said to be noisy if it supports no relation labels of the bag, and a label of the bag is said to be noisy if it is not supported by any sentence in the bag. A sentence or label that is not noisy will be called clean. The cleanness of a training example may obey the following four assumptions, for each of which an example is given In Figure 1 .",
"cite_spans": [
{
"start": 36,
"end": 56,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 595,
"end": 603,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Prior Art and Related Works",
"sec_num": "3"
},
{
"text": "\u2022 Clean-Sentence Clean-Label (CSCL): All sentences and all labels are clean (Figure 1(a) ). \u2022 Noisy-Sentence Clean-Label (NSCL): Some sentences may be noisy but all labels are clean (Figure 1(b) ). Note that CSCL is a special case of NSCL. \u2022 Clean-Sentence Noisy-Label (CSNL): All sentences are clean but some labels may be noisy (Figure 1(c) ). Note that CSNL includes CSCL as a special case.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 88,
"text": "(Figure 1(a)",
"ref_id": "FIGREF0"
},
{
"start": 182,
"end": 194,
"text": "(Figure 1(b)",
"ref_id": "FIGREF0"
},
{
"start": 330,
"end": 342,
"text": "(Figure 1(c)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Prior Art and Related Works",
"sec_num": "3"
},
{
"text": "\u2022 Noisy-Sentence Noisy-Label (NSNL): Some sentences may be noisy and some labels may also be noisy (Figure 1(d) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 111,
"text": "(Figure 1(d)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Prior Art and Related Works",
"sec_num": "3"
},
{
"text": "Obviously, CSCL, NSCL, CSNL are all special cases of NSNL. Thus NSNL is the most general among all these assumptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Art and Related Works",
"sec_num": "3"
},
{
"text": "The author of (Mintz et al., 2009) creates a model under the CSCL assumption, which is however pointed out a too strong assumption (Riedel et al., 2010) . To alleviate this issue, many studies adopt NSCL assumption. Some of them, including (Riedel et al., 2010; Zeng et al., 2015; Lin et al., 2016; Ji et al., 2017; Zeng et al., 2018; Feng et al., 2018; Wang et al., 2018b,a) , formulate the task as a multi-instance learning problem where only one label is allowed for each bag. These works take sentence denoising through selecting the max-scored sentence (Riedel et al., 2010; Zeng et al., 2015 Zeng et al., , 2018 , applying sentence selection with soft attention (Lin et al., 2016; Ji et al., 2017) , performing sentence level prediction as well as filtering noisy bags (Feng et al., 2018) and redistributing the noisy sentences into negative bags (Wang et al., 2018b,a) . Other studies complete this task with multi-instance multi-label learning (Hoffmann et al., 2011; Surdeanu et al., 2012; Jiang et al., 2016; Ye et al., 2017; Su et al., 2018) , which allow overlapping relations in a bag. Despite the demonstrated successes, these models ignore the fact that relation labels can be noisy and \"noisy\" sentences that indeed point to factual relations may also be ignored. Two recent approaches using the NSNL assumption have also been introduced, but these methods are evaluated based on the assumption that the evaluation labels are clean.",
"cite_spans": [
{
"start": 14,
"end": 34,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 131,
"end": 152,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 240,
"end": 261,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF16"
},
{
"start": 262,
"end": 280,
"text": "Zeng et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 281,
"end": 298,
"text": "Lin et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 299,
"end": 315,
"text": "Ji et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 316,
"end": 334,
"text": "Zeng et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 335,
"end": 353,
"text": "Feng et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 354,
"end": 375,
"text": "Wang et al., 2018b,a)",
"ref_id": null
},
{
"start": 558,
"end": 579,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF16"
},
{
"start": 580,
"end": 597,
"text": "Zeng et al., 2015",
"ref_id": "BIBREF28"
},
{
"start": 598,
"end": 617,
"text": "Zeng et al., , 2018",
"ref_id": "BIBREF30"
},
{
"start": 668,
"end": 686,
"text": "(Lin et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 687,
"end": 703,
"text": "Ji et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 775,
"end": 794,
"text": "(Feng et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 853,
"end": 875,
"text": "(Wang et al., 2018b,a)",
"ref_id": null
},
{
"start": 952,
"end": 975,
"text": "(Hoffmann et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 976,
"end": 998,
"text": "Surdeanu et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 999,
"end": 1018,
"text": "Jiang et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 1019,
"end": 1035,
"text": "Ye et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 1036,
"end": 1052,
"text": "Su et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Art and Related Works",
"sec_num": "3"
},
{
"text": "We note that this paper is not the first work that combines neural networks with EM training. Very recently, a model also known as Neural Expectation-Maximization or NEM (Greff et al., 2017) has been presented to learn latent representations for clusters of objects (e.g., images) under a complete unsupervised setting. The NEM model is not directly applicable to our problem setting which deals with noisy supervision signals from categorical relation labels. Nonetheless, given the existence of the acronym NEM, we choose to abbreviate our Neural Expecation-Maximization model as the nEM model.",
"cite_spans": [
{
"start": 170,
"end": 190,
"text": "(Greff et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Art and Related Works",
"sec_num": "3"
},
{
"text": "We first introduce the nEM architecture and its learning strategy, and then present approaches to encode a bag of sentences (i.e., the Bag Encoding Models) needed by the framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Framework",
"sec_num": "4"
},
{
"text": "Let random variable X denote a random bag and random variable Z denote the label set assigned to X. Under the NSNL assumption, Z, or some labels within, may not be clean for X. We introduce another latent random variable Y , taking values as a subset of R, indicating the set of ground-truth (namely, clean) labels for X. We will write Y again as an |R|-dimensional {0, 1}-valued vector. From here on, we will adopt the convention that a random variable will be written using a capitalized letter, and the value it takes will be written using the corresponding lower-cased letter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "A key modelling assumption in nEM is that random variables X, Y and Z form a Markov chain X \u2192 Y \u2192 Z. Specifically, the dependency of noisy labels Z on the bag X is modelled as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "p Z|X (z|x) := y\u2208{0,1} |R| p Y |X (y|x)p Z|Y (z|y) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "The conditional distribution p Z|Y is modelled as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "p Z|Y (z|y) := r\u2208R p Z[r]|Y [r] (z[r]|y[r]) (2) That is, for each ground-truth label r \u2208 R, Z[r] depends only on Y [r]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": ". Furthermore, we assume that for each r, there are two parameters \u03c6 0 r and \u03c6 1 r governing the dependency of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z[r] on Y [r] via p Z[r]|Y [r] (z[r]|y[r]):= \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 \u03c6 0 r y[r]= 0,z[r]= 1 1\u2212\u03c6 0 r y[r]= 0,z[r]= 0 \u03c6 1 r y[r]= 1,z[r]= 0 1\u2212\u03c6 1 r y[r]= 1,z[r]= 1",
"eq_num": "(3)"
}
],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "We will denote by {\u03c6 0 r , \u03c6 1 r : r \u2208 R} collectively by \u03c6. .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "On the other hand, we model p Y |X by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p Y |X (y|x) := r\u2208R p Y [r]|X (y[r]|x)",
"eq_num": "(4)"
}
],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
{
"text": "where x is the encoding of bag is essentially a logistic regression (binary) classifier based on the encoding x of bag x. We will denote by \u03b8 the set of all parameters {r : r \u2208 R} and the parameters for generating encoding x. At this end, we have defined the overall structure of the nEM framework (summarized in Figure 2 ). Next, we will discuss the learning of it. ",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 322,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},
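{
"text": "For concreteness, the following minimal sketch (assuming PyTorch; the bag encoder producing x\u0304 is treated as given, and all names are hypothetical) implements the per-relation noise model p_{Z[r]|Y[r]} of Equation 3 and the per-relation logistic classifier p_{Y[r]|X} of Equation 4, and combines them as in Equation 1.

import torch

class NoisyLabelModel(torch.nn.Module):
    # A sketch of p(Z|Y) (Eq. 3) and p(Y|X) (Eq. 4); not the authors' exact implementation.
    def __init__(self, num_relations, enc_dim, phi0, phi1):
        super().__init__()
        # phi0[r] = p(z[r]=1 | y[r]=0), phi1[r] = p(z[r]=0 | y[r]=1); held fixed here.
        self.register_buffer('phi0', torch.tensor(phi0, dtype=torch.float))
        self.register_buffer('phi1', torch.tensor(phi1, dtype=torch.float))
        # One binary logistic classifier per relation, applied to the bag encoding.
        self.classifier = torch.nn.Linear(enc_dim, num_relations)

    def p_y_given_x(self, bag_enc):
        # p(Y[r]=1 | x) for every relation r; shape (batch, |R|).
        return torch.sigmoid(self.classifier(bag_enc))

    def p_z_given_y(self, z, y):
        # Eq. 3: probability of the observed (noisy) labels z given clean labels y.
        p_z1 = torch.where(y > 0.5, 1.0 - self.phi1, self.phi0)   # p(z[r]=1 | y[r])
        return torch.where(z > 0.5, p_z1, 1.0 - p_z1)

    def p_z_given_x(self, z, bag_enc):
        # Eq. 1, factorized per relation: sum over y[r] in {0, 1}.
        p_y1 = self.p_y_given_x(bag_enc)
        ones, zeros = torch.ones_like(z), torch.zeros_like(z)
        return (self.p_z_given_y(z, ones) * p_y1
                + self.p_z_given_y(z, zeros) * (1.0 - p_y1))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The nEM Architecture",
"sec_num": "4.1"
},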
{
"text": "Let (z|x; \u03c6, \u03b8) be the log-likelihood of observing the label set z given the bag x, that is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(z|x; \u03c6, \u03b8) := log p Z|X (z|x).",
"eq_num": "(6)"
}
],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "The structure of the framework readily enables a principled learning algorithm based on the EM algorithm (Dempster et al., 1977) . Let total (\u03c6, \u03b8) be the log-likelihood defined as",
"cite_spans": [
{
"start": 105,
"end": 128,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "total (\u03c6, \u03b8) := b\u2208B (z b |x b ; \u03c6, \u03b8).",
"eq_num": "(7)"
}
],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "The learning problem can then be formulated as maximizing this objective function over its parameters (\u03c6, \u03b8), or solving",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( \u03c6, \u03b8) := arg max \u03c6,\u03b8 total (\u03c6, \u03b8).",
"eq_num": "(8)"
}
],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "Let Q be an arbitrary distribution over {0, 1} |R| . Then it is possible to show",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(z|x; \u03c6, \u03b8) = log y\u2208{0,1} |R| Q(y) r\u2208R p Y [r]|X (y[r]|x)p Z[r]|Y [r] (z[r]|y[r]) Q(y) y\u2208{0,1} |R| Q(y) log r\u2208R p Y [r]|X (y[r]|x)p Z[r]|Y [r] (z[r]|y[r]) Q(y) = L(z|x; \u03c6, \u03b8, Q)",
"eq_num": "(9)"
}
],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "where the lower bound L(z|x; \u03c6, \u03b8, Q), often referred to as the variational lower bound. Now we define L total as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "L total (\u03c6,\u03b8,{Q b : b \u2208 B}):= b\u2208B L (z b |x b ;\u03c6,\u03b8,Q b ) (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "where we have introduced a Q b for each b \u2208 B. Instead of solving the original optimization problem (8), we can turn to solving a different optimization problem by maximizing L total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "( \u03c6, \u03b8, { Q b }) := arg max \u03c6,\u03b8,{Q b } L total (\u03c6, \u03b8, {Q b }) (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "The EM algorithm for solving the optimization problem (11) is essentially the coordinate ascent algorithm on objective function L total , where we iterate over two steps, the E-Step and the M-Step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "In the E-Step, we maximizes L total over {Q b } for the current setting of (\u03c6, \u03b8) and in the M-Step, we maximize L total (\u03c6, \u03b8) for the current setting of {Q b }. We now describe the two steps in detail, where we will use superscript t to denote the iteration number. E-step: In this step, we hold (\u03c6, \u03b8) fixed and update {Q b : b \u2208 B} to maximize the lower bound L total (\u03c6, \u03b8, {Q b }). This boils down to update each factor Q b,r of Q b according to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q t+1 b,r (y[r]) : = p Y [r]|X,Z (y b [r]|x b , z b [r]; \u03b8 t , \u03c6 t r ) = p Z|Y (z b [r]|y b [r]; \u03c6 t r )p Y |X (y b [r]|x b ; \u03b8 t ) p Z|X (z b [r]|x b ; \u03b8 t , \u03c6 t r ) = p Z|Y (z b [r]|y b [r]; \u03c6 t r )p Y |X (y b [r]|x b ; \u03b8 t ) y[r]\u2208{0,1} p Z|Y (z b [r]|y[r]; \u03c6 t r )p Y |X (y[r]|x b ; \u03b8 t )",
"eq_num": "(12)"
}
],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
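{
"text": "A minimal sketch (NumPy; function and argument names are hypothetical) of the per-relation E-step update of Equation 12, computing the posterior over the clean label y[r] given the current model and the observed noisy label.

import numpy as np

def e_step(p_y1_given_x, z, phi0, phi1):
    # Eq. 12: posterior Q_{b,r}(y[r]=1) for every relation r of one bag.
    # p_y1_given_x: array (|R|,), current p(Y[r]=1 | x_b)
    # z:            array (|R|,), observed (possibly noisy) bag labels in {0, 1}
    # phi0, phi1:   arrays (|R|,), noise parameters of Eq. 3
    p_z_given_y1 = np.where(z == 1, 1.0 - phi1, phi1)   # p(z[r] | y[r]=1)
    p_z_given_y0 = np.where(z == 1, phi0, 1.0 - phi0)   # p(z[r] | y[r]=0)
    num = p_z_given_y1 * p_y1_given_x
    den = num + p_z_given_y0 * (1.0 - p_y1_given_x)      # Bayes' rule, normalize over y[r]
    return num / den
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},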
{
"text": "M-step: In this step, we hold {Q b } fixed and update (\u03c6, \u03b8), to maximize the lower bound",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L total (\u03c6, \u03b8, {Q b }). Let M = r\u2208R p Y [r]|X (y b [r]|x b )p Z[r]|Y [r] (z b [r]|y b [r]),",
"eq_num": "(13)"
}
],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "then (\u03c6, \u03b8) is updated according to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(\u03b8 t+1 , \u03c6 t+1 ) := argmax \u03b8,\u03c6 L total (\u03b8, \u03c6, {Q t+1 b }) = argmax \u03b8,\u03c6 b\u2208B y b \u2208{0,1} |R| Q t+1 b (y)log M Q t+1 b (y) = argmax \u03b8,\u03c6 b\u2208B y b \u2208{0,1} |R| Q t+1 b (y)logM\u2212 b\u2208B y b \u2208{0,1} |R| Q t b (y)logQ t+1 b (y) = argmax \u03b8,\u03c6 b\u2208B r\u2208R y b [r]\u2208{0,1} Q t+1 b,r (y[r]) log p Z|Y (z b [r]|y b [r]; \u03c6r) + log p Y |X (y b [r]|x b ; \u03b8)",
"eq_num": "(14)"
}
],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "Overall the EM algorithm starts with initializing (\u03c6, \u03b8, {Q b }), and then iterates over the two steps until convergence or after some prescribed number of iterations is reached. There are however several caveats on which caution is needed. First, the optimization problem in the M-Step cannot be solved in closed form. As such we take a stochastic gradient descent (SGD) approach 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
{
"text": "In each M-Step, we perform \u2206 updates, where \u2206 is a hyper-parameter. Such an approach is sometimes referred to as \"Generalized EM\" (Wu, 1983; Jojic and Frey, 2001; Greff et al., 2017) . Note that since the parameters \u03b8 are parameters for the neural network performing bag encoding, the objective function L total is highly non-convex with respect to \u03b8. This makes it desirable to choose an appropriate \u2206. Too small \u2206 results in little change in \u03b8 and hence provides insufficient signal to update {Q b }; too large \u2206, particularly at the early iterations when {Q b } has not been sufficiently optimized, tends to make the optimization stuck in undesired local optimum. In practice, one can try several values of \u2206 by inspecting their achieved value of L total and select the \u2206 giving rise to the highest L total . Note that such a tuning of \u2206 requires no access of the testing data. The second issue is that the EM algorithm is known to be sensitive to initialization. In our implementation, in order to provide a good initial parameter setting, we set each Q 0 b to z b . Despite the fact that z contains noise, this is a much better approximation of the posterior of true labels than any random initialization. The nEM framework needs the encoding of a bag of x as discussed in Equation 4. Any suitable neural network can be deployed to achieve this goal. Next, we present the widely used methods for DS relation extraction strategies: the Bag Encoding Models.",
"cite_spans": [
{
"start": 130,
"end": 140,
"text": "(Wu, 1983;",
"ref_id": "BIBREF22"
},
{
"start": 141,
"end": 162,
"text": "Jojic and Frey, 2001;",
"ref_id": "BIBREF8"
},
{
"start": 163,
"end": 182,
"text": "Greff et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},
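{
"text": "The overall procedure can be summarized with the following minimal sketch (PyTorch-style; model.encode, model.p_y_given_x and the other names are hypothetical, and the noise parameters \u03c6 are held fixed, as in the settings of Section 5). It alternates \u2206 stochastic updates on the M-step objective of Equation 14 with the E-step refresh of Equation 12, starting from Q^0_b = z_b.

import torch

def train_nem(model, optimizer, bags, z_labels, phi0, phi1, num_em_iters, delta):
    # Generalized EM sketch: alternate delta gradient updates (M-step) and E-step posteriors.
    q = [z.clone().float() for z in z_labels]   # initialize Q^0_b to the observed labels z_b

    for _ in range(num_em_iters):
        # M-step: hold {Q_b} fixed, run delta stochastic updates on the objective of Eq. 14.
        for _ in range(delta):
            b = torch.randint(len(bags), (1,)).item()          # sample one bag
            p_y1 = model.p_y_given_x(model.encode(bags[b]))    # p(Y[r]=1 | x_b)
            p_z_y1 = torch.where(z_labels[b] > 0.5, 1.0 - phi1, phi1)   # p(z_b[r] | y[r]=1)
            p_z_y0 = torch.where(z_labels[b] > 0.5, phi0, 1.0 - phi0)   # p(z_b[r] | y[r]=0)
            # Expected complete-data log-likelihood for this bag (Eq. 14).
            ll = (q[b] * (torch.log(p_z_y1 + 1e-12) + torch.log(p_y1 + 1e-12))
                  + (1 - q[b]) * (torch.log(p_z_y0 + 1e-12) + torch.log(1 - p_y1 + 1e-12))).sum()
            optimizer.zero_grad()
            (-ll).backward()
            optimizer.step()

        # E-step: hold (phi, theta) fixed and refresh every Q_b with Eq. 12.
        with torch.no_grad():
            for b in range(len(bags)):
                p_y1 = model.p_y_given_x(model.encode(bags[b]))
                num = torch.where(z_labels[b] > 0.5, 1.0 - phi1, phi1) * p_y1
                den = num + torch.where(z_labels[b] > 0.5, phi0, 1.0 - phi0) * (1.0 - p_y1)
                q[b] = num / den
    return model, q
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with the EM Algorithm",
"sec_num": "4.2"
},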
{
"text": "As illustrated in Figure 3 , the Bag Encoding Models include three components: Word-Position Embedding, Sentence Encoding, and Sentence Selectors.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Bag Encoding Models",
"sec_num": "4.3"
},
{
"text": "For the j th word in a sentence, Word-Position Embedding generates a vector representation w j as concatenated three components w w j , w p1 j , w p2 j . Specifically, w w j is the word embedding of the word and w p1 j and w p2 j are two position embeddings. Here w p1 j (resp. w p2 j ) are the embedding of the relative location of the word with respect to the first (resp. second) entity in the sentence. The dimensions of word and position embeddings are denoted by d w and d p respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Position Embedding",
"sec_num": "4.3.1"
},
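{
"text": "A minimal sketch (PyTorch; class and argument names are hypothetical, and the distance offset is an assumption) of the word-position embedding: each token representation is the concatenation of its word embedding with two position embeddings relative to the two entities.

import torch

class WordPositionEmbedding(torch.nn.Module):
    # Token representation w_j = [w^w_j ; w^p1_j ; w^p2_j] (Section 4.3.1).
    def __init__(self, vocab_size, d_w=50, d_p=5, max_dist=120):
        super().__init__()
        self.word_emb = torch.nn.Embedding(vocab_size, d_w)
        # One position-embedding table per entity; signed distances are offset to be non-negative.
        self.pos1_emb = torch.nn.Embedding(2 * max_dist + 1, d_p)
        self.pos2_emb = torch.nn.Embedding(2 * max_dist + 1, d_p)
        self.offset = max_dist

    def forward(self, word_ids, pos1, pos2):
        # word_ids, pos1, pos2: (batch, sent_len); pos1/pos2 are distances to the two entities.
        return torch.cat([self.word_emb(word_ids),
                          self.pos1_emb(pos1 + self.offset),
                          self.pos2_emb(pos2 + self.offset)], dim=-1)   # (batch, sent_len, d_w + 2*d_p)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Position Embedding",
"sec_num": "4.3.1"
},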
{
"text": "Sentence encoding uses Piecewise Convolutional Neural Networks (PCNN) (Zeng et al., 2015; Lin et al., 2016; Ye et al., 2017; Ji et al., 2017) , which consists of convolution followed by Piecewise Max-pooling. The convolution operation uses a list of matrix kernels such as {K 1 , K 2 , \u2022 \u2022 \u2022 , K l ker } to extract n-gram features through sliding windows of length l win . Here K i \u2208 R l win \u00d7de and l ker is the number of kernels. Let w j\u2212l win +1:j \u2208 R l win \u00d7de be the concatenated vector of token embeddings in the j th window. The output of convolution operation is a matrix U \u2208 R l ker \u00d7(m+l win \u22121) where each element is computed by",
"cite_spans": [
{
"start": 70,
"end": 89,
"text": "(Zeng et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 90,
"end": 107,
"text": "Lin et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 108,
"end": 124,
"text": "Ye et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 125,
"end": 141,
"text": "Ji et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Encoding",
"sec_num": "4.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "U ij = K i w j\u2212l win +1:j + b i",
"eq_num": "(15)"
}
],
"section": "Sentence Encoding",
"sec_num": "4.3.2"
},
{
"text": "where denotes inner product. In Piecewise Max-Pooling, U i is segmented to three parts U 1 i , U 2 i , U 3 i depending on whether an element in U i is on the left or right of the two entities, or between the two entities. Then max-pooling is applied to each segment, giving rise to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Encoding",
"sec_num": "4.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g ip = max(U p i ), 1 i l ker , 1 p 3 (16) Let g = g 1 , g 2 , g 3 . Then sentence encoding outputs v = ReLU(g) \u2022 h,",
"eq_num": "(17)"
}
],
"section": "Sentence Encoding",
"sec_num": "4.3.2"
},
{
"text": "where \u2022 is element-wise multiplication and h is a vector of Bernoulli random variables, representing dropouts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Encoding",
"sec_num": "4.3.2"
},
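{
"text": "A minimal sketch (PyTorch; names are hypothetical, and the segment masks marking the three pieces of each sentence are assumed precomputed) of the PCNN encoder: a 1-D convolution over the token embeddings followed by piecewise max-pooling (Equations 15-17).

import torch

class PCNN(torch.nn.Module):
    # Piecewise CNN sentence encoder (Eqs. 15-17); a sketch, not the authors' exact code.
    def __init__(self, d_e, l_ker=230, l_win=3, dropout=0.5):
        super().__init__()
        self.conv = torch.nn.Conv1d(d_e, l_ker, kernel_size=l_win, padding=l_win - 1)
        self.dropout = torch.nn.Dropout(dropout)

    def forward(self, tokens, seg_mask):
        # tokens: (batch, sent_len, d_e); seg_mask: (batch, 3, sent_len), 1s mark each segment.
        u = self.conv(tokens.transpose(1, 2))          # (batch, l_ker, sent_len + l_win - 1), Eq. 15
        u = u[:, :, :tokens.size(1)]                   # align with token positions (a simplification)
        u = u.unsqueeze(1)                             # (batch, 1, l_ker, sent_len)
        mask = seg_mask.unsqueeze(2)                   # (batch, 3, 1, sent_len)
        # Piecewise max-pooling: mask out the other segments before taking the max (Eq. 16).
        g = u.masked_fill(mask == 0, float('-inf')).max(dim=-1).values   # (batch, 3, l_ker)
        g = g.flatten(1)                               # concatenate [g^1 ; g^2 ; g^3]
        return self.dropout(torch.relu(g))             # Eq. 17; dropout plays the role of h
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Encoding",
"sec_num": "4.3.2"
},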
{
"text": "Let n be the number of sentences in a bag. We denote a matrix V \u2208 R n\u00d7(l ker \u00d73) consisting of each sentence vector v T . V k: and V :j are used to index the k th row vector and j th column vector of V respectively. Three kinds of sentenceselectors are used to construct the bag encoding. Mean-selector (Lin et al., 2016; Ye et al., 2017) :",
"cite_spans": [
{
"start": 303,
"end": 321,
"text": "(Lin et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 322,
"end": 338,
"text": "Ye et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
{
"text": "The bag encoding is computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x = 1 n n k=1 V k:",
"eq_num": "(18)"
}
],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
{
"text": "Max-selector (Jiang et al., 2016) : The j th element of bag encoding x is computed as",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "(Jiang et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x j = max(V :j )",
"eq_num": "(19)"
}
],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
{
"text": "Attention-selector Attention mechanism is extensively used for sentence selection in relation extraction by weighted summing of the sentence vectors, such as in (Lin et al., 2016; Ye et al., 2017; Su et al., 2018) . However, all these works assume that the labels are correct and only use the golden label embeddings to select sentences at training stage. We instead selecting sentences using all label embeddings r and construct a bag encoding for each label r \u2208 R. The output is then a list of vectors {x r }, in which the r th vector is calculated through attention mechanism as",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "(Lin et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 180,
"end": 196,
"text": "Ye et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 197,
"end": 213,
"text": "Su et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
{
"text": "e l =V T l: Ar, \u03b1 k = exp(e k ) n l=1 exp(e l )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
{
"text": "x r = n k=1 \u03b1 k V k: (20)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
{
"text": "where A \u2208 R dr\u00d7dr is a diagonal matrix and d r is the dimension of relation embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},
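{
"text": "A minimal sketch (PyTorch; names are hypothetical, and the relation-embedding dimension is assumed to match the sentence-vector dimension) of the three sentence selectors over the matrix V of sentence encodings: the mean-selector (Eq. 18), the max-selector (Eq. 19), and the attention-selector (Eq. 20), which builds one bag encoding per relation.

import torch

def mean_selector(V):
    # Eq. 18: average of the sentence vectors in the bag. V: (n, d)
    return V.mean(dim=0)

def max_selector(V):
    # Eq. 19: element-wise max over the sentence vectors. V: (n, d)
    return V.max(dim=0).values

def attention_selector(V, rel_emb, A):
    # Eq. 20: one bag encoding per relation, weighted by attention over the n sentences.
    # V: (n, d), rel_emb: (|R|, d), A: (d, d) diagonal matrix.
    scores = V @ A @ rel_emb.t()              # e_{l,r} = V_{l:} A r, shape (n, |R|)
    alpha = torch.softmax(scores, dim=0)      # normalize over sentences
    return alpha.t() @ V                      # (|R|, d) bag encodings, one per relation
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selectors",
"sec_num": "4.3.3"
},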
{
"text": "We first conduct experiments on the widely used benchmark data set Riedel (Riedel et al., 2010) , and then on the TARCED (Zhang et al., 2017) data set. The latter allows us to control the noise level in the labels to observe the behavior and working mechanism of our proposed method. The code for our model is found on the Github page 3 .",
"cite_spans": [
{
"start": 74,
"end": 95,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 121,
"end": 141,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Study",
"sec_num": "5"
},
{
"text": "The Riedel dataset 2 is a widely used DS dataset for relation extraction. It was developed in (Riedel et al., 2010) ",
"cite_spans": [
{
"start": 94,
"end": 115,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation on the Riedel Dataset",
"sec_num": "5.1"
},
{
"text": "The Riedel dataset contains no ground-truth labels. The held-out evaluation (Mintz et al., 2009) is highly unreliable in measuring a model's performance against the ground truth. Since this study is concerned with discovering the ground-truth labels, the conventional held-out evaluation is no longer appropriate. For this reason, we annotate a subset of the original test data for evaluation purpose. Specifically, we annotate a bag x b by its all correct labels, and if no such labels exist, we label the bag NA. In total 3762 bags are annotated, which include all bags originally labelled as non-NA (\"Part 1\") and 2000 bags (\"Part 2\") selected from the bags originally labelled as NA but have relatively low score for NA label under a pre-trained PCNN+ATT (Lin et al., 2016) model. The rationale here is that we are interested in inspecting the model's performance in detecting the labels of known relations rather than NA relation. Table 1 : The statistics of ground-truth annotation. origin denotes the total number of originally assigned labels of these bags. correct and wrong denote the total number of correctly assigned and wrongly assigned labels. added denote the number of missing labels we added into these bags. The statistics of annotation is shown in Table 1 . Through the annotation, we notice that, about 36% of the original labels in original non-NA bags and 53% of labels in original NA bags are wrongly assigned. Similar statistics has been reported in previous works (Riedel et al., 2010; Feng et al., 2018) .",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 759,
"end": 777,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 1491,
"end": 1512,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF16"
},
{
"start": 1513,
"end": 1531,
"text": "Feng et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 936,
"end": 943,
"text": "Table 1",
"ref_id": null
},
{
"start": 1268,
"end": 1276,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ground-Truth Annotation",
"sec_num": "5.1.1"
},
{
"text": "For the Riedel dataset, we train the models on the noisy training set and test the models on the manually labeled test set. The precision-recall (PR) curve is reported to compare performance of models. Three baselines are considered:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Baselines",
"sec_num": "5.1.2"
},
{
"text": "\u2022 PCNN+MEAN (Lin et al., 2016; Ye et al., 2017) : A model using PCNN to encode sentences and a mean-selector to generate bag encoding. \u2022 PCNN+MAX (Jiang et al., 2016) : A model using PCNN to encode sentences and a maxselector to generate bag encoding. \u2022 PCNN+ATT (Lin et al., 2016; Ye et al., 2017; Su et al., 2018) : A model using PCNN to encode sentences and an attention-selector to generate bag encoding.",
"cite_spans": [
{
"start": 12,
"end": 30,
"text": "(Lin et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 31,
"end": 47,
"text": "Ye et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 146,
"end": 166,
"text": "(Jiang et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 263,
"end": 281,
"text": "(Lin et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 282,
"end": 298,
"text": "Ye et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 299,
"end": 315,
"text": "Su et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Baselines",
"sec_num": "5.1.2"
},
{
"text": "We compare the three baselines with their nEM versions (namely using them as the Bag Encoding component in nEM), which are denoted with a \"+nEM\" suffix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Baselines",
"sec_num": "5.1.2"
},
{
"text": "Following previous work, we tune our models using three-fold validation on the training set. As the best configurations, we set d w = 50, d p = 5 and d r = 230. For PCNN, we set l win = 3, l ker = 230 and set the probability of dropout to 0.5. We use Adadelta (Zeiler, 2012) with default setting (\u03c1 = 0.95, \u03b5 = 1e \u22126 ) to optimize the models and the initial learning rate is set as 1. The batch size is fixed to 160 and the max length of a sentence is set as 120. For the noise model p Z|Y , we set \u03c6 0 N A = 0.3, \u03c6 1 N A = 0 for the NA label and \u03c6 0 r = 0, \u03c6 1 r = 0.9 for other labels r = N A. In addition, the number \u2206 of SGD updates in M-step is set to 2000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Detail",
"sec_num": "5.1.3"
},
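{
"text": "For concreteness, a minimal sketch (Python; names are hypothetical) of how the noise parameters \u03c6 reported above could be assembled into per-relation vectors for the noise model.

import numpy as np

def build_phi(relations, na_label='NA'):
    # phi0[r] = p(z[r]=1 | y[r]=0), phi1[r] = p(z[r]=0 | y[r]=1), per the values in Section 5.1.3.
    phi0 = np.zeros(len(relations))
    phi1 = np.full(len(relations), 0.9)
    na = relations.index(na_label)
    phi0[na], phi1[na] = 0.3, 0.0
    return phi0, phi1
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Detail",
"sec_num": "5.1.3"
},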
{
"text": "The evaluation results on manually labeled test set are shown in Figure 4 . From which, we observe that the PR-curves of the nEM models are above their corresponding baselines by significant margins, especially in the low-recall regime. This observation demonstrates the effectiveness of the nEM model on improving the extracting performance upon the baseline models. We also observe that models with attention-selector achieve better performance than models with mean-selector and max-selector, demonstrating the superiority of the attention mechanism on sentence selection.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Predictive performance",
"sec_num": "5.1.4"
},
{
"text": "We then analyze the predicting probability of PCNN+ATT and PCNN+ATT+nEM on the ground-truth labels in the test set. We divide the predicting probability values into 5 bins and count the number of label within each bin. The result is shown in Figure 5(a) . We observe that the count for PCNN+ATT in bins 0.0 \u2212 0.2, 0.2 \u2212 0.4, 0.4 \u2212 0.6 and 0.6 \u2212 0.8 are all greater than PCNN+ATT+nEM. But in bin 0.8 \u2212 1.0, the count for PCNN+ATT+nEM is about 55% larger than PCNN+ATT. This observation indicates that nEM can promote the overall predicting scores of ground-truth labels. Figure 5 (b) compares PCNN+ATT and PCNN+ATT+nEM in their predictive probabilities on the frequent relations. The result shows that PCNN+ATT+nEM achieves higher average predicting probability on all these relations, except for the NA relation, on which PCNN+ATT+nEM nearly levels with PCNN+ATT. This phenomenon demonstrates that nEM tends to be more confident in predicting the correct labels. In this case, raising the predictive probability on the correct label does increase the models ability to make the correct decision, thereby improving performance. ",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 253,
"text": "Figure 5(a)",
"ref_id": "FIGREF6"
},
{
"start": 570,
"end": 579,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Predictive performance",
"sec_num": "5.1.4"
},
{
"text": "TACRED is a large supervised relation extraction dataset collected in (Zhang et al., 2017) . The text corpus is generated from the TAC KBP evaluation of years 2009 \u2212 2015. Corpus from years 2009 \u2212 2012, year 2013 and year 2014 are used as training, validation and testing sets respectively. In all annotated sentences, 68, 124 are used for training, 22, 631 for validation and 15, 509 for testing. There are in total 42, 394 distinct entity pairs, where 33, 079 of these entity pairs appear in exactly one sentence. Since these 33, 079 entity pairs dominate the training set, we simply treat each sentence as a bag for training and testing. Another advantage of constructing single-sentence bags is that it allows us to pinpoint the correspondence between the correct label and its supporting sentence. The number of relations in TACRED dataset is 42, including a special relation \"no relation\", which is treated as NA in this study.",
"cite_spans": [
{
"start": 70,
"end": 90,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation on the TACRED Dataset",
"sec_num": "5.2"
},
{
"text": "To obtain insight into the working of nEM, we create a simulated DS dataset by inserting noisy labels into the training set of a supervised dataset, TACRED. Since the training set of TACRED was manually annotated, the labels therein may be regarded as the ground-truth labels. Training using this semi-synthetic dataset allows us to easily observe models' behaviour with respect to the noisy labels and the true labels. We inject artificial noise into the TACRED training data through the following precedure. For each bag x b in the training set, we generate a noisy label vectorz b from the observed ground-truth label vector y b . Specifically,z b is generated by flipping each element y b [r] from 0 to 1 or 1 to 0 with a probability p f . This precedure simulates a DS dataset through introducing wrong labels into training bags, thus corrupts the training dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Semi-synthetic Dataset",
"sec_num": "5.2.1"
},
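{
"text": "A minimal sketch (NumPy; names are hypothetical) of the label-corruption procedure described above: each element of the ground-truth label vector is flipped with probability p_f.

import numpy as np

def corrupt_labels(y, p_f, seed=0):
    # Flip each element of the {0,1} ground-truth label vector y with probability p_f.
    rng = np.random.default_rng(seed)
    flip = rng.random(y.shape) < p_f
    return np.where(flip, 1 - y, y)

# Example: a single-label TACRED-style vector with 42 relations.
y = np.zeros(42, dtype=int)
y[7] = 1
z_noisy = corrupt_labels(y, p_f=0.1)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Semi-synthetic Dataset",
"sec_num": "5.2.1"
},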
{
"text": "Following (Zhang et al., 2017) , the common relation classification metrics Precision, Recall and F1 are used for evaluation. The PCNN model is used to generate the bag encoding since sentence selection is not needed in this setting. The same hyperparameter settings as in Section 5.1.3 are used in this experiment. For the noise model p Z[r]|Y [r] of PCNN +nEM, we set \u03c6 0 r = 0.1, \u03c6 1 r = 0.1 for each label r \u2208 R. The number \u2206 of SGD updates in each M-step is set to 1600.",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.2.2"
},
{
"text": "From Table 2 , we see that PCNN+nEM achieves better recall and F1 score than the PCNN model under various noise levels. Additionally, the recall and F1 margins between PCNN and PCNN+nEM increase with noise levels. This suggests that nEM keeps better performance than the corresponding baseline model under various level of training noise. We also observe that the precision of nEM is consistently lower than that of PCNN when noise is injected to TACRED. This is a necessary trade-off present in nEM. The training of nEM regards the training labels with less confidence based on noisy-label assumption. This effectively lowers the probability of seen training labels and considers the unseen labels also having some probability to occur. When trained this way, nEM, at the prediction time, tends to predict more labels than its baseline (PCNN) does. Note that in TACRED, each instance contains only a single ground truth label. Thus the tendency of nEM to predict more labels leads to the reduced precision. However, despite this tendency, nEM, comparing with PCNN, has a stronger capability in detecting the correct label and gives the better recall of nEM. The gain in recall dominates the loss in precision. ",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Test Results",
"sec_num": "5.2.3"
},
{
"text": "The predicting probabilities for the noise labels and the original true labels are also evaluated under the trained models. The results are shown in Figure 6 (left) which reveals that with increasing noise, the average probability for noisy label sets of PCNN and PCNN+nEM both increase and average scores for original label sets of PCNN and PCNN+nEM both decrease. The performance degradation of PCNN and PCNN+nEM under noise appears different. The average probability for noisy label sets rises with a higher slope in PCNN than in PCNN+nEM. Additionally, the average probability for the original label sets of PCNN+nEM is higher than or equal to PCNN at all noise levels. These observations confirms that the denoising capability of nEM is learned from effectively denoising the training set. ",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 157,
"text": "Figure 6",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Training Label Probabilities",
"sec_num": "5.2.4"
},
{
"text": "For each training bag x b and each artificial noisy label r, we track the probability Q b,r (y[r]) = 1 over EM iterations. This probability, measuring the likelihood of the noise label r being correct, is then averaged over r and over all bags x b . It can be seen in Figure 6 (right) that the average value of Q b,r (y[r]) decreases as the training progresses, leading the model to gradually ignore noisy labels. This demonstrates the effectiveness of EM iterations and validates the proposed EM-based framework.",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 276,
"text": "Figure 6",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Effectiveness of EM Iterations",
"sec_num": "5.2.5"
},
{
"text": "We proposed a nEM framework to deal with the noisy-label problem in distance supervision relation extraction. We empirically demonstrated the effectiveness of the nEM framework, and provided insights on its working and behaviours through data with controllable noise levels. Our framework is a combination of latent variable models in probabilistic modelling with contemporary deep neural networks. Consequently, it naturally supports a training algorithm which elegantly nests the SGD training of any appropriate neural network inside an EM algorithm. We hope that our approach and the annotated clean testing data would inspire further research along this direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "6"
},
{
"text": "In fact, the approach should be called \"stochastic gradient ascent\", since we are maximizing the objective function not minimizing it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/AlbertChen1991/nEM 2 http://iesl.cs.umass.edu/riedel/ecml/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported partly by China 973 program (No. 2015CB358700), by the National Natural Science Foundation of China (No. 61772059, 61421003), by the Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC), by State Key Laboratory of Software Development Environment (No. SKLSDE-2018ZX-17) and by the Fundamental Research Funds for the Central Universities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Subsequence kernels for relation extraction",
"authors": [
{
"first": "C",
"middle": [],
"last": "Razvan",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Bunescu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in Neural Information Processing Systems 18 [Neural Information Processing Systems, NIPS 2005",
"volume": "",
"issue": "",
"pages": "171--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C. Bunescu and Raymond J. Mooney. 2005. Subsequence kernels for relation extraction. In Advances in Neural Information Processing Sys- tems 18 [Neural Information Processing Systems, NIPS 2005, December 5-8, 2005, Vancouver, British Columbia, Canada], pages 171-178.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Maximum likelihood from incomplete data via the em algorithm",
"authors": [
{
"first": "P",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"M"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "Donald B",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the royal statistical society. Series B (methodological)",
"volume": "",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the royal statistical society. Series B (methodological), pages 1-38.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reinforcement learning for relation classification from noisy data",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "5779--5786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xi- aoyan Zhu. 2018. Reinforcement learning for re- lation classification from noisy data. In Proceed- ings of the Thirty-Second AAAI Conference on Ar- tificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5779- 5786.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers",
"authors": [
{
"first": "Kata",
"middle": [],
"last": "G\u00e1bor",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Buscaldi",
"suffix": ""
},
{
"first": "Anne-Kathrin",
"middle": [],
"last": "Schumann",
"suffix": ""
},
{
"first": "Behrang",
"middle": [],
"last": "Qasemizadeh",
"suffix": ""
},
{
"first": "Ha\u00effa",
"middle": [],
"last": "Zargayouna",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Charnois",
"suffix": ""
}
],
"year": 2018,
"venue": "SemEval@NAACL-HLT",
"volume": "",
"issue": "",
"pages": "679--688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kata G\u00e1bor, Davide Buscaldi, Anne-Kathrin Schu- mann, Behrang QasemiZadeh, Ha\u00effa Zargayouna, and Thierry Charnois. 2018. Semeval-2018 task 7: Semantic relation extraction and classification in scientific papers. In SemEval@NAACL-HLT, pages 679-688. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural expectation maximization",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Greff",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Sjoerd Van Steenkiste",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6694--6704",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Greff, Sjoerd van Steenkiste, and J\u00fcrgen Schmidhuber. 2017. Neural expectation maximiza- tion. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Informa- tion Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6694-6704.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Knowledge-based weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke S. Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In The 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, Pro- ceedings of the Conference, 19-24 June, 2011, Port- land, Oregon, USA, pages 541-550.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distant supervision for relation extraction with sentence-level attention and entity descriptions",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3060--3066",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3060-3066.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Relation extraction with multi-instance multilabel convolutional neural networks",
"authors": [
{
"first": "Xiaotian",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "1471--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaotian Jiang, Quan Wang, Peng Li, and Bin Wang. 2016. Relation extraction with multi-instance multi- label convolutional neural networks. In COLING 2016, 26th International Conference on Computa- tional Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 1471-1480.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning flexible sprites in video layers",
"authors": [
{
"first": "Nebojsa",
"middle": [],
"last": "Jojic",
"suffix": ""
},
{
"first": "Brendan",
"middle": [
"J"
],
"last": "Frey",
"suffix": ""
}
],
"year": 2001,
"venue": "2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), with CD-ROM",
"volume": "",
"issue": "",
"pages": "199--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nebojsa Jojic and Brendan J. Frey. 2001. Learning flexible sprites in video layers. In 2001 IEEE Com- puter Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), with CD-ROM, 8-14 December 2001, Kauai, HI, USA, pages 199- 206.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7- 12, 2016, Berlin, Germany, Volume 1: Long Papers.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A soft-label method for noisetolerant distantly supervised relation extraction",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kexiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1790--1795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu Liu, Kexiang Wang, Baobao Chang, and Zhi- fang Sui. 2017. A soft-label method for noise- tolerant distantly supervised relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1790-1795.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A dependency-based neural network for relation classification",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "285--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing of the Asian Federation of Natural Lan- guage Processing, ACL 2015, July 26-31, 2015, Bei- jing, China, Volume 2: Short Papers, pages 285- 290.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix",
"authors": [
{
"first": "Bingfeng",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhanxing",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Songfang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "430--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, and Dongyan Zhao. 2017. Learning with noise: Enhance distantly su- pervised relation extraction with dynamic transition matrix. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguis- tics, ACL 2017, Vancouver, Canada, July 30 -August 4, Volume 1: Long Papers, pages 430-439.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation extrac- tion without labeled data. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, pages 1003-1011.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "End-to-end relation extraction using lstms on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguis- tics, ACL 2016, August 7-12, 2016, Berlin, Ger- many, Volume 1: Long Papers.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Weakly-supervised relation extraction by pattern-enhanced embedding learning",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 World Wide Web Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1257--1266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Qu, Xiang Ren, Yu Zhang, and Jiawei Han. 2018. Weakly-supervised relation extraction by pattern-enhanced embedding learning. In Proceed- ings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, pages 1257-1266.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD 2010",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCal- lum. 2010. Modeling relations and their men- tions without labeled text. In Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD 2010, Barcelona, Spain, September 20-24, 2010, Proceedings, Part III, pages 148-163.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Classifying relations by ranking with convolutional neural networks",
"authors": [
{
"first": "C\u00edcero",
"middle": [],
"last": "Nogueira Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "626--634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00edcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Process- ing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 626-634.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploring encoder-decoder model for distant supervised relation extraction",
"authors": [
{
"first": "Sen",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Ningning",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Shuguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruiping",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018",
"volume": "",
"issue": "",
"pages": "4389--4395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sen Su, Ningning Jia, Xiang Cheng, Shuguang Zhu, and Ruiping Li. 2018. Exploring encoder-decoder model for distant supervised relation extraction. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4389-4395.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multi-instance multi-label learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, EMNLP- CoNLL 2012, July 12-14, 2012, Jeju Island, Korea, pages 455-465.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "DSGAN: generative adversarial training for distant supervision relation extraction",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018",
"volume": "1",
"issue": "",
"pages": "496--505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Yang Wang, Weiran Xu, and Pengda Qin. 2018a. DSGAN: generative adversarial training for distant supervision relation extraction. In Proceed- ings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics, ACL 2018, Mel- bourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 496-505.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Robust distant supervision relation extraction via deep reinforcement learning",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018",
"volume": "1",
"issue": "",
"pages": "2137--2147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Yang Wang, Weiran Xu, and Pengda Qin. 2018b. Robust distant supervision relation extrac- tion via deep reinforcement learning. In Proceed- ings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics, ACL 2018, Mel- bourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2137-2147.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "On the convergence properties of the em algorithm. The Annals of statistics",
"authors": [
{
"first": "C",
"middle": [
"F",
"Jeff"
],
"last": "Wu",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "95--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CF Jeff Wu. 1983. On the convergence properties of the em algorithm. The Annals of statistics, pages 95-103.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Classifying relations via long short term memory networks along shortest dependency paths",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunchuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1785--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest depen- dency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1785-1794.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A position encoding convolutional neural network based on dependency tree for relation classification",
"authors": [
{
"first": "Yunlun",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yunhai",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Shulei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhi-Hong",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "65--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunlun Yang, Yunhai Tong, Shulei Ma, and Zhi-Hong Deng. 2016. A position encoding convolutional neural network based on dependency tree for re- lation classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 65-74.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Jointly extracting relations with class ties via effective deep ranking",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Wenhan",
"middle": [],
"last": "Chao",
"suffix": ""
},
{
"first": "Zhunchen",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1810--1820",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Ye, Wenhan Chao, Zhunchen Luo, and Zhoujun Li. 2017. Jointly extracting relations with class ties via effective deep ranking. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics, ACL 2017, Vancouver, Canada, July 30 -August 4, Volume 1: Long Papers, pages 1810-1820.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation ex- traction. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2002, Philadelphia, PA, USA, July 6-7, 2002.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distant supervision for relation extraction via piecewise convolutional neural networks",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1753--1762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1753-1762.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In COLING 2014, 25th International Conference on Computa- tional Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ire- land, pages 2335-2344.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Large scaled relation extraction with reinforcement learning",
"authors": [
{
"first": "Xiangrong",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "5658--5665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangrong Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Large scaled relation extraction with rein- forcement learning. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Arti- ficial Intelligence (IAAI-18), and the 8th AAAI Sym- posium on Educational Advances in Artificial Intel- ligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5658-5665.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Positionaware attention and supervised data improve slot filling",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "35--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor An- geli, and Christopher D. Manning. 2017. Position- aware attention and supervised data improve slot fill- ing. In Proceedings of the 2017 Conference on Em- pirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 35-45.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Exploring various knowledge in relation extraction",
"authors": [
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "427--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation ex- traction. In ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Pro- ceedings of the Conference, 25-30 June 2005, Uni- versity of Michigan, USA, pages 427-434.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Assumptions on the cleanness of DS dataset."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The probabilistic framework of nEM"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The structure of Bag Encoding component."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "through aligning entity pairs from Freebase with the New York Times (NYT) corpus. There are 53 relations in the Riedel dataset, including \"NA\". The bags collected from the 2005-2006 corpus are used as the training set, and the bags from the 2007 corpus are used as the test set. The training data contains 281, 270 entity pairs and 522, 611 sentences; the testing data contains 96, 678 entity pairs and 172, 448 sentences."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "PR curves on DS test set."
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Predictive probabilities on the test set. CON, Nat, Com, PoL, PoD and PoB represent 'contains', 'nationality', 'company', 'place lived', 'place of death' and 'place of birth' relations respectively."
},
"FIGREF8": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Average probabilities for noisy and original labels (left) and average Q b,r (y[r]) over EM iterations(right)"
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "x, implemented using any suitable neural network. Postponing explaining the details of bag encoding to a later section (namely Section 4.3), we here specify the form of p Y [r]|X (y[r]|x) for each r: p Y [r]|X (1|x) = \u03c3 r T x + b r (5) where \u03c3(\u2022) is the sigmoid function, r is a |R|dimensional vector and b r is bias. That is, p Y [r]|X"
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Test performance on the TACRED dataset."
}
}
}
}