{
"paper_id": "D19-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:00:45.009120Z"
},
"title": "Self-Attention Enhanced CNNs and Collaborative Curriculum Learning for Distantly Supervised Relation Extraction",
"authors": [
{
"first": "Yuyun",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University College Dublin",
"location": {
"country": "Ireland"
}
},
"email": "yuyun.huang@ucd.ie"
},
{
"first": "Jinhua",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {},
"email": "jinhua.du@aig.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Distance supervision is widely used in relation extraction tasks, particularly when large-scale manual annotations are virtually impossible to conduct. Although Distantly Supervised Relation Extraction (DSRE) benefits from automatic labelling, it suffers from serious mislabelling issues, i.e. some or all of the instances for an entity pair (head and tail entities) do not express the labelled relation. In this paper, we propose a novel model that employs a collaborative curriculum learning framework to reduce the effects of mislabelled data. Specifically, we firstly propose an internal self-attention mechanism between the convolution operations in convolutional neural networks (CNNs) to learn a better sentence representation from the noisy inputs. Then we define two sentence selection models as two relation extractors in order to collaboratively learn and regularise each other under a curriculum scheme to alleviate noisy effects, where the curriculum could be constructed by conflicts or small loss. Finally, experiments are conducted on a widely-used public dataset and the results indicate that the proposed model significantly outperforms baselines including the state-of-the-art in terms of P@N and PR curve metrics, thus evidencing its capability of reducing noisy effects for DSRE.",
"pdf_parse": {
"paper_id": "D19-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "Distance supervision is widely used in relation extraction tasks, particularly when large-scale manual annotations are virtually impossible to conduct. Although Distantly Supervised Relation Extraction (DSRE) benefits from automatic labelling, it suffers from serious mislabelling issues, i.e. some or all of the instances for an entity pair (head and tail entities) do not express the labelled relation. In this paper, we propose a novel model that employs a collaborative curriculum learning framework to reduce the effects of mislabelled data. Specifically, we firstly propose an internal self-attention mechanism between the convolution operations in convolutional neural networks (CNNs) to learn a better sentence representation from the noisy inputs. Then we define two sentence selection models as two relation extractors in order to collaboratively learn and regularise each other under a curriculum scheme to alleviate noisy effects, where the curriculum could be constructed by conflicts or small loss. Finally, experiments are conducted on a widely-used public dataset and the results indicate that the proposed model significantly outperforms baselines including the state-of-the-art in terms of P@N and PR curve metrics, thus evidencing its capability of reducing noisy effects for DSRE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation Extraction (RE) is vital for NLP tasks such as information extraction, question answering and knowledge base completion. RE aims to identify the relationship between an entity pair (e 1 , e 2 ) in a sentence. For example, in the sentence \"[Bill Gates e 1 ] is the principal founder of [M icrosof t e 2 ]\", the relation extractor decodes the relation of founder for the entity pair Bill Gates (the person) and Microsoft (the company).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent supervised relation extraction research can be roughly categorised into two areas: fully supervised and distantly supervised relation extraction. Fully supervised relation extraction mainly depends on manually annotated training dataset (Zeng et al., 2014; dos Santos et al., 2015) . Ordinarily, human annotation on largescale datasets is costly and often practicably impossible. Distance supervision addresses this problem by using an existing knowledge base (e.g. DBpedia) to automatically annotating large-scale datasets, thus reducing the burden of manual annotation (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) . However, distance supervision often suffers from mislabelling problems. Figure 1 illustrates this incorrect labelling issue, which shows that not all sentences in a bag with the same entity pair express the labelled relation of person/company/founder. The worst case is that all sentences in a bag are mislabelled. Thus, one primary challenge in DSRE is to minimize the noisy labelling effects, which in turn, would let model learn from incorrect labelled datasets. Figure 1 : Mislabeling issue example in DSRE: the entity pairs bag containing three sentences is labeled as person/company/founder. However, the last sentence marked in red has no extractable pre-defined relation.",
"cite_spans": [
{
"start": 244,
"end": 263,
"text": "(Zeng et al., 2014;",
"ref_id": "BIBREF25"
},
{
"start": 264,
"end": 288,
"text": "dos Santos et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 578,
"end": 598,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF14"
},
{
"start": 599,
"end": 621,
"text": "Hoffmann et al., 2011;",
"ref_id": "BIBREF7"
},
{
"start": 622,
"end": 644,
"text": "Surdeanu et al., 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 719,
"end": 727,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1113,
"end": 1121,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to design a solution to mitigate the effects of noisy data, we can treat our DSRE model learning procedure as analogous to two students training with a curriculum to answer a list of multiple-choice questions, where some difficult multiple-choice questions may have no correct answers (false positive). We base our proposed solution on the intuition that two students will compare and rethink their different answers during the learning process, thus regularising each other and improving their final grades. We call this process of two students learning together with a curriculum as Collaborative Curriculum Learning (CCL), where each student represents a network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "person / company / founder",
"sec_num": null
},
{
"text": "Inspired by the above intuition, we propose to use a bag-level selective sentence attention model (NetAtt) (Lin et al., 2016 ) and a maximum probability sentence model (NetMax) (Zeng et al., 2015) as two students learning collaboratively. Meanwhile, highly organised human education methods using a multilevel curriculum, from easy to hard, is an analogy for curriculum learning training, which advocates the adoption of similar multistage strategies (Bengio et al., 2009) . The curriculum learning training method for DSRE is used with the assumption that entity pair bags contain corrupted labelled sentences, which are difficult components to learn in the curriculum. By disregarding the effects of noisy samples (meaningless knowledge) during the training, the expectation is to boost the model's learning capability. Moreover, in order to accurately obtain the semantic representation of each sentence for our curriculum learning approach, we propose an internal CNNs self-attention mechanism to learn a better sentence representation in the DSRE setting.",
"cite_spans": [
{
"start": 107,
"end": 124,
"text": "(Lin et al., 2016",
"ref_id": "BIBREF12"
},
{
"start": 177,
"end": 196,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 451,
"end": 472,
"text": "(Bengio et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "person / company / founder",
"sec_num": null
},
{
"text": "The main contributions are summarised as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "person / company / founder",
"sec_num": null
},
{
"text": "(1) We make the first attempt to use the concept of curriculum learning for denoising DSRE and present a novel collaborative curriculum learning model to alleviate the effects of noisy sentences in an entity pair bag. In this model, we define two collaborative relation extractors to regularize each other and boost the model's learning capability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "person / company / founder",
"sec_num": null
},
{
"text": "(2) We propose conflicts and small loss tricks for our collaborative curriculum learning. Instead of using a separated complex noisy sentence filter and two-step training in baseline models, our model can alleviate noise effects during a single training and is easy to implement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "person / company / founder",
"sec_num": null
},
{
"text": "(3) We are the first to apply an internal CNNs self-attention mechanism to enhance a multilayer CNNs model for DSRE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "person / company / founder",
"sec_num": null
},
{
"text": "(4) We conduct thorough experiments on the widely-used NYT dataset, and achieve significant improvements over state-of-the-art models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "person / company / founder",
"sec_num": null
},
{
"text": "Most DSRE approaches fall under the framework of Multi-Instance Learning (MIL) (Riedel et al., 2010; Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016; Ji et al., 2017; Qin et al., 2018; Feng et al., 2018) . At the encoding step, a sentence representation is learned using handcrafted features or neural network models. Afterwards, in the sentence selection step, one or several sentences from an entity pair bag are chosen for further bag representation learning. Previously, statistical models (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) have used designed features, such as syntactic and lexical features, and have then been trained by logistic regression or expectation maximization.",
"cite_spans": [
{
"start": 79,
"end": 100,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 101,
"end": 123,
"text": "Surdeanu et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 124,
"end": 142,
"text": "Zeng et al., 2015;",
"ref_id": "BIBREF24"
},
{
"start": 143,
"end": 160,
"text": "Lin et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 161,
"end": 177,
"text": "Ji et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 178,
"end": 195,
"text": "Qin et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 196,
"end": 214,
"text": "Feng et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 505,
"end": 525,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF14"
},
{
"start": 526,
"end": 548,
"text": "Hoffmann et al., 2011;",
"ref_id": "BIBREF7"
},
{
"start": 549,
"end": 571,
"text": "Surdeanu et al., 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "When adopting deep learning approaches, a single layer CNNs based model (Zeng et al., 2014) was exploited to extract sentence level features to attain fully supervised relation classification. For DSRE, Zeng et al. (2015) proposed an extended Piece-wise CNN (PCNN) approach and selected the most probable valid sentence to represent an entity pair bag, while the remaining sentences in the bag were ignored. Lin et al. (2016) and Ji et al. (2017) used all the sentences in a bag by assigning higher weights to valid labeled sentences and lower weights to noisy sentences. The selective sentence attention mechanism combined all weighted sentences as a bag representation. In addition, Ji et al. (2017) made use of entity description background knowledge and fused the external information into their PCNN-based model. The self-attention mechanism (Cheng et al., 2016; Parikh et al., 2016; Vaswani et al., 2017) , also called intra-attention, relates to different positions of a single sequence to learn the sequence representation. An internal CNNs states selfattention approach was proposed by Zhang et al. (2018) to improve Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) performance in generating high-quality images. Alt et al. (2019) extended Generative Pre-trained Transformer (GPT) to learn semantic and syntactic features for DSRE. Wang et al. (2019) used pretrained Transformer for multiple entity-relations extraction task. Du et al. (2018) utilized selfattention mechanisms for better MIL sentencelevel and bag-level representations. However, previous work has not considered to use self-attention over the internal CNNs model states for DSRE.",
"cite_spans": [
{
"start": 72,
"end": 91,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 203,
"end": 221,
"text": "Zeng et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 408,
"end": 425,
"text": "Lin et al. (2016)",
"ref_id": "BIBREF12"
},
{
"start": 430,
"end": 446,
"text": "Ji et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 847,
"end": 867,
"text": "(Cheng et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 868,
"end": 888,
"text": "Parikh et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 889,
"end": 910,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 1095,
"end": 1114,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF26"
},
{
"start": 1165,
"end": 1190,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 1238,
"end": 1255,
"text": "Alt et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 1357,
"end": 1375,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF23"
},
{
"start": 1451,
"end": 1467,
"text": "Du et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Deeper CNNs have positive effects on noisy NLP tasks (Conneau et al., 2017) . Huang and Wang (2017) used residual learning for multilayer deep CNNs to improve DSRE performance.",
"cite_spans": [
{
"start": 53,
"end": 75,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 78,
"end": 99,
"text": "Huang and Wang (2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To better address issues relating to mislabelling, e.g., when some or all sentence labels in a bag are falsely positive, schemes to filter out noisy instances have been developed. Takamatsu et al. (2012) proposed a wrong label sentence filter method using linguistic features. Feng et al. (2018) proposed a model comprising of a relation classifier and an instance selector based on reinforcement learning (RL). The instance selector was designed to select possible correctly labelled sentences and was regularised by the rewards from the relation classifier. Qin et al. (2018) used the idea of GANs to develop a separated correct label indicator, which filters high confidence scoring instances for training on existing PCNN/CNNbased relation extractors (DSGAN). Unlike previous work, which filters out incorrectly labelled sentences to generate an approximate clean training dataset and then retrains on the filtered data to improve models, we instead train our model to actively and purposefully forget noisy entity pair bags, based on a collaborative curriculum learning strategy in a single training process. In doing so, we develop an approach to building the curriculum -the identified disagreements and losses of two collaborative student networks in our model.",
"cite_spans": [
{
"start": 180,
"end": 203,
"text": "Takamatsu et al. (2012)",
"ref_id": "BIBREF21"
},
{
"start": 277,
"end": 295,
"text": "Feng et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 560,
"end": 577,
"text": "Qin et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The mislabelled sentences from the distance supervision method are normally regarded as unwanted noise that can reduce the performance of relation extraction. To alleviate noisy effects, we propose a collaborative curriculum learning framework for DSRE. The architecture is shown in Figure 3 , consisting of three main components:",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 291,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "(1) input representation; (2) CNNs that are composed of convolution, self-attention and pooling; and (3) collaborative curriculum learning module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "To represent each input token, we use word2vec 1 (Mikolov et al., 2013) to obtain its embedding.",
"cite_spans": [
{
"start": 49,
"end": 71,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs: Word and Position Embeddings",
"sec_num": "3.1"
},
{
"text": "Each word embedding (w t ) contains syntactic and semantic information. Similar to (Zeng et al., 2015) , we use position 1 https://code.google.com/p/word2vec/ embedding to assist the CNNs in measuring distances from the current word to the head and tail entities. As illustrated in Figure 2 , in the sentence, the distance of the word founder to the head entity is 4 and \u22122 to the tail entity. Figure 3 (a) further illustrates the use of these embeddings as input representation in CNNs, i.e. the input representation is a concatenation of word embedding and position embedding. In Figure 3 (a), it is assumed that the dimensions of word embedding d w and position embedding d p are 3 and 1 (as a simplified example for the sake of the figure), respectively. The total vector representation",
"cite_spans": [
{
"start": 83,
"end": 102,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 394,
"end": 402,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 582,
"end": 591,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Inputs: Word and Position Embeddings",
"sec_num": "3.1"
},
{
"text": "dimension d is d w + 2 \u00d7 d p .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs: Word and Position Embeddings",
"sec_num": "3.1"
},
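{
"text": "As an illustrative aside, the following minimal Python sketch (our own code, not the released implementation; the vocabulary size, random embedding matrices and distance range are assumptions) assembles a per-token input vector of dimension d = d_w + 2*d_p:\n\nimport numpy as np\n\nd_w, d_p, max_dist = 50, 5, 70                    # sizes follow the settings in Section 4\nword_emb = np.random.randn(20000, d_w)            # hypothetical vocabulary of 20k words\npos_emb = np.random.randn(2 * max_dist + 1, d_p)  # relative distances in [-70, 70]\n\ndef token_vector(word_id, dist_head, dist_tail):\n    # concatenate the word embedding with the two position embeddings\n    ph = pos_emb[dist_head + max_dist]\n    pt = pos_emb[dist_tail + max_dist]\n    return np.concatenate([word_emb[word_id], ph, pt])\n\nvec = token_vector(word_id=42, dist_head=4, dist_tail=-2)\nassert vec.shape == (d_w + 2 * d_p,)              # 50 + 2 * 5 = 60",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs: Word and Position Embeddings",
"sec_num": "3.1"
},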
{
"text": "Self-attention for CNNs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
{
"text": "The sentence embedding matrix is formed by concatenating every vector representation horizontally (left panel Figure 3 ) and is represented as",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
{
"text": "s_n = [w_1 : ... : w_t : ... : w_T], where s_n is the input that is fed to the CNNs to learn a sentence representation. For an input sentence, we use a convolution filter W_p to slide along s_n as [w_t : ... : w_{t+u-1}] * W_p + b,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
{
"text": "where * is the convolution symbol, b \u2208 R is a bias and p represents the p th filter in a filter set. W p \u2282 R u\u00d7d , where u is the length of the filter and d is the dimension of a word vector consisting of word and position embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
{
"text": "In order to learn a better sentence representation, we propose a self-attention mechanism which is performed directly over internal CNNs states as shown in Figure 3 (a). The self-attention function maps queries (Q) and corresponding keys (K) to compute a weight map. The output is a multiplication of the values (V) and the weight vector. We place this internal CNNs self-attention module after the first convolution state (C). Values, queries, and keys are computed by applying convolution again on C,",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
{
"text": "where V = C * W V , Q = C * W Q and K = C * W K .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
{
"text": "The attention map, is calculated using the softmax function, as shown in Equation (1), where Q T is the transpose of Q and \u2297 is the matrix multiplication operator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "map = exp(Q T \u2297 K) exp(Q T \u2297 K)",
"eq_num": "(1)"
}
],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
{
"text": "Subsequently, the weighted value is computed as \u03c4 V \u2297 map.C = cov(C + \u03c4 V \u2297 map) is then fed into a piece-wise max pooling layer. Where, \u03c4 learns gradually from 0 to 1 to assign more weight to the model. For an entity pair bag B containing N sentence representations, the j th sentence with maximum weight score \u03b1 j is selected by NetAtt, while x k is the k th sentence selected by NetMax. Conflicts between the two subnets are used to form a conflict loss L jk . Each network uses the same sentence representation and generates different bag representations to feed a softmax layer separately. The conflicts and loss from collaborative training are used as cues for curriculum building in section 3.5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},
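{
"text": "To make the mechanism concrete, the following minimal PyTorch sketch (our own reading of Equation (1) and the surrounding text; the 1x1 convolutions for V, Q, K, the residual form and all layer names are assumptions, not the released code) applies self-attention over an internal convolution state C:\n\nimport torch\nimport torch.nn.functional as F\n\nclass InternalSelfAttention(torch.nn.Module):\n    def __init__(self, channels):\n        super().__init__()\n        # V, Q and K are obtained by applying convolution again on the state C\n        self.conv_v = torch.nn.Conv1d(channels, channels, kernel_size=1)\n        self.conv_q = torch.nn.Conv1d(channels, channels, kernel_size=1)\n        self.conv_k = torch.nn.Conv1d(channels, channels, kernel_size=1)\n        self.tau = torch.nn.Parameter(torch.zeros(1))  # learned gradually from 0 towards 1\n\n    def forward(self, c):  # c: (batch, channels, length)\n        v, q, k = self.conv_v(c), self.conv_q(c), self.conv_k(c)\n        attn_map = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # Equation (1)\n        weighted = torch.bmm(v, attn_map)  # V combined with the attention map\n        return c + self.tau * weighted     # state fed into the next convolution and pooling\n\nc = torch.randn(2, 128, 70)  # e.g. 128 filters over a sentence of length 70\nout = InternalSelfAttention(128)(c)\nassert out.shape == c.shape",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualised Representation:",
"sec_num": "3.2"
},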
{
"text": "Following the self-attention operations, we use a piece-wise max pooling operation to form the final sentence representation. The max pooling is a variant of pooling in standard CNNs, which applies the pooling to three convolution segments separated by head and tail entities (Zeng et al., 2015) . As shown in the column of \"piecewise pooling\" in Figure 3 (a), the grey dots i\u00f1 c, representing head and tail entities, split each vector into three pieces, which are denoted as C 1 ,C 2 ,C 3 , respectively. Thus, the piece-wise max pooling is expressed as",
"cite_spans": [
{
"start": 276,
"end": 295,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 347,
"end": 355,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Entity Position-aware Sentence Representation: Piece-wise Max Pooling",
"sec_num": "3.3"
},
{
"text": "{x 1 i , x 2 i , x 3 i } = max-pooling(C 1 ,C 2 ,C 3 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Position-aware Sentence Representation: Piece-wise Max Pooling",
"sec_num": "3.3"
},
{
"text": ". These three pooled vectors are then concatenated together to form a vector x i and a nonlinear function is applied to the output vector, such that x i = tanh(x i ) is the final representation of a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Position-aware Sentence Representation: Piece-wise Max Pooling",
"sec_num": "3.3"
},
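{
"text": "A minimal sketch of the piece-wise max pooling described above (our own code; the exact segment boundaries around the entity positions are an assumption):\n\nimport torch\n\ndef piecewise_max_pool(c, head_pos, tail_pos):\n    # c: (filters, length) feature map; head_pos < tail_pos mark the two entities\n    # (all three segments are assumed to be non-empty)\n    seg1 = c[:, :head_pos + 1]\n    seg2 = c[:, head_pos + 1:tail_pos + 1]\n    seg3 = c[:, tail_pos + 1:]\n    pooled = [seg.max(dim=1).values for seg in (seg1, seg2, seg3)]\n    return torch.tanh(torch.cat(pooled))  # sentence vector of size 3 * filters\n\nx_i = piecewise_max_pool(torch.randn(128, 70), head_pos=10, tail_pos=25)\nassert x_i.shape == (3 * 128,)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Position-aware Sentence Representation: Piece-wise Max Pooling",
"sec_num": "3.3"
},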
{
"text": "After learning representations of all sentences in an entity pair bag, we then use two approaches to form the entity pair bag representation, as illustrated in Figure 3 (b), i.e., we employ a mechanism of averaging all sentences using attention scores (NetAtt) (Lin et al., 2016) , and a maximum probability sentence selection method (NetMax) (Zeng et al., 2015) , respectively, to learn the bag representation. Specifically, NetAtt assigns weight scores {\u03b1 1 , . . . , \u03b1 N } to all sentences in an entity pair bag, while NetMax will select the most reasonable sentence that has the highest probability.",
"cite_spans": [
{
"start": 261,
"end": 279,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 343,
"end": 362,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Bag Representation for Entity Pairs",
"sec_num": "3.4"
},
{
"text": "Ideal student networks are equipped with SOTA MIL selection mechanisms and could empirically generate conflicts during the selection, based on this criterion we use NetAtt and NetMax. NetAtt works by considering all sentences in a bag, but it also introduces noisy sentences while learning a bag representation. NetMax works by selecting the sentence with the highest probability in a bag as the bag representation, but it overlooks other valid sentences. Moreover, the two networks use different bag representations to feed classifiers and eventually generate disagreements. Therefore, we are motivated by the idea of combining their advantages and disagreements over sentence selection in a single framework to learn better bag representations and to reduce the noise effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag Representation for Entity Pairs",
"sec_num": "3.4"
},
{
"text": "Before introducing the proposed single framework, we first describe below how NetAtt and Net-Max work in bag representation learning. The sentence bag of an entity pair is denoted as B, which consists of representations of N sentences {x 1 , x 2 , . . . , x N }, where each sentence representation is learned from our self-attention based PCNN. The entity pair is expressed as (e 1 , e 2 ) and the bag's relation is r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag Representation for Entity Pairs",
"sec_num": "3.4"
},
{
"text": "To extract information from all sentences in a bag, a sentence-level attention mechanism is used to learn a weight score \u03b1 i for each sentence. Subse-quently, the bag representation S att bag for the entity pair's bag B is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NetAtt: Sentence-Level Attention",
"sec_num": "3.4.1"
},
{
"text": "S att bag = N i=1 \u03b1 i x i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NetAtt: Sentence-Level Attention",
"sec_num": "3.4.1"
},
{
"text": "We can see that the purpose of the weighted factor \u03b1 i is to give higher weights to correctly labeled instances and lower weights to wrongly labeled ones. Given the score \u03b2 i of a given sentence representation x i with a relation r, which is measured as: \u03b2 i (x i |r) = x i Ar (where A is a weighted diagonal matrix), the attention scores in a given bag B can be calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NetAtt: Sentence-Level Attention",
"sec_num": "3.4.1"
},
{
"text": "\u03b1 i (x i |r, B) = softmax(\u03b3\u03b2 i (x i |r)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NetAtt: Sentence-Level Attention",
"sec_num": "3.4.1"
},
{
"text": "where \u03b3 is set empirically and borrowed from the work by (Sutton and Barto, 1998) . Smaller \u03b3 will lead to equalisation in all sentences and larger \u03b3 will increase bias of the high scored sentence.",
"cite_spans": [
{
"start": 69,
"end": 81,
"text": "Barto, 1998)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NetAtt: Sentence-Level Attention",
"sec_num": "3.4.1"
},
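{
"text": "A minimal sketch of NetAtt's bag representation (our own code with hypothetical shapes; A is treated as a diagonal matrix stored as a vector):\n\nimport torch\nimport torch.nn.functional as F\n\ndef netatt_bag_representation(X, A_diag, r, gamma=1.0):\n    # X: (N, d) sentence representations; A_diag: (d,) diagonal of A; r: (d,) relation query\n    beta = (X * A_diag) @ r                 # beta_i = x_i A r\n    alpha = F.softmax(gamma * beta, dim=0)  # attention weights over the bag\n    return alpha @ X                        # S_bag^att = sum_i alpha_i x_i\n\nS_att = netatt_bag_representation(torch.randn(5, 384), torch.ones(384), torch.randn(384))\nassert S_att.shape == (384,)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NetAtt: Sentence-Level Attention",
"sec_num": "3.4.1"
},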
{
"text": "NetMax assumes that at least one sentence in an entity pair bag reflects the bag's relation, and only one sentence with the maximum probability is selected to represent the bag, which is denoted as S max bag = x k , where x k is the k th sentence representation with maximum probability. As only one sentence is selected, the input o for the softmax classifier can be expressed as o = Kx k + b, where K is the transformation matrix and b is the bias. For a bag with relation r, the conditional probability is p(r|x k ; \u03b8) = sof tmax(o r ). For all sentences in a bag, the index k is computed by k = arg max k p(r|x k ; \u03b8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NetMax: Maximum Probability",
"sec_num": "3.4.2"
},
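{
"text": "A minimal sketch of NetMax's selection (our own code with hypothetical shapes):\n\nimport torch\nimport torch.nn.functional as F\n\ndef netmax_bag_representation(X, K, b, r_idx):\n    # X: (N, d) sentences; K: (n_rel, d) transformation matrix; b: (n_rel,) bias; r_idx: bag relation\n    probs = F.softmax(X @ K.t() + b, dim=1)  # p(r | x_k; theta) for every sentence\n    k = torch.argmax(probs[:, r_idx])        # k = argmax_k p(r | x_k; theta)\n    return X[k]                              # S_bag^max = x_k\n\nS_max = netmax_bag_representation(torch.randn(5, 384), torch.randn(53, 384), torch.zeros(53), r_idx=7)\nassert S_max.shape == (384,)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NetMax: Maximum Probability",
"sec_num": "3.4.2"
},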
{
"text": "As mentioned above, we consider the advantages and disagreements of sentence selection of Ne-tAtt and NetMax in a single framework so that they can learn to regularise each other so as to reduce the effects of noisy sentences. We propose a collaborative curriculum learning framework where NetAtt and NetMax are defined as two student networks and they learn together under a curriculum scheme. For DSRE, we assume that entity pair bags with wrongly labeled sentences are hard samples to be learned, while bags with correctly labeled sentences are easy samples. Figure 3(b) shows the architecture of our collaborative curriculum learning framework, where NetAtt and NetMax are trained collaboratively and regularised by each other. The curriculum vector v i for collaborative learning could be built by various schemes, for example, the conflicts (v c i ) of selecting the valid sentence in a bag between the two student networks, which will be detailed later.",
"cite_spans": [],
"ref_spans": [
{
"start": 562,
"end": 573,
"text": "Figure 3(b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Collaborative Curriculum Learning",
"sec_num": "3.5"
},
{
"text": "The objective function is defined as J(S_i; \u03b8) = (1/m) \u2211_{i=1}^{m} j(S_i; \u03b8), where m is the total number of entity pair bags in a mini-batch. S_i is the set {S_bag_i^att, S_bag_i^max}. j(S_i; \u03b8) is the objective function of one entity pair's bag, defined in Equation 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "j(S i ; \u03b8) = \u03b7 log p(r i |S att bag i ; \u03b8)+ (1 \u2212 \u03b7) log p(r i |S max bag i ; \u03b8)",
"eq_num": "(2)"
}
],
"section": "Objective Function",
"sec_num": "3.5.1"
},
{
"text": "where, 0 < \u03b7 < 1 is an empirical value to assign weights to NetAtt and NetMax. With a curriculum, the model's minimisation problem can be formulated as in Equation 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.5.1"
},
{
"text": "min \u03b8,v E(\u03b8, v, \u03bb) = 1 m m i=1 v i l i + L jk (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.5.1"
},
{
"text": "where, l i = l(r i , j(S i ; \u03b8)) is the loss of each bag; r i is the relation of a bag; v i is the curriculum weight variable; L jk is the cross entropy loss of conflicts between the highest probability sentence indexes from NetAtt and NetMax, which aims to let them regularise each other during sentence selection; \u03b8 and \u03bb are the optimisation parameters of relation extractor and curriculum. The weighted loss is minimised by stochastic gradient descent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.5.1"
},
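{
"text": "A minimal sketch of Equations (2) and (3) (our own code; how the conflict loss L_jk is obtained is abstracted away as an input):\n\nimport torch\n\ndef bag_objective(logp_att, logp_max, eta=0.5):\n    # logp_att / logp_max: log p(r_i | S_bag_i; theta) from NetAtt and NetMax (Equation 2)\n    return eta * logp_att + (1.0 - eta) * logp_max\n\ndef curriculum_loss(bag_losses, v, conflict_loss):\n    # bag_losses: (m,) per-bag losses l_i; v: (m,) curriculum weights v_i; Equation (3)\n    return (v * bag_losses).mean() + conflict_loss\n\nl = torch.rand(120)                  # e.g. a mini-batch of 120 bags\nv = (torch.rand(120) > 0.1).float()  # hard (conflicting) bags receive weight 0\nloss = curriculum_loss(l, v, conflict_loss=torch.tensor(0.2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.5.1"
},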
{
"text": "Algorithm 1 Update using conflicts. Inputs: mini-batch size m; selected sentence indexes j_i^att and k_i^max; two students: p_att (NetAtt) and p_max (NetMax); bag representations: S_i^att, S_i^max.\nfor i in {1, 2, 3, ..., m} do\n    if j_i^att = k_i^max then  # no conflicts\n        v_i^c = 1\n        add S_i^att to set {S^att}\n        add S_i^max to set {S^max}\n    end\nend\nupdate p_att with {S^att} and p_max with {S^max}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.5.1"
},
{
"text": "Collaborative Learning Conflicts trick of two students are utlised to build a curriculum (v c i ): In a mini-batch of size m, where the i th entity pair's bag contains N sentences, j i att , k i max \u2208 [0 to N ) are the indexes of highest probably sentence selected by NetAtt and NetMax, respectively. For each entity pair bag in the batch, if j i att is not equal to k i max , then v c i = 0, representing a 'hard' sample (with conflicts). Otherwise v c i = 1, representing an 'easy' sample. The conflicts that occur during the collaborative training between NetAtt and NetMax are shown in Figure 3(b) . There is no extra curriculum network (f (v c ; \u03bb)) required to learn a v c i . When v c i is assigned to 0, the training procedure will forget the effects of 'hard' sample (i th bag) by multiplying 0 and its loss l i as shown in Eq.",
"cite_spans": [],
"ref_spans": [
{
"start": 590,
"end": 601,
"text": "Figure 3(b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Curriculum Construction for",
"sec_num": "3.5.2"
},
{
"text": "(3). Algorithm 1 illustrates the logic to update the training using the conflicts curriculum. In a minibatch, entity pair bags with conflicts in sentence selection are dropped, the remaining bags are used to update the network parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Construction for",
"sec_num": "3.5.2"
},
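{
"text": "A minimal Python sketch of Algorithm 1 (our own code; bag objects and index lists are placeholders):\n\ndef conflicts_curriculum(batch, j_att, k_max):\n    # batch: list of m entity-pair bags; j_att / k_max: sentence indexes picked by NetAtt / NetMax\n    kept_att, kept_max = [], []\n    for i, bag in enumerate(batch):\n        if j_att[i] == k_max[i]:      # no conflict: v_i^c = 1\n            kept_att.append(bag)      # bags used to update p_att (NetAtt)\n            kept_max.append(bag)      # bags used to update p_max (NetMax)\n        # conflicting (hard) bags are dropped for this update: v_i^c = 0\n    return kept_att, kept_max\n\nkept_att, kept_max = conflicts_curriculum(list(range(6)), [0, 2, 1, 3, 0, 2], [0, 1, 1, 3, 2, 2])\nassert len(kept_att) == 4  # bags 0, 2, 3 and 5 agree and are kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Construction for",
"sec_num": "3.5.2"
},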
{
"text": "Furthermore, various curriculum types could be used in CCL to alleviate the noise in DSRE. We utilize the small loss trick to build a curriculum which inspired by Jiang et al. (2018) . Specifically, to build a curriculum (v l i ), the loss (l i ) is used as an input constant feature to learn a curriculum vector. We use the MentorNet framework 2 (Jiang et al., 2018) as the curriculum network to learn an approximate predefined curriculum f (v l ; \u03bb). The approximation process is to train a curriculum model using synthetic data generated according to the predefined curriculum. The trained model is then used as the curriculum to guide the further model training. We use the predefined curriculum (Jiang et al., 2015) to guide our training.",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "Jiang et al. (2018)",
"ref_id": "BIBREF11"
},
{
"start": 347,
"end": 367,
"text": "(Jiang et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 700,
"end": 720,
"text": "(Jiang et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Construction for",
"sec_num": "3.5.2"
},
{
"text": "f (v l ; \u03bb) = v l i l i + 1 2 \u03bb 2 (v l i ) 2 \u2212 (\u03bb 1 + \u03bb 2 )v l i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Construction for",
"sec_num": "3.5.2"
},
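{
"text": "For intuition only, minimising the per-bag term of this predefined curriculum over v_i^l in [0, 1] has a simple closed form (our own derivation for illustration; the paper approximates the curriculum with MentorNet rather than using this formula directly):\n\ndef small_loss_weight(loss, lambda1, lambda2):\n    # stationary point of f(v) = v*l + 0.5*lambda2*v**2 - (lambda1 + lambda2)*v, clipped to [0, 1]\n    v = (lambda1 + lambda2 - loss) / lambda2\n    return min(1.0, max(0.0, v))\n\nassert small_loss_weight(0.1, lambda1=0.5, lambda2=1.0) == 1.0  # small-loss (easy) bag kept\nassert small_loss_weight(2.0, lambda1=0.5, lambda2=1.0) == 0.0  # large-loss (hard) bag dropped",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Construction for",
"sec_num": "3.5.2"
},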
{
"text": "We evaluate our model on the widely used New York Times (NYT) DSRE dataset 3 , which aligns Freebase entity relation with NYT corpus (Riedel et al., 2010) . The dataset uses the data from 2005 to 2006 as the training set and the remaining data, taken from 2007, as the test set. The processed dataset 4 was released by Lin et al. (2016) . We use the cleaned version of the processed dataset, which has removed duplicated sentences in training and test sets. In total, the training set consists of 522,611 sentences, 281,270 entity pairs and 18,252 relation facts. The testing set contains 172,448 sentences, 96,678 entity pairs, and 1,950 relation facts. The dataset contains 39,528 unique entities and 53 relations in total including an NA relation which represents no existing relation for given entity pairs in sentences.",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 319,
"end": 336,
"text": "Lin et al. (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "Instead of obtaining a costly human annotated test data, we conduct a held-out evaluation (Riedel et al., 2010) in the experiments, as in previous work (Riedel et al., 2010; Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016) . Held-out evaluation compares relation facts predicted in test data with those relations identified in Freebase. It gives an approximate evaluation of the proposed model. As with the evaluation metrics used in the literature, we report our results using Precision-Recall curve (PR-curve) and Precision at N (P@N) metrics. PR-curve is used to understand the trade-off between precision and recall. Using all the test data, the plotted curve expresses the precision as a function of recall. P@N considers the cutoff topmost N precision values as a set. Each entity pair bag contains one or more instances and P@N considers all the multiple instances.",
"cite_spans": [
{
"start": 90,
"end": 111,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 152,
"end": 173,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 174,
"end": 196,
"text": "Surdeanu et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 197,
"end": 215,
"text": "Zeng et al., 2015;",
"ref_id": "BIBREF24"
},
{
"start": 216,
"end": 233,
"text": "Lin et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
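{
"text": "A minimal sketch of the P@N computation used in the held-out evaluation (our own code; the data layout is hypothetical):\n\ndef precision_at_n(predictions, n):\n    # predictions: list of (confidence, is_correct) pairs for non-NA predictions\n    top = sorted(predictions, key=lambda p: p[0], reverse=True)[:n]\n    return sum(1 for _, correct in top if correct) / float(len(top))\n\npreds = [(0.9, True), (0.8, False), (0.7, True), (0.6, True), (0.2, False)]\nprint(precision_at_n(preds, 3))  # 2/3 of the top-3 predictions match a Freebase fact",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},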
{
"text": "We compare our model with both statistical and deep learning baseline models. These baselines were evaluated on the same cleaned dataset. We exclude some recent models due to the dataset and reproducibility issues. Namely, recent models that were trained on a dataset, which was released by mistake, obtained higher results. These results in fact may be inaccurate. From the Github repository commit history and comments 5 : \"It has not deleted the mix part of testing data. The training sentence is 570000+, but in the paper is 520000+.\". The incorrect training dataset was replaced with the corrected one in March 2018, which might be the main reason as to why some MIL DSRE papers in 2018 reported nonreproducible results on the corrected training data. In all probability these works had commenced prior to March 2018 and were likely relying on the erroneous dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.3"
},
{
"text": "(1) Statistical Models: Mintz (Mintz et al., 2009) extracted sentence syntactic and lexical features and trained a multiclass logistic regression classifier. Hoffmann (Hoffmann et al., 2011 ) is a probabilistic, graphic model with MIL. MIMLRE (Surdeanu et al., 2012 ) is a MIL model that uses expectation maximization for classification.",
"cite_spans": [
{
"start": 30,
"end": 50,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF14"
},
{
"start": 167,
"end": 189,
"text": "(Hoffmann et al., 2011",
"ref_id": "BIBREF7"
},
{
"start": 243,
"end": 265,
"text": "(Surdeanu et al., 2012",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.3"
},
{
"text": "(2) Deep Learning Models: both CNN (Zeng et al., 2014) and PCNN (Zeng et al., 2015) Lin et al., 2016) and multi-instance learning (PCNN+ONE, CNN+ONE) (Zeng et al., 2015) .",
"cite_spans": [
{
"start": 35,
"end": 54,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 64,
"end": 83,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 84,
"end": 101,
"text": "Lin et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 150,
"end": 169,
"text": "(Zeng et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.3"
},
{
"text": "(3) ResCNN-9 (Huang and Wang, 2017): 9layers CNN model with residual identity shortcuts and three fully connected layers. The model outperforms CNN+ATT/ONE models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.3"
},
{
"text": "(4) State-of-the-art DSRE Noise Filter Systems: DSGAN (Qin et al., 2018) is a model to filter out noise instances. It unitizes GANs to remove potentially inaccurate sentences from original training data and further trains PCNN/CNN models on the filtered data to improve performance.",
"cite_spans": [
{
"start": 54,
"end": 72,
"text": "(Qin et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.3"
},
{
"text": "Batch size B 120 Learning rate 0.1, 0.4 Weight decay 0.00001 Burn-in epoch 5 Dropout probability 0.3 We fine-tune our models by validating and selecting the best model parameters. To accomplish this we set the gradient descent learning rate among {0.4, 0.2, 0.1, 0.01, 0.001}. Batch size B is set to {60, 120, 160}. The amount of CNN filter is set among {64, 128, 230, 256}. A dropout rate is in the set {0.1, 0.3}. All the parameters fine-tuned in our experiments are shown in Table 1 . Specifically, the best performance of [CNN+ATT/ONE+SelfAtt] with training parameters of learning rate:0.4 and CNN filters:230; the best performance of [PCNN+ATT/ONE+SelfAtt] with training parameters of learning rate:0.4 and CNN filters:128; and the best performance of [PCNN+ATT/ONE+SelfAtt+CCL] with training parameters of learning rate:0.1 and CNN filters:256. For other parameters, we follow the settings in the work of (Lin et al., 2016) , e.g. the maximum sentence length is limited to 70, the word and position embedding size is fixed to 50 and 5 respectively, and the CNNs filter window size is 3.",
"cite_spans": [
{
"start": 911,
"end": 929,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 478,
"end": 485,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Number of CNN filters 128, 230, 256",
"sec_num": null
},
{
"text": "Armed by the internal multilayer CNNs selfattention mechanism (named as SelfAtt), our model can learn a better representation from noisy data compared with the conventional CNN encoder used in PCNN/CNN. Each model has more than one convolution operation, as illustrated in Figure 3 (a). Prior to feeding these outputs from the first convolution operation into the selfattention module, we add a batch normalisation followed by an ReLU activation function.",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 281,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Effects of Internal CNNs Self-Attention",
"sec_num": "4.4"
},
{
"text": "Figures 4 & 5 show the PR-curve results attained by applying SelfAtt to CNN/PCNN based models. We also report the results of P@100, P@200, P@300 and the Mean for CNN/PCNN+ONE and CNN/PCNN+ATT with SelfAtt in the held-out evaluation. Table 3 shows the P@N values with test settings where all sentences in an entity pair's bag are taken into account. PR-curve and P@N results demonstrate that CNN/PCNN based approaches achieve improved results with the SelfAtt. PCNN/CNN models with SelfAtt also outperform ResCNN-9 in terms of P@N and PR-curve, which indicates that the proposed SelfAtt is beneficial for boosting the performance of the models learning from noisy inputs. The state-of-the-art DSGAN system demonstrates its ability to improve PCNN/CNN + ATT/ONE by filtering out noisy data. By comparing our SelfAtt with DSGAN, Figure 4 shows that SelfAtt significantly outperforms the DSGAN system in terms of CNN-based models.",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 826,
"end": 834,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effects of Internal CNNs Self-Attention",
"sec_num": "4.4"
},
{
"text": "The intuition of adding an internal CNN states self-attention module to help DSRE task is that, 1/ a deeper CNN has positive effects on noisy NLP tasks (Conneau et al., 2017) , 2/ attention enhanced PCNN/CNN is expected to assign various weight scores to different sentence portions and will form a better representation in the DSRE setting.",
"cite_spans": [
{
"start": 152,
"end": 174,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Internal CNNs Self-Attention",
"sec_num": "4.4"
},
{
"text": "By applying the SelfAtt it could add more convolution layers into a model, as the internal self-attention used is placed between two convolution operations. Huang and Wang (2017) demonstrated that multi-layer ResCNNs network does achieve performance improvement by adding residual identity shortcuts, which aligns with the study that deeper CNN has positive effects on noisy NLP tasks (Conneau et al., 2017) . However, previous DSRE researches overlooked multilayer CNN/PCNN with ONE/ATT. To investigate multilayer effects of DSRE, we present multilayer CNN with ATT results in Table 2 , we observe that with ATT several multilayer models (e.g., 2 layers CNN+ATT) have improvements compared with single layer models, and overfitting occurs when larger convolution layers are employed. Thus, our SelfAtt design is expected to boost a sentence encoder with attention scores and alleviate noise effects by benefiting from a multilayer CNN network. From empirical testing, to apply attention more subtly, placing self-attention after the second convolution operation and followed by one convolution operation works well for all PCNN/CNN + ONE/ATT based models generally. We report results with this setting.",
"cite_spans": [
{
"start": 157,
"end": 178,
"text": "Huang and Wang (2017)",
"ref_id": "BIBREF8"
},
{
"start": 385,
"end": 407,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 578,
"end": 585,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Effects of Internal CNNs Self-Attention",
"sec_num": "4.4"
},
{
"text": "avaya , which was once a division of lucent technologies and att before that , is one of the nation 's top makers of phone equipment , rivaling cisco , nortel and alcatel-lucent in providing internet-based communications to corporations .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Internal CNNs Self-Attention",
"sec_num": "4.4"
},
{
"text": "By looking at the weight scores from SeftAtt, we observe that different parts of a sentence obtain different attentions. For example, as the heat-map illustrates using attention scores, entities avaya and cisco obtain more attention than others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Internal CNNs Self-Attention",
"sec_num": "4.4"
},
{
"text": "We refer to the Collaborative Curriculum Learning using Conflicts Trick as CCL-CT, and using the Small Loss trick as CCL-SL. For the following experiments, we expect that by applying our CCL strategies, it will result in further improvements by reducing the undesirable effects of noise. In our framework of integrating collaborative curriculum learning, both NetAtt and NetMax could be used for testing. We report a comparison of NetAtt/NetMax+SelfAtt+CCL, PCNN+ONE/ATT and state-of-the-art DSGAN noise reduction as shown in Table 3 and Figure 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 526,
"end": 533,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 538,
"end": 546,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Effects of Collaborative Curriculum Learning",
"sec_num": "4.5"
},
{
"text": "CCL-CT utilises the conflicts from the collaborative training of NetMax and NetAtt to form a curriculum to guide the training. The two students form different bag representations to feed to Softmax layer separately, and they generate disagreements on the bag-level selection during training. In our experiment, each epoch reveals less than 10.9% disagreement, thus we drop less than 10.9% of the total entity pair bags during each epoch and the drop ratios work well in our experiments. From Figures 5(a) and 5(b), we can see that the CCL based models have further improvements in terms of PR-curves compared with PCNN+ATT/ONE+SelfAtt. The P@N results in Table 3 indicate that CCL further improves the model's performance when compared to PCNN+ATT/ONE+SelfAtt as well. Table 4 gives another comparison using AUC with all pvalues less than 5e-02 from t-test evaluation. The results indicate that the larger AUC, the better performance. A simple ensemble model of two networks (AUC: 0.371) has a similar result as a single model (NetAtt, AUC: 0.368). The main purpose of adding an additional student network is to introduce conflicts and to build the collaborative curriculum learning scheme. With the CCL strategies, our models improve performances by removing 'hard' (noisy) entity pair bags during training. When compared with the state-of-the-art DSGAN system, our models outperform both DS-GAN based ONE and ATT models.",
"cite_spans": [],
"ref_spans": [
{
"start": 655,
"end": 662,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 769,
"end": 776,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Effects of Collaborative Curriculum Learning",
"sec_num": "4.5"
},
{
"text": "We examine the small loss trick based curriculum using MentorNet (Jiang et al., 2018) . CCL-SL uses the loss from the collaborative training to build the curriculum vector, which commences by guiding the collaborative training from the burnin epoch (5 th ) and the optimal epoch result of (AUC:0.382, P@N mean:77.3%), which is reported for held-out evaluation starting at the burnin epoch. The results also demonstrate that various curriculum types (the conflict and loss tricks) could help to alleviate the noise in DSRE.",
"cite_spans": [
{
"start": 65,
"end": 85,
"text": "(Jiang et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Collaborative Curriculum Learning",
"sec_num": "4.5"
},
{
"text": "Overall, our experimental results demonstrate that the proposed SelfAtt and CCL strategies for PCNN/CNN models significantly outperform baselines in terms of PR-curve and P@N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Collaborative Curriculum Learning",
"sec_num": "4.5"
},
{
"text": "To deal with the mislabelling issue in distantly supervised relation extraction, this paper details the development of a novel model based on a multilayer self-attention mechanism for CNNs and collaborative curriculum learning strategies with two students (NetAtt and NetMax). The internal self-attention model can learn a better sentence representation by taking advantage of deeper CNNs in terms of positive effects on noisy inputs. The CCL strategies can perform a collaborative training on NetAtt and NetMax by allowing them to regularize each other, in tandem with the removal of noisy sample effects. Two different tricks, namely conflicts tricks and small loss tricks, are utilized in the CCL framework. Experimental results on the commonly-used NYT dataset indicate that our proposed approaches significantly outperform state-of-the-art baseline models in terms of P@N and PR-curve evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/google/mentornet 3 http://iesl.cs.umass.edu/riedel/ecml/ 4 https://github.com/thunlp/NRE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/thunlp/NRE/commit/ 77025e5cc6b42bc1adf3ec46835101d162013659",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank co-authors Jingguang Han and Sha Liu ({jingguang.han, sha.liu}@ucd.ie) and anonymous reviewers for these insightful comments and suggestions. We would like to thank Emer Gilmartin (gilmare@tcd.ie) for helpful comments and presentation improvements. This research is funded by the Enterprise-Ireland Innovation Partnership Programme (Grant IP2017626).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-tuning pre-trained transformer language models to distantly supervised relation extraction",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Alt",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "H\u00fcbner",
"suffix": ""
},
{
"first": "Leonhard",
"middle": [],
"last": "Hennig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1388--1398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Alt, Marc H\u00fcbner, and Leonhard Hennig. 2019. Fine-tuning pre-trained transformer language models to distantly supervised relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1388-1398, Florence, Italy. ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Curriculum learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Louradour",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th annual international conference on machine learning",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international con- ference on machine learning, pages 41-48. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Long short-term memory-networks for machine reading",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "551--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 551-561.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Very deep convolutional networks for text classification",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2017,
"venue": "European Chapter of the Association for Computational Linguistics EACL'17",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Holger Schwenk, Lo\u00efc Barrault, and Yann Lecun. 2017. Very deep convolutional net- works for text classification. In European Chap- ter of the Association for Computational Linguistics EACL'17.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multi-level structured self-attentions for distantly supervised relation extraction",
"authors": [
{
"first": "Jinhua",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Jingguang",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Dadong",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2216--2225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhua Du, Jingguang Han, Andy Way, and Dadong Wan. 2018. Multi-level structured self-attentions for distantly supervised relation extraction. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -November 4, 2018, pages 2216-2225.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Reinforcement learning for relation classification from noisy data",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xi- aoyan Zhu. 2018. Reinforcement learning for rela- tion classification from noisy data. In Proceedings of AAAI.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. In Advances in Neural Information Processing Systems 27, pages 2672-2680.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computa- tional Linguistics: Human Language Technologies- Volume 1, pages 541-550. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep residual learning for weakly-supervised relation extraction",
"authors": [
{
"first": "Yiyao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1803--1807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "YiYao Huang and William Yang Wang. 2017. Deep residual learning for weakly-supervised relation ex- traction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1803-1807.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distant supervision for relation extraction with sentence-level attention and entity descriptions",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3060--3066",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3060-3066.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Self-paced curriculum learning",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Deyu",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Shiguang",
"middle": [],
"last": "Shan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Hauptmann",
"suffix": ""
}
],
"year": 2015,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander Hauptmann. 2015. Self-paced curricu- lum learning. In AAAI.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhengyuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2018,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. Mentornet: Learning data- driven curriculum for very deep neural networks on corrupted labels. In ICML.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2124--2133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 2124-2133.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th ACL and the 4th IJC-NLP of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of the Joint Conference of the 47th ACL and the 4th IJC- NLP of the AFNLP, pages 1003-1011. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dsgan: Generative adversarial training for distant supervision relation extraction",
"authors": [
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.09929"
]
},
"num": null,
"urls": [],
"raw_text": "Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Dsgan: Generative adversarial training for distant supervision relation extraction. arXiv preprint arXiv:1805.09929.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148-163. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Classifying relations by ranking with convolutional neural networks",
"authors": [
{
"first": "C\u00edcero",
"middle": [],
"last": "Nogueira Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00edcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multi-instance multi-label learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of the 2012 joint conference on empirical methods in natural language processing and compu- tational natural language learning, pages 455-465. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Reinforcement learning: An introduction",
"authors": [
{
"first": "Richard",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S Sutton and Andrew G Barto. 1998. Rein- forcement learning: An introduction.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Reducing wrong labels in distant supervision for relation extraction",
"authors": [
{
"first": "Shingo",
"middle": [],
"last": "Takamatsu",
"suffix": ""
},
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "721--729",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervi- sion for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics: Long Papers-Volume 1, pages 721-729. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Extracting multiple-relations in onepass with pre-trained transformers",
"authors": [
{
"first": "Haoyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dakuo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiaoxiao",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Saloni",
"middle": [],
"last": "Potdar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.01030"
]
},
"num": null,
"urls": [],
"raw_text": "Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, and Saloni Pot- dar. 2019. Extracting multiple-relations in one- pass with pre-trained transformers. arXiv preprint arXiv:1902.01030.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distant supervision for relation extraction via piecewise convolutional neural networks",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "1753--1762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Pro- ceedings of the 2015 Conference on EMNLP, pages 1753-1762.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via con- volutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335-2344.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Self-attention generative adversarial networks",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Metaxas",
"suffix": ""
},
{
"first": "Augustus",
"middle": [],
"last": "Odena",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.08318"
]
},
"num": null,
"urls": [],
"raw_text": "Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. 2018. Self-attention gen- erative adversarial networks. arXiv preprint arXiv:1805.08318.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "relative distances from 'founder' to entities"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "(b) Collaborative Learning with Conflicts and Loss an entity pair bag contains N (a) A sentence embedding matrix with the entity pair (e 1 , e 2 ) consists of a word vector set of dimension (T \u00d7 d). T is the length of the given sentence and d is the length of the word and position vector. A convolution filter with a dimension of (u \u00d7 d) is sliding along the sentence representation. cov represents a convolution operation. The internal CNNs states (C) from the first convolution operation are fed into the self-attention module. A piece-wise max-pooling is employed at outputs (C) from last convolution layer. A sentence representation x i is learned after a nonlinear function. (b)"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "PR curves comparison of CNN/PCNN based models with SelfAtt module and baselines relation extraction models with bag level selective attention mechanism (PCNN+ATT, CNN+ATT) ("
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "PR curves comparison of PCNN based noise removal models"
},
"TABREF0": {
"type_str": "table",
"content": "
",
"num": null,
"html": null,
"text": "Gates is the principal founder of Microsoft Bill Gates founded Microsoft in 1975 Bill Gates speaking at a Microsoft held Conference"
},
"TABREF1": {
"type_str": "table",
"content": "",
"num": null,
"html": null,
"text": "Parameters setting for best results"
},
"TABREF3": {
"type_str": "table",
"content": "",
"num": null,
"html": null,
"text": "Multilayer CNNs+ATT"
},
"TABREF5": {
"type_str": "table",
"content": ": P@N results for models with internal CNNs |
self-attention and curriculum learning |
",
"num": null,
"html": null,
"text": ""
},
"TABREF7": {
"type_str": "table",
"content": "",
"num": null,
"html": null,
"text": "Comparison of AUC Results"
}
}
}
}