{
"paper_id": "D19-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:03:14.900991Z"
},
"title": "Learning the Extraction Order of Multiple Relational Facts in a Sentence with Reinforcement Learning",
"authors": [
{
"first": "Xiangrong",
"middle": [],
"last": "Zeng",
"suffix": "",
"affiliation": {
"laboratory": "NLPR",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "xiangrong.zeng@nlpr.ia.ac.cn"
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "NLPR",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "shizhu.he@nlpr.ia.ac.cn"
},
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Changsha University of Science & Technology",
"location": {
"postCode": "410114",
"settlement": "Changsha",
"country": "China"
}
},
"email": "zengdj@csust.edu.cn"
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "NLPR",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "kliu@nlpr.ia.ac.cn"
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "NLPR",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "jzhao@nlpr.ia.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The multiple relation extraction task tries to extract all relational facts from a sentence. Existing works didn't consider the extraction order of relational facts in a sentence. In this paper we argue that the extraction order is important in this task. To take the extraction order into consideration, we apply the reinforcement learning into a sequence-to-sequence model. The proposed model could generate relational facts freely. Widely conducted experiments on two public datasets demonstrate the efficacy of the proposed method.",
"pdf_parse": {
"paper_id": "D19-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "The multiple relation extraction task tries to extract all relational facts from a sentence. Existing works didn't consider the extraction order of relational facts in a sentence. In this paper we argue that the extraction order is important in this task. To take the extraction order into consideration, we apply the reinforcement learning into a sequence-to-sequence model. The proposed model could generate relational facts freely. Widely conducted experiments on two public datasets demonstrate the efficacy of the proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation extraction (RE) is a core task in natural language processing (NLP). RE can be used in information extraction (Wu and Weld, 2010) , question answering (Yih et al., 2015; Dai et al., 2016) and other NLP tasks. Most existing works assumed that a sentence only contains one relational facts (a relational fact, or a triplet, contains a relation and two entities). But in fact, a sentence often contains multiple relational facts (Zeng et al., 2018b) . The multiple relation extraction task tries to extract all relational facts from a sentence.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Wu and Weld, 2010)",
"ref_id": "BIBREF29"
},
{
"start": 160,
"end": 178,
"text": "(Yih et al., 2015;",
"ref_id": "BIBREF32"
},
{
"start": 179,
"end": 196,
"text": "Dai et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 435,
"end": 455,
"text": "(Zeng et al., 2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing works on multiple relation extraction task can be divided into five genres. 1) Pseu-doPipeline genre, including Miwa and Bansal (2016) ; Sun et al. (2018) . They first recognized all the entities of the sentence, then extracted features for each entity pair and predicted their relation. They trained the entity recognition model and relation prediction model together instead of separately. Therefore, we call them PseudoPipeline methods. 2) TableFilling genre, including Miwa and Sasaki (2014) ; Gupta et al. (2016) and . They maintained a entity-relation table and predicted a semantic tag (either entity tags or relation tags) for each cell in the table.",
"cite_spans": [
{
"start": 121,
"end": 143,
"text": "Miwa and Bansal (2016)",
"ref_id": "BIBREF15"
},
{
"start": 146,
"end": 163,
"text": "Sun et al. (2018)",
"ref_id": "BIBREF25"
},
{
"start": 482,
"end": 504,
"text": "Miwa and Sasaki (2014)",
"ref_id": "BIBREF16"
},
{
"start": 507,
"end": 526,
"text": "Gupta et al. (2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to the predicted tags, they can recognize the entities and the relation between each entity pair. 3) NovelTagging genre, including Zheng et al. (2017) . This method can be seen as a development of TableFilling method. They assigned a pre-defined semantic tag to each word of the sentence and collected triplets based on the tags. Their tags include both entity and relation information. Therefore, they don't need to maintain a entity-relation table. 4) MultiHeadSelection genre, including Bekoulis et al. (2018a) and Bekoulis et al. (2018b) . They first recognized the entities, then they formulated the relation extraction task as a multi-head selection problem. For each entity, they calculated the score between it and every other entities for a given relation. The combination of the entity pair and relation with the score exceeding a threshold will be kept as a triplet. 5) Generative genre, including Zeng et al. (2018b) . They directly generate triplets one by one by a sequence-to-sequence model with copy mechanism (Gu et al., 2016; Vinyals et al., 2015) . To generate a triplet, they first generated the relation, then they copy the first entity and the second entity from the source sentence.",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "Zheng et al. (2017)",
"ref_id": "BIBREF40"
},
{
"start": 500,
"end": 523,
"text": "Bekoulis et al. (2018a)",
"ref_id": "BIBREF0"
},
{
"start": 528,
"end": 551,
"text": "Bekoulis et al. (2018b)",
"ref_id": "BIBREF1"
},
{
"start": 919,
"end": 938,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
},
{
"start": 1036,
"end": 1053,
"text": "(Gu et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 1054,
"end": 1075,
"text": "Vinyals et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, none of them have considered the extraction order of multiple triplets in a sentence. Given a sentence, the PseudoPipeline methods extract relations of different entity pairs separately. Although they jointly training the entity model and relation model, they ignore the influence between triplets actually. The TableFilling, NovelTagging and MultiHeadSelection methods extract the triplets in the word order of this sentence. They firstly deal with the first word, then the second one and so on. The generative method could generate triplets in any order actually. However, Zeng et al. (2018b) randomly choose the extraction order of the triplets in each sentence. Sorting the triplets in the sentences of training data beforehand with",
"cite_spans": [
{
"start": 584,
"end": 603,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Example of multiple relation extraction. Sentence: \"Cubanelle is in Arros negre, a dish from the Catalonia region.\" Relational facts: #1: <Arros negre, food_region, Catalonia>; #2: <Arros negre, ingredient, Cubanelle>. In this example, it is easier to extract F_2 first; the extraction of F_1 can benefit from F_2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "global rules (e.g., alphabetical order) is straightforward. But one global sorting rule may not fit every sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Facts",
"sec_num": null
},
{
"text": "In this paper, we argue that the extraction order of triplets in a sentence is important. Take Figure 1 as example. It's difficult to extract F 1 first because we don't know what \"Arros negre\" is in the first place. Extracting F 2 is more straightforward as the key words \"dish\", \"region\" in the sentence is helpful. F 2 can help us to extract F 1 because now we are confident that \"Arros negre\" is some kind of food, so that \"ingredient\" is a suitable relation between \"Arros negre\" and \"Cubanelle\". From this intuitive example, we can see that the extracted triplets could influence the extraction of the remaining triplets.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 104,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relational Facts",
"sec_num": null
},
{
"text": "To automatically learning the extraction order of multiple relational facts in a sentence, we propose a sequence-to-sequence model and apply reinforcement learning (RL) on it. we follow the Generative genre because such a model could extract triplets in various order, which is convenient for us to explore the influence of triplets extraction order. Our model reads in a raw sentence and generates triplets one by one. Thus, all triplets in a sentence could be extracted. To take the triplets extraction order into consideration, we convert the triplets generation process as a RL process. The sequence-to-sequence model is regarded as the RL policy. The action is what we generate in each time step. We assume that a better generation order could lead to more valid generated triplets. The RL reward is related to the generated triplets. In general, the more triplets are correctly generated, the higher the reward. Unlike supervised learning with negative log likelihood (NLL) loss, which forces the model to generate triplets in the order of the ground truth, reinforcement learning allows the model generate triplets freely to achieve higher reward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Facts",
"sec_num": null
},
{
"text": "The main contributions of this work are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Facts",
"sec_num": null
},
{
"text": "\u2022 We discuss the triplets extraction order prob-lem in the multiple relation extraction task. In our knowledge, this problem has never been addressed before.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Facts",
"sec_num": null
},
{
"text": "\u2022 We apply reinforcement learning method on a sequence-to-sequence model to handle this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Facts",
"sec_num": null
},
{
"text": "\u2022 We conduct widely experiments on two public datasets. Experimental results show that the proposed method outperform the strong baselines with 3.4% and 5.5% improvements respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relational Facts",
"sec_num": null
},
{
"text": "Given a sentence with two annotated entities (an entity pair), the relation classification task aims to identify the predefined relation between these two entities. Zeng et al. (2014) was among the first to apply neural networks in relation classification task. They adopted the Convolutional Neural Network (CNN) to learn the sentence representation automatically. In the following, dos Santos et al. 2015; Xu et al. (2015a) also applied CNN to extract relation. Xu et al. (2015b) utilized shortest dependency path between two entities with a LSTM (Hochreiter and Schmidhuber, 1997) based recurrent neural network. Zhou et al. (2016) applied attention mechanism to learn different weights for each word and used LSTM to represent sentence. These methods all assumed that the entity pair is given beforehand and a sentence only contains two entities. To extract both entities and relation from sentence, early works like Zelenko et al. (2003) ; Chan and Roth (2011) adopted pipeline methods. However, such pipeline methods neglect the relevance between entities and relation. Latter works focused on joint models that extract entities and relation jointly. Yu and Lam (2010); Li and Ji (2014); Miwa and Bansal (2016) relied on NLP tools to do feature engineering, which suffered from the error propagation problem. Miwa and Sasaki (2014) ; Gupta et al. (2016) ; applied neural networks to jointly extract entities and relations. They converted the relation extraction task into a table filling task. Zheng et al. (2017) took a step further and converted this task into a tagging task. They assigned a semantic tag to each word in the sentence and collected triplets according to the tag information. Bekoulis et al. (2018b,a) model the relation extraction task as a multi-head selec-tion problem. However, these models can not take triplet's extraction order into consideration. Sun et al. (2018) proposed a joint learning paradigm based on minimum risk training. Their method ignore the influence between relational facts. Zeng et al. (2018b) proposed an sequence-to-sequence model with copy mechanism to handle the overlapping problem in multiple relation extraction. They randomly choose a extraction order for each sentence.",
"cite_spans": [
{
"start": 165,
"end": 183,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF36"
},
{
"start": 408,
"end": 425,
"text": "Xu et al. (2015a)",
"ref_id": "BIBREF30"
},
{
"start": 464,
"end": 481,
"text": "Xu et al. (2015b)",
"ref_id": "BIBREF31"
},
{
"start": 549,
"end": 583,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 616,
"end": 634,
"text": "Zhou et al. (2016)",
"ref_id": "BIBREF41"
},
{
"start": 921,
"end": 942,
"text": "Zelenko et al. (2003)",
"ref_id": "BIBREF35"
},
{
"start": 1315,
"end": 1337,
"text": "Miwa and Sasaki (2014)",
"ref_id": "BIBREF16"
},
{
"start": 1340,
"end": 1359,
"text": "Gupta et al. (2016)",
"ref_id": "BIBREF8"
},
{
"start": 1700,
"end": 1725,
"text": "Bekoulis et al. (2018b,a)",
"ref_id": null
},
{
"start": 1879,
"end": 1896,
"text": "Sun et al. (2018)",
"ref_id": "BIBREF25"
},
{
"start": 2024,
"end": 2043,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "RL has attracted lot of attention recently. It has been successfully applied in many games (Mnih et al., 2015; Silver et al., 2016) . Narasimhan et al. (2015) ; He et al. (2016) applied RL on text based games. Narasimhan et al. (2016) employed deep Q-network to optimize a reward function that reflects the extraction accuracy while penalizing extra effort. applied policy gradient method to model future reward in chatbot dialogue. They designed a reward to promote three conversational properties: informativity, coherence and ease of answering. Su et al. 2016using on-line activate reward learning for policy optimization in spoken dialogue systems because the user feedback is often unreliable and costly to collect. Yu et al. (2017) applied RL method to overcome the limitations that the Generative Adversarial Net (GAN) in generating sequences of discrete tokens. Our work is related to Li et al. (2016); Yu et al. (2017) since we also apply RL to generate better sequences.",
"cite_spans": [
{
"start": 91,
"end": 110,
"text": "(Mnih et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 111,
"end": 131,
"text": "Silver et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 134,
"end": 158,
"text": "Narasimhan et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 161,
"end": 177,
"text": "He et al. (2016)",
"ref_id": "BIBREF9"
},
{
"start": 210,
"end": 234,
"text": "Narasimhan et al. (2016)",
"ref_id": "BIBREF19"
},
{
"start": 721,
"end": 737,
"text": "Yu et al. (2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There are several works that related to both relation extraction and RL, which are also related to our work. Zeng et al. (2018a) ; Feng et al. 2018; Qin et al. (2018) applied RL to distantly supervised relation extraction task. Zeng et al. (2018a) turned the bag relation prediction into an RL process. They assumed that the relation of the bag is determined by the relation of sentences from the bag. They set the final reward to +1 or -1 by comparing the predict bag relation with the gold relation. Feng et al. (2018) adopted policy gradient method to select high-quality sentences from the bag. The selected sentences are feed to the relation classifier and the relation classifier provides rewards to the instance selector. Similarly, Qin et al. (2018) explored a deep RL strategy to generate the false-positive indicator. Our work is different from them since we focus on supervised relation extraction task.",
"cite_spans": [
{
"start": 109,
"end": 128,
"text": "Zeng et al. (2018a)",
"ref_id": "BIBREF37"
},
{
"start": 149,
"end": 166,
"text": "Qin et al. (2018)",
"ref_id": "BIBREF20"
},
{
"start": 228,
"end": 247,
"text": "Zeng et al. (2018a)",
"ref_id": "BIBREF37"
},
{
"start": 502,
"end": 520,
"text": "Feng et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 740,
"end": 757,
"text": "Qin et al. (2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We first introduce our basic model and then introduce how to apply RL on it. Similar to Zeng et al. (2018b) , our neural model is also a sequence-tosequence model with copy mechanism. It reads in a raw sentence and generates triplets one by one. Instead of training the model with NLL loss, we regard the triplets generation process as a RL process and optimize the model with REIN-FORCE (Williams, 1992) algorithm. Therefore, we don't have to determine the triplets order of each sentence beforehand, we let the model generate triplets freely. We show the RL process in Figure 2 .",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
},
{
"start": 377,
"end": 404,
"text": "REIN-FORCE (Williams, 1992)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 571,
"end": 579,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The sequence-to-sequence model with copy mechanism is a kind of CopyNet (Gu et al., 2016) or PointerNetwork (Vinyals et al., 2015) . Two components included in this model: encoder and decoder. The encoder is a bi-directional recurrent neural network, which is used to encode a variable-length sentence into a fixed-length vector. We denote the outputs of encoder as",
"cite_spans": [
{
"start": 72,
"end": 89,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 108,
"end": 130,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},
{
"text": "O E = [o E 1 , ..., o E n ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},
{
"text": "where o E i denotes the output of i-th word of the encoder and n is the sentence length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},
{
"text": "The decoder is another recurrent neural network, which is used to generate triplets one by one. The NA-triplets will be generated if the valid triplets number is less than the maximum triplets number. 1 It takes three-time steps to generate one triplet. That is, in time step t (t = 1, 2, 3, ..., T ), if t%3 = 1, we predict the relation. If t%3 = 2, we copy the first entity and if t%3 = 0, we copy the second entity. T is the maximum decode time step. Note that T is always divisible by 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},
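{
"text": "To make the three-step decoding schedule concrete, here is a minimal Python sketch (our illustration, not the authors' code) of the role played by each decoding time step:\n\ndef step_role(t):\n    # Every triplet takes three steps: relation, first entity, second entity.\n    if t % 3 == 1:\n        return 'predict_relation'\n    if t % 3 == 2:\n        return 'copy_first_entity'\n    return 'copy_second_entity'  # t % 3 == 0\n\nT = 15  # maximum decoding time step, i.e. at most T / 3 = 5 triplets\nassert T % 3 == 0\nroles = [step_role(t) for t in range(1, T + 1)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},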
{
"text": "Suppose there are m predefined valid relations, in time step t (t = 1, 4, 7, ...), we calculate the confidence score for each valid relation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q r t = selu(o D t \u2022 W r t + b r t )",
"eq_num": "(1)"
}
],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},
{
"text": "where o D t is the output of decoder in time step t; W r t is the weight matrix and b r t is the bias in time step t; selu(\u2022) (Klambauer et al., 2017) is activation function. To allow the model to generate NAtriplet, we also calculate the confidence score for [1,1,1,0,0,0] Figure 2 : The RL process. The model reads in a raw sentence and generates triplets. Then, a reward is assigned to each time step based on the generated triplets. Lastly, the rewards is used to optimize the model. NA relation:",
"cite_spans": [
{
"start": 126,
"end": 150,
"text": "(Klambauer et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 274,
"end": 282,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q N A t = selu(o D t \u2022 W N A t + b N A t )",
"eq_num": "(2)"
}
],
"section": "Reward Sentence",
"sec_num": null
},
{
"text": "where W N A t and b N A t are parameters in time step t. Then we concatenate q r t and q N A t and perform softmax to obtain the probability distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward Sentence",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p r t = sof tmax([q r t ; q N A t ])",
"eq_num": "(3)"
}
],
"section": "Reward Sentence",
"sec_num": null
},
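{
"text": "A minimal numpy sketch of Eqs. (1)-(3) (our illustration; the shapes are assumptions: o_t of size d, W_r of size d x m, b_r of size m, W_na of size d x 1, b_na of size 1):\n\nimport numpy as np\n\ndef selu(x, alpha=1.6733, scale=1.0507):  # approximate selu constants (Klambauer et al., 2017)\n    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1))\n\ndef softmax(x):\n    e = np.exp(x - x.max())\n    return e / e.sum()\n\ndef relation_distribution(o_t, W_r, b_r, W_na, b_na):\n    q_r = selu(o_t @ W_r + b_r)     # Eq. (1): scores of the m valid relations\n    q_na = selu(o_t @ W_na + b_na)  # Eq. (2): score of the NA relation\n    return softmax(np.concatenate([q_r, q_na]))  # Eq. (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},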
{
"text": "To copy the first entity in time step t (t = 2, 5, 8, ...), we calculate the confidence score of each word in source sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward Sentence",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q e ti = selu([o D t ; o E i ] \u2022 w e t )",
"eq_num": "(4)"
}
],
"section": "Reward Sentence",
"sec_num": null
},
{
"text": "where q e ti is the confidence score of i-th word and w e t is the weight vector, in time step t. Similarly, to take the NA-triplet into consideration, we also calculate the confidence score for NA entity with Eq 2. We concatenate them and perform softmax to obtain the probability distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward Sentence",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p e t = sof tmax([[q e t1 , ..., q e tn ]; q N A t ])",
"eq_num": "(5)"
}
],
"section": "Reward Sentence",
"sec_num": null
},
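{
"text": "Similarly, a minimal sketch of the copy distribution in Eqs. (4)-(5), reusing selu and softmax from the previous sketch; the optional mask argument is our illustration of how copying the same word twice is forbidden when the second entity is copied:\n\ndef entity_copy_distribution(o_t, O_E, w_e, W_na, b_na, mask=None):\n    # Eq. (4): score the i-th source word by projecting [o_t; o_i] with vector w_e.\n    q_e = np.array([float(selu(np.concatenate([o_t, o_i]) @ w_e)) for o_i in O_E])\n    q_na = selu(o_t @ W_na + b_na)  # NA-entity score, computed as in Eq. (2)\n    scores = np.concatenate([q_e, q_na])\n    if mask is not None:  # boolean array of length n + 1; True marks forbidden positions\n        scores = np.where(mask, -1e9, scores)\n    return softmax(scores)  # Eq. (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-Sequence Model with Copy Mechanism",
"sec_num": "3.1"
},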
{
"text": "Copy the second entity in time step t (t = 3, 6, 9, ...) is almost the same as the first entity. The only difference is we also apply the mask (Zeng et al., 2018b) to avoid the copied two entities are the same.",
"cite_spans": [
{
"start": 143,
"end": 163,
"text": "(Zeng et al., 2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reward Sentence",
"sec_num": null
},
{
"text": "Our model is similar to OneDecoder model and MultiDecoder model in Zeng et al. (2018b) . Compared with OneDecoder model, our model using different linear transformation parameters in different decoding time step. Compared with Multi-Decoder model, our model using only one decoder cell to decode all triplets. In our model, we didn't using attention mechanism because we found that the attention mechanism makes no difference to the results.",
"cite_spans": [
{
"start": 67,
"end": 86,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reward Sentence",
"sec_num": null
},
{
"text": "We regard the triplets generation process as RL process. The loop in Figure 2 represents a RL episode. In each RL episode, the model reads in the raw sentence and generate output sequence. Then we gain triplets from the output sequence and calculate rewards based on them. Finally, we optimize the model with REINFORCE algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reinforcement Learning Process",
"sec_num": "3.2"
},
{
"text": "We use s t to denote the state of sentence x in decoding time step t. The state s t contains the already generated tokens\u0177 <t , the information of source sentence x and the model parameters \u03b8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State",
"sec_num": null
},
{
"text": "s t = (\u0177 <t , x, \u03b8) (6) Action",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State",
"sec_num": null
},
{
"text": "The action is what we predict (or copy) in each time step. In time step t and t%3 = 1, the model (policy) is required to determine the relation of the triplet; In time step t where t%3 = 2 or 0, the model is required to determine the first or second entity, which is copied from the source sentence. Therefore, the action space A is varied in different time step t. A = R, t%3 = 1 P, t%3 = 2, 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State",
"sec_num": null
},
{
"text": "where R is the predefined relations and P is the positions of source sentence. We denote the action sequence of the source sentence as a = [a 1 , ..., a T ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State",
"sec_num": null
},
{
"text": "Algorithm 1 Reward Assignment\nInput: Sampled action sequence a = [a_1, ..., a_T]; gold triplet set G; NA-triplet NA.\nOutput: Rewards of each action r = [r_1, ..., r_T].\n1: Number of generated triplets K = T/3; generated triplet list F = [F_1, ..., F_K]; already generated triplet set V = {}.\n2: for each i \u2208 [1, K] do\n3: F_i = [a_{3i-2}, a_{3i-1}, a_{3i}]\n4: end for\n5: for each i \u2208 [1, K] do\n6: r_{3i-2} = r_{3i-1} = r_{3i} = 0\n7: if i \u2264 |G| then\n8: if F_i \u2208 G and F_i \u2209 V then\n9: Add F_i to V\n10: r_{3i-2} = r_{3i-1} = r_{3i} = 1\n11: end if\n12: else if F_i = NA then\n13: r_{3i-2} = r_{3i-1} = r_{3i} = 0.5\n14: end if\n15: end for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward",
"sec_num": null
},
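{
"text": "Algorithm 1 translates directly into Python. A minimal sketch (our illustration, not the authors' released code), assuming triplets are compared as tuples and NA is a sentinel triplet such as ('NA', 'NA', 'NA'):\n\ndef assign_rewards(actions, gold, NA):\n    # actions: flat action sequence [a_1, ..., a_T]; gold: set of gold triplets.\n    T = len(actions)\n    K = T // 3  # number of generated triplets\n    triplets = [tuple(actions[3 * i:3 * i + 3]) for i in range(K)]\n    rewards = [0.0] * T\n    seen = set()  # already generated triplets, V in Algorithm 1\n    for i, f in enumerate(triplets):\n        if i < len(gold):  # 0-indexed version of 'i <= |G|'\n            if f in gold and f not in seen:\n                seen.add(f)\n                rewards[3 * i:3 * i + 3] = [1.0] * 3\n        elif f == NA:\n            rewards[3 * i:3 * i + 3] = [0.5] * 3\n    return rewards",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward",
"sec_num": null
},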
{
"text": "The reward is used to guide the training, which is critical to RL training. However, we can't assign a reward to each step directly during the generation since we don't know whether each action we choose is good or not before we finish the generation. Remind that we could obtain a triplet in every three steps. Once we obtained a triplet, we can compare it with the gold triplets and know if this triplet is good or not. A well generated triplet means it's the same with one of the gold triplets and not the same with any already generated triplets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward",
"sec_num": null
},
{
"text": "When we obtained a good triplet after three steps, we assign reward 1 to each of these three steps. Otherwise, we assign reward 0 to them. After generating valid triplets, we may need to generate NA-triplets. We assign reward 0.5 to each of these three steps if we correctly generate NAtriplet and reward 0 otherwise. We show the details of the reward assignment in Algorithm 1 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward",
"sec_num": null
},
{
"text": "The model can be trained with either supervised learning loss or reinforcement learning loss. However, the supervised learning forces the model to 2 How to determine the reward in RL is difficult. We tried several different reward assignments but only this one works. generate triplets in the order of the ground truth while the reinforcement learning allows the model generate triplets freely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "Training the model with NLL loss requires a predefined ground truth sequence for each sentence. Suppose T is the maximum time step of decoder, we denote the ground truth sequence as [y 1 , ..., y t , ..., y T ]. Them the NLL loss for sentence x can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLL Loss",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = 1 T T t=1 \u2212log(p(y t |\u0177 <t , x, \u03b8))",
"eq_num": "(8)"
}
],
"section": "NLL Loss",
"sec_num": null
},
{
"text": "where\u0177 <t is the already generated tokens; p(\u2022|\u2022) is the conditional probability; \u03b8 is the parameters of the entire model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLL Loss",
"sec_num": null
},
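{
"text": "A minimal sketch of the per-step loss (our illustration): with uniform weights it computes the NLL loss of Eq. (8), and with the weights set to the rewards r_t from Algorithm 1 it computes the RL loss of Eq. (9) below:\n\nimport numpy as np\n\ndef sequence_loss(step_probs, weights=None):\n    # step_probs[t]: model probability of the target (or sampled) token at step t.\n    T = len(step_probs)\n    if weights is None:\n        weights = [1.0] * T  # NLL loss, Eq. (8)\n    return sum(-np.log(p) * w for p, w in zip(step_probs, weights)) / T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLL Loss",
"sec_num": null
},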
{
"text": "Training the model with reinforcement learning only require the ground truth triplets for each sentence. The RL loss for sentence x is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RL Loss",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = 1 T T t=1 \u2212log(p(\u0177 t |\u0177 <t , x, \u03b8)) * r t",
"eq_num": "(9)"
}
],
"section": "RL Loss",
"sec_num": null
},
{
"text": "where\u0177 t is the sampled action and r t is the reward, in time step t. NYT dataset is proposed by Riedel et al. (2010) . This dataset is produced by distant supervision method which automatically aligns Freebase with New York Times news articles. Like Zheng et al. (2017); Zeng et al. (2018b) do, we ignore the noise in this dataset and use it as a supervised dataset. We use the pre-processed dataset used in Zeng et al. (2018b) , which contains 5000 sentences in the test set and 5000 sentences in the validation set and 56195 sentences in the train set. In the train set, there are 36868 sentences that contain one triplet, 19327 sentences that contain multiple triplets. In the test set, the sentence number are 3244 and 1756, respectively. There are 24 relations in total.",
"cite_spans": [
{
"start": 97,
"end": 117,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF21"
},
{
"start": 409,
"end": 428,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RL Loss",
"sec_num": null
},
{
"text": "WebNLG dataset is proposed by Gardent et al. (2017) . This data set is originally created for Natural Language Generation (NLG) task. Given a group of triplets, annotators are asked to write a sentence which contains the information of all triplets in this group. We use the dataset preprocessed by Zeng et al. (2018b) and the train set contains 5019 sentences, the test set contains 703 sentences and the validation set contains 500 sentences. In the train set, there are 1596 sentences that contain one triplet, 3423 sentences contain multiple triplets. In the test set, the sentence number are 266 and 437, respectively. There are 246 different relations.",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "Gardent et al. (2017)",
"ref_id": "BIBREF6"
},
{
"start": 299,
"end": 318,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RL Loss",
"sec_num": null
},
{
"text": "Zeng et al. (2018b) only use LSTM as the model cell. In this paper, we report the results of both LSTM and GRU (Cho et al., 2014) . We follow the most settings from Zeng et al. (2018b) . The cell unit number is set to 1000; The embedding dimension is set to 100; The batch size is 100; The maximum time step T is 15, that is, we will extract 5 triplets for each sentence; We use Adam (Kingma and Ba, 2015) to optimize parameters and stop the training when we find the best result in validation set. For the NLL training, the learning rate in both dataset is 0.001. For the RL training, we first pretrain the model with NLL training (pretrain model achieves 80%-90% of the best NLL training performance), then training the model with RL. The RL learning rate is 0.0005.",
"cite_spans": [
{
"start": 111,
"end": 129,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 165,
"end": 184,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},
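{
"text": "For reference, the training settings above collected as a plain Python dict (our summary, not the authors' configuration file):\n\nSETTINGS = {\n    'cell': 'LSTM or GRU',\n    'cell_units': 1000,\n    'embedding_dim': 100,\n    'batch_size': 100,\n    'max_decode_steps': 15,  # up to 5 triplets per sentence\n    'optimizer': 'Adam',\n    'nll_learning_rate': 0.001,\n    'rl_learning_rate': 0.0005,\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},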
{
"text": "We follow the evaluation metrics in Zeng et al. (2018b) . Our model can only copy one word for each entity and we use the last word of each entity to represent them. Triplet is regarded as correct when its relation, the first entity and the second entity are all correct. For example, suppose the gold triplets is < Barack Obama, president, U SA >, < Obama, president, U SA > is regarded as correct while < Obama, locate, U SA > and < Barack, prsident, U SA > are not. A triplet is regarded as NA-triplet when and only when its relation is NA relation and it has a NA entity pair. The predicted NA-triplet will be excluded. We use the standard micro Precision, Recall and F1 score to evaluate the results.",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.3"
},
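{
"text": "A minimal sketch of this evaluation (our illustration): triplets are (relation, first_entity, second_entity) tuples with entities already reduced to their last word and NA-triplets already excluded; pooling the predictions of all sentences yields the micro scores:\n\ndef micro_prf(pred_triplets, gold_triplets):\n    pred, gold = set(pred_triplets), set(gold_triplets)\n    correct = len(pred & gold)\n    p = correct / len(pred) if pred else 0.0\n    r = correct / len(gold) if gold else 0.0\n    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0\n    return p, r, f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.3"
},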
{
"text": "To find out if the triplets extraction order of a sentence can make difference in multiple relation extraction task, we conduct widely experiments on both NYT and WebNLG dataset. We show the results of different extraction order of different models with LSTM cell in Table 1 . The results of models with GRU cell are shown in Appendix B. We box the best results of a model and the bold values are the best results in this dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results of Different Extraction Order",
"sec_num": "4.4"
},
{
"text": "CNN denotes the baseline with CNN classifier. We use the NLTK toolkit 3 to recognize the entities first. Then we combine every two entities as an entity pair. Every two entities can lead to two different entity pairs. For each entity pair, we apply a CNN classifier (Zeng et al., 2014) to determine the relation. We leave the details of this model in Appendix A. ONE and MULTI denotes the OneDecoder model and MultiDecoder model in Zeng et al. (2018b) . 4 NLL means the model is trained with NLL loss, which requires a predefined ground truth sequence for each sentence. For a sentence with N triplets, there are N ! (the factorial of N ) possible extraction order, which lead to N ! valid sequences. Shuffle means we randomly select one valid sequence as the ground truth sequence in every training epoch for a sentence. Fix-Unsort means we randomly select one valid sequence before training, and use the selected one as ground truth sequence during training. This strategy is used in Zeng et al. (2018b) . Alphabetical means we sort the triplets of a sentence in alphabetical order and build ground truth sequence based on the sorted triplets. Frequency means we sort the triplets of a sentence based on the relation frequency. We count the relation frequency from the training set. RL means the model is trained with reinforcement learning. In NYT dataset, we using Alphabetical strategy to pretrain the model, and in WebNLG dataset, we pretrain the model wtih Frequency strategy.",
"cite_spans": [
{
"start": 266,
"end": 285,
"text": "(Zeng et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 432,
"end": 451,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
},
{
"start": 986,
"end": 1005,
"text": "Zeng et al. (2018b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Different Extraction Order",
"sec_num": "4.4"
},
{
"text": "Form Table 1 , we can observe that: (a) The CNN baseline is not performing well because this model neglect the influence between triplets. (b) Compared with FixUnsort strategy, simply change the ground truth sequence in different training epoch (the Shuffle strategy) is also not good. The performance of OneDecoder drops from 0.566 to 0.552 in NYT dataset and 0.305 to 0.283 in WebNLG dataset. (c) In both dataset and for all models trained with NLL loss, sort the triplets in some order (Alphabetical or Frequency order) can lead to better performance. For example, our model achieves 0.617 F1 score under FixUnsort strategy, while 0.697 and 0.669 F1 score under Alphabetical and Frequency strategy in NYT dataset. This observation verifies that the triplets extracting order of a sentence is important in multiple relation extraction task. (d) Another interesting observation is that the NLL trained model can achieve the best F1 score in NYT dataset if we sort the triplets in alphabetical order, while in WebNLG dataset, we need to sort the triplets in relation frequency order. This observation demonstrates that a global sorting rule may not fit for every dataset. The Alphabetical strategy is better for NYT dataset while the Frequency strategy is better for WebNLG dataset. (e) We can also observe that the model trained with RL can achieve better result than any NLL sorting strategy. For example, in WebNLG dataset, MultiDecoder model only achieves 0.481 and 0.518 F1 score with Alphabetical and Frequency strategy, it achieves 0.564 F1 score with RL. Our model trained with RL achieves the best performance on both NYT and WebNLG dataset, which is 0.721 and 0.616. It's 3.4% and 5.5% improvements compared with the best global sorting rule on these two datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results of Different Extraction Order",
"sec_num": "4.4"
},
{
"text": "This experiment compares the extraction order of our model trained with RL. We test our model Comparison LSTM GRU FUNLL-FURL 0.326 0.390 FreqRL-FURL 0.446 0.435 on WebNLG dataset. The generated triplets (excluding NA-triplets) sequence of our model which is pretrained with FixUnsort strategy then trained with RL, is denoted as FURL. Similarly, the triplets sequence of our model which is pretrained with Frequency strategy then trained with RL is denoted as FreqRL. And the triplets sequence of our model which is trained with FixUnsort strategy is denoted as FUNLL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction Order Comparison",
"sec_num": "4.5"
},
{
"text": "Suppose A = [F_a, F_b, F_c] is the generated triplet sequence of FURL for sentence x, and B = [F_a, F_c] is the generated triplet sequence of FUNLL for the same sentence x, where F_a, F_b and F_c are triplets. The first triplet of sequence A is F_a, which is the same as the first triplet of sequence B, but the second triplet of A is different from that of B. Therefore, only 1 triplet is in the same position in A and B. The triplet number is the maximum length of A and B, which is 3 in this example, so we calculate the order comparison of sentence x as 1/3 = 0.333. The order comparison of FUNLL and FURL (denoted FUNLL-FURL) is the mean value over all sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction Order Comparison",
"sec_num": "4.5"
},
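{
"text": "A minimal sketch of this order comparison metric (our illustration), reproducing the worked example above; the corpus-level score is the mean over all sentences:\n\ndef order_comparison(seq_a, seq_b):\n    # Fraction of positions where both sequences emit the same triplet,\n    # normalized by the longer sequence length.\n    same = sum(1 for fa, fb in zip(seq_a, seq_b) if fa == fb)\n    return same / max(len(seq_a), len(seq_b))\n\nA = ['F_a', 'F_b', 'F_c']  # FURL output for sentence x\nB = ['F_a', 'F_c']         # FUNLL output for the same sentence\nassert abs(order_comparison(A, B) - 1 / 3) < 1e-9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction Order Comparison",
"sec_num": "4.5"
},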
{
"text": "We show the order comparison results of our model with LSTM and GRU cell in Table 2 . As we can see, although FURL model is pretrained by FUNLL, FURL is more alike FreqRL (0.446 and 0.435), rather then FUNLL (0.326 and 0.390).",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Extraction Order Comparison",
"sec_num": "4.5"
},
{
"text": "This experiment verified that after RL training, the model trend to generate triplets in the same order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction Order Comparison",
"sec_num": "4.5"
},
{
"text": "To verify the ability of extracting multiple relational facts, we conduct the experiment on NYT dataset of our model with LSTM cell. We show the results in Figure 3 As we can see, when the sentence only contains one triplet, our model trained with RL can achieve comparative performance with the strong baselines. When there are multiple triplets for a sentence, our model trained with RL outperform all baselines significantly. By training with RL, our model could extract triplets more precisely. Although the recall value is slightly lower then NLL training with Frequency strategy, it exceeds other baselines significantly. These observations demonstrate that RL training is effective to handle the multiple relation extraction task.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Multiple Relation Extraction",
"sec_num": "4.6"
},
{
"text": "Although we overcome all strong baselines by training the model with RL, there are still some weaknesses in our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weakness",
"sec_num": "5"
},
{
"text": "The first weakness is the decrease in recall. Table 1 shows that NLL training with Alphabetical or Frequency strategy achieves the highest recall in most cases. Training the model with RL achieves the highest precision and relatively low recall. This phenomenon demonstrates that the model trained with RL generates relatively fewer triplets. Although we can extract triplets more accurate, it is still a weakness of our method since we try to extract all triplets from a sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Weakness",
"sec_num": "5"
},
{
"text": "The second weakness is our model can only copy one word for each entity. Following Zeng et al. (2018b), we only copy the last word of an entity. But in reality, most entities contains more than one word. In the future, we will consider how to extract the complete entity. For example, we could add the BIO tag prediction in the encoder and train the BIO loss together with current loss. Therefore, we can recognize the complete entity with the help of BIO tags. Or, we can take two steps to generate one entity, one step for the head word and the other for the tail word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weakness",
"sec_num": "5"
},
{
"text": "In this paper, we discuss the multiple triplets extraction order problem in the multiple relation extraction task. We propose a sequence-to-sequence model with reinforcement learning to take the extraction order into consideration. Widely experiments on NYT dataset and WebNLG dataset are conducted and verified that the proposed method is effective in handling this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In this section, we will describe the details of the CNN baseline. This baseline is a pipeline method. For a sentence, we use the NLTK toolkit to recognize the entities first. Then, we combine each two entities as an entity pair and use a CNN relation classifier to predict their relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A The Details of the CNN Baseline",
"sec_num": null
},
{
"text": "For example, suppose we recognize 3 entities in sentence s, which denoted as e 1 , e 2 , e 3 . There are 6 different entity pairs (remind that < e 1 , e 2 > and < e 2 , e 1 > are different).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A The Details of the CNN Baseline",
"sec_num": null
},
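{
"text": "A one-line sketch of the entity pair enumeration (our illustration):\n\nfrom itertools import permutations\n\nentities = ['e1', 'e2', 'e3']\npairs = list(permutations(entities, 2))  # 6 ordered pairs; <e1, e2> differs from <e2, e1>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A The Details of the CNN Baseline",
"sec_num": null
},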
{
"text": "The CNN classifier is basically the same with Zeng et al. (2014) . Each word is turned into a embedding which including it's word embedding and position embedding. After the convolution layer, we apply a maxpooling layer on it. Then we apply a two layer softmax classify layer to obtain the final results. We train the model with NLL loss.",
"cite_spans": [
{
"start": 46,
"end": 64,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A The Details of the CNN Baseline",
"sec_num": null
},
{
"text": "Specifically, the word embedding dimension is 100, the position embedding dimension is 5, we use 128 filters and the filter size is 3. The hidden layer size of softmax classifier is 100 and we use tanh as the activation function. We optimize the model with Adam optimizer (Kingma and Ba, 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A The Details of the CNN Baseline",
"sec_num": null
},
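{
"text": "A dimension walk-through of this classifier under the stated hyperparameters (a sketch; the per-word input size assumes two relative-position embeddings, one per entity, following Zeng et al. (2014)):\n\nn = 40                       # example sentence length\nemb_dim = 100 + 2 * 5        # word embedding plus two position embeddings\nnum_filters, width = 128, 3\nconv_out = (n - width + 1, num_filters)  # after the convolution layer\npooled = (num_filters,)                  # after max-pooling over time\nhidden = 100                             # tanh hidden layer of the classifier\n# The final softmax covers the relation set (24 relations for NYT) plus NA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A The Details of the CNN Baseline",
"sec_num": null
},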
{
"text": "During evaluation, if the entity pair is classified into NA relation, we will exclude this triplet. Otherwise, the triplet will be regarded as a predict triplet. If the predicted triplet is the same as one of the gold triples, it will be regarded as correct triplet. To be fair, when comparing the entities in the triplets, we only compare the last word of each entity. As long as the last word of the extract entity is the same as the gold one, we regard it as correct. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A The Details of the CNN Baseline",
"sec_num": null
},
{
"text": "We show the results of different extraction order of models with GRU cell in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "B Results of GRU Cell",
"sec_num": null
},
{
"text": "NA-triplet is a special triplet proposed inZeng et al. (2018b). It's similar to the \"eos\" symbol in neural sentence generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.nltk.org/4 We reimplement these two models and find that the attention mechanism is not important. Therefore, we report the results without applying the attention mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by the National Natural Science Foundation of China (No.61533018, No.61702512) and the independent research project of National Laboratory of Pattern Recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adversarial training for multi-context joint entity and relation extraction",
"authors": [
{
"first": "Giannis",
"middle": [],
"last": "Bekoulis",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Deleu",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Develder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018a. Adversarial training for multi-context joint entity and relation extraction.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Joint entity recognition and relation extraction as a multi-head selection problem",
"authors": [
{
"first": "Giannis",
"middle": [],
"last": "Bekoulis",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Deleu",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Develder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018b. Joint entity recogni- tion and relation extraction as a multi-head selection problem.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Exploiting syntactico-semantic structures for relation extraction",
"authors": [
{
"first": "Yee Seng",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "551--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extrac- tion. In Proceedings of ACL, pages 551-560.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the EMNLP, pages 1724-1734.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cfo: Conditional focused neural question answering with largescale knowledge bases",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "800--810",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Lei Li, and Wei Xu. 2016. Cfo: Condi- tional focused neural question answering with large- scale knowledge bases. In Proceedings of ACL, pages 800-810.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Reinforcement learning for relation classification from nosy data",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xi- aoyan Zhu. 2018. Reinforcement learning for rela- tion classification from nosy data. In Proceedings of AAAI.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Creating training corpora for nlg micro-planners",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "179--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating train- ing corpora for nlg micro-planners. In Proceedings of ACL, pages 179-188.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1631--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL, pages 1631-1640.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Table filling multi-task recurrent neural network for joint entity and relation extraction",
"authors": [
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Schtze",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Andrassy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "2537--2547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pankaj Gupta, Hinrich Schtze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural net- work for joint entity and relation extraction. In Pro- ceedings of COLING, pages 2537-2547.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep reinforcement learning with a natural language action space",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1621--1630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Li- hong Li, Li Deng, and Mari Ostendorf. 2016. Deep reinforcement learning with a natural language ac- tion space. In Proceedings of ACL, pages 1621- 1630.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: a Method for Stochastic Optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: a Method for Stochastic Optimization. In Proceed- ings of ICLR, pages 1-15.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Self-normalizing neural networks",
"authors": [
{
"first": "G\u00fcnter",
"middle": [],
"last": "Klambauer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Unterthiner",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Mayr",
"suffix": ""
},
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in NIPS",
"volume": "",
"issue": "",
"pages": "971--980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. Self-normalizing neural networks. In Advances in NIPS, pages 971- 980.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep Reinforcement Learning for Dialogue Generation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1192--1202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep Re- inforcement Learning for Dialogue Generation. In Proceedings of EMNLP, pages 1192-1202.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Incremental joint extraction of entity mentions and relations",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "402--412",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of ACL, pages 402-412.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "End-to-end relation extraction using lstms on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1105--1116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. In Proceedings of ACL, pages 1105- 1116.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Modeling joint entity and relation extraction with table representation",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Sasaki",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1858--1869",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table repre- sentation. In Proceedings of EMNLP, pages 1858- 1869.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Human-level control through deep reinforcement learning",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Silver",
"suffix": ""
},
{
"first": "Andrei",
"middle": [
"A"
],
"last": "Rusu",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Veness",
"suffix": ""
},
{
"first": "Marc",
"middle": [
"G"
],
"last": "Bellemare",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Riedmiller",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Fidjeland",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Ostrovski",
"suffix": ""
}
],
"year": 2015,
"venue": "Nature",
"volume": "518",
"issue": "7540",
"pages": "529--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas Fidjeland, Georg Ostrovski, et al. 2015. Human-level con- trol through deep reinforcement learning. Nature, 518(7540):529-533.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language understanding for textbased games using deep reinforcement learning",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tejas",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text- based games using deep reinforcement learning. In Proceedings of EMNLP, pages 1-11.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving information extraction by acquiring external evidence with reinforcement learning",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Yala",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "2355--2365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquir- ing external evidence with reinforcement learning. In Proceedings of EMNLP, pages 2355-2365.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Robust distant supervision relation extraction via deep reinforcement learning",
"authors": [
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Robust distant supervision relation extraction via deep reinforcement learning. In Proceedings of ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ECML PKDD",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In Proceedings of ECML PKDD, pages 148-163.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Classifying relations by ranking with convolutional neural networks",
"authors": [
{
"first": "Santos",
"middle": [],
"last": "Cicero Dos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "626--634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convo- lutional neural networks. In Proceedings of ACL, pages 626-634.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mastering the game of go with deep neural networks and tree search",
"authors": [
{
"first": "David",
"middle": [],
"last": "Silver",
"suffix": ""
},
{
"first": "Aja",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"J"
],
"last": "Maddison",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Guez",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Sifre",
"suffix": ""
},
{
"first": "George",
"middle": [
"Van",
"Den"
],
"last": "Driessche",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Schrittwieser",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Antonoglou",
"suffix": ""
},
{
"first": "Veda",
"middle": [],
"last": "Panneershelvam",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Lanctot",
"suffix": ""
}
],
"year": 2016,
"venue": "Nature",
"volume": "529",
"issue": "7587",
"pages": "484--489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Ju- lian Schrittwieser, Ioannis Antonoglou, Veda Pan- neershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "On-line active reward learning for policy optimisation in spoken dialogue systems",
"authors": [
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrki",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"M"
],
"last": "Rojas Barahona",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "2431--2441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pei-Hao Su, Milica Gasic, Nikola Mrki, Lina M. Ro- jas Barahona, Stefan Ultes, David Vandyke, Tsung- Hsien Wen, and Steve Young. 2016. On-line ac- tive reward learning for policy optimisation in spo- ken dialogue systems. In Proceedings of ACL, pages 2431-2441.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Extracting entities and relations with joint minimum risk training",
"authors": [
{
"first": "Changzhi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Shiliang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wenting",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kuang-Chih",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kewen",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "2256--2265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changzhi Sun, Yuanbin Wu, Man Lan, Shiliang Sun, Wenting Wang, Kuang-Chih Lee, and Kewen Wu. 2018. Extracting entities and relations with joint minimum risk training. In Proceedings of EMNLP, pages 2256-2265.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Advances in NIPS",
"authors": [
{
"first": "D",
"middle": [
"D"
],
"last": "Lawrence",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sugiyama",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in NIPS, pages 2692-2700.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Machine Learning",
"volume": "8",
"issue": "",
"pages": "229--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J Williams. 1992. Simple Statistical Gradient- Following Algorithms for Connectionist Reinforce- ment Learning. Machine Learning, 8:229-256.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Open information extraction using wikipedia",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "118--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Wu and Daniel S. Weld. 2010. Open information extraction using wikipedia. In Proceedings of ACL, pages 118-127.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Semantic relation classification via convolutional neural networks with simple negative sampling",
"authors": [
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Songfang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "536--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classifi- cation via convolutional neural networks with sim- ple negative sampling. In Proceedings of EMNLP, pages 536-540.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Classifying relations via long short term memory networks along shortest dependency paths",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunchuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1785--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks along shortest depen- dency paths. In Proceedings of EMNLP, pages 1785-1794.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Semantic parsing via staged query graph generation: Question answering with knowledge base",
"authors": [
{
"first": "Wentau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Mingwei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1321--1331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wentau Yih, Mingwei Chang, Xiaodong He, and Jian- feng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowl- edge base. In Proceedings of ACL, pages 1321- 1331.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Seqgan: Sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "2852--2858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of AAAI, pages 2852-2858.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "1399--1407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proceedings of COLING, pages 1399-1407.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "1083--1106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation ex- traction. J. Mach. Learn. Res., 3:1083-1106.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via con- volutional deep neural network. In Proceedings of COLING, pages 2335-2344.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Large scaled relation extraction with reinforcement learning",
"authors": [
{
"first": "Xiangrong",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangrong Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018a. Large scaled relation extraction with rein- forcement learning. In Proceedings of AAAI.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Extracting relational facts by an end-to-end neural model with copy mechanism",
"authors": [
{
"first": "Xiangrong",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018b. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the ACL.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "End-to-end neural relation extraction with global optimization",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1730--1740",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global op- timization. In Proceedings of EMNLP, pages 1730- 1740.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Joint extraction of entities and relations based on a novel tagging scheme",
"authors": [
{
"first": "Suncong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongyun",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Yuexing",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1227--1236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extrac- tion of entities and relations based on a novel tagging scheme. In Proceedings of ACL, pages 1227-1236.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Attentionbased bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Bingchen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongwei",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "207--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention- based bidirectional long short-term memory net- works for relation classification. In Proceedings of ACL, pages 207-212.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": ", Khartoum, capital, Sudan, Khartoum[1,1,1,0,0,0] Reward SentenceNews of the list's existence unnerved officials in Khartoum, Sudan 's capital."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Results of our model with LSTM cell under different training strategies on NYT dataset."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": ". The left part of this figure shows the performance of sentences with one triplet. The right part shows the performance of sentences with multiple triplets."
},
"TABREF2": {
"content": "<table/>",
"text": "Results of different extraction order of models with LSTM cell.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table/>",
"text": "The extraction order comparison.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table/>",
"text": "Results of different extraction order of models with GRU cell.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}