|
{ |
|
"paper_id": "D19-1041", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:13:01.280583Z" |
|
}, |
|
"title": "Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction", |
|
"authors": [ |
|
{ |
|
"first": "Rujun", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Southern", |
|
"location": { |
|
"country": "California" |
|
} |
|
}, |
|
"email": "rujunhan@isi.edu" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Illinois at Urbana-Champaign", |
|
"location": {} |
|
}, |
|
"email": "qning2@illinois.edu" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Southern", |
|
"location": { |
|
"country": "California" |
|
} |
|
}, |
|
"email": "npeng@isi.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction. The proposed method has two advantages over existing work. First, it improves event representation by allowing the event and relation modules to share the same contextualized embeddings and neural representation learner. Second, it avoids error propagation in the conventional pipeline systems by leveraging structured inference and learning methods to assign both the event labels and the temporal relation labels jointly. Experiments show that the proposed method can improve both event extraction and temporal relation extraction over state-of-the-art systems, with the end-to-end F 1 improved by 10% and 6.8% on two benchmark datasets respectively.", |
|
"pdf_parse": { |
|
"paper_id": "D19-1041", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction. The proposed method has two advantages over existing work. First, it improves event representation by allowing the event and relation modules to share the same contextualized embeddings and neural representation learner. Second, it avoids error propagation in the conventional pipeline systems by leveraging structured inference and learning methods to assign both the event labels and the temporal relation labels jointly. Experiments show that the proposed method can improve both event extraction and temporal relation extraction over state-of-the-art systems, with the end-to-end F 1 improved by 10% and 6.8% on two benchmark datasets respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure 1a illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage IN-CLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since \"Hutu\" is actually not an event, a system is expected to annotate the relations between \"Hutu\" and all other nodes in the graph as NONE (i.e., no relation).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 387, |
|
"text": "Figure 1a", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, Figure 1 : An illustration of event and relation models in our proposed joint framework. (a) is a (partial) graph of the output of the relation extraction model. \"Hutu\" is not an event and hence all relations including it should be annotated as NONE. (b) and (c) are comparisons between a pipeline model and our joint model. i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013; Ning et al., 2017; Meng and Rumshisky, 2018) . Specifically, they built end-toend systems that extract events first and then predict temporal relations between them (Fig. 1b) . In these pipeline models, event extraction errors will propagate to the relation classification step and cannot be corrected afterwards. Our first contribution is the proposal of a joint model that ex-tracts both events and temporal relations simultaneously (see Fig. 1c ). The motivation is that if we train the relation classifier with NONE relations between non-events, then it will potentially have the capability of correcting event extraction mistakes. For instance in Fig. 1a , if the relation classifier predicts NONE for (Hutu, war) with a high confidence, then this is a strong signal that can be used by the event classifier to infer that at least one of them is not an event.", |
|
"cite_spans": [ |
|
{ |
|
"start": 568, |
|
"end": 590, |
|
"text": "(Verhagen et al., 2007", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 615, |
|
"text": "(Verhagen et al., , 2010", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 616, |
|
"end": 637, |
|
"text": "UzZaman et al., 2013;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 656, |
|
"text": "Ning et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 657, |
|
"end": 682, |
|
"text": "Meng and Rumshisky, 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 104, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 803, |
|
"end": 812, |
|
"text": "(Fig. 1b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1078, |
|
"end": 1085, |
|
"text": "Fig. 1c", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1290, |
|
"end": 1297, |
|
"text": "Fig. 1a", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our second contribution is that we improve event representations by sharing the same contextualized embeddings and neural representation learner between the event extraction and temporal relation extraction modules for the first time. On top of the shared embeddings and neural representation learner, the proposed model produces a graph-structured output representing all the events and relations in the given sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A valid graph prediction in this context should satisfy two structural constraints. First, the temporal relation should always be NONE between two non-events or between one event and one nonevent. Second, for those temporal relations among events, no loops should exist due to the transitive property of time (e.g., if A is before B and B is before C, then A must be before C). The validity of a graph is guaranteed by solving an integer linear programming (ILP) optimization problem with those structural constraints, and our joint model is trained by structural support vector machines (SSVM) in an end-to-end fashion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Results show that, according to the end-to-end F 1 score for temporal relation extraction, the proposed method improves CAEVO by 10% on TB-Dense, and improves Cog-CompTime (Ning et al., 2018c) by 6.8% on MA-TRES. We further show ablation studies to confirm that the proposed joint model with shared representations and structured learning is very effective for this task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 192, |
|
"text": "(Ning et al., 2018c)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section we briefly summarize the existing work on event extraction and temporal relation extraction. To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Existing event extraction methods in the temporal relation domain, as in the TempEval3 work-shop (UzZaman et al., 2013) , all use conventional machine learning models (logistic regression, SVM, or Max-entropy) with hand-engineered features (e.g., ClearTK (Bethard, 2013) and Navy-Time (Chambers, 2013) ). While other domains have shown progress on event extraction using neural methods (Nguyen and Grishman, 2015; Nguyen et al., 2016; Feng et al., 2016) , recent progress in the temporal relation domain is focused more on the setting where gold events are provided. Therefore, we first show the performance of a neural event extractor on this task, although it is not our main contribution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 119, |
|
"text": "TempEval3 work-shop (UzZaman et al., 2013)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 270, |
|
"text": "(Bethard, 2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 301, |
|
"text": "(Chambers, 2013)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 413, |
|
"text": "(Nguyen and Grishman, 2015;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 434, |
|
"text": "Nguyen et al., 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 453, |
|
"text": "Feng et al., 2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Early attempts on temporal relation extraction use local pair-wise classification with handengineered features (Mani et al., 2006; Verhagen et al., 2007; Chambers et al., 2007; Verhagen and Pustejovsky, 2008) . Later efforts, such as ClearTK (Bethard, 2013) , UTTime (Laokulrat et al., 2013) , NavyTime (Chambers, 2013) , and CAEVO improve earlier work with better linguistic and syntactic rules. Yoshikawa et al. (2009) ; Ning et al. (2017) ; Leeuwenberg and Moens (2017) explore structured learning for this task, and more recently, neural methods have also been shown effective (Tourille et al., 2017; Cheng and Miyao, 2017; Meng et al., 2017; Meng and Rumshisky, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 130, |
|
"text": "(Mani et al., 2006;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 153, |
|
"text": "Verhagen et al., 2007;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 154, |
|
"end": 176, |
|
"text": "Chambers et al., 2007;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 208, |
|
"text": "Verhagen and Pustejovsky, 2008)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 257, |
|
"text": "(Bethard, 2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 291, |
|
"text": "(Laokulrat et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 319, |
|
"text": "(Chambers, 2013)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 420, |
|
"text": "Yoshikawa et al. (2009)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 441, |
|
"text": "Ning et al. (2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 472, |
|
"text": "Leeuwenberg and Moens (2017)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 604, |
|
"text": "(Tourille et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 627, |
|
"text": "Cheng and Miyao, 2017;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 646, |
|
"text": "Meng et al., 2017;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 672, |
|
"text": "Meng and Rumshisky, 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In practice, we need to extract both events and those temporal relations among them from raw text. All the works above treat this as two subtasks that are solved in a pipeline. To the best of our knowledge, there has been no existing work on joint event-temporal relation extraction. However, the idea of \"joint\" has been studied for entityrelation extraction in many works. Miwa and Sasaki (2014) frame their joint model as table filling tasks, map tabular representation into sequential predictions with heuristic rules, and construct global loss to compute the best joint predictions. Li and Ji (2014) define a global structure for joint entity and relation extraction, encode local and global features based on domain and linguistic knowledge. and leverage beam-search to find global optimal assignments for entities and relations. Miwa and Bansal (2016) leverage LSTM architectures to jointly predict both entity and relations, but fall short on ensuring prediction consistency. Zhang et al. (2017) combine the benefits of both neural net and global optimization with beam Figure 2 : Deep neural network architecture for joint structured learning. Note that on the structured learning layer, grey bars denote tokens being predicted as events. Edge types between events follow the same notations as in 1a. y e l = 0 (non-event), so all edges connecting to y e l are NONE. y e i = 1, y e j = 1, y e k = 1 (events) and hence edges between them are forced to be the same (y r ij = y r jk = y r ik = BEFORE in this example) by transitivity. These global assignments are input to compute the SSVM loss.", |
|
"cite_spans": [ |
|
{ |
|
"start": 375, |
|
"end": 397, |
|
"text": "Miwa and Sasaki (2014)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 836, |
|
"end": 858, |
|
"text": "Miwa and Bansal (2016)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 984, |
|
"end": 1003, |
|
"text": "Zhang et al. (2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1078, |
|
"end": 1086, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "search. Motivated by these works, we propose an end-to-end trainable neural structured support vector machine (neural SSVM) model to simultaneously extract events and their relations from text and ensure the global structure via ILP constraints. Next, we will describe in detail our proposed method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multitasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as R, all event candidates (both events and nonevents) as E, and all relation candidates as EE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Event-Relation Extraction Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our neural SSVM adapts the SSVM loss as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural SSVM", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "L = \\sum_{n=1}^{l} \\frac{C}{M_n} \\left[ \\max_{\\hat{y}^n \\in \\mathcal{Y}} \\left( 0, \\Delta(y^n, \\hat{y}^n) + \\bar{S}^n_R + C_E \\bar{S}^n_E \\right) \\right] + \\lVert \\Phi \\rVert^2, \\quad (1) where \\bar{S}^n_E = S(\\hat{y}^n_E; x^n) - S(y^n_E; x^n) and \\bar{S}^n_R = S(\\hat{y}^n_R; x^n) - S(y^n_R; x^n); \\Phi denotes the model parameters; n indexes instances; M_n = |E|^n + |EE|^n denotes the total number of event candidates |E|^n and relation candidates |EE|^n in instance n. y^n and \\hat{y}^n denote the gold and predicted global assignments of events and relations for instance n, each consisting of one-hot vectors representing the true and predicted relation labels y^n_R, \\hat{y}^n_R \\in \\{0, 1\\}^{|EE|} and event labels y^n_E, \\hat{y}^n_E \\in \\{0, 1\\}^{|E|}.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural SSVM",

"sec_num": "3.1"

},
|
{ |
|
"text": "A maximum a posteriori probability (MAP) inference is needed to find y n , which we formulate as an interger linear programming (ILP) problem and describe more details in Section 3.3. (y n ,\u0177 n ) is a distance measurement between the gold and the predicted assignments; we simply use the Hamming distance. C and C E are the hyper-parameters to balance the losses between event, relation and the regularizer, and S(y n E ; x n ), S(y n R ; x n ) are scoring functions, which we design a multi-tasking neural architecture to learn.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural SSVM", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The intuition behind the SSVM loss is that it requires the score of gold output structure y n to be greater than the score of the best output structure under the current model\u0177 n with a margin (y n ,\u0177 n ) 1 or else there will be some loss. The training objective is to minimize the loss.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural SSVM", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The major difference between our neural-SSVM and the traditional SSVM model is the scoring function. Traditional SSVM uses a linear function over hand-crafted features to compute the scores, whereas we propose to use a recurrent neural network to estimate the scoring function and train the entire architecture end-to-end.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural SSVM", |
|
"sec_num": "3.1" |
|
}, |
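
{

"text": "As a concrete illustration, the following is a minimal PyTorch-style sketch of the structured hinge loss in Equation 1 for a single instance (hypothetical code, not the authors' implementation; the MAP inference step map_inference is assumed to be provided by the ILP inference described in Section 3.3):\n\nimport torch\n\ndef ssvm_loss(gold_e, gold_r, scores_e, scores_r, map_inference, C=1.0, C_E=1.0):\n    # gold_e: (|E|,) gold event labels in {0, 1}; gold_r: (|EE|,) gold relation labels in R\n    # scores_e: (|E|, 2) event scores; scores_r: (|EE|, |R|) relation scores\n    pred_e, pred_r = map_inference(scores_e, scores_r, C_E)  # best global assignment under the current model\n    # Hamming distance Delta(y, y_hat) between gold and predicted assignments\n    delta = (pred_e != gold_e).sum() + (pred_r != gold_r).sum()\n    # Score differences S(y_hat; x) - S(y; x) for events and relations\n    s_e = (scores_e.gather(1, pred_e.unsqueeze(1)) - scores_e.gather(1, gold_e.unsqueeze(1))).sum()\n    s_r = (scores_r.gather(1, pred_r.unsqueeze(1)) - scores_r.gather(1, gold_r.unsqueeze(1))).sum()\n    m_n = gold_e.numel() + gold_r.numel()\n    # Hinge: zero loss when the gold structure beats the best prediction by the margin\n    return (C / m_n) * torch.clamp(delta + s_r + C_E * s_e, min=0.0)\n\nThe L2 penalty on the parameters is typically added via the optimizer's weight decay.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural SSVM",

"sec_num": "3.1"

},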
|
{ |
|
"text": "The recurrent neural network (RNN) architecture has been widely adopted by prior temporal extraction work to encode context information (Tourille et al., 2017; Cheng and Miyao, 2017; Meng et al., 2017) . Motivated by these works, we adopt a RNN-based scoring function for both event and relation prediction in order to learn features in a data driven way and capture long-term contexts in the input. In Fig. 2 , we skip the input layer for simplicity. 2 The bottom layer corresponds to contextualized word representations denoted as v k . We use (i, j) 2 EE to denote a candidate relation and i 2 E to indicate a candidate event in the input sentences of length N. We fix word embeddings computed by a pre-trained BERT-base model (Devlin et al., 2018) . They are then fed into a BiLSTM layer to further encode task-specific contextual information. Both event and relation tasks share this layer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 159, |
|
"text": "(Tourille et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 182, |
|
"text": "Cheng and Miyao, 2017;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 201, |
|
"text": "Meng et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 453, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 751, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 403, |
|
"end": 409, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Tasking Neural Scoring Function", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The event scorer is illustrated by the left two branches following the BiLSTM layer. We simply concatenate both forward and backward hidden vectors to encode the context of each token. As for the relation scorer shown in the right branches, for each pair (i, j) we take the forward and backward hidden vectors corresponding to them, f i , b i , f j , b j , and concatenate them with linguistic features as in previous event relation prediction research. We denote linguistic features as L i,j and only use simple features provided in the original datasets: token distance, tense, and polarity of events.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Tasking Neural Scoring Function", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Finally, all hidden vectors and linguistic features are concatenated to form the input to compute the probability of being an event or a softmax distribution over all possible relation labelswhich we refer to as the RNN-based scoring function in the following sections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Tasking Neural Scoring Function", |
|
"sec_num": "3.2" |
|
}, |
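
{

"text": "A minimal sketch of the shared scoring architecture described above (assuming fixed, precomputed BERT embeddings; the module names, dimensions, and the three linguistic features are illustrative, not the authors' code):\n\nimport torch\nimport torch.nn as nn\n\nclass JointScorer(nn.Module):\n    # Shared BiLSTM encoder with separate event / relation MLP scorers\n    def __init__(self, emb_dim=768, hidden=128, n_rels=7, feat_dim=3):\n        super().__init__()\n        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)\n        # Event scorer: concatenated forward/backward states of one token\n        self.event_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))\n        # Relation scorer: states of both tokens plus linguistic features L_{i,j}\n        self.rel_mlp = nn.Sequential(nn.Linear(4 * hidden + feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_rels))\n\n    def forward(self, bert_emb, pairs, ling_feats):\n        # bert_emb: (1, N, emb_dim) fixed BERT token embeddings v_k\n        h, _ = self.bilstm(bert_emb)  # (1, N, 2 * hidden), shared by both tasks\n        event_scores = self.event_mlp(h[0])  # (N, 2)\n        i, j = pairs[:, 0], pairs[:, 1]  # indices of candidate pairs (i, j) in EE\n        pair_repr = torch.cat([h[0, i], h[0, j], ling_feats], dim=-1)\n        rel_scores = self.rel_mlp(pair_repr)  # (num_pairs, |R|)\n        return event_scores, rel_scores",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multi-Tasking Neural Scoring Function",

"sec_num": "3.2"

},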
|
{ |
|
"text": "A MAP inference is needed both during training to obtain\u0177 n in the loss function (Equation 1), as well as during the test time to get globally coherent assignments. We formulate the inference problem as an ILP problem. The inference framework is established by constructing a global objective function using scores from local scorers and imposing several global constraints: 1) one-label assignment, 2) event-relation consistency, and 3) symmetry and transitivity as in Bramsen et al. (2006) ; Chambers and Jurafsky (2008) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 470, |
|
"end": 491, |
|
"text": "Bramsen et al. (2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 522, |
|
"text": "Chambers and Jurafsky (2008)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MAP Inference", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The objective function of the global inference is to find the global assignment that has the highest probability under the current model, as specified in Equation 2:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective Function", |
|
"sec_num": "3.3.1" |
|
}, |
|
{

"text": "\\hat{y} = \\arg\\max \\sum_{(i,j) \\in EE} \\sum_{r \\in R} y^r_{i,j} S(y^r_{i,j}, x) + C_E \\sum_{k \\in E} \\sum_{e \\in \\{0,1\\}} y^e_k S(y^e_k, x), \\quad (2) \\quad s.t. \\; y^r_{i,j}, y^e_k \\in \\{0, 1\\}, \\quad \\sum_{r \\in R} y^r_{i,j} = 1, \\quad \\sum_{e \\in \\{0,1\\}} y^e_k = 1,",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Objective Function",

"sec_num": "3.3.1"

},
|
{ |
|
"text": "where y e k is a binary indicator of whether the kth candidate is an event or not, and y r i,j is a binary indicator specifying whether the global prediction of the relation between (i, j) is r 2 R. S(y e k , x), 8e 2 {0, 1} and S(y r i,j , x), 8r 2 R are the scoring functions obtained from the event and relation scoring functions, respectively. The output of the global inference\u0177 is a collection of optimal label assignments for all events and relation candidates in a fixed context. C E is a hyper-parameter controlling weights between relation and event. The constraint that follows immediately from the objective function is that the global inference should only assign one label for all entities and relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objective Function", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "We introduce several additional constraints to ensure the resulting optimal output graph forms a valid and plausible event graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Event-Relation Consistency. Event and relation prediction consistency is defined with the following property: a pair of input tokens have a positive temporal relation if and only if both tokens are events. The following global constraints will satisfy this property,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "8(i, j) 2 EE, e P i r P i,j , e P j r P i,j and e N i + e N j r N i,j", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "where e P i denotes an event and e N i denotes a nonevent token. r P i,j indicates positive relations: BE-FORE, AFTER, SIMULTANEOUS, INCLUDES, IS INCLUDED, VAGUE and r N i,j indicate a negative relation, i.e., NONE. A formal proof of this property can be found in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Symmetry and Transitivity Constraint. We also explore the symmetry and transitivity constraints of relations. They are specified as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "8(i, j), (j, k) 2 EE, y r i,j = yr j,i , (symmetry) y r 1 i,j + y r 2 j,k X r 3 2T rans(r 1 ,r 2 ) y r 3 i,k \uf8ff 1, (transitivity)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Intuitively, the symmetry constraint forces two pairs of events with flipping orders to have reversed relations. For example, if r i,j = BEFORE, then r j,i = AFTER. The transitivity constraint rules that if (i, j), (j, k) and (i, k) pairs exist in the graph, the label (relation) prediction of (i, k) pair has to fall into the transitivity set specifyed by (i, j) and (j, k) pairs. The full transitivity table can be found in Ning et al. (2018a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 426, |
|
"end": 445, |
|
"text": "Ning et al. (2018a)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints", |
|
"sec_num": "3.3.2" |
|
}, |
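
{

"text": "The inference above maps directly onto an off-the-shelf ILP solver. Below is a minimal Gurobi sketch covering the objective of Equation 2, the one-label constraints, and the event-relation consistency constraints (illustrative only; the score dictionaries are assumed inputs, and the symmetry and transitivity constraints over pairs and triples would be added analogously with m.addConstr):\n\nimport gurobipy as gp\nfrom gurobipy import GRB\n\ndef map_inference(event_scores, rel_scores, pairs, C_E=1.0):\n    # event_scores: {k: [s_nonevent, s_event]}; rel_scores: {(i, j): {r: score}} with \"NONE\" among the labels r\n    m = gp.Model(\"joint-inference\")\n    e = {(k, v): m.addVar(vtype=GRB.BINARY) for k in event_scores for v in (0, 1)}\n    y = {(i, j, r): m.addVar(vtype=GRB.BINARY) for (i, j) in pairs for r in rel_scores[(i, j)]}\n    m.setObjective(\n        gp.quicksum(rel_scores[(i, j)][r] * y[i, j, r] for (i, j, r) in y)\n        + C_E * gp.quicksum(event_scores[k][v] * e[k, v] for (k, v) in e),\n        GRB.MAXIMIZE,\n    )\n    for k in event_scores:  # one-label assignment for events\n        m.addConstr(e[k, 0] + e[k, 1] == 1)\n    for (i, j) in pairs:  # one-label assignment and event-relation consistency\n        m.addConstr(gp.quicksum(y[i, j, r] for r in rel_scores[(i, j)]) == 1)\n        pos = gp.quicksum(y[i, j, r] for r in rel_scores[(i, j)] if r != \"NONE\")\n        m.addConstr(e[i, 1] >= pos)  # e^P_i >= r^P_{i,j}\n        m.addConstr(e[j, 1] >= pos)  # e^P_j >= r^P_{i,j}\n        m.addConstr(e[i, 0] + e[j, 0] >= y[i, j, \"NONE\"])  # e^N_i + e^N_j >= r^N_{i,j}\n    m.optimize()\n    pred_e = {k: int(e[k, 1].X > 0.5) for k in event_scores}\n    pred_r = {(i, j): max(rel_scores[(i, j)], key=lambda r: y[i, j, r].X) for (i, j) in pairs}\n    return pred_e, pred_r",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Constraints",

"sec_num": "3.3.2"

},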
|
{ |
|
"text": "We begin by experimenting with optimizing SSVM loss directly, but model performance degrades. 3 Therefore, we develop a two-state learning approach which first trains a pipeline version of the joint model without feedback from global constraints. In other words, the local neural scoring functions are optimized with cross-entropy loss using gold events and relation candidates that are constructed directly from the outputs of the event model. During the second stage, we switch to the global SSVM loss function in Equation 1 and re-optimize the network to adjust for global properties. We will provide more details in Section 4.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 95, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.4" |
|
}, |
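
{

"text": "A compact sketch of this two-stage schedule (a hypothetical training-loop skeleton; model, the batch fields, ssvm_loss, and map_inference follow the sketches above):\n\nimport torch\nimport torch.nn.functional as F\n\ndef train_two_stage(model, data, map_inference, epochs_ce=20, epochs_ssvm=10, lr=1e-3):\n    opt = torch.optim.Adam(model.parameters(), lr=lr)\n    for epoch in range(epochs_ce + epochs_ssvm):\n        for batch in data:\n            scores_e, scores_r = model(batch.bert_emb, batch.pairs, batch.feats)\n            if epoch < epochs_ce:\n                # Stage 1: local cross-entropy losses, no feedback from global constraints\n                loss = F.cross_entropy(scores_e, batch.gold_e) + F.cross_entropy(scores_r, batch.gold_r)\n            else:\n                # Stage 2: re-optimize with the global SSVM loss (Equation 1)\n                loss = ssvm_loss(batch.gold_e, batch.gold_r, scores_e, scores_r, map_inference)\n            opt.zero_grad()\n            loss.backward()\n            opt.step()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning",

"sec_num": "3.4"

},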
|
{ |
|
"text": "In this section we describe implementation details of the baselines and our four models to build an end-to-end event temporal relation extraction system with an emphasis on the structured joint model. In Section 6 we will compare and contrast them and show why our proposed structured joint model works the best.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We run two event and relation extraction systems, CAEVO 4 and Cog-CompTime 5 (Ning et al., 2018c) , on TB-Dense and MATRES, respectively. These two methods both leverage conventional learning algorithms (i.e., MaxEnt and averaged perceptron, respectively) based on manually designed features to obtain separate models for events and temporal relations, and conduct end-to-end relation extraction as a pipeline. Note does not report event and end-to-end temporal relation extraction performances, so we calculate the scores per our implementation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 97, |
|
"text": "(Ning et al., 2018c)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Multi-Task Model. This is the same as the single-task model except that the BiLSTM layer is now shared for both event and relation tasks. Note that both single-task and multi-task models are not trained to tackle the NONE relation directly. They both rely on the predictions of the event model to annotate relations as either positive pairs or NONE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Pipeline Joint Model. This shares the same architecture as the multi-task model, except that during training, we use the predictions of the event model to construct relation candidates to train the relation model. This strategy will generate NONE pairs during training if one argument of the relation candidate is not an event. These NONE pairs will help the relation model to distinguish negative relations from positive ones, and thus become more robust to event prediction errors. We train this model with gold events and relation candidates during the first several epochs in order to obtain a relatively accurate event model and switch to a pipeline version afterwards inspired by Miwa and Bansal (2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 686, |
|
"end": 708, |
|
"text": "Miwa and Bansal (2016)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Structured Joint Model. This is described in detail in Section 3. However, we experience difficulties in training the model with SSVM loss from scratch. This is due to large amounts of non-event tokens, and the model is not capable of distinguishing them in the beginning. We thus adopt a two-stage learning procedure where we take the best pipeline joint model and re-optimize it with the SSVM loss.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To restrict the search space for events in the ILP inference of the SSVM loss, we use the predicted probabilities from the event detection model to filter out non-events since the event model has a strong performance, as shown in Section 6. Note that this is very different from the pipeline model where events are first predicted and relations are constructed with predicted events. Here, we only leverage an additional hyper-parameter T evt to filter out highly unlikely event candidates. Both event and relation labels are assigned simutaneously during the global inference with ILP, as specified in Section 3.3. We also filter out tokens with POS tags that do not appear in the training set as most of the events are either nouns or verbs in TB-Dense, and all events are verbs in MATRES.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
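
{

"text": "For concreteness, the candidate filtering could look like the following sketch (T_evt is the hyper-parameter above, with an illustrative default value; train_pos_tags is assumed to be the set of POS tags observed for gold events in training):\n\ndef filter_event_candidates(pos_tags, event_probs, train_pos_tags, T_evt=0.1):\n    keep = []\n    for k, pos in enumerate(pos_tags):\n        if pos not in train_pos_tags:  # mostly nouns/verbs in TB-Dense; verbs only in MATRES\n            continue\n        if event_probs[k] < T_evt:  # drop highly unlikely event candidates\n            continue\n        keep.append(k)\n    return keep\n\nOnly the surviving tokens enter the ILP as event variables; their labels are still decided jointly with the relations.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Baselines",

"sec_num": "4.1"

},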
|
{ |
|
"text": "Hyper-Parameters. All single-task, multi-task and pipeline joint models are trained by minimizing cross-entropy loss. We observe that model performances vary significantly with dropout ratio, hidden layer dimensions of the BiLSTM model and entity weight in the loss function (with relation weight fixed at 1.0). We leverage a pretrained BERT model to compute word embedding 6 and all MLP scoring functions have one hidden layer. 7 In the SSVM loss function, we fix the value of C = 1, but fine-tune C E in the objective function in Equation 2. Hyper-parameters are chosen using a standard development set for TB-Dense and a random holdout-set based on an 80/20 split of training data for MATRES. To solve ILP in the inference process, we leverage an off-theshelf solver provided by Gurobi optimizer; i.e. the best solutions from the Gurobi optimizer are inputs to the global training. The best combination of hyper-parameters can be found in Table 9 in our appendix. 8", |
|
"cite_spans": [ |
|
{ |
|
"start": 429, |
|
"end": 430, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 942, |
|
"end": 949, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In this section we first provide a brief overview of temporal relation data and describe the specific datasets used in this paper. We also explain the evaluation metrics at the end.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Temporal relation corpora such as TimeBank (Pustejovsky et al., 2003) and RED (O'Gorman et al., 2016) facilitate the research in temporal relation extraction. The common issue in these corpora is missing annotations. Collecting densely 6 We use a pre-trained BERT-Base model with 768 hidden size, 12 layers, 12 heads implemented by https://github.com/huggingface/ pytorch-pretrained-BERT 7 Let H, K denotes the dimension of (concatenated) vector from BiLSTM and number of output classes. MLP layer consists of |H| \u21e4 |K| + |K| \u21e4 |K| parameters 8 PyTorch code will be made available upon acceptance. annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task as annotators could easily overlook some facts (Bethard et al., 2007; Ning et al., 2017) , which made both modeling and evaluation extremely difficult in previous event temporal relation research. The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task Ning et al., 2017; Cheng and Miyao, 2017; Meng and Rumshisky, 2018) . Recent data construction efforts such as MATRES (Ning et al., 2018a) further enhance the data quality by using a multi-axis annotation scheme and adopting a startpoint of events to improve inter-annotator agreements. We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 69, |
|
"text": "(Pustejovsky et al., 2003)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 237, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 786, |
|
"text": "(Bethard et al., 2007;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 787, |
|
"end": 805, |
|
"text": "Ning et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1097, |
|
"end": 1115, |
|
"text": "Ning et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1116, |
|
"end": 1138, |
|
"text": "Cheng and Miyao, 2017;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1139, |
|
"end": 1164, |
|
"text": "Meng and Rumshisky, 2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1215, |
|
"end": 1235, |
|
"text": "(Ning et al., 2018a)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1475, |
|
"end": 1482, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Temporal Relation Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To be consistent with previous research, we adopt two different evaluation metrics. The first one is the standard micro-average scores. For densely annotated data, the micro-average metric should share the same precision, recall and F1 scores. However, since our joint model includes NONE pairs, we follow the convention of IE tasks and exclude them from evaluation. The second one is similar except that we exclude both NONE and VAGUE pairs following (Ning et al., 2018c) . Please refer to Figure 4 in the appendix for a visualizations of the two metrics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 452, |
|
"end": 472, |
|
"text": "(Ning et al., 2018c)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 491, |
|
"end": 499, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.2" |
|
}, |
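
{

"text": "A small sketch of the two metrics (assuming aligned lists of gold and predicted labels over all candidate pairs; pass excluded=(\"NONE\", \"VAGUE\") for the second metric):\n\ndef micro_prf(gold, pred, excluded=(\"NONE\",)):\n    tp = sum(1 for g, p in zip(gold, pred) if g == p and g not in excluded)\n    n_pred = sum(1 for p in pred if p not in excluded)\n    n_gold = sum(1 for g in gold if g not in excluded)\n    precision = tp / n_pred if n_pred else 0.0\n    recall = tp / n_gold if n_gold else 0.0\n    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n    return precision, recall, f1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Metrics",

"sec_num": "5.2"

},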
|
{ |
|
"text": "The main results of this paper can be found in Table 2. All best-recall and F1 scores are achieved by our structured joint model, and the results outperform the baseline systems by 10.0% and 6.8% on end-to-end relation extraction per F1 scores and Table 3 : Further ablation studies on event and relation extractions. Relation (G) denotes train and evaluate using gold events to compose relation candidates, whereas Relation (E) means end-to-end relation extraction. \u2020 is the event extraction and pipeline relation extraction F1 scores for CAEVO . 57.0 \u2021 is the best previously reported micro-average score for temporal relation extraction based on gold events by Meng and Rumshisky (2018) . All MATRES baseline results are provided by Ning et al. (2018c) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 664, |
|
"end": 689, |
|
"text": "Meng and Rumshisky (2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 736, |
|
"end": 755, |
|
"text": "Ning et al. (2018c)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 255, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "3.5% and 2.6% on event extraction per F1 scores. The best precision score for the TB-Dense dataset is achieved by CAEVO, which indicates that the linguistic rule-based system can make highly precise predictions by being conservative. Table 3 shows a more detailed analysis, in which we can see that our single-task models with BERT embeddings and a BiLSTM encoder already outperform the baseline systems on end-toend relation extraction tasks by 4.9% and 4.4% respectively. In the following sections we discuss step-by-step improvement by adopting multi-task, pipeline joint, and structured joint models on endto-end relation extraction, event extraction, and relation extraction on gold event pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 241, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "TB-Dense. The improvements over the singletask model per F1 score are 4.1% and 4.2% for the multi-task and pipeline joint model respectively. This indicates that the pipeline joint model is helpful only marginally. Table 4 shows that the structured joint model improves both precision and recall scores for BEFORE and AFTER and achieves the best end-to-end relation extraction performance at 49.4%-which outperforms the baseline system by 10.0% and the single-task model by 5.1%.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 222, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "End-to-End Relation Extraction", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Compared to the single-task model, the multi-task model improves F1 scores by 1.5%, while the pipeline joint model improves F1 scores by 1.3%-which means that pipeline joint training does not bring any gains for MATRES. The structured joint model reaches the best end-to-end F1 score at 59.6%, which outperforms the baseline system by 6.8% and the single-task model by 2.4%. We speculate that the gains come from the joint model's ability to help deal with NONE pairs, since recall scores for BEFORE and AFTER increase by 1.5% and 1.1% respectively (Table 10 in our appendix).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MATRES.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "TB-Dense. Our structured joint model outperforms the CAEVO baseline by 3.5% and the single-task model by 1.3%. Improvements on event extraction can be difficult because our single-task model already works quite well with a close-to 89% F1 score, while the inter-annotator agreement for events in TimeBank documents is merely 87% (UzZaman et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 325, |
|
"end": 351, |
|
"text": "87% (UzZaman et al., 2013)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction", |
|
"sec_num": "6.2" |
|
}, |
|
{

"text": "MATRES. The structured model outperforms the baseline model and the single-task model by 2.6% and 0.9%, respectively. However, we observe that the multi-task model has a slight drop in event extraction performance compared with the single-task model (86.4% vs. 86.9%). This indicates",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Extraction",

"sec_num": "6.2"

},

{

"text": "[Table 4: Model performance breakdown for TB-Dense, as precision/recall/F1 for CAEVO, the pipeline joint model, and the structured joint model. BEFORE (B): 41.4/19.5/26.5, 59.0/46.9/52.3, 59.8/46.9/52.6. AFTER (A): 42.1/17.5/24.7, 69.3/45.3/54.8, 71.9/46.7/56.6. INCLUDES (I): 50.0/3.6/6.7, -, -. IS INCLUDED (II): 38.5/9.4/15.2, -, -. SIMULTANEOUS (S): 14.3/4.5/6.9, -, -. VAGUE (V): 44.9/59.4/51.1, 45.1/55.0/49.5, 45.9/55.8/50.4. Avg: 43.8/35.7/39.4, 51.5/45.9/48.5, 52.6/46.5/49.4. \"-\" indicates that no predictions were made for that particular label, probably due to the small size of the training sample.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Extraction",

"sec_num": "6.2"

},
|
{ |
|
"text": "that incorporating relation signals are not particularly helpful for event extraction on MATRES. We speculate that one of the reasons could be the unique event characteristics in MATERS. As we described in Section 5.1, all events in MATRES are verbs. It is possible that a more concentrated single-task model works better when events are homogeneous, whereas a multi-task model is more powerful when we have a mixture of event types, e.g., both verbs and nouns as in TB-Dense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "TB-Dense. There is much prior work on relation extraction based on gold events in TB-Dense. Meng and Rumshisky (2018) proposed a neural model with global information that achieved the best results as far as we know. The improvement of our single-task model over that baseline is mostly attributable to the adoption of BERT embedding. We show that sharing the LSTM layer for both events and relations can help further improve performance of the relation classification task by 2.6%. For the joint models, since we do not train them on gold events, the evaluation would be meaningless. We simply skip this evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 117, |
|
"text": "Meng and Rumshisky (2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relation Extraction with Gold Events", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "MATRES. Both single-task and multi-task models outperform the baseline by nearly 10%, while the improvement of multi-task over single task is marginal. In MATRES, a relation pair is equivalent to a verb pair, and thus the event prediction task probably does not provide much more information for relation extraction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relation Extraction with Gold Events", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "In Table 4 we further show the breakdown performances for each positive relation on TB-Dense. The breakdown on MATRES is shown in Table 10 in the appendix. BEFORE, AFTER and VAGUE are the three dominant label classes in TB-Dense. We observe that the linguistic rule-based model, CAEVO, tends to have a more evenly spread-out performance, whereas our neural network-based models are more likely to have concentrated predictions due to the imbalance of the training sample across different label classes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 138, |
|
"text": "Table 10", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Relation Extraction with Gold Events", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Label Imbalance. One way to mitigate the label imbalance issue is to increase the sample weights for small classes during model training. We investigate the impact of class weights by refitting our single-task model with larger weights on INCLUDES, IS INCLUDED and VAGUE in the cross-entropy loss. Figure 3 shows that increasing class weights up to 4 times can significantly improve the F1 scores of INCLUDES and IS INCLUDED classes with a decrease less than 2% for the overall F1 score. Performance of INCLUDES and IS INCLUDED eventually degrades when class weights are too large. These results seem to suggest that more labels are needed in order to improve the performance on both of these two classes and the overall model. For SIMULTANEOUS, our model does not make any correct predictions for both TB-DENSE and MATRES by increasing class weight up to 10 times, which implies that SIMULTANEOUS could be a hard temporal relation to predict in general.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 306, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.4" |
|
}, |
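
{

"text": "This reweighting amounts to passing per-class weights to the cross-entropy loss; a minimal sketch (the label order and the 4x weights are illustrative):\n\nimport torch\nimport torch.nn as nn\n\n# Upweight the rare classes INCLUDES, IS INCLUDED, and VAGUE (here by 4x)\nlabels = [\"BEFORE\", \"AFTER\", \"INCLUDES\", \"IS_INCLUDED\", \"SIMULTANEOUS\", \"VAGUE\"]\nweights = torch.tensor([1.0, 1.0, 4.0, 4.0, 1.0, 4.0])\ncriterion = nn.CrossEntropyLoss(weight=weights)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "6.4"

},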
|
{ |
|
"text": "Global Constraints. In Table 6 we conduct an ablation study to understand the contributions from the event-relation prediction consis- Table 6 : Ablation Study on Global Constraints tency constraint and the temporal relation transitivity constraint for the structured joint model. As we can see, the event-relation consistency helps improve the F1 scores by 0.9% and 1% for TB-Dense and MATRES, respectively, but the gain by using transitivity is either non-existing or marginal. We hypothesize two potential reasons: 1) We leveraged BERT contextualized embedding as word representation, which could tackle transitivity in the input context; 2) NONE pairs could make transitivity rule less useful, as positive pairs can be predicted as NONE and transitivity rule does not apply to NONE pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 30, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 135, |
|
"end": 142, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "Error Analysis. By comparing gold and predicted labels for events and temporal relations and examining predicted probabilities for events, we identified three major sources of mistakes made by our structured model, as illustrated in Table 7 with examples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 240, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "Type 1. Both events in Ex 1 are assigned low scores by the event module (<< 0.01). Although the structured joint model is designed to predict events and relations jointly, we leverage the event module to filter out tokens with scores lower than a threshold. Consequently, some true events can be mistakenly predicted as non-events, and the relation pairs including them are automatically assigned NONE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "Type 2. In Ex 2 the event module assigns high scores to tokens happened (0.97) and according (0.89), but according is not an event. When the structured model makes inference jointly, the decision will weigh heavily towards assigning 1 (event) to both tokens. With the event-relation consistency constraint, this pair is highly likely to be predicted as having a positive temporal relation. Nearly all mistakes made in this category follow the same pattern illustrated by this example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "Type 3. The existence of VAGUE makes temporal relation prediction challenging as it can be easily confused with other temporal relations, as shown in Ex 3. This challenge is compounded with NONE in our end-to-end extraction task. Type 1 and Type 2 errors suggest that building a stronger event detection module will be helpful for happened on board the Mavi Marmara was \"unintentional\" ... , according to the statement. Type 3: VAGUE relation 87 pairs Ex 3. Microsoft said it has identified 3 companies for the China program to run through June. The company gives each participating startup $ 20,000 to create ... Table 7 : Error Types Based on MATRES Test Data both event and temporal relation extraction tasks. To improve the performance on VAGUE pairs, we could either build a stronger model that incorporates both contextual information and commonsense knowledge or create datasets with annotations that better separate VAGUE from other positive temporal relations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 614, |
|
"end": 621, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "In this paper we investigate building an end-to-end event temporal relation extraction system. We propose a novel neural structured prediction model with joint representation learning to make predictions on events and relations simultaneously; this can avoid error propagation in previous pipeline systems. Experiments and comparative studies on two benchmark datasets show that the proposed model is effective for end-to-end event temporal relation extraction. Specifically, we improve the performances of previously published systems by 10% and 6.8% on the TB-Dense and MATRES datasets, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Future research can focus on creating more robust structured constraints between events and relations, especially considering event types, to improve the quality of global assignments using ILP. Since a better event model is generally helpful for relation extraction, another promising direction would be to incorporate multiple datasets to enhance the performance of our event extraction systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Note that if the best prediction is the same as the gold structure, the margin is zero; there will be no loss.2 Following the convention of event relation prediction literature Ning et al., 2018a,b), we only consider event pairs that occur in the same or neighboring sentences, but the architecture can be easily adapted to the case where inputs are longer than two sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We leave further investigation for future work.4 https://www.usna.edu/Users/cs/ nchamber/caevo/ 5 http://cogcomp.org/page/publication_ view/8444.2 End-to-End Event Temporal Relation ExtractionSingle-Task Model. The most basic way to build an end-to-end system is to train separate event detection and relation prediction models with gold labels, as we mentioned in our introduction. In other words, the BiLSTM layer is not shared as inFig. 2. During evaluation and test time, we use the outputs from the event detection model to construct relation candidates and apply the relation prediction model to make the final prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work is supported in part by Contracts W911NF-15-1-0543 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Cleartk-TimeML: A minimalist approach to TempEval", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation (Se-mEval 2013)", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "10--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Bethard. 2013. Cleartk-TimeML: A minimalist approach to TempEval 2013. In Second Joint Con- ference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh In- ternational Workshop on Semantic Evaluation (Se- mEval 2013), pages 10-14. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Timelines from text: Identification of syntactic temporal relations", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Klingenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the International Conference on Semantic Computing, ICSC '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--18", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICSC.2007.101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Bethard, James H. Martin, and Sara Klingen- stein. 2007. Timelines from text: Identification of syntactic temporal relations. In Proceedings of the International Conference on Semantic Computing, ICSC '07, pages 11-18, Washington, DC, USA. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Yoong Keok Lee, and Regina Barzilay", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Bramsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pawan", |
|
"middle": [], |
|
"last": "Deshpande", |
|
"suffix": "" |
|
},

{

"first": "Yoong Keok",

"middle": [],

"last": "Lee",

"suffix": ""

},

{

"first": "Regina",

"middle": [],

"last": "Barzilay",

"suffix": ""

}
|
], |
|
"year": 2006, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Bramsen, Pawan Deshpande, Yoong Keok Lee, and Regina Barzilay. 2006. Inducing temporal graphs. In EMNLP, Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An annotation framework for dense event ordering", |
|
"authors": [ |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Cassidy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Mcdowell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "501--506", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-2082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An annotation frame- work for dense event ordering. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 501-506. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Navytime: Event and time ordering from raw text", |
|
"authors": [ |
|
{ |
|
"first": "Nate", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "73--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nate Chambers. 2013. Navytime: Event and time or- dering from raw text. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 73-77, Atlanta, Georgia, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Dense event ordering with a multi-pass architecture", |
|
"authors": [ |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Cassidy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Mcdowell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Jointly combining implicit constraints improves temporal ordering", |
|
"authors": [ |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. In EMNLP, Honolulu, United States.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Classifying temporal relations between events", |
|
"authors": [ |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathanael Chambers, Shan Wang, and Dan Juraf- sky. 2007. Classifying temporal relations between events. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstra- tion Sessions, ACL '07, pages 173-176, Strouds- burg, PA, USA. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Classifying temporal relations by bidirectional LSTM over dependency paths", |
|
"authors": [ |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fei Cheng and Yusuke Miyao. 2017. Classifying tem- poral relations by bidirectional LSTM over depen- dency paths. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), volume 2, pages 1-6.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Predicting globally-coherent temporal structures from texts via endpoint inference and graph decomposition", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "IJ-CAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Denis and Philippe Muller. 2011. Predicting globally-coherent temporal structures from texts via endpoint inference and graph decomposition. In IJ- CAI, Barcelone, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Joint inference for event timeline construction", |
|
"authors": [ |
|
{ |
|
"first": "Quang Xuan", |
|
"middle": [], |
|
"last": "Do", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quang Xuan Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In EMNLP, Jeju, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A languageindependent neural network for event detection", |
|
"authors": [ |
|
{ |
|
"first": "Xiaocheng", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifu", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duyu", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-2011" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. A language- independent neural network for event detection. pages 66-71.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "UTTime: Temporal relation classification using deep syntactic features", |
|
"authors": [ |
|
{ |
|
"first": "Natsuda", |
|
"middle": [], |
|
"last": "Laokulrat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshimasa", |
|
"middle": [], |
|
"last": "Tsuruoka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takashi", |
|
"middle": [], |
|
"last": "Chikayama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "88--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Natsuda Laokulrat, Makoto Miwa, Yoshimasa Tsu- ruoka, and Takashi Chikayama. 2013. UTTime: Temporal relation classification using deep syntactic features. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Pro- ceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 88-92, Atlanta, Georgia, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Structured learning for temporal relation extraction from clinical records", |
|
"authors": [ |
|
{ |
|
"first": "Artuur", |
|
"middle": [], |
|
"last": "Leeuwenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Francine", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1150--1158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Artuur Leeuwenberg and Marie-Francine Moens. 2017. Structured learning for temporal relation ex- traction from clinical records. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 1150-1158.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Incremental joint extraction of entity mentions and relations", |
|
"authors": [ |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "402--412", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-1038" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qi Li and Heng Ji. 2014. Incremental joint extrac- tion of entity mentions and relations. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 402-412, Baltimore, Maryland. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Machine learning of temporal relations", |
|
"authors": [ |
|
{ |
|
"first": "Inderjeet", |
|
"middle": [], |
|
"last": "Mani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Verhagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Wellner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chong", |
|
"middle": [ |
|
"Min" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "753--760", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1220175.1220270" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee, and James Pustejovsky. 2006. Ma- chine learning of temporal relations. In Proceedings of the 21st International Conference on Compu- tational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pages 753-760, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Context-Aware neural model for temporal information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Yuanliang", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rumshisky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuanliang Meng and Anna Rumshisky. 2018. Context- Aware neural model for temporal information ex- traction. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Temporal information extraction for question answering using syntactic dependencies in an lstm-based architecture", |
|
"authors": [ |
|
{ |
|
"first": "Yuanliang", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rumshisky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Romanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "887--896", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuanliang Meng, Anna Rumshisky, and Alexey Ro- manov. 2017. Temporal information extraction for question answering using syntactic dependencies in an lstm-based architecture. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 887-896.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "End-to-end relation extraction using LSTMs on sequences and tree structures", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1105--1116", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1105" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin, Germany. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Modeling joint entity and relation extraction with table representation", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yutaka", |
|
"middle": [], |
|
"last": "Sasaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1858--1869", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1200" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table repre- sentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858-1869, Doha, Qatar. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Joint event extraction via recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Thien Huu", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "300--309", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1034" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thien Huu Nguyen, Kyunghyun Cho, and Ralph Gr- ishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, pages 300-309, San Diego, California. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Event detection and domain adaptation with convolutional neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Thien Huu", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A structured learning approach to temporal relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhili", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiang Ning, Zhili Feng, and Dan Roth. 2017. A struc- tured learning approach to temporal relation extrac- tion. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Joint reasoning for temporal and causal relations", |
|
"authors": [ |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhili", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiang Ning, Zhili Feng, Hao Wu, and Dan Roth. 2018a. Joint reasoning for temporal and causal rela- tions. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A Multi-Axis annotation scheme for event temporal relations", |
|
"authors": [ |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiang Ning, Hao Wu, and Dan Roth. 2018b. A Multi- Axis annotation scheme for event temporal relations. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "CogCompTime: A tool for understanding time in natural language", |
|
"authors": [ |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhili", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haoruo", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiang Ning, Ben Zhou, Zhili Feng, Haoruo Peng, and Dan Roth. 2018c. CogCompTime: A tool for under- standing time in natural language. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Richer Event Description: Integrating event coreference with temporal, causal and bridging annotation", |
|
"authors": [ |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "O'Gorman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristin", |
|
"middle": [], |
|
"last": "Wright-Bettner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of 2nd Workshop on Computing News Storylines", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "47--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tim O'Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer Event Description: Integrating event coreference with temporal, causal and bridg- ing annotation. In Proceedings of 2nd Workshop on Computing News Storylines, pages 47-56. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The TIMEBANK corpus", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Hanks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Sauri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "See", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gaizauskas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Setzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beth", |
|
"middle": [], |
|
"last": "Sundheim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Day", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Ferro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Corpus linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "647--656", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Pustejovsky, Patrick Hanks, Roser Sauri, An- drew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, and Lisa Ferro. 2003. The TIMEBANK corpus. In Cor- pus linguistics, pages 647-656.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Neural architecture for temporal relation extraction: a bi-lstm approach for detecting narrative containers", |
|
"authors": [ |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Tourille", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Ferret", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aurelie", |
|
"middle": [], |
|
"last": "Neveol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Tannier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "224--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julien Tourille, Olivier Ferret, Aurelie Neveol, and Xavier Tannier. 2017. Neural architecture for tem- poral relation extraction: a bi-lstm approach for de- tecting narrative containers. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), vol- ume 2, pages 224-230.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "SemEval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations", |
|
"authors": [ |
|
{ |
|
"first": "Naushad", |
|
"middle": [], |
|
"last": "Uzzaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hector", |
|
"middle": [], |
|
"last": "Llorens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Verhagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Naushad UzZaman, Hector Llorens, Leon Derczyn- ski, James Allen, Marc Verhagen, and James Puste- jovsky. 2013. SemEval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 1-9. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "SemEval-2007 task 15: TempEval temporal relation identification", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Verhagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gaizauskas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Schilder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Hepple", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations, Se-mEval '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. SemEval-2007 task 15: TempEval temporal relation identification. In Proceedings of the 4th In- ternational Workshop on Semantic Evaluations, Se- mEval '07, pages 75-80, Stroudsburg, PA, USA. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Temporal processing with the TARSQI toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Verhagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "22Nd International Conference on on Computational Linguistics: Demonstration Papers, COLING '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "189--192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Verhagen and James Pustejovsky. 2008. Tem- poral processing with the TARSQI toolkit. In 22Nd International Conference on on Computa- tional Linguistics: Demonstration Papers, COLING '08, pages 189-192, Stroudsburg, PA, USA. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "SemEval-2010 task 13: Tempeval-2", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Verhagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Saur\u00ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Caselli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Verhagen, Roser Saur\u00ed, Tommaso Caselli, and James Pustejovsky. 2010. SemEval-2010 task 13: Tempeval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10, pages 57-62, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Jointly identifying temporal relations with Markov logic", |
|
"authors": [ |
|
{ |
|
"first": "Katsumasa", |
|
"middle": [], |
|
"last": "Yoshikawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masayuki", |
|
"middle": [], |
|
"last": "Asahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuji", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "405--413", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katsumasa Yoshikawa, Sebastian Riedel, Masayuki Asahara, and Yuji Matsumoto. 2009. Jointly identi- fying temporal relations with Markov logic. In Pro- ceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 405-413. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "End-to-end neural relation extraction with global optimization", |
|
"authors": [ |
|
{ |
|
"first": "Meishan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guohong", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1730--1740", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1182" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global op- timization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1730-1740, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"text": "; Denis and Muller (2011); Do et al. (2012); Ning et al. (2017).", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Performances from a single-task relation model under different class weights. Left-axis: overall model; right-axis: two minority relations.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Data overview. Note that the numbers reported for MATRES do not include the AQUAINT section." |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Corpus</td><td>Models</td><td>P</td><td>Event R</td><td>F1</td><td>P</td><td>Relation R</td><td>F1</td></tr><tr><td>TB-</td><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "Dense Structrued Joint Model (Ours) 89.2 92.6 90.9 52.6 46.5 49.4 Chambers et al. (2014) 97.2 79.4 87.4 43.8 35.7 39.4 MATRES Structrued Joint Model (Ours) 87.1 88.5 87.8 59.0 60.2 59.6 Ning et al. (2018c) 83.5 87.0 85.2 48.4 58.0 52.8" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Micro-average</td><td/><td>TB-Dense</td><td/><td/><td>MATRES</td><td/></tr><tr><td>F1 (%)</td><td colspan=\"6\">Event Relation (G) Relation (E) Event Relation (G) Relation (E)</td></tr><tr><td>Baselines</td><td>87.4 \u2020</td><td>57.0 \u2021</td><td>39.4 \u2020</td><td>85.2</td><td>65.9</td><td>52.8</td></tr><tr><td>Single-task</td><td>88.6</td><td>61.9</td><td>44.3</td><td>86.9</td><td>75.3</td><td>57.2</td></tr><tr><td>Multi-task</td><td>89.2</td><td>64.5</td><td>48.4</td><td>86.4</td><td>75.5</td><td>58.7</td></tr><tr><td>Pipeline Joint</td><td>90.3</td><td>-</td><td>48.5</td><td>87.2</td><td>-</td><td>58.5</td></tr><tr><td colspan=\"2\">Structured Joint 90.9</td><td>-</td><td>49.4</td><td>87.8</td><td>-</td><td>59.6</td></tr></table>", |
|
"text": "Event and Relation Extraction Results on TB-Dense and MATRES" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Label Size Breakdown in the Test Data" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Event predicted as non-event 189 pairs Ex 1. What Microsoft gets are developers around the world working on ideas that could potentially open up Kinect for Windows ... Type 2: NONE predicted as true relation 135 pairs Ex 2. Mr. Netanyahu told Mr. Erdogan that what" |
|
} |
|
} |
|
} |
|
} |