{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:14.604412Z" }, "title": "Effective Distant Supervision for Temporal Relation Extraction", "authors": [ { "first": "Xinyu", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Texas at Austin", "location": {} }, "email": "xinyuzhao@utexas.edu" }, { "first": "Shih-Ting", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Texas at Austin", "location": {} }, "email": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Texas at Austin", "location": {} }, "email": "gdurrett@cs.utexas.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A principal barrier to training temporal relation extraction models in new domains is the lack of varied, high quality examples and the challenge of collecting more. We present a method of automatically collecting distantlysupervised examples of temporal relations. We scrape and automatically label event pairs where the temporal relations are made explicit in text, then mask out those explicit cues, forcing a model trained on this data to learn other signals. We demonstrate that a pre-trained Transformer model is able to transfer from the automatically labeled examples to humanannotated benchmarks in both zero-shot and few-shot settings, and that the masking scheme is important in improving generalization. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "A principal barrier to training temporal relation extraction models in new domains is the lack of varied, high quality examples and the challenge of collecting more. We present a method of automatically collecting distantlysupervised examples of temporal relations. We scrape and automatically label event pairs where the temporal relations are made explicit in text, then mask out those explicit cues, forcing a model trained on this data to learn other signals. We demonstrate that a pre-trained Transformer model is able to transfer from the automatically labeled examples to humanannotated benchmarks in both zero-shot and few-shot settings, and that the masking scheme is important in improving generalization. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Temporal relation extraction has largely focused on identifying pairwise relationships between events in text. Past work on annotating temporal relations has struggled to devise annotations schemes which are both comprehensive and easy to judge (Pustejovsky et al., 2003; . However, even simplified annotation schemes designed for crowdsourcing (Ning et al., 2018b; Vashishtha et al., 2019) can struggle to acquire high-accuracy judgments about nebulous phenomena, leading to a scarcity of high-quality labeled data. 
Compared to tasks like syntactic parsing (Bies et al., 2012) or natural language inference (Williams et al., 2018) , there are thus fewer resources for temporal relation extraction in other domains.", "cite_spans": [ { "start": 245, "end": 271, "text": "(Pustejovsky et al., 2003;", "ref_id": "BIBREF20" }, { "start": 345, "end": 365, "text": "(Ning et al., 2018b;", "ref_id": "BIBREF16" }, { "start": 366, "end": 390, "text": "Vashishtha et al., 2019)", "ref_id": "BIBREF22" }, { "start": 558, "end": 577, "text": "(Bies et al., 2012)", "ref_id": "BIBREF0" }, { "start": 608, "end": 631, "text": "(Williams et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we present a method of automatically gathering distantly-labeled temporal relation examples. Unlike traditional distant supervision methods (Mintz et al., 2009) , we do not rely on a knowledge base, but instead on heuristic cues that we will then mask out, forcing the model to make inferences from the remaining context. We explore two types of cues, but focus primarily on events that are anchored to orderable timexes (Goyal and Durrett, 2019) . These examples can be collected and labeled using an automatic system . By then masking the explicit temporal indicators, a model trained on these examples can no longer learn trivial timex-based rules, but must instead attend to more general temporal context cues. We show that a pre-trained model fine-tuned on this data learns general, implicit cues that transfer more broadly to human-annotated benchmarks. This observation follows a trend of recent work showing pre-trained models' ability to generalize from synthetic data to natural data (Xu et al., 2020; Marzoev et al., 2020) .", "cite_spans": [ { "start": 154, "end": 174, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF13" }, { "start": 435, "end": 460, "text": "(Goyal and Durrett, 2019)", "ref_id": "BIBREF6" }, { "start": 1008, "end": 1025, "text": "(Xu et al., 2020;", "ref_id": null }, { "start": 1026, "end": 1047, "text": "Marzoev et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We implement our approach with pre-trained Transformer models (Devlin et al., 2019; Liu et al., 2019; Clark et al., 2020 ) similar to a state-of-theart temporal relation extraction model from the literature (Han et al., 2019) . Our model is able to effectively transfer from a distantly-labeled dataset to the MATRES benchmark (Ning et al., 2018b) when used to supplement a small number of indomain or out-of-domain samples.", "cite_spans": [ { "start": 62, "end": 83, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF5" }, { "start": 84, "end": 101, "text": "Liu et al., 2019;", "ref_id": null }, { "start": 102, "end": 120, "text": "Clark et al., 2020", "ref_id": "BIBREF4" }, { "start": 207, "end": 225, "text": "(Han et al., 2019)", "ref_id": "BIBREF7" }, { "start": 327, "end": 347, "text": "(Ning et al., 2018b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our base classification model consists of a pretrained Transformer (Vaswani et al., 2017) model with an appended linear classification layer, represented in Figure 1 . For the majority of our experiments, we use RoBERTa (Liu et al., 2019 ), which we found to work better than BERT (Devlin et al., 2019) and ELECTRA (Clark et al., 2020) for domain transfer. 
We chose a single set of hyperparameters by tuning to match the performance of Han et al. (2019) ; for details see Appendix A.", "cite_spans": [ { "start": 67, "end": 89, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF23" }, { "start": 220, "end": 237, "text": "(Liu et al., 2019", "ref_id": null }, { "start": 281, "end": 302, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 315, "end": 335, "text": "(Clark et al., 2020)", "ref_id": "BIBREF4" }, { "start": 436, "end": 453, "text": "Han et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 157, "end": 165, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Classification Model", "sec_num": "2" }, { "text": "A single example consists of an event pair Figure 1 : Classification model consisting of a pretrained transformer model and a linear layer. Event tokens are represented by t i , t j and correspond to output embeddings e i , e j , which are used by the linear classifier to produce a distribution over labels.", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 51, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Classification Model", "sec_num": "2" }, { "text": "in context, which may be a single sentence or two sentences, and a label from {AFTER, BE-FORE, EQUALS, VAGUE}, following the annotation scheme in MATRES. Each example is tokenized to yield input tokens T = [t 1 , t 2 , ..., t n ], with event tokens t i , t j \u2208 T . For events consisting of multiple sub-word tokens, we track only the first token position. We use the convention of passing events in text order, where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Model", "sec_num": "2" }, { "text": "1 \u2264 i < j \u2264 n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Model", "sec_num": "2" }, { "text": "The language model then produces output embeddings [e 1 , e 2 , ..., e n ]. For classification, we select the embeddings e i , e j corresponding to the event token positions, and combine them into a classification vector, c = [e i ; e j ; e i e j ; e i \u2212 e j ] where represents elementwise multiplication. Finally, a linear classification layer produces a distribution over the four relation labels. Training is done by maximizing likelihood of labeled samples. We implement this model using PyTorch (Paszke et al., 2019) and pre-trained models from HuggingFace's Transformers library (Wolf et al., 2019) .", "cite_spans": [ { "start": 500, "end": 521, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF18" }, { "start": 585, "end": 604, "text": "(Wolf et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Classification Model", "sec_num": "2" }, { "text": "We benchmark our classification model by training and evaluating on MATRES. 
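To make this setup concrete, the following is a minimal PyTorch sketch of the classifier just described (an illustrative sketch rather than our released implementation; the class name, argument names, and the use of AutoModel are our own choices here):

import torch
import torch.nn as nn
from transformers import AutoModel

class PairwiseTemporalClassifier(nn.Module):
    # Pre-trained encoder plus a linear layer over the two event-token embeddings.
    def __init__(self, model_name='roberta-base', num_labels=4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # c = [e_i; e_j; e_i * e_j; e_i - e_j] gives 4 * hidden features.
        self.classifier = nn.Linear(4 * hidden, num_labels)

    def forward(self, input_ids, attention_mask, event1_pos, event2_pos):
        # event1_pos / event2_pos: index of the first subword of each event, per example.
        states = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        batch_idx = torch.arange(input_ids.size(0))
        e_i = states[batch_idx, event1_pos]
        e_j = states[batch_idx, event2_pos]
        c = torch.cat([e_i, e_j, e_i * e_j, e_i - e_j], dim=-1)
        return self.classifier(c)  # logits over AFTER, BEFORE, EQUALS, VAGUE

Training such a sketch with a standard cross-entropy loss over the four labels corresponds to the likelihood maximization described above.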
We achieve an F1 of 79.8 with RoBERTa (Liu et al., 2019) and an F1 of 80.3 with ELECTRA (Clark et al., 2020) , demonstrating that our model replicates state-of-the-art performance achieved by local models (only considering arcs in isolation), currently 80.3 F1 (Han et al., 2019) .", "cite_spans": [ { "start": 114, "end": 132, "text": "(Liu et al., 2019)", "ref_id": null }, { "start": 164, "end": 184, "text": "(Clark et al., 2020)", "ref_id": "BIBREF4" }, { "start": 337, "end": 355, "text": "(Han et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Classification Model", "sec_num": "2" }, { "text": "We aim to create a method of automatically gathering high-quality data that can be applied to unlabeled text. To this end, we focus on two techniques identifying explicit temporal indicators. First, we identify single-sentence examples where event pairs are automatically labeled via explicit discourse connectives. Second, we scrape occurrences of event pairs that are anchorable to timexes which determine their relation. We will see that this second technique is substantially better, and analyze some factors contributing to the performance delta. Although neither technique captures the gamut of phenomena found in human-labeled data, pre-trained models' generalization capabilities and a masking technique tailored for this setting are two tools that enable effective transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning from Distant Data", "sec_num": "3" }, { "text": "For both techniques, we scrape distant examples from English Gigaword Fifth Edition (Parker et al., 2011) . We extract samples from a balance of the different news sources present in the dataset. In both cases, we use the Stanford CoreNLP lexicalized parser (Manning et al., 2014) to generate parse trees for the source text, which can be timeconsuming at scale. However, we can pre-filter sentences based on the presence of timexes or target discourse connectives, and so in practice we only rarely need to invoke the parser. Table 9 in the Appendix shows collected data samples, and we describe these two collection methods in more detail below.", "cite_spans": [ { "start": 84, "end": 105, "text": "(Parker et al., 2011)", "ref_id": "BIBREF17" }, { "start": 258, "end": 280, "text": "(Manning et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 527, "end": 534, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Learning from Distant Data", "sec_num": "3" }, { "text": "Words like before, after, during, until, prior to, and others can indicate the temporal status of events in text explicitly. Past work has shown that complex relations can be learned from discourse connectives in non-temporal settings (Nie et al., 2019) , so such connectives can be powerful indicators. We focus on before and after in this work, as these are the most common and straightforward to map to a temporal relation.", "cite_spans": [ { "start": 235, "end": 253, "text": "(Nie et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Temporal Connectives", "sec_num": "3.1" }, { "text": "To identify connected event pairs, we use the Stanford CoreNLP lexicalized parser (Manning et al., 2014) to produce parse trees. We then search for a related event pair by 1) identifying the connective, 2) finding the closest parent verb phrase, and 3) finding the closest child verb phrase. These become the events for the example. 
When this identifies modals or auxiliaries, we take the corresponding main verb. The label for the example is simply determined by the before or after connective. Examples are listed in Appendix E; on inspection, we found this method to be reliable. ing of their associated events, assuming each event can be appropriately linked to the timex.", "cite_spans": [ { "start": 82, "end": 104, "text": "(Manning et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Temporal Connectives", "sec_num": "3.1" }, { "text": "We use CAEVO to detect event pairs and link events to timexes, which include explicit datetimes (January, 1961), relative times (tomorrow) and other natural language indicators (now, until recently). This approach, which yields both single-and cross-sentence examples, was explored by Goyal and Durrett (2019) , who noisily labeled data to evaluate their timex embedding model. First, the input document is annotated by CAEVO with events and timexes using its parse trees. Two of its sieves are then applied: the AD-JACENTVERBTIMEX sieve identifies events that are anchored to time expressions via a direct path in the syntactic parse tree, then the TIMETIME-SIEVE uses a small set of deterministic rules to label relations between timexes. These two sieves have high precisions, of 0.74 and 0.90, respectively . Figure 2 shows the result of applying both sieves. Finally, the system is able to infer the relations between events that are anchored to comparable timexes (i.e finished before published), giving us event pairs usable for training.", "cite_spans": [ { "start": 285, "end": 309, "text": "Goyal and Durrett (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 813, "end": 821, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Events Anchored to Time Expressions", "sec_num": "3.2" }, { "text": "The resulting datasets are reasonably balanced between BEFORE and AFTER, with sparse EQUAL examples and no VAGUE examples. A more detailed label breakdown is included in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Events Anchored to Time Expressions", "sec_num": "3.2" }, { "text": "These distant examples are gleaned from \"trivial\" indicators in the text, which a model like BERT (Devlin et al., 2019) will overfit to. We observe that our RoBERTa classifier yields 99.8% accuracy on a held out BeforeAfter dataset, and evaluating with the same DistantTimex train/test split of Goyal and Durrett (2019) results in 96.6% test accuracy.", "cite_spans": [ { "start": 98, "end": 119, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Example Masking", "sec_num": "3.3" }, { "text": "In order to combat this, we mask the explicit tem-poral cues identified by our weak labeling process. Our goal is to induce the model to learn the label from the remaining tokens, including the event instances themselves and the broader context. Masking is performed prior to subword tokenization, so each word or timex gets one mask token per word. For our BeforeAfter examples, we simply mask the temporal connective. For our DistantTimex examples, we use the timex tags generated by CAEVO to mask all identified timexes present in the context. This results in masking of not only explicit timexes (e.g. dates, times), but also of natural language timexes (e.g. previously, recently). This may occasionally result in \"sparse\" training examples that have a high proportion of mask tokens. 
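As a concrete illustration of this masking step, the following is a simplified sketch (it assumes the labeling step provides cue spans as word-index ranges over whitespace tokens; the function name and the particular mask string are illustrative, and the mask token should match the pre-trained encoder):

MASK = '<mask>'  # RoBERTa-style mask string; BERT would use its own mask token

def mask_temporal_cues(words, cue_spans):
    # words: whitespace tokens of the example, before subword tokenization.
    # cue_spans: (start, end) word-index ranges covering each connective or timex.
    # Each masked word becomes exactly one mask token.
    masked = list(words)
    for start, end in cue_spans:
        for k in range(start, end):
            masked[k] = MASK
    return masked

# e.g. mask_temporal_cues(['She', 'graduated', 'in', 'June', '2010', '.'], [(3, 5)])
# returns ['She', 'graduated', 'in', '<mask>', '<mask>', '.']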
Refer to Tables 7-8 in Appendix E for concrete examples. In spite of this masking, our model is able to classify distantly-labeled timex examples with 85% accuracy when evaluating on a held-out set, well above a majority baseline. This indicates that there are other temporal cues that the model can use to determine the temporal relation.", "cite_spans": [], "ref_spans": [ { "start": 799, "end": 809, "text": "Tables 7-8", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Example Masking", "sec_num": "3.3" }, { "text": "We evaluate our distant dataset on several axes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "(1) In zero-shot, few-shot, or transfer settings, to what extent can this help existing models? (2) How important is each facet of our distant extraction setup? (3) What can we say about the distribution of our data from these techniques? We focus our evaluation on the English MATRES dataset (Ning et al., 2018b) , a four-class temporal relation dataset of chiefly newswire data drawn from a number of different sources. 2 We found significant variance in model performance in our transfer for small data settings, so most results use average best performance or majority-vote ensembled performance of three randomly seeded models trained in the same setting.", "cite_spans": [ { "start": 293, "end": 313, "text": "(Ning et al., 2018b)", "ref_id": "BIBREF16" }, { "start": 422, "end": 423, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "To get the clearest picture of the differences in our distant setups, we first evaluate in a setting of zero-shot adaptation to MATRES. We train on different distantly labeled dataset sizes in order to establish a relationship between example quantity and generalization performance. Our results presented in Table 1 show that the DistantTimex data works substantially better than BeforeAfter: with the Distant-Timex data, there is a correlation between adding more distantly-labeled examples and increased performance. While both of these rules target particular narrow slices of data, the set with explicit timexes appears to be broader than that with before/after connectives, and hence BERT can learn to generalize better. 3", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Comparison of Distant Datasets", "sec_num": null }, { "text": "Using Fewer Labeled Examples In a more realistic setting, we assume access to small amounts of pre-existing labeled data, using roughly 10% of existing datasets. evaluating on MATRES using small amounts of either MATRES or UDS-T data in conjunction with our distant data; three models are randomly initialized and trained for each setting. In both settings, adding distant data improves substantially over just using the in-domain MATRES data, and the best model performance is only around 4 F 1 worse than the in-domain MATRES results using the entire train set. We also show that this data can stack with data from UDS-T (Vashishtha et al., 2019) and improve transfer over raw UDS-T. This is in spite of very different event distributions between UDST and MATRES and a complete lack of examples of VAGUE relations during training. Table 3 , we test the effect of masking on model generalization. 
We train our model on the collected distant examples with and without masking, and report ensembled evaluation results on the MATRES test set. Our comparison shows that masking causes an increase in generalization for DistantTimex examples, but little change in BeforeAfter transfer, which still performs similar to the majority baseline.", "cite_spans": [ { "start": 623, "end": 648, "text": "(Vashishtha et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 833, "end": 840, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Comparison of Distant Datasets", "sec_num": null }, { "text": "We report the most frequent events from each dataset in Table 4 . MATRES is highly focused on reporting verbs, but the distant data has a much flatter distribution. BeforeAfter features more light verbs whereas DistantTimex features events with more complex semantics; possibly the model can learn more regular and meaningful patterns from such data, or relevant cues from a more similar event distribution (than found in BeforeAfter). We present event-label tuples in Appendix D.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Understanding the data distribution", "sec_num": null }, { "text": "There is little direct prior work on using this kind of distant supervision for temporal relation extraction. Past work has studied automatic extraction of typical inter-event orderings (Chklovski and Pantel, 2004; Ning et al., 2018a; Yao and Huang, 2018) to aid downstream temporal tasks, but these approaches represent events as single words (predicates) taken out of context, so the knowledge they can capture is limited. The commonsense acquisition method of Zhou et al. (2020) learns more sophisticated information, but more about unary properties of events (typical time, duration) rather than relational knowledge. Lin et al. (2020) achieve a somewhat similar goal, but make a strong assumption about narrative-structured corpora and do not evaluate on in-context temporal relation extraction. Our technique does not use a knowledge base like classic distant supervision methods (Mintz et al., 2009) . However, because we eventually mask out the explicit temporal indicators, we are still using temporal information \"external\" to the final example to derive the label, hence why we invoke this term. A related concept is the idea of labeling functions (Ratner et al., 2016; Hancock et al., 2018) , which are used to automatically construct training data for new domains. However, to our knowledge, these techniques have not been applied to temporal relation extraction, nor used in conjunction with masking as we do.", "cite_spans": [ { "start": 186, "end": 214, "text": "(Chklovski and Pantel, 2004;", "ref_id": "BIBREF3" }, { "start": 215, "end": 234, "text": "Ning et al., 2018a;", "ref_id": "BIBREF15" }, { "start": 235, "end": 255, "text": "Yao and Huang, 2018)", "ref_id": null }, { "start": 886, "end": 906, "text": "(Mintz et al., 2009)", "ref_id": "BIBREF13" }, { "start": 1159, "end": 1180, "text": "(Ratner et al., 2016;", "ref_id": "BIBREF21" }, { "start": 1181, "end": 1202, "text": "Hancock et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We use explicit temporal cues to automatically identify examples of temporal relations between events in text. 
By masking these trivial features, a pre-trained Transformer model can learn from the remaining context and generalize to humanannotated benchmarks. Comparing performance for two distant labeling methods-using discourse connectives and linking events to time expressionsindicates that richer temporal cues exist in the second case. The scope of identified time expressions encompasses both explicit datetimes and natural language indicators (\"now\", \"recently\", etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "While datetimes may be more common in news and historical articles, relative time expressions are present in diverse domains such as literature and colloquial texts. Where such indicators exist, our approach may be used to automatically collect distantly labeled temporal relations. More broadly, we believe that this label-and-mask paradigm could be used to collect targeted training data for a variety of NLP tasks. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "As mentioned previously, we selected a set of hyperparameters by fine-tuning to approximate the performance of recent state of the art on MATRES, achieving an F1 of 79.8 with RoBERTa (base) and 80.3 with ELECTRA. Specifically, we arrived at a learning rate of 2e-5 with a warmup proportion of 0.1. Our batch size varied from 16-25 based on hardware.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model Implementation", "sec_num": null }, { "text": "Full implementation can be found at https:// github.com/xyz-zy/distant-temprel B Label Composition Across Datasets in VAGUE relations (which have the lowest IAA in MATRES, and are particularly difficult to resolve) but emphasize the two most prominent classes. In few-shot settings, our model sees and trains on VAGUE relations from MATRES.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model Implementation", "sec_num": null }, { "text": "The VAGUE label is defined to express indeterminacy and also has the lowest inter-annotator agreement in the MATRES dataset. These examples are also relatively more difficult for models to learn. Tables 6 and 7 present an expanded view of our results, adding the evaluation accuracy on only {BEFORE, AFTER, EQUALS} examples. As expected, we observe that model performance increases across the board under this evaluation. 
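For clarity, the minus-VAGUE accuracy reported in these tables can be sketched as follows (an illustrative computation, not our exact evaluation code; the function name is our own):

def accuracy_without_vague(gold, pred, vague='VAGUE'):
    # Drop examples whose gold label is VAGUE, then score the rest.
    kept = [(g, p) for g, p in zip(gold, pred) if g != vague]
    if not kept:
        return 0.0
    return sum(1 for g, p in kept if g == p) / len(kept)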
1.69% ('said', 'said', VAGUE) 0.67% ('said', 'said', AFTER) 0.25% ('said', 'told', BEFORE) 0.21% ('called', 'said', BEFORE) 0.21% ('killed', 'said', BEFORE) 0.16% ('found', 'said', BEFORE) 0.15% ('added', 'said', BEFORE) 0.15% ('found', 'said', AFTER) 0.14% ('declined', 'said', BEFORE) 0.14% DistantTimex ('won', 'won', BEFORE) 0.44% ('died', 'died', BEFORE) 0.26% ('annexed', 'seized', BEFORE) 0.26% ('born', 'graduated', BEFORE) 0.25% ('crashes', 'killed', AFTER) 0.20% ('killed', 'torched', AFTER) 0.19% ('crashed', 'crashed', AFTER) 0.17% ('died', 'married', BEFORE) 0.16% ('end', 'remove', BEFORE) 0.14% ('died', 'killed', BEFORE) 0.14% Table 8 : Top 10 event-event-relation tuples per dataset as a percentage of total event mentions.", "cite_spans": [ { "start": 428, "end": 451, "text": "('said', 'said', VAGUE)", "ref_id": null }, { "start": 458, "end": 481, "text": "('said', 'said', AFTER)", "ref_id": null }, { "start": 488, "end": 512, "text": "('said', 'told', BEFORE)", "ref_id": null }, { "start": 519, "end": 545, "text": "('called', 'said', BEFORE)", "ref_id": null }, { "start": 552, "end": 578, "text": "('killed', 'said', BEFORE)", "ref_id": null }, { "start": 585, "end": 610, "text": "('found', 'said', BEFORE)", "ref_id": null }, { "start": 617, "end": 642, "text": "('added', 'said', BEFORE)", "ref_id": null }, { "start": 649, "end": 673, "text": "('found', 'said', AFTER)", "ref_id": null }, { "start": 680, "end": 708, "text": "('declined', 'said', BEFORE)", "ref_id": null }, { "start": 757, "end": 781, "text": "('died', 'died', BEFORE)", "ref_id": null }, { "start": 860, "end": 888, "text": "('crashes', 'killed', AFTER)", "ref_id": null }, { "start": 895, "end": 923, "text": "('killed', 'torched', AFTER)", "ref_id": null }, { "start": 930, "end": 959, "text": "('crashed', 'crashed', AFTER)", "ref_id": null }, { "start": 966, "end": 993, "text": "('died', 'married', BEFORE)", "ref_id": null }, { "start": 1018, "end": 1025, "text": "BEFORE)", "ref_id": null }, { "start": 1032, "end": 1058, "text": "('died', 'killed', BEFORE)", "ref_id": null } ], "ref_spans": [ { "start": 196, "end": 210, "text": "Tables 6 and 7", "ref_id": "TABREF9" }, { "start": 1065, "end": 1072, "text": "Table 8", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "C Performance without VAGUE Examples", "sec_num": null }, { "text": "(won-won, died-died). These examples frequently come from sentences or sentence pairs discussing related events of the same type, using dates to contrast them. Table 9 presents a sample of our distantly labeled data. Examples (a-b) show that our BeforeAfter parsing scheme can correctly identify linked events across sentence spans. Examples (c-d) display a variety in parsed syntactic structures that link events to timexes. Tables 10 and 11 present examples of our masking scheme on Be-foreAfter and DistantTimex examples respectively.", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 167, "text": "Table 9", "ref_id": null }, { "start": 426, "end": 442, "text": "Tables 10 and 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "D Most Common Event-Label Tuples", "sec_num": null }, { "text": "Notably in Table 11 , multi-word timexes result in one mask token per word. 
All identified timexes in the examples are masked, even if they are not directly linked to the events in consideration.", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 19, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Distant Example Masking", "sec_num": null }, { "text": "Code and datasets available at: https://github. com/xyz-zy/distant-temprel", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We choose not to evaluate on the UDS-T dataset(Vashishtha et al., 2019), treating it solely as a training source. In our experiments, converting from real-valued time span annotations into categorical event-pair labels required dealing with significant disagreement among annotators. Despite trying several resolution strategies, none of our in-domain fine-tuned Transformer models performed much better than a majority baseline, indicating high noise or an extremely challenging setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "One possible reason is that explicit indicators like before and after may be used explicitly to communicate temporal information where it cannot be otherwise inferred, but timexes are often used to communicate more specific details about events where the relation may already be clear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Thanks to the anonymous reviewers for their helpful comments. This material is also based on research that is in part supported by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory (AFRL), DARPA, or the U.S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "English Web Treebank. LDC2012T13. Linguistic Data Consortium", "authors": [ { "first": "Ann", "middle": [], "last": "Bies", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Mott", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Warner", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Kulick", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English Web Treebank. LDC2012T13. 
Lin- guistic Data Consortium, Philadelphia, PA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An Annotation Framework for Dense Event Ordering", "authors": [ { "first": "Taylor", "middle": [], "last": "Cassidy", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "501--506", "other_ids": { "DOI": [ "10.3115/v1/P14-2082" ] }, "num": null, "urls": [], "raw_text": "Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An Annotation Frame- work for Dense Event Ordering. In Proceedings of the 52nd Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 501-506, Baltimore, Maryland. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Dense Event Ordering with a Multi-Pass Architecture", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Cassidy", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "273--284", "other_ids": { "DOI": [ "10.1162/tacl_a_00182" ] }, "num": null, "urls": [], "raw_text": "Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense Event Ordering with a Multi-Pass Architecture. Transactions of the Association for Computational Linguistics, 2:273- 284.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "VerbOcean: Mining the web for fine-grained semantic verb relations", "authors": [ { "first": "Timothy", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Chklovski and Patrick Pantel. 2004. VerbO- cean: Mining the web for fine-grained semantic verb relations. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Process- ing, pages 33-40, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Electra: Pretraining text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre- training text encoders as discriminators rather than generators. 
In International Conference on Learn- ing Representations.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Embedding Time Expressions for Deep Temporal Ordering Models", "authors": [ { "first": "Tanya", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4400--4406", "other_ids": { "DOI": [ "10.18653/v1/P19-1433" ] }, "num": null, "urls": [], "raw_text": "Tanya Goyal and Greg Durrett. 2019. Embedding Time Expressions for Deep Temporal Ordering Models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4400-4406, Florence, Italy. Association for Compu- tational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction", "authors": [ { "first": "Rujun", "middle": [], "last": "Han", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Ning", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "434--444", "other_ids": { "DOI": [ "10.18653/v1/D19-1041" ] }, "num": null, "urls": [], "raw_text": "Rujun Han, Qiang Ning, and Nanyun Peng. 2019. Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction. 
In Proceedings of the 2019 Conference on Empiri- cal Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 434-444.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Training classifiers with natural language explanations", "authors": [ { "first": "Braden", "middle": [], "last": "Hancock", "suffix": "" }, { "first": "Paroma", "middle": [], "last": "Varma", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Bringmann", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1884--1895", "other_ids": { "DOI": [ "10.18653/v1/P18-1175" ] }, "num": null, "urls": [], "raw_text": "Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R\u00e9. 2018. Training classifiers with natural lan- guage explanations. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884- 1895, Melbourne, Australia. Association for Compu- tational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Nathanael Chambers, and Greg Durrett. 2020. Conditional Generation of Temporallyordered Event Sequences", "authors": [ { "first": "Shih-Ting", "middle": [], "last": "Lin", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shih-Ting Lin, Nathanael Chambers, and Greg Dur- rett. 2020. Conditional Generation of Temporally- ordered Event Sequences. arXiv cs.CL 2012.15786.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Stanford CoreNLP Natural Language Processing Toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL) System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. 
In Association for Computational Linguistics (ACL) System Demon- strations, pages 55-60.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Unnatural language processing: Bridging the gap between synthetic and natural language data", "authors": [ { "first": "Alana", "middle": [], "last": "Marzoev", "suffix": "" }, { "first": "M", "middle": [ "Frans" ], "last": "Samuel Madden", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Kaashoek", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "", "middle": [], "last": "Andreas", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alana Marzoev, Samuel Madden, M. Frans Kaashoek, Michael Cafarella, and Jacob Andreas. 2020. Unnat- ural language processing: Bridging the gap between synthetic and natural language data.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "DisSent: Learning Sentence Representations from Explicit Discourse Relations", "authors": [ { "first": "Allen", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4497--4510", "other_ids": { "DOI": [ "10.18653/v1/P19-1442" ] }, "num": null, "urls": [], "raw_text": "Allen Nie, Erin Bennett, and Noah Goodman. 2019. DisSent: Learning Sentence Representations from Explicit Discourse Relations. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4497-4510, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improving Temporal Relation Extraction with a Globally Acquired Statistical Resource", "authors": [ { "first": "Qiang", "middle": [], "last": "Ning", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haoruo", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "841--851", "other_ids": { "DOI": [ "10.18653/v1/N18-1077" ] }, "num": null, "urls": [], "raw_text": "Qiang Ning, Hao Wu, Haoruo Peng, and Dan Roth. 2018a. Improving Temporal Relation Extraction with a Globally Acquired Statistical Resource. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long Papers), pages 841-851, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A Multi-Axis Annotation Scheme for Event Temporal Relations", "authors": [ { "first": "Qiang", "middle": [], "last": "Ning", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1318--1328", "other_ids": { "DOI": [ "10.18653/v1/P18-1122" ] }, "num": null, "urls": [], "raw_text": "Qiang Ning, Hao Wu, and Dan Roth. 2018b. A Multi- Axis Annotation Scheme for Event Temporal Rela- tions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1318-1328, Melbourne, Australia. Association for Computational Linguis- tics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "English Gigaword Fifth Edition. LDC2011T07. Linguistic Data Consortium", "authors": [ { "first": "Robert", "middle": [], "last": "Parker", "suffix": "" }, { "first": "David", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kazuaki", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English Gigaword Fifth Edi- tion. LDC2011T07. 
Linguistic Data Consortium, Philadelphia, PA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learn- ing Library.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Advances in Neural Information Processing Systems", "authors": [ { "first": "H", "middle": [], "last": "In", "suffix": "" }, { "first": "H", "middle": [], "last": "Wallach", "suffix": "" }, { "first": "A", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "F", "middle": [], "last": "Beygelzimer", "suffix": "" }, { "first": "E", "middle": [], "last": "Buc", "suffix": "" }, { "first": "R", "middle": [], "last": "Fox", "suffix": "" }, { "first": "", "middle": [], "last": "Garnett", "suffix": "" } ], "year": null, "venue": "", "volume": "32", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9 Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. 
Curran Asso- ciates, Inc.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The TimeBank corpus", "authors": [ { "first": "James", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "Roser", "middle": [], "last": "Saur\u00ed", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "See", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Setzer", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Beth", "middle": [], "last": "Sundheim", "suffix": "" }, { "first": "David", "middle": [], "last": "Day", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "Marcia", "middle": [], "last": "Lazo", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Corpus Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Pustejovsky, Patrick Hanks, Roser Saur\u00ed, Andrew See, Rob Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, and Marcia Lazo. 2003. The TimeBank cor- pus. Proceedings of Corpus Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Data programming: Creating large training sets, quickly. NIPS'16", "authors": [ { "first": "Alexander", "middle": [], "last": "Ratner", "suffix": "" }, { "first": "Christopher", "middle": [ "De" ], "last": "Sa", "suffix": "" }, { "first": "Sen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Selsam", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "3574--3582", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, and Christopher R\u00e9. 2016. Data pro- gramming: Creating large training sets, quickly. NIPS'16, page 3574-3582, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Fine-Grained Temporal Relation Extraction", "authors": [ { "first": "Siddharth", "middle": [], "last": "Vashishtha", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Aaron", "middle": [ "Steven" ], "last": "White", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2906--2919", "other_ids": { "DOI": [ "10.18653/v1/P19-1280" ] }, "num": null, "urls": [], "raw_text": "Siddharth Vashishtha, Benjamin Van Durme, and Aaron Steven White. 2019. Fine-Grained Temporal Relation Extraction. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 2906-2919, Florence, Italy. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "undefinedukasz Kaiser, and Illia Polosukhin", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, undefine- dukasz Kaiser, and Illia Polosukhin. 2017. Attention", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Beyond connectives, another cue is the explicit presence of timexes. An example is shown inFigure2: the years 1951 and 1961 determine the order-", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Process of identifying distantly labeled eventevent relations using CAEVO. First, events and timexes are identified and two of CAEVO's sieves are applied. Then, using transitivity, event-event relation is inferred from associated timexes.", "uris": null, "type_str": "figure" }, "TABREF1": { "text": "Results on MATRES after training on distant data, with explicit temporal cues masked. Results are esembled from three models trained in the same setting.", "content": "
DistantTimex Examples
Labeled Set | Eval         | None | 1k   | 5k   | 10k
MATRES 1k   | Avg. Dev F1  | 64.2 | 67.4 | 75.2 | 75.6
MATRES 1k   | Avg. Test F1 | 60.9 | 66.1 | 73.6 | 73.7
MATRES 1k   | Ens. Dev F1  | 70.2 | 74.5 | 76.7 | 76.6
MATRES 1k   | Ens. Test F1 | 66.5 | 72.0 | 75.0 | 75.5
UDS-T 5k    | Avg. Dev F1  | 68.7 | 62.8 | 72.0 | 70.8
UDS-T 5k    | Avg. Test F1 | 66.2 | 60.9 | 69.5 | 69.8
UDS-T 5k    | Ens. Dev F1  | 70.1 | 64.6 | 73.2 | 72.1
UDS-T 5k    | Ens. Test F1 | 68.2 | 62.3 | 70.7 | 71.8
", "num": null, "html": null, "type_str": "table" }, "TABREF2": { "text": "", "content": "
: Evaluation results on MATRES when adding automatically collected examples to small amounts of human-annotated training data. Using more DistantTimex data is able to improve performance substantially over not using any (None).
", "num": null, "html": null, "type_str": "table" }, "TABREF3": { "text": "", "content": "
shows results from
", "num": null, "html": null, "type_str": "table" }, "TABREF4": { "text": "Transfer comparison for data with and without masking. Results are ensembled from three models trained in the same setting.", "content": "
MATRES         | DistantTimex | BeforeAfter | UDST 5k
said 16.0%     | won 2.02%    | was 2.60%   | is 5.8%
killed 1.2%    | died 1.69%   | said 2.08%  | was 3.3%
found 1.0%     | said 1.67%   | came 1.70%  | have 2.9%
says 0.9%      | began 1.41%  | is 1.06%    | are 2.5%
told 0.8%      | joined 1.10% | began 0.99% | be 2.2%
called 0.8%    | took 1.10%   | made 0.85%  | get 1.5%
reported 0.7%  | set 1.04%    | have 0.78%  | had 1.4%
saying 0.7%    | killed 1.04% | left 0.77%  | know 1.4%
say 0.7%       | born 0.96%   | had 0.77%   | do 1.1%
was 0.6%       | held 0.93%   | be 0.73%    | go 1.9%
", "num": null, "html": null, "type_str": "table" }, "TABREF5": { "text": "", "content": "", "num": null, "html": null, "type_str": "table" }, "TABREF6": { "text": "is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NeurIPS),.", "content": "
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. ArXiv, abs/1910.03771.
Silei Xu, Sina Semnani, Giovanni Campagna, and Monica Lam. 2020. AutoQA: From databases to QA semantic parsers with only synthetic training data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 422-434, Online. Association for Computational Linguistics.
Wenlin Yao and Ruihong Huang. 2018. Temporal Event Knowledge Acquisition via Identifying Narratives. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 537-547, Melbourne, Australia. Association for Computational Linguistics.
Ben Zhou, Qiang Ning, Daniel Khashabi, and Dan Roth. 2020. Temporal common sense acquisition with minimal supervision. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7579-7589, Online. Association for Computational Linguistics.
", "num": null, "html": null, "type_str": "table" }, "TABREF7": { "text": "", "content": "
compares the label distribution of our distant data against the human-annotated MATRES dataset. In comparison, our datasets are lacking
", "num": null, "html": null, "type_str": "table" }, "TABREF8": { "text": "Training data label distribution. Both of our distant datasets contain 10k examples; the MATRES training set contains 9.7k examples.", "content": "
MATRES\u2212Vague
Training Data    | Split | P    | R    | F1   | Acc.
Majority Label   | Dev   | 52.6 | 60.2 | 56.1 | -
Majority Label   | Test  | 50.7 | 58.6 | 54.3 | -
MATRES           | Dev   | 77.1 | 85.5 | 81.1 | 85.5
MATRES           | Test  | 75.1 | 84.8 | 79.6 | 84.8
DistantTimex 1k  | Dev   | 54.1 | 61.9 | 57.7 | 61.9
DistantTimex 1k  | Test  | 50.8 | 58.7 | 54.5 | 58.7
DistantTimex 5k  | Dev   | 60.4 | 69.1 | 64.5 | 69.1
DistantTimex 5k  | Test  | 61.9 | 71.5 | 66.4 | 69.2
DistantTimex 10k | Dev   | 64.0 | 73.2 | 68.3 | 73.2
DistantTimex 10k | Test  | 61.5 | 71.1 | 66.0 | 71.1
BeforeAfter 10k  | Dev   | 53.4 | 61.1 | 57.0 | 61.1
BeforeAfter 10k  | Test  | 51.6 | 59.7 | 55.3 | 59.7
", "num": null, "html": null, "type_str": "table" }, "TABREF9": { "text": "Expanded view of results on MATRES, comparing performance on only on {BEFORE, AFTER, EQUALS} examples (\"\u2212 Vague\") versus the entire eval set. Presented results are majority-vote ensembled from three models trained in the same setting.", "content": "", "num": null, "html": null, "type_str": "table" }, "TABREF10": { "text": "", "content": "
presents a comparison of the most common (event1, event2, label) tuples across datasets. In MATRES, the most common tuples are largely (event, \"said\", BEFORE) events. The DistantTimex data features many examples of same-verb pairs
", "num": null, "html": null, "type_str": "table" }, "TABREF11": { "text": "Comparison of majority-vote ensembled performance on {BEFORE, AFTER, EQUALS} examples (\"\u2212Vague\") versus performance on the entire test set. Performance is higher without vague examples, and increases with the number of DistantTimex examples added.", "content": "
MATRES
('said', 'said', BEFORE)
", "num": null, "html": null, "type_str": "table" } } } }