{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:09.474518Z"
},
"title": "A BERT-based One-Pass Multi-Task Model for Clinical Temporal Relation Extraction",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Loyola University",
"location": {
"settlement": "Chicago"
}
},
"email": "ddligach@luc.edu"
},
{
"first": "Farig",
"middle": [],
"last": "Sadeque",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": "bethard@email.arizona.edu"
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recently BERT has achieved a state-of-theart performance in temporal relation extraction from clinical Electronic Medical Records text. However, the current approach is inefficient as it requires multiple passes through each input sequence. We extend a recently-proposed one-pass model for relation classification to a one-pass model for relation extraction. We augment this framework by introducing global embeddings to help with long-distance relation inference, and by multi-task learning to increase model performance and generalizability. Our proposed model produces results on par with the state-of-the-art in temporal relation extraction on the THYME corpus and is much \"greener\" in computational cost.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recently BERT has achieved a state-of-theart performance in temporal relation extraction from clinical Electronic Medical Records text. However, the current approach is inefficient as it requires multiple passes through each input sequence. We extend a recently-proposed one-pass model for relation classification to a one-pass model for relation extraction. We augment this framework by introducing global embeddings to help with long-distance relation inference, and by multi-task learning to increase model performance and generalizability. Our proposed model produces results on par with the state-of-the-art in temporal relation extraction on the THYME corpus and is much \"greener\" in computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The analysis of many medical phenomena (e.g., disease progression, longitudinal effects of medications, treatment regimen and outcomes) heavily depends on temporal relation extraction from the clinical free text embedded in the Electronic Medical Records (EMRs). At a coarse level, a clinical event can be linked to the document creation time (DCT) as Document Time Relations (DocTimeRel), with possible values of BEFORE, AFTER, OVER-LAP, and BEFORE OVERLAP (Styler IV et al., 2014) . At a finer level, a narrative container (Pustejovsky and Stubbs, 2011) can temporally subsume an event as a contains relation. The THYME corpus (Styler IV et al., 2014) consists of EMR clinical text and is annotated with time expressions (TIMEX3), events (EVENT), and temporal relations (TLINK) using an extension of TimeML (Pustejovsky et al., 2003; Pustejovsky and Stubbs, 2011) . It was used in the Clinical Temp-Eval series (Bethard et al., 2015 (Bethard et al., , 2016 .",
"cite_spans": [
{
"start": 458,
"end": 482,
"text": "(Styler IV et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 525,
"end": 555,
"text": "(Pustejovsky and Stubbs, 2011)",
"ref_id": "BIBREF18"
},
{
"start": 629,
"end": 653,
"text": "(Styler IV et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 809,
"end": 835,
"text": "(Pustejovsky et al., 2003;",
"ref_id": "BIBREF17"
},
{
"start": 836,
"end": 865,
"text": "Pustejovsky and Stubbs, 2011)",
"ref_id": "BIBREF18"
},
{
"start": 913,
"end": 934,
"text": "(Bethard et al., 2015",
"ref_id": "BIBREF2"
},
{
"start": 935,
"end": 958,
"text": "(Bethard et al., , 2016",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While the performance of DocTimeRel models has reached above 0.8 F1 on the THYME corpus, the CONTAINS task remains a challenge for both conventional learning approaches (Sun et al., 2013; Bethard et al., 2015 Bethard et al., , 2016 and neural models (structured perceptrons (Leeuwenberg and Moens, 2017) , convolutional neural networks (CNNs) , and Long Short-Term memory (LSTM) networks (Tourille et al., 2017; Lin et al., 2018; Galvan et al., 2018) ). The difficulty is that the limited labeled data is insufficient for training deep neural models for complex linguistic phenomena. Some recent work (Lin et al., 2019) has used massive pre-trained language models (BERT; Devlin et al., 2018) and their variations (Lee et al., 2019) for this task and significantly increased the CONTAINS score by taking advantage of the rich BERT representations. However, that approach has an input representation that is highly wasteful -the same sentence must be processed multiple times, once for each candidate relation pair.",
"cite_spans": [
{
"start": 169,
"end": 187,
"text": "(Sun et al., 2013;",
"ref_id": "BIBREF23"
},
{
"start": 188,
"end": 208,
"text": "Bethard et al., 2015",
"ref_id": "BIBREF2"
},
{
"start": 209,
"end": 231,
"text": "Bethard et al., , 2016",
"ref_id": "BIBREF3"
},
{
"start": 274,
"end": 303,
"text": "(Leeuwenberg and Moens, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 388,
"end": 411,
"text": "(Tourille et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 412,
"end": 429,
"text": "Lin et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 430,
"end": 450,
"text": "Galvan et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 601,
"end": 619,
"text": "(Lin et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 672,
"end": 692,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 714,
"end": 732,
"text": "(Lee et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inspired by recent work in Green AI (Schwartz et al., 2019; Strubell et al., 2019) , and one-pass encodings for multiple relations extraction (Wang et al., 2019) , we propose a one-pass encoding mechanism for the CONTAINS relation extraction task, which can significantly increase the efficiency and scalability. The architecture is shown in Figure 1. The three novel modifications to the original one-pass relational model of Wang et al. (2019) are: (1) Unlike Wang et al. (2019) , our model operates in the relation extraction setting, meaning it must distinguish between relations and nonrelations, as well as classifying by relation type.",
"cite_spans": [
{
"start": 36,
"end": 59,
"text": "(Schwartz et al., 2019;",
"ref_id": null
},
{
"start": 60,
"end": 82,
"text": "Strubell et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 142,
"end": 161,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 427,
"end": 445,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF25"
},
{
"start": 462,
"end": 480,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 342,
"end": 348,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) We introduce a pooled embedding for relational classification across long distances. Wang et al. (2019) focused on short-distance relations, but clinical CONTAINS relations often span multiple sentences, so a sequence-level embedding is necessary for such long-distance inference. (3) We use the same BERT encoding of the input instance for both Figure 1 : Model Architecture. e1, e2, and t represent entity-embeddings for \"surgery\", \"scheduled\", and \"March 11, 2014\" respectively. G is the pooled embedding for the entire input instance. DocTimeRel and CONTAINS tasks, i.e. adding multi-task learning (MTL) on top of one-pass encoding. DocTimeRel and CONTAINS are related tasks. For example, if a medical event A happens BEFORE the DCT, while event B happens AFTER the DCT, it is unlikely that there is a CONTAINS relation between A and B. MTL provides an effective way to leverage useful knowledge learned in one task to benefit other tasks. What is more, MTL can potentially employ a regularization effect that alleviates overfitting to a specific task.",
"cite_spans": [
{
"start": 89,
"end": 107,
"text": "Wang et al. (2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 350,
"end": 358,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Apache cTAKES (Savova et al., 2010)(http:// ctakes.apache.org) is used for segmenting and tokenizing the THYME corpus in order to generate instances. Each instance is a sequence of tokens with the gold standard event and time expression annotations marked in the token sequences by logging their positional information. Using the entity-aware self-attention based on relative distance (Wang et al., 2019) , we can encode every entity, E i , by its BERT embedding, e i . If an entity e i consists of multiple tokens (many time expressions are multi-token), it is average-pooled (local pool in Figure 1 ) over the embedding of the corresponding tokens in the last BERT layer.",
"cite_spans": [
{
"start": 385,
"end": 404,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 592,
"end": 600,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
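As a quick illustration of the local pooling step above, here is a minimal sketch assuming `last_hidden` is the final BERT layer output for one window and `entity_token_idx` lists the token positions of a single gold entity; the variable names are hypothetical, not taken from the released code.

```python
import torch

def entity_embedding(last_hidden, entity_token_idx):
    """Average-pool the final BERT layer over the tokens of one entity.

    last_hidden: (seq_len, hidden_dim) tensor from the last BERT layer.
    entity_token_idx: token positions of the (possibly multi-token) entity.
    """
    # Single-token entities reduce to their own embedding; multi-token entities
    # (e.g. the TIMEX3 "March 11, 2014") are averaged ("local pool" in Figure 1).
    return last_hidden[entity_token_idx].mean(dim=0)

# Hypothetical usage: a 70-token window with 768-dim BERT-base embeddings.
last_hidden = torch.randn(70, 768)
e_time = entity_embedding(last_hidden, [12, 13, 14, 15])  # multi-token time expression
e_event = entity_embedding(last_hidden, [7])              # single-token event
```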
{
"text": "For the CONTAINS task, we create relation candidates from all pairs of entities within an input sequence. Each candidate is represented by the concatenation of three embeddings, e i , e j , and G, as [G:e i :e j ], where G is an average-pooled embedding over the entire sequence, and is different from the embedding of [CLS] token. The [CLS] token is the conventional token BERT inserts at the start of every input sequence and its embedding is viewed as the representation of the entire sequence. The concatenated embedding is passed to a linear classifier to predict the CONTAINS, CONTAINED-BY, or NONE relation,r ij , as in eq. (1).",
"cite_spans": [
{
"start": 336,
"end": 341,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "P (r ij |x, E i , E j )=sof tmax(W L [G : e i : e j ] + b) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "W L \u2208 R 3dz\u00d7lr , d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "z is the dimension of the BERT embedding, l r = 3 for the CONTAINS labels, b is the bias, and x is the input sequence. Similarly, for the DocTimeRel (dtr) task we feed each entity's embedding, e i , together with the global pooling G, to another linear classifier to predict the entity's five \"temporal statuses\": TIMEX if the entity is a time expression or the dtr type (BEFORE, AFTER, etc.) if the entity is an event:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (d tr i |x, E i ) = sof tmax(W D [G : e i ] + b)",
"eq_num": "(2)"
}
],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "where W D \u2208 R 2dz\u00d7l d , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "l d = 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "For the combined task, we define loss as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(r ij , r ij ) + \u03b1(L(d tr i , dtr i ) + L(d tr j , dtr j ))",
"eq_num": "(3)"
}
],
"section": "Twin Tasks",
"sec_num": "2.1"
},
{
"text": "wherer ij is the predicted relation type,d tr i and dtr j are the predicted temporal statuses for E i and E j respectively, r ij is the gold relation type, and dtr i and dtr j are the gold temporal statuses. \u03b1 is a weight to balance CONTAINS loss and dtr loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin Tasks",
"sec_num": "2.1"
},
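The sketch below mirrors Eqs. (1)-(3) as hypothetical PyTorch modules (the authors' released code base is TensorFlow); G, e_i, and e_j are the pooled and entity embeddings defined above, and `alpha` plays the role of α. Cross-entropy is used here as the loss L over the softmax outputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinTaskHeads(nn.Module):
    """Linear classifiers over shared BERT features: CONTAINS (Eq. 1) and DocTimeRel (Eq. 2)."""
    def __init__(self, d_z=768, n_rel=3, n_dtr=5):
        super().__init__()
        self.rel = nn.Linear(3 * d_z, n_rel)  # W_L applied to [G : e_i : e_j]
        self.dtr = nn.Linear(2 * d_z, n_dtr)  # W_D applied to [G : e_i]

    def forward(self, G, e_i, e_j):
        rel_logits = self.rel(torch.cat([G, e_i, e_j], dim=-1))
        dtr_i_logits = self.dtr(torch.cat([G, e_i], dim=-1))
        dtr_j_logits = self.dtr(torch.cat([G, e_j], dim=-1))
        return rel_logits, dtr_i_logits, dtr_j_logits

def twin_task_loss(rel_logits, dtr_i_logits, dtr_j_logits,
                   rel_gold, dtr_i_gold, dtr_j_gold, alpha=0.01):
    """Combined objective of Eq. (3): CONTAINS loss plus alpha-weighted DocTimeRel losses."""
    return (F.cross_entropy(rel_logits, rel_gold)
            + alpha * (F.cross_entropy(dtr_i_logits, dtr_i_gold)
                       + F.cross_entropy(dtr_j_logits, dtr_j_gold)))
```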
{
"text": "Following Lin et al. (2019), we use a set window of tokens (Token-Window) disregarding natural sentence boundaries for generating instances. BERT may still take punctuation tokens into account. Each token sequence is limited by a set number of entities (Entity-Window) to be processed. We apply a sliding token window (windows may overlap), thus every entity gets processed. Positional information for each entity is output along the token sequence and is propagated through different layers via the entity-aware self-attention mechanism (Wang et al., 2019) .",
"cite_spans": [
{
"start": 538,
"end": 557,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Window-based token sequence processing",
"sec_num": "2.2"
},
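A rough sketch of the window-based instance generation described above, under assumed variable names (`stride`, `entity_spans`); the exact Token-Window/Entity-Window bookkeeping in the authors' pipeline may differ.

```python
def make_windows(tokens, entity_spans, token_window=100, entity_window=8, stride=50):
    """Slide a fixed-size, possibly overlapping token window over a document,
    keeping at most `entity_window` gold entities per window and recording
    their positions relative to the window start."""
    instances = []
    for start in range(0, max(1, len(tokens) - token_window + 1), stride):
        end = start + token_window
        # Entities fully inside the window; overlapping windows are what let
        # entities near a boundary be picked up by a neighboring window.
        inside = [(b - start, e - start) for (b, e) in entity_spans
                  if b >= start and e <= end]
        if inside:
            instances.append({"tokens": tokens[start:end],
                              "entities": inside[:entity_window]})
    return instances
```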
{
"text": "3 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Window-based token sequence processing",
"sec_num": "2.2"
},
{
"text": "We adopt the THYME corpus (Styler IV et al., 2014) for model fine-tuning and evaluation. The 2019's system without and with self-training using silver instances (system predictions on a unlabeled colon cancer set). We tested a one pass system with just argument embeddings; with the [CLS] token as the global context vector ([CLS]); with argument embeddings plus a globally pooled context vector (Pooling); and with global pooling as well as multi-task learning (MTL) with DocTimeRel.",
"cite_spans": [
{
"start": 26,
"end": 50,
"text": "(Styler IV et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Settings",
"sec_num": "3.1"
},
{
"text": "one-pass multi-task model is fine-tuned on the THYME Colon Cancer training set with uncased BERT base model, using the code released by Wang et al. 20191 as a base. The batch size is set to 4, the learning rate is selected from (1e-5, 2e-5, 3e-5, 5e-5), the Token-Window size is selected from (60, 70, 100), the Entity-Window size is selected from (8, 10, 16), the training epochs are selected from (2, 3, 4, 5), the clipping distance k (the maximum relative position to consider) is selected from (3, 4, 5), and \u03b1 is selected from (0.01, 0.05). A single NVIDIA GTX Titan Xp GPU is used for the computation. The best model is selected on the Colon cancer development set and tested on the Colon cancer test set, and on THYME Brain cancer test set for portability assessment. Table 1 shows performance of our one-pass models for the CONTAINS task on the Clinical TempEval colon cancer test set. The one-pass (OP) model alone obtains an F1 score of 0.659. Adding the [CLS] token as the global context vector increases the F1 score to 0.669. Using a globally averagepooled context vectors G instead of [CLS] improves performance to 0.680, better than the multipass model without silver instances (Lin et al., 2019) . Applying the MTL setting, the one-pass twin-task (CONTAINS and DocTimeRel) model without any silver data reaches 0.686 F1, which is on par with the multi-pass model trained with additional silver instances on the CONTAINS task, 1 https://github.com/helloeve/mre-in-one-pass 0.684 F1 (Lin et al., 2019) . Table 2 shows the performance of our one-pass models for the DocTimeRel task on the Clinical TempEval colon cancer test set. The single-task model achieves 0.88 weighted average F1, while the MTL model compromises the performance to 0.86 F1. Of note, this result is not directly comparable to Bethard et al. (2016) results because the Clinical TempEval evaluation script does not take into account if an entity is correctly recognized as a time expression (TIMEX). There are two types of entities in the THYME annotation: events and time expressions (TIMEX). The Bethard et al. (2016) evaluation on DocTimeRel was focused on all events, and classified an event into four Doc-TimeRel types. Our evaluation was for all entities. For a given entity, we classify it as a TIMEX or an event; if it is an event, we classify it into four DocTimeRel types, for a total of five classes. Table 3 shows the portability of our one-pass models on the THYME brain cancer test set. Without any tuning on brain cancer data, the MTL model with global pooling performs at 0.582 F1, which is better than the multi-pass model trained with additional silver instances (0.565 F1) reported in Lin et al. (2019) , trading roughly equal amounts of precision for recall to obtain a better balance. Without MTL, the one-pass CON-TAINS model with global context embeddings (One-pass+Pooling) achieves 0.566 F1 on the brain cancer test set, significantly lower than the MTL Model flops/inst inst # Ratio OP 218,767,889 20k 1 OP+MTL 218,783,260 20k 1 Multi-pass 218,724,880 427k 23 Multi-pass+Silver 218,724,880 497k 25 Table 4 : Computational complexity in flops per instance (flops/inst)\u00d7total number of instances (inst#).",
"cite_spans": [
{
"start": 1193,
"end": 1211,
"text": "(Lin et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 1497,
"end": 1515,
"text": "(Lin et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 1811,
"end": 1832,
"text": "Bethard et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 2081,
"end": 2102,
"text": "Bethard et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 2687,
"end": 2704,
"text": "Lin et al. (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 775,
"end": 782,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1518,
"end": 1525,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 2395,
"end": 2402,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data and Settings",
"sec_num": "3.1"
},
{
"text": "model (using a Wilcoxon Signed-rank test over document-by-document comparisons, as in (Cherry et al., 2013) , p-value=0.01962). Table 4 shows the computational burden for different models in terms of floating point operations (flops). The flops are derived from TensorFlow's profiling tool on saved model graphs. The second column is the flops per one training instance, the third column lists the number of instances for different model settings. The total computational complexity for one training epoch is thus the multiplication between column 2 and 3. The Ratio column is the relative ratio of total complexity using the OP total flops as the comparator. For relation extraction, all entities within a sequence must be paired. If there are n entities in a token sequence, there are n \u00d7 (n \u2212 1)/2 ways to combine those entities for relational candidates. The multi-pass model would encode the same sequence n \u00d7 (n \u2212 1)/2 times, while the one-pass model would only encode it once and add the pairing computation on top of the BERT encoding represented in Figure 1 with very minor increase in computation per one instance (about 43K flops); and the MTL model adds another 15k flops; but they are of the same magnitude, 219K flops. The one-pass models save a lot of passes on the training instances, 20k vs. 497k, which results in a significant difference in computational load, 1 vs. 25, which could be several hours to several days difference in GPU hours. The exact number of training instances processed by the one-pass model is affected by the Token-Window and Entity-Window hyper-parameters. However, even in the worst case scenario, when the Token-Window is set to 100, and the Entity-Window is set to 8, there are 108K training instances for the one-pass model, which is still substantially fewer training instances than what are used for the multi-pass model. In addition, since the one-pass models do not run the extra steps used for generating silver instances (Lin et al., 2019) , the time savings is even greater.",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Cherry et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 1973,
"end": 1991,
"text": "(Lin et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 4",
"ref_id": null
},
{
"start": 1058,
"end": 1066,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on THYME",
"sec_num": "3.2"
},
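A back-of-the-envelope check of the numbers discussed above, using the candidate-pair formula and the per-instance flops and instance counts from Table 4.

```python
# Candidate relation pairs among n entities in one window: n * (n - 1) / 2.
def n_pairs(n):
    return n * (n - 1) // 2

print(n_pairs(8))  # 28 candidate pairs from an 8-entity window

# Total flops per training epoch = flops per instance * number of instances (Table 4).
one_pass   = 218_767_889 * 20_000   # OP
multi_pass = 218_724_880 * 497_000  # Multi-pass + silver
print(round(multi_pass / one_pass))  # ~25, matching the Ratio column
```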
{
"text": "Through table 1 row 3-5, we can see that sequencewise embedding, either global pooling G or [CLS] , is important for clinical temporal relation extraction which involves long-distance relations that may go across multiple natural sentences. Entity embeddings are good for tasks that focus on shortdistance relations (such as (G\u00e1bor et al., 2018) ), but may not be sufficient for picking enough context for long-distance relations.",
"cite_spans": [
{
"start": 92,
"end": 97,
"text": "[CLS]",
"ref_id": null
},
{
"start": 325,
"end": 345,
"text": "(G\u00e1bor et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Combining MTL with a one-pass mechanism produces a more efficient and generalizable model. With merely additional 15k flops (table 4 row 1 and 2), the model achieves high performance for both tasks. However, we found that it is hard for both tasks to get top performance. If the weight for dtr loss is increased, the dtr F1 increases at the cost of the CONTAINS scores. Even though the majority of entities in CONTAINS relations have aligned dtr values (e.g., in Figure 2 (#1), both entities have matching dtr value, AFTER), some relations do have conflicted dtr values. For example, in Figure 2 (#2), the dtr for screening is BEFORE, while test is a BEFORE OVERLAP (the present perfect tense signifies tests happened in the past but lasts through present, hence BEFORE OVERLAP). Even though it is a gold CONTAINS annotation, the model may be confused by an event that happened in the past (screening) to contain another event (test) that is longer than its temporal scope. Due to these conflicts, we thus pick the more challenging CONTAINS task as our priority and set \u03b1 relatively low (0.01) in order to optimize the model towards the CONTAINS task, ignoring some of the dtr errors or conflicts. In the meantime, the MTL setting does help prevent the model from overfitting to one specific task, thus achieving some level of generalization. The significant 1.6% increase in F1-score on the Brain test set in table 3 demonstrates the improved generalizability.",
"cite_spans": [],
"ref_spans": [
{
"start": 463,
"end": 471,
"text": "Figure 2",
"ref_id": null
},
{
"start": 587,
"end": 595,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In conclusion, we built a \"green\" model for a challenging problem. Deployed on a single gpu with 25 times better efficiency, it succeeded in both temporal tasks, achieved better generalizability, and suited to other pre-trained models (Liu et al., 2019; Alsentzer et al., 2019; Beltagy et al., 2019; Lan et al., 2019; Yang et al., 2019, etc.) ",
"cite_spans": [
{
"start": 235,
"end": 253,
"text": "(Liu et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 254,
"end": 277,
"text": "Alsentzer et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 278,
"end": 299,
"text": "Beltagy et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 300,
"end": 317,
"text": "Lan et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 318,
"end": 342,
"text": "Yang et al., 2019, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "CONTAINS Relations with matching(#1)/conflicting(#2) DocTimeRel values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2:",
"sec_num": null
}
],
"back_matter": [
{
"text": "The study was funded by R01LM10090, R01GM114355 and UG3CA243120 from the Unites States National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. We thank the anonymous reviewers for their valuable suggestions and criticism. The Titan Xp GPU used for this research was donated by the NVIDIA Corporation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly available clinical bert embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Willie",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.03323"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John R Murphy, Willie Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scibert: A pretrained language model for scientific text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3606--3611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3606- 3611.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semeval-2015 task 6: Clinical tempeval",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "806--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, Leon Derczynski, Guergana Savova, Guergana Savova, James Pustejovsky, and Marc Ver- hagen. 2015. Semeval-2015 task 6: Clinical temp- eval. In Proceedings of the 9th International Work- shop on Semantic Evaluation (SemEval 2015), pages 806-814.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semeval-2016 task 12: Clinical tempeval. Proceedings of SemEval",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
},
{
"first": "Wei-Te",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "1052--1062",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, Guergana Savova, Wei-Te Chen, Leon Derczynski, James Pustejovsky, and Marc Verhagen. 2016. Semeval-2016 task 12: Clinical tempeval. Proceedings of SemEval, pages 1052-1062.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semeval-2017 task 12: Clinical tempeval. Proceedings of the 11th International Workshop on Semantic Evaluation",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "563--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, Guergana Savova, Martha Palmer, James Pustejovsky, and Marc Verhagen. 2017. Semeval-2017 task 12: Clinical tempeval. Proceed- ings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 563-570.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "la recherche du temps perdu: extracting temporal relations from medical text in the 2012 i2b2 nlp challenge",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Berry",
"middle": [],
"last": "De Bruijn",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of the American Medical Informatics Association",
"volume": "20",
"issue": "5",
"pages": "843--848",
"other_ids": {
"DOI": [
"10.1136/amiajnl-2013-001624"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Cherry, Xiaodan Zhu, Joel Martin, and Berry de Bruijn. 2013. la recherche du temps perdu: ex- tracting temporal relations from medical text in the 2012 i2b2 nlp challenge. Journal of the American Medical Informatics Association, 20(5):843-848.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural temporal relation extraction",
"authors": [
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitriy Dligach, Timothy Miller, Chen Lin, Steven Bethard, and Guergana Savova. 2017. Neural tem- poral relation extraction. EACL 2017, page 746.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "SemEval-2018 task 7: Semantic relation extraction and classification in scientific papers",
"authors": [
{
"first": "Kata",
"middle": [],
"last": "G\u00e1bor",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Buscaldi",
"suffix": ""
},
{
"first": "Anne-Kathrin",
"middle": [],
"last": "Schumann",
"suffix": ""
},
{
"first": "Behrang",
"middle": [],
"last": "Qasemizadeh",
"suffix": ""
},
{
"first": "Ha\u00effa",
"middle": [],
"last": "Zargayouna",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Charnois",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "679--688",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1111"
]
},
"num": null,
"urls": [],
"raw_text": "Kata G\u00e1bor, Davide Buscaldi, Anne-Kathrin Schu- mann, Behrang QasemiZadeh, Ha\u00effa Zargayouna, and Thierry Charnois. 2018. SemEval-2018 task 7: Semantic relation extraction and classification in scientific papers. In Proceedings of The 12th Inter- national Workshop on Semantic Evaluation, pages 679-688, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Investigating the challenges of temporal relation extraction from clinical text",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Galvan",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Matsuda",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "55--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Galvan, Naoaki Okazaki, Koji Matsuda, and Kentaro Inui. 2018. Investigating the challenges of temporal relation extraction from clinical text. In Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis, pages 55-64.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/bioinformatics/btz682"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Structured learning for temporal relation extraction from clinical records",
"authors": [
{
"first": "Tuur",
"middle": [],
"last": "Leeuwenberg",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuur Leeuwenberg and Marie-Francine Moens. 2017. Structured learning for temporal relation extraction from clinical records. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Self-training improves recurrent neural networks performance for temporal relation extraction",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Amiri",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "165--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Lin, Timothy Miller, Dmitriy Dligach, Hadi Amiri, Steven Bethard, and Guergana Savova. 2018. Self-training improves recurrent neural networks performance for temporal relation extraction. In Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis, pages 165-176.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Representations of time expressions for temporal relation extraction with convolutional neural networks",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
}
],
"year": 2017,
"venue": "BioNLP",
"volume": "",
"issue": "",
"pages": "322--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2017. Repre- sentations of time expressions for temporal rela- tion extraction with convolutional neural networks. BioNLP 2017, pages 322-327.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A bert-based universal model for both within-and cross-sentence clinical temporal relation extraction",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "65--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2019. A bert-based universal model for both within-and cross-sentence clinical temporal relation extraction. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 65-71.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Timeml: Robust specification of event and temporal expressions in text",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Castano",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sauri",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Setzer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dragomir R Radev",
"suffix": ""
}
],
"year": 2003,
"venue": "New directions in question answering",
"volume": "3",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, Jos\u00e9 M Castano, Robert Ingria, Roser Sauri, Robert J Gaizauskas, Andrea Set- zer, Graham Katz, and Dragomir R Radev. 2003. Timeml: Robust specification of event and tempo- ral expressions in text. New directions in question answering, 3:28-34.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Increasing informativeness in temporal annotation",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Amber",
"middle": [],
"last": "Stubbs",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "152--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky and Amber Stubbs. 2011. Increas- ing informativeness in temporal annotation. In Pro- ceedings of the 5th Linguistic Annotation Workshop, pages 152-160. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mayo clinical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and applications",
"authors": [
{
"first": "K",
"middle": [],
"last": "Guergana",
"suffix": ""
},
{
"first": "James",
"middle": [
"J"
],
"last": "Savova",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Masanz",
"suffix": ""
},
{
"first": "Jiaping",
"middle": [],
"last": "Philip V Ogren",
"suffix": ""
},
{
"first": "Sunghwan",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Karin",
"middle": [
"C"
],
"last": "Sohn",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"G"
],
"last": "Kipper-Schuler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chute",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "17",
"issue": "5",
"pages": "507--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guergana K Savova, James J Masanz, Philip V Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C Kipper- Schuler, and Christopher G Chute. 2010. Mayo clin- ical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and ap- plications. Journal of the American Medical Infor- matics Association, 17(5):507-513.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Energy and policy considerations for deep learning in nlp",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3645--3650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in nlp. Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 3645-3650.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Temporal annotation in the clinical domain",
"authors": [
{
"first": "I",
"middle": [
"V"
],
"last": "William F Styler",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Finan",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Piet",
"middle": [
"C"
],
"last": "De Groen",
"suffix": ""
},
{
"first": "Brad",
"middle": [],
"last": "Erickson",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "143--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William F Styler IV, Steven Bethard, Sean Finan, Martha Palmer, Sameer Pradhan, Piet C de Groen, Brad Erickson, Timothy Miller, Chen Lin, Guergana Savova, et al. 2014. Temporal annotation in the clin- ical domain. Transactions of the Association for Computational Linguistics, 2:143-154.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Evaluating temporal relations in clinical text: 2012 i2b2 challenge",
"authors": [
{
"first": "Weiyi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of the American Medical Informatics Association",
"volume": "20",
"issue": "5",
"pages": "806--813",
"other_ids": {
"DOI": [
"10.1136/amiajnl-2013-001628"
]
},
"num": null,
"urls": [],
"raw_text": "Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013. Evaluating temporal relations in clinical text: 2012 i2b2 challenge. Journal of the American Medical Informatics Association, 20(5):806-813.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural architecture for temporal relation extraction: A bi-lstm approach for detecting narrative containers",
"authors": [
{
"first": "Julien",
"middle": [],
"last": "Tourille",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Ferret",
"suffix": ""
},
{
"first": "Aurelie",
"middle": [],
"last": "Neveol",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Tannier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "224--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julien Tourille, Olivier Ferret, Aurelie Neveol, and Xavier Tannier. 2017. Neural architecture for tem- poral relation extraction: A bi-lstm approach for de- tecting narrative containers. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), vol- ume 2, pages 224-230.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Extracting multiple-relations in one-pass with pre-trained transformers",
"authors": [
{
"first": "Haoyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dakuo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiaoxiao",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Saloni",
"middle": [],
"last": "Potdar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1371--1377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, and Saloni Potdar. 2019. Extracting multiple-relations in one-pass with pre-trained transformers. Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 1371-1377.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in Neural Infor- mation Processing Systems 32, pages 5754-5764.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Multi-pass</td><td colspan=\"3\">0.735 0.613 0.669</td></tr><tr><td>Multi-pass+Silver</td><td colspan=\"3\">0.674 0.695 0.684</td></tr><tr><td>One-pass</td><td colspan=\"3\">0.647 0.671 0.659</td></tr><tr><td>One-pass+[CLS]</td><td colspan=\"3\">0.665 0.673 0.669</td></tr><tr><td>One-pass+Pooling</td><td colspan=\"3\">0.670 0.689 0.680</td></tr><tr><td colspan=\"4\">One-pass+Pooling+MTL 0.686 0.687 0.686</td></tr></table>",
"num": null,
"text": "Model performance of CONTAINS relation on colon cancer test set. Multi-pass baselines are from Lin et al."
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">: Model performance in F1-scores of tem-</td></tr><tr><td colspan=\"3\">poral statuses on colon cancer test set.</td><td>Single:</td></tr><tr><td colspan=\"4\">One-pass+Pooling for a single dtr Task; MTL: One-</td></tr><tr><td colspan=\"4\">pass+Pooling for twin tasks: CONTAINS and dtr.</td></tr><tr><td>Model</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Lin et al. (2019)</td><td colspan=\"3\">0.473 0.700 0.565</td></tr><tr><td>One-pass+Pooling</td><td colspan=\"3\">0.506 0.643 0.566</td></tr><tr><td colspan=\"4\">One-pass+Pooling+MTL 0.545 0.624 0.582</td></tr></table>",
"num": null,
"text": ""
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Model performance of CONTAINS relation on brain cancer test set."
}
}
}
}