{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:33.797265Z" }, "title": "Global Locality in Biomedical Relation and Event Extraction", "authors": [ { "first": "Elaheh", "middle": [], "last": "Shafieibavani", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research", "location": { "country": "Australia" } }, "email": "elaheh.shafieibavani@ibm.com" }, { "first": "Antonio", "middle": [ "Jimeno" ], "last": "Yepes", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research", "location": { "country": "Australia" } }, "email": "antonio.jimeno@au1.ibm.com" }, { "first": "Xu", "middle": [], "last": "Zhong", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research", "location": { "country": "Australia" } }, "email": "" }, { "first": "David", "middle": [], "last": "Martinez Iraola", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research", "location": { "country": "Australia" } }, "email": "david.martinez.iraola1@ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Due to the exponential growth of biomedical literature, event and relation extraction are important tasks in biomedical text mining. Most work only focus on relation extraction, and detect a single entity pair mention on a short span of text, which is not ideal due to long sentences that appear in biomedical contexts. We propose an approach to both relation and event extraction, for simultaneously predicting relationships between all mention pairs in a text. We also perform an empirical study to discuss different network setups for this purpose. The best performing model includes a set of multi-head attentions and convolutions, an adaptation of the transformer architecture, which offers self-attention the ability to strengthen dependencies among related elements, and models the interaction between features extracted by multiple attention heads. Experiment results demonstrate that our approach outperforms the state of the art on a set of benchmark biomedical corpora including BioNLP 2009, 2011, 2013 and BioCreative 2017 shared tasks.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Due to the exponential growth of biomedical literature, event and relation extraction are important tasks in biomedical text mining. Most work only focus on relation extraction, and detect a single entity pair mention on a short span of text, which is not ideal due to long sentences that appear in biomedical contexts. We propose an approach to both relation and event extraction, for simultaneously predicting relationships between all mention pairs in a text. We also perform an empirical study to discuss different network setups for this purpose. The best performing model includes a set of multi-head attentions and convolutions, an adaptation of the transformer architecture, which offers self-attention the ability to strengthen dependencies among related elements, and models the interaction between features extracted by multiple attention heads. 
Experiment results demonstrate that our approach outperforms the state of the art on a set of benchmark biomedical corpora including BioNLP 2009, 2011, 2013 and BioCreative 2017 shared tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Event and relation extraction has become a key research topic in natural language processing with a variety of practical applications especially in the biomedical domain, where these methods are widely used to extract information from massive document sets, such as scientific literature and patient records. This information contains the interactions between named entities such as proteinprotein, drug-drug, chemical-disease, and more complex events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Relations are usually described as typed, sometimes directed, pairwise links between defined named entities (Bj\u00f6rne et al., 2009) . Event extraction differs from relation extraction in the sense that an event has an annotated trigger word (e.g., a verb), and could be an argument of other events to connect more than two entities. Event extraction is a more complicated task compared to relation extraction due to the tendency of events to capture the semantics of texts. For clarity, Figure 1 shows an example from the GE11 shared task corpus that includes two nested events. Recently, deep neural network models obtain state-of-the-art performance for event and relation extraction. Two major neural network architectures for this purpose include Convolutional Neural Networks (CNNs) (Santos et al., 2015; Zeng et al., 2015) and Recurrent Neural Networks (RNNs) (Mallory et al., 2015; Verga et al., 2015; Zhou et al., 2016) . While CNNs can capture the local features based on the convolution operations and are more suitable for addressing short sentence sequences, RNNs are good at learning long-term dependency features, which are considered more suitable for dealing with long sentences. Therefore, combining the advantages of both models is the key point for improving biomedical event and relation extraction performance .", "cite_spans": [ { "start": 108, "end": 129, "text": "(Bj\u00f6rne et al., 2009)", "ref_id": "BIBREF0" }, { "start": 786, "end": 807, "text": "(Santos et al., 2015;", "ref_id": "BIBREF18" }, { "start": 808, "end": 826, "text": "Zeng et al., 2015)", "ref_id": "BIBREF25" }, { "start": 864, "end": 886, "text": "(Mallory et al., 2015;", "ref_id": "BIBREF12" }, { "start": 887, "end": 906, "text": "Verga et al., 2015;", "ref_id": "BIBREF23" }, { "start": 907, "end": 925, "text": "Zhou et al., 2016)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 485, "end": 493, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, encoding long sequences to incorporate long-distance context is very expensive in RNNs (Verga et al., 2018) due to their computational dependence on the length of the sequence. In addition, computations could not be parallelized since each token's representation requires as input the representation of its previous token. In contrast, CNNs can be executed entirely in parallel across the sequence, and have shown good performance in event and relation extraction (Bj\u00f6rne and Salakoski, 2018) . 
However, the amount of context incorporated into a single token's representation is limited by the depth of the network, and very deep networks can be difficult to learn (Hochreiter, 1998) .", "cite_spans": [ { "start": 96, "end": 116, "text": "(Verga et al., 2018)", "ref_id": "BIBREF24" }, { "start": 473, "end": 501, "text": "(Bj\u00f6rne and Salakoski, 2018)", "ref_id": "BIBREF2" }, { "start": 674, "end": 692, "text": "(Hochreiter, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address these problems, self-attention networks (Parikh et al., 2016; Lin et al., 2017) come into play. They have shown promising empirical results in various natural language processing tasks, such as information extraction (Verga et al., 2018) , machine translation (Vaswani et al., 2017) and natural language inference (Shen et al., 2018) . One of their strengths lies in their high parallelization in computation and flexibility in modeling dependencies regardless of distance by explicitly attending to all the elements. In addition, their performance can be improved by multi-head attention (Vaswani et al., 2017) , which projects the input sequence into multiple subspaces and applies attention to the representation in each subspace.", "cite_spans": [ { "start": 51, "end": 72, "text": "(Parikh et al., 2016;", "ref_id": "BIBREF16" }, { "start": 73, "end": 90, "text": "Lin et al., 2017)", "ref_id": "BIBREF11" }, { "start": 228, "end": 248, "text": "(Verga et al., 2018)", "ref_id": "BIBREF24" }, { "start": 271, "end": 293, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" }, { "start": 325, "end": 344, "text": "(Shen et al., 2018)", "ref_id": "BIBREF21" }, { "start": 600, "end": 622, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a new neural network model that combines multi-head attention mechanisms with a set of convolutions to provide global locality in biomedical event and relation extraction. Convolutions capture the local structure of text, while self-attention learns the global interaction between each pair of words. Hence, our approach models locality for self-attention while the interactions between features are learned by multi-head attentions. The experiment results over the biomedical benchmark corpora show that providing global locality outperforms the existing state of the art for biomedical event and relation extraction. The proposed architecture is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 683, "end": 691, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Conducting a set of experiments over the corpora of the shared tasks for BioNLP 2009, 2011 and 2013, and BioCreative 2017, we compare the performance of our model with the best-performing system (TEES) (Bj\u00f6rne and Salakoski, 2018) in the shared tasks. The results we achieve via precision, recall, and F-score demonstrate that our model obtains state-of-the-art performance. We also empirically assess three variants of our model and elaborate on the results further in the experiments.", "cite_spans": [ { "start": 202, "end": 230, "text": "(Bj\u00f6rne and Salakoski, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. 
Section 2 summarizes the background. The data, and the proposed approach are explained in Sections 3 and 4 respectively. Section 5 explains the experiments and discusses the achieved results. Finally, Section 6 summarizes the findings of the paper and presents future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Biomedical event and relation extraction have been developed thanks to the contribution of corpora generated for community shared tasks (Kim et al., 2009 (Kim et al., , 2011 N\u00e9dellec et al., 2013; Segura Bedmar et al., 2011 Krallinger et al., 2017) . In these tasks, relevant biomedical entities such as genes, proteins and chemicals are given and the information extraction methods aim to identify relations alone or relations and events together within a sentence span.", "cite_spans": [ { "start": 136, "end": 153, "text": "(Kim et al., 2009", "ref_id": "BIBREF6" }, { "start": 154, "end": 173, "text": "(Kim et al., , 2011", "ref_id": "BIBREF7" }, { "start": 174, "end": 196, "text": "N\u00e9dellec et al., 2013;", "ref_id": "BIBREF15" }, { "start": 197, "end": 223, "text": "Segura Bedmar et al., 2011", "ref_id": "BIBREF20" }, { "start": 224, "end": 248, "text": "Krallinger et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "A variety of methods have been evaluated on these tasks, which range from rule based methods to more complex machine learning methods, either supported by shallow or deep learning approaches. Some of the deep learning based methods include CNNs (Bj\u00f6rne and Salakoski, 2018; Santos et al., 2015; Zeng et al., 2015) and RNNs (Li et al., 2019; Mallory et al., 2015; Verga et al., 2015; Zhou et al., 2016) . CNNs will identify local context relations while their performance may suffer when entities need to be identified in a broader context. On the other hand, RNNs are difficult to parallelize while they do not fully solve the long dependency problem (Verga et al., 2018) . Moreover, such approaches are proposed for relation extraction, but not to extract nested events. In this work, we intend to improve over existing methods. We combine a set of parallel multi-head attentions with a set of 1D convolutions to provide global locality in biomedical event and relation extraction. Our approach models locality for self-attention while the interactions between features are learned by multi-head attentions. We evaluate our model on data from the shared tasks for BioNLP 2009, 2011 and 2013, and BioCreative 2017.", "cite_spans": [ { "start": 245, "end": 273, "text": "(Bj\u00f6rne and Salakoski, 2018;", "ref_id": "BIBREF2" }, { "start": 274, "end": 294, "text": "Santos et al., 2015;", "ref_id": "BIBREF18" }, { "start": 295, "end": 313, "text": "Zeng et al., 2015)", "ref_id": "BIBREF25" }, { "start": 323, "end": 340, "text": "(Li et al., 2019;", "ref_id": "BIBREF10" }, { "start": 341, "end": 362, "text": "Mallory et al., 2015;", "ref_id": "BIBREF12" }, { "start": 363, "end": 382, "text": "Verga et al., 2015;", "ref_id": "BIBREF23" }, { "start": 383, "end": 401, "text": "Zhou et al., 2016)", "ref_id": "BIBREF28" }, { "start": 651, "end": 671, "text": "(Verga et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The BioNLP Event Extraction tasks provide the most complex corpora with often large sets of event types and at times relatively small corpus sizes. 
Our proposed approach achieves higher performance on the GE09, GE11, EPI11, ID11, REL11, GE13, CG13 and PC13 BioNLP Shared Task corpora, compared to the top performing system (TEES) (Bj\u00f6rne and Salakoski, 2018) for both relation and event extraction in these tasks. Since the annotations for the test sets of the BioNLP Shared Task corpora are not provided, we uploaded our predictions to the task organizers' servers for evaluation. .", "cite_spans": [ { "start": 330, "end": 358, "text": "(Bj\u00f6rne and Salakoski, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": ". .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Figure 2: Our model architecture for biomedical event and relation extraction: The embedding vectors are merged together before the multi-head attention and convolution layers. The global max pooling is then applied to the results of these operations. Finally, the output layer shows the predicted labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Head Attention", "sec_num": null }, { "text": "The CHEMPROT corpus in the BioCreative VI Chemical-Protein relation extraction task (CP17) also provides a standard comparison with current methods in relation extraction. The CHEMPROT corpus is relatively large compared to its low number of five relation types. Our model outperforms the best-performing system (TEES) (Bj\u00f6rne and Salakoski, 2018) for relation extraction in this task.", "cite_spans": [ { "start": 319, "end": 347, "text": "(Bj\u00f6rne and Salakoski, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-Head Attention", "sec_num": null }, { "text": "We develop and evaluate our approach on a number of event and relation extraction corpora. These corpora originate from three BioNLP Shared Tasks (Kim et al., 2009; Bj\u00f6rne and Salakoski, 2011; N\u00e9dellec et al., 2013) and the BioCreative VI Chemical-Protein relation extraction task (Krallinger et al., 2017) . The BioNLP corpora cover various domains of molecular biology and provide the most complex event annotations. The BioCreative corpora use pairwise relation annotations. Table 1 shows information about these corpora.", "cite_spans": [ { "start": 146, "end": 164, "text": "(Kim et al., 2009;", "ref_id": "BIBREF6" }, { "start": 165, "end": 192, "text": "Bj\u00f6rne and Salakoski, 2011;", "ref_id": "BIBREF1" }, { "start": 193, "end": 215, "text": "N\u00e9dellec et al., 2013)", "ref_id": "BIBREF15" }, { "start": 281, "end": 306, "text": "(Krallinger et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 478, "end": 485, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "For further analysis and experiments, we also used the AMIA gene-mutation corpus available in (Jimeno Yepes et al., 2018 Table 1 : Information about the domain, number of event and entity types (E), number of event argument and relation types (I), and number of sentences (S), related to the corpora of the biomedical shared tasks tions between genes and mutations. 
We extracted about 30% of the training set as the validation set.", "cite_spans": [ { "start": 102, "end": 120, "text": "Yepes et al., 2018", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 121, "end": 128, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We propose a new biomedical event extraction model that is mainly built upon multi-head attentions to learn the global interactions between each pair of tokens, and convolutions to provide locality. The proposed neural network architecture consists of 4 parallel multi-head attentions followed by a set of 1D convolutions with window sizes 1, 3, 5 and 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Our model attends to the most important tokens in the input features 1 , and enhances the feature extraction of dependent elements across multiple heads, irrespective of their distance. Moreover, we model locality for multi-head attentions by restricting the attended tokens to local regions via convolutions. The relation and event extraction task is modelled as a graph representation of events and relations (Bj\u00f6rne and Salakoski, 2018) . Entities and event triggers are nodes, and relations and event arguments are the edges that connect them. An event is modelled as a trigger node and its set of outgoing edges. Relation and event extraction are performed through the following classification tasks: (i) Entity and Trigger Detection, which is a named-entity recognition task where entities and event triggers in a sentence span are detected to generate the graph nodes; (ii) Relation and Event Detection, where relations and event arguments are predicted for all valid pairs of entity and trigger nodes to create the graph edges; (iii) Event Duplication, where each event is classified as an event or a negative which causes unmerging in the graph 2 ; (iv) Modifier Detection, in which event modality (speculation or negation) is detected. In relation extraction tasks where entities are given, only the second classification task is partially used.", "cite_spans": [ { "start": 411, "end": 439, "text": "(Bj\u00f6rne and Salakoski, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The same network architecture is used for all four classification tasks, with the number of predicted labels changing between tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The input is modelled in the context of a sentence window, centered around the target entity, relation or event. The sentence is modelled as a linear sequence of word tokens. Following the work in (Bj\u00f6rne and Salakoski, 2018), we use a set of embedding vectors as the input features, where each unique word token is mapped to the relevant vector space embeddings. We use the pre-trained 200-dimensional word2vec vectors (Mikolov et al., 2013) induced on a combination of the English Wikipedia and the millions of biomedical research articles from PubMed and PubMed Central (Moen and Ananiadou, 2013), along with the 8-dimensional embeddings of relative positions, and distances learned from the input corpus. Following the work in (Zeng et al., 2014) , we use Distance features, where the relative distances to tokens of interest are mapped to their own vec-tors. 
We also consider Relative Position features to identify the locations and roles (i.e., entities, event triggers, and arguments) of tokens in the classified structure. Finally, these embeddings with their learned weights 3 are concatenated together to form an n-dimensional vector e i for each word token. This merged input sequence is then processed by a set of parallel multi-head attentions followed by convolutional layers.", "cite_spans": [ { "start": 420, "end": 442, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF13" }, { "start": 731, "end": 750, "text": "(Zeng et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Inputs", "sec_num": "4.1" }, { "text": "Self-attention networks produce representations by applying attention to each pair of tokens from the input sequence, regardless of their distance. According to the previous work (Vaswani et al., 2017) , multi-head attention applies self-attention multiple times over the same inputs using separately normalized parameters (attention heads) and combines the results, as an alternative to applying one pass of attention with more parameters. The intuition behind this modeling decision is that dividing the attention into multiple heads makes it easier for the model to learn to attend to different types of relevant information with each head. The self-attention updates input embeddings e i by performing a weighted sum over all tokens in the sequence, weighted by their importance for modeling token i. Given an input sequence E = {e 1 , ..., e I } \u2208 R I\u00d7d , the model first projects each input to a key k, value v, and query q, using separate affine transformations with ReLU activations (Glorot et al., 2011) . Here, k, v, and q are each in R^{d/H}, where d indicates the hidden size, and H is the number of heads. The attention weights a^h_ij for head h between tokens i and j are computed using scaled dot-product attention:", "cite_spans": [ { "start": 179, "end": 201, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" }, { "start": 991, "end": 1012, "text": "(Glorot et al., 2011)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-head Attention", "sec_num": "4.2" }, { "text": "a^h_ij = \u03c3( (q^h_i)^T k^h_j / \u221ad )    (1)        o^h_i = \u2211_j v^h_j \u2299 a^h_ij", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head Attention", "sec_num": "4.2" }, { "text": "where o^h_i is the output of attention head h, \u2299 denotes element-wise multiplication, and \u03c3 indicates a softmax along the j-th dimension. The scaled attention is meant to aid optimization by flattening the softmax and better distributing the gradients (Vaswani et al., 2017) . The outputs of the individual attention heads are concatenated into o_i as:", "cite_spans": [ { "start": 252, "end": 274, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-head Attention", "sec_num": "4.2" }, { "text": "o_i = [o^1_i ; ...; o^H_i].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head Attention", "sec_num": "4.2" }, { "text": "Herein, all layers use residual connections between the output of the multi-headed attention and its input. 
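For illustration, the following minimal NumPy sketch (an illustrative sketch only, not the released implementation; the helper name multi_head_attention and the per-head projection matrices Wq, Wk, Wv are assumed, and biases are omitted) computes the per-head attention weights of Eq. (1), concatenates the head outputs, and adds the residual connection:

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(E, Wq, Wk, Wv):
    # E: (I, d) token embeddings; Wq, Wk, Wv: lists of H per-head projections, each of shape (d, d/H).
    heads = []
    for Wq_h, Wk_h, Wv_h in zip(Wq, Wk, Wv):
        q = np.maximum(E @ Wq_h, 0.0)  # linear projections followed by ReLU activations
        k = np.maximum(E @ Wk_h, 0.0)
        v = np.maximum(E @ Wv_h, 0.0)
        a = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # attention weights a^h_ij, Eq. (1), scaled by the per-head dimension
        heads.append(a @ v)  # o^h_i = sum_j a^h_ij * v^h_j
    o = np.concatenate(heads, axis=-1)  # o_i = [o^1_i; ...; o^H_i]
    return E + o  # residual connection; layer normalization is applied to this sum, as described next
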
Layer normalization (Lei Ba et al., 2016) , LN (.), is then applied to the output:", "cite_spans": [ { "start": 128, "end": 149, "text": "(Lei Ba et al., 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Multi-head Attention", "sec_num": "4.2" }, { "text": "m i = LN (e i + o i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head Attention", "sec_num": "4.2" }, { "text": "The multi-head attention layer uses a softmax activation function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head Attention", "sec_num": "4.2" }, { "text": "The multi-head attentions are then followed by a set of parallel 1D convolutions with window sizes 1, 3, 5 and 7. Adding these explicit n-gram modelings helps the model to learn to attend to local features. Our convolutions use the ReLU activation function. We use C(.) to denote a convolutional operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutions", "sec_num": "4.3" }, { "text": "The convolutional portion of the model is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutions", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c i = ReLU (C(m i ))", "eq_num": "(2)" } ], "section": "Convolutions", "sec_num": "4.3" }, { "text": "Global max pooling is then applied to each 1D convolution and the resulting features are merged together into an output vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutions", "sec_num": "4.3" }, { "text": "Finally, the output layer performs the classification, where each label is represented by one neuron. The classification layer uses the sigmoid activation function. Classification is performed as multilabel classification where each example may have zero, one or multiple positive labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification", "sec_num": "4.4" }, { "text": "We use the adam optimizer with binary crossentropy and a learning rate of 0.001. Dropout of 0.1 is also applied at two steps of merging input features and global max pooling to provide generalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification", "sec_num": "4.4" }, { "text": "We have conducted a set of experiments to evaluate our proposed approach over the benchmark biomedical corpora. In addition to evaluating our main model (4MHA-4CNN), we have evaluated the performance of three variants of our proposed approach: (i) 4MHA: 4 parallel multi-head attentions apply self-attention multiple times over the input features; (ii) 1MHA: only 1 multi-head attention applies self-attention to the input features; (iii) 4CNN-4MHA: multiple self-attentions are applied to the input features via a set of 1D convolutions 4 . The 4CNN architecture matches the best performing configuration (4CNN -mixed 5 X ensemble) 5 used by TEES (Bj\u00f6rne and Salakoski, 2018) , which is composed of four 1D convolutions with window sizes 1, 3, 5 and 7. In our models and TEES, we set the number of filters for the convolutions to 64. The number of heads for multi-head attentions is also set to 8. 
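To make this configuration concrete, a minimal Keras sketch of the 4MHA-4CNN setup is given below (an illustrative sketch under our reading of Figure 2, not the actual released code; the function name build_4mha_4cnn, the pairing of each attention branch with one convolution window, and the exact placement of dropout, residual connections and layer normalization are assumptions):

import tensorflow as tf
from tensorflow.keras import layers

def build_4mha_4cnn(seq_len, emb_dim, n_labels):
    # Merged input embeddings (word vectors plus position/distance features, already concatenated).
    inputs = layers.Input(shape=(seq_len, emb_dim))
    x = layers.Dropout(0.1)(inputs)  # dropout when merging the input features
    pooled = []
    for window in (1, 3, 5, 7):  # four parallel multi-head attention + Conv1D branches
        att = layers.MultiHeadAttention(num_heads=8, key_dim=emb_dim // 8)(x, x)
        att = layers.LayerNormalization()(layers.Add()([x, att]))  # residual connection and layer normalization
        conv = layers.Conv1D(filters=64, kernel_size=window, padding='same', activation='relu')(att)
        pooled.append(layers.GlobalMaxPooling1D()(conv))  # global max pooling per branch
    merged = layers.Dropout(0.1)(layers.Concatenate()(pooled))  # dropout after global max pooling
    outputs = layers.Dense(n_labels, activation='sigmoid')(merged)  # multi-label output layer
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='binary_crossentropy')
    return model

In this sketch each attention branch feeds one convolution window, mirroring the four window sizes of the TEES 4CNN configuration.
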
The reported results of TEES are achieved by running their out-of-the-box system for different tasks.", "cite_spans": [ { "start": 648, "end": 676, "text": "(Bj\u00f6rne and Salakoski, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "Since training a single model can be prone to overfitting if the validation set is too small (Bj\u00f6rne and Salakoski, 2018), we use mixed 5 model ensemble, which takes 5-best models (out of 20), ranked with micro-averaged F-score on randomized train/validation set split, and considers their averaged predictions. These ensemble predictions are calculated for each label as the average of all the models' predicted confidence scores. Precision, recall, and F-score of the proposed approach and its variants are compared to TEES in Table 2 . Our model (4MHA-4CNN) obtains the state-of-the-art results compared to those of the top performing system (TEES) in different shared tasks: BioNLP (GE09, GE11, EPI11, ID11, REL11, GE13, CG13, PC13), BioCreative (CP17), and the AMIA dataset.", "cite_spans": [], "ref_spans": [ { "start": 529, "end": 536, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "Analyzing the results, we observe that the proposed 4MHA-4CNN model has the best F-score in the majority of datasets except for EPI11, ID11 and CG13, where the proposed MHA models (i.e., 1MHA and 4MHA) have the best F-score and recall. These tasks are related to epigenetics and post-translational modifications (EPI11), infection diseases (ID11) and cancer genetics (CG13), where events typically require long dependencies in most of the cases. It explains why the MHA-alone models are better than when combined with convolutions. The F-scores achieved by 4MHA-4CNN and 4MHA models on GE09 dataset are also very close. In many cases, when using the configurations in which MHA is applied to the input features, both precision and recall are better compared to other configurations. Moreover, having four parallel MHAs applied to the input features outperforms 1MHA and the other potential variants 6 .", "cite_spans": [ { "start": 899, "end": 900, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "In terms of precision, the advantage of applying 4CNN versus 4MHA to the merged input features depends on the dataset. On PC13, the precision when using 4CNN on the merged input features is much higher compared to other configurations, but the recall is significantly lower.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "The proposed 4MHA-4CNN model has also Table 2 : Precision, Recall and F-score, measured on the corpora of various shared tasks for our models, and the state of the art. The best scores (the first and the second highest scores) for each task are bolded and highlighted, respectively. All the results (except those of CP17 and AMIA) are evaluated using the official evaluation program/server of each task.", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 45, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "good recall, except for EPI11, ID11, and CG13, where 4MHA is better. 
As mentioned before, the addition of convolutions after the multi-head attentions might be less useful in these three sets, since sentences in these topics describe interactions for which long context dependencies are present.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "(Figure 3 sample sentence: \"The presence of activating TSH-R mutations has also been demonstrated in differentiated thyroid carcinomas.\") Overall, our observations support the hypothesis that higher recall/F-score is obtained in configurations in which 4MHA is applied first to the merged input features, where CNNs are not as convenient as MHAs to deal with long dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "Besides improving the previous state of the art, the results indicate that combining multi-head attention with convolution provides an effective performance compared to individual components. Among the variants of our model, 4MHA also outperforms TEES over all the shared tasks reported in Table 2 . Even though convolutions are quite effective (Bj\u00f6rne and Salakoski, 2018) on their own, multi-head attentions improve their performance by being able to deal with longer dependencies. Figure 3 shows the multi-head attention (sum of the attention of all heads) of the \"relation and event detection\" classification task for different proposed network architectures (4MHA-4CNN, 1MHA, and 4MHA) on a sample sentence \"The presence of activating TSH-R mutations has also been demonstrated in differentiated thyroid carcinomas.\". In the 4MHA and 4MHA-4CNN models, the four multi-head attention layers contribute distinctively different attentions from each other. This allows the 4MHA and 4MHA-4CNN models to independently exploit more relationships between the tokens than the 1MHA model. In addition, the convolutions make the 4MHA-4CNN model have more focused attentions on certain important tokens than the 4MHA model.", "cite_spans": [ { "start": 346, "end": 374, "text": "(Bj\u00f6rne and Salakoski, 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 290, "end": 298, "text": "Table 2", "ref_id": null }, { "start": 481, "end": 489, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5.1" }, { "text": "Considering the computational complexity, according to the work in (Vaswani et al., 2017) , self-attention has a cost that is quadratic with the length of the sequence, while the convolution cost is quadratic with respect to the representation dimension of the data. The representation dimension of the data is typically higher compared to the length of individual sentences. Outperforming convolutions in terms of computational complexity and F-score, multi-head attention mechanisms seem to be better suited. 
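As a rough, purely illustrative comparison (the sentence length and dimensionality below are assumed for the example, not measured on our corpora): with per-layer costs of O(n^2 * d) for self-attention and O(k * n * d^2) for a convolution of kernel width k (Vaswani et al., 2017), a sentence of n = 30 tokens with d = 200-dimensional representations gives roughly 30^2 * 200 = 180,000 operations for self-attention versus 3 * 30 * 200^2 = 3,600,000 for a width-3 convolution; self-attention remains the cheaper of the two whenever n < k * d.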
Although the addition of convolutions after the multi-head makes the model more expensive, the lower representation dimension of the filters reduces the cost.", "cite_spans": [ { "start": 67, "end": 89, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.1" }, { "text": "We have performed error analysis on the baseline system (TEES), and our approach 7 over the genemutation AMIA and CP17 datasets 8 , and observed the following sources of error. Relations involving multiple entities: This is a major source of false negatives for TEES, while our approach exhibits a more robust behavior and achieves full recall. The reason would be the ability of multi-head attention to jointly attend to information from different representation subspaces at different positions (Vaswani et al., 2017) . In an example from the AMIA dataset (Figure 4 (a) ), there is a \"has mutation\" relationship between the term \"mutations\" and the three gene-protein entities of \"MLH1\", \"MSH2\", and \"MSH6\". While the stateof-the-art approach only finds the relation between the mutation and the first gene-protein (MLH1) and ignores the other two relations, our approach captures the relations between the mutation and all three entities (MLH1, MSH2, and MSH6).", "cite_spans": [ { "start": 497, "end": 519, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 558, "end": 571, "text": "(Figure 4 (a)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.2" }, { "text": "Long-distance dependencies: TEES also seems to have difficulty in annotating long-distance relations, as in the missed relation between \"deletions\" and \"TGF-\u03b2\" in an example from the AMIA dataset (Figure 4 (b) ), which is captured by our approach. We explored this issue further by plotting the performance of different proposed architectures and that of TEES over different distances. We relied on the CP17 dataset, since the test set is considerably larger than AMIA. We performed this analysis for the best performing network architecture proposed (4MHA-4CNN) along with 4MHA and 4CNN architectures separately as the individual components, to study how these architectures behave in capturing distant relations. We measure the distance as the number of tokens between the farthest entities involved in a relation, by employing the tokenization carried out by the TEES pre-processing tool. The results are provided in Figure 3 . Regardless of the evaluation metric used, we observe that the scores decrease at longer distances, and 4MHA outperforms the other two architectures, which lies in the ability of multi-head attention to capture long distance dependencies. This experiment shows how 4MHA provides glob-ality in 4MHA-4CNN, which slightly outperforms 4CNN in longer distances.", "cite_spans": [], "ref_spans": [ { "start": 196, "end": 209, "text": "(Figure 4 (b)", "ref_id": "FIGREF3" }, { "start": 920, "end": 928, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.2" }, { "text": "Negative or speculative contexts: Regarding the false positives for TEES that are generally well handled by our system, the annotation of speculative or negative language seems to be problematic. For instance, as depicted in Figure 4 (c) , TEES incorrectly captures the relation between \"mutation\" and \"SMAD2\", despite the negative cue, \"inactivating\". 
Even though our approach correctly ignores this false positive in the short distance, it still captures speculative long dependencies, which motivates a natural extension of our work in future.", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 237, "text": "Figure 4 (c)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.2" }, { "text": "We have proposed a novel architecture based on multi-head attention and convolutions, which deals with the long dependencies typical of biomedical literature. The results show that this architecture outperforms the state of the art on existing biomedical information extraction corpora. While multihead attention identifies long dependencies in extracting relations and events, convolutions provide the additional benefit of capturing more local relations, which improves the performance of existing approaches. The finding that CNN-before-MHA is outperformed by MHA-before-CNN is very interesting and could be used as a competitive baseline for future work. Our ongoing work includes generalizing our findings to other non-biomedical information extraction tasks. Current work is focused on event and relation extraction from a single short/long sentence; we would like to experiment with additional contents to study the behaviour of these models across sentence boundaries (Verga et al., 2018) . Finally, we intend to extend our approach to deal with speculative contexts by considering more semantic linguistic features, e.g., sense embeddings (Rothe and Sch\u00fctze, 2015) on biomedical literature.", "cite_spans": [ { "start": 976, "end": 996, "text": "(Verga et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We choose different embeddings for each task/dataset to be in line with TEES.2 Since events are n-ary relations, event nodes may overlap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The only exception is for the word vectors, where the original weights are used to provide generalization to words outside the task's training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also conducted experiments with 1CNN-1MHA and 1MHA-1CNN, which are excluded due to the poor performance.5 We use 4CNN to represent this configuration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The experiment with 8MHA, and multiple MHAs one after the other on the whole sequence are excluded from the paper due to the poor perfromance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We consider the same configuration for the convolutions in both TEES and our approach.8 We only use these datasets for error analysis due to the limited access to the gold set of other datasets. 
Hence, this error analysis only covers relation extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Extracting complex biological events with rich graphbased feature sets", "authors": [ { "first": "Jari", "middle": [], "last": "Bj\u00f6rne", "suffix": "" }, { "first": "Juho", "middle": [], "last": "Heimonen", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Antti", "middle": [], "last": "Airola", "suffix": "" }, { "first": "Tapio", "middle": [], "last": "Pahikkala", "suffix": "" }, { "first": "Tapio", "middle": [], "last": "Salakoski", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task", "volume": "", "issue": "", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jari Bj\u00f6rne, Juho Heimonen, Filip Ginter, Antti Airola, Tapio Pahikkala, and Tapio Salakoski. 2009. Ex- tracting complex biological events with rich graph- based feature sets. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task, pages 10-18. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generalizing biomedical event extraction", "authors": [ { "first": "Jari", "middle": [], "last": "Bj\u00f6rne", "suffix": "" }, { "first": "Tapio", "middle": [], "last": "Salakoski", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the BioNLP Shared Task 2011 Workshop", "volume": "", "issue": "", "pages": "183--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jari Bj\u00f6rne and Tapio Salakoski. 2011. Generalizing biomedical event extraction. In Proceedings of the BioNLP Shared Task 2011 Workshop, pages 183- 191. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Biomedical event extraction using convolutional neural networks and dependency parsing", "authors": [ { "first": "Jari", "middle": [], "last": "Bj\u00f6rne", "suffix": "" }, { "first": "Tapio", "middle": [], "last": "Salakoski", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the BioNLP 2018 workshop", "volume": "", "issue": "", "pages": "98--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jari Bj\u00f6rne and Tapio Salakoski. 2018. Biomedi- cal event extraction using convolutional neural net- works and dependency parsing. In Proceedings of the BioNLP 2018 workshop, pages 98-108.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deep sparse rectifier neural networks", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the fourteenth international conference on artificial intelligence and statistics", "volume": "", "issue": "", "pages": "315--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. 
In Pro- ceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315- 323.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The vanishing gradient problem during learning recurrent neural nets and problem solutions", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" } ], "year": 1998, "venue": "International Journal of Uncertainty", "volume": "6", "issue": "02", "pages": "107--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter. 1998. The vanishing gradient prob- lem during learning recurrent neural nets and prob- lem solutions. International Journal of Uncer- tainty, Fuzziness and Knowledge-Based Systems, 6(02):107-116.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A hybrid approach for automated mutation annotation of the extended human mutation landscape in scientific literature", "authors": [ { "first": "Antonio Jimeno", "middle": [], "last": "Yepes", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mackinlay", "suffix": "" }, { "first": "Natalie", "middle": [], "last": "Gunn", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Schieber", "suffix": "" }, { "first": "Noel", "middle": [], "last": "Faux", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Downton", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Goudey", "suffix": "" }, { "first": "Richard L", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2018, "venue": "AMIA Annual Symposium Proceedings", "volume": "2018", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antonio Jimeno Yepes, Andrew MacKinlay, Natalie Gunn, Christine Schieber, Noel Faux, Matthew Downton, Benjamin Goudey, and Richard L Martin. 2018. A hybrid approach for automated mutation annotation of the extended human mutation land- scape in scientific literature. In AMIA Annual Sym- posium Proceedings, volume 2018, page 616. Amer- ican Medical Informatics Association.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Overview of bionlp'09 shared task on event extraction", "authors": [ { "first": "Jin-Dong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Ohta", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Yoshinobu", "middle": [], "last": "Kano", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin-Dong Kim, Tomoko Ohta, Sampo Pyysalo, Yoshi- nobu Kano, and Jun'ichi Tsujii. 2009. Overview of bionlp'09 shared task on event extraction. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task, pages 1-9. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Overview of genia event task in bionlp shared task", "authors": [ { "first": "Jin-Dong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Toshihisa", "middle": [], "last": "Takagi", "suffix": "" }, { "first": "Akinori", "middle": [], "last": "Yonezawa", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the BioNLP Shared Task 2011 Workshop", "volume": "", "issue": "", "pages": "7--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin-Dong Kim, Yue Wang, Toshihisa Takagi, and Aki- nori Yonezawa. 2011. Overview of genia event task in bionlp shared task 2011. In Proceedings of the BioNLP Shared Task 2011 Workshop, pages 7-15. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Overview of the biocreative vi chemicalprotein interaction track", "authors": [ { "first": "Martin", "middle": [], "last": "Krallinger", "suffix": "" }, { "first": "Obdulia", "middle": [], "last": "Rabal", "suffix": "" }, { "first": "A", "middle": [], "last": "Saber", "suffix": "" }, { "first": "", "middle": [], "last": "Akhondi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the sixth BioCreative challenge evaluation workshop", "volume": "1", "issue": "", "pages": "141--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Krallinger, Obdulia Rabal, Saber A Akhondi, et al. 2017. Overview of the biocreative vi chemical- protein interaction track. In Proceedings of the sixth BioCreative challenge evaluation workshop, volume 1, pages 141-146.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Biomedical event extraction based on knowledgedriven tree-lstm", "authors": [ { "first": "Diya", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Heng", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diya Li, Lifu Huang, Heng Ji, and Jiawei Han. 2019. Biomedical event extraction based on knowledge- driven tree-lstm. In NAACL-HLT.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A structured self-attentive sentence embedding", "authors": [ { "first": "Zhouhan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Minwei", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Cicero", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.03130" ] }, "num": null, "urls": [], "raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. 
arXiv preprint arXiv:1703.03130.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Large-scale extraction of gene interactions from full-text literature using deepdive", "authors": [ { "first": "K", "middle": [], "last": "Emily", "suffix": "" }, { "first": "Ce", "middle": [], "last": "Mallory", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Russ B", "middle": [], "last": "Re", "suffix": "" }, { "first": "", "middle": [], "last": "Altman", "suffix": "" } ], "year": 2015, "venue": "Bioinformatics", "volume": "32", "issue": "1", "pages": "106--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily K Mallory, Ce Zhang, Christopher Re, and Russ B Altman. 2015. Large-scale extraction of gene interactions from full-text literature using deep- dive. Bioinformatics, 32(1):106-113.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributional semantics resources for biomedical text processing", "authors": [], "year": 2013, "venue": "Proceedings of LBM", "volume": "", "issue": "", "pages": "39--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "SPFGH Moen and Tapio Salakoski2 Sophia Anani- adou. 2013. Distributional semantics resources for biomedical text processing. Proceedings of LBM, pages 39-44.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Overview of bionlp shared task 2013", "authors": [ { "first": "Claire", "middle": [], "last": "N\u00e9dellec", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Bossy", "suffix": "" }, { "first": "Jin-Dong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jung-Jae", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Ohta", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the BioNLP Shared Task 2013 Workshop", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claire N\u00e9dellec, Robert Bossy, Jin-Dong Kim, Jung- Jae Kim, Tomoko Ohta, Sampo Pyysalo, and Pierre Zweigenbaum. 2013. Overview of bionlp shared task 2013. 
In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 1-7.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A decomposable attention model for natural language inference", "authors": [ { "first": "P", "middle": [], "last": "Ankur", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Das", "suffix": "" }, { "first": "", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.01933" ] }, "num": null, "urls": [], "raw_text": "Ankur P Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes", "authors": [ { "first": "Sascha", "middle": [], "last": "Rothe", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1507.01127" ] }, "num": null, "urls": [], "raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. Au- toextend: Extending word embeddings to embed- dings for synsets and lexemes. arXiv preprint arXiv:1507.01127.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Classifying relations by ranking with convolutional neural networks", "authors": [ { "first": "Cicero", "middle": [], "last": "Nogueira Dos Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1504.06580" ] }, "num": null, "urls": [], "raw_text": "Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. arXiv preprint arXiv:1504.06580.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Extraction of drug-drug interactions from biomedical texts (ddiextraction 2013). Association for Computational Linguistics", "authors": [ { "first": "Isabel", "middle": [], "last": "Segura Bedmar", "suffix": "" }, { "first": "Paloma", "middle": [], "last": "Mart\u00ednez", "suffix": "" }, { "first": "Mar\u00eda Herrero", "middle": [], "last": "Zazo", "suffix": "" } ], "year": 2013, "venue": "", "volume": "9", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isabel Segura Bedmar, Paloma Mart\u00ednez, and Mar\u00eda Herrero Zazo. 2013. Semeval-2013 task 9: Ex- traction of drug-drug interactions from biomedical texts (ddiextraction 2013). 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The 1st ddiextraction-2011 challenge task: Extraction of drug-drug interactions from biomedical texts", "authors": [ { "first": "Isabel", "middle": [], "last": "Segura Bedmar", "suffix": "" }, { "first": "Paloma", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "Daniel S\u00e1nchez", "middle": [], "last": "Cisneros", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isabel Segura Bedmar, Paloma Martinez, and Daniel S\u00e1nchez Cisneros. 2011. The 1st ddiextraction-2011 challenge task: Extraction of drug-drug interactions from biomedical texts.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Disan: Directional self-attention network for rnn/cnn-free language understanding", "authors": [ { "first": "Tao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Tianyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Long", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Shirui", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chengqi", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Di- rectional self-attention network for rnn/cnn-free lan- guage understanding. In Thirty-Second AAAI Con- ference on Artificial Intelligence.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Multilingual relation extraction using compositional universal schema", "authors": [ { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "David", "middle": [], "last": "Belanger", "suffix": "" }, { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.06396" ] }, "num": null, "urls": [], "raw_text": "Patrick Verga, David Belanger, Emma Strubell, Ben- jamin Roth, and Andrew McCallum. 2015. 
Multilingual relation extraction using compositional universal schema. arXiv preprint arXiv:1511.06396.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Simultaneously self-attending to all mentions for full-abstract biological relation extraction", "authors": [ { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.10569" ] }, "num": null, "urls": [], "raw_text": "Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. arXiv preprint arXiv:1802.10569.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Distant supervision for relation extraction via piecewise convolutional neural networks", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1753--1762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1753-1762.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Relation classification via convolutional deep neural network", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A hybrid model based on neural networks for biomedical relation extraction", "authors": [ { "first": "Yijia", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hongfei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhihao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shaowu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuanyuan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "Journal of biomedical informatics", "volume": "81", "issue": "", "pages": "83--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yijia Zhang, Hongfei Lin, Zhihao Yang, Jian Wang, Shaowu Zhang, Yuanyuan Sun, and Liang Yang. 2018. A hybrid model based on neural networks for biomedical relation extraction.
Journal of biomedical informatics, 81:83-92.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Attention-based bidirectional long short-term memory networks for relation classification", "authors": [ { "first": "Peng", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Zhenyu", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Bingchen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hongwei", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "207--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 207-212.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Example of nested events from GE11 shared task", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "d indicates the hidden size, H is the number of heads, and a_h denotes the attention weights of head h", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Visualization of multi-head attention in different architectures", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Error analysis of TEES and our approach over the gene-mutation AMIA dataset", "type_str": "figure", "uris": null }, "TABREF1": { "type_str": "table", "text": "The training/testing sets contain 2656/385 mentions of mutations, 2799/280 of genes or proteins, and 1617/130 relations.", "html": null, "content": "
Corpus | Domain | E | I | S
GE09 | Molecular Biology | 10 | 6 | 11380
GE11 | Molecular Biology | 10 | 6 | 14958
EPI11 | Epigenetics and PTMs | 16 | 6 | 11772
ID11 | Infectious Diseases | 11 | 7 | 5118
REL11 | Entity Relations | 1 | 2 | 11351
GE13 | Molecular Biology | 15 | 6 | 8369
CG13 | Cancer Genetics | 42 | 9 | 5938
PC13 | Pathway Curation | 24 | 9 | 5040
CP17 | Chemical-Protein Int. | - | 5 | 24594
", "num": null }, "TABREF4": { "type_str": "table", "text": "Empirical evaluation of long-distance dependencies on CP17", "html": null, "content": "", "num": null } } } }