{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:27.806826Z" }, "title": "A Data-driven Approach for Noise Reduction in Distantly Supervised Biomedical Relation Extraction", "authors": [ { "first": "Saadullah", "middle": [], "last": "Amin", "suffix": "", "affiliation": { "laboratory": "German Research Center for Artificial Intelligence (DFKI) Multilinguality and Language Technology Lab", "institution": "", "location": {} }, "email": "saadullah.amin@dfki.de" }, { "first": "Katherine", "middle": [ "Ann" ], "last": "Dunfield", "suffix": "", "affiliation": { "laboratory": "German Research Center for Artificial Intelligence (DFKI) Multilinguality and Language Technology Lab", "institution": "", "location": {} }, "email": "katherine.dunfield@dfki.de" }, { "first": "Anna", "middle": [], "last": "Vechkaeva", "suffix": "", "affiliation": { "laboratory": "German Research Center for Artificial Intelligence (DFKI) Multilinguality and Language Technology Lab", "institution": "", "location": {} }, "email": "anna.vechkaeva@dfki.de" }, { "first": "G\u00fcnter", "middle": [], "last": "Neumann", "suffix": "", "affiliation": { "laboratory": "German Research Center for Artificial Intelligence (DFKI) Multilinguality and Language Technology Lab", "institution": "", "location": {} }, "email": "guenter.neumann@dfki.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Fact triples are a common form of structured knowledge used within the biomedical domain. As the amount of unstructured scientific texts continues to grow, manual annotation of these texts for the task of relation extraction becomes increasingly expensive. Distant supervision offers a viable approach to combat this by quickly producing large amounts of labeled, but considerably noisy, data. We aim to reduce such noise by extending an entity-enriched relation classification BERT model to the problem of multiple instance learning, and defining a simple data encoding scheme that significantly reduces noise, reaching state-of-the-art performance for distantly-supervised biomedical relation extraction. Our approach further encodes knowledge about the direction of relation triples, allowing for increased focus on relation learning by reducing noise and alleviating the need for joint learning with knowledge graph completion.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Fact triples are a common form of structured knowledge used within the biomedical domain. As the amount of unstructured scientific texts continues to grow, manual annotation of these texts for the task of relation extraction becomes increasingly expensive. Distant supervision offers a viable approach to combat this by quickly producing large amounts of labeled, but considerably noisy, data. We aim to reduce such noise by extending an entity-enriched relation classification BERT model to the problem of multiple instance learning, and defining a simple data encoding scheme that significantly reduces noise, reaching state-of-the-art performance for distantly-supervised biomedical relation extraction. 
Our approach further encodes knowledge about the direction of relation triples, allowing for increased focus on relation learning by reducing noise and alleviating the need for joint learning with knowledge graph completion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Relation extraction (RE) remains an important natural language processing task for understanding the interaction between entities that appear in texts. In supervised settings (GuoDong et al., 2005; Zeng et al., 2014; Wang et al., 2016) , obtaining fine-grained relations for the biomedical domain is challenging due to not only the annotation costs, but the added requirement of domain expertise. Distant supervision (DS), however, provides a meaningful way to obtain large-scale data for RE (Mintz et al., 2009; Hoffmann et al., 2011) , but this form of data collection also tends to result in an increased amount of noise, as the target relation may not always be expressed (Takamatsu et al., 2012; Ritter et al., 2013) . Exemplified in Figure 1 , the last two sentences can be seen as potentially noisy evidence, as they do not explicitly express the given relation. (Figure 1: Example of a distantly supervised bag of sentences for a knowledge base tuple (neurofibromatosis 1, breast cancer) with special order-sensitive entity markers to capture the position and the latent relation direction with BERT for predicting the missing relation.)", "cite_spans": [ { "start": 175, "end": 197, "text": "(GuoDong et al., 2005;", "ref_id": "BIBREF4" }, { "start": 198, "end": 216, "text": "Zeng et al., 2014;", "ref_id": "BIBREF28" }, { "start": 217, "end": 235, "text": "Wang et al., 2016)", "ref_id": "BIBREF23" }, { "start": 491, "end": 511, "text": "(Mintz et al., 2009;", "ref_id": "BIBREF14" }, { "start": 512, "end": 534, "text": "Hoffmann et al., 2011)", "ref_id": "BIBREF8" }, { "start": 675, "end": 699, "text": "(Takamatsu et al., 2012;", "ref_id": "BIBREF21" }, { "start": 700, "end": 720, "text": "Ritter et al., 2013)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 738, "end": 746, "text": "Figure 1", "ref_id": null }, { "start": 762, "end": 770, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since individual instance labels may be unknown (Wang et al., 2018) , we instead build on the recent findings of Wu and He (2019) and Soares et al. (2019) in using positional markings and latent relation direction (Figure 1 ) as a signal to mitigate noise in bag-level multiple instance learning (MIL) for distantly supervised biomedical RE. Our approach greatly simplifies previous work by Dai et al. (2019) with the following contributions:", "cite_spans": [ { "start": 48, "end": 67, "text": "(Wang et al., 2018)", "ref_id": "BIBREF24" }, { "start": 113, "end": 129, "text": "Wu and He (2019)", "ref_id": "BIBREF26" }, { "start": 134, "end": 154, "text": "Soares et al. (2019)", "ref_id": "BIBREF18" }, { "start": 392, "end": 409, "text": "Dai et al. 
(2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 214, "end": 223, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We extend sentence-level relation enriched BERT (Wu and He, 2019) to bag-level MIL.", "cite_spans": [ { "start": 50, "end": 67, "text": "(Wu and He, 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We demonstrate that simple applications of this model under-perform and require knowledge base order-sensitive markings, k-tag, to achieve state-of-the-art performance. This data encoding scheme captures the latent relation direction and provides a simple way to reduce noise in distant supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We make our code and data creation pipeline publicly available: https://github.com/suamin/umls-medline-distant-re", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In MIL-based distant supervision for corpus-level RE, earlier works rely on the assumption that at least one of the evidence samples represents the target relation in a triple (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) . Recently, piecewise convolutional neural networks (PCNN) (Zeng et al., 2014) have been applied to DS (Zeng et al., 2015) , with notable extensions in selective attention (Lin et al., 2016) and the modelling of noise dynamics (Luo et al., 2017) . Han et al. (2018a) proposed a joint learning framework for knowledge graph completion (KGC) and RE with mutual attention, showing that DS improves downstream KGC performance, while KGC acts as an indirect signal to filter textual noise. Dai et al. (2019) extended this framework to biomedical RE, using improved KGC models, ComplEx (Trouillon et al., 2017) and SimplE (Kazemi and Poole, 2018) , as well as additional auxiliary tasks of entity-type classification and named entity recognition to mitigate noise. Pre-trained language models, such as BERT (Devlin et al., 2019) , have been shown to improve the downstream performance of many NLP tasks. Relevant to distant RE, Alt et al. (2019) extended the OpenAI Generative Pre-trained Transformer (GPT) model (Radford et al., 2019) for bag-level MIL with selective attention (Lin et al., 2016) . Sun et al. (2019) enriched the pre-training stage with KB entity information, resulting in improved performance. For sentence-level RE, Wu and He (2019) proposed an entity marking strategy for BERT (referred to here as R-BERT) to perform relation classification. Specifically, they mark the entity boundaries with special tokens following the order they appear in the sentence. Likewise, Soares et al. (2019) studied several data encoding schemes and found marking entity boundaries important for sentence-level RE. 
With such encoding, they further proposed a novel pre-training scheme for distributed relational learning, suited to few-shot relation classification (Han et al., 2018b) .", "cite_spans": [ { "start": 175, "end": 196, "text": "(Riedel et al., 2010;", "ref_id": "BIBREF16" }, { "start": 197, "end": 219, "text": "Hoffmann et al., 2011;", "ref_id": "BIBREF8" }, { "start": 220, "end": 242, "text": "Surdeanu et al., 2012)", "ref_id": "BIBREF20" }, { "start": 302, "end": 321, "text": "(Zeng et al., 2014)", "ref_id": "BIBREF28" }, { "start": 346, "end": 365, "text": "(Zeng et al., 2015)", "ref_id": "BIBREF27" }, { "start": 415, "end": 433, "text": "(Lin et al., 2016)", "ref_id": "BIBREF12" }, { "start": 470, "end": 488, "text": "(Luo et al., 2017)", "ref_id": "BIBREF13" }, { "start": 491, "end": 509, "text": "Han et al. (2018a)", "ref_id": "BIBREF6" }, { "start": 728, "end": 745, "text": "Dai et al. (2019)", "ref_id": "BIBREF2" }, { "start": 823, "end": 847, "text": "(Trouillon et al., 2017)", "ref_id": "BIBREF22" }, { "start": 852, "end": 883, "text": "SimplE (Kazemi and Poole, 2018)", "ref_id": null }, { "start": 1044, "end": 1065, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 1165, "end": 1182, "text": "Alt et al. (2019)", "ref_id": "BIBREF0" }, { "start": 1250, "end": 1272, "text": "(Radford et al., 2019)", "ref_id": "BIBREF15" }, { "start": 1316, "end": 1334, "text": "(Lin et al., 2016)", "ref_id": "BIBREF12" }, { "start": 1451, "end": 1467, "text": "Wu and He (2019)", "ref_id": "BIBREF26" }, { "start": 1981, "end": 2000, "text": "(Han et al., 2018b)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our work builds on these findings, in particular, we extend the BERT model (Devlin et al., 2019) for bag-level MIL, similar to Alt et al. (2019) . More importantly, noting the significance of sentenceordered entity marking in sentence-level RE (Wu and He, 2019; Soares et al., 2019) , we introduce the knowledge-based entity marking strategy suited to bag-level DS. This naturally encodes the information stored in KB, reducing the inherent noise.", "cite_spans": [ { "start": 75, "end": 96, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 127, "end": 144, "text": "Alt et al. (2019)", "ref_id": "BIBREF0" }, { "start": 244, "end": 261, "text": "(Wu and He, 2019;", "ref_id": "BIBREF26" }, { "start": 262, "end": 282, "text": "Soares et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Bag-level MIL for Distant RE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Let E and R represent the set of entities and relations from a knowledge base KB, respectively. For h, t \u2208 E and r \u2208 R, let (h, r, t) \u2208 KB be a fact triple for an ordered tuple (h, t). We denote all such (h, t) tuples by a set G + , i.e., there exists some r \u2208 R for which the triple (h, r, t) belongs to the KB, called positive groups. Similarly, we denote by G \u2212 the set of negative groups, i.e., for all r \u2208 R, the triple (h, r, t) does not belong to KB. The union of these groups is represented by G = G + \u222a G \u2212 1 . 
We denote by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "B_g = [s_g^(1), ..., s_g^(m)]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "an unordered sequence of sentences, called a bag, for g \u2208 G such that the sentences contain the group g = (h, t), where the bag size m can vary. Let f be a function that maps each element in the bag to a low-dimensional relation representation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "[r_g^(1), ..., r_g^(m)].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "With o, we represent the bag aggregation function that maps the instance-level relation representations to a final bag representation b_g = o(f(B_g)). The goal of distantly supervised bag-level MIL for corpus-level RE is then to predict the missing relation r given the bag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "Wu and He (2019) and Soares et al. (2019) showed that using special markers for entities with BERT in the order they appear in a sentence encodes the positional information that improves the performance of sentence-level RE. It allows the model to focus on target entities when, possibly, other entities are also present in the sentence, implicitly doing entity disambiguation and reducing noise. In contrast, for bag-level distant supervision, the noisy channel can be attributed to several factors for a given triple (h, r, t) and bag B_g: 1. Evidence sentences may not express the relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Markers", "sec_num": "3.2" }, { "text": "2. Multiple entities appearing in the sentence, requiring the model to disambiguate target entities among others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Markers", "sec_num": "3.2" }, { "text": "3. The direction of the missing relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Markers", "sec_num": "3.2" }, { "text": "4. Discrepancy between the order of the target entities in the sentence and in the knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Markers", "sec_num": "3.2" }, { "text": "To address (1), common approaches are to learn a negative relation class NA and use better bag aggregation strategies (Lin et al., 2016; Luo et al., 2017; Alt et al., 2019) . For (2), encoding positional information is important, such as in PCNN (Zeng et al., 2014) , which takes into account the relative positions of head and tail entities (Zeng et al., 2015) , and in (Wu and He, 2019; Soares et al., 2019) for sentence-level RE. To account for (3) and (4), multi-task learning with KGC and mutual attention has proved effective (Han et al., 2018a; Dai et al., 2019) . Simply extending sentence-sensitive marking to the bag level can be adverse, as it enhances (4), and even if the composition is uniform, it distributes the evidence sentences across several bags. On the other hand, expanding relations to multiple sub-classes based on direction (Wu and He, 2019) enhances class imbalance and also distributes supporting sentences. To jointly address (2), (3) and (4), we introduce a KB-sensitive encoding suitable for bag-level distant RE. 
Formally, for a group g = (h, t) and a matching sentence s_g^(i)", "cite_spans": [ { "start": 118, "end": 136, "text": "(Lin et al., 2016;", "ref_id": "BIBREF12" }, { "start": 137, "end": 154, "text": "Luo et al., 2017;", "ref_id": "BIBREF13" }, { "start": 155, "end": 172, "text": "Alt et al., 2019)", "ref_id": "BIBREF0" }, { "start": 247, "end": 266, "text": "(Zeng et al., 2014)", "ref_id": "BIBREF28" }, { "start": 342, "end": 361, "text": "(Zeng et al., 2015)", "ref_id": "BIBREF27" }, { "start": 371, "end": 388, "text": "(Wu and He, 2019;", "ref_id": "BIBREF26" }, { "start": 389, "end": 409, "text": "Soares et al., 2019)", "ref_id": "BIBREF18" }, { "start": 532, "end": 551, "text": "(Han et al., 2018a;", "ref_id": "BIBREF6" }, { "start": 552, "end": 569, "text": "Dai et al., 2019)", "ref_id": "BIBREF2" }, { "start": 844, "end": 861, "text": "(Wu and He, 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Entity Markers", "sec_num": "3.2" }, { "text": "with tokens (x_0, ..., x_L) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Markers", "sec_num": "3.2" }, { "text": ", we add special tokens $ and \u02c6 to mark the entity spans as follows. Sentence ordered (s-tag): entities are marked in the order they appear in the sentence. Following Soares et al. (2019), let s_1 = (i, j) and s_2 = (k, l) be the index pairs with 0 < i < j \u2212 1, j < k, k \u2264 l \u2212 1 and l \u2264 L, delimiting the entity mentions e_1 = (x_i, ..., x_j) and e_2 = (x_k, ..., x_l) respectively. We mark the boundary of s_1 with $ and s_2 with \u02c6. Note, e_1 and e_2 can be either h or t. KB ordered (k-tag): entities are marked in the order they appear in the KB. Let s_h = (i, j) and s_t = (k, l) be the index pairs delimiting the head (h) and tail (t) entities, irrespective of the order they appear in the sentence. We mark the boundary of s_h with $ and s_t with \u02c6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Markers", "sec_num": "3.2" }, { "text": "The s-tag annotation scheme is followed by Soares et al. (2019) and Wu and He (2019) for span identification. In Wu and He (2019) , each relation type r \u2208 R is further expanded to two sub-classes as r(e_1, e_2) and r(e_2, e_1) to capture direction, while holding the s-tag annotation as fixed. For DS-based RE, since the ordered tuple (h, t) is given, the task is reduced to relation classification without direction. This side information is encoded in data with k-tag, covering (2) but also (3) and (4). To account for (1), we also experiment with selective attention (Lin et al., 2016) which has been widely used in other works (Luo et al., 2017; Han et al., 2018a; Alt et al., 2019) . ", "cite_spans": [ { "start": 113, "end": 129, "text": "Wu and He (2019)", "ref_id": "BIBREF26" }, { "start": 574, "end": 592, "text": "(Lin et al., 2016)", "ref_id": "BIBREF12" }, { "start": 635, "end": 653, "text": "(Luo et al., 2017;", "ref_id": "BIBREF13" }, { "start": 654, "end": 672, "text": "Han et al., 2018a;", "ref_id": "BIBREF6" }, { "start": 673, "end": 690, "text": "Alt et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Entity Markers", "sec_num": "3.2" }, { "text": "BERT (Devlin et al., 2019) is used as our base sentence encoder, specifically, BioBERT (Lee et al., 2020) , and we extend R-BERT (Wu and He, 2019) to bag-level MIL. Figure 2 shows the model's architecture with k-tag. 
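As a minimal sketch of the k-tag marking (for illustration only, not the exact released pipeline; token-level spans with inclusive ends and the KB order of (h, t) are assumed to be given), the head mention is always wrapped with $ and the tail mention with \u02c6, regardless of sentence order:

def k_tag(tokens, head_span, tail_span):
    # head_span and tail_span are (start, end) token indices (end inclusive) of the
    # KB head (h) and tail (t) mentions; h always gets '$' and t always gets '^'
    marked = list(tokens)
    # insert markers from right to left so earlier indices remain valid
    for (start, end), marker in sorted([(head_span, "$"), (tail_span, "^")], key=lambda x: -x[0][0]):
        marked.insert(end + 1, marker)
        marked.insert(start, marker)
    return marked

sent = "Breast cancer risk is increased in women with neurofibromatosis 1 .".split()
# KB tuple (h, t) = (neurofibromatosis 1, breast cancer): h keeps '$' although t appears first
print(" ".join(k_tag(sent, head_span=(8, 9), tail_span=(0, 1))))
# -> ^ Breast cancer ^ risk is increased in women with $ neurofibromatosis 1 $ .

Under s-tag, in contrast, the first-appearing mention would receive $ and the second \u02c6, independent of the KB order. 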
Consider a bag B_g of size m for a group g \u2208 G representing the ordered tuple (h, t), with corresponding spans", "cite_spans": [ { "start": 5, "end": 26, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 87, "end": 105, "text": "(Lee et al., 2020)", "ref_id": "BIBREF10" }, { "start": 129, "end": 146, "text": "(Wu and He, 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 165, "end": 173, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "S_g = [(s_h^(1), s_t^(1)), ..., (s_h^(m), s_t^(m))]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "obtained with k-tag, then for a sentence in the bag and its spans,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "(s^(i), (s_h^(i), s_t^(i))),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "we can represent the model in three steps, such that the first two steps represent the map f and the final step o, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "1. SENTENCE ENCODING: BERT is applied to the sentence and the final hidden state H_0^(i) \u2208 R^d, corresponding to the [CLS] token, is passed through a linear layer 3 W^(1) \u2208 R^{d\u00d7d} with tanh(.) activation to obtain the global sentence information h_0^(i) \u2208 R^d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "2. ENTITY REPRESENTATION: Following R-BERT, the final hidden states of the tokens in the marked head span s_h^(i) and tail span s_t^(i) are used to obtain h_h^(i), h_t^(i) \u2208 R^d, and the three vectors are concatenated into the relation representation r_g^(i) = [h_0^(i) ; h_h^(i) ; h_t^(i)] \u2208 R^{3d}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "3. 
BAG AGGREGATION: After applying the first two steps to each sentence in the bag, we obtain", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "[r_g^(1), ..., r_g^(m)].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "With a final linear layer consisting of a relation matrix M_r \u2208 R^{|R|\u00d73d} and a bias vector b_r \u2208 R^{|R|}, we aggregate the bag information with o in two ways: Average: The bag elements are averaged as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "b_g = (1/m) \u2211_{i=1}^{m} r_g^(i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "Selective attention (Lin et al., 2016) : For a row r in M_r representing the relation r \u2208 R, we get the attention weights as:", "cite_spans": [ { "start": 20, "end": 38, "text": "(Lin et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "\u03b1_i = exp(r^T r_g^(i)) / \u2211_{j=1}^{m} exp(r^T r_g^(j)), b_g = \u2211_{i=1}^{m} \u03b1_i r_g^(i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "Following b_g, a softmax classifier is applied to predict the probability p(r|b_g; \u03b8) of relation r being a true relation, with \u03b8 representing the model parameters; we minimize the cross-entropy loss during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.3" }, { "text": "Similar to (Dai et al., 2019) , UMLS 4 (Bodenreider, 2004) is used as our KB and MEDLINE abstracts 5 as our text source. A data summary is shown in Table 1 (see Appendix A for details on the data creation pipeline). We approximate the same statistics as reported in Dai et al. (2019) for relations and entities, but it is important to note that the data does not contain the same samples. We divided triples into train, validation and test sets, and following (Weston et al., 2013; Dai et al., 2019) , we make sure that there are no overlapping facts across the splits. Additionally, we add another constraint, i.e., there is no sentence-level overlap between the training and held-out sets. To perform negative sampling of groups, for the collection of evidence sentences supporting NA relation type bags, we extend the KGC open-world assumption to bag-level MIL (see A.3). 20% of the data is reserved for testing, and of the remaining 80%, we use 10% for validation and the rest for training. ", "cite_spans": [ { "start": 11, "end": 29, "text": "(Dai et al., 2019)", "ref_id": "BIBREF2" }, { "start": 39, "end": 58, "text": "(Bodenreider, 2004)", "ref_id": "BIBREF1" }, { "start": 266, "end": 283, "text": "Dai et al. (2019)", "ref_id": "BIBREF2" }, { "start": 460, "end": 481, "text": "(Weston et al., 2013;", "ref_id": "BIBREF25" }, { "start": 482, "end": 499, "text": "Dai et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 148, "end": 155, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "We compare each tagging scheme, s-tag and k-tag, with average (avg) and selective attention (attn) bag aggregation functions. 
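As a minimal PyTorch-style sketch of these two aggregation functions (illustration only, not the exact released implementation; r_bag is assumed to hold the m per-sentence relation representations of one bag as an (m, 3d) tensor and rel_row a row of M_r):

import torch

def aggregate_bag(r_bag, rel_row=None):
    # r_bag: (m, 3d) tensor of relation representations r_g^(i) for one bag
    if rel_row is None:
        return r_bag.mean(dim=0)                    # average aggregation
    scores = r_bag @ rel_row                        # r^T r_g^(i) for each sentence
    alpha = torch.softmax(scores, dim=0)            # selective attention weights over the bag
    return (alpha.unsqueeze(1) * r_bag).sum(dim=0)  # weighted bag representation b_g

The resulting b_g is then scored with M_r and b_r by the softmax classifier, as described above. 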
To test the setup of Wu and He (2019), which follows s-tag, we expand each relation type (exprels) r \u2208 R to two sub-classes r(e_1, e_2) and r(e_2, e_1) indicating relation direction from first entity to second and vice versa. For all experiments, we used batch size 2, bag size 16 with sampling (see A.4 for details on bag composition), learning rate 2e\u22125 with linear decay, and 3 epochs. As is standard practice (Weston et al., 2013) , evaluation is performed through constructing candidate triples by combining the entity pairs in the test set with all relations (except NA) and ranking the resulting triples. The extracted triples are matched against the test triples and the precision-recall (PR) curve, area under the PR curve (AUC), F1 measure, and Precision@k, for k in {100, 200, 300, 2000, 4000, 6000}, are reported.", "cite_spans": [ { "start": 545, "end": 566, "text": "(Weston et al., 2013)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Models and Evaluation", "sec_num": "4.2" }, { "text": "Performance metrics are shown in Table 2 and plots of the resulting PR curves in Figure 3 . Since our data differs from Dai et al. (2019) , the AUC cannot be directly compared. However, Precision@k indicates the general performance of extracting the true triples, and can therefore be compared. Generally, models annotated with k-tag perform significantly better than other models, with k-tag+avg achieving state-of-the-art Precision@{2k,4k,6k} compared to the previous best (Dai et al., 2019 ). The best model of Dai et al. (2019) uses a PCNN sentence encoder, with additional tasks of SimplE (Kazemi and Poole, 2018) based KGC and KG-attention, entity-type classification and named entity recognition. In contrast, our data-driven method, k-tag, greatly simplifies this by directly encoding the KB information, i.e., the order of the head and tail entities and, therefore, the latent relation direction. Consider again the example in Figure 1 where our source triple (h, r, t) is (neurofibromatosis 1, associated genetic condition, breast cancer), and only the last sentence has the same order of entities as the KB. This discrepancy is conveniently resolved (note in Figure 2 , for the last sentence the extracted entities' ", "cite_spans": [ { "start": 120, "end": 137, "text": "Dai et al. (2019)", "ref_id": "BIBREF2" }, { "start": 475, "end": 492, "text": "(Dai et al., 2019", "ref_id": "BIBREF2" }, { "start": 514, "end": 531, "text": "Dai et al. (2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 81, "end": 89, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 928, "end": 936, "text": "Figure 1", "ref_id": null }, { "start": 1154, "end": 1162, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "sentence order is flipped to KG order when concatenating, unlike s-tag) with k-tag. (Table 2 columns: Model, Bag Agg., AUC, F1, P@100, P@200, P@300, P@2k, P@4k, P@6k.) We remark that such knowledge can be seen as learned, when jointly modeling with KGC; however, considering the task of bag-level distant RE only, the KG triples are known information and we utilize this information explicitly with k-tag encoding. As PCNN (Zeng et al., 2015) can account for the relative positions of head and tail entities, it also performs better than the models tagged with s-tag using sentence order. Similar to Alt et al. 
(2019) 6, we also note that the pre-trained contextualized models result in sustained long tail performance. s-tag+exprels reflects the direct application of Wu and He (2019) to bag-level MIL for distant RE. In this case, the relations are explicitly extended to model entity direction appearing first to second in the sentence, and vice versa. This implicitly introduces independence between the two sub-classes of the same relation, limiting the gain from shared knowledge. Likewise, with such expanded relations, class imbalance is further enhanced to more fine-grained classes.", "cite_spans": [ { "start": 388, "end": 407, "text": "(Zeng et al., 2015)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Though selective attention (Lin et al., 2016) has been shown to improve the performance of distant RE (Luo et al., 2017; Han et al., 2018a; Alt et al., 2019) , models in our experiments with such an attention mechanism significantly underperformed, in each case bumping the area under the PR curve and making it flatter. We note that more than 50% of bags are under-sized, in many cases, with only 1-2 sentences, requiring repeated over-sampling to match fixed bag size, therefore, making it difficult for attention to learn a distribution over the bag with repetitions, and further adding noise. For such cases, the distribution should ideally be close to uniform, as is the case with averaging, resulting in better performance. We see that the models with k-tag perform better than the s-tag with average aggregation showing consistent performance for long-tail relations.", "cite_spans": [ { "start": 27, "end": 45, "text": "(Lin et al., 2016)", "ref_id": "BIBREF12" }, { "start": 102, "end": 120, "text": "(Luo et al., 2017;", "ref_id": "BIBREF13" }, { "start": 121, "end": 139, "text": "Han et al., 2018a;", "ref_id": "BIBREF6" }, { "start": 140, "end": 157, "text": "Alt et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "This work extends BERT to bag-level MIL and introduces a simple data-driven strategy to reduce the noise in distantly supervised biomedical RE. We note that the position of entities in sentence and the order in KB encodes the latent direction of relation, which plays an important role for learning under such noise. With a relatively simple methodology, we show that this can sufficiently be achieved by reducing the need for additional tasks and highlighting the importance of data quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In this section, we explain the steps taken to create the data for distantly-supervised (DS) biomedical relation extraction (RE). We highlight the importance of a data creation pipeline as the quality of data plays a key role in the downstream performance of our model. We note that a pipeline is likewise important for generating reproducible results, and contributes toward the possibility of having either a benchmark dataset or a repeatable set of rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Data Pipeline", "sec_num": null }, { "text": "The fact triples were obtained for English concepts, filtering for RO relation types only (Dai et al., 2019) . 
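As a rough sketch of this filtering step (illustration only, not the exact released pipeline; it assumes the standard pipe-delimited MRREL.RRF column layout and omits the English-concept filtering via MRCONSO as well as UMLS head/tail orientation conventions):

import csv

# assumed MRREL.RRF columns: CUI1|AUI1|STYPE1|REL|CUI2|AUI2|STYPE2|RELA|...
def iter_ro_triples(mrrel_path):
    with open(mrrel_path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="|"):
            rel, rela = row[3], row[7]
            if rel == "RO" and rela:        # keep only RO rows that carry an explicit relation label
                yield row[0], rela, row[4]  # (CUI, relation text, CUI) triple
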
We collected 9.9M (CUI head, relation text, CUI tail) triples, where CUI represents the concept unique identifier in UMLS.", "cite_spans": [ { "start": 90, "end": 108, "text": "(Dai et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "A.1 UMLS processing", "sec_num": null }, { "text": "From 34.4M abstracts, we extracted 160.4M unique sentences. To perform fast and scalable search, we use the Trie data structure 7 to index all the textual descriptions of UMLS entities. In obtaining a clean set of sentences, we set the minimum and maximum sentence character length to 32 and 256 respectively, and further considered only those sentences where matching entities are mentioned only once. The latter decision is to lower the noise that may come when only one instance of multiple occurrences is marked for a matched entity. With these constraints, the data was reduced to 118.7M matching sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 MEDLINE processing", "sec_num": null }, { "text": "Recall the entity groups G = G + \u222aG \u2212 (Section 3.1). For training with NA relation class, we generate hard negative samples with an open-world assumption (Soares et al., 2019; Lerer et al., 2019) suited to bag-level multiple instance learning (MIL). From 9.9M triples, we removed the relation type and collected 9M CUI groups in the form of (h, t). Since each CUI is linked to more than one textual form, all of the text combinations for two entities must be considered for a given pair, resulting in 531M textual groups T for the 586 relation types.", "cite_spans": [ { "start": 154, "end": 175, "text": "(Soares et al., 2019;", "ref_id": "BIBREF18" }, { "start": 176, "end": 195, "text": "Lerer et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "A.3 Groups linking and negative sampling", "sec_num": null }, { "text": "Next, for each matched sentence, let P 2 s denote the size 2 permutations of entities present in the sentence, then T \u2229 P 2 s return groups which are present in KB and have matching evidence (positive groups, G + ). Simultaneously, with a probability of 1 2 , we remove the h or t entity from this group and replace it with a novel entity e in the sentence, such that the resulting group (e, t) or (h, e) belongs to G \u2212 . This method results in sentences that are seen both for the true triple, as well as for the invalid ones. Further using the constraints that the relation group sizes must be between 10 to 1500, we find 354 8 relation types (approximately the same as Dai et al. (2019) ) with 92K positive groups and 2.1M negative groups, which were reduced to 64K by considering a random subset of 70% of the positive groups. Table 1 provides these summary statistics.", "cite_spans": [ { "start": 672, "end": 689, "text": "Dai et al. (2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 831, "end": 838, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "A.3 Groups linking and negative sampling", "sec_num": null }, { "text": "For bag composition, we created bags of constant size by randomly under-or over-sampling the sentences in the bag to avoid larger bias towards common entities (Soares et al., 2019). The true distribution had a long tail, with more than 50% of the bags having 1 or 2 sentences. We defined a bag to be uniform, if the special markers represent the same entity in each sentence, either h or t. 
If the special markers can take on both h or t, we consider that bag to have a mix composition. The k-tag scheme, on the other hand, naturally generates uniform bags. Further, to support the setting of Wu and He (2019), we followed the s-tag scheme and expanded the relations by adding a suffix to denote the directions as r(e 1 , e 2 ) or r(e 2 , e 1 ), with the exception of the NA class, resulting in 709 classes. For fair comparisons with k-tag, we generated uniform bags with s-tag as well, by keeping e 1 and e 2 the same per bag. Due to these bag composition and class expansion (in one setting, exprels) differences, we generated three different splits, supporting each scheme, with the same test sets in cases where the classes are not expanded and a different test set when the classes are expanded. Table A .1 shows the statistics for these splits. ", "cite_spans": [], "ref_spans": [ { "start": 1201, "end": 1209, "text": "Table A", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "A.4 Bag composition and data splits", "sec_num": null }, { "text": "The sets are disjoint, G + \u2229 G \u2212 = \u2205", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "x0 =[CLS] and xL =[SEP]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Each linear layer is implicitly assumed with a bias vector", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use 2019 release: umls-2019AB-full 5 https://www.nlm.nih.gov/bsd/medline. html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Their model does not use any entity marking strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/vi3k6i5/flashtext", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviewers for helpful feedback. The work was partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 777107 through the project Precise4Q and by the German Federal Ministry of Education and Research (BMBF) through the project DEEPLEE (01IW17001).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fine-tuning pre-trained transformer language models to distantly supervised relation extraction", "authors": [ { "first": "Christoph", "middle": [], "last": "Alt", "suffix": "" }, { "first": "Marc", "middle": [], "last": "H\u00fcbner", "suffix": "" }, { "first": "Leonhard", "middle": [], "last": "Hennig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1388--1398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christoph Alt, Marc H\u00fcbner, and Leonhard Hennig. 2019. Fine-tuning pre-trained transformer language models to distantly supervised relation extraction. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1388-1398.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The unified medical language system (UMLS): integrating biomedical terminology", "authors": [ { "first": "Olivier", "middle": [], "last": "Bodenreider", "suffix": "" } ], "year": 2004, "venue": "Nucleic acids research", "volume": "32", "issue": "1", "pages": "267--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Bodenreider. 2004. The unified medical language system (UMLS): integrating biomed- ical terminology. Nucleic acids research, 32(suppl 1):D267-D270.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Distantly supervised biomedical knowledge acquisition via knowledge graph based attention", "authors": [ { "first": "Qin", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Naoya", "middle": [], "last": "Inoue", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Reisert", "suffix": "" }, { "first": "Ryo", "middle": [], "last": "Takahashi", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop on Extracting Structured Knowledge from Scientific Publications", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Dai, Naoya Inoue, Paul Reisert, Ryo Takahashi, and Kentaro Inui. 2019. Distantly supervised biomedical knowledge acquisition via knowledge graph based attention. In Proceedings of the Work- shop on Extracting Structured Knowledge from Sci- entific Publications, pages 1-10.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Exploring various knowledge in relation extraction", "authors": [ { "first": "Zhou", "middle": [], "last": "Guodong", "suffix": "" }, { "first": "Su", "middle": [], "last": "Jian", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Jie", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Min", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "427--434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation ex- traction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 427-434. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "OpenNRE: An open and extensible toolkit for neural relation extraction", "authors": [ { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Deming", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of EMNLP-IJCNLP: System Demonstrations", "volume": "", "issue": "", "pages": "169--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Han, Tianyu Gao, Yuan Yao, Deming Ye, Zhiyuan Liu, and Maosong Sun. 2019. OpenNRE: An open and extensible toolkit for neural relation extraction. In Proceedings of EMNLP-IJCNLP: System Demon- strations, pages 169-174.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural knowledge acquisition via mutual attention between knowledge graph and text", "authors": [ { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Han, Zhiyuan Liu, and Maosong Sun. 2018a. Neu- ral knowledge acquisition via mutual attention be- tween knowledge graph and text. In Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation", "authors": [ { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Ziyun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4803--4809", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018b. FewRel: A large-scale supervised few-shot relation classifica- tion dataset with state-of-the-art evaluation. 
In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 4803- 4809.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Knowledgebased weak supervision for information extraction of overlapping relations", "authors": [ { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Congle", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "541--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computa- tional Linguistics: Human Language Technologies- Volume 1, pages 541-550. Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Simple embedding for link prediction in knowledge graphs", "authors": [ { "first": "David", "middle": [], "last": "Seyed Mehran Kazemi", "suffix": "" }, { "first": "", "middle": [], "last": "Poole", "suffix": "" } ], "year": 2018, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "4284--4295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. In Advances in neural information processing sys- tems, pages 4284-4295.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining", "authors": [ { "first": "Jinhyuk", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Wonjin", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Sungdong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Donghyeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sunkyu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chan", "middle": [], "last": "Ho So", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2020, "venue": "Bioinformatics", "volume": "36", "issue": "4", "pages": "1234--1240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. 
Bioinformatics, 36(4):1234-1240.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "PyTorch-BigGraph: A largescale graph embedding system", "authors": [ { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Timothee", "middle": [], "last": "Lacroix", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Wehrstedt", "suffix": "" }, { "first": "Abhijit", "middle": [], "last": "Bose", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Peysakhovich", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd SysML Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Lerer, Ledell Wu, Jiajun Shen, Timothee Lacroix, Luca Wehrstedt, Abhijit Bose, and Alex Peysakhovich. 2019. PyTorch-BigGraph: A large- scale graph embedding system. In Proceedings of the 2nd SysML Conference, Palo Alto, CA, USA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Neural relation extraction with selective attention over instances", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shiqi", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2124--2133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 2124-2133.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix", "authors": [ { "first": "Bingfeng", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Yansong", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhanxing", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Songfang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.03995" ] }, "num": null, "urls": [], "raw_text": "Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, and Dongyan Zhao. 2017. Learning with noise: Enhance distantly su- pervised relation extraction with dynamic transition matrix. 
arXiv preprint arXiv:1705.03995.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distant supervision for relation extraction without labeled data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "2", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003-1011. Association for Computational Linguistics.", "links": null },
"BIBREF15": { "ref_id": "b15", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.", "links": null },
"BIBREF16": { "ref_id": "b16", "title": "Modeling relations and their mentions without labeled text", "authors": [ { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Limin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2010, "venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "148--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148-163. Springer.", "links": null },
"BIBREF17": { "ref_id": "b17", "title": "Modeling missing data in distant supervision for information extraction", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Mausam", "middle": [], "last": "", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "367--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Etzioni. 2013. Modeling missing data in distant supervision for information extraction. Transactions of the Association for Computational Linguistics, 1:367-378.", "links": null },
"BIBREF18": { "ref_id": "b18", "title": "Matching the blanks: Distributional similarity for relation learning", "authors": [ { "first": "Livio", "middle": [ "Baldini" ], "last": "Soares", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "FitzGerald", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2895--2905", "other_ids": {}, "num": null, "urls": [], "raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895-2905.", "links": null },
"BIBREF19": { "ref_id": "b19", "title": "ERNIE: Enhanced representation through knowledge integration", "authors": [ { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Xuyi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Danxiang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.09223" ] }, "num": null, "urls": [], "raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.", "links": null },
"BIBREF20": { "ref_id": "b20", "title": "Multi-instance multi-label learning for relation extraction", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning", "volume": "", "issue": "", "pages": "455--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455-465. Association for Computational Linguistics.", "links": null },
"BIBREF21": { "ref_id": "b21", "title": "Reducing wrong labels in distant supervision for relation extraction", "authors": [ { "first": "Shingo", "middle": [], "last": "Takamatsu", "suffix": "" }, { "first": "Issei", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", "volume": "1", "issue": "", "pages": "721--729", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 721-729. Association for Computational Linguistics.", "links": null },
"BIBREF22": { "ref_id": "b22", "title": "Knowledge graph completion via complex tensor factorization", "authors": [ { "first": "Th\u00e9o", "middle": [], "last": "Trouillon", "suffix": "" }, { "first": "Christopher", "middle": [ "R" ], "last": "Dance", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Bouchard", "suffix": "" } ], "year": 2017, "venue": "The Journal of Machine Learning Research", "volume": "18", "issue": "1", "pages": "4735--4772", "other_ids": {}, "num": null, "urls": [], "raw_text": "Th\u00e9o Trouillon, Christopher R Dance, \u00c9ric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. 2017. Knowledge graph completion via complex tensor factorization. The Journal of Machine Learning Research, 18(1):4735-4772.", "links": null },
"BIBREF23": { "ref_id": "b23", "title": "Relation classification via multi-level attention cnns", "authors": [ { "first": "Linlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Gerard", "middle": [ "De" ], "last": "Melo", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1298--1307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linlin Wang, Zhu Cao, Gerard De Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1298-1307.", "links": null },
"BIBREF24": { "ref_id": "b24", "title": "Revisiting multiple instance neural networks", "authors": [ { "first": "Xinggang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yongluan", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Wenyu", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Pattern Recognition", "volume": "74", "issue": "", "pages": "15--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinggang Wang, Yongluan Yan, Peng Tang, Xiang Bai, and Wenyu Liu. 2018. Revisiting multiple instance neural networks. Pattern Recognition, 74:15-24.", "links": null },
"BIBREF25": { "ref_id": "b25", "title": "Connecting language and knowledge bases with embedding models for relation extraction", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" } ], "year": 2013, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1366--1371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Conference on Empirical Methods in Natural Language Processing, pages 1366-1371.", "links": null },
"BIBREF26": { "ref_id": "b26", "title": "Enriching pretrained language model with entity information for relation classification", "authors": [ { "first": "Shanchan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "He", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "2361--2364", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shanchan Wu and Yifan He. 2019. Enriching pretrained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2361-2364.", "links": null },
"BIBREF27": { "ref_id": "b27", "title": "Distant supervision for relation extraction via piecewise convolutional neural networks", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1753--1762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1753-1762.", "links": null },
"BIBREF28": { "ref_id": "b28", "title": "Relation classification via convolutional deep neural network", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2335--2344", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network.
In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335-2344.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "Multiple instance learning (MIL) based bag-level relation classification BERT with KB ordered entity marking (Section 3.2). Special markers $ and \u02c6 always delimit the span of head (h_s, h_e) and tail (t_s, t_e) entities regardless of their order in the sentence. The markers capture the positions of entities and the latent relation direction.", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "The pooled representations are then passed through a shared linear layer W^(2) \u2208 R^(d\u00d7d) with tanh(.) activation to get h. To get the final latent relation representation, we concatenate the pooled entities representation with [CLS] as r.", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "", "type_str": "figure" }, "FIGREF3": { "uris": null, "num": null, "text": "Precision-Recall (PR) curve for different models.", "type_str": "figure" }, "TABREF0": { "content": "
Triples | Entities | Relations | Pos. Groups | Neg. Groups
169,438 | 27,403   | 355       | 92,070      | 64,448
", "type_str": "table", "html": null, "num": null, "text": "Overall statistics of the data." }, "TABREF1": { "content": "", "type_str": "table", "html": null, "num": null, "text": "Relation extraction results for different model configurations and data splits." }, "TABREF3": { "content": "
Model         | Set Type | Triples | Triples (w/o NA) | Groups  | Sentences (Sampled)
k-tag         | train    | 92,972  | 48,563           | 92,972  | 1,487,552
k-tag         | valid    | 13,555  | 8,399            | 15,963  | 255,408
k-tag         | test     | 33,888  | 20,988           | 38,860  | 621,760
s-tag         | train    | 91,555  | 47,588           | 125,852 | 2,013,632
s-tag         | valid    | 13,555  | 8,399            | 22,497  | 359,952
s-tag         | test     | 33,888  | 20,988           | 55,080  | 881,280
s-tag+exprels | train    | 125,155 | 71,402           | 125,439 | 2,007,024
s-tag+exprels | valid    | 22,604  | 16,298           | 22,607  | 361,712
s-tag+exprels | test     | 55,083  | 39,282           | 55,094  | 881,504
", "type_str": "table", "html": null, "num": null, "text": "1: Different data splits." } } } }