{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:19.043962Z" }, "title": "Improving Biomedical Analogical Retrieval with Embedding of Structural Dependencies", "authors": [ { "first": "Amandalynne", "middle": [], "last": "Paullada", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": { "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "paullada@uw.edu" }, { "first": "Bethany", "middle": [], "last": "Percha", "suffix": "", "affiliation": {}, "email": "bethany.percha@mssm.edu" }, { "first": "Trevor", "middle": [], "last": "Cohen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": { "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "cohenta@uw.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Inferring the nature of the relationships between biomedical entities from text is an important problem due to the difficulty of maintaining human-curated knowledge bases in rapidly evolving fields. Neural word embeddings have earned attention for an apparent ability to encode relational information. However, word embedding models that disregard syntax during training are limited in their ability to encode the structural relationships fundamental to cognitive theories of analogy. In this paper, we demonstrate the utility of encoding dependency structure in word embeddings in a model we call Embedding of Structural Dependencies (ESD) as a way to represent biomedical relationships in two analogical retrieval tasks: a relationship retrieval (RR) task, and a literature-based discovery (LBD) task meant to hypothesize plausible relationships between pairs of entities unseen in training. 
We compare our model to skipgram with negative sampling (SGNS), using 19 databases of biomedical relationships as our evaluation data, with improvements in performance on 17 (LBD) and 18 (RR) of these sets. These results suggest embeddings encoding dependency path information are of value for biomedical analogy retrieval.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Inferring the nature of the relationships between biomedical entities from text is an important problem due to the difficulty of maintaining human-curated knowledge bases in rapidly evolving fields. Neural word embeddings have earned attention for an apparent ability to encode relational information. However, word embedding models that disregard syntax during training are limited in their ability to encode the structural relationships fundamental to cognitive theories of analogy. In this paper, we demonstrate the utility of encoding dependency structure in word embeddings in a model we call Embedding of Structural Dependencies (ESD) as a way to represent biomedical relationships in two analogical retrieval tasks: a relationship retrieval (RR) task, and a literature-based discovery (LBD) task meant to hypothesize plausible relationships between pairs of entities unseen in training. We compare our model to skipgram with negative sampling (SGNS), using 19 databases of biomedical relationships as our evaluation data, with improvements in performance on 17 (LBD) and 18 (RR) of these sets. These results suggest embeddings encoding dependency path information are of value for biomedical analogy retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Distributed vector space models of language have been shown to be useful as representations of relatedness and can be applied to information retrieval and knowledge base augmentation, including within the biomedical domain (Cohen and Widdows, 2009) . 
A vast amount of knowledge on biomedical relationships of interest, such as therapeutic relationships, drug-drug interactions, and adverse drug events, exists in largely human-curated knowledge bases (Zhu et al., 2019) . However, the rate at which new papers are published means new relationships are being discovered faster than human curators can manually update the knowledge bases. Furthermore, it is appealing to automatically generate hypotheses about novel relationships given the information in scientific literature (Swanson, 1986) , a process also known as 'literature-based discovery.' A trustworthy model should also be able to reliably represent known relationships that are validated by existing literature.", "cite_spans": [ { "start": 223, "end": 248, "text": "(Cohen and Widdows, 2009)", "ref_id": "BIBREF2" }, { "start": 451, "end": 469, "text": "(Zhu et al., 2019)", "ref_id": "BIBREF37" }, { "start": 776, "end": 791, "text": "(Swanson, 1986)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Neural word embedding techniques such as word2vec 1 and fastText 2 are a widely-used and effective approach to the generation of vector representations of words (Mikolov et al., 2013a) and biomedical concepts (De Vine et al., 2014) . An appealing feature of these models is their capacity to solve proportional analogy problems using simple geometric operators over vectors (Mikolov et al., 2013b) . In this way, it is possible to find analogical relationships between words and concepts without the need to specify the relationship type explicitly, a capacity that has recently been used to identify therapeutically-important drug/gene relationships for precision oncology (Fathiamini et al., 2019) . 
However, neural embeddings are trained to predict co-occurrence events without consideration of syntax, limiting their ability to encode information about relational structure, which is an essential component of cognitive theories of analogical reasoning (Gentner and Markman, 1997) . Additionally, recent work (Peters et al., 2018) has found that contextualized word embeddings from language models such as ELMo, when evaluated on analogy tasks, perform worse on semantic relation tasks than static embedding models.", "cite_spans": [ { "start": 161, "end": 184, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF20" }, { "start": 209, "end": 231, "text": "(De Vine et al., 2014)", "ref_id": "BIBREF7" }, { "start": 374, "end": 397, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF21" }, { "start": 674, "end": 699, "text": "(Fathiamini et al., 2019)", "ref_id": "BIBREF9" }, { "start": 957, "end": 984, "text": "(Gentner and Markman, 1997)", "ref_id": "BIBREF10" }, { "start": 1013, "end": 1034, "text": "(Peters et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The present work explores the utility of encoding syntactic structure in the form of dependency paths into neural word embeddings for analogical retrieval of biomedical relations. To this end, we build and evaluate vector space models for representing biomedical relationships, using a corpus of dependency-parsed sentences from biomedical literature as a source of grammatical representations of relationships between concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We compare two methods for learning biomedical concept embeddings, the skip-gram with negative sampling (SGNS) algorithm (Mikolov et al., 2013a) and Embedding of Semantic Predications (ESP) (Cohen and Widdows, 2017) , which adapts SGNS to encode concept-predicate-concept triples. 
In the current work, we adapt ESP to encode dependency paths, an approach we call Embedding of Structural Dependencies (ESD). We train ESD and SGNS on a corpus of approximately 70 million sentences from biomedical research paper abstracts from Medline, and evaluate each model's ability to solve analogical retrieval problems derived from various biomedical knowledge bases. We train ESD on concept-path-concept triples extracted from these sentences, and SGNS on full sentences that have been minimally preprocessed with named entities (see \u00a73). Figure 1 shows the pipeline from training to evaluation.", "cite_spans": [ { "start": 121, "end": 144, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF20" }, { "start": 190, "end": 215, "text": "(Cohen and Widdows, 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 828, "end": 836, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From an applications perspective, we aim to evaluate the utility of these representations of relationships for two tasks. The first involves correctly identifying a concept that is related in a particular way to another concept, when this relationship has already been described explicitly in the biomedical literature. This task is related to the NLP task of relationship extraction, but rather than considering one sentence at a time, distributional models represent information from across all of the instances in which this pair have co-occurred, as well as information about relationships between similar concepts. We refer to this task as relationship retrieval (RR). The second task involves identifying concepts that are related in a particular way to one another, where this relationship has not been described in the literature previously. 
We refer to this task as literature-based discovery (LBD), as identifying such implicit knowledge is the main goal of this field (Swanson, 1986) .", "cite_spans": [ { "start": 979, "end": 994, "text": "(Swanson, 1986)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate on four kinds of biomedical relationships, characterized by the semantic types of the entity pairs involved, namely chemical-gene, chemical-disease, gene-gene, and gene-disease relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is structured as follows. \u00a72 describes vector space models of language as they are evaluated for their ability to solve proportional analogy problems, as well as prior work in encoding dependency paths for downstream applications in relation extraction. \u00a73 presents the dependency path corpus from Percha and Altman (2018) . \u00a74 summarizes the knowledge bases from which we develop our evaluation data sets. \u00a75 describes the training details for each vector space model. \u00a76 and \u00a77 describe the methods and results for the RR and LBD evaluation paradigms. \u00a78 and \u00a79 offer discussion and conclude the paper. Code and evaluation data will be made available at https://github.com/amandalynne/ESD.", "cite_spans": [ { "start": 326, "end": 350, "text": "Percha and Altman (2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We look to prior work that uses proportional analogies as a test of relationship representation in the general domain, building on existing studies of vector space models trained on generic English. 
While our biomedical data is largely in English, we constrain our evaluation to specific biomedical concepts and relationships as we apply and extend established methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Vector space models of semantics have been applied in information retrieval, cognitive science and computational linguistics for decades (Turney and Pantel, 2010) , with a resurgence of interest in recent years. Mikolov et al. (2013a) and Mikolov et al. (2013b) introduce the skip-gram architecture. This work demonstrated the use of a continuous vector space model of language that could be used for analogical reasoning when vector offset methods are applied, providing the following canonical example: if x i is the vector corresponding to word i, x king -x man + x woman yields a vector that is close in proximity to x queen . This result suggests that the model has learned something about semantic gender. They identified some other linguistic patterns recoverable from the vector space model, such as pluralization: x apple -x apples \u2248 x car -x cars , and developed evaluation sets of proportional analogy problems that have since been widely used as benchmarks for distributional models (see for example (Levy et al., 2015) ).", "cite_spans": [ { "start": 137, "end": 162, "text": "(Turney and Pantel, 2010)", "ref_id": "BIBREF27" }, { "start": 212, "end": 234, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF20" }, { "start": 239, "end": 261, "text": "Mikolov et al. (2013b)", "ref_id": "BIBREF21" }, { "start": 1012, "end": 1031, "text": "(Levy et al., 2015)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Vector space models of language and analogical reasoning", "sec_num": null }, { "text": "However, work soon followed that pointed out some of the shortcomings of attributing these results to the models' analogical reasoning capacity. 
For example, Linzen (2016) showed that the vector for 'queen' is itself one of the nearest neighbors to the vector for 'woman,' and so it can be argued that the model does not actually learn relational information that can be applied to analogical reasoning, but rather, can rely on the direct similarity between the target terms in the analogy to produce desirable results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector space models of language and analogical reasoning", "sec_num": null }, { "text": "Furthermore, Gladkova et al. (2016) introduce the Better Analogy Test Set (BATS) to provide an evaluation set for analogical reasoning that includes a broader set of semantic and syntactic relationships between words. This set proved far more challenging for embedding-based approaches. Newman-Griffis et al. (2017) provide results of vector offset methods applied to a dataset of biomedical analogies derived from UMLS triples, showing that certain biomedical relationships are more difficult to learn with analogical reasoning than others.", "cite_spans": [ { "start": 13, "end": 35, "text": "Gladkova et al. (2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Vector space models of language and analogical reasoning", "sec_num": null }, { "text": "Because the aim of this project is to robustly learn a handful of biomedical relationships, we are less concerned about the linguistic generalizability of these particular representations, but future work will examine the application of these vector space models to analogies in the general domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector space models of language and analogical reasoning", "sec_num": null }, { "text": "Levy and Goldberg (2014a) adapt the SGNS model to encode direct dependency relationships, rather than dependency paths. 
In this approach, a dependency-type/relative pair is treated as a target for prediction when the head of a phrase is observed (e.g. P (scientist/nsubj|discovers)). The dependency-based skipgram embeddings were shown to better reflect the functional roles of words than those trained on narrative text, which tended to emphasize topical associations. Recent work (Zhang et al. (2018) , Zhou et al. (2018) , Li et al. (2019) ) has also integrated dependency path representations in neural architectures for biomedical relation extraction, framing it as a classification task rather than an analogical reasoning task. The work of Washio and Kato (2018) is perhaps the most closely related to our approach, in that neural embeddings are trained on word-path-word triples. Aside from our application of domainspecific Named Entity Recognition (NER), a key methodological difference between this work and the current work is that their approach represents word pairs as a linear transformation of the concatenation of their embeddings, while we use XOR as a binding operator (following the approach of Kanerva (1996) ), which was first used to model biomedical analogical retrieval with semantic predications extracted from the literature by Cohen et al. (2011) 3 . On account of the use of a binding operator, individual entities, pairs of entities and dependency paths are all represented in a common vector space.", "cite_spans": [ { "start": 482, "end": 502, "text": "(Zhang et al. (2018)", "ref_id": "BIBREF35" }, { "start": 505, "end": 523, "text": "Zhou et al. (2018)", "ref_id": "BIBREF36" }, { "start": 526, "end": 542, "text": "Li et al. (2019)", "ref_id": "BIBREF18" }, { "start": 747, "end": 769, "text": "Washio and Kato (2018)", "ref_id": "BIBREF29" }, { "start": 1216, "end": 1230, "text": "Kanerva (1996)", "ref_id": "BIBREF13" }, { "start": 1356, "end": 1375, "text": "Cohen et al. 
(2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Dependency embeddings", "sec_num": null }, { "text": "We train both the ESD and SGNS models on data released by Percha and Altman (2018) . This corpus 4 consists of about 70 million sentences from a subset of MEDLINE (approximately 16.5 million abstracts) which have PubTator (Wei et al., 2013) annotations applied to identify phrases that denote names of chemicals (including drugs and other chemicals of interest), genes (and the proteins they code for), and diseases (including side effects and other phenotypes). [Figure 2: Example of a path of dependencies between two entities of interest. The full parse is not shown, but rather, the minimum path of dependency relations between the two entities given the sentence.] Throughout this paper, we use these shorthand names for each of these categories, following the convention established in Wei et al. 
(2013)", "ref_id": "BIBREF30" }, { "start": 825, "end": 849, "text": "Percha and Altman (2018)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 440, "end": 448, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Text Data", "sec_num": "3" }, { "text": "The following example sentence from an article processed by PubTator shows how multi-word phrases that denote biomedical entities of interest, in this case atypical depression and seasonal affective disorder, are concatenated by underscores to constitute single tokens:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Data", "sec_num": "3" }, { "text": "Chromium has a beneficial effect on eating-related atypical symptoms of depression, and may be a valuable agent in treating atypical depression and seasonal affective disorder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Data", "sec_num": "3" }, { "text": "Percha and Altman (2018) also provide pruned Stanford dependency (De Marneffe and Manning, 2008) parses for the sentences in the corpus, consisting, for each sentence, of the minimal path of dependency relations connecting pairs of biomedical named entities identified by PubTator. Specifically, they extract dependency paths that connect chemicals to genes, chemicals to diseases, genes to diseases, and genes to genes. Figure 2 shows an example of a dependency path of relations between two terms, risperidone and rage. We use these dependency paths as representations for predicates that denote biomedical relationships of interest by concatenating the string representations of each path element, which are shown below the sentence in Figure 2 . Following Percha and Altman (2018), we exclude paths that denote a coordinating conjunction between elements and paths that denote an appositive construction, both of which are highly common in the set. 
In this corpus of 70 million sentences, there are about 44 million unique dependency paths that connect concepts of interest, the vast majority (around 40 million) of which appear just once in the corpus. 540,011 of these paths appear at least 5 times in the corpus.", "cite_spans": [ { "start": 69, "end": 96, "text": "Marneffe and Manning, 2008)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 421, "end": 429, "text": "Figure 2", "ref_id": null }, { "start": 739, "end": 747, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Text Data", "sec_num": "3" }, { "text": "We construct our evaluation data sets with exemplars from knowledge bases for four primary kinds of biomedical relationships, characterized by the interactions between pairs of entities of the following types: chemical-gene, chemical-disease, gene-disease, and gene-gene.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Bases", "sec_num": "4" }, { "text": "We evaluate on pairs of entities from the following knowledge bases: DrugBank (Wishart et al., 2018), Online Mendelian Inheritance in Man (OMIM) (Hamosh et al., 2005) , PharmGKB (PGKB) (Whirl-Carrillo et al., 2012), Reactome (Fabregat et al., 2016) , Side Effect Resource (SIDER) (Kuhn et al., 2016) , and Therapeutic Target Database (TTD) Wang et al. (2020) .", "cite_spans": [ { "start": 145, "end": 166, "text": "(Hamosh et al., 2005)", "ref_id": "BIBREF12" }, { "start": 225, "end": 248, "text": "(Fabregat et al., 2016)", "ref_id": "BIBREF8" }, { "start": 280, "end": 299, "text": "(Kuhn et al., 2016)", "ref_id": "BIBREF14" }, { "start": 340, "end": 358, "text": "Wang et al. (2020)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Bases", "sec_num": "4" }, { "text": "Each knowledge base consists of pairs of entities that relate in a specific way. 
For example, SIDER Side Effects consists of chemical-disease-typed pairs such that the chemical is known to have the disease as a side effect, e.g. (sertraline, insomnia). Meanwhile, another chemical-disease pair from a different database, Therapeutic Target Database (TTD) indications, is such that the chemical is indicated as a treatment for the disease, e.g. (carphenazine, schizophrenia). In constructing our evaluation sets, we process all terms such that they are lower-cased, and multi-word terms are concatenated by underscores. Furthermore, we eliminate from our evaluation sets any knowledge base terms that do not appear in the training corpus described in \u00a73 at least 5 times. It should be noted that across these sets, a single biomedical entity may appear with numerous spellings and naming conventions. Table 2 shows the corresponding relationship type for each of the knowledge bases we use, as well as the number of pairs from each that are used in our evaluation data. The relationship retrieval data consists of knowledge base pairs that appear in our training corpus connected by a dependency path at least once, while the literature-based discovery targets are those knowledge base pairs that do not appear connected by a dependency path in the corpus.", "cite_spans": [], "ref_spans": [ { "start": 900, "end": 907, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Knowledge Bases", "sec_num": "4" }, { "text": "SGNS With SGNS, a shallow neural network is trained to estimate the probability of encountering a context term, t c , within a sliding window centered on an observed term, t o . 
The training objective involves maximizing this probability for true context terms P (t c |t o ), and minimizing it for randomly drawn counterexamples t \u00acc , P (t \u00acc |t o ), with probability estimated as the sigmoid function of the scalar product between the input weight vector for the observed term and the output weight vector of the context term:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "5" }, { "text": "\u03c3(t o \u00b7 t c|\u00acc )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "5" }, { "text": "We used the Semantic Vectors 5 implementation of SGNS (which performs similarly to the fastText implementation across a range of analogical retrieval benchmarks (Cohen and Widdows, 2018) ) to train 250-dimensional embeddings, with a sliding window radius of two, on the complete set of full sentences from the corpus described in \u00a73. As previously mentioned, multi-word phrases corresponding to named entities recognized by the PubTator system in these sentences are concatenated by underscores, and consequently receive a single vector representation. ESD With ESD, a shallow neural network is trained to estimate the probability of encountering the object, o, of a subject-predicate-object triple sPo. The training objective involves maximizing this probability for true objects P (o|s, P ) and minimizing it for randomly drawn counterexamples, \u00aco, P (\u00aco|s, P ). We adapted the Semantic Vectors 5 implementation of ESP to encode dependency paths, with binary vectors as representational basis (Widdows and Cohen, 2012) and the non-negative normalized Hamming distance (NNHD) to estimate the similarity between them. 
NNHD = max(0, 1 \u2212 2 \u00d7 Hamming distance / dimensionality)", "cite_spans": [ { "start": 163, "end": 188, "text": "(Cohen and Widdows, 2018)", "ref_id": "BIBREF4" }, { "start": 1021, "end": 1046, "text": "(Widdows and Cohen, 2012)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "5" }, { "text": "With this representational paradigm, probability can be estimated as NNHD(o, s \u2297 P), where \u2297 represents the use of pairwise exclusive OR as a binding operator, in accordance with the Binary Spatter Code (Kanerva, 1996) . While ESP was originally developed to encode knowledge extracted from the literature using a small set of predefined predicates (e.g. TREATS), we adapt it here to encode a large variety (n=546,085) of dependency paths. For training, we concatenate the dependency relations (the underscored parts in Figure 2 ) into a single predicate token for which a vector is learned. Some examples of path tokens (concatenated dependency relations) can be seen in Table 1 . Unlike the original ESP implementation where predicate vectors were held constant, we permit dependency path vectors to evolve during training 6 . Further details on ESP can be found in (Cohen and Widdows, 2017) . For the current work, we set the dimensionality at 8000 bits (as this is equivalent in representational capacity to 250-dimensional single precision real vectors). 
For ESD, Table 1 shows the nearest neighboring dependency path vectors to the bound product I(metformin) \u2297 O(diabetes), illustrating paths that indicate the relationship between these terms, and ESD's capability to learn similar representations for paths with similar meaning.", "cite_spans": [ { "start": 206, "end": 221, "text": "(Kanerva, 1996)", "ref_id": "BIBREF13" }, { "start": 871, "end": 896, "text": "(Cohen and Widdows, 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 523, "end": 531, "text": "Figure 2", "ref_id": null }, { "start": 675, "end": 682, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1072, "end": 1079, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Training Details", "sec_num": "5" }, { "text": "Both SGNS and ESD were trained over five epochs, with a subsampling threshold of 10^\u22125, a minimum term frequency threshold of 5 (which includes concatenated dependency paths for ESD), and a maximum frequency threshold of 10^6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "5" }, { "text": "We use a proportional analogy ranked retrieval task for both the RR and LBD tasks, following prior work as described in \u00a72. Figure 3 visualizes this process. From a set of (X, Y) entity pairs from a knowledge base, given a term C and all terms D such that (C, D) is a pair in the set, we select n random (A, B) cue pairs from a disjoint set of pairs. We refer to (C, D) pairs as 'target pairs,' correct D completions as 'targets,' and (A, B) pairs as 'cues.' The vectors for the cue terms (A, B) and the term C are summed in the following fashion to produce the resulting vector v. 
Given an analogical pair A:B::C:D, where A and C, B and D are of the same semantic type, respectively, we develop cue vectors for the target D in each model as follows:", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Evaluation Methods", "sec_num": "6" }, { "text": "SGNS: v = B \u2212 A + C; ESD: v = I(A) \u2297 O(B) \u2297 I(C)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methods", "sec_num": "6" }, { "text": "where I and O represent the input and output weight vectors of the ESD model, respectively. The SGNS method is the same as the 3COSADD method as described in Levy and Goldberg (2014b) .", "cite_spans": [ { "start": 158, "end": 183, "text": "Levy and Goldberg (2014b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methods", "sec_num": "6" }, { "text": "A K-nearest neighbor search is performed for v (using cosine distance for SGNS, NNHD for ESD) over the search space, and we record the ranks for each correct D target. The search space is constrained such that it consists of those terms from our training corpus that have a vector in both ESD and SGNS, a total of about 300,000 terms overall. For ESD, this space consists of the output weight vectors for each concept. 
For the proportional analogy task using K-nearest neighbors to rank completions to the analogy, the desired outcome is for the correct targets to be highly similar to the analogy cue vector v, such that the highest ranks are assigned to the correct target terms D in a search over the entire vector space. In this fashion, we perform this KNN search for every (X, Y) pair in the knowledge base and record the ranks for correct targets. We then compare the ranks of terms D across both vector spaces; the higher the ranks, the better the model is at capturing relational similarity. Table 2 shows, for each knowledge base, how many total unique X terms and total (X, Y) pairs are used for each task. Additionally, we show the average number of correct Y terms per X and the maximum number of correct Y terms per X. For the relationship retrieval task, we consider those (X, Y) pairs which are connected by at least one dependency path in our corpus. Meanwhile, (X, Y) pairs for the LBD task must not be connected by a dependency path in the corpus (we treat these heldout pairs as a proxy for estimating the quality of novel hypotheses). We know from the (X, Y) pair's presence in the knowledge base that it is a gold standard pair for the given relationship type, but from the models' perspective this information is not available from the text alone. Thus, we believe it is a good test of the models' ability to generate plausible hypotheses. To reiterate, the methodology for both the relationship retrieval and literaturebased discovery evaluations is the same; the only difference is in which pairs of terms from each knowledge base are used for evaluation data.", "cite_spans": [], "ref_spans": [ { "start": 1001, "end": 1008, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation Methods", "sec_num": "6" }, { "text": "We examine the role of increasing the number of cues in improving retrieval. 
For example, for a given (C, D) target pair, we can combine vectors for multiple (A, B) pairs with the C term vector to produce a final cue vector that is closer to the target D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methods", "sec_num": "6" }, { "text": "Table 2 column headings, for each of Relationship Retrieval and Literature-based Discovery: Total X | Total Pairs | Mean Y / X | Max Y / X", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship Retrieval", "sec_num": null }, { "text": "Gene Targets (DrugBank): 1626 | 6290 | 4 | 107 and 3569 | 37162 | 10 | 420. PGKB: 535 | 2089 | 4 | 48 and 1563 | 28053 | 18 | 144. Agonists (TTD): 148 | 172 | 1 | 3 and 307 | 462 | 2 | 7. Antagonists (TTD): 188 | 200 | 1 | 2 and 508 | 620 | 1 | 5. Gene Targets (TTD): 1179 | 1436 | 1 | 7 and 4088 | 6430 | 2 | 15. Inhibitors (TTD): 522 | 669 | 1 | 7 and 1273 | 2082 | 2. When multiple cues are used, we superpose the cue vector for each of the cues and normalize the resulting vector: real vectors are normalized to unit length in SGNS, and binary vectors are normalized using the majority rule, with ties split at random, in ESD. Cues are always selected from the subset of knowledge base pairs that co-occur in our training corpus. We ensure that none of the (A, B) cue terms overlap with each other, nor with the (C, D) target terms, so that self-similarity does not inflate performance. 
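The superposition and normalization rules described above can be sketched as follows; the cue vectors are random stand-ins and the functions are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

def superpose_real(cues):
    """SGNS: sum the cue vectors and normalize the result to unit length."""
    v = np.sum(cues, axis=0)
    return v / np.linalg.norm(v)

def superpose_binary(cues, rng=rng):
    """ESD: elementwise majority vote over binary cue vectors,
    with tied elements split at random."""
    cues = np.asarray(cues)
    counts = cues.sum(axis=0)                    # per-dimension count of ones
    out = (counts * 2 > len(cues)).astype(np.uint8)
    ties = counts * 2 == len(cues)               # exactly half ones
    out[ties] = rng.integers(0, 2, ties.sum())   # break ties at random
    return out
```

With an odd number of binary cues there are no ties, so the majority rule is deterministic in that case.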
We produced results for a range of 1, 5, 10, 25, and 50 cues, finding that the best results come from using 25 cues; we only report these resulting scores in \u00a77.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 316, "text": "Targets (DrugBank) 1626 6290 4 107 3569 37162 10 420 PGKB 535 2089 4 48 1563 28053 18 144 Agonists (TTD) 148 172 1 3 307 462 2 7 Antagonists (TTD) 188 200 1 2 508 620 1 5 Gene Targets (TTD) 1179 1436 1 7 4088 6430 2 15 Inhibitors (TTD) 522 669 1 7 1273 2082 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Chem-Gene", "sec_num": null }, { "text": "As a baseline inspired partly by Linzen (2016), we directly compare the similarity of vectors for B and D terms, and for C and D terms, omitting the analogical task. The intuition here is that C and D terms may be close together in the vector space merely due to frequent co-occurrence in the corpus, and that any apparent analogical reasoning performance merely relies on that fact. Meanwhile, terms B and D can be close together in the vector space simply because they are of the same semantic type, and thus occur in similar contexts. In this case, mere distributional similarity, rather than relational analogy, might explain the performance. In the B:D comparison setting, cues B are added together to create a single cue vector with which to perform the KNN ranking to find the target term D. These cue terms B are extracted from the same (A, B) cue pairs as those used for the full analogy setting to ensure a reasonable comparison across methods. 
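The B:D baseline cue just described can be sketched the same way; the vectors and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical vectors for 25 B cue terms (one per (A, B) cue pair).
b_vectors = rng.standard_normal((25, 32))

def bd_baseline_cue(b_vecs):
    """B:D baseline: superpose only the B cue vectors into a single cue.
    No A or C vector enters, so the cue carries no relational information;
    it is ranked against D targets exactly as in the full-analogy setting."""
    v = np.sum(b_vecs, axis=0)
    return v / np.linalg.norm(v)
```

If this cue retrieves D as well as the full analogy cue does, performance can be attributed to distributional term similarity rather than relational structure.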
In the C:D comparison setting, no cues are aggregated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chem-Gene", "sec_num": null }, { "text": "We present qualitative and quantitative results for each vector space model's ability to represent and retrieve relational information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "Qualitative Results Table 3 shows a side-by-side comparison of the top 10 retrieved terms given the vector for the term risperidone composed with 25 randomly selected (drug, indication) cues from SIDER. The goal is to complete the proportional analogy corresponding to the treatment relationship. Of the top 10 terms retrieved in the ESD vector space, 4 are correct completions to the analogy, while 3 more are plausible completions based on literature. 'Tardive oromandibular dystonia,' while of the correct semantic type targeted by this analogy, is actually a side effect of risperidone. A majority of the retrieved results, however, are known or plausible treatment targets. Meanwhile, most of the top 10 terms retrieved by SGNS are names of other drugs that are similar to risperidone. Additionally, 'psychiatric and visual disturbances' and 'tardive dyskinesia' are side effects of risperidone, not treatment targets. 
Notably, all of the results retrieved with ESD are of the correct semantic type, i.e., they are disorders, while SGNS retrieves a mix of drugs and side effects.", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 27, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "Quantitative Results For each C term in each evaluation set, we record the ranks of all D target terms resulting from the K-nearest neighbor search. (Table 3, rank | ESD (ours) | SGNS: 1 | separation anxiety | risperidone \u00d7; 2 | schizophrenia | olanzapine \u00d7; 3 | depressed state | quetiapine \u00d7; 4 | bipolar mania | aripiprazole \u00d7; 5 | tardive oromandibular dystonia | clozapine \u00d7; 6 | treatment of trichotillomania * | psychiatric and visual disturbances; 7 | pervasive developmental disorder (NOS) * | ziprasidone \u00d7; 8 | borderline personality disorder | amisulpride \u00d7; 9 | psychotic disorders | paliperidone \u00d7; 10 | mania | tardive dyskinesia.) For ease of comparison, we normalize all raw ranks by the length of the full search space (324,363 terms in total), and then subtract this value from 1 so that lower ranks (i.e., better results) are displayed as higher numbers, for ease of interpretation. For a baseline score, we ran a simulation in which the entire search space was shuffled randomly 100 times, and recorded the median ranks of multiple target D terms, given some C. We find that the median rank for D terms in a randomly shuffled space tended toward the middle of the ranked list. Thus, the baseline score is established as 0.5; any score lower than this means the model performed worse than a random shuffle at retrieving target terms. In Table 4, 1 is the highest possible score, and 0 is the lowest. We report results at 25 (A, B) cues, the setting for which performance was best for both ESD and SGNS. 
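The rank normalization just described reduces to a one-liner; the default search-space size below is the figure reported in the text (324,363 terms).

```python
def normalized_score(raw_rank, space_size=324363):
    """Map a raw KNN rank (1 = best) into [0, 1) with higher = better:
    divide by the search-space size, then subtract from 1."""
    return 1.0 - raw_rank / space_size

# A rank near the middle of a randomly shuffled list scores ~0.5,
# which is why 0.5 serves as the chance baseline.
```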
'Full' in Table 4 refers to evaluation with a full A:B::C:D analogy, while 'B:D' refers to the baseline that compares vectors for terms directly, rather than using relational information. We do not report C:D comparison results, as they were categorically worse than both Full and B:D results.", "cite_spans": [], "ref_spans": [ { "start": 1290, "end": 1297, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 1467, "end": 1474, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "The results in Table 4 show that ESD outperforms SGNS on the RR task for 18 of 19 databases, and for 17 of 19 databases on the LBD task. It is clear that literature-based discovery is harder than relationship retrieval, as the scores are generally lower across the board for this task. We discuss the results for each task separately.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "For a total of 12 out of 19 sets, ESD on full analogies outperforms ESD on direct B:D comparisons, suggesting that the model has learned generalizable relationship information for these types of relations rather than relying on distributional term similarity. Because gene-gene pairs consist of entities of the same semantic type, it can be argued that B:D similarity should be very high, and yet scores are higher for the full analogy over the B:D baseline for most of these sets, for both ESD and SGNS. 
For SIDER side effects, the B:D baseline for ESD shows higher scores than the full analogy for both LBD and RR; one reason for this could be that there is a high degree of side effect overlap between drugs, and so the side effect terms themselves are highly similar to each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship retrieval", "sec_num": "8.1" }, { "text": "The best performance on a majority of the sets comes from the ESD B:D model, suggesting that the model relies on term similarity over relational information for performance. Although SGNS does not perform the best overall, the full analogy model tends to outperform its B:D counterpart, suggesting that SGNS has managed to extrapolate relational information to the retrieval of held-out targets. As previously mentioned, performance on this task is made difficult by the lack of normalization of concepts across our datasets. Additionally, plausible retrieved terms may not appear as gold-standard targets in the databases. Considering the case of SIDER, which is built from automatically extracted information (not human-curated), the plausible results here are missing from the database but are supported by evidence from published papers (e.g., Oravecz and \u0160tuhec (2014)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Literature-based discovery", "sec_num": "8.2" }, { "text": "We have compared two vector space models of language, Embedding of Structural Dependencies and Skip-gram with Negative Sampling, for their ability to represent biomedical relationships from literature in an analogical retrieval task. Our results suggest that encoding structural information in the form of dependency paths connecting biomedical entities of interest can improve performance on two analogical retrieval tasks, relationship retrieval and literature-based discovery. 
In future work, we would like to compare our methods with knowledge base completion techniques that use contextualized vectors from language models, as in Bosselut et al. (2019), as another method applicable to literature-based discovery.", "cite_spans": [ { "start": 631, "end": 653, "text": "Bosselut et al. (2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "9" }, { "text": "For related work, see Widdows and Cohen (2014). 4 Version 7 of the corpus retrieved at https://zenodo.org/record/3459420", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/semanticvectors/semanticvectors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This capability has been used to predict drug interactions, with performance exceeding that of models with orders of magnitude more parameters (Burkhardt et al., 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by U.S. National Library of Medicine Grant No. R01 LM011563. 
The authors would like to thank the anonymous reviewers for their feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "COMET: Commonsense transformers for automatic knowledge graph construction", "authors": [ { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Malaviya", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4762--4779", "other_ids": { "DOI": [ "10.18653/v1/P19-1470" ] }, "num": null, "urls": [], "raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762-4779, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Predicting adverse drug-drug interactions with neural embedding of semantic predications", "authors": [ { "first": "A", "middle": [], "last": "Hannah", "suffix": "" }, { "first": "Devika", "middle": [], "last": "Burkhardt", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Mower", "suffix": "" }, { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2019, "venue": "AMIA Annual Symposium Proceedings", "volume": "2019", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hannah A Burkhardt, Devika Subramanian, Justin Mower, and Trevor Cohen. 2019. Predicting adverse drug-drug interactions with neural embedding of semantic predications. In AMIA Annual Symposium Proceedings, volume 2019, page 992. American Medical Informatics Association.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Empirical distributional semantics: methods and biomedical applications", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" } ], "year": 2009, "venue": "Journal of biomedical informatics", "volume": "42", "issue": "2", "pages": "390--405", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Cohen and Dominic Widdows. 2009. Empirical distributional semantics: methods and biomedical applications. 
Journal of biomedical informatics, 42(2):390-405.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Embedding of semantic predications", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" } ], "year": 2017, "venue": "Journal of biomedical informatics", "volume": "68", "issue": "", "pages": "150--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Cohen and Dominic Widdows. 2017. Embedding of semantic predications. Journal of biomedical informatics, 68:150-166.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bringing order to neural word embeddings with embeddings augmented by random permutations (earp)", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "465--475", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Cohen and Dominic Widdows. 2018. Bringing order to neural word embeddings with embeddings augmented by random permutations (earp). 
In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 465-475.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Finding schizophrenia's prozac emergent relational similarity in predication space", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Schvaneveldt", "suffix": "" }, { "first": "Thomas C", "middle": [], "last": "Rindflesch", "suffix": "" } ], "year": 2011, "venue": "International Symposium on Quantum Interaction", "volume": "", "issue": "", "pages": "48--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Cohen, Dominic Widdows, Roger Schvaneveldt, and Thomas C Rindflesch. 2011. Finding schizophrenia's prozac emergent relational similarity in predication space. In International Symposium on Quantum Interaction, pages 48-59. Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The stanford typed dependencies representation", "authors": [ { "first": "Marie-Catherine De", "middle": [], "last": "Marneffe", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Coling 2008: proceedings of the workshop on cross-framework and cross-domain parser evaluation", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine De Marneffe and Christopher D Manning. 2008. The stanford typed dependencies representation. In Coling 2008: proceedings of the workshop on cross-framework and cross-domain parser evaluation, pages 1-8. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Medical semantic similarity with a neural language model", "authors": [ { "first": "Guido", "middle": [], "last": "Lance De Vine", "suffix": "" }, { "first": "Bevan", "middle": [], "last": "Zuccon", "suffix": "" }, { "first": "Laurianne", "middle": [], "last": "Koopman", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Sitbon", "suffix": "" }, { "first": "", "middle": [], "last": "Bruza", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 23rd ACM international conference on information and knowledge management", "volume": "", "issue": "", "pages": "1819--1822", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lance De Vine, Guido Zuccon, Bevan Koopman, Laurianne Sitbon, and Peter Bruza. 2014. Medical semantic similarity with a neural language model. In Proceedings of the 23rd ACM international conference on information and knowledge management, pages 1819-1822.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The reactome pathway knowledgebase", "authors": [ { "first": "Antonio", "middle": [], "last": "Fabregat", "suffix": "" }, { "first": "Konstantinos", "middle": [], "last": "Sidiropoulos", "suffix": "" }, { "first": "Phani", "middle": [], "last": "Garapati", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Gillespie", "suffix": "" }, { "first": "Kerstin", "middle": [], "last": "Hausmann", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Haw", "suffix": "" }, { "first": "Bijay", "middle": [], "last": "Jassal", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Jupe", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Korninger", "suffix": "" }, { "first": "Sheldon", "middle": [], "last": "Mckay", "suffix": "" } ], "year": 2016, "venue": "Nucleic acids research", "volume": "44", "issue": "D1", "pages": "481--487", "other_ids": {}, "num": null, "urls": [], 
"raw_text": "Antonio Fabregat, Konstantinos Sidiropoulos, Phani Garapati, Marc Gillespie, Kerstin Hausmann, Robin Haw, Bijay Jassal, Steven Jupe, Florian Korninger, Sheldon McKay, et al. 2016. The reactome pathway knowledgebase. Nucleic acids research, 44(D1):D481-D487.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Rapamycin-mtor+ braf=? using relational similarity to find therapeutically relevant drug-gene relationships in unstructured text", "authors": [ { "first": "Safa", "middle": [], "last": "Fathiamini", "suffix": "" }, { "first": "M", "middle": [], "last": "Amber", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Vijaykumar", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Nora", "middle": [ "S" ], "last": "Holla", "suffix": "" }, { "first": "Funda", "middle": [], "last": "Sanchez", "suffix": "" }, { "first": "Elmer", "middle": [ "V" ], "last": "Meric-Bernstam", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Bernstam", "suffix": "" }, { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2019, "venue": "Journal of biomedical informatics", "volume": "90", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Safa Fathiamini, Amber M Johnson, Jia Zeng, Vijaykumar Holla, Nora S Sanchez, Funda Meric-Bernstam, Elmer V Bernstam, and Trevor Cohen. 2019. Rapamycin-mtor+ braf=? using relational similarity to find therapeutically relevant drug-gene relationships in unstructured text. 
Journal of biomedical informatics, 90:103094.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Structure mapping in analogy and similarity", "authors": [ { "first": "Dedre", "middle": [], "last": "Gentner", "suffix": "" }, { "first": "", "middle": [], "last": "Arthur B Markman", "suffix": "" } ], "year": 1997, "venue": "American psychologist", "volume": "52", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dedre Gentner and Arthur B Markman. 1997. Structure mapping in analogy and similarity. American psychologist, 52(1):45.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Analogy-based detection of morphological and semantic relations with word embeddings: What works and what doesn't", "authors": [ { "first": "Anna", "middle": [], "last": "Gladkova", "suffix": "" }, { "first": "Aleksandr", "middle": [], "last": "Drozd", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Matsuoka", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL-HLT SRW", "volume": "", "issue": "", "pages": "47--54", "other_ids": { "DOI": [ "10.18653/v1/N16-2002" ] }, "num": null, "urls": [], "raw_text": "Anna Gladkova, Aleksandr Drozd, and Satoshi Matsuoka. 2016. Analogy-based detection of morphological and semantic relations with word embeddings: What works and what doesn't. In Proceedings of the NAACL-HLT SRW, pages 47-54, San Diego, California, June 12-17, 2016. 
ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Online mendelian inheritance in man (omim), a knowledgebase of human genes and genetic disorders", "authors": [ { "first": "Ada", "middle": [], "last": "Hamosh", "suffix": "" }, { "first": "Alan", "middle": [ "F" ], "last": "Scott", "suffix": "" }, { "first": "Joanna", "middle": [ "S" ], "last": "Amberger", "suffix": "" }, { "first": "Carol", "middle": [ "A" ], "last": "Bocchini", "suffix": "" }, { "first": "", "middle": [], "last": "Victor", "suffix": "" }, { "first": "", "middle": [], "last": "Mckusick", "suffix": "" } ], "year": 2005, "venue": "Nucleic acids research", "volume": "33", "issue": "1", "pages": "514--517", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ada Hamosh, Alan F Scott, Joanna S Amberger, Carol A Bocchini, and Victor A McKusick. 2005. Online mendelian inheritance in man (omim), a knowledgebase of human genes and genetic disorders. Nucleic acids research, 33(suppl 1):D514-D517.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Binary spatter-coding of ordered k-tuples", "authors": [ { "first": "Pentti", "middle": [], "last": "Kanerva", "suffix": "" } ], "year": 1996, "venue": "International Conference on Artificial Neural Networks", "volume": "", "issue": "", "pages": "869--873", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pentti Kanerva. 1996. Binary spatter-coding of ordered k-tuples. In International Conference on Artificial Neural Networks, pages 869-873. 
Springer.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The sider database of drugs and side effects", "authors": [ { "first": "Michael", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "Ivica", "middle": [], "last": "Letunic", "suffix": "" }, { "first": "Lars", "middle": [ "Juhl" ], "last": "Jensen", "suffix": "" }, { "first": "Peer", "middle": [], "last": "Bork", "suffix": "" } ], "year": 2016, "venue": "Nucleic acids research", "volume": "44", "issue": "D1", "pages": "1075--1079", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Kuhn, Ivica Letunic, Lars Juhl Jensen, and Peer Bork. 2016. The sider database of drugs and side effects. Nucleic acids research, 44(D1):D1075-D1079.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Dependency-based word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "302--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302-308.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Linguistic regularities in sparse and explicit word representations", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the eighteenth conference on computational natural language learning", "volume": "", "issue": "", "pages": "171--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014b. Linguistic regularities in sparse and explicit word representations. 
In Proceedings of the eighteenth conference on computational natural language learning, pages 171-180.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improving distributional similarity with lessons learned from word embeddings", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "211--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Integrating shortest dependency path and sentence sequence into a deep learning framework for relation extraction in clinical text", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhihao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yaoyun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "BMC medical informatics and decision making", "volume": "19", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiheng Li, Zhihao Yang, Chen Shen, Jun Xu, Yaoyun Zhang, and Hua Xu. 2019. Integrating shortest dependency path and sentence sequence into a deep learning framework for relation extraction in clinical text. 
BMC medical informatics and decision making, 19(1):22.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Issues in evaluating semantic spaces using word analogies", "authors": [ { "first": "", "middle": [], "last": "Tal Linzen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", "volume": "", "issue": "", "pages": "13--18", "other_ids": { "DOI": [ "10.18653/v1/W16-2503" ] }, "num": null, "urls": [], "raw_text": "Tal Linzen. 2016. Issues in evaluating semantic spaces using word analogies. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 13-18, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. 
In Proceedings of International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751, Atlanta, Georgia. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Insights into analogy completion from the biomedical domain", "authors": [ { "first": "Denis", "middle": [], "last": "Newman-Griffis", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Fosler-Lussier", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "19--28", "other_ids": { "DOI": [ "10.18653/v1/W17-2303" ] }, "num": null, "urls": [], "raw_text": "Denis Newman-Griffis, Albert Lai, and Eric Fosler-Lussier. 2017. Insights into analogy completion from the biomedical domain. In BioNLP 2017, pages 19-28, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Trichotillomania successfully treated with risperidone and naltrexone: a geriatric case report", "authors": [ { "first": "Robert", "middle": [], "last": "Oravecz", "suffix": "" }, { "first": "", "middle": [], "last": "Matej \u0160tuhec", "suffix": "" } ], "year": 2014, "venue": "Journal of the American Medical Directors Association", "volume": "15", "issue": "4", "pages": "301--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Oravecz and Matej \u0160tuhec. 2014. Trichotillomania successfully treated with risperidone and naltrexone: a geriatric case report. Journal of the American Medical Directors Association, 15(4):301-302.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A global network of biomedical relationships derived from text", "authors": [ { "first": "Bethany", "middle": [], "last": "Percha", "suffix": "" }, { "first": "", "middle": [], "last": "Russ B Altman", "suffix": "" } ], "year": 2018, "venue": "Bioinformatics", "volume": "34", "issue": "15", "pages": "2614--2624", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bethany Percha and Russ B Altman. 2018. A global network of biomedical relationships derived from text. 
Bioinformatics, 34(15):2614-2624.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Dissecting contextual word embeddings: Architecture and representation", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1499--1509", "other_ids": { "DOI": [ "10.18653/v1/D18-1179" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499-1509, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Fish oil, raynaud's syndrome, and undiscovered public knowledge. Perspectives in biology and medicine", "authors": [ { "first": "", "middle": [], "last": "Don R Swanson", "suffix": "" } ], "year": 1986, "venue": "", "volume": "30", "issue": "", "pages": "7--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Don R Swanson. 1986. Fish oil, raynaud's syndrome, and undiscovered public knowledge. 
Perspectives in biology and medicine, 30(1):7-18.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "From frequency to meaning: Vector space models of semantics", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2010, "venue": "Journal of artificial intelligence research", "volume": "37", "issue": "", "pages": "141--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37:141-188.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Therapeutic target database 2020: enriched resource for facilitating research and early development of targeted therapeutics", "authors": [ { "first": "Yunxia", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Song", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Fengcheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhengwen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Runyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2020, "venue": "Nucleic acids research", "volume": "48", "issue": "", "pages": "1031--1041", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yunxia Wang, Song Zhang, Fengcheng Li, Ying Zhou, Ying Zhang, Zhengwen Wang, Runyuan Zhang, Jiang Zhu, Yuxiang Ren, Ying Tan, et al. 2020.
Therapeutic target database 2020: enriched resource for facilitating research and early development of targeted therapeutics. Nucleic acids research, 48(D1):D1031-D1041.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Filling missing paths: Modeling co-occurrences of word pairs and dependency paths for recognizing lexical semantic relations", "authors": [ { "first": "Koki", "middle": [], "last": "Washio", "suffix": "" }, { "first": "Tsuneaki", "middle": [], "last": "Kato", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.03411" ] }, "num": null, "urls": [], "raw_text": "Koki Washio and Tsuneaki Kato. 2018. Filling missing paths: Modeling co-occurrences of word pairs and dependency paths for recognizing lexical semantic relations. arXiv preprint arXiv:1809.03411.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Pubtator: a web-based text mining tool for assisting biocuration", "authors": [ { "first": "Chih-Hsuan", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hung-Yu", "middle": [], "last": "Kao", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2013, "venue": "Nucleic acids research", "volume": "41", "issue": "W1", "pages": "518--522", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2013. Pubtator: a web-based text mining tool for assisting biocuration.
Nucleic acids research, 41(W1):W518-W522.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Pharmacogenomics knowledge for personalized medicine", "authors": [ { "first": "Michelle", "middle": [], "last": "Whirl-Carrillo", "suffix": "" }, { "first": "Ellen", "middle": [ "M" ], "last": "McDonagh", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Hebert", "suffix": "" }, { "first": "Li", "middle": [], "last": "Gong", "suffix": "" }, { "first": "K", "middle": [], "last": "Sangkuhl", "suffix": "" }, { "first": "C", "middle": [ "F" ], "last": "Thorn", "suffix": "" }, { "first": "Russ", "middle": [ "B" ], "last": "Altman", "suffix": "" }, { "first": "Teri", "middle": [ "E" ], "last": "Klein", "suffix": "" } ], "year": 2012, "venue": "Clinical Pharmacology & Therapeutics", "volume": "92", "issue": "4", "pages": "414--417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michelle Whirl-Carrillo, Ellen M McDonagh, JM Hebert, Li Gong, K Sangkuhl, CF Thorn, Russ B Altman, and Teri E Klein. 2012. Pharmacogenomics knowledge for personalized medicine. Clinical Pharmacology & Therapeutics, 92(4):414-417.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Real, complex, and binary semantic vectors", "authors": [ { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2012, "venue": "International Symposium on Quantum Interaction", "volume": "", "issue": "", "pages": "24--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominic Widdows and Trevor Cohen. 2012. Real, complex, and binary semantic vectors. In International Symposium on Quantum Interaction, pages 24-35.
Springer.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Reasoning with vectors: A continuous model for fast robust inference", "authors": [ { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2014, "venue": "Logic Journal of the IGPL", "volume": "23", "issue": "2", "pages": "141--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominic Widdows and Trevor Cohen. 2014. Reasoning with vectors: A continuous model for fast robust inference. Logic Journal of the IGPL, 23(2):141-173.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Drugbank 5.0: a major update to the drugbank database for 2018", "authors": [ { "first": "David", "middle": [ "S" ], "last": "Wishart", "suffix": "" }, { "first": "Yannick", "middle": [ "D" ], "last": "Feunang", "suffix": "" }, { "first": "An", "middle": [ "C" ], "last": "Guo", "suffix": "" }, { "first": "Elvis", "middle": [ "J" ], "last": "Lo", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Jason", "middle": [ "R" ], "last": "Grant", "suffix": "" }, { "first": "Tanvir", "middle": [], "last": "Sajed", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Carin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zinat", "middle": [], "last": "Sayeeda", "suffix": "" } ], "year": 2018, "venue": "Nucleic acids research", "volume": "46", "issue": "D1", "pages": "1074--1082", "other_ids": {}, "num": null, "urls": [], "raw_text": "David S Wishart, Yannick D Feunang, An C Guo, Elvis J Lo, Ana Marcu, Jason R Grant, Tanvir Sajed, Daniel Johnson, Carin Li, Zinat Sayeeda, et al. 2018. Drugbank 5.0: a major update to the drugbank database for 2018.
Nucleic acids research, 46(D1):D1074-D1082.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A hybrid model based on neural networks for biomedical relation extraction", "authors": [ { "first": "Yijia", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hongfei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhihao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shaowu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuanyuan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "Journal of biomedical informatics", "volume": "81", "issue": "", "pages": "83--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yijia Zhang, Hongfei Lin, Zhihao Yang, Jian Wang, Shaowu Zhang, Yuanyuan Sun, and Liang Yang. 2018. A hybrid model based on neural networks for biomedical relation extraction. Journal of biomedical informatics, 81:83-92.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Chemical-induced disease relation extraction with dependency information and prior knowledge", "authors": [ { "first": "Huiwei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Shixian", "middle": [], "last": "Ning", "suffix": "" }, { "first": "Yunlong", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhuang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chengkun", "middle": [], "last": "Lang", "suffix": "" }, { "first": "Yingyu", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2018, "venue": "Journal of biomedical informatics", "volume": "84", "issue": "", "pages": "171--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huiwei Zhou, Shixian Ning, Yunlong Yang, Zhuang Liu, Chengkun Lang, and Yingyu Lin. 2018. Chemical-induced disease relation extraction with dependency information and prior knowledge.
Journal of biomedical informatics, 84:171-178.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Drug knowledge bases and their applications in biomedical informatics research", "authors": [ { "first": "Yongjun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Elemento", "suffix": "" }, { "first": "Jyotishman", "middle": [], "last": "Pathak", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Briefings in bioinformatics", "volume": "20", "issue": "4", "pages": "1308--1321", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongjun Zhu, Olivier Elemento, Jyotishman Pathak, and Fei Wang. 2019. Drug knowledge bases and their applications in biomedical informatics research. Briefings in bioinformatics, 20(4):1308-1321.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Overview of training and evaluation pipeline. Two embedding models, Embedding of Structural Dependencies (ESD) and Skip-gram with Negative Sampling (SGNS), are trained on data from a corpus of \u224870 million sentences from Medline. The resulting representations are then evaluated on data collected from biomedical knowledge bases.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Overview of analogical ranked retrieval paradigm.", "type_str": "figure", "uris": null }, "TABREF0": { "type_str": "table", "content": "", "text": "Nearest neighboring dependency path embeddings to I(metformin) \u2297 O(diabetes) where I and O indicate input and output weight vectors respectively.", "num": null, "html": null }, "TABREF2": { "type_str": "table", "content": "
: Total unique X terms, total (X, Y) pairs, average number of correct Y terms per X, and maximum number
of correct Y terms per X for each knowledge base.
", "text": "", "num": null, "html": null }, "TABREF3": { "type_str": "table", "content": "
: Top 10 results for a K-nearest neighbor search over terms for treatment targets for the drug risperidone
(an antipsychotic drug), using 25 (drug, indication) pairs from SIDER as cues. Bolded terms are correct targets,
i.e., they are listed as treatment targets for risperidone in SIDER. * : a disorder that risperidone treats or might treat,
based on external literature or a synonym for a target from SIDER; \u00d7: a chemical, i.e., something that could not
be a treatment target for a drug.
", "text": "", "num": null, "html": null }, "TABREF4": { "type_str": "table", "content": "
Relationship retrievalLBD
ESD (ours)SGNSESD (ours)SGNS
FullB:DFullB:DFullB:DFullB:D
", "text": "shows, several top ranked terms are plausible analogy completions, but do not appear as DrugBank) 0.912 0.897 0.839 0.212 0.715 0.806 0.496 0.250 PGKB 0.969 0.994 0.705 0.361 0.737 0.918 0.366 0.317 Agonists (TTD) 0.997 0.907 0.998 0.647 0.802 0.781 0.924 0.708 Antagonists (TTD) 1.000 0.900 0.999 0.732 0.802 0.703 0.831 0.750 Gene Targets (TTD) 0.998 0.867 0.994 0.387 0.746 0.760 0.625 0.479 Inhibitors (TTD) 0.998 0.874 0.993 0.415 0.773 0.759 0.682 0.392 Chem-Disease Side Effects (SIDER) 0.997 0.999 0.967 0.942 0.952 0.994 0.799 0.932 Drug Indication (SIDER) 1.000 0.995 0.949 0.588 0.969 0.988 0.663 0.605 Biomarker-Disease (TTD) 0.996 0.997 0.944 0.781 0.932 0.977 0.799 0.726", "num": null, "html": null }, "TABREF5": { "type_str": "table", "content": "", "text": "Results for relationship retrieval (RR) and literature-based discovery (LBD) for full analogy (A:B::C:D) and B:D retrieval. Scores are displayed here as the median of scores (1 -normalized rank) for all D terms in a knowledge base evaluation set.", "num": null, "html": null } } } }